Dataset schema (each record below lists these fields in this order, one per line; 'null' marks an empty field, and long values are truncated with "..."):

Column          Type    Lengths / distinct values
id              string  9-13 characters
submitter       string  4-48 characters
authors         string  4-9.62k characters
title           string  4-343 characters
comments        string  2-480 characters
journal-ref     string  9-309 characters
doi             string  12-138 characters
report-no       string  277 distinct values
categories      string  8-87 characters
license         string  9 distinct values
orig_abstract   string  27-3.76k characters
versions        list    1-15 items
update_date     string  10 characters
authors_parsed  list    1-147 items
abstract        string  24-3.75k characters
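The records below can be processed programmatically. Here is a minimal sketch of one way to read them, assuming the rows are exported as JSON Lines with the field names above; the file name arxiv_bio.jsonl and the load_records helper are illustrative assumptions, not part of the dataset.

```python
import json

# Field names, in the order they appear in each record below.
FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "versions", "update_date", "authors_parsed", "abstract",
]

def load_records(path):
    """Yield one dict per line of a hypothetical JSON Lines export of the dataset."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            record = json.loads(line)
            # Missing optional fields (shown as 'null' below) become None.
            yield {field: record.get(field) for field in FIELDS}

for rec in load_records("arxiv_bio.jsonl"):  # hypothetical file name
    # 'categories' is a space-separated string, e.g. "q-bio.PE math.CO".
    primary_category = rec["categories"].split()[0]
    print(rec["id"], primary_category, rec["update_date"], len(rec["versions"]))
```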
1212.3081
Atsuko Takamatsu
Atsuko Takamatsu, Takuji Ishikawa, Kyosuke Shinohara, Hiroshi Hamada
Asymmetric Rotational Stroke in Mouse Node Cilia during Left-Right Determination
5 pages, 6 figures; Figs. 4-6 are modified, results unchanged
null
10.1103/PhysRevE.87.050701
null
q-bio.TO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Clockwise rotational movement of isolated single cilia in mice embryo was investigated in vivo. The movement generates leftward fluid flow in the node cavity and plays an important role in left-right determination. The leftward unidirectional flow results from tilting of the rotational axis of the cilium to posterior side. Because of the no-slip boundary condition at the cell surface, the upper stroke away from the boundary generates leftward flow, and the lower stroke close to the boundary generates slower rightward flow. By combining computational fluid dynamics with experimental observations, we demonstrate that the leftward stroke can be more effective than expected for cases in which cilia tilting alone is considered with the no-slip condition under constant driving force. Our results suggest that the driving force is asymmetric and that it is determined by the tilting angle and open angle of the rotating cilia. Specifically, it is maximized in the leftward stroke when the cilia moves comparatively far from the boundary.
[ { "created": "Thu, 13 Dec 2012 08:06:58 GMT", "version": "v1" }, { "created": "Fri, 22 Feb 2013 03:44:52 GMT", "version": "v2" } ]
2013-05-22
[ [ "Takamatsu", "Atsuko", "" ], [ "Ishikawa", "Takuji", "" ], [ "Shinohara", "Kyosuke", "" ], [ "Hamada", "Hiroshi", "" ] ]
Clockwise rotational movement of isolated single cilia in the mouse embryo was investigated in vivo. The movement generates leftward fluid flow in the node cavity and plays an important role in left-right determination. The leftward unidirectional flow results from tilting of the rotational axis of the cilium to the posterior side. Because of the no-slip boundary condition at the cell surface, the upper stroke away from the boundary generates leftward flow, and the lower stroke close to the boundary generates slower rightward flow. By combining computational fluid dynamics with experimental observations, we demonstrate that the leftward stroke can be more effective than expected for cases in which cilia tilting alone is considered with the no-slip condition under a constant driving force. Our results suggest that the driving force is asymmetric and that it is determined by the tilting angle and open angle of the rotating cilia. Specifically, it is maximized in the leftward stroke, when the cilium moves comparatively far from the boundary.
2107.11837
Juami van Gils
Juami Hermine Mariama van Gils, Dea Gogishvili, Jan van Eck, Robbin Bouwmeester, Erik van Dijk, Sanne Abeln
How sticky are our proteins? Quantifying hydrophobicity of the human proteome
null
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Proteins tend to bury hydrophobic residues inside their core during the folding process to provide stability to the protein structure and to prevent aggregation. Nevertheless, proteins do expose some 'sticky' hydrophobic residues to the solvent. These residues can play an important functional role, for example in protein-protein and membrane interactions. Here, we investigate how hydrophobic protein surfaces are by providing three measures for surface hydrophobicity: the total hydrophobic surface area, the relative hydrophobic surface area, and - using our MolPatch method - the largest hydrophobic patch. Secondly, we analyse how difficult it is to predict these measures from sequence: by adapting solvent accessibility predictions from NetSurfP2.0, we obtain well-performing prediction methods for the THSA and RHSA, while predicting LHP is more difficult. Finally, we analyse implications of exposed hydrophobic surfaces: we show that hydrophobic proteins typically have low expression, suggesting cells avoid an overabundance of sticky proteins.
[ { "created": "Sun, 25 Jul 2021 16:31:02 GMT", "version": "v1" } ]
2021-07-27
[ [ "van Gils", "Juami Hermine Mariama", "" ], [ "Gogishvili", "Dea", "" ], [ "van Eck", "Jan", "" ], [ "Bouwmeester", "Robbin", "" ], [ "van Dijk", "Erik", "" ], [ "Abeln", "Sanne", "" ] ]
Proteins tend to bury hydrophobic residues inside their core during the folding process to provide stability to the protein structure and to prevent aggregation. Nevertheless, proteins do expose some 'sticky' hydrophobic residues to the solvent. These residues can play an important functional role, for example in protein-protein and membrane interactions. Here, we investigate how hydrophobic protein surfaces are by providing three measures for surface hydrophobicity: the total hydrophobic surface area (THSA), the relative hydrophobic surface area (RHSA), and - using our MolPatch method - the largest hydrophobic patch (LHP). Secondly, we analyse how difficult it is to predict these measures from sequence: by adapting solvent accessibility predictions from NetSurfP2.0, we obtain well-performing prediction methods for the THSA and RHSA, while predicting LHP is more difficult. Finally, we analyse implications of exposed hydrophobic surfaces: we show that hydrophobic proteins typically have low expression, suggesting cells avoid an overabundance of sticky proteins.
0704.2474
Yi Xiao
Changjun Chen and Yi Xiao
Observation of Multiple Folding Pathways of beta-hairpin Trpzip2 from Independent Continuous Folding Trajectories
13 pages, 8 figures
null
null
null
q-bio.BM
null
We report 10 successfully folding events of trpzip2 by molecular dynamics simulation. It is found that the trizip2 can fold into its native state through different zipper pathways, depending on the ways of forming hydrophobic core. We also find a very fast non-zipper pathway. This indicates that there may be no inconsistencies in the current pictures of beta-hairpin folding mechanisms. These pathways occur with different probabilities. zip-out is the most probable one. This may explain the recent experiment that the turn formation is the rate-limiting step for beta-hairpin folding.
[ { "created": "Thu, 19 Apr 2007 07:57:32 GMT", "version": "v1" } ]
2007-05-23
[ [ "Chen", "Changjun", "" ], [ "Xiao", "Yi", "" ] ]
We report 10 successful folding events of trpzip2 by molecular dynamics simulation. It is found that trpzip2 can fold into its native state through different zipper pathways, depending on the way the hydrophobic core forms. We also find a very fast non-zipper pathway. This indicates that there may be no inconsistencies in the current pictures of beta-hairpin folding mechanisms. These pathways occur with different probabilities, zip-out being the most probable one. This may explain the recent experimental finding that turn formation is the rate-limiting step for beta-hairpin folding.
2011.01002
Alvason Zhenhua Li
Alvason Zhenhua Li, Karsten Eichholz, Anton Sholukh, Daniel Stone, Michelle A. Loprieno, Keith R. Jerome, Khamsone Phasouk, Kurt Diem, Jia Zhu, Lawrence Corey
RRScell method for automated single-cell profiling of multiplexed immunofluorescence cancer tissue
8 pages, 6 figures, markerUMAP cell clustering
null
null
null
q-bio.QM eess.IV q-bio.TO stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiplexed immuno-fluorescence tissue imaging, allowing simultaneous detection of molecular properties of cells, is an essential tool for characterizing the complex cellular mechanisms in translational research and clinical practice. New image analysis approaches are needed because tissue section stained with a mixture of protein, DNA and RNA biomarkers are introducing various complexities, including spurious edges due to fluorescent staining artifacts between touching or overlapping cells. We have developed the RRScell method harnessing the stochastic random-reaction-seed (RRS) algorithm and deep neural learning U-net to extract single-cell resolution profiling-map of gene expression over a million cells tissue section accurately and automatically. Furthermore, with the use of manifold learning technique UMAP for cell phenotype cluster analysis, the AI-driven RRScell has equipped with a marker-based image cytometry analysis tool (markerUMAP) in quantifying spatial distribution of cell phenotypes from tissue images with a mixture of biomarkers. The results achieved in this study suggest that RRScell provides a robust enough way for extracting cytometric single cell morphology as well as biomarker content in various tissue types, while the build-in markerUMAP tool secures the efficiency of dimension reduction, making it viable as a general tool in the spatial analysis of high dimensional tissue image.
[ { "created": "Fri, 30 Oct 2020 06:41:19 GMT", "version": "v1" }, { "created": "Thu, 18 Mar 2021 08:08:04 GMT", "version": "v2" } ]
2021-03-19
[ [ "Li", "Alvason Zhenhua", "" ], [ "Eichholz", "Karsten", "" ], [ "Sholukh", "Anton", "" ], [ "Stone", "Daniel", "" ], [ "Loprieno", "Michelle A.", "" ], [ "Jerome", "Keith R.", "" ], [ "Phasouk", "Khamsone", ...
Multiplexed immunofluorescence tissue imaging, allowing simultaneous detection of molecular properties of cells, is an essential tool for characterizing the complex cellular mechanisms in translational research and clinical practice. New image analysis approaches are needed because tissue sections stained with a mixture of protein, DNA and RNA biomarkers introduce various complexities, including spurious edges due to fluorescent staining artifacts between touching or overlapping cells. We have developed the RRScell method, which harnesses the stochastic random-reaction-seed (RRS) algorithm and a deep-learning U-net to extract a single-cell-resolution profiling map of gene expression over a tissue section of a million cells accurately and automatically. Furthermore, through the manifold learning technique UMAP for cell phenotype cluster analysis, the AI-driven RRScell is equipped with a marker-based image cytometry analysis tool (markerUMAP) for quantifying the spatial distribution of cell phenotypes in tissue images with a mixture of biomarkers. The results achieved in this study suggest that RRScell provides a sufficiently robust way to extract cytometric single-cell morphology as well as biomarker content in various tissue types, while the built-in markerUMAP tool provides efficient dimension reduction, making it viable as a general tool for the spatial analysis of high-dimensional tissue images.
2209.08076
Andrei Sontag
Andrei Sontag, Tim Rogers and Christian A. Yates
Stochastic drift in discrete waves of non-locally interacting particles
14 pages, 9 figures
null
10.1103/PhysRevE.107.014128
null
q-bio.QM physics.soc-ph q-bio.PE
http://creativecommons.org/licenses/by/4.0/
In this paper, we investigate a generalised model of $N$ particles undergoing second-order non-local interactions on a lattice. Our results have applications across many research areas, including the modelling of migration, information dynamics and Muller's ratchet -- the irreversible accumulation of deleterious mutations in an evolving population. Strikingly, numerical simulations of the model are observed to deviate significantly from its mean-field approximation even for large population sizes. We show that the disagreement between deterministic and stochastic solutions stems from finite-size effects that change the propagation speed and cause the position of the wave to fluctuate. These effects are shown to decay anomalously as $(\log N)^{-2}$ and $(\log N)^{-3}$, respectively -- much slower than the usual $N^{-1/2}$ factor. Our results suggest that the accumulation of deleterious mutations in a Muller's ratchet and the loss of awareness in a population may occur much faster than predicted by the corresponding deterministic models. The general applicability of our model suggests that this unexpected scaling could be important in a wide range of real-world applications.
[ { "created": "Fri, 16 Sep 2022 17:39:35 GMT", "version": "v1" }, { "created": "Thu, 19 Jan 2023 18:17:24 GMT", "version": "v2" } ]
2023-01-20
[ [ "Sontag", "Andrei", "" ], [ "Rogers", "Tim", "" ], [ "Yates", "Christian A.", "" ] ]
In this paper, we investigate a generalised model of $N$ particles undergoing second-order non-local interactions on a lattice. Our results have applications across many research areas, including the modelling of migration, information dynamics and Muller's ratchet -- the irreversible accumulation of deleterious mutations in an evolving population. Strikingly, numerical simulations of the model are observed to deviate significantly from its mean-field approximation even for large population sizes. We show that the disagreement between deterministic and stochastic solutions stems from finite-size effects that change the propagation speed and cause the position of the wave to fluctuate. These effects are shown to decay anomalously as $(\log N)^{-2}$ and $(\log N)^{-3}$, respectively -- much slower than the usual $N^{-1/2}$ factor. Our results suggest that the accumulation of deleterious mutations in a Muller's ratchet and the loss of awareness in a population may occur much faster than predicted by the corresponding deterministic models. The general applicability of our model suggests that this unexpected scaling could be important in a wide range of real-world applications.
2107.05242
S{\o}ren Andersen
S{\o}ren S. L. Andersen
Real time large scale $\textit{in vivo}$ observations by light-sheet microscopy reveal intrinsic synchrony, plasticity and growth cone dynamics of midline crossing axons at the ventral floor plate of the zebrafish spinal cord
38 A4 pages, 19 PNG figures, 16 MP4 movies
null
null
null
q-bio.NC q-bio.QM q-bio.SC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Axonal growth and guidance at the ventral floor plate is here followed $\textit{in vivo}$ in real time at high resolution by light-sheet microscopy along several hundred micrometers of the zebrafish spinal cord. The recordings show the strikingly stereotyped spatio-temporal control that governs midline crossing. Commissural axons are observed crossing the ventral floor plate midline perpendicularly at about 20 microns/h, in a manner dependent on the Robo3 receptor and with a growth rate minimum around the midline, confirming previous observations. At guidance points, commissural axons are seen to decrease their growth rate and growth cones increase in size. Commissural filopodia appear to interact with the nascent neural network, and thereby trigger immediate plastic and reversible sinusoidal-shaped bending movements of neighboring commissural shafts. Ipsilateral axons extend concurrently, but straight and without bends, at three to six times higher growth rates than commissurals, indicating they project their path on a substrate-bound surface rather than relying on diffusible guidance cues. Growing axons appeared to be under stretch, an observation that is of relevance for tension-based models of cortical morphogenesis. The \textit{in vivo} observations provide for a discussion of the current distinction between substrate-bound and diffusible guidance cues. The study applies the transparent zebrafish model that provides an experimental model system to explore further the cellular, molecular and physical mechanisms involved during axonal growth, guidance and midline crossing through a combination of $\textit{in vitro}$ and $\textit{in vivo}$ approaches.
[ { "created": "Mon, 12 Jul 2021 08:06:27 GMT", "version": "v1" } ]
2021-07-13
[ [ "Andersen", "Søren S. L.", "" ] ]
Axonal growth and guidance at the ventral floor plate are here followed $\textit{in vivo}$ in real time at high resolution by light-sheet microscopy along several hundred micrometers of the zebrafish spinal cord. The recordings show the strikingly stereotyped spatio-temporal control that governs midline crossing. Commissural axons are observed crossing the ventral floor plate midline perpendicularly at about 20 microns/h, in a manner dependent on the Robo3 receptor and with a growth rate minimum around the midline, confirming previous observations. At guidance points, commissural axons are seen to decrease their growth rate, and growth cones increase in size. Commissural filopodia appear to interact with the nascent neural network and thereby trigger immediate, plastic and reversible sinusoidal-shaped bending movements of neighboring commissural shafts. Ipsilateral axons extend concurrently, but straight and without bends, at three to six times higher growth rates than commissurals, indicating that they project their path on a substrate-bound surface rather than relying on diffusible guidance cues. Growing axons appear to be under stretch, an observation that is of relevance for tension-based models of cortical morphogenesis. The $\textit{in vivo}$ observations provide a basis for discussing the current distinction between substrate-bound and diffusible guidance cues. The study applies the transparent zebrafish model, which provides an experimental model system for further exploring the cellular, molecular and physical mechanisms involved in axonal growth, guidance and midline crossing through a combination of $\textit{in vitro}$ and $\textit{in vivo}$ approaches.
1302.2430
Leo van Iersel
Mareike Fischer, Leo van Iersel, Steven Kelk, Celine Scornavacca
On Computing the Maximum Parsimony Score of a Phylogenetic Network
null
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic networks are used to display the relationship of different species whose evolution is not treelike, which is the case, for instance, in the presence of hybridization events or horizontal gene transfers. Tree inference methods such as Maximum Parsimony need to be modified in order to be applicable to networks. In this paper, we discuss two different definitions of Maximum Parsimony on networks, "hardwired" and "softwired", and examine the complexity of computing them given a network topology and a character. By exploiting a link with the problem Multicut, we show that computing the hardwired parsimony score for 2-state characters is polynomial-time solvable, while for characters with more states this problem becomes NP-hard but is still approximable and fixed parameter tractable in the parsimony score. On the other hand we show that, for the softwired definition, obtaining even weak approximation guarantees is already difficult for binary characters and restricted network topologies, and fixed-parameter tractable algorithms in the parsimony score are unlikely. On the positive side we show that computing the softwired parsimony score is fixed-parameter tractable in the level of the network, a natural parameter describing how tangled reticulate activity is in the network. Finally, we show that both the hardwired and softwired parsimony score can be computed efficiently using Integer Linear Programming. The software has been made freely available.
[ { "created": "Mon, 11 Feb 2013 10:11:54 GMT", "version": "v1" }, { "created": "Mon, 25 Mar 2013 15:44:03 GMT", "version": "v2" }, { "created": "Thu, 1 May 2014 12:02:28 GMT", "version": "v3" } ]
2014-05-02
[ [ "Fischer", "Mareike", "" ], [ "van Iersel", "Leo", "" ], [ "Kelk", "Steven", "" ], [ "Scornavacca", "Celine", "" ] ]
Phylogenetic networks are used to display the relationship of different species whose evolution is not treelike, which is the case, for instance, in the presence of hybridization events or horizontal gene transfers. Tree inference methods such as Maximum Parsimony need to be modified in order to be applicable to networks. In this paper, we discuss two different definitions of Maximum Parsimony on networks, "hardwired" and "softwired", and examine the complexity of computing them given a network topology and a character. By exploiting a link with the problem Multicut, we show that computing the hardwired parsimony score for 2-state characters is polynomial-time solvable, while for characters with more states this problem becomes NP-hard but is still approximable and fixed-parameter tractable in the parsimony score. On the other hand, we show that, for the softwired definition, obtaining even weak approximation guarantees is already difficult for binary characters and restricted network topologies, and fixed-parameter tractable algorithms in the parsimony score are unlikely. On the positive side, we show that computing the softwired parsimony score is fixed-parameter tractable in the level of the network, a natural parameter describing how tangled reticulate activity is in the network. Finally, we show that both the hardwired and softwired parsimony scores can be computed efficiently using Integer Linear Programming. The software has been made freely available.
2109.12327
Andreas Horn
Barbara Hollunder, Nanditha Rajamani, Shan H. Siddiqi, Carsten Finke, Andrea A. K\"uhn, Helen S. Mayberg, Michael D. Fox, Clemens Neudorfer, Andreas Horn
Toward Personalized Medicine in Connectomic Deep Brain Stimulation
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
At the group-level, deep brain stimulation leads to significant therapeutic benefit in a multitude of neurological and neuropsychiatric disorders. At the single-patient level, however, symptoms may sometimes persist despite "optimal" electrode placement at established treatment coordinates. This may be partly explained by limitations of disease-centric strategies that are unable to account for heterogeneous phenotypes and comorbidities observed in clinical practice. Instead, tailoring electrode placement and programming to individual patients' symptom profiles may increase the fraction of top responding patients. Here, we propose a three-step, circuit-based framework that aims to develop patient-specific treatment targets that address the unique symptom constellation prevalent in each patient. First, we describe how a symptom network target library could be established by mapping beneficial or undesirable DBS effects to distinct circuits based on (retrospective) group-level data. Second, we suggest ways of matching the resulting symptom networks to circuits defined in the individual patient (template matching). Third, we introduce network blending as a strategy to calculate optimal stimulation targets and parameters by selecting and weighting a set of symptom-specific networks based on the symptom profile and subjective priorities of the individual patient. We integrate the approach with published literature and conclude by discussing limitations and future challenges.
[ { "created": "Sat, 25 Sep 2021 09:50:11 GMT", "version": "v1" } ]
2021-09-28
[ [ "Hollunder", "Barbara", "" ], [ "Rajamani", "Nanditha", "" ], [ "Siddiqi", "Shan H.", "" ], [ "Finke", "Carsten", "" ], [ "Kühn", "Andrea A.", "" ], [ "Mayberg", "Helen S.", "" ], [ "Fox", "Michael D.", "" ...
At the group level, deep brain stimulation (DBS) leads to significant therapeutic benefit in a multitude of neurological and neuropsychiatric disorders. At the single-patient level, however, symptoms may sometimes persist despite "optimal" electrode placement at established treatment coordinates. This may be partly explained by limitations of disease-centric strategies that are unable to account for the heterogeneous phenotypes and comorbidities observed in clinical practice. Instead, tailoring electrode placement and programming to individual patients' symptom profiles may increase the fraction of top-responding patients. Here, we propose a three-step, circuit-based framework that aims to develop patient-specific treatment targets addressing the unique symptom constellation prevalent in each patient. First, we describe how a symptom network target library could be established by mapping beneficial or undesirable DBS effects to distinct circuits based on (retrospective) group-level data. Second, we suggest ways of matching the resulting symptom networks to circuits defined in the individual patient (template matching). Third, we introduce network blending as a strategy to calculate optimal stimulation targets and parameters by selecting and weighting a set of symptom-specific networks based on the symptom profile and subjective priorities of the individual patient. We integrate the approach with the published literature and conclude by discussing limitations and future challenges.
0709.1696
Lik Wee Lee
L.W. Lee, L. Yin, X.M. Zhu, P. Ao
A Generic Rate Equation for modeling Enzymatic Reactions under Living Conditions
Accepted by Journal of Biological Systems (8 Sep 2007)
Journal of Biological Systems, Vol. 15, No. 4 (2007) 495-514
10.1142/S0218339007002295
null
q-bio.MN q-bio.QM
null
Based on our experience in kinetic modeling of coupled multiple metabolic pathways we propose a generic rate equation for the dynamical modeling of metabolic kinetics. Its symmetric form makes the kinetic parameters (or functions) easy to relate to values in database and to use in computation. In addition, such form is workable to arbitrary number of substrates and products with different stoichiometry. We explicitly show how to obtain such rate equation exactly for various binding mechanisms. Hence the proposed rate equation is formally rigorous. Various features of such a generic rate equation are discussed. For irreversible reactions, the product inhibition which directly arise from enzymatic reaction is eliminated in a natural way. We also discuss how to include the effects of modifiers and cooperativity.
[ { "created": "Tue, 11 Sep 2007 18:42:17 GMT", "version": "v1" }, { "created": "Tue, 18 Dec 2007 02:14:07 GMT", "version": "v2" } ]
2007-12-18
[ [ "Lee", "L. W.", "" ], [ "Yin", "L.", "" ], [ "Zhu", "X. M.", "" ], [ "Ao", "P.", "" ] ]
Based on our experience in kinetic modeling of coupled multiple metabolic pathways, we propose a generic rate equation for the dynamical modeling of metabolic kinetics. Its symmetric form makes the kinetic parameters (or functions) easy to relate to database values and to use in computation. In addition, the form accommodates an arbitrary number of substrates and products with different stoichiometry. We explicitly show how to obtain such a rate equation exactly for various binding mechanisms; hence the proposed rate equation is formally rigorous. Various features of the generic rate equation are discussed. For irreversible reactions, the product inhibition that arises directly from the enzymatic reaction is eliminated in a natural way. We also discuss how to include the effects of modifiers and cooperativity.
1609.04306
Duc Nguyen
Duc Duy Nguyen, Kelin Xia, Guo-Wei Wei
Generalized flexibility-rigidity index
12 pages, 4 figures
J. Chem. Phys. 144, 234106 (2016)
10.1063/1.4953851
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Flexibility-rigidity index (FRI) has been developed as a robust, accurate and efficient method for macromolecular thermal fluctuation analysis and B-factor prediction. The performance of FRI depends on its formulations of rigidity index and flexibility index. In this work, we introduce alternative rigidity and flexibility formulations. The structure of the classic Gaussian surface is utilized to construct a new type of rigidity index, which leads to a new class of rigidity densities with the classic Gaussian surface as a special case. Additionally, we introduce a new type of flexibility index based on the domain indicator property of normalized rigidity density. These generalized FRI (gFRI) methods have been extensively validated by the B-factor predictions of 364 proteins. Significantly outperforming the classic Gaussian network model (GNM), gFRI is a new generation of methodologies for accurate, robust and efficient analysis of protein flexibility and fluctuation. Finally, gFRI based molecular surface generation and flexibility visualization are demonstrated.
[ { "created": "Wed, 14 Sep 2016 15:08:41 GMT", "version": "v1" } ]
2016-09-15
[ [ "Nguyen", "Duc Duy", "" ], [ "Xia", "Kelin", "" ], [ "Wei", "Guo-Wei", "" ] ]
The flexibility-rigidity index (FRI) has been developed as a robust, accurate and efficient method for macromolecular thermal fluctuation analysis and B-factor prediction. The performance of FRI depends on its formulations of the rigidity index and the flexibility index. In this work, we introduce alternative rigidity and flexibility formulations. The structure of the classic Gaussian surface is utilized to construct a new type of rigidity index, which leads to a new class of rigidity densities with the classic Gaussian surface as a special case. Additionally, we introduce a new type of flexibility index based on the domain indicator property of the normalized rigidity density. These generalized FRI (gFRI) methods have been extensively validated by B-factor predictions for 364 proteins. Significantly outperforming the classic Gaussian network model (GNM), gFRI is a new generation of methodologies for accurate, robust and efficient analysis of protein flexibility and fluctuation. Finally, gFRI-based molecular surface generation and flexibility visualization are demonstrated.
2406.16596
Marco Corrao
Marco Corrao, Hai He, Wolfram Liebermeister, Elad Noor
A compact model of Escherichia coli core and biosynthetic metabolism
null
null
null
null
q-bio.MN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Metabolic models condense biochemical knowledge about organisms in a structured and standardised way. As large-scale network reconstructions are readily available for many organisms of interest, genome-scale models are being widely used among modellers and engineers. However, these large models can be difficult to analyse and visualise, and occasionally generate hard-to-interpret or even biologically unrealistic predictions. Out of the thousands of enzymatic reactions in a typical bacterial metabolism, only a few hundred comprise the metabolic pathways essential to produce energy carriers and biosynthetic precursors. These pathways carry relatively high flux, are central to maintaining and reproducing the cell, and provide precursors and energy to engineered metabolic pathways. Here, focusing on these central metabolic subsystems, we present a manually-curated medium-scale model of energy and biosynthesis metabolism for the well-studied prokaryote Escherichia coli K-12 MG1655. The model is a sub-network of the most recent genome-scale reconstruction, iML1515, and comes with an updated layer of database annotations, as well as a range of metabolic maps for visualisation. We enriched the stoichiometric network with extensive biological information and quantitative data, enhancing the scope and applicability of the model. In addition, here we assess the properties of this model in relation to its genome-scale parent and demonstrate the use of the network and supporting data in various scenarios, including enzyme-constrained flux balance analysis, elementary flux mode analysis, and thermodynamic analysis. Overall, we believe this model holds the potential to become a reference medium-scale metabolic model for E. coli.
[ { "created": "Mon, 24 Jun 2024 12:37:53 GMT", "version": "v1" } ]
2024-06-25
[ [ "Corrao", "Marco", "" ], [ "He", "Hai", "" ], [ "Liebermeister", "Wolfram", "" ], [ "Noor", "Elad", "" ] ]
Metabolic models condense biochemical knowledge about organisms in a structured and standardised way. As large-scale network reconstructions are readily available for many organisms of interest, genome-scale models are being widely used among modellers and engineers. However, these large models can be difficult to analyse and visualise, and occasionally generate hard-to-interpret or even biologically unrealistic predictions. Out of the thousands of enzymatic reactions in a typical bacterial metabolism, only a few hundred comprise the metabolic pathways essential to produce energy carriers and biosynthetic precursors. These pathways carry relatively high flux, are central to maintaining and reproducing the cell, and provide precursors and energy to engineered metabolic pathways. Here, focusing on these central metabolic subsystems, we present a manually-curated medium-scale model of energy and biosynthesis metabolism for the well-studied prokaryote Escherichia coli K-12 MG1655. The model is a sub-network of the most recent genome-scale reconstruction, iML1515, and comes with an updated layer of database annotations, as well as a range of metabolic maps for visualisation. We enriched the stoichiometric network with extensive biological information and quantitative data, enhancing the scope and applicability of the model. In addition, here we assess the properties of this model in relation to its genome-scale parent and demonstrate the use of the network and supporting data in various scenarios, including enzyme-constrained flux balance analysis, elementary flux mode analysis, and thermodynamic analysis. Overall, we believe this model holds the potential to become a reference medium-scale metabolic model for E. coli.
1412.0199
Axel G. Rossberg
Adrian Farcas and Axel G. Rossberg
Maximum sustainable yield from interacting fish stocks in an uncertain world: two policy choices and underlying trade-offs
21 pages, 1 figure, 2 tables, plus supplementary material (substantial textual revision of v5)
null
10.1093/icesjms/fsw113
null
q-bio.PE nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The case of fisheries management illustrates how the inherent structural instability of ecosystems can have deep-running policy implications. We contrast ten types of management plans to achieve maximum sustainable yields (MSY) from multiple stocks and compare their effectiveness based on a management strategy evaluation (MSE) that uses complex food webs in its operating model. Plans that target specific stock sizes ($B_{\text{MSY}}$) consistently led to higher yields than plans targeting specific fishing pressures ($F_{\text{MSY}}$). A new self-optimising control rule, introduced here for its robustness to structural instability, led to intermediate yields. Most plans outperformed single-species management plans with pressure targets set without considering multispecies interactions. However, more refined plans to "maximise the yield from each stock separately", in the sense of a Nash equilibrium, produced total yields comparable to plans aiming to maximise total harvested biomass, and were more robust to structural instability. Our analyses highlight trade-offs between yields, amenability to negotiations, pressures on biodiversity, and continuity with current approaches in the European context. Based on these results, we recommend directions for developments of EU fisheries policy.
[ { "created": "Sun, 30 Nov 2014 09:50:08 GMT", "version": "v1" }, { "created": "Thu, 11 Dec 2014 16:43:28 GMT", "version": "v2" }, { "created": "Mon, 12 Jan 2015 10:03:37 GMT", "version": "v3" }, { "created": "Sat, 26 Dec 2015 16:50:55 GMT", "version": "v4" }, { "c...
2016-10-04
[ [ "Farcas", "Adrian", "" ], [ "Rossberg", "Axel G.", "" ] ]
The case of fisheries management illustrates how the inherent structural instability of ecosystems can have deep-running policy implications. We contrast ten types of management plans to achieve maximum sustainable yields (MSY) from multiple stocks and compare their effectiveness based on a management strategy evaluation (MSE) that uses complex food webs in its operating model. Plans that target specific stock sizes ($B_{\text{MSY}}$) consistently led to higher yields than plans targeting specific fishing pressures ($F_{\text{MSY}}$). A new self-optimising control rule, introduced here for its robustness to structural instability, led to intermediate yields. Most plans outperformed single-species management plans with pressure targets set without considering multispecies interactions. However, more refined plans to "maximise the yield from each stock separately", in the sense of a Nash equilibrium, produced total yields comparable to plans aiming to maximise total harvested biomass, and were more robust to structural instability. Our analyses highlight trade-offs between yields, amenability to negotiations, pressures on biodiversity, and continuity with current approaches in the European context. Based on these results, we recommend directions for developments of EU fisheries policy.
2107.04340
Yong Huang
Yeji Wang, Shuo Wu, Yanwen Duan, Yong Huang
A Point Cloud-Based Deep Learning Strategy for Protein-Ligand Binding Affinity Prediction
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is great interest to develop artificial intelligence-based protein-ligand affinity models due to their immense applications in drug discovery. In this paper, PointNet and PointTransformer, two pointwise multi-layer perceptrons have been applied for protein-ligand affinity prediction for the first time. Three-dimensional point clouds could be rapidly generated from the data sets in PDBbind-2016, which contain 3 772 and 11 327 individual point clouds derived from the refined or/and general sets, respectively. These point clouds were used to train PointNet or PointTransformer, resulting in protein-ligand affinity prediction models with Pearson correlation coefficients R = 0.831 or 0.859 from the larger point clouds respectively, based on the CASF-2016 benchmark test. The analysis of the parameters suggests that the two deep learning models were capable to learn many interactions between proteins and their ligands, and these key atoms for the interaction could be visualized in point clouds. The protein-ligand interaction features learned by PointTransformer could be further adapted for the XGBoost-based machine learning algorithm, resulting in prediction models with an average Rp of 0.831, which is on par with the state-of-the-art machine learning models based on PDBbind database. These results suggest that point clouds derived from the PDBbind datasets are useful to evaluate the performance of 3D point clouds-centered deep learning algorithms, which could learn critical protein-ligand interactions from natural evolution or medicinal chemistry and have wide applications in studying protein-ligand interactions.
[ { "created": "Fri, 9 Jul 2021 10:17:48 GMT", "version": "v1" } ]
2021-07-12
[ [ "Wang", "Yeji", "" ], [ "Wu", "Shuo", "" ], [ "Duan", "Yanwen", "" ], [ "Huang", "Yong", "" ] ]
There is great interest in developing artificial intelligence-based protein-ligand affinity models due to their immense applications in drug discovery. In this paper, PointNet and PointTransformer, two pointwise multi-layer perceptrons, have been applied to protein-ligand affinity prediction for the first time. Three-dimensional point clouds could be rapidly generated from the data sets in PDBbind-2016, which contain 3,772 and 11,327 individual point clouds derived from the refined and/or general sets, respectively. These point clouds were used to train PointNet or PointTransformer, resulting in protein-ligand affinity prediction models with Pearson correlation coefficients R = 0.831 or 0.859 from the larger point clouds, respectively, based on the CASF-2016 benchmark test. The analysis of the parameters suggests that the two deep learning models were capable of learning many interactions between proteins and their ligands, and the key atoms for the interactions could be visualized in the point clouds. The protein-ligand interaction features learned by PointTransformer could be further adapted for the XGBoost-based machine learning algorithm, resulting in prediction models with an average Rp of 0.831, which is on par with the state-of-the-art machine learning models based on the PDBbind database. These results suggest that point clouds derived from the PDBbind datasets are useful for evaluating the performance of 3D point-cloud-centered deep learning algorithms, which could learn critical protein-ligand interactions from natural evolution or medicinal chemistry and have wide applications in studying protein-ligand interactions.
2006.08608
Mohammed El-Magd
Mohammed A. El-Magd
Is SARS-CoV-2 a new Frankenstein monster virus?
11 pages and 1 figure
null
null
null
q-bio.TO q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a novel virus of beta coronavirus genus originated in Wuhan, China in December 2019, resulted in the pandemic spread of Corona Virus Disease 2019 (COVID-19) worldwide. This genus also contains SARS-CoV (originated in China 2002-2003) and MERS-CoV (found in Saudi Arabia 2012). The nucleotide sequences of SARS-CoV-2 is closer to SARS-CoV (with approximately 80% identity) than to MERS-CoV. Despite these similarities, SARS-CoV-2 has two main features over the other coronaviruses. The first is the high contagious rate and the second is the immune response evasion. The higher transmission ability makes this virus quickly spread worldwide with a high mortality rate and more economic losses. This review will provide an overview of the current knowledge on the role of some viral and host cell factors in higher transmission and contagious spread of SARS-CoV-2.
[ { "created": "Sun, 14 Jun 2020 17:32:15 GMT", "version": "v1" } ]
2020-06-17
[ [ "El-Magd", "Mohammed A.", "" ] ]
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a novel virus of the betacoronavirus genus that originated in Wuhan, China in December 2019, resulted in the pandemic spread of coronavirus disease 2019 (COVID-19) worldwide. This genus also contains SARS-CoV (which originated in China in 2002-2003) and MERS-CoV (found in Saudi Arabia in 2012). The nucleotide sequence of SARS-CoV-2 is closer to that of SARS-CoV (with approximately 80% identity) than to that of MERS-CoV. Despite these similarities, SARS-CoV-2 has two main features that set it apart from the other coronaviruses: the first is its high contagion rate and the second is its evasion of the immune response. Its higher transmissibility has allowed this virus to spread quickly worldwide, with a high mortality rate and large economic losses. This review will provide an overview of the current knowledge on the role of viral and host cell factors in the higher transmissibility and contagious spread of SARS-CoV-2.
2008.04435
Mia Morrell
Mia C. Morrell, Audrey J. Sederberg, Ilya Nemenman
Latent dynamical variables produce signatures of spatiotemporal criticality in large biological systems
null
Phys. Rev. Lett. 126, 118302 (2021)
10.1103/PhysRevLett.126.118302
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the activity of large populations of neurons is difficult due to the combinatorial complexity of possible cell-cell interactions. To reduce the complexity, coarse-graining had been previously applied to experimental neural recordings, which showed over two decades of scaling in free energy, activity variance, eigenvalue spectra, and correlation time, hinting that the mouse hippocampus operates in a critical regime. We model the experiment by simulating conditionally independent binary neurons coupled to a small number of long-timescale stochastic fields and then replicating the coarse-graining procedure and analysis. This reproduces the experimentally-observed scalings, suggesting that they may arise from coupling the neural population activity to latent dynamic stimuli. Further, parameter sweeps for our model suggest that emergence of scaling requires most of the cells in a population to couple to the latent stimuli, predicting that even the celebrated place cells must also respond to non-place stimuli.
[ { "created": "Mon, 10 Aug 2020 22:07:12 GMT", "version": "v1" } ]
2021-03-24
[ [ "Morrell", "Mia C.", "" ], [ "Sederberg", "Audrey J.", "" ], [ "Nemenman", "Ilya", "" ] ]
Understanding the activity of large populations of neurons is difficult due to the combinatorial complexity of possible cell-cell interactions. To reduce the complexity, coarse-graining had been previously applied to experimental neural recordings, which showed over two decades of scaling in free energy, activity variance, eigenvalue spectra, and correlation time, hinting that the mouse hippocampus operates in a critical regime. We model the experiment by simulating conditionally independent binary neurons coupled to a small number of long-timescale stochastic fields and then replicating the coarse-graining procedure and analysis. This reproduces the experimentally-observed scalings, suggesting that they may arise from coupling the neural population activity to latent dynamic stimuli. Further, parameter sweeps for our model suggest that emergence of scaling requires most of the cells in a population to couple to the latent stimuli, predicting that even the celebrated place cells must also respond to non-place stimuli.
0711.2096
Eduardo Candelario-Jalil
Eduardo Candelario-Jalil, Saeid Taheri, Yi Yang, Rohit Sood, Mark Grossetete, Eduardo Y. Estrada, Bernd L. Fiebich, Gary A. Rosenberg
Cyclooxygenase Inhibition Limits Blood-Brain Barrier Disruption following Intracerebral Injection of Tumor Necrosis Factor-alpha in the Rat
null
Journal of Pharmacology and Experimental Therapeutics 323(2): 488-498 (2007)
null
null
q-bio.TO q-bio.BM
null
Increased permeability of the blood-brain barrier (BBB) is important in neurological disorders. Neuroinflammation is associated with increased BBB breakdown and brain injury. Tumor necrosis factor-a (TNF-a) is involved in BBB injury and edema formation through a mechanism involving matrix metalloproteinase (MMP) upregulation. There is emerging evidence indicating that cyclooxygenase (COX) inhibition limits BBB disruption following ischemic stroke and bacterial meningitis, but the mechanisms involved are not known. We used intracerebral injection of TNF-a to study the effect of COX inhibition on TNF-a-induced BBB breakdown, MMP expression/activity and oxidative stress. BBB disruption was evaluated by the uptake of 14C-sucrose into the brain and by magnetic resonance imaging (MRI) utilizing Gd-DTPA as a paramagnetic contrast agent. Using selective inhibitors of each COX isoform, we found that COX-1 activity is more important than COX-2 in BBB opening. TNF-a induced a significant upregulation of gelatinase B (MMP-9), stromelysin-1 (MMP-3) and COX-2. In addition, TNF-a significantly depleted glutathione as compared to saline. Indomethacin (10 mg/kg; i.p.), an inhibitor of COX-1 and COX-2, reduced BBB damage at 24 h. Indomethacin significantly attenuated MMP-9 and MMP-3 expression and activation, and prevented the loss of endogenous radical scavenging capacity following intracerebral injection of TNF-a. Our results show for the first time that BBB disruption during neuroinflammation can be significantly reduced by administration of COX inhibitors. Modulation of COX in brain injury by COX inhibitors or agents modulating prostaglandin E2 formation/signaling may be useful in clinical settings associated with BBB disruption.
[ { "created": "Tue, 13 Nov 2007 23:54:39 GMT", "version": "v1" } ]
2007-11-15
[ [ "Candelario-Jalil", "Eduardo", "" ], [ "Taheri", "Saeid", "" ], [ "Yang", "Yi", "" ], [ "Sood", "Rohit", "" ], [ "Grossetete", "Mark", "" ], [ "Estrada", "Eduardo Y.", "" ], [ "Fiebich", "Bernd L.", "" ],...
Increased permeability of the blood-brain barrier (BBB) is important in neurological disorders. Neuroinflammation is associated with increased BBB breakdown and brain injury. Tumor necrosis factor-α (TNF-α) is involved in BBB injury and edema formation through a mechanism involving matrix metalloproteinase (MMP) upregulation. There is emerging evidence indicating that cyclooxygenase (COX) inhibition limits BBB disruption following ischemic stroke and bacterial meningitis, but the mechanisms involved are not known. We used intracerebral injection of TNF-α to study the effect of COX inhibition on TNF-α-induced BBB breakdown, MMP expression/activity and oxidative stress. BBB disruption was evaluated by the uptake of 14C-sucrose into the brain and by magnetic resonance imaging (MRI) utilizing Gd-DTPA as a paramagnetic contrast agent. Using selective inhibitors of each COX isoform, we found that COX-1 activity is more important than COX-2 in BBB opening. TNF-α induced a significant upregulation of gelatinase B (MMP-9), stromelysin-1 (MMP-3) and COX-2. In addition, TNF-α significantly depleted glutathione as compared to saline. Indomethacin (10 mg/kg; i.p.), an inhibitor of COX-1 and COX-2, reduced BBB damage at 24 h. Indomethacin significantly attenuated MMP-9 and MMP-3 expression and activation, and prevented the loss of endogenous radical scavenging capacity following intracerebral injection of TNF-α. Our results show for the first time that BBB disruption during neuroinflammation can be significantly reduced by administration of COX inhibitors. Modulation of COX in brain injury by COX inhibitors or agents modulating prostaglandin E2 formation/signaling may be useful in clinical settings associated with BBB disruption.
1307.8263
Laurent Frantz
Konrad Lohse and Laurent A.F. Frantz
Maximum likelihood evidence for Neandertal admixture in Eurasian populations from three genomes
null
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although there has been much interest in estimating divergence and admixture from genomic data, it has proven difficult to distinguish gene flow after divergence from alternative histories involving structure in the ancestral population. The lack of a formal test to distinguish these scenarios has sparked recent controversy about the possibility of interbreeding between Neandertals and modern humans in Eurasia. We derive the probability of mutational configurations in non-recombining sequence blocks under alternative histories of divergence with admixture and ancestral structure. Dividing the genome into short blocks makes it possible to compute maximum likelihood estimates of parameters under both models. We apply this method to triplets of human Neandertal genomes and quantify the relative support for models of long-term population structure in the ancestral African popuation and admixture from Neandertals into Eurasian populations after their expansion out of Africa. Our analysis allows us -- for the first time -- to formally reject a history of ancestral population structure and instead reveals strong support for admixture from Neandertals into Eurasian populations at a higher rate (3.4%-7.9%) than suggested previously.
[ { "created": "Wed, 31 Jul 2013 10:01:53 GMT", "version": "v1" } ]
2013-08-01
[ [ "Lohse", "Konrad", "" ], [ "Frantz", "Laurent A. F.", "" ] ]
Although there has been much interest in estimating divergence and admixture from genomic data, it has proven difficult to distinguish gene flow after divergence from alternative histories involving structure in the ancestral population. The lack of a formal test to distinguish these scenarios has sparked recent controversy about the possibility of interbreeding between Neandertals and modern humans in Eurasia. We derive the probability of mutational configurations in non-recombining sequence blocks under alternative histories of divergence with admixture and ancestral structure. Dividing the genome into short blocks makes it possible to compute maximum likelihood estimates of parameters under both models. We apply this method to triplets of human and Neandertal genomes and quantify the relative support for models of long-term population structure in the ancestral African population and admixture from Neandertals into Eurasian populations after their expansion out of Africa. Our analysis allows us -- for the first time -- to formally reject a history of ancestral population structure and instead reveals strong support for admixture from Neandertals into Eurasian populations at a higher rate (3.4%-7.9%) than suggested previously.
1707.07852
J\'ozsef Vass Ph.D.
J\'ozsef Vass, Sergey N. Krylov
A Computational Resolution of the Inverse Problem of Kinetic Capillary Electrophoresis (KCE)
Contains 10 pages with 4 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining kinetic rate constants is a highly relevant problem in biochemistry, so various methods have been designed to extract them from experimental data. Such methods have two main components: the experimental apparatus and the subsequent analysis, the latter often dependent on mathematical theory. Thus the theoretical approach taken influences the effectiveness of constant determination. A computational inverse problem approach is hereby presented, which does not merely give a single rough approximation of the sought constants, but is inherently capable of determining them from exact signals to arbitrary accuracy. This approach is thus not merely novel, but opens a whole new category of solution approaches in the field, enabled primarily by an efficient direct solver.
[ { "created": "Tue, 25 Jul 2017 08:55:26 GMT", "version": "v1" }, { "created": "Wed, 2 Aug 2017 18:51:17 GMT", "version": "v2" } ]
2017-08-04
[ [ "Vass", "József", "" ], [ "Krylov", "Sergey N.", "" ] ]
Determining kinetic rate constants is a highly relevant problem in biochemistry, so various methods have been designed to extract them from experimental data. Such methods have two main components: the experimental apparatus and the subsequent analysis, the latter often dependent on mathematical theory. Thus the theoretical approach taken influences the effectiveness of constant determination. A computational inverse-problem approach is presented here, which does not merely give a single rough approximation of the sought constants, but is inherently capable of determining them from exact signals to arbitrary accuracy. This approach is thus not merely novel but opens a whole new category of solution approaches in the field, enabled primarily by an efficient direct solver.
2311.02652
Vasyl' Davydovych
Vasyl' Davydovych, Vasyl' Dutka, Roman Cherniha
Reaction-diffusion equations in mathematical models arising in epidemiology
null
null
10.3390/sym15112025
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The review is devoted to analysis of mathematical models used for describing epidemic processes. A main focus is done on the models that are based on partial differential equations (PDEs), especially those that were developed and used for the COVID-19 pandemic modelling. Our attention is paid preferable to the studies in which not only results of numerical simulations are presented but analytical results as well. In particular, travelling fronts (waves), exact solutions, estimation of key epidemic parameters of the epidemic models with governing PDEs (typically reaction-diffusion equations) are discussed. The review may serve as a valuable source for researchers and practitioners in the field of mathematical modelling in epidemiology.
[ { "created": "Sun, 5 Nov 2023 13:40:57 GMT", "version": "v1" } ]
2024-03-01
[ [ "Davydovych", "Vasyl'", "" ], [ "Dutka", "Vasyl'", "" ], [ "Cherniha", "Roman", "" ] ]
This review is devoted to the analysis of mathematical models used to describe epidemic processes. The main focus is on models based on partial differential equations (PDEs), especially those that were developed and used for COVID-19 pandemic modelling. Attention is paid preferentially to studies in which not only results of numerical simulations but also analytical results are presented. In particular, travelling fronts (waves), exact solutions, and the estimation of key epidemic parameters of epidemic models with governing PDEs (typically reaction-diffusion equations) are discussed. The review may serve as a valuable source for researchers and practitioners in the field of mathematical modelling in epidemiology.
1105.0955
Dan Siegal-Gaskins
Marisa C. Eisenberg, Joshua N. Ash, and Dan Siegal-Gaskins
In Silico Synchronization of Cellular Populations Through Expression Data Deconvolution
accepted for the 48th ACM/IEEE Design Automation Conference
null
10.1145/2024724.2024906
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cellular populations are typically heterogenous collections of cells at different points in their respective cell cycles, each with a cell cycle time that varies from individual to individual. As a result, true single-cell behavior, particularly that which is cell-cycle--dependent, is often obscured in population-level (averaged) measurements. We have developed a simple deconvolution method that can be used to remove the effects of asynchronous variability from population-level time-series data. In this paper, we summarize some recent progress in the development and application of our approach, and provide technical updates that result in increased biological fidelity. We also explore several preliminary validation results and discuss several ongoing applications that highlight the method's usefulness for estimating parameters in differential equation models of single-cell gene regulation.
[ { "created": "Wed, 4 May 2011 21:51:32 GMT", "version": "v1" } ]
2013-07-02
[ [ "Eisenberg", "Marisa C.", "" ], [ "Ash", "Joshua N.", "" ], [ "Siegal-Gaskins", "Dan", "" ] ]
Cellular populations are typically heterogeneous collections of cells at different points in their respective cell cycles, each with a cell cycle time that varies from individual to individual. As a result, true single-cell behavior, particularly that which is cell-cycle--dependent, is often obscured in population-level (averaged) measurements. We have developed a simple deconvolution method that can be used to remove the effects of asynchronous variability from population-level time-series data. In this paper, we summarize some recent progress in the development and application of our approach, and provide technical updates that result in increased biological fidelity. We also explore several preliminary validation results and discuss several ongoing applications that highlight the method's usefulness for estimating parameters in differential equation models of single-cell gene regulation.
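The general idea behind such deconvolution can be sketched as follows (a conceptual illustration, not the authors' algorithm): treat the population signal as the single-cell profile convolved with a distribution of cell-cycle offsets, and invert by regularized Fourier division.

```python
import numpy as np

n = 256
t = np.arange(n)
single_cell = np.exp(-0.5 * ((t - 60) / 8.0) ** 2)       # true single-cell pulse
offsets = np.exp(-0.5 * ((t - 30) / 15.0) ** 2)
offsets /= offsets.sum()                                  # cell-cycle phase distribution

# Forward model: the population signal is a circular convolution.
population = np.real(np.fft.ifft(np.fft.fft(single_cell) * np.fft.fft(offsets)))

# Wiener-style regularized deconvolution recovers the single-cell profile.
F_pop, F_off = np.fft.fft(population), np.fft.fft(offsets)
eps = 1e-6
recovered = np.real(np.fft.ifft(F_pop * np.conj(F_off) / (np.abs(F_off) ** 2 + eps)))
print("max reconstruction error:", float(np.max(np.abs(recovered - single_cell))))
```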
1710.11612
Zachary Kilpatrick PhD
Nikhil Krishnan, Daniel B Poll, and Zachary P Kilpatrick
Synaptic efficacy shapes resource limitations in working memory
26 pages, 12 figures
null
null
null
q-bio.NC nlin.PS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Working memory (WM) is limited in its temporal length and capacity. Classic conceptions of WM capacity assume the system possesses a finite number of slots, but recent evidence suggests WM may be a continuous resource. Resource models typically assume there is no hard upper bound on the number of items that can be stored, but WM fidelity decreases with the number of items. We analyze a neural field model of multi-item WM that associates each item with the location of a bump in a finite spatial domain, considering items that span a one-dimensional continuous feature space. Our analysis relates the neural architecture of the network to accumulated errors and capacity limitations arising during the delay period of a multi-item WM task. Networks with stronger synapses support wider bumps that interact more, whereas networks with weaker synapses support narrower bumps that are more susceptible to noise perturbations. There is an optimal synaptic strength that limits both bump interaction events and the effects of noise perturbations. This optimum shifts to weaker synapses as the number of items stored in the network is increased. Our model not only provides a neural circuit explanation for WM capacity, but also speaks to how capacity relates to the arrangement of stored items in a feature space.
[ { "created": "Tue, 31 Oct 2017 17:43:10 GMT", "version": "v1" }, { "created": "Sun, 11 Feb 2018 04:50:55 GMT", "version": "v2" } ]
2018-02-13
[ [ "Krishnan", "Nikhil", "" ], [ "Poll", "Daniel B", "" ], [ "Kilpatrick", "Zachary P", "" ] ]
Working memory (WM) is limited in its temporal length and capacity. Classic conceptions of WM capacity assume the system possesses a finite number of slots, but recent evidence suggests WM may be a continuous resource. Resource models typically assume there is no hard upper bound on the number of items that can be stored, but WM fidelity decreases with the number of items. We analyze a neural field model of multi-item WM that associates each item with the location of a bump in a finite spatial domain, considering items that span a one-dimensional continuous feature space. Our analysis relates the neural architecture of the network to accumulated errors and capacity limitations arising during the delay period of a multi-item WM task. Networks with stronger synapses support wider bumps that interact more, whereas networks with weaker synapses support narrower bumps that are more susceptible to noise perturbations. There is an optimal synaptic strength that limits both bump interaction events and the effects of noise perturbations. This optimum shifts to weaker synapses as the number of items stored in the network is increased. Our model not only provides a neural circuit explanation for WM capacity, but also speaks to how capacity relates to the arrangement of stored items in a feature space.
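A minimal sketch of the model class (an Amari-type neural field with a difference-of-Gaussians kernel and invented parameters, not the paper's specific network): a transient input leaves behind a self-sustained bump.

```python
import numpy as np

n = 256
x = np.linspace(-np.pi, np.pi, n, endpoint=False)
dx = x[1] - x[0]

def gauss(z, s):
    return np.exp(-z**2 / (2 * s**2))

w = gauss(x, 0.3) - 0.5 * gauss(x, 1.0)        # local excitation, broad inhibition
w_hat = np.fft.fft(np.fft.ifftshift(w))        # centre kernel for FFT convolution
u = 2.0 * gauss(x, 0.2)                        # transient bump-shaped input

dt, theta = 0.1, 0.15
for _ in range(1000):
    f = (u > theta).astype(float)              # Heaviside firing rate
    conv = dx * np.real(np.fft.ifft(w_hat * np.fft.fft(f)))
    u += dt * (-u + conv)                      # Amari field dynamics

print("bump persists:", bool((u > theta).any()),
      "| width in grid points:", int((u > theta).sum()))
```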
1111.0977
Vernon Williams MR
Vernon Williams
Simulation for Evolution of the Australian Netting Spider PM Eye
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper reports on a simulated evolution project whose goal was to simulate, on a desktop computer, the refractive components of the PM eye of the Australian netting spider Dinopis subrufus. The model for the simulation is the anatomy of the eye described by Blest and Land [Ble & Lan 1977]. The evolution simulation was able to produce hundreds of eyes with optical qualities equivalent to those measured for the phenotype of the netting spider. These artificially evolved eyes began to occur in the computer simulation between 8x10^6 and 35x10^6 cycles. A previous paper, Demonstration of Darwinian Theory of Evolution (arXiv 1006.0480), simulated the evolution of the ctenid spider PM eye, Cupiennius salei. This paper follows the previous one, but the netting spider eye is more complex than the ctenid PM eye, so the simulated evolution equations are more complex.
[ { "created": "Thu, 3 Nov 2011 20:12:15 GMT", "version": "v1" } ]
2011-11-07
[ [ "Williams", "Vernon", "" ] ]
This paper reports on a simulated evolution project whose goal was to simulate, on a desktop computer, the refractive components of the PM eye of the Australian netting spider Dinopis subrufus. The model for the simulation is the anatomy of the eye described by Blest and Land [Ble & Lan 1977]. The evolution simulation was able to produce hundreds of eyes with optical qualities equivalent to those measured for the phenotype of the netting spider. These artificially evolved eyes began to occur in the computer simulation between 8x10^6 and 35x10^6 cycles. A previous paper, Demonstration of Darwinian Theory of Evolution (arXiv 1006.0480), simulated the evolution of the ctenid spider PM eye, Cupiennius salei. This paper follows the previous one, but the netting spider eye is more complex than the ctenid PM eye, so the simulated evolution equations are more complex.
0902.1052
Andrea De Martino
C. Martelli, A. De Martino, E. Marinari, M. Marsili, I. Perez Castillo
Identifying essential genes in E. coli from a metabolic optimization principle
9 pages, to appear in PNAS, see http://www.pnas.org/content/early/2009/02/05/0813229106.abstract for the early edition
null
10.1073/pnas.0813229106
null
q-bio.MN cond-mat.dis-nn cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the organization of reaction fluxes in cellular metabolism from the stoichiometry and the topology of the underlying biochemical network is a central issue in systems biology. In this task, it is important to devise reasonable approximation schemes that rely on the stoichiometric data only, because full-scale kinetic approaches are computationally affordable only for small networks (e.g. red blood cells, about 50 reactions). Methods commonly employed are based on finding the stationary flux configurations that satisfy mass-balance conditions for metabolites, often coupling them to local optimization rules (e.g. maximization of biomass production) to reduce the size of the solution space to a single point. Such methods have been widely applied and have proven able to reproduce experimental findings for relatively simple organisms in specific conditions. Here we define and study a constraint-based model of cellular metabolism where neither mass balance nor flux stationarity are postulated, and where the relevant flux configurations optimize the global growth of the system. In the case of E. coli, steady flux states are recovered as solutions, though mass-balance conditions are violated for some metabolites, implying a non-zero net production of the latter. Such solutions furthermore turn out to provide the correct statistics of fluxes for the bacterium E. coli in different environments and compare well with the available experimental evidence on individual fluxes. Conserved metabolic pools play a key role in determining growth rate and flux variability. Finally, we are able to connect phenomenological gene essentiality with `frozen' fluxes (i.e. fluxes with smaller allowed variability) in E. coli metabolism.
[ { "created": "Fri, 6 Feb 2009 10:33:02 GMT", "version": "v1" } ]
2009-11-13
[ [ "Martelli", "C.", "" ], [ "De Martino", "A.", "" ], [ "Marinari", "E.", "" ], [ "Marsili", "M.", "" ], [ "Castillo", "I. Perez", "" ] ]
Understanding the organization of reaction fluxes in cellular metabolism from the stoichiometry and the topology of the underlying biochemical network is a central issue in systems biology. In this task, it is important to devise reasonable approximation schemes that rely on the stoichiometric data only, because full-scale kinetic approaches are computationally affordable only for small networks (e.g. red blood cells, about 50 reactions). Methods commonly employed are based on finding the stationary flux configurations that satisfy mass-balance conditions for metabolites, often coupling them to local optimization rules (e.g. maximization of biomass production) to reduce the size of the solution space to a single point. Such methods have been widely applied and have proven able to reproduce experimental findings for relatively simple organisms in specific conditions. Here we define and study a constraint-based model of cellular metabolism where neither mass balance nor flux stationarity are postulated, and where the relevant flux configurations optimize the global growth of the system. In the case of E. coli, steady flux states are recovered as solutions, though mass-balance conditions are violated for some metabolites, implying a non-zero net production of the latter. Such solutions furthermore turn out to provide the correct statistics of fluxes for the bacterium E. coli in different environments and compare well with the available experimental evidence on individual fluxes. Conserved metabolic pools play a key role in determining growth rate and flux variability. Finally, we are able to connect phenomenological gene essentiality with `frozen' fluxes (i.e. fluxes with smaller allowed variability) in E. coli metabolism.
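For contrast with the relaxed model described above, the standard flux-balance computation it departs from can be written in a few lines; the toy network and bounds below are invented for illustration.

```python
# Standard flux-balance analysis on a toy 3-reaction network:
# maximize the "biomass" flux v3 subject to S v = 0 and flux bounds.
import numpy as np
from scipy.optimize import linprog

# One internal metabolite M; reactions:
#   v1: -> M (uptake), v2: M -> (secretion), v3: M -> biomass
S = np.array([[1.0, -1.0, -1.0]])

c = np.array([0.0, 0.0, -1.0])        # linprog minimizes, so negate biomass
bounds = [(0, 10), (0, 10), (0, 10)]  # flux capacity constraints

res = linprog(c, A_eq=S, b_eq=np.zeros(1), bounds=bounds)
print("optimal fluxes:", res.x)       # expect v1 = 10, v2 = 0, v3 = 10
```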
2109.00089
Ralf Kircheis
Ralf Kircheis
Coagulopathies after vaccination against SARS-CoV-2 may be derived from a combination effect of SARS-CoV-2 spike protein and adenovirus vector-triggered signaling pathways
null
null
null
null
q-bio.CB q-bio.BM q-bio.MN
http://creativecommons.org/licenses/by/4.0/
The novel coronavirus SARS-CoV-2 has resulted in a global pandemic, with worldwide six-digit daily infection counts and death tolls in the thousands. Enormous efforts are being undertaken to achieve high immunization coverage in order to reach herd immunity and stop the spread of SARS-CoV-2 infection. Several SARS-CoV-2 vaccines, based either on mRNA, viral vectors, or inactivated SARS-CoV-2 virus, have been approved and are being applied worldwide. However, increased numbers of normally very rare types of thromboses associated with thrombocytopenia have recently been reported, in particular in the context of the adenoviral vector vaccine ChAdOx1 nCoV-19 from AstraZeneca. While the statistical prevalence of these side effects seems to correlate with this particular vaccine type, i.e. adenoviral vector-based vaccines, the exact molecular mechanisms are still not clear. The present review summarizes current data and hypotheses on the molecular and cellular mechanisms into one integrated hypothesis indicating that coagulopathies, including thromboses, thrombocytopenia, and other related side effects, are correlated with an interplay of the two components of the vaccine, i.e. the spike antigen and the adenoviral vector, with the innate immune system, which under certain circumstances can mimic a limited form of the COVID-19 pathological picture.
[ { "created": "Tue, 31 Aug 2021 21:50:34 GMT", "version": "v1" } ]
2021-09-02
[ [ "Kircheis", "Ralf", "" ] ]
The novel coronavirus SARS-CoV-2 has resulted in a global pandemic, with worldwide six-digit daily infection counts and death tolls in the thousands. Enormous efforts are being undertaken to achieve high immunization coverage in order to reach herd immunity and stop the spread of SARS-CoV-2 infection. Several SARS-CoV-2 vaccines, based either on mRNA, viral vectors, or inactivated SARS-CoV-2 virus, have been approved and are being applied worldwide. However, increased numbers of normally very rare types of thromboses associated with thrombocytopenia have recently been reported, in particular in the context of the adenoviral vector vaccine ChAdOx1 nCoV-19 from AstraZeneca. While the statistical prevalence of these side effects seems to correlate with this particular vaccine type, i.e. adenoviral vector-based vaccines, the exact molecular mechanisms are still not clear. The present review summarizes current data and hypotheses on the molecular and cellular mechanisms into one integrated hypothesis indicating that coagulopathies, including thromboses, thrombocytopenia, and other related side effects, are correlated with an interplay of the two components of the vaccine, i.e. the spike antigen and the adenoviral vector, with the innate immune system, which under certain circumstances can mimic a limited form of the COVID-19 pathological picture.
1510.01313
Alexandre Vidal
Aur\'elie Garnier, Alexandre Vidal, Habib Benali
A theoretical study of the role of astrocyte activity in neuronal hyperexcitability using a new neuro-glial mass model
22 pages, 11 figures, article preprint
null
null
null
q-bio.NC math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The investigation of the neuronal environment allows us to better understand the activity of a cerebral region as a whole. Recent experimental evidence of the presence of transporters for glutamate and GABA in both neuronal and astrocyte compartments raises the question of the functional importance of astrocytes in the regulation of neuronal activity. We propose a new computational model at the mesoscopic scale embedding recent knowledge of the physiology of coupled neuronal and astrocytic activity. The neural compartment is a neural mass model with double excitatory feedback, and the glial compartment focuses on the dynamics of glutamate and GABA concentrations. Using the proposed model, we first study the impact of a deficiency in the reuptake of GABA by astrocytes, which implies an increase in GABA concentration in the extracellular space. A decrease in the frequency of neural activity is observed and explained through the dynamics analysis. Second, we investigate the neuronal response to a deficiency in the reuptake of glutamate by the astrocytes. In this case, we identify three behaviors: the neural activity may be reduced, enhanced, or, alternatively, may undergo a transient of high activity before stabilizing around a new activity regime with a frequency close to the nominal one. After translating the neuronal excitability modulation theoretically using the bifurcation structure of the neural mass model, we state the conditions on the glial feedback parameters corresponding to each behavior.
[ { "created": "Sun, 4 Oct 2015 15:14:59 GMT", "version": "v1" } ]
2015-10-07
[ [ "Garnier", "Aurélie", "" ], [ "Vidal", "Alexandre", "" ], [ "Benali", "Habib", "" ] ]
The investigation of the neuronal environment allows us to better understand the activity of a cerebral region as a whole. Recent experimental evidence of the presence of transporters for glutamate and GABA in both neuronal and astrocyte compartments raises the question of the functional importance of astrocytes in the regulation of neuronal activity. We propose a new computational model at the mesoscopic scale embedding recent knowledge of the physiology of coupled neuronal and astrocytic activity. The neural compartment is a neural mass model with double excitatory feedback, and the glial compartment focuses on the dynamics of glutamate and GABA concentrations. Using the proposed model, we first study the impact of a deficiency in the reuptake of GABA by astrocytes, which implies an increase in GABA concentration in the extracellular space. A decrease in the frequency of neural activity is observed and explained through the dynamics analysis. Second, we investigate the neuronal response to a deficiency in the reuptake of glutamate by the astrocytes. In this case, we identify three behaviors: the neural activity may be reduced, enhanced, or, alternatively, may undergo a transient of high activity before stabilizing around a new activity regime with a frequency close to the nominal one. After translating the neuronal excitability modulation theoretically using the bifurcation structure of the neural mass model, we state the conditions on the glial feedback parameters corresponding to each behavior.
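A loosely related toy (not the paper's neural mass model, and with invented equations and rates): a two-variable ODE in which an extracellular GABA pool, cleared at an "astrocytic reuptake" rate, inhibits a firing-rate variable, so that deficient reuptake lowers activity.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y, reuptake):
    r, g = y
    drive = 1.5
    dr = -r + np.tanh(drive - 2.0 * g)      # sigmoidal rate dynamics
    dg = 0.5 * max(r, 0.0) - reuptake * g   # release minus astrocytic clearance
    return [dr, dg]

for reuptake in (1.0, 0.2):                  # normal vs deficient reuptake
    sol = solve_ivp(rhs, (0, 50), [0.1, 0.0], args=(reuptake,), max_step=0.05)
    print(f"reuptake={reuptake}: steady-state rate r = {sol.y[0, -1]:.3f}")
```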
2207.10458
Cedric Gommes
Cedric Gommes, Thomas Louis, Isabelle Bourgot, Erik Maquoi, Silvia Blacher, Agnes Noel
Remodelling of the fibre-aggregate structure of collagen gels by cancer-associated fibroblasts: a time-resolved grey-tone image analysis based on stochastic modelling
null
null
null
null
q-bio.QM physics.data-an
http://creativecommons.org/licenses/by/4.0/
In solid tumors, cells constantly interact with the surrounding extracellular matrix. In particular, cancer-associated fibroblasts modulate the architecture of the matrix by exerting forces and contracting collagen fibres, creating paths that facilitate cancer cell migration. The characterization of the collagen fibre network and its space- and time-dependent remodelling is therefore key to investigating the interactions between cells and the matrix, and to understanding tumor growth. The structural complexity and multiscale nature of the collagen network rule out classical image analysis algorithms, and call for specific methods. We propose an approach based on the mathematical modelling of the collagen network, and on the identification of the model parameters from the correlation functions and histograms of grey-tone images. The specific model considered accounts for both the small-scale fibrillar structure of the network and for the presence of large-scale aggregates. When applied to time-resolved images of cancer-associated fibroblasts actively invading a collagen matrix, the method reveals two different densification mechanisms for the matrix in direct contact with or far from the cells. The very observation of two distinct phenomenologies hints at diverse mechanisms, which presumably involve both biochemical and mechanical effects.
[ { "created": "Thu, 21 Jul 2022 12:56:27 GMT", "version": "v1" } ]
2022-07-22
[ [ "Gommes", "Cedric", "" ], [ "Louis", "Thomas", "" ], [ "Bourgot", "Isabelle", "" ], [ "Maquoi", "Erik", "" ], [ "Blacher", "Silvia", "" ], [ "Noel", "Agnes", "" ] ]
In solid tumors, cells constantly interact with the surrounding extracellular matrix. In particular, cancer-associated fibroblasts modulate the architecture of the matrix by exerting forces and contracting collagen fibres, creating paths that facilitate cancer cell migration. The characterization of the collagen fibre network and its space- and time-dependent remodelling is therefore key to investigating the interactions between cells and the matrix, and to understanding tumor growth. The structural complexity and multiscale nature of the collagen network rule out classical image analysis algorithms, and call for specific methods. We propose an approach based on the mathematical modelling of the collagen network, and on the identification of the model parameters from the correlation functions and histograms of grey-tone images. The specific model considered accounts for both the small-scale fibrillar structure of the network and for the presence of large-scale aggregates. When applied to time-resolved images of cancer-associated fibroblasts actively invading a collagen matrix, the method reveals two different densification mechanisms for the matrix in direct contact with or far from the cells. The very observation of two distinct phenomenologies hints at diverse mechanisms, which presumably involve both biochemical and mechanical effects.
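The basic ingredient of such an analysis, the two-point correlation function of a grey-tone image, can be sketched with the FFT; the synthetic anisotropic texture below is an invented stand-in for a collagen image.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
# Crude fibre-like texture: noise smoothed more along one axis than the other.
img = gaussian_filter(rng.normal(size=(128, 128)), sigma=(1.0, 4.0))

fluct = img - img.mean()
power = np.abs(np.fft.fft2(fluct)) ** 2
corr = np.real(np.fft.ifft2(power)) / fluct.size   # Wiener-Khinchin autocorrelation
corr /= corr[0, 0]                                 # normalize so C(0) = 1

print("C at lag (0,1) (long axis): ", round(float(corr[0, 1]), 3))
print("C at lag (1,0) (short axis):", round(float(corr[1, 0]), 3))
```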
1709.07742
Alejandro Fendrik
Alejandro J.Fendrik, Lilia Romanelli and Ernesto Rotondo
Neutral dynamics and cell renewal of colonic crypts in homeostatic regime
null
null
10.1088/1478-3975/aaab9f
null
q-bio.TO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The self-renewal process in colonic crypts is the object of several studies. We present here a new compartment model with the following characteristics: a) we distinguish different classes of cells: stem cells, 6 generations of transit-amplifying cells, and the differentiated cells; b) in order to take into account the monoclonal character of the crypts in the homeostatic regime, we include symmetric divisions of the stem cells. We first consider the dynamical differential equations that describe the evolution of the mean values of the populations, but the small observed value of the total number of cells involved, together with the huge dispersion of the experimental data found in the literature, leads us to study the stochastic discrete process. This analysis allows us to study fluctuations, the neutral drift that leads to monoclonality, and the effects of the fixation of mutant clones.
[ { "created": "Fri, 22 Sep 2017 13:37:10 GMT", "version": "v1" } ]
2018-04-04
[ [ "Fendrik", "Alejandro J.", "" ], [ "Romanelli", "Lilia", "" ], [ "Rotondo", "Ernesto", "" ] ]
The self-renewal process in colonic crypts is the object of several studies. We present here a new compartment model with the following characteristics: a) we distinguish different classes of cells: stem cells, 6 generations of transit-amplifying cells, and the differentiated cells; b) in order to take into account the monoclonal character of the crypts in the homeostatic regime, we include symmetric divisions of the stem cells. We first consider the dynamical differential equations that describe the evolution of the mean values of the populations, but the small observed value of the total number of cells involved, together with the huge dispersion of the experimental data found in the literature, leads us to study the stochastic discrete process. This analysis allows us to study fluctuations, the neutral drift that leads to monoclonality, and the effects of the fixation of mutant clones.
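The neutral drift to monoclonality mentioned above can be illustrated with a bare-bones Moran-type simulation (a sketch, not the paper's full compartment model; pool size and dynamics are invented for the example).

```python
# Neutral drift in a fixed pool of N stem cells: at each event one random cell
# divides symmetrically and its offspring replaces another random cell.
import numpy as np

rng = np.random.default_rng(1)
N = 16
labels = np.arange(N)                    # every stem cell starts as its own clone

steps = 0
while np.unique(labels).size > 1:        # run until the crypt is monoclonal
    parent, replaced = rng.integers(N, size=2)
    labels[replaced] = labels[parent]
    steps += 1

print(f"monoclonal after {steps} replacement events, "
      f"surviving clone = {labels[0]}")
```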
1306.0924
Tomaso Aste
Ruggero Gramatica, T. Di Matteo, Stefano Giorgetti, Massimo Barbiani, Dorian Bevec, Tomaso Aste
Graph theory enables drug repurposing. How a mathematical model can drive the discovery of hidden Mechanisms of Action
8 pages, 7 figures
PLoS ONE 9 (2013) e84912
10.1371/journal.pone.0084912
null
q-bio.QM cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a methodology to efficiently exploit natural-language-expressed biomedical knowledge for repurposing existing drugs towards diseases for which they were not initially intended. Leveraging developments in Computational Linguistics and Graph Theory, a methodology is defined to build a graph representation of knowledge, which is automatically analysed to discover hidden relations between any drug and any disease: these relations are specific paths among the biomedical entities of the graph, representing possible Modes of Action for any given pharmacological compound. These paths are ranked according to their relevance, exploiting a measure induced by a stochastic process defined on the graph. Here we show, providing real-world examples, how the method successfully retrieves known pathophysiological Modes of Action and finds new ones by meaningfully selecting and aggregating contributions from known bio-molecular interactions. Applications of this methodology are presented, proving the efficacy of the method for selecting drugs as treatment options for rare diseases.
[ { "created": "Tue, 4 Jun 2013 20:41:23 GMT", "version": "v1" } ]
2014-06-17
[ [ "Gramatica", "Ruggero", "" ], [ "Di Matteo", "T.", "" ], [ "Giorgetti", "Stefano", "" ], [ "Barbiani", "Massimo", "" ], [ "Bevec", "Dorian", "" ], [ "Aste", "Tomaso", "" ] ]
We introduce a methodology to efficiently exploit natural-language-expressed biomedical knowledge for repurposing existing drugs towards diseases for which they were not initially intended. Leveraging developments in Computational Linguistics and Graph Theory, a methodology is defined to build a graph representation of knowledge, which is automatically analysed to discover hidden relations between any drug and any disease: these relations are specific paths among the biomedical entities of the graph, representing possible Modes of Action for any given pharmacological compound. These paths are ranked according to their relevance, exploiting a measure induced by a stochastic process defined on the graph. Here we show, providing real-world examples, how the method successfully retrieves known pathophysiological Modes of Action and finds new ones by meaningfully selecting and aggregating contributions from known bio-molecular interactions. Applications of this methodology are presented, proving the efficacy of the method for selecting drugs as treatment options for rare diseases.
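A toy version of the ranking idea: score disease nodes reachable from a drug in a small hand-made knowledge graph via personalized PageRank. All node names are invented; the paper's graph is text-mined and its stochastic measure may differ from plain PageRank.

```python
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("drugX", "proteinA"), ("proteinA", "pathwayP"),
    ("pathwayP", "diseaseD1"), ("proteinA", "diseaseD2"),
    ("drugX", "proteinB"), ("proteinB", "diseaseD2"),
])

# Random walk with restart at the drug node ranks downstream diseases.
scores = nx.pagerank(G, alpha=0.85, personalization={"drugX": 1.0})
diseases = {n: s for n, s in scores.items() if n.startswith("disease")}
for d, s in sorted(diseases.items(), key=lambda kv: -kv[1]):
    print(d, round(s, 4))
```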
1305.6365
Chinmaya Gupta
Chinmaya Gupta and Jos\'e Manuel L\'opez and William Ott and Kre\v{s}imir Jos\'ic and Matthew R. Bennett
Transcriptional delay stabilizes bistable gene networks
null
null
10.1103/PhysRevLett.111.058104
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transcriptional delay can significantly impact the dynamics of gene networks. Here we examine how such delay affects bistable systems. We investigate several stochastic models of bistable gene networks and find that increasing delay dramatically increases the mean residence times near stable states. To explain this, we introduce a non-Markovian, analytically tractable reduced model. The model shows that stabilization is the consequence of an increased number of failed transitions between stable states. Each of the bistable systems that we simulate behaves in this manner.
[ { "created": "Tue, 28 May 2013 04:26:47 GMT", "version": "v1" } ]
2015-06-16
[ [ "Gupta", "Chinmaya", "" ], [ "López", "José Manuel", "" ], [ "Ott", "William", "" ], [ "Josíc", "Krešimir", "" ], [ "Bennett", "Matthew R.", "" ] ]
Transcriptional delay can significantly impact the dynamics of gene networks. Here we examine how such delay affects bistable systems. We investigate several stochastic models of bistable gene networks and find that increasing delay dramatically increases the mean residence times near stable states. To explain this, we introduce a non-Markovian, analytically tractable reduced model. The model shows that stabilization is the consequence of an increased number of failed transitions between stable states. Each of the bistable systems that we simulate behaves in this manner.
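A minimal sketch of the mechanism (not any of the paper's specific models; rates and the Hill feedback are invented): a birth-death process whose birth rate depends on the state a delay tau in the past, simulated with a fixed-step tau-leaping scheme.

```python
import numpy as np

rng = np.random.default_rng(2)
dt, tau, T = 0.01, 1.0, 200.0
lag = int(tau / dt)

def birth_rate(x_delayed):
    # Hill-type positive feedback acting on the delayed state.
    return 2.0 + 18.0 * x_delayed**4 / (10.0**4 + x_delayed**4)

n = int(T / dt)
x = np.zeros(n)
x[0] = 15.0                                  # start in the basin of the high state
for i in range(1, n):
    xd = x[max(i - 1 - lag, 0)]              # state one delay in the past
    births = rng.poisson(birth_rate(xd) * dt)
    deaths = rng.poisson(1.0 * x[i - 1] * dt)
    x[i] = max(x[i - 1] + births - deaths, 0)

print("fraction of time above threshold 10:", float(np.mean(x > 10)))
```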
1703.01357
Peter Helfer
Peter Helfer and Thomas R. Shultz
A Computational Model of Systems Memory Consolidation and Reconsolidation
null
Hippocampus, 30, (2020) 659-677
10.1002/hipo.23187
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the mammalian brain, newly acquired memories depend on the hippocampus for maintenance and recall, but over time the neocortex takes over these functions, rendering memories hippocampus-independent. The process responsible for this transformation is called systems memory consolidation. However, reactivation of a well-consolidated memory can trigger a temporary return to a hippocampus-dependent state, a phenomenon known as systems memory reconsolidation. The neural mechanisms underlying systems memory consolidation and reconsolidation are not well understood. Here, we propose a neural model based on well-documented mechanisms of synaptic plasticity and stability and describe a computational implementation that demonstrates the model's ability to account for a range of findings from the systems consolidation and reconsolidation literature. We derive several predictions from the computational model and suggest experiments that may test its validity.
[ { "created": "Fri, 3 Mar 2017 23:21:23 GMT", "version": "v1" }, { "created": "Wed, 8 Mar 2017 18:22:45 GMT", "version": "v2" }, { "created": "Sat, 12 Aug 2017 22:06:35 GMT", "version": "v3" }, { "created": "Thu, 28 Mar 2019 15:14:41 GMT", "version": "v4" }, { "cre...
2021-07-02
[ [ "Helfer", "Peter", "" ], [ "Shultz", "Thomas R.", "" ] ]
In the mammalian brain, newly acquired memories depend on the hippocampus for maintenance and recall, but over time the neocortex takes over these functions, rendering memories hippocampus-independent. The process responsible for this transformation is called systems memory consolidation. However, reactivation of a well-consolidated memory can trigger a temporary return to a hippocampus-dependent state, a phenomenon known as systems memory reconsolidation. The neural mechanisms underlying systems memory consolidation and reconsolidation are not well understood. Here, we propose a neural model based on well-documented mechanisms of synaptic plasticity and stability and describe a computational implementation that demonstrates the model's ability to account for a range of findings from the systems consolidation and reconsolidation literature. We derive several predictions from the computational model and suggest experiments that may test its validity.
1410.0507
Claudius Gros
Rodrigo Echeveste and Claudius Gros
Generating functionals for computational intelligence: the Fisher information as an objective function for self-limiting Hebbian learning rules
null
Frontiers in Robotics and AI 1, 1 (2014)
10.3389/frobt.2014.00001
null
q-bio.NC cond-mat.dis-nn cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generating functionals may guide the evolution of a dynamical system and constitute a possible route for handling the complexity of neural networks as relevant for computational intelligence. We propose and explore a new objective function, which allows us to obtain plasticity rules for the afferent synaptic weights. The adaptation rules are Hebbian, self-limiting, and result from the minimization of the Fisher information with respect to the synaptic flux. We perform a series of simulations examining the behavior of the new learning rules in various circumstances. The vector of synaptic weights aligns with the principal direction of input activities, whenever one is present. A linear discrimination is performed when there are two or more principal directions; directions having bimodal firing-rate distributions, characterized by a negative excess kurtosis, are preferred. We find robust performance, and full homeostatic adaptation of the synaptic weights results as a by-product of the synaptic flux minimization. This self-limiting behavior allows for stable online learning for arbitrary durations. The neuron acquires new information when the statistics of input activities are changed at a certain point of the simulation, showing, however, a distinct resilience to unlearning previously acquired knowledge. Learning is fast when starting with randomly drawn synaptic weights and substantially slower when the synaptic weights are already fully adapted.
[ { "created": "Thu, 2 Oct 2014 10:52:49 GMT", "version": "v1" } ]
2017-11-27
[ [ "Echeveste", "Rodrigo", "" ], [ "Gros", "Claudius", "" ] ]
Generating functionals may guide the evolution of a dynamical system and constitute a possible route for handling the complexity of neural networks as relevant for computational intelligence. We propose and explore a new objective function, which allows us to obtain plasticity rules for the afferent synaptic weights. The adaptation rules are Hebbian, self-limiting, and result from the minimization of the Fisher information with respect to the synaptic flux. We perform a series of simulations examining the behavior of the new learning rules in various circumstances. The vector of synaptic weights aligns with the principal direction of input activities, whenever one is present. A linear discrimination is performed when there are two or more principal directions; directions having bimodal firing-rate distributions, characterized by a negative excess kurtosis, are preferred. We find robust performance, and full homeostatic adaptation of the synaptic weights results as a by-product of the synaptic flux minimization. This self-limiting behavior allows for stable online learning for arbitrary durations. The neuron acquires new information when the statistics of input activities are changed at a certain point of the simulation, showing, however, a distinct resilience to unlearning previously acquired knowledge. Learning is fast when starting with randomly drawn synaptic weights and substantially slower when the synaptic weights are already fully adapted.
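For a concrete picture of a self-limiting Hebbian rule, the classic Oja rule serves as a stand-in below; it is not the Fisher-information-derived rule of the paper, but it shares the key behavior of aligning the weight vector with the principal input direction while bounding its norm.

```python
import numpy as np

rng = np.random.default_rng(3)
cov = np.array([[3.0, 1.0], [1.0, 1.0]])       # anisotropic input statistics
L = np.linalg.cholesky(cov)                    # to draw correlated inputs

w = rng.normal(size=2)
eta = 0.01
for _ in range(20000):
    x = L @ rng.normal(size=2)                 # input sample with covariance cov
    y = w @ x                                  # linear neuron output
    w += eta * y * (x - y * w)                 # Oja: Hebbian term minus decay

pc1 = np.linalg.eigh(cov)[1][:, -1]            # leading eigenvector of cov
print("alignment |cos angle|:", abs(w @ pc1) / np.linalg.norm(w))
```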
1907.07789
Momiao Xiong
Rong Jiao, Xiangning Chen, Eric Boerwinkle and Momiao Xiong
Genome-wide Causation Studies of Complex Diseases
61 pages, 5 figures
null
null
null
q-bio.GN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite significant progress in dissecting the genetic architecture of complex diseases by genome-wide association studies (GWAS), the signals identified by association analysis may not have specific pathological relevance to diseases, so that a large fraction of disease-causing genetic variants remains hidden. Association is used to measure dependence between two variables or two sets of variables. Genome-wide association studies test association between a disease and SNPs (or other genetic variants) across the genome. Association analysis may detect superficial patterns between disease and genetic variants, and association signals provide limited information on the causal mechanism of diseases. The use of association analysis as the major analytical platform for genetic studies of complex diseases is a key issue that hampers discovery of the mechanisms of disease, calling into question the ability of GWAS to identify loci underlying diseases. It is time to move beyond association analysis toward techniques enabling the discovery of the underlying causal genetic structures of complex diseases. To achieve this, we propose the concept of genome-wide causation studies (GWCS) as an alternative to GWAS and develop additive noise models (ANMs) for genetic causation analysis. Type I error rates and the power of the ANMs to test for causation are presented. We conduct a GWCS of schizophrenia. Both simulation and real data analysis show that the proportion of overlapping association and causation signals is small. Thus, we hope that our analysis will stimulate discussion of GWAS and GWCS.
[ { "created": "Wed, 17 Jul 2019 22:01:16 GMT", "version": "v1" } ]
2019-07-19
[ [ "Jiao", "Rong", "" ], [ "Chen", "Xiangning", "" ], [ "Boerwinkle", "Eric", "" ], [ "Xiong", "Momiao", "" ] ]
Despite significant progress in dissecting the genetic architecture of complex diseases by genome-wide association studies (GWAS), the signals identified by association analysis may not have specific pathological relevance to diseases, so that a large fraction of disease-causing genetic variants remains hidden. Association is used to measure dependence between two variables or two sets of variables. Genome-wide association studies test association between a disease and SNPs (or other genetic variants) across the genome. Association analysis may detect superficial patterns between disease and genetic variants, and association signals provide limited information on the causal mechanism of diseases. The use of association analysis as the major analytical platform for genetic studies of complex diseases is a key issue that hampers discovery of the mechanisms of disease, calling into question the ability of GWAS to identify loci underlying diseases. It is time to move beyond association analysis toward techniques enabling the discovery of the underlying causal genetic structures of complex diseases. To achieve this, we propose the concept of genome-wide causation studies (GWCS) as an alternative to GWAS and develop additive noise models (ANMs) for genetic causation analysis. Type I error rates and the power of the ANMs to test for causation are presented. We conduct a GWCS of schizophrenia. Both simulation and real data analysis show that the proportion of overlapping association and causation signals is small. Thus, we hope that our analysis will stimulate discussion of GWAS and GWCS.
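The ANM idea admits a compact sketch: fit both causal directions and prefer the one whose residuals look independent of the regressor. The independence score below (correlating squared residuals with the squared regressor) is a crude stand-in for the proper independence tests used in practice, such as HSIC; the data-generating model is invented.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(-2, 2, 2000)
y = x**3 + rng.normal(size=x.size)            # ground truth: x causes y

def anm_score(a, b, deg=5):
    """Fit b ~ poly(a); return a crude dependence score between the
    regressor and the squared residuals (a stand-in for HSIC)."""
    resid = b - np.polyval(np.polyfit(a, b, deg), a)
    return abs(np.corrcoef(a**2, resid**2)[0, 1])

print("dependence score x->y:", anm_score(x, y))   # small: correct direction
print("dependence score y->x:", anm_score(y, x))   # larger: wrong direction
```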
2403.16576
Xiangxin Zhou
Xiangxin Zhou, Dongyu Xue, Ruizhe Chen, Zaixiang Zheng, Liang Wang, Quanquan Gu
Antigen-Specific Antibody Design via Direct Energy-based Preference Optimization
null
null
null
null
q-bio.BM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Antibody design, a crucial task with significant implications across various disciplines such as therapeutics and biology, presents considerable challenges due to its intricate nature. In this paper, we tackle antigen-specific antibody sequence-structure co-design as an optimization problem towards specific preferences, considering both rationality and functionality. Leveraging a pre-trained conditional diffusion model that jointly models sequences and structures of antibodies with equivariant neural networks, we propose direct energy-based preference optimization to guide the generation of antibodies with both rational structures and considerable binding affinities to given antigens. Our method involves fine-tuning the pre-trained diffusion model using a residue-level decomposed energy preference. Additionally, we employ gradient surgery to address conflicts between various types of energy, such as attraction and repulsion. Experiments on the RAbD benchmark show that our approach effectively optimizes the energy of generated antibodies and achieves state-of-the-art performance in designing high-quality antibodies with low total energy and high binding affinity simultaneously, demonstrating the superiority of our approach.
[ { "created": "Mon, 25 Mar 2024 09:41:49 GMT", "version": "v1" }, { "created": "Wed, 26 Jun 2024 03:06:42 GMT", "version": "v2" } ]
2024-06-27
[ [ "Zhou", "Xiangxin", "" ], [ "Xue", "Dongyu", "" ], [ "Chen", "Ruizhe", "" ], [ "Zheng", "Zaixiang", "" ], [ "Wang", "Liang", "" ], [ "Gu", "Quanquan", "" ] ]
Antibody design, a crucial task with significant implications across various disciplines such as therapeutics and biology, presents considerable challenges due to its intricate nature. In this paper, we tackle antigen-specific antibody sequence-structure co-design as an optimization problem towards specific preferences, considering both rationality and functionality. Leveraging a pre-trained conditional diffusion model that jointly models sequences and structures of antibodies with equivariant neural networks, we propose direct energy-based preference optimization to guide the generation of antibodies with both rational structures and considerable binding affinities to given antigens. Our method involves fine-tuning the pre-trained diffusion model using a residue-level decomposed energy preference. Additionally, we employ gradient surgery to address conflicts between various types of energy, such as attraction and repulsion. Experiments on the RAbD benchmark show that our approach effectively optimizes the energy of generated antibodies and achieves state-of-the-art performance in designing high-quality antibodies with low total energy and high binding affinity simultaneously, demonstrating the superiority of our approach.
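The preference-optimization ingredient can be sketched in isolation as a Bradley-Terry-style loss on energies. This is a conceptual illustration only; the paper's residue-level decomposition, gradient surgery, and diffusion fine-tuning are not reproduced, and the beta value is invented.

```python
import numpy as np

def energy_preference_loss(e_preferred, e_rejected, beta=0.1):
    # Lower energy should win, so the logit is beta * (e_rejected - e_preferred);
    # the loss is the negative log of a logistic preference probability.
    logit = beta * (e_rejected - e_preferred)
    return -np.log(1.0 / (1.0 + np.exp(-logit)))

print(energy_preference_loss(e_preferred=-50.0, e_rejected=-20.0))  # small loss
print(energy_preference_loss(e_preferred=-20.0, e_rejected=-50.0))  # large loss
```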
2206.03269
Carlos Segovia
Carlos Segovia
Petri nets in epidemiology
16 pages, 16 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we study Petri nets associated with ordinary differential equations coming from epidemiological models. We propose a geometric procedure to obtain the basic reproduction number. This number is defined as the average number of secondary infections produced by a single infected individual during its entire infectious lifetime.
[ { "created": "Mon, 30 May 2022 17:32:38 GMT", "version": "v1" } ]
2022-06-08
[ [ "Segovia", "Carlos", "" ] ]
In this work we study Petri nets associated with ordinary differential equations coming from epidemiological models. We propose a geometric procedure to obtain the basic reproduction number. This number is defined as the average number of secondary infections produced by a single infected individual during its entire infectious lifetime.
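For comparison with the geometric procedure (which is not reproduced here), the textbook next-generation-matrix computation of R0 for an SEIR model reads as follows, with illustrative rates and S0 = 1.

```python
# Infected compartments (E, I): F holds new-infection terms, V transition terms.
import numpy as np

beta, sigma, gamma = 0.5, 0.2, 0.1   # invented rates

F = np.array([[0.0, beta],           # new infections enter E at rate beta*I
              [0.0, 0.0]])
V = np.array([[sigma, 0.0],          # E progresses to I at rate sigma
              [-sigma, gamma]])      # I recovers at rate gamma

K = F @ np.linalg.inv(V)             # next-generation matrix
R0 = max(abs(np.linalg.eigvals(K)))
print("R0 =", R0, "(analytic beta/gamma =", beta / gamma, ")")
```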
1806.05167
Ann Sizemore
Ann E. Sizemore, Jennifer Phillips-Cremins, Robert Ghrist, Danielle S. Bassett
The importance of the whole: topological data analysis for the network neuroscientist
17 pages, 6 figures
null
null
null
q-bio.QM q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The application of network techniques to the analysis of neural data has greatly improved our ability to quantify and describe these rich interacting systems. Among many important contributions, networks have proven useful in identifying sets of node pairs that are densely connected and that collectively support brain function. Yet the restriction to pairwise interactions prevents us from realizing intrinsic topological features such as cavities within the interconnection structure that may be just as crucial for proper function. To detect and quantify these topological features we must turn to methods from algebraic topology that encode data as a simplicial complex built of sets of interacting nodes called simplices. On this substrate, we can then use the relations between simplices and higher-order connectivity to expose cavities within the complex, thereby summarizing its topological nature. Here we provide an introduction to persistent homology, a fundamental method from applied topology that builds a global descriptor of system structure by chronicling the evolution of cavities as we move through a combinatorial object such as a weighted network. We detail the underlying mathematics and perform demonstrative calculations on the mouse structural connectome, electrical and chemical synapses in \textit{C. elegans}, and genomic interaction data. Finally we suggest avenues for future work and highlight new advances in mathematics that appear ready for use in revealing the architecture and function of neural systems.
[ { "created": "Wed, 13 Jun 2018 17:55:27 GMT", "version": "v1" } ]
2018-06-14
[ [ "Sizemore", "Ann E.", "" ], [ "Phillips-Cremins", "Jennifer", "" ], [ "Ghrist", "Robert", "" ], [ "Bassett", "Danielle S.", "" ] ]
The application of network techniques to the analysis of neural data has greatly improved our ability to quantify and describe these rich interacting systems. Among many important contributions, networks have proven useful in identifying sets of node pairs that are densely connected and that collectively support brain function. Yet the restriction to pairwise interactions prevents us from realizing intrinsic topological features such as cavities within the interconnection structure that may be just as crucial for proper function. To detect and quantify these topological features we must turn to methods from algebraic topology that encode data as a simplicial complex built of sets of interacting nodes called simplices. On this substrate, we can then use the relations between simplices and higher-order connectivity to expose cavities within the complex, thereby summarizing its topological nature. Here we provide an introduction to persistent homology, a fundamental method from applied topology that builds a global descriptor of system structure by chronicling the evolution of cavities as we move through a combinatorial object such as a weighted network. We detail the underlying mathematics and perform demonstrative calculations on the mouse structural connectome, electrical and chemical synapses in \textit{C. elegans}, and genomic interaction data. Finally we suggest avenues for future work and highlight new advances in mathematics that appear ready for use in revealing the architecture and function of neural systems.
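A minimal persistent-homology computation, assuming the ripser package is installed: points sampled from a noisy circle yield one long-lived 1-dimensional feature, the kind of cavity the abstract describes.

```python
import numpy as np
from ripser import ripser

rng = np.random.default_rng(5)
theta = rng.uniform(0, 2 * np.pi, 100)
points = np.c_[np.cos(theta), np.sin(theta)] + 0.05 * rng.normal(size=(100, 2))

dgms = ripser(points)['dgms']          # persistence diagrams for H0 and H1
h1 = dgms[1]                           # 1-dimensional (loop) features
lifetimes = h1[:, 1] - h1[:, 0]
print("most persistent H1 feature lives for", float(lifetimes.max()))
```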
2209.12633
Manvi Jain Ms
Manvi Jain, C.M. Markan
Calibration of off-the-shelf low-cost wearable EEG headset for application in field studies
7 pages, 7 figures, 1 table, Conference proceeding from National Systems Conference, 2021
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electroencephalography (EEG) is an integral tool in neurocognitive research worldwide. However, research-grade EEG (32/64-channel) systems are expensive and have a cumbersome setup designed for clinical usage, ill-suited to the rugged environment of field studies outside the lab. Further, the long setup time of EEG can be intimidating to restless subjects, e.g., children or the elderly. Off-the-shelf, low-cost, dry EEG devices (LCDE) have been proposed as promising options. However, the small number of electrodes in an LCDE limits the scalp area covered, restricting the utility of an LCDE to a specific set of cognitive tasks based on the brain lobe scanned. This paper proposes a novel methodology for calibrating an LCDE (e.g., the DREEM Headband) to identify the specific class of cognitive tasks the LCDE is likely suited for. The methodology involves comparative analysis of data recorded using the LCDE against EEG-like signals simulated (using BESA Simulator software) by embedding a dipole in a brain lobe. The simulated scalp activity and the corresponding source analysis help identify the approximate brain regions scanned by the LCDE. Further comparative analysis of the brain lobes source-localized using Brain Electrical Source Analysis (BESA software) helps characterize the LCDE for cognitive tasks. The major findings yield a list of psychological studies that can be performed using various LCDEs, which are capable of replacing traditional, expensive wet EEG systems in both in-lab and outside-lab settings.
[ { "created": "Mon, 26 Sep 2022 12:32:52 GMT", "version": "v1" } ]
2022-09-27
[ [ "Jain", "Manvi", "" ], [ "Markan", "C. M.", "" ] ]
Electroencephalography (EEG) is an integral tool in neurocognitive research worldwide. However, research-grade EEG (32/64-channel) systems are expensive and have a cumbersome setup designed for clinical usage, ill-suited to the rugged environment of field studies outside the lab. Further, the long setup time of EEG can be intimidating to restless subjects, e.g., children or the elderly. Off-the-shelf, low-cost, dry EEG devices (LCDE) have been proposed as promising options. However, the small number of electrodes in an LCDE limits the scalp area covered, restricting the utility of an LCDE to a specific set of cognitive tasks based on the brain lobe scanned. This paper proposes a novel methodology for calibrating an LCDE (e.g., the DREEM Headband) to identify the specific class of cognitive tasks the LCDE is likely suited for. The methodology involves comparative analysis of data recorded using the LCDE against EEG-like signals simulated (using BESA Simulator software) by embedding a dipole in a brain lobe. The simulated scalp activity and the corresponding source analysis help identify the approximate brain regions scanned by the LCDE. Further comparative analysis of the brain lobes source-localized using Brain Electrical Source Analysis (BESA software) helps characterize the LCDE for cognitive tasks. The major findings yield a list of psychological studies that can be performed using various LCDEs, which are capable of replacing traditional, expensive wet EEG systems in both in-lab and outside-lab settings.
0801.2033
Tom Michoel
Anagha Joshi, Yves Van de Peer, Tom Michoel
Analysis of a Gibbs sampler method for model based clustering of gene expression data
8 pages, 7 figures
Bioinformatics 2008 24(2):176-183
10.1093/bioinformatics/btm562
null
q-bio.QM
null
Over the last decade, a large variety of clustering algorithms have been developed to detect coregulatory relationships among genes from microarray gene expression data. Model based clustering approaches have emerged as statistically well grounded methods, but the properties of these algorithms when applied to large-scale data sets are not always well understood. An in-depth analysis can reveal important insights about the performance of the algorithm, the expected quality of the output clusters, and the possibilities for extracting more relevant information out of a particular data set. We have extended an existing algorithm for model based clustering of genes to simultaneously cluster genes and conditions, and used three large compendia of gene expression data for S. cerevisiae to analyze its properties. The algorithm uses a Bayesian approach and a Gibbs sampling procedure to iteratively update the cluster assignment of each gene and condition. For large-scale data sets, the posterior distribution is strongly peaked on a limited number of equiprobable clusterings. A GO annotation analysis shows that these local maxima are all biologically equally significant, and that simultaneously clustering genes and conditions performs better than only clustering genes and assuming independent conditions. A collection of distinct equivalent clusterings can be summarized as a weighted graph on the set of genes, from which we extract fuzzy, overlapping clusters using a graph spectral method. The cores of these fuzzy clusters contain tight sets of strongly coexpressed genes, while the overlaps exhibit relations between genes showing only partial coexpression.
[ { "created": "Mon, 14 Jan 2008 09:49:57 GMT", "version": "v1" } ]
2008-01-15
[ [ "Joshi", "Anagha", "" ], [ "Van de Peer", "Yves", "" ], [ "Michoel", "Tom", "" ] ]
Over the last decade, a large variety of clustering algorithms have been developed to detect coregulatory relationships among genes from microarray gene expression data. Model based clustering approaches have emerged as statistically well grounded methods, but the properties of these algorithms when applied to large-scale data sets are not always well understood. An in-depth analysis can reveal important insights about the performance of the algorithm, the expected quality of the output clusters, and the possibilities for extracting more relevant information out of a particular data set. We have extended an existing algorithm for model based clustering of genes to simultaneously cluster genes and conditions, and used three large compendia of gene expression data for S. cerevisiae to analyze its properties. The algorithm uses a Bayesian approach and a Gibbs sampling procedure to iteratively update the cluster assignment of each gene and condition. For large-scale data sets, the posterior distribution is strongly peaked on a limited number of equiprobable clusterings. A GO annotation analysis shows that these local maxima are all biologically equally significant, and that simultaneously clustering genes and conditions performs better than only clustering genes and assuming independent conditions. A collection of distinct equivalent clusterings can be summarized as a weighted graph on the set of genes, from which we extract fuzzy, overlapping clusters using a graph spectral method. The cores of these fuzzy clusters contain tight sets of strongly coexpressed genes, while the overlaps exhibit relations between genes showing only partial coexpression.
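One Gibbs-sampling step of this general kind can be sketched on a toy mixture with fixed component parameters (the paper's sampler updates genes and conditions jointly, with the component parameters integrated out, which is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(6)
data = np.concatenate([rng.normal(-2, 1, 50), rng.normal(2, 1, 50)])
means, pi = np.array([-2.0, 2.0]), np.array([0.5, 0.5])  # fixed parameters

z = rng.integers(2, size=data.size)          # random initial assignments
for sweep in range(20):
    for i, xi in enumerate(data):
        loglik = -0.5 * (xi - means) ** 2    # unit-variance Gaussian log-likelihoods
        p = pi * np.exp(loglik - loglik.max())
        z[i] = rng.choice(2, p=p / p.sum())  # sample from the full conditional

print("cluster sizes after sampling:", np.bincount(z, minlength=2))
```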
1102.0448
Ruth Isserlin
Ruth Isserlin, Gary D. Bader, Aled Edwards, Stephen Frye, Timothy Willson, Frank H. Yu
The human genome and drug discovery after a decade. Roads (still) not taken
14 pages, 5 figures
Nature 470, 163-165 (10 February 2011)
10.1038/470163a
null
q-bio.OT q-bio.GN
http://creativecommons.org/licenses/by/3.0/
The draft sequence of the human genome became available almost a decade ago but the encoded proteome is not being explored to its fullest. Our bibliometric analysis of several large protein families, including those known to be "druggable", reveals that, even today, most papers focus on proteins that were known prior to 2000. It is evident that one or more aspects of the biomedical research system severely limits the exploration of the proteins in the 'dark matter' of the proteome, despite unbiased genetic approaches that have pointed to their functional relevance. It is perhaps not surprising that relatively few genome-derived targets have led to approved drugs.
[ { "created": "Wed, 2 Feb 2011 20:52:51 GMT", "version": "v1" }, { "created": "Thu, 10 Feb 2011 19:38:48 GMT", "version": "v2" } ]
2011-02-14
[ [ "Isserlin", "Ruth", "" ], [ "Bader", "Gary D.", "" ], [ "Edwards", "Aled", "" ], [ "Frye", "Stephen", "" ], [ "Willson", "Timothy", "" ], [ "Yu", "Frank H.", "" ] ]
The draft sequence of the human genome became available almost a decade ago but the encoded proteome is not being explored to its fullest. Our bibliometric analysis of several large protein families, including those known to be "druggable", reveals that, even today, most papers focus on proteins that were known prior to 2000. It is evident that one or more aspects of the biomedical research system severely limits the exploration of the proteins in the 'dark matter' of the proteome, despite unbiased genetic approaches that have pointed to their functional relevance. It is perhaps not surprising that relatively few genome-derived targets have led to approved drugs.
2201.00238
Changchuan Yin Dr.
Changchuan Yin
Evolutionary trend of SARS-CoV-2 inferred by the homopolymeric nucleotide repeats
null
null
null
null
q-bio.PE q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is the causative agent of the current global COVID-19 pandemic, in which millions of lives have been lost. Understanding the zoonotic evolution of the coronavirus may provide insights for developing effective vaccines, monitoring the transmission trends, and preventing new zoonotic infections. Homopolymeric nucleotide repeats (HP), the simplest tandem repeats, are a ubiquitous feature of eukaryotic genomes. Yet the HP distributions and their roles in coronavirus genome evolution are poorly investigated. In this study, we characterize the HP distributions and trends in the genomes of bat and human coronaviruses and SARS-CoV-2 variants. The results show that the SARS-CoV-2 genome is abundant in HPs and has increased its HP content during evolution. In particular, the disparity between HP poly-(A/T) and poly-(C/G) of coronaviruses increases during evolution in human hosts. The disparity between HP poly-(A/T) and poly-(C/G) is correlated with host adaptation and the virulence level of the coronaviruses. Therefore, we propose that the HP disparity can be a quantitative measure for the zoonotic evolution levels of coronaviruses. Notably, the HP disparity measure indicates that SARS-CoV-2 Omicron variants have a high disparity between HP poly-(A/T) and poly-(C/G), suggesting a high adaptation to human hosts.
[ { "created": "Sat, 1 Jan 2022 20:14:36 GMT", "version": "v1" } ]
2022-01-04
[ [ "Yin", "Changchuan", "" ] ]
Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) is the causative agent of the current global COVID-19 pandemic, in which millions of lives have been lost. Understanding the zoonotic evolution of the coronavirus may provide insights for developing effective vaccines, monitoring the transmission trends, and preventing new zoonotic infections. Homopolymeric nucleotide repeats (HP), the simplest tandem repeats, are a ubiquitous feature of eukaryotic genomes. Yet the HP distributions and their roles in coronavirus genome evolution are poorly investigated. In this study, we characterize the HP distributions and trends in the genomes of bat and human coronaviruses and SARS-CoV-2 variants. The results show that the SARS-CoV-2 genome is abundant in HPs and has increased its HP content during evolution. In particular, the disparity between HP poly-(A/T) and poly-(C/G) of coronaviruses increases during evolution in human hosts. The disparity between HP poly-(A/T) and poly-(C/G) is correlated with host adaptation and the virulence level of the coronaviruses. Therefore, we propose that the HP disparity can be a quantitative measure for the zoonotic evolution levels of coronaviruses. Notably, the HP disparity measure indicates that SARS-CoV-2 Omicron variants have a high disparity between HP poly-(A/T) and poly-(C/G), suggesting a high adaptation to human hosts.
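The HP bookkeeping is easy to sketch: count homopolymer runs above a length threshold and form a poly-(A/T) versus poly-(C/G) disparity. The threshold and disparity definition below are illustrative choices, not necessarily the paper's, and the sequence is a toy string.

```python
import re
from collections import Counter

seq = "AAATTTTGCGCCCAAAAGGGTTTCCCCGA"
runs = Counter()
for m in re.finditer(r"(A+|C+|G+|T+)", seq):   # maximal single-nucleotide runs
    if len(m.group()) >= 3:                    # illustrative length threshold
        runs[m.group()[0]] += 1

at = runs["A"] + runs["T"]
cg = runs["C"] + runs["G"]
print("runs:", dict(runs), "| poly-(A/T) - poly-(C/G) disparity:", at - cg)
```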
1510.02510
Robert Castelo
Alberto Roverato and Robert Castelo
The networked partial correlation and its application to the analysis of genetic interactions
23 pages, 5 figures; major revision of the paper after journal review; 24 pages, 7 figures; figures enlarged, added DOI
Journal of the Royal Statistical Society Series C -Applied Statistics, 66(3):647-665, 2017
10.1111/rssc.12166
null
q-bio.QM q-bio.GN q-bio.MN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genetic interactions confer robustness on cells in response to genetic perturbations. This often occurs through molecular buffering mechanisms that can be predicted using, among other features, the degree of coexpression between genes, commonly estimated through marginal measures of association such as Pearson or Spearman correlation coefficients. However, marginal correlations are sensitive to indirect effects and often partial correlations are used instead. Yet, partial correlations convey no information about the (linear) influence of the coexpressed genes on the entire multivariate system, which may be crucial to discriminate functional associations from genetic interactions. To address these two shortcomings, here we propose to use the edge weight derived from the covariance decomposition over the paths of the associated gene network. We call this new quantity the networked partial correlation and use it to analyze genetic interactions in yeast.
[ { "created": "Thu, 8 Oct 2015 21:12:49 GMT", "version": "v1" }, { "created": "Thu, 5 May 2016 09:50:57 GMT", "version": "v2" }, { "created": "Fri, 8 Jul 2016 16:59:01 GMT", "version": "v3" } ]
2017-03-14
[ [ "Roverato", "Alberto", "" ], [ "Castelo", "Robert", "" ] ]
Genetic interactions confer robustness on cells in response to genetic perturbations. This often occurs through molecular buffering mechanisms that can be predicted using, among other features, the degree of coexpression between genes, commonly estimated through marginal measures of association such as Pearson or Spearman correlation coefficients. However, marginal correlations are sensitive to indirect effects and often partial correlations are used instead. Yet, partial correlations convey no information about the (linear) influence of the coexpressed genes on the entire multivariate system, which may be crucial to discriminate functional associations from genetic interactions. To address these two shortcomings, here we propose to use the edge weight derived from the covariance decomposition over the paths of the associated gene network. We call this new quantity the networked partial correlation and use it to analyze genetic interactions in yeast.
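A minimal numerical sketch of the building block this record relies on: standard partial correlations obtained from the inverse covariance (precision) matrix. The networked partial correlation itself decomposes covariance over network paths; the snippet below shows only the conventional partial-correlation step that the paper's quantity generalizes.

import numpy as np

def partial_correlations(X):
    """Partial correlation matrix from the precision matrix of data X (n samples x p genes)."""
    K = np.linalg.inv(np.cov(X, rowvar=False))  # precision matrix
    d = np.sqrt(np.diag(K))
    P = -K / np.outer(d, d)                     # rho_ij = -k_ij / sqrt(k_ii * k_jj)
    np.fill_diagonal(P, 1.0)
    return P

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                   # 200 samples of 5 toy genes
print(np.round(partial_correlations(X), 2))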
1209.2638
Hinrich Arnoldt
Hinrich Arnoldt, Marc Timme, Stefan Grosskinsky
Frequency-dependent fitness induces multistability in coevolutionary dynamics
12 pages, 6 figures; J. R. Soc. Interface (2012)
null
10.1098/rsif.2012.0464
null
q-bio.PE math-ph math.MP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolution is simultaneously driven by a number of processes such as mutation, competition and random sampling. Understanding which of these processes dominates the collective evolutionary dynamics, and how this depends on system properties, is a fundamental aim of theoretical research. Recent works quantitatively studied coevolutionary dynamics of competing species with a focus on linearly frequency-dependent interactions, derived from a game-theoretic viewpoint. However, several aspects of evolutionary dynamics, e.g. limited resources, may induce effectively nonlinear frequency dependencies. Here we study the impact of nonlinear frequency dependence on evolutionary dynamics in a model class that covers linear frequency dependence as a special case. We focus on the simplest non-trivial setting of two genotypes and analyze the co-action of nonlinear frequency dependence with asymmetric mutation rates. We find that their co-action may induce novel metastable states as well as stochastic switching dynamics between them. Our results reveal how the different mechanisms of mutation, selection and genetic drift contribute to the dynamics and the emergence of metastable states, suggesting that multistability is a generic feature in systems with frequency-dependent fitness.
[ { "created": "Wed, 12 Sep 2012 15:15:07 GMT", "version": "v1" } ]
2012-09-13
[ [ "Arnoldt", "Hinrich", "" ], [ "Timme", "Marc", "" ], [ "Grosskinsky", "Stefan", "" ] ]
Evolution is simultaneously driven by a number of processes such as mutation, competition and random sampling. Understanding which of these processes dominates the collective evolutionary dynamics, and how this depends on system properties, is a fundamental aim of theoretical research. Recent works quantitatively studied coevolutionary dynamics of competing species with a focus on linearly frequency-dependent interactions, derived from a game-theoretic viewpoint. However, several aspects of evolutionary dynamics, e.g. limited resources, may induce effectively nonlinear frequency dependencies. Here we study the impact of nonlinear frequency dependence on evolutionary dynamics in a model class that covers linear frequency dependence as a special case. We focus on the simplest non-trivial setting of two genotypes and analyze the co-action of nonlinear frequency dependence with asymmetric mutation rates. We find that their co-action may induce novel metastable states as well as stochastic switching dynamics between them. Our results reveal how the different mechanisms of mutation, selection and genetic drift contribute to the dynamics and the emergence of metastable states, suggesting that multistability is a generic feature in systems with frequency-dependent fitness.
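A minimal stochastic sketch in the spirit of this record: a two-genotype Moran process in which fitness depends nonlinearly on genotype frequency and mutation rates are asymmetric. The fitness functions and parameter values below are illustrative assumptions, not the paper's model.

import numpy as np

def moran_step(n, N, rng, mu_ab=1e-3, mu_ba=1e-4):
    """One Moran step for n copies of genotype A in a population of size N."""
    x = n / N
    # Illustrative nonlinear frequency-dependent fitnesses
    f_a = 1.0 + 0.5 * x**2
    f_b = 1.0 + 0.5 * (1.0 - x)
    # Reproducing genotype chosen proportionally to total fitness
    p_a = n * f_a / (n * f_a + (N - n) * f_b)
    offspring_is_a = rng.random() < p_a
    # Asymmetric mutation of the offspring
    if offspring_is_a and rng.random() < mu_ab:
        offspring_is_a = False
    elif not offspring_is_a and rng.random() < mu_ba:
        offspring_is_a = True
    # Offspring replaces a uniformly chosen individual
    dies_a = rng.random() < x
    return n + int(offspring_is_a) - int(dies_a)

rng = np.random.default_rng(1)
N, n = 200, 100
traj = []
for _ in range(50_000):
    n = moran_step(n, N, rng)
    traj.append(n)
print("mean frequency of A:", np.mean(traj) / N)

Long trajectories of this kind are how one would observe the metastable states and stochastic switching the abstract describes.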
q-bio/0605006
Jorge F. Mejias
Jorge F. Mejias and Joaquin J. Torres
The role of synaptic facilitation in coincidence spike detection
19 pages, 8 figures
null
null
null
q-bio.NC
null
Using a realistic model of activity-dependent dynamical synapses and a standard integrate-and-fire neuron model, we study, both analytically and numerically, the conditions in which a postsynaptic neuron efficiently detects temporal coincidences of spikes arriving at a certain frequency from N different afferents. We extend previous work that considered only synaptic depression as the most important mechanism in the transmission of information through synapses to a more general situation that also includes synaptic facilitation. Our study shows that: 1) facilitation enhances the detection of correlated signals arriving from a subset of presynaptic excitatory neurons, with different degrees of correlation among this subset, and 2) the presence of facilitation allows for a better detection of firing rate changes. Finally, we also observed that facilitation determines the existence of an optimal input frequency which allows the best performance for a wide (maximum) range of the neuron firing threshold. This optimal frequency can be controlled by means of the facilitation parameters.
[ { "created": "Wed, 3 May 2006 19:36:52 GMT", "version": "v1" } ]
2007-05-23
[ [ "Mejias", "Jorge F.", "" ], [ "Torres", "Joaquin J.", "" ] ]
Using a realistic model of activity-dependent dynamical synapses and a standard integrate-and-fire neuron model, we study, both analytically and numerically, the conditions in which a postsynaptic neuron efficiently detects temporal coincidences of spikes arriving at a certain frequency from N different afferents. We extend previous work that considered only synaptic depression as the most important mechanism in the transmission of information through synapses to a more general situation that also includes synaptic facilitation. Our study shows that: 1) facilitation enhances the detection of correlated signals arriving from a subset of presynaptic excitatory neurons, with different degrees of correlation among this subset, and 2) the presence of facilitation allows for a better detection of firing rate changes. Finally, we also observed that facilitation determines the existence of an optimal input frequency which allows the best performance for a wide (maximum) range of the neuron firing threshold. This optimal frequency can be controlled by means of the facilitation parameters.
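A compact sketch of the kind of dynamic-synapse mechanism described here, using Tsodyks-Markram-style depression and facilitation variables updated at each presynaptic spike. The parameter values are illustrative assumptions; the paper's own model and analysis are not reproduced.

import numpy as np

def synaptic_response(spike_times, U=0.1, tau_f=0.5, tau_d=0.2):
    """Facilitation (u) and depression (x) at each spike; returns relative PSP amplitudes u*x."""
    u, x, t_prev = U, 1.0, None
    amps = []
    for t in spike_times:
        if t_prev is not None:
            dt = t - t_prev
            u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decays toward U
            x = 1.0 + (x - 1.0) * np.exp(-dt / tau_d)  # resources recover toward 1
        u = u + U * (1.0 - u)    # spike-triggered facilitation jump
        amps.append(u * x)       # release is proportional to u * x
        x = x * (1.0 - u)        # spike-triggered resource depletion
        t_prev = t
    return amps

print(np.round(synaptic_response(np.arange(0, 0.5, 0.05)), 3))  # 20 Hz spike train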
1707.01585
Daniel Fraiman
Daniel Fraiman and Ricardo Fraiman
Statistical comparison of (brain) networks
Three references added. A new paragraph was added in the Resting-state fMRI functional networks section
null
null
null
q-bio.NC cond-mat.dis-nn stat.AP stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The study of random networks in a neuroscientific context has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool to resting-state fMRI data obtained from the Human Connectome Project. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain structure variables. Our method can also be applied to other kinds of networks, such as protein interaction networks, gene networks or social networks.
[ { "created": "Wed, 5 Jul 2017 21:26:58 GMT", "version": "v1" }, { "created": "Sun, 9 Jul 2017 22:12:41 GMT", "version": "v2" } ]
2017-07-11
[ [ "Fraiman", "Daniel", "" ], [ "Fraiman", "Ricardo", "" ] ]
The study of random networks in a neuroscientific context has developed extensively over the last couple of decades. By contrast, techniques for the statistical analysis of these networks are less developed. In this paper, we focus on the statistical comparison of brain networks in a nonparametric framework and discuss the associated detection and identification problems. We tested network differences between groups with an analysis of variance (ANOVA) test we developed specifically for networks. We also propose and analyse the behaviour of a new statistical procedure designed to identify different subnetworks. As an example, we show the application of this tool to resting-state fMRI data obtained from the Human Connectome Project. Finally, we discuss the potential bias in neuroimaging findings that is generated by some behavioural and brain structure variables. Our method can also be applied to other kinds of networks, such as protein interaction networks, gene networks or social networks.
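A minimal sketch of a nonparametric group comparison for networks, in the spirit of this record: a permutation test on a distance between group-averaged adjacency matrices. The Frobenius-norm statistic is an assumption for illustration, not the ANOVA-type statistic developed in the paper.

import numpy as np

def network_permutation_test(nets_a, nets_b, n_perm=2000, seed=0):
    """nets_a, nets_b: arrays of shape (n_subjects, p, p). Returns observed stat and p-value."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([nets_a, nets_b])
    n_a = len(nets_a)
    def stat(a, b):
        # Distance between group-mean adjacency matrices
        return np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))
    obs = stat(nets_a, nets_b)
    count = 0
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))       # shuffle group labels
        if stat(pooled[idx[:n_a]], pooled[idx[n_a:]]) >= obs:
            count += 1
    return obs, (count + 1) / (n_perm + 1)

rng = np.random.default_rng(1)
A = rng.random((20, 10, 10))
B = rng.random((20, 10, 10)) + 0.05              # group with shifted edge weights
print(network_permutation_test(A, B, n_perm=500))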
2111.07559
Matthew Simpson
Maud El-Hachem, Scott W McCue, Matthew J Simpson
A continuum mathematical model of substrate-mediated tissue growth
46 pages, 10 figures, 1 supplementary material document
null
null
null
q-bio.TO math.DS nlin.PS
http://creativecommons.org/licenses/by/4.0/
We consider a continuum mathematical model of biological tissue formation inspired by recent experiments describing thin tissue growth in 3D-printed bioscaffolds. The continuum model involves a partial differential equation describing the density of tissue, $\hat{u}(\hat{\mathbf{x}},\hat{t})$, that is coupled to the concentration of an immobile extracellular substrate, $\hat{s}(\hat{\mathbf{x}},\hat{t})$. Cell migration is modelled with a nonlinear diffusion term, where the diffusive flux is proportional to $\hat{s}$, while a logistic growth term models cell proliferation. The extracellular substrate $\hat{s}$ is produced by cells, and undergoes linear decay. Preliminary numerical simulations show that this mathematical model, which we call the \textit{substrate model}, is able to recapitulate key features of recent tissue growth experiments, including the formation of sharp fronts. To provide a deeper understanding of the model we then analyse travelling wave solutions of the substrate model, showing that the model supports both sharp-fronted travelling wave solutions that move with a minimum wave speed, $c = c_{\rm{min}}$, as well as smooth-fronted travelling wave solutions that move with a faster travelling wave speed, $c > c_{\rm{min}}$. We provide a geometric interpretation that explains the difference between smooth- and sharp-fronted travelling wave solutions that is based on a slow manifold reduction of the desingularised three-dimensional phase space. In addition to exploring the nature of the smooth- and sharp-fronted travelling waves, we also develop and test a series of useful approximations that describe the shape of the travelling wave solutions in various limits. These approximations apply to both the sharp-fronted travelling wave solutions, and the smooth-fronted travelling wave solutions. Software to implement all calculations is available on GitHub.
[ { "created": "Mon, 15 Nov 2021 07:07:24 GMT", "version": "v1" }, { "created": "Mon, 22 Nov 2021 06:02:41 GMT", "version": "v2" } ]
2021-11-23
[ [ "El-Hachem", "Maud", "" ], [ "McCue", "Scott W", "" ], [ "Simpson", "Matthew J", "" ] ]
We consider a continuum mathematical model of biological tissue formation inspired by recent experiments describing thin tissue growth in 3D-printed bioscaffolds. The continuum model involves a partial differential equation describing the density of tissue, $\hat{u}(\hat{\mathbf{x}},\hat{t})$, that is coupled to the concentration of an immobile extracellular substrate, $\hat{s}(\hat{\mathbf{x}},\hat{t})$. Cell migration is modelled with a nonlinear diffusion term, where the diffusive flux is proportional to $\hat{s}$, while a logistic growth term models cell proliferation. The extracellular substrate $\hat{s}$ is produced by cells, and undergoes linear decay. Preliminary numerical simulations show that this mathematical model, which we call the \textit{substrate model}, is able to recapitulate key features of recent tissue growth experiments, including the formation of sharp fronts. To provide a deeper understanding of the model we then analyse travelling wave solutions of the substrate model, showing that the model supports both sharp-fronted travelling wave solutions that move with a minimum wave speed, $c = c_{\rm{min}}$, as well as smooth-fronted travelling wave solutions that move with a faster travelling wave speed, $c > c_{\rm{min}}$. We provide a geometric interpretation that explains the difference between smooth- and sharp-fronted travelling wave solutions that is based on a slow manifold reduction of the desingularised three-dimensional phase space. In addition to exploring the nature of the smooth- and sharp-fronted travelling waves, we also develop and test a series of useful approximations that describe the shape of the travelling wave solutions in various limits. These approximations apply to both the sharp-fronted travelling wave solutions, and the smooth-fronted travelling wave solutions. Software to implement all calculations is available on GitHub.
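A bare-bones 1D finite-difference sketch of a substrate-coupled tissue model of the kind this record describes: tissue density u with a substrate-dependent diffusive flux and logistic growth, and substrate s produced by cells and decaying linearly. The dimensionless form and parameter values are assumptions for illustration; the authors' own implementation is in the GitHub software the abstract mentions.

import numpy as np

def step(u, s, dx, dt, D=1.0, lam=1.0, alpha=1.0, beta=0.5):
    """One explicit Euler step of u_t = (D*s*u_x)_x + lam*u*(1-u), s_t = alpha*u - beta*s."""
    flux = D * 0.5 * (s[1:] + s[:-1]) * np.diff(u) / dx  # D*s*u_x evaluated at cell faces
    du = np.zeros_like(u)
    du[1:-1] = np.diff(flux) / dx                        # divergence of the flux
    u_new = u + dt * (du + lam * u * (1.0 - u))
    s_new = s + dt * (alpha * u - beta * s)
    return u_new, s_new

nx, dx, dt = 200, 0.5, 0.001
x = np.arange(nx) * dx
u = np.where(x < 10, 1.0, 0.0)   # initial block of tissue
s = np.zeros(nx)
for _ in range(20_000):
    u, s = step(u, s, dx, dt)
print("front position ~", x[np.argmax(u < 0.5)])

Because the flux vanishes where s = 0, fronts of this kind can become sharp, which is the qualitative behaviour the travelling-wave analysis in the paper addresses.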
2002.10034
Sema Candemir
Sema Candemir, Xuan V. Nguyen, Luciano M. Prevedello, Matthew T. Bigelow, Richard D.White, Barbaros S. Erdal (for the Alzheimer's Disease Neuroimaging Initiative)
Predicting Rate of Cognitive Decline at Baseline Using a Deep Neural Network with Multidata Analysis
null
null
null
null
q-bio.QM cs.LG eess.IV q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose: This study investigates whether a machine-learning-based system can predict the rate of cognitive decline in mildly cognitively impaired patients by processing only the clinical and imaging data collected at the initial visit. Approach: We built a predictive model based on a supervised hybrid neural network utilizing a 3-Dimensional Convolutional Neural Network to perform volume analysis of Magnetic Resonance Imaging and integration of non-imaging clinical data at the fully connected layer of the architecture. The experiments are conducted on the Alzheimer's Disease Neuroimaging Initiative dataset. Results: Experimental results confirm that there is a correlation between cognitive decline and the data obtained at the first visit. The system achieved an area under the receiver operating characteristic curve (AUC) of 0.70 for cognitive decline class prediction. Conclusion: To our knowledge, this is the first study that predicts slowly deteriorating/stable or rapidly deteriorating classes by processing routinely collected baseline clinical and demographic data (Baseline MRI, Baseline MMSE, Scalar Volumetric data, Age, Gender, Education, Ethnicity, and Race). The training data are built based on MMSE-rate values. Unlike studies in the literature that focus on predicting Mild Cognitive Impairment-to-Alzheimer's disease conversion and disease classification, we approach the problem as an early prediction of the cognitive decline rate in MCI patients.
[ { "created": "Mon, 24 Feb 2020 01:39:17 GMT", "version": "v1" }, { "created": "Mon, 8 Jun 2020 05:40:42 GMT", "version": "v2" }, { "created": "Mon, 5 Oct 2020 23:14:23 GMT", "version": "v3" } ]
2020-10-07
[ [ "Candemir", "Sema", "", "for the Alzheimer's Disease\n Neuroimaging Initiative" ], [ "Nguyen", "Xuan V.", "", "for the Alzheimer's Disease\n Neuroimaging Initiative" ], [ "Prevedello", "Luciano M.", "", "for the Alzheimer's Disease\n Neuroimaging Initiati...
Purpose: This study investigates whether a machine-learning-based system can predict the rate of cognitive decline in mildly cognitively impaired patients by processing only the clinical and imaging data collected at the initial visit. Approach: We built a predictive model based on a supervised hybrid neural network utilizing a 3-Dimensional Convolutional Neural Network to perform volume analysis of Magnetic Resonance Imaging and integration of non-imaging clinical data at the fully connected layer of the architecture. The experiments are conducted on the Alzheimer's Disease Neuroimaging Initiative dataset. Results: Experimental results confirm that there is a correlation between cognitive decline and the data obtained at the first visit. The system achieved an area under the receiver operating characteristic curve (AUC) of 0.70 for cognitive decline class prediction. Conclusion: To our knowledge, this is the first study that predicts slowly deteriorating/stable or rapidly deteriorating classes by processing routinely collected baseline clinical and demographic data (Baseline MRI, Baseline MMSE, Scalar Volumetric data, Age, Gender, Education, Ethnicity, and Race). The training data are built based on MMSE-rate values. Unlike studies in the literature that focus on predicting Mild Cognitive Impairment-to-Alzheimer's disease conversion and disease classification, we approach the problem as an early prediction of the cognitive decline rate in MCI patients.
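A schematic PyTorch sketch of the architecture pattern this record describes: a small 3D CNN over an MRI volume whose flattened features are concatenated with non-imaging clinical variables at a fully connected layer. The layer sizes, input shape, and two-class output are illustrative assumptions, not the authors' exact network.

import torch
import torch.nn as nn

class HybridNet(nn.Module):
    def __init__(self, n_clinical=7, n_classes=2):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool3d(2),
            nn.AdaptiveAvgPool3d(4),                       # -> (16, 4, 4, 4)
        )
        self.fc = nn.Sequential(
            nn.Linear(16 * 4 * 4 * 4 + n_clinical, 64), nn.ReLU(),
            nn.Linear(64, n_classes),
        )

    def forward(self, volume, clinical):
        feats = self.cnn(volume).flatten(start_dim=1)
        # Fuse imaging features and clinical data at the fully connected layer
        return self.fc(torch.cat([feats, clinical], dim=1))

model = HybridNet()
vol = torch.randn(2, 1, 64, 64, 64)   # batch of MRI volumes
clin = torch.randn(2, 7)              # e.g. age, baseline MMSE, volumetrics, ...
print(model(vol, clin).shape)         # torch.Size([2, 2])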
1806.06445
Ali Rohani
Ali Rohani
Designing diagnostic platforms for analysis of disease patterns and probing disease emergence
Ph.D. Thesis
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The emerging era of personalized medicine relies on medical decisions, practices, and products being tailored to the individual patient. Point-of-care systems, at the heart of this model, play two important roles. First, they are required for identifying subjects for optimal therapies based on their genetic make-up and epigenetic profile. Second, they will be used for assessing the progression of such therapies. Central to this vision is designing systems that, with minimal user intervention, can transduce complex signals from biosystems, in complement with clinical information, to inform medical decisions within point-of-care settings. To reach our ultimate goal of developing point-of-care systems and realizing personalized medicine, we are taking a multistep, systems-level approach towards understanding cellular processes and biomolecular profiles, to quantify disease states and external interventions.
[ { "created": "Sun, 17 Jun 2018 20:57:41 GMT", "version": "v1" } ]
2018-06-19
[ [ "Rohani", "Ali", "" ] ]
The emerging era of personalized medicine relies on medical decisions, practices, and products being tailored to the individual patient. Point-of-care systems, at the heart of this model, play two important roles. First, they are required for identifying subjects for optimal therapies based on their genetic make-up and epigenetic profile. Second, they will be used for assessing the progression of such therapies. Central to this vision is designing systems that, with minimal user intervention, can transduce complex signals from biosystems, in complement with clinical information, to inform medical decisions within point-of-care settings. To reach our ultimate goal of developing point-of-care systems and realizing personalized medicine, we are taking a multistep, systems-level approach towards understanding cellular processes and biomolecular profiles, to quantify disease states and external interventions.
2009.06035
Beatriz Carely Luna Olivera Dra.
Beatriz Luna, Marcelino Ram\'irez, Edgardo Gal\'an
Network analysis and disease subnets for the SARS-CoV-2/Human interactome
6 pages, 2 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: With the aim of amplifying and making sense of virus-human protein interactions in the case of SARS-CoV-2, we performed a structural analysis of the network of protein interactions obtained from the integration of three sources: 1) proteins of the SARS-CoV-2 virus, 2) physical interactions between SARS-CoV-2 and human proteins, and 3) known interactions of these human proteins among themselves and the dossier of affections in which these proteins are implicated. Results: As a product of this research, we present two networks, one from the virus-host interactions and the other restricted to host-host interactions; the latter is not usually considered for network analysis. We identified the most important proteins in both networks, those that have the maximal value of the calculated invariants; these proteins are considered the most affected, the most connected, or those that best monitor the flow of information in the network. Among them we find UBC, a human protein related to ubiquitination, linked with different stages of coronavirus disease, and ORF7A, a virus protein that induces apoptosis in infected cells, associated with virion tethering. Using the constructed networks, we establish the most significant diseases corresponding to human proteins and their connections with other proteins. It is relevant that the identified diseases coincide with comorbidities; in particular, the subnetwork of diabetes involves a large share of the virus and human proteins (56%) and interactions (60%), which could explain the effect of this condition as an important cause of disease complications.
[ { "created": "Sun, 13 Sep 2020 16:26:14 GMT", "version": "v1" } ]
2020-09-15
[ [ "Luna", "Beatriz", "" ], [ "Ramírez", "Marcelino", "" ], [ "Galán", "Edgardo", "" ] ]
Motivation: With the aim of amplifying and making sense of virus-human protein interactions in the case of SARS-CoV-2, we performed a structural analysis of the network of protein interactions obtained from the integration of three sources: 1) proteins of the SARS-CoV-2 virus, 2) physical interactions between SARS-CoV-2 and human proteins, and 3) known interactions of these human proteins among themselves and the dossier of affections in which these proteins are implicated. Results: As a product of this research, we present two networks, one from the virus-host interactions and the other restricted to host-host interactions; the latter is not usually considered for network analysis. We identified the most important proteins in both networks, those that have the maximal value of the calculated invariants; these proteins are considered the most affected, the most connected, or those that best monitor the flow of information in the network. Among them we find UBC, a human protein related to ubiquitination, linked with different stages of coronavirus disease, and ORF7A, a virus protein that induces apoptosis in infected cells, associated with virion tethering. Using the constructed networks, we establish the most significant diseases corresponding to human proteins and their connections with other proteins. It is relevant that the identified diseases coincide with comorbidities; in particular, the subnetwork of diabetes involves a large share of the virus and human proteins (56%) and interactions (60%), which could explain the effect of this condition as an important cause of disease complications.
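A short sketch of the kind of network invariants this record refers to, computed with networkx on a toy virus-host interaction graph. The node names are illustrative placeholders inspired by the abstract (UBC, ORF7A), not the paper's data.

import networkx as nx

# Toy interactome: viral proteins prefixed v_, human proteins prefixed h_
edges = [("v_ORF7A", "h_UBC"), ("v_S", "h_ACE2"), ("h_UBC", "h_P53"),
         ("h_UBC", "h_ACE2"), ("v_N", "h_UBC")]
G = nx.Graph(edges)

degree = dict(G.degree())                     # most connected
betweenness = nx.betweenness_centrality(G)    # best monitors information flow
closeness = nx.closeness_centrality(G)

for node in sorted(G, key=betweenness.get, reverse=True):
    print(f"{node:10s} deg={degree[node]} btw={betweenness[node]:.2f} cls={closeness[node]:.2f}")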
2405.15812
Zhengqing Miao
Zhengqing Miao and Meirong Zhao
Pseudo Channel: Time Embedding for Motor Imagery Decoding
13 pages, 5 figures
null
null
null
q-bio.NC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motor imagery (MI) based EEG represents a frontier in enabling direct neural control of external devices and advancing neural rehabilitation. This study introduces a novel time embedding technique, termed traveling-wave based time embedding, utilized as a pseudo channel to enhance the decoding accuracy of MI-EEG signals across various neural network architectures. Unlike traditional neural network methods, which fail to account for individual differences in the temporal dynamics of MI-EEG, our approach captures time-related changes for different participants based on a priori knowledge. Through extensive experimentation with multiple participants, we demonstrate that this method not only improves classification accuracy but also exhibits greater adaptability to individual differences compared to the position encoding used in Transformer architectures. Significantly, our results reveal that traveling-wave based time embedding crucially enhances decoding accuracy, particularly for participants typically considered "EEG-illiterate". As a novel direction in EEG research, the traveling-wave based time embedding not only offers fresh insights for neural network decoding strategies but also opens new avenues for research into attention mechanisms in neuroscience and a deeper understanding of EEG signals.
[ { "created": "Tue, 21 May 2024 12:55:11 GMT", "version": "v1" } ]
2024-05-28
[ [ "Miao", "Zhengqing", "" ], [ "Zhao", "Meirong", "" ] ]
Motor imagery (MI) based EEG represents a frontier in enabling direct neural control of external devices and advancing neural rehabilitation. This study introduces a novel time embedding technique, termed traveling-wave based time embedding, utilized as a pseudo channel to enhance the decoding accuracy of MI-EEG signals across various neural network architectures. Unlike traditional neural network methods, which fail to account for individual differences in the temporal dynamics of MI-EEG, our approach captures time-related changes for different participants based on a priori knowledge. Through extensive experimentation with multiple participants, we demonstrate that this method not only improves classification accuracy but also exhibits greater adaptability to individual differences compared to the position encoding used in Transformer architectures. Significantly, our results reveal that traveling-wave based time embedding crucially enhances decoding accuracy, particularly for participants typically considered "EEG-illiterate". As a novel direction in EEG research, the traveling-wave based time embedding not only offers fresh insights for neural network decoding strategies but also opens new avenues for research into attention mechanisms in neuroscience and a deeper understanding of EEG signals.
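A heavily hedged numpy sketch of the general idea of a pseudo channel: appending a deterministic time-embedding signal as an extra row to each EEG epoch before it enters a decoder. The sinusoidal form below is an illustrative guess at the flavor of a traveling-wave embedding, not the authors' construction.

import numpy as np

def add_time_pseudo_channel(epochs, freq=1.0, sfreq=250.0, phase=0.0):
    """epochs: (n_trials, n_channels, n_times) -> same array with one extra embedding channel."""
    n_trials, _, n_times = epochs.shape
    t = np.arange(n_times) / sfreq
    wave = np.sin(2 * np.pi * freq * t + phase)            # assumed time-embedding signal
    pseudo = np.broadcast_to(wave, (n_trials, 1, n_times))
    return np.concatenate([epochs, pseudo], axis=1)

X = np.random.randn(32, 22, 1000)   # e.g. 32 trials of 22-channel MI epochs
X_aug = add_time_pseudo_channel(X)
print(X_aug.shape)                  # (32, 23, 1000)

Any channel-aware decoder can then consume the augmented array unchanged, which is what makes the pseudo-channel idea architecture-agnostic.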
2310.12581
Yuma Fujimoto
Yuma Fujimoto, Hisashi Ohtsuki
Evolutionary stability of cooperation by the leading eight norms in indirect reciprocity under noisy and private assessment
12 pages & 5 figures (main), 7 pages & 2 figures (supplement)
null
null
null
q-bio.PE cs.GT cs.MA physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Indirect reciprocity is a mechanism that explains large-scale cooperation in human societies. In indirect reciprocity, an individual chooses whether or not to cooperate with another based on reputation information, and others evaluate the action as good or bad. Under what evaluation rule (called a ``social norm'') cooperation evolves has long been of central interest in the literature. It has been reported that if individuals can share their evaluations (i.e., public reputation), social norms called the ``leading eight'' can be evolutionarily stable. On the other hand, when they cannot share their evaluations (i.e., private assessment), the evolutionary stability of cooperation is still in question. To tackle this problem, we create a novel method to analyze the reputation structure in the population under private assessment. Specifically, we characterize each individual by two variables, ``goodness'' (what proportion of the population considers the individual as good) and ``self-reputation'' (whether an individual thinks of him/herself as good or bad), and analyze the stochastic process of how these two variables change over time. We discuss the evolutionary stability of each of the leading eight social norms by studying their robustness against invasions of unconditional cooperators and defectors. We identify key pivots in those social norms for establishing a high level of cooperation or stable cooperation against mutants. Our findings give insight into how human cooperation is established in real-world societies.
[ { "created": "Thu, 19 Oct 2023 08:43:08 GMT", "version": "v1" } ]
2023-10-20
[ [ "Fujimoto", "Yuma", "" ], [ "Ohtsuki", "Hisashi", "" ] ]
Indirect reciprocity is a mechanism that explains large-scale cooperation in human societies. In indirect reciprocity, an individual chooses whether or not to cooperate with another based on reputation information, and others evaluate the action as good or bad. Under what evaluation rule (called a ``social norm'') cooperation evolves has long been of central interest in the literature. It has been reported that if individuals can share their evaluations (i.e., public reputation), social norms called the ``leading eight'' can be evolutionarily stable. On the other hand, when they cannot share their evaluations (i.e., private assessment), the evolutionary stability of cooperation is still in question. To tackle this problem, we create a novel method to analyze the reputation structure in the population under private assessment. Specifically, we characterize each individual by two variables, ``goodness'' (what proportion of the population considers the individual as good) and ``self-reputation'' (whether an individual thinks of him/herself as good or bad), and analyze the stochastic process of how these two variables change over time. We discuss the evolutionary stability of each of the leading eight social norms by studying their robustness against invasions of unconditional cooperators and defectors. We identify key pivots in those social norms for establishing a high level of cooperation or stable cooperation against mutants. Our findings give insight into how human cooperation is established in real-world societies.
1610.01192
Jing Xu
Winnie H. Liang, Qiaochu Li, K.M. Rifat Faysal, Stephen J. King, Ajay Gopinathan, Jing Xu
Microtubule Defects Influence Kinesin-Based Transport In Vitro
null
Biophysical Journal , Volume 110 , Issue 10 , 2229 - 2240 (2016)
10.1016/j.bpj.2016.04.029
null
q-bio.BM q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Microtubules are protein polymers that form "molecular highways" for long-range transport within living cells. Molecular motors actively step along microtubules to shuttle cellular materials between the nucleus and the cell periphery; this transport is critical for the survival and health of all eukaryotic cells. Structural defects in microtubules exist, but whether these defects impact molecular motor-based transport remains unknown. Here, we report a new, to our knowledge, approach that allowed us to directly investigate the impact of such defects. Using a modified optical-trapping method, we examined the group function of a major molecular motor, conventional kinesin, when transporting cargos along individual microtubules. We found that microtubule defects influence kinesin-based transport in vitro. The effects depend on motor number: cargos driven by a few motors tended to unbind prematurely from the microtubule, whereas cargos driven by more motors tended to pause. To our knowledge, our study provides the first direct link between microtubule defects and kinesin function. The effects uncovered in our study may have physiological relevance in vivo.
[ { "created": "Tue, 4 Oct 2016 20:37:45 GMT", "version": "v1" } ]
2016-10-06
[ [ "Liang", "Winnie H.", "" ], [ "Li", "Qiaochu", "" ], [ "Faysal", "K. M. Rifat", "" ], [ "King", "Stephen J.", "" ], [ "Gopinathan", "Ajay", "" ], [ "Xu", "Jing", "" ] ]
Microtubules are protein polymers that form "molecular highways" for long-range transport within living cells. Molecular motors actively step along microtubules to shuttle cellular materials between the nucleus and the cell periphery; this transport is critical for the survival and health of all eukaryotic cells. Structural defects in microtubules exist, but whether these defects impact molecular motor-based transport remains unknown. Here, we report a new, to our knowledge, approach that allowed us to directly investigate the impact of such defects. Using a modified optical-trapping method, we examined the group function of a major molecular motor, conventional kinesin, when transporting cargos along individual microtubules. We found that microtubule defects influence kinesin-based transport in vitro. The effects depend on motor number: cargos driven by a few motors tended to unbind prematurely from the microtubule, whereas cargos driven by more motors tended to pause. To our knowledge, our study provides the first direct link between microtubule defects and kinesin function. The effects uncovered in our study may have physiological relevance in vivo.
2306.00471
Jan Lukas Igelbrink
Jan Lukas Igelbrink, Adri\'an Gonz\'alez Casanova, Charline Smadi, Anton Wakolbinger
Muller's ratchet in a near-critical regime: tournament versus fitness proportional selection
The presentation is now focussed on the forward-in-time approach. The proof of the main result has been re-structured and expanded
null
null
null
q-bio.PE math.PR
http://creativecommons.org/licenses/by/4.0/
Muller's ratchet, in its prototype version, models a haploid, asexual population whose size~$N$ is constant over the generations. Slightly deleterious mutations are acquired along the lineages at a constant rate, and individuals carrying fewer mutations have a selective advantage. The classical variant considers {\it fitness proportional} selection, but other fitness schemes are conceivable as well. Inspired by the work of Etheridge et al. ([EPW09]) we propose a parameter scaling which fits well with the ``near-critical'' regime that was in the focus of [EPW09] (and in which the mutation-selection ratio diverges logarithmically as $N\to \infty$). Using a Moran model, we investigate the ``rule of thumb'' given in [EPW09] for the click rate of the ``classical ratchet'' by putting it into the context of new results on the long-time evolution of the size of the best class of the ratchet with (binary) tournament selection, which (other than that of the classical ratchet) follows an autonomous dynamics up to the time of its extinction. In [GSW23] it was discovered that the tournament ratchet has a hierarchy of dual processes which can be constructed on top of an Ancestral Selection graph with a Poisson decoration. For a regime in which the mutation-selection ratio remains bounded away from 1, this was used in [GSW23] to reveal the asymptotics of the click rates as well as that of the type frequency profile between clicks. We describe how these ideas can be extended to the near-critical regime, in which the mutation-selection ratio of the tournament ratchet converges to 1 as $N\to \infty$.
[ { "created": "Thu, 1 Jun 2023 09:20:51 GMT", "version": "v1" }, { "created": "Tue, 18 Jun 2024 18:50:44 GMT", "version": "v2" } ]
2024-06-21
[ [ "Igelbrink", "Jan Lukas", "" ], [ "Casanova", "Adrián González", "" ], [ "Smadi", "Charline", "" ], [ "Wakolbinger", "Anton", "" ] ]
Muller's ratchet, in its prototype version, models a haploid, asexual population whose size~$N$ is constant over the generations. Slightly deleterious mutations are acquired along the lineages at a constant rate, and individuals carrying fewer mutations have a selective advantage. The classical variant considers {\it fitness proportional} selection, but other fitness schemes are conceivable as well. Inspired by the work of Etheridge et al. ([EPW09]) we propose a parameter scaling which fits well with the ``near-critical'' regime that was in the focus of [EPW09] (and in which the mutation-selection ratio diverges logarithmically as $N\to \infty$). Using a Moran model, we investigate the ``rule of thumb'' given in [EPW09] for the click rate of the ``classical ratchet'' by putting it into the context of new results on the long-time evolution of the size of the best class of the ratchet with (binary) tournament selection, which (other than that of the classical ratchet) follows an autonomous dynamics up to the time of its extinction. In [GSW23] it was discovered that the tournament ratchet has a hierarchy of dual processes which can be constructed on top of an Ancestral Selection graph with a Poisson decoration. For a regime in which the mutation-selection ratio remains bounded away from 1, this was used in [GSW23] to reveal the asymptotics of the click rates as well as that of the type frequency profile between clicks. We describe how these ideas can be extended to the near-critical regime, in which the mutation-selection ratio of the tournament ratchet converges to 1 as $N\to \infty$.
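A minimal simulation sketch of the ratchet with binary tournament selection in a Moran-type population, tracking the size of the best (least-loaded) class and recording the "clicks" at which that class goes extinct. The parameters are illustrative; the paper's scaling regime and duality analysis are not reproduced.

import numpy as np

def tournament_ratchet(N=500, mu=0.05, steps=200_000, seed=0):
    """Each step: two individuals compete; the one with fewer mutations reproduces
    (ties broken at random), the offspring mutates with probability mu and replaces
    a uniformly chosen individual. Returns the step indices of the clicks."""
    rng = np.random.default_rng(seed)
    load = np.zeros(N, dtype=int)   # deleterious mutation counts
    clicks = []
    for t in range(steps):
        i, j = rng.integers(N, size=2)
        parent = i if load[i] < load[j] else j if load[j] < load[i] else (i, j)[rng.integers(2)]
        child = load[parent] + (rng.random() < mu)
        load[rng.integers(N)] = child
        best = load.min()
        if t > 0 and best > prev_best:
            clicks.append(t)        # the best class just went extinct
        prev_best = best
    return clicks

clicks = tournament_ratchet()
print("number of clicks:", len(clicks))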
1906.12006
Kim Cholsong
Hyon-il Ri, Chol-song Kim, Un-hak Pak, Myong-su Kang, Tae-mun Kim
Effect of different polarity solvents on total phenols and flavonoids content, and In-vitro antioxidant properties of flowers extract from Aurea Helianthus
7 pages, 2 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The total phenol and flavonoid contents of different polar solvent extracts from Aurea Helianthus flowers, and their antioxidant activity, were determined. The ethanol extract of Aurea Helianthus flowers was suspended in water and fractionated using solvents of different polarity: hexane, chloroform, ethyl acetate, butanol and water. The parameters of each extract mentioned above were determined using the Folin-Ciocalteu reagent (FCR) method, the AlCl3 colorimetry method, the ferric reducing ability of plasma (FRAP) assay, the total antioxidant activity (TAA) assay and the DPPH radical scavenging assay. The highest total phenol content (516.21 mg GAE/g) and flavonoid content (326.06 mg QCE/g) were obtained in the ethyl acetate extract; the correlation between the TPC and TFC assays was found to be 0.967. All polar solvent extracts of Aurea Helianthus flowers showed significant antioxidant effects; the highest inhibition was obtained in the ethyl acetate and chloroform extracts and the lowest inhibition in the water extract. There is a good correlation of total phenol and flavonoid contents with antioxidant activity. This work indicated that the polar solvent extracts of Aurea Helianthus flowers contain high phenol and flavonoid contents and exhibited antioxidant activities in vitro, and therefore could be candidates for use as natural antioxidants.
[ { "created": "Fri, 28 Jun 2019 00:39:42 GMT", "version": "v1" } ]
2019-07-01
[ [ "Ri", "Hyon-il", "" ], [ "Kim", "Chol-song", "" ], [ "Pak", "Un-hak", "" ], [ "Kang", "Myong-su", "" ], [ "Kim", "Tae-mun", "" ] ]
The total phenol and flavonoid contents of different polar solvent extracts from Aurea Helianthus flowers, and their antioxidant activity, were determined. The ethanol extract of Aurea Helianthus flowers was suspended in water and fractionated using solvents of different polarity: hexane, chloroform, ethyl acetate, butanol and water. The parameters of each extract mentioned above were determined using the Folin-Ciocalteu reagent (FCR) method, the AlCl3 colorimetry method, the ferric reducing ability of plasma (FRAP) assay, the total antioxidant activity (TAA) assay and the DPPH radical scavenging assay. The highest total phenol content (516.21 mg GAE/g) and flavonoid content (326.06 mg QCE/g) were obtained in the ethyl acetate extract; the correlation between the TPC and TFC assays was found to be 0.967. All polar solvent extracts of Aurea Helianthus flowers showed significant antioxidant effects; the highest inhibition was obtained in the ethyl acetate and chloroform extracts and the lowest inhibition in the water extract. There is a good correlation of total phenol and flavonoid contents with antioxidant activity. This work indicated that the polar solvent extracts of Aurea Helianthus flowers contain high phenol and flavonoid contents and exhibited antioxidant activities in vitro, and therefore could be candidates for use as natural antioxidants.
1012.5958
Erich Schmid
Erich W. Schmid, Robert Wilke
Electric Stimulation of the Retina
21 pages
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Two computational models to be used as tools for experimental research on the retinal implant are presented. In the first model, the electric field produced by a multi-electrode array in a uniform retina is calculated. In the second model, the depolarization of the cell membrane of a probe cylinder is calculated. It is shown how these models can be used to answer questions as to cross talk of activated electrodes, bunching of field lines in monopole and dipole activation, sequential stimulation, etc. The depolarization as a function of time indicates that shorter signals stimulate better, as long as the current does not change sign during stimulation.
[ { "created": "Wed, 29 Dec 2010 15:13:43 GMT", "version": "v1" } ]
2010-12-30
[ [ "Schmid", "Erich W.", "" ], [ "Wilke", "Robert", "" ] ]
Two computational models to be used as tools for experimental research on the retinal implant are presented. In the first model, the electric field produced by a multi-electrode array in a uniform retina is calculated. In the second model, the depolarization of the cell membrane of a probe cylinder is calculated. It is shown how these models can be used to answer questions as to cross talk of activated electrodes, bunching of field lines in monopole and dipole activation, sequential stimulation, etc. The depolarization as a function of time indicates that shorter signals stimulate better, as long as the current does not change sign during stimulation.
1308.2135
Chris Nasrallah Ph.D.
Chris A. Nasrallah
The dynamics of alternative pathways to compensatory substitution
17 pages, 9 figures, 1 table. Accepted to RECOMB Comparative Genomics Meeting 2013, to be published in BMC Bioinformatics
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The role of epistatic interactions among loci is a central question in evolutionary biology and is increasingly relevant in the genomic age. While the population genetics of compensatory substitution have received considerable attention, most studies have focused on the case when natural selection is very strong against deleterious intermediates. In the biologically plausible scenario of weak to moderate selection, there exist two alternative pathways for compensatory substitution. In one pathway, a deleterious mutation becomes fixed prior to the occurrence of the compensatory mutation. In the other, the two loci are simultaneously polymorphic. The rates of compensatory substitution along these two pathways and their relative probabilities are functions of the population size, selection strength, mutation rate, and recombination rate. In this paper these rates and path probabilities are derived analytically and verified using population genetic simulations. The expected time durations of these two paths are similar when selection is moderate, but not when selection is weak. The effects of recombination on the dynamics of the substitution process are explored using simulation. Using the derived rates, a phylogenetic substitution model of the compensatory evolution process is presented that could be used for inference of population genetic parameters from interspecific data.
[ { "created": "Fri, 9 Aug 2013 14:29:16 GMT", "version": "v1" } ]
2013-08-12
[ [ "Nasrallah", "Chris A.", "" ] ]
The role of epistatic interactions among loci is a central question in evolutionary biology and is increasingly relevant in the genomic age. While the population genetics of compensatory substitution have received considerable attention, most studies have focused on the case when natural selection is very strong against deleterious intermediates. In the biologically plausible scenario of weak to moderate selection, there exist two alternative pathways for compensatory substitution. In one pathway, a deleterious mutation becomes fixed prior to the occurrence of the compensatory mutation. In the other, the two loci are simultaneously polymorphic. The rates of compensatory substitution along these two pathways and their relative probabilities are functions of the population size, selection strength, mutation rate, and recombination rate. In this paper these rates and path probabilities are derived analytically and verified using population genetic simulations. The expected time durations of these two paths are similar when selection is moderate, but not when selection is weak. The effects of recombination on the dynamics of the substitution process are explored using simulation. Using the derived rates, a phylogenetic substitution model of the compensatory evolution process is presented that could be used for inference of population genetic parameters from interspecific data.
2111.11386
Hayriye Gulbudak
Hayriye Gulbudak, Zhuolin Qu, Fabio Milner, Necibe Tuncer
Sensitivity analysis in an Immuno-Epidemiological Vector-Host Model
null
null
null
null
q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Sensitivity analysis (SA) is a useful tool to measure the impact of changes in model parameters on infection dynamics, particularly to quantify the expected efficacy of disease control strategies. SA has so far been applied only to epidemic models at the population level, ignoring the effect of within-host virus-immune-system interactions on disease spread. Connecting the scales from individual to population can help inform drug and vaccine development; hence the value of understanding the impact of immunological parameters on epidemiological quantities. Here we consider an age-since-infection structured vector-host model, in which epidemiological parameters are formulated as functions of within-host virus and antibody densities, governed by an ODE system. We then use SA for these immuno-epidemiological models to investigate the impact of immunological parameters on population-level disease dynamics such as the basic reproduction number, the final size of the epidemic, or the infectiousness at different phases of an outbreak. As a case study, we consider Rift Valley Fever Disease (RVFD), utilizing parameter estimations from prior studies. SA indicates that a 1% increase in the within-host pathogen growth rate can lead to up to an 8% increase in R0, up to a 1% increase in steady-state infected host abundance, and up to a 4% increase in the infectiousness of hosts when the reproduction number R0 is larger than one. These significant increases in population-scale disease quantities suggest that control strategies that reduce the within-host pathogen growth can be important in reducing disease prevalence.
[ { "created": "Mon, 22 Nov 2021 17:54:33 GMT", "version": "v1" } ]
2021-11-23
[ [ "Gulbudak", "Hayriye", "" ], [ "Qu", "Zhuolin", "" ], [ "Milner", "Fabio", "" ], [ "Tuncer", "Necibe", "" ] ]
Sensitivity analysis (SA) is a useful tool to measure the impact of changes in model parameters on infection dynamics, particularly to quantify the expected efficacy of disease control strategies. SA has so far been applied only to epidemic models at the population level, ignoring the effect of within-host virus-immune-system interactions on disease spread. Connecting the scales from individual to population can help inform drug and vaccine development; hence the value of understanding the impact of immunological parameters on epidemiological quantities. Here we consider an age-since-infection structured vector-host model, in which epidemiological parameters are formulated as functions of within-host virus and antibody densities, governed by an ODE system. We then use SA for these immuno-epidemiological models to investigate the impact of immunological parameters on population-level disease dynamics such as the basic reproduction number, the final size of the epidemic, or the infectiousness at different phases of an outbreak. As a case study, we consider Rift Valley Fever Disease (RVFD), utilizing parameter estimations from prior studies. SA indicates that a 1% increase in the within-host pathogen growth rate can lead to up to an 8% increase in R0, up to a 1% increase in steady-state infected host abundance, and up to a 4% increase in the infectiousness of hosts when the reproduction number R0 is larger than one. These significant increases in population-scale disease quantities suggest that control strategies that reduce the within-host pathogen growth can be important in reducing disease prevalence.
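A generic sketch of the finite-difference (elasticity-style) computation this record describes: perturb one within-host parameter by 1% and report the normalized response of a population-level output such as R0. The R0 function below is a hypothetical stand-in; the paper's immuno-epidemiological model is not reproduced.

import numpy as np

def elasticity(output_fn, params, name, rel_step=0.01):
    """Normalized sensitivity (elasticity): % change in output per % change in params[name]."""
    base = output_fn(params)
    bumped = dict(params, **{name: params[name] * (1 + rel_step)})
    return (output_fn(bumped) - base) / base / rel_step

def R0(p):
    # Hypothetical stand-in linking a within-host growth rate r to transmission
    return p["beta"] * np.log(1 + p["r"]) / p["gamma"]

params = {"beta": 0.3, "gamma": 0.1, "r": 2.0}
for name in params:
    print(name, round(elasticity(R0, params, name), 3))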
1203.0489
Thomas Thorne
Thomas Thorne and Michael P.H Stumpf
Inference of Temporally Varying Bayesian Networks
null
null
null
null
q-bio.MN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When analysing gene expression time series data, an often overlooked but crucial aspect of the model is that the regulatory network structure may change over time. Whilst some approaches have addressed this problem previously in the literature, many are not well suited to the sequential nature of the data. Here we present a method that allows us to infer regulatory network structures that may vary between time points, utilising a set of hidden states that describe the network structure at a given time point. To model the distribution of the hidden states we have applied the Hierarchical Dirichlet Process Hidden Markov Model, a nonparametric extension of the traditional Hidden Markov Model that does not require us to fix the number of hidden states in advance. We apply our method to existing microarray expression data as well as demonstrating its efficacy on simulated test data.
[ { "created": "Fri, 2 Mar 2012 15:21:02 GMT", "version": "v1" } ]
2012-03-05
[ [ "Thorne", "Thomas", "" ], [ "Stumpf", "Michael P. H", "" ] ]
When analysing gene expression time series data, an often overlooked but crucial aspect of the model is that the regulatory network structure may change over time. Whilst some approaches have addressed this problem previously in the literature, many are not well suited to the sequential nature of the data. Here we present a method that allows us to infer regulatory network structures that may vary between time points, utilising a set of hidden states that describe the network structure at a given time point. To model the distribution of the hidden states we have applied the Hierarchical Dirichlet Process Hidden Markov Model, a nonparametric extension of the traditional Hidden Markov Model that does not require us to fix the number of hidden states in advance. We apply our method to existing microarray expression data as well as demonstrating its efficacy on simulated test data.
0802.2884
Sara Walker
Marcelo Gleiser and Sara Imari Walker
An Extended Model for the Evolution of Prebiotic Homochirality: A Bottom-Up Approach to the Origin of Life
null
Orig. Life Evol. Biosph. 38, 293-315 (2008)
10.1007/s11084-008-9134-5
null
q-bio.BM astro-ph physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A generalized autocatalytic model for chiral polymerization is investigated in detail. Apart from enantiomeric cross-inhibition, the model allows for the autogenic (non-catalytic) formation of left- and right-handed monomers from a substrate with reaction rates $\epsilon_L$ and $\epsilon_R$, respectively. The spatiotemporal evolution of the net chiral asymmetry is studied for models with several values of the maximum polymer length, N. For N=2, we study the validity of the adiabatic approximation often cited in the literature. We show that the approximation obtains the correct equilibrium values of the net chirality, but fails to reproduce the short-time behavior. We show also that the autogenic term in the full N=2 model behaves as a control parameter in a chiral symmetry-breaking phase transition leading to full homochirality from racemic initial conditions. We study the dynamics of the N -> infinity model with symmetric ($\epsilon_L = \epsilon_R$) autogenic formation, showing that it only achieves homochirality for $\epsilon < \epsilon_c$, where $\epsilon_c$ is an N-dependent critical value. For $\epsilon \leq \epsilon_c$ we investigate the behavior of models with several values of N, showing that the net chiral asymmetry grows as tanh(N). We show that for a given symmetric autogenic reaction rate, the net chirality and the concentrations of chirally pure polymers increase with the maximum polymer length in the model. We briefly discuss the consequences of our results for the development of homochirality on prebiotic Earth and possible experimental verification of our findings.
[ { "created": "Wed, 20 Feb 2008 15:31:01 GMT", "version": "v1" } ]
2008-07-24
[ [ "Gleiser", "Marcelo", "" ], [ "Walker", "Sara Imari", "" ] ]
A generalized autocatalytic model for chiral polymerization is investigated in detail. Apart from enantiomeric cross-inhibition, the model allows for the autogenic (non-catalytic) formation of left- and right-handed monomers from a substrate with reaction rates $\epsilon_L$ and $\epsilon_R$, respectively. The spatiotemporal evolution of the net chiral asymmetry is studied for models with several values of the maximum polymer length, N. For N=2, we study the validity of the adiabatic approximation often cited in the literature. We show that the approximation obtains the correct equilibrium values of the net chirality, but fails to reproduce the short-time behavior. We show also that the autogenic term in the full N=2 model behaves as a control parameter in a chiral symmetry-breaking phase transition leading to full homochirality from racemic initial conditions. We study the dynamics of the N -> infinity model with symmetric ($\epsilon_L = \epsilon_R$) autogenic formation, showing that it only achieves homochirality for $\epsilon < \epsilon_c$, where $\epsilon_c$ is an N-dependent critical value. For $\epsilon \leq \epsilon_c$ we investigate the behavior of models with several values of N, showing that the net chiral asymmetry grows as tanh(N). We show that for a given symmetric autogenic reaction rate, the net chirality and the concentrations of chirally pure polymers increase with the maximum polymer length in the model. We briefly discuss the consequences of our results for the development of homochirality on prebiotic Earth and possible experimental verification of our findings.
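A small scipy sketch of the qualitative ingredients in this family of models: autogenic and autocatalytic production of left/right monomers from a substrate, with enantiomeric cross-inhibition, integrated to show amplification of a small initial excess. The reduced equations and rate values are illustrative assumptions (a Frank-type caricature), not the paper's full polymerization network.

import numpy as np
from scipy.integrate import solve_ivp

def chiral_model(t, y, eps=1e-3, k=5.0, gamma=5.0):
    """Substrate A feeds L and R non-catalytically (eps) and autocatalytically (k);
    enantiomeric cross-inhibition (gamma) removes L and R in pairs."""
    A, L, R = y
    return [-2 * eps * A - k * A * (L + R),
            eps * A + k * A * L - gamma * L * R,
            eps * A + k * A * R - gamma * L * R]

# Start near-racemic with a tiny excess of L
sol = solve_ivp(chiral_model, (0, 100), [1.0, 1.1e-3, 1.0e-3], rtol=1e-8)
A, L, R = sol.y[:, -1]
print(f"net chiral asymmetry eta = {(L - R) / (L + R):.3f}")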
2107.13116
Dengming Ming
Qi Gao and Dengming Ming
Protein-protein interactions enhance the thermal resilience of SpyRing enzymes: a molecular dynamic simulation study
30pages, 10 figures
null
10.1371/journal.pone.0263792
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Recently, a technique based on the interaction between adhesion proteins extracted from Streptococcus pyogenes, known as SpyRing, has been widely used to improve the thermal resilience of enzymes, the assembly of biostructures, cancer cell recognition and other fields. In SpyRing, the two termini of the target enzyme are respectively linked to the peptide SpyTag and its protein partner SpyCatcher. SpyTag spontaneously reacts with SpyCatcher to form an isopeptide bond, with which the target enzyme forms a closed ring structure. It is believed that the covalent cyclization of the protein backbone caused by SpyRing reduces the conformational entropy of the biological structure and improves its rigidity, thus improving the thermal resilience of the target enzyme. However, the interactions of SpyTag/SpyCatcher with this enzyme are poorly understood, and their regulation of enzyme properties remains unclear. Here, for simplicity, we took the single-domain enzyme lichenase from Bacillus subtilis 168 as an example, studied the interface interactions in the SpyRing system by molecular dynamics simulations, and examined the effects of changes in electrostatic and van der Waals interactions on the thermal resilience of the target enzyme. The simulations showed that the interface between SpyTag/SpyCatcher and lichenase is different from that found by the geometric matching method and highlighted key mutations that affect the intensity of interactions at the interface and might have an effect on the thermal resilience of the enzyme. Our calculations provide new insights into rational designs in the SpyRing.
[ { "created": "Wed, 28 Jul 2021 01:02:48 GMT", "version": "v1" } ]
2022-04-06
[ [ "Gao", "Qi", "" ], [ "Ming", "Dengming", "" ] ]
Recently, a technique based on the interaction between adhesion proteins extracted from Streptococcus pyogenes, known as SpyRing, has been widely used to improve the thermal resilience of enzymes and has found applications in the assembly of biostructures, cancer cell recognition and other fields. In SpyRing, the two termini of the target enzyme are respectively linked to the peptide SpyTag and its protein partner SpyCatcher. SpyTag spontaneously reacts with SpyCatcher to form an isopeptide bond, with which the target enzyme forms a closed ring structure. It has been suggested that the covalent cyclization of the protein backbone caused by SpyRing reduces the conformational entropy of the structure and increases its rigidity, thus improving the thermal resilience of the target enzyme. However, the interactions of SpyTag/SpyCatcher with the enzyme are poorly understood, and their regulation of enzyme properties remains unclear. Here, for simplicity, we took the single-domain enzyme lichenase from Bacillus subtilis 168 as an example, studied the interface interactions in the SpyRing system by molecular dynamics simulations, and examined how changes in electrostatic and van der Waals interactions affect the thermal resilience of the target enzyme. The simulations showed that the interface between SpyTag/SpyCatcher and lichenase differs from that found by the geometric matching method, and they highlighted key mutations that affect the intensity of interactions at the interface and might affect the thermal resilience of the enzyme. Our calculations provide new insights into the rational design of SpyRing systems.
1808.04804
Nikolai Zolotykh
Alena I. Kalyakulina, Igor I. Yusipov, Victor A. Moskalenko, Alexander V. Nikolskiy, Artem A. Kozlov, Nikolay Yu. Zolotykh, Mikhail V. Ivanchenko
Finding morphology points of electrocardiographic signal waves using wavelet analysis
Submitted to Radiophysics and Quantum Electronics
null
10.1007/s11141-019-09929-2
null
q-bio.QM eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A new algorithm has been developed for delineation of significant points of various electrocardiographic signal (ECG) waves, taking into account information from all available leads and providing similar or higher accuracy in comparison with other modern technologies. The test results for the QT database show a sensitivity above 97% when detecting ECG wave peaks and 96% for their onsets and offsets, as well as better positive predictive value compared to the previously known algorithms. In contrast to the previously published algorithms, the proposed approach also allows one to determine the morphology of waves. The segmentation mean errors of all significant points are below the tolerances defined by the Committee of General Standards for Electrocardiography (CSE).
[ { "created": "Tue, 14 Aug 2018 17:34:11 GMT", "version": "v1" } ]
2019-05-01
[ [ "Kalyakulina", "Alena I.", "" ], [ "Yusipov", "Igor I.", "" ], [ "Moskalenko", "Victor A.", "" ], [ "Nikolskiy", "Alexander V.", "" ], [ "Kozlov", "Artem A.", "" ], [ "Zolotykh", "Nikolay Yu.", "" ], [ "Ivanchenko"...
A new algorithm has been developed for delineation of significant points of various electrocardiographic signal (ECG) waves, taking into account information from all available leads and providing similar or higher accuracy in comparison with other modern technologies. The test results for the QT database show a sensitivity above 97% when detecting ECG wave peaks and 96% for their onsets and offsets, as well as better positive predictive value compared to the previously known algorithms. In contrast to the previously published algorithms, the proposed approach also allows one to determine the morphology of waves. The segmentation mean errors of all significant points are below the tolerances defined by the Committee of General Standards for Electrocardiography (CSE).
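A hedged sketch of the wavelet idea behind such delineators (illustrative only, not the authors' multi-lead algorithm; the sampling rate, wavelet scale, and thresholds below are assumptions): convolve one lead with a Mexican-hat (Ricker) wavelet at a scale matched to the QRS width and take local maxima of the response as peak candidates.

```python
import numpy as np
from scipy.signal import find_peaks

def ricker(width, n=None):
    """Mexican-hat wavelet sampled on n points."""
    n = n or int(10 * width)
    t = np.arange(n) - (n - 1) / 2.0
    a = t / width
    return (1 - a**2) * np.exp(-a**2 / 2)

fs = 360.0                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ecg = np.zeros_like(t)                       # synthetic stand-in: one spike per second
ecg[(np.arange(10) * fs).astype(int)] = 1.0
ecg += 0.05 * np.random.randn(t.size)

w = np.convolve(ecg, ricker(width=8), mode="same")        # scale ~ QRS width
peaks, _ = find_peaks(w, height=0.5 * w.max(), distance=int(0.3 * fs))
print("detected beats:", len(peaks))
```

A full delineator would repeat this across scales and leads, fuse the responses, and then search for wave onsets and offsets around each detected peak.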
1904.13007
Zhaofei Yu
Yichen Zhang and Shanshan Jia and Yajing Zheng and Zhaofei Yu and Yonghong Tian and Siwei Ma and Tiejun Huang and Jian K. Liu
Reconstruction of Natural Visual Scenes from Neural Spikes with Deep Neural Networks
35 pages, 10 figures
null
null
null
q-bio.NC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural coding is one of the central questions in systems neuroscience for understanding how the brain processes stimuli from the environment; moreover, it is a cornerstone for designing brain-machine interface algorithms, where decoding the incoming stimulus is in high demand for better performance of physical devices. Traditionally, researchers have focused on functional magnetic resonance imaging (fMRI) data as the neural signals of interest for decoding visual scenes. However, our visual perception operates on a fast timescale of milliseconds in terms of events termed neural spikes. There are few studies of decoding by using spikes. Here we fulfill this aim by developing a novel decoding framework based on deep neural networks, named spike-image decoder (SID), for reconstructing natural visual scenes, including static images and dynamic videos, from experimentally recorded spikes of a population of retinal ganglion cells. The SID is an end-to-end decoder with one end as neural spikes and the other end as images, which can be trained directly such that visual scenes are reconstructed from spikes in a highly accurate fashion. Our SID also outperforms existing fMRI decoding models on the reconstruction of visual stimuli. In addition, with the aid of a spike encoder, we show that SID can be generalized to arbitrary visual scenes by using the image datasets of MNIST, CIFAR10, and CIFAR100. Furthermore, with a pre-trained SID, one can decode any dynamic video to achieve real-time encoding and decoding of visual scenes by spikes. Altogether, our results shed new light on neuromorphic computing for artificial visual systems, such as event-based visual cameras and visual neuroprostheses.
[ { "created": "Tue, 30 Apr 2019 01:15:24 GMT", "version": "v1" }, { "created": "Tue, 28 Jan 2020 13:04:24 GMT", "version": "v2" } ]
2020-01-29
[ [ "Zhang", "Yichen", "" ], [ "Jia", "Shanshan", "" ], [ "Zheng", "Yajing", "" ], [ "Yu", "Zhaofei", "" ], [ "Tian", "Yonghong", "" ], [ "Ma", "Siwei", "" ], [ "Huang", "Tiejun", "" ], [ "Liu", "Ji...
Neural coding is one of the central questions in systems neuroscience for understanding how the brain processes stimuli from the environment; moreover, it is a cornerstone for designing brain-machine interface algorithms, where decoding the incoming stimulus is in high demand for better performance of physical devices. Traditionally, researchers have focused on functional magnetic resonance imaging (fMRI) data as the neural signals of interest for decoding visual scenes. However, our visual perception operates on a fast timescale of milliseconds in terms of events termed neural spikes. There are few studies of decoding by using spikes. Here we fulfill this aim by developing a novel decoding framework based on deep neural networks, named spike-image decoder (SID), for reconstructing natural visual scenes, including static images and dynamic videos, from experimentally recorded spikes of a population of retinal ganglion cells. The SID is an end-to-end decoder with one end as neural spikes and the other end as images, which can be trained directly such that visual scenes are reconstructed from spikes in a highly accurate fashion. Our SID also outperforms existing fMRI decoding models on the reconstruction of visual stimuli. In addition, with the aid of a spike encoder, we show that SID can be generalized to arbitrary visual scenes by using the image datasets of MNIST, CIFAR10, and CIFAR100. Furthermore, with a pre-trained SID, one can decode any dynamic video to achieve real-time encoding and decoding of visual scenes by spikes. Altogether, our results shed new light on neuromorphic computing for artificial visual systems, such as event-based visual cameras and visual neuroprostheses.
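As a rough illustration of the end-to-end spike-to-image idea (the layer sizes, loss, and data below are placeholders, not the authors' SID architecture), a PyTorch sketch might look like:

```python
import torch
import torch.nn as nn

n_cells, img_side = 90, 28                   # hypothetical population and image size
decoder = nn.Sequential(
    nn.Linear(n_cells, 512), nn.ReLU(),
    nn.Linear(512, 1024), nn.ReLU(),
    nn.Linear(1024, img_side * img_side), nn.Sigmoid(),
)

opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
spikes = torch.poisson(torch.rand(256, n_cells) * 5)      # stand-in spike counts
images = torch.rand(256, img_side * img_side)             # stand-in target images

for epoch in range(100):                     # train directly on pixel reconstruction
    opt.zero_grad()
    loss = nn.functional.mse_loss(decoder(spikes), images)
    loss.backward()
    opt.step()
```

The actual SID is trained on recorded retinal ganglion cell responses rather than random data and reconstructs full natural scenes and videos.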
1504.08116
Andreas Milias-Argeitis
Andreas Milias-Argeitis, Stefan Engblom, Pavol Bauer, Mustafa Khammash
Stochastic focusing coupled with negative feedback enables robust regulation in biochemical reaction networks
null
J. R. Soc. Interface 12(113), pp. 1-10 (2015)
10.1098/rsif.2015.0831
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nature presents multiple intriguing examples of processes which proceed at high precision and regularity. This remarkable stability is frequently counter to modelers' experience with the inherent stochasticity of chemical reactions in the regime of low copy numbers. Moreover, the effects of noise and nonlinearities can lead to "counter-intuitive" behavior, as demonstrated for a basic enzymatic reaction scheme that can display stochastic focusing (SF). Under the assumption of rapid signal fluctuations, SF has been shown to convert a graded response into a threshold mechanism, thus attenuating the detrimental effects of signal noise. However, when the rapid fluctuation assumption is violated, this gain in sensitivity is generally obtained at the cost of very large product variance, and this unpredictable behavior may be one possible explanation of why, more than a decade after its introduction, SF has still not been observed in real biochemical systems. In this work we explore the noise properties of a simple enzymatic reaction mechanism with a small and fluctuating number of active enzymes that behaves as a high-gain, noisy amplifier due to SF caused by slow enzyme fluctuations. We then show that the inclusion of a plausible negative feedback mechanism turns the system from a noisy signal detector to a strong homeostatic mechanism by exchanging high gain with strong attenuation in output noise and robustness to parameter variations. Moreover, we observe that the discrepancy between deterministic and stochastic descriptions of stochastically focused systems in the evolution of the means almost completely disappears, despite very low molecule counts and the additional nonlinearity due to feedback. The reaction mechanism considered here can provide a possible resolution to the apparent conflict between intrinsic noise and high precision in critical intracellular processes.
[ { "created": "Thu, 30 Apr 2015 08:32:58 GMT", "version": "v1" }, { "created": "Tue, 29 Sep 2015 16:34:11 GMT", "version": "v2" } ]
2015-12-31
[ [ "Milias-Argeitis", "Andreas", "" ], [ "Engblom", "Stefan", "" ], [ "Bauer", "Pavol", "" ], [ "Khammash", "Mustafa", "" ] ]
Nature presents multiple intriguing examples of processes which proceed at high precision and regularity. This remarkable stability is frequently counter to modelers' experience with the inherent stochasticity of chemical reactions in the regime of low copy numbers. Moreover, the effects of noise and nonlinearities can lead to "counter-intuitive" behavior, as demonstrated for a basic enzymatic reaction scheme that can display stochastic focusing (SF). Under the assumption of rapid signal fluctuations, SF has been shown to convert a graded response into a threshold mechanism, thus attenuating the detrimental effects of signal noise. However, when the rapid fluctuation assumption is violated, this gain in sensitivity is generally obtained at the cost of very large product variance, and this unpredictable behavior may be one possible explanation of why, more than a decade after its introduction, SF has still not been observed in real biochemical systems. In this work we explore the noise properties of a simple enzymatic reaction mechanism with a small and fluctuating number of active enzymes that behaves as a high-gain, noisy amplifier due to SF caused by slow enzyme fluctuations. We then show that the inclusion of a plausible negative feedback mechanism turns the system from a noisy signal detector to a strong homeostatic mechanism by exchanging high gain with strong attenuation in output noise and robustness to parameter variations. Moreover, we observe that the discrepancy between deterministic and stochastic descriptions of stochastically focused systems in the evolution of the means almost completely disappears, despite very low molecule counts and the additional nonlinearity due to feedback. The reaction mechanism considered here can provide a possible resolution to the apparent conflict between intrinsic noise and high precision in critical intracellular processes.
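A minimal Gillespie sketch of the kind of system discussed (illustrative only; the reaction scheme and rates are assumptions, not the paper's model): product P is made at a rate proportional to an enzyme that switches slowly between active and inactive states, so slow enzyme fluctuations inflate the output noise.

```python
import numpy as np

rng = np.random.default_rng(0)
k, d = 50.0, 1.0           # production and degradation rates (hypothetical)
f_on, f_off = 0.1, 0.1     # slow enzyme on/off switching rates (hypothetical)

t, t_end, E, P, samples = 0.0, 2000.0, 1, 0, []
while t < t_end:
    rates = np.array([k * E, d * P, f_on * (1 - E), f_off * E])
    total = rates.sum()
    t += rng.exponential(1 / total)
    r = rng.choice(4, p=rates / total)
    if r == 0:   P += 1        # production by active enzyme
    elif r == 1: P -= 1        # degradation
    elif r == 2: E = 1         # enzyme activation
    else:        E = 0         # enzyme inactivation
    samples.append(P)

s = np.array(samples[len(samples) // 2:])    # discard transient
print("mean P = %.1f, Fano factor = %.1f" % (s.mean(), s.var() / s.mean()))
```

The Fano factor far above 1 reflects the slow-enzyme amplification; adding a negative feedback from P onto the enzyme switching rates is the modification whose noise-suppressing effect the paper quantifies.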
1911.08585
Thomas Mesnard
Thomas Mesnard, Gaetan Vignoud, Joao Sacramento, Walter Senn, Yoshua Bengio
Ghost Units Yield Biologically Plausible Backprop in Deep Neural Networks
null
null
null
null
q-bio.NC cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the past few years, deep learning has transformed artificial intelligence research and led to impressive performance in various difficult tasks. However, it is still unclear how the brain can perform credit assignment across many areas as efficiently as backpropagation does in deep neural networks. In this paper, we introduce a model that relies on a new role for a neuronal inhibitory machinery, referred to as ghost units. By cancelling the feedback coming from the upper layer when no target signal is provided to the top layer, the ghost units enables the network to backpropagate errors and do efficient credit assignment in deep structures. While considering one-compartment neurons and requiring very few biological assumptions, it is able to approximate the error gradient and achieve good performance on classification tasks. Error backpropagation occurs through the recurrent dynamics of the network and thanks to biologically plausible local learning rules. In particular, it does not require separate feedforward and feedback circuits. Different mechanisms for cancelling the feedback were studied, ranging from complete duplication of the connectivity by long term processes to online replication of the feedback activity. This reduced system combines the essential elements to have a working biologically abstracted analogue of backpropagation with a simple formulation and proofs of the associated results. Therefore, this model is a step towards understanding how learning and memory are implemented in cortical multilayer structures, but it also raises interesting perspectives for neuromorphic hardware.
[ { "created": "Fri, 15 Nov 2019 17:47:00 GMT", "version": "v1" } ]
2019-11-21
[ [ "Mesnard", "Thomas", "" ], [ "Vignoud", "Gaetan", "" ], [ "Sacramento", "Joao", "" ], [ "Senn", "Walter", "" ], [ "Bengio", "Yoshua", "" ] ]
In the past few years, deep learning has transformed artificial intelligence research and led to impressive performance in various difficult tasks. However, it is still unclear how the brain can perform credit assignment across many areas as efficiently as backpropagation does in deep neural networks. In this paper, we introduce a model that relies on a new role for a neuronal inhibitory machinery, referred to as ghost units. By cancelling the feedback coming from the upper layer when no target signal is provided to the top layer, the ghost units enables the network to backpropagate errors and do efficient credit assignment in deep structures. While considering one-compartment neurons and requiring very few biological assumptions, it is able to approximate the error gradient and achieve good performance on classification tasks. Error backpropagation occurs through the recurrent dynamics of the network and thanks to biologically plausible local learning rules. In particular, it does not require separate feedforward and feedback circuits. Different mechanisms for cancelling the feedback were studied, ranging from complete duplication of the connectivity by long term processes to online replication of the feedback activity. This reduced system combines the essential elements to have a working biologically abstracted analogue of backpropagation with a simple formulation and proofs of the associated results. Therefore, this model is a step towards understanding how learning and memory are implemented in cortical multilayer structures, but it also raises interesting perspectives for neuromorphic hardware.
2401.10932
Yaron Ben-Ami
Y. Ben-Ami and B. D. Wood and J. M. Pitt-Francis and P. K. Maini and H. M. Byrne
Homogenisation of nonlinear blood flow in periodic networks: the limit of small haematocrit heterogeneity
34 pages, 8 figures
null
null
null
q-bio.TO cond-mat.soft physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
In this work we develop a homogenisation methodology to upscale mathematical descriptions of microcirculatory blood flow from the microscale (where individual vessels are resolved) to the macroscopic (or tissue) scale. Due to the assumed two-phase nature of blood and specific features of red blood cells (RBCs), mathematical models for blood flow in the microcirculation are highly nonlinear, coupling the flow and RBC concentrations (haematocrit). In contrast to previous works which accomplished blood-flow homogenisation by assuming that the haematocrit level remains constant, here we allow for spatial heterogeneity in the haematocrit concentration and thus begin with a nonlinear microscale model. We simplify the analysis by considering the limit of small haematocrit heterogeneity, which prevails when variations in haematocrit concentration between neighbouring vessels are small. Homogenisation results in a system of coupled, nonlinear partial differential equations describing the flow and haematocrit transport at the macroscale, in which a nonlinear Darcy-type model relates the flow and pressure gradient via a haematocrit-dependent permeability tensor. The analysis further shows that haematocrit transport at the macroscale is governed by a purely advective equation. Applying the theory to particular examples of two- and three-dimensional geometries of periodic networks, we calculate the effective permeability tensor associated with blood flow in these vascular networks. We demonstrate how the statistical distribution of vessel lengths and diameters, together with the average haematocrit level, affects the statistical properties of the macroscopic permeability tensor. These data can be used to simulate blood flow and haematocrit transport at the macroscale.
[ { "created": "Tue, 16 Jan 2024 21:51:22 GMT", "version": "v1" } ]
2024-01-23
[ [ "Ben-Ami", "Y.", "" ], [ "Wood", "B. D.", "" ], [ "Pitt-Francis", "J. M.", "" ], [ "Maini", "P. K.", "" ], [ "Byrne", "H. M.", "" ] ]
In this work we develop a homogenisation methodology to upscale mathematical descriptions of microcirculatory blood flow from the microscale (where individual vessels are resolved) to the macroscopic (or tissue) scale. Due to the assumed two-phase nature of blood and specific features of red blood cells (RBCs), mathematical models for blood flow in the microcirculation are highly nonlinear, coupling the flow and RBC concentrations (haematocrit). In contrast to previous works which accomplished blood-flow homogenisation by assuming that the haematocrit level remains constant, here we allow for spatial heterogeneity in the haematocrit concentration and thus begin with a nonlinear microscale model. We simplify the analysis by considering the limit of small haematocrit heterogeneity, which prevails when variations in haematocrit concentration between neighbouring vessels are small. Homogenisation results in a system of coupled, nonlinear partial differential equations describing the flow and haematocrit transport at the macroscale, in which a nonlinear Darcy-type model relates the flow and pressure gradient via a haematocrit-dependent permeability tensor. The analysis further shows that haematocrit transport at the macroscale is governed by a purely advective equation. Applying the theory to particular examples of two- and three-dimensional geometries of periodic networks, we calculate the effective permeability tensor associated with blood flow in these vascular networks. We demonstrate how the statistical distribution of vessel lengths and diameters, together with the average haematocrit level, affects the statistical properties of the macroscopic permeability tensor. These data can be used to simulate blood flow and haematocrit transport at the macroscale.
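Schematically (our notation, not the authors' exact statement), the macroscale system described above takes the form

\[
\mathbf{u} = -\mathsf{K}(c)\,\nabla p, \qquad \nabla \cdot \mathbf{u} = 0, \qquad \frac{\partial c}{\partial t} + \mathbf{v} \cdot \nabla c = 0,
\]

where $p$ is the macroscale pressure, $\mathsf{K}(c)$ the haematocrit-dependent permeability tensor, $c$ the leading-order haematocrit, and $\mathbf{v}$ an effective advection velocity; the last equation expresses the purely advective haematocrit transport noted in the abstract.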
2107.00388
Chenglin Yu
Chenglin Yu, Dingnan Cui, Muheng Shang, Shu Zhang, Lei Guo, Junwei Han, Lei Du and Alzheimer's Disease Neuroimaging Initiative
A Multi-task Deep Feature Selection Method for Brain Imaging Genetics
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using brain imaging quantitative traits (QTs) to identify genetic risk factors is an important research topic in imaging genetics. Many efforts have been made via building linear models, e.g. linear regression (LR), to extract the association between imaging QTs and genetic factors such as single nucleotide polymorphisms (SNPs). However, to the best of our knowledge, these linear models cannot fully uncover the complicated relationship due to the loci's elusive and diverse impacts on imaging QTs. Though deep learning models can extract the nonlinear relationship, they cannot select relevant genetic factors. In this paper, we propose a novel multi-task deep feature selection (MTDFS) method for brain imaging genetics. MTDFS first adds a multi-task one-to-one layer and imposes a hybrid sparsity-inducing penalty to select relevant SNPs making significant contributions to abnormal imaging QTs. It then builds a multi-task deep neural network to model the complicated associations between imaging QTs and SNPs. MTDFS can not only extract the nonlinear relationship but also equips the deep neural network with feature selection capability. We compared MTDFS to both LR and single-task deep feature selection (DFS) methods on real neuroimaging genetic data. The experimental results showed that MTDFS performed better than both LR and DFS in terms of QT-SNP relationship identification and feature selection. In summary, MTDFS is powerful for identifying risk loci and could be a great supplement to the method library for brain imaging genetics.
[ { "created": "Thu, 1 Jul 2021 11:59:23 GMT", "version": "v1" } ]
2021-07-02
[ [ "Yu", "Chenglin", "" ], [ "Cui", "Dingnan", "" ], [ "Shang", "Muheng", "" ], [ "Zhang", "Shu", "" ], [ "Guo", "Lei", "" ], [ "Han", "Junwei", "" ], [ "Du", "Lei", "" ], [ "Initiative", "Alzheime...
Using brain imaging quantitative traits (QTs) to identify genetic risk factors is an important research topic in imaging genetics. Many efforts have been made via building linear models, e.g. linear regression (LR), to extract the association between imaging QTs and genetic factors such as single nucleotide polymorphisms (SNPs). However, to the best of our knowledge, these linear models cannot fully uncover the complicated relationship due to the loci's elusive and diverse impacts on imaging QTs. Though deep learning models can extract the nonlinear relationship, they cannot select relevant genetic factors. In this paper, we propose a novel multi-task deep feature selection (MTDFS) method for brain imaging genetics. MTDFS first adds a multi-task one-to-one layer and imposes a hybrid sparsity-inducing penalty to select relevant SNPs making significant contributions to abnormal imaging QTs. It then builds a multi-task deep neural network to model the complicated associations between imaging QTs and SNPs. MTDFS can not only extract the nonlinear relationship but also equips the deep neural network with feature selection capability. We compared MTDFS to both LR and single-task deep feature selection (DFS) methods on real neuroimaging genetic data. The experimental results showed that MTDFS performed better than both LR and DFS in terms of QT-SNP relationship identification and feature selection. In summary, MTDFS is powerful for identifying risk loci and could be a great supplement to the method library for brain imaging genetics.
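A sketch of the MTDFS ingredients as described above (layer sizes and penalty weights are assumptions; the paper's hybrid penalty is simplified here to a plain L1 term): a one-to-one feature-selection layer with one multiplicative weight per SNP, a shared trunk, and one regression head per imaging QT.

```python
import torch
import torch.nn as nn

n_snps, n_tasks = 2000, 3

class MTDFS(nn.Module):
    def __init__(self):
        super().__init__()
        self.select = nn.Parameter(torch.ones(n_snps))        # one-to-one layer
        self.trunk = nn.Sequential(nn.Linear(n_snps, 128), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(128, 1) for _ in range(n_tasks)])

    def forward(self, x):
        h = self.trunk(x * self.select)                       # elementwise gating
        return torch.cat([head(h) for head in self.heads], dim=1)

model = MTDFS()
x = torch.randn(64, n_snps)                                   # stand-in genotypes
y = torch.randn(64, n_tasks)                                  # stand-in imaging QTs
loss = nn.functional.mse_loss(model(x), y) \
     + 1e-3 * model.select.abs().sum()                        # sparsity penalty
loss.backward()
```

After training, SNPs whose gate weights remain far from zero are the selected features.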
1511.00235
Ido Kanter
Amir Goldental, Roni Vardi, Shira Sardi, Pinhas Sabo and Ido Kanter
Broadband Macroscopic Cortical Oscillations Emerge from Intrinsic Neuronal Response Failures
21 pages, 5 figures
Front. Neural Circuits 9:65 (2015)
10.3389/fncir.2015.00065
null
q-bio.NC cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Broadband spontaneous macroscopic neural oscillations are rhythmic patterns of cortical firing that have been extensively examined over the last century; however, their origin is still controversial. In this work we show how macroscopic oscillations emerge in solely excitatory random networks and without topological constraints. We experimentally and theoretically show that these oscillations stem from a counterintuitive underlying mechanism: intrinsic stochastic neuronal response failures. These neuronal response failures, which are characterized by short-term memory, lead to cooperation among neurons, resulting in sub- or several-Hertz macroscopic oscillations which coexist with high-frequency gamma oscillations. A quantitative interplay between the statistical network properties and the emerging oscillations is supported by simulations of large networks based on single-neuron in-vitro experiments and a Langevin equation describing the network dynamics. The results call for the examination of these oscillations in the presence of inhibition and external drives.
[ { "created": "Sun, 1 Nov 2015 11:46:33 GMT", "version": "v1" } ]
2015-11-03
[ [ "Goldental", "Amir", "" ], [ "Vardi", "Roni", "" ], [ "Sardi", "Shira", "" ], [ "Sabo", "Pinhas", "" ], [ "Kanter", "Ido", "" ] ]
Broadband spontaneous macroscopic neural oscillations are rhythmic patterns of cortical firing that have been extensively examined over the last century; however, their origin is still controversial. In this work we show how macroscopic oscillations emerge in solely excitatory random networks and without topological constraints. We experimentally and theoretically show that these oscillations stem from a counterintuitive underlying mechanism: intrinsic stochastic neuronal response failures. These neuronal response failures, which are characterized by short-term memory, lead to cooperation among neurons, resulting in sub- or several-Hertz macroscopic oscillations which coexist with high-frequency gamma oscillations. A quantitative interplay between the statistical network properties and the emerging oscillations is supported by simulations of large networks based on single-neuron in-vitro experiments and a Langevin equation describing the network dynamics. The results call for the examination of these oscillations in the presence of inhibition and external drives.
2403.16382
Paul Jaffe
Paul I. Jaffe, Gustavo X. Santiago-Reyes, Robert J. Schafer, Patrick G. Bissett, Russell A. Poldrack
An image-computable model of speeded decision-making
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Evidence accumulation models (EAMs) are the dominant framework for modeling response time (RT) data from speeded decision-making tasks. While providing a good quantitative description of RT data in terms of abstract perceptual representations, EAMs do not explain how the visual system extracts these representations in the first place. To address this limitation, we introduce the visual accumulator model (VAM), in which convolutional neural network models of visual processing and traditional EAMs are jointly fitted to trial-level RTs and raw (pixel-space) visual stimuli from individual subjects. Models fitted to large-scale cognitive training data from a stylized flanker task captured individual differences in congruency effects, RTs, and accuracy. We find evidence that the selection of task-relevant information occurs through the orthogonalization of relevant and irrelevant representations, demonstrating how our framework can be used to relate visual representations to behavioral outputs. Together, our work provides a probabilistic framework for both constraining neural network models of vision with behavioral data and studying how the visual system extracts representations that guide decisions.
[ { "created": "Mon, 25 Mar 2024 03:00:33 GMT", "version": "v1" } ]
2024-03-26
[ [ "Jaffe", "Paul I.", "" ], [ "Santiago-Reyes", "Gustavo X.", "" ], [ "Schafer", "Robert J.", "" ], [ "Bissett", "Patrick G.", "" ], [ "Poldrack", "Russell A.", "" ] ]
Evidence accumulation models (EAMs) are the dominant framework for modeling response time (RT) data from speeded decision-making tasks. While providing a good quantitative description of RT data in terms of abstract perceptual representations, EAMs do not explain how the visual system extracts these representations in the first place. To address this limitation, we introduce the visual accumulator model (VAM), in which convolutional neural network models of visual processing and traditional EAMs are jointly fitted to trial-level RTs and raw (pixel-space) visual stimuli from individual subjects. Models fitted to large-scale cognitive training data from a stylized flanker task captured individual differences in congruency effects, RTs, and accuracy. We find evidence that the selection of task-relevant information occurs through the orthogonalization of relevant and irrelevant representations, demonstrating how our framework can be used to relate visual representations to behavioral outputs. Together, our work provides a probabilistic framework for both constraining neural network models of vision with behavioral data and studying how the visual system extracts representations that guide decisions.
1209.5588
Paurush Praveen
Paurush Praveen, Erfan Younesi and Martin Hofmann-Apitius
Modeling inter-species molecular cross-talks in three host-parasite systems by expansion of their sparse information space
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The emergence of cross-species interactions at the protein level is part of the molecular mechanisms that lead to parasitic diseases. Comprehensive modelling can capture such interactions and could be useful to understand their pathophysiology and assist in identifying novel drug targets. Using a combination of databases, text mining and predictive methods, we expanded the sparse information space of protein-protein interactions in three parasitic diseases, namely malaria, sleeping sickness and cattle East Coast fever. These network models revealed significant similarities in the molecular mechanisms underlying host invasion, immunomodulation and energy metabolism. The models also suggested new possible pathways in the inter-species protein interaction maps. Enrichment of these maps with drug-target information revealed a large drug space to be explored and led to the proposal of two new targets each for malaria and sleeping sickness.
[ { "created": "Tue, 25 Sep 2012 12:17:41 GMT", "version": "v1" } ]
2015-03-13
[ [ "space", "Modeling inter-species molecular cross-talks in three host- parasite systems by expansion of their sparse information", "" ] ]
The emergence of cross-species interactions at the protein level is part of the molecular mechanisms that lead to parasitic diseases. Comprehensive modelling can capture such interactions and could be useful to understand their pathophysiology and assist in identifying novel drug targets. Using a combination of databases, text mining and predictive methods, we expanded the sparse information space of protein-protein interactions in three parasitic diseases, namely malaria, sleeping sickness and cattle East Coast fever. These network models revealed significant similarities in the molecular mechanisms underlying host invasion, immunomodulation and energy metabolism. The models also suggested new possible pathways in the inter-species protein interaction maps. Enrichment of these maps with drug-target information revealed a large drug space to be explored and led to the proposal of two new targets each for malaria and sleeping sickness.
0803.0438
Baruch Meerson
Michael Assaf, Alex Kamenev, Baruch Meerson
On population extinction risk in the aftermath of a catastrophic event
11 pages, 11 figures
Phys. Rev. E 79, 011127 (2009)
10.1103/PhysRevE.79.011127
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate how a catastrophic event (modeled as a temporary fall of the reproduction rate) increases the extinction probability of an isolated self-regulated stochastic population. Using a variant of the Verhulst logistic model as an example, we combine the probability generating function technique with an eikonal approximation to evaluate the exponentially large increase in the extinction probability caused by the catastrophe. This quantity is given by the eikonal action computed over "the optimal path" (instanton) of an effective classical Hamiltonian system with a time-dependent Hamiltonian. For a general catastrophe the eikonal equations can be solved numerically. For simple models of catastrophic events analytic solutions can be obtained. One such solution becomes quite simple close to the bifurcation point of the Verhulst model. The eikonal results for the increase in the extinction probability caused by a catastrophe agree well with numerical solutions of the master equation.
[ { "created": "Tue, 4 Mar 2008 13:48:33 GMT", "version": "v1" }, { "created": "Mon, 22 Sep 2008 03:52:25 GMT", "version": "v2" }, { "created": "Thu, 18 Dec 2008 13:13:54 GMT", "version": "v3" } ]
2014-08-06
[ [ "Assaf", "Michael", "" ], [ "Kamenev", "Alex", "" ], [ "Meerson", "Baruch", "" ] ]
We investigate how a catastrophic event (modeled as a temporary fall of the reproduction rate) increases the extinction probability of an isolated self-regulated stochastic population. Using a variant of the Verhulst logistic model as an example, we combine the probability generating function technique with an eikonal approximation to evaluate the exponentially large increase in the extinction probability caused by the catastrophe. This quantity is given by the eikonal action computed over "the optimal path" (instanton) of an effective classical Hamiltonian system with a time-dependent Hamiltonian. For a general catastrophe the eikonal equations can be solved numerically. For simple models of catastrophic events analytic solutions can be obtained. One such solution becomes quite simple close to the bifurcation point of the Verhulst model. The eikonal results for the increase in the extinction probability caused by a catastrophe agree well with numerical solutions of the master equation.
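For readers unfamiliar with the eikonal machinery invoked here, the standard WKB construction for a generic birth-death process (our notation; the Verhulst-specific rates are not reproduced) runs as follows: writing $P_n(t) \propto \exp[-N s(q,t)]$ with $q = n/N$ turns the master equation, to leading order in $1/N$, into a Hamilton-Jacobi equation $\partial_t s + H(q, \partial_q s) = 0$ with

\[
H(q,p) = w_+(q)\,(e^{p} - 1) + w_-(q)\,(e^{-p} - 1),
\]

where $w_\pm$ are the scaled birth and death rates. The extinction probability is controlled by $e^{-N S}$, with $S$ the action along the optimal (instanton) trajectory; a catastrophe makes $w_\pm$, and hence $H$, explicitly time-dependent, which is the situation treated in this paper.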
1512.06038
Stephen Hartley
Stephen W. Hartley, James C. Mullikin
Detection and Visualization of Differential Splicing in RNA-Seq Data with JunctionSeq
null
null
10.1093/nar/gkw501
null
q-bio.GN
http://creativecommons.org/publicdomain/zero/1.0/
Although RNA-Seq data provide unprecedented isoform-level expression information, detection of alternative isoform regulation (AIR) remains difficult, particularly when working with an incomplete transcript annotation. We introduce JunctionSeq, a new method that builds on the statistical techniques used by the well-established DEXSeq package to detect differential usage of both exonic regions and splice junctions. In particular, JunctionSeq is capable of detecting differentials in novel splice junctions without the need for an additional isoform assembly step, greatly improving performance when the available transcript annotation is flawed or incomplete. JunctionSeq also provides a powerful and streamlined visualization toolset that allows bioinformaticians to quickly and intuitively interpret their results. We tested our method on publicly available data from several experiments performed on the rat pineal gland and Toxoplasma gondii, successfully detecting known and previously validated AIR genes in 19 out of 19 gene-level hypothesis tests. Due to its ability to query novel splice sites, JunctionSeq is still able to detect these differentials even when all alternative isoforms for these genes were not included in the transcript annotation. JunctionSeq thus provides a powerful method for detecting alternative isoform regulation even with low-quality annotations. An implementation of JunctionSeq is available as an R/Bioconductor package.
[ { "created": "Fri, 18 Dec 2015 17:06:37 GMT", "version": "v1" }, { "created": "Mon, 8 Feb 2016 19:36:08 GMT", "version": "v2" } ]
2016-06-03
[ [ "Hartley", "Stephen W.", "" ], [ "Mullikin", "James C.", "" ] ]
Although RNA-Seq data provide unprecedented isoform-level expression information, detection of alternative isoform regulation (AIR) remains difficult, particularly when working with an incomplete transcript annotation. We introduce JunctionSeq, a new method that builds on the statistical techniques used by the well-established DEXSeq package to detect differential usage of both exonic regions and splice junctions. In particular, JunctionSeq is capable of detecting differentials in novel splice junctions without the need for an additional isoform assembly step, greatly improving performance when the available transcript annotation is flawed or incomplete. JunctionSeq also provides a powerful and streamlined visualization toolset that allows bioinformaticians to quickly and intuitively interpret their results. We tested our method on publicly available data from several experiments performed on the rat pineal gland and Toxoplasma gondii, successfully detecting known and previously validated AIR genes in 19 out of 19 gene-level hypothesis tests. Due to its ability to query novel splice sites, JunctionSeq is still able to detect these differentials even when all alternative isoforms for these genes were not included in the transcript annotation. JunctionSeq thus provides a powerful method for detecting alternative isoform regulation even with low-quality annotations. An implementation of JunctionSeq is available as an R/Bioconductor package.
0711.1938
Erich Schmid
Erich W. Schmid
Comparison of two models of electric neuro-stimulation and consequences for the design of retinal prostheses
10 pages, 4 figures
null
null
null
q-bio.NC
null
Two simple mathematical models of electric neuro-stimulation are derived and discussed. It is found that the common injected-charge model is less realistic than a model in which a latency period, following a short electric pulse, plays a role as important as the electric pulse itself. A stimulation signal is proposed that takes advantage of these findings and calls for experimental testing.
[ { "created": "Tue, 13 Nov 2007 09:32:33 GMT", "version": "v1" } ]
2007-11-14
[ [ "Schmid", "Erich W.", "" ] ]
Two simple mathematical models of electric neuro-stimulation are derived and discussed. It is found that the common injected-charge model is less realistic than a model in which a latency period, following a short electric pulse, plays a role as important as the electric pulse itself. A stimulation signal is proposed that takes advantage of these findings and calls for experimental testing.
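As a hedged sketch of the contrast drawn here (our notation; the paper's exact formulation is not reproduced), the injected-charge picture assumes a pulse of current $I(t)$ and duration $t_p$ excites the neuron when the delivered charge reaches a threshold,

\[
Q = \int_0^{t_p} I(t)\,\mathrm{d}t \;\geq\; Q_{\mathrm{th}},
\]

whereas the alternative model assigns equal importance to a latency period $\tau$ following the short pulse, during which the response develops independently of further charge injection.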
1605.07247
Khalique Newaz
Fazle E. Faisal, Julie L. Chaney, Khalique Newaz, Jun Li, Scott J. Emrich, Patricia L. Clark, Tijana Milenkovic
Network approach integrates 3D structural and sequence data to improve protein structural comparison
null
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Initial protein structural comparisons were sequence-based. Since amino acids that are distant in the sequence can be close in the 3-dimensional (3D) structure, 3D contact approaches can complement sequence approaches. Traditional 3D contact approaches study 3D structures directly. Instead, 3D structures can be modeled as protein structure networks (PSNs). Then, network approaches can compare proteins by comparing their PSNs. Network approaches may improve upon traditional 3D contact approaches. We cannot use existing PSN approaches to test this, because: 1) They rely on naive measures of network topology. 2) They are not robust to PSN size. They cannot integrate 3) multiple PSN measures or 4) PSN data with sequence data, although this could help because the different data types capture complementary biological knowledge. We address these limitations by: 1) exploiting well-established graphlet measures via a new network approach, 2) introducing normalized graphlet measures to remove the bias of PSN size, 3) allowing for integrating multiple PSN measures, and 4) using ordered graphlets to combine the complementary PSN data and sequence data. Our approach compares both synthetic networks and real-world PSNs more accurately and faster than existing network, 3D contact, or sequence approaches. It also finds PSN patterns that may be biochemically interesting.
[ { "created": "Tue, 24 May 2016 00:49:26 GMT", "version": "v1" }, { "created": "Mon, 27 Feb 2017 21:38:16 GMT", "version": "v2" } ]
2017-03-01
[ [ "Faisal", "Fazle E.", "" ], [ "Chaney", "Julie L.", "" ], [ "Newaz", "Khalique", "" ], [ "Li", "Jun", "" ], [ "Emrich", "Scott J.", "" ], [ "Clark", "Patricia L.", "" ], [ "Milenkovic", "Tijana", "" ] ]
Initial protein structural comparisons were sequence-based. Since amino acids that are distant in the sequence can be close in the 3-dimensional (3D) structure, 3D contact approaches can complement sequence approaches. Traditional 3D contact approaches study 3D structures directly. Instead, 3D structures can be modeled as protein structure networks (PSNs). Then, network approaches can compare proteins by comparing their PSNs. Network approaches may improve upon traditional 3D contact approaches. We cannot use existing PSN approaches to test this, because: 1) They rely on naive measures of network topology. 2) They are not robust to PSN size. They cannot integrate 3) multiple PSN measures or 4) PSN data with sequence data, although this could help because the different data types capture complementary biological knowledge. We address these limitations by: 1) exploiting well-established graphlet measures via a new network approach, 2) introducing normalized graphlet measures to remove the bias of PSN size, 3) allowing for integrating multiple PSN measures, and 4) using ordered graphlets to combine the complementary PSN data and sequence data. Our approach compares both synthetic networks and real-world PSNs more accurately and faster than existing network, 3D contact, or sequence approaches. It also finds PSN patterns that may be biochemically interesting.
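A much-simplified sketch of the graphlet idea (illustrative only; the paper uses a richer family of graphlets and ordered variants, not just the two counted here): count small subgraphs in each PSN and normalize by network size so that networks of different sizes become comparable.

```python
import networkx as nx
import numpy as np

def normalized_counts(G):
    """Edge and triangle counts, normalized by the number of possible placements."""
    n = G.number_of_nodes()
    edges = G.number_of_edges()
    triangles = sum(nx.triangles(G).values()) / 3
    return np.array([edges / (n * (n - 1) / 2),
                     triangles / (n * (n - 1) * (n - 2) / 6)])

G1 = nx.erdos_renyi_graph(100, 0.05, seed=1)   # stand-ins for two PSNs
G2 = nx.erdos_renyi_graph(150, 0.05, seed=2)
print("feature distance:",
      np.linalg.norm(normalized_counts(G1) - normalized_counts(G2)))
```

The normalization step is what addresses limitation 2) above: without it, the larger PSN would dominate any count-based comparison.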
1808.05581
Michal Komorowski
Tomasz Jetka, Tomasz Winarski, Karol Nienaltowski, Slawomir Blonski, Michal Komorowski
Information-theoretic analysis of multivariate single-cell signaling responses using SLEMI
null
null
10.1371/journal.pcbi.1007132
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mathematical methods of information theory constitute essential tools to describe how stimuli are encoded in activities of signaling effectors. Exploring the information-theoretic perspective, however, remains conceptually, experimentally and computationally challenging. Specifically, existing computational tools enable efficient analysis of relatively simple systems, usually with one input and output only. Moreover, their robust and readily applicable implementations are missing. Here, we propose a novel algorithm to analyze signaling data within the framework of information theory. Our approach enables robust as well as statistically and computationally efficient analysis of signaling systems with high-dimensional outputs and a large number of input values. Analysis of the NF-kB single-cell signaling responses to TNF-a uniquely reveals that the NF-kB signaling dynamics improves discrimination of high concentrations of TNF-a with a modest impact on discrimination of low concentrations. Our readily applicable R package, SLEMI - statistical learning based estimation of mutual information - allows the approach to be used by computational biologists with only elementary knowledge of information theory.
[ { "created": "Thu, 16 Aug 2018 16:53:59 GMT", "version": "v1" } ]
2019-09-11
[ [ "Jetka", "Tomasz", "" ], [ "Winarski", "Tomasz", "" ], [ "Nienaltowski", "Karol", "" ], [ "Blonski", "Slawomir", "" ], [ "Komorowski", "Michal", "" ] ]
Mathematical methods of information theory constitute essential tools to describe how stimuli are encoded in activities of signaling effectors. Exploring the information-theoretic perspective, however, remains conceptually, experimentally and computationally challenging. Specifically, existing computational tools enable efficient analysis of relatively simple systems, usually with one input and output only. Moreover, their robust and readily applicable implementations are missing. Here, we propose a novel algorithm to analyze signaling data within the framework of information theory. Our approach enables robust as well as statistically and computationally efficient analysis of signaling systems with high-dimensional outputs and a large number of input values. Analysis of the NF-kB single-cell signaling responses to TNF-a uniquely reveals that the NF-kB signaling dynamics improves discrimination of high concentrations of TNF-a with a modest impact on discrimination of low concentrations. Our readily applicable R package, SLEMI - statistical learning based estimation of mutual information - allows the approach to be used by computational biologists with only elementary knowledge of information theory.
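An illustrative Python analogue of the statistical-learning idea behind SLEMI (the actual package is in R; this is not its API): estimate the information between a discrete input and a multivariate output by training a classifier and computing the mutual information of the resulting confusion matrix.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
levels, n_per, dim = 4, 500, 5                 # input values, samples, output dim
X = np.vstack([rng.normal(loc=i, scale=1.5, size=(n_per, dim))
               for i in range(levels)])        # stand-in signaling responses
y = np.repeat(np.arange(levels), n_per)

clf = LogisticRegression(max_iter=1000).fit(X, y)
C = confusion_matrix(y, clf.predict(X)).astype(float)
P = C / C.sum()                                # joint input/decoded-output distribution
px, py = P.sum(1, keepdims=True), P.sum(0, keepdims=True)
mask = P > 0
mi_bits = (P[mask] * np.log2(P[mask] / (px @ py)[mask])).sum()
print("MI estimate: %.2f bits (max %.2f)" % (mi_bits, np.log2(levels)))
```

In practice the confusion matrix should be estimated on held-out data; the in-sample version above only illustrates the pipeline.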
1807.04235
Gillian Grindstaff
Gillian Grindstaff and Megan Owen
Geometric comparison of phylogenetic trees with different leaf sets
27 pages, 8 figures
null
null
null
q-bio.PE cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The metric space of phylogenetic trees defined by Billera, Holmes, and Vogtmann, which we refer to as BHV space, provides a natural geometric setting for describing collections of trees on the same set of taxa. However, it is sometimes necessary to analyze collections of trees on non-identical taxa sets (i.e., with different numbers of leaves), and in this context it is not evident how to apply BHV space. Davidson et al. recently approached this problem by describing a combinatorial algorithm extending tree topologies to regions in higher dimensional tree spaces, so that one can quickly compute which topologies contain a given tree as partial data. In this paper, we refine and adapt their algorithm to work for metric trees to give a full characterization of the subspace of extensions of a subtree. We describe how to apply our algorithm to define and search a space of possible supertrees and, for a collection of tree fragments with different leaf sets, to measure their compatibility.
[ { "created": "Wed, 11 Jul 2018 16:33:47 GMT", "version": "v1" } ]
2018-07-12
[ [ "Grindstaff", "Gillian", "" ], [ "Owen", "Megan", "" ] ]
The metric space of phylogenetic trees defined by Billera, Holmes, and Vogtmann, which we refer to as BHV space, provides a natural geometric setting for describing collections of trees on the same set of taxa. However, it is sometimes necessary to analyze collections of trees on non-identical taxa sets (i.e., with different numbers of leaves), and in this context it is not evident how to apply BHV space. Davidson et al. recently approached this problem by describing a combinatorial algorithm extending tree topologies to regions in higher dimensional tree spaces, so that one can quickly compute which topologies contain a given tree as partial data. In this paper, we refine and adapt their algorithm to work for metric trees to give a full characterization of the subspace of extensions of a subtree. We describe how to apply our algorithm to define and search a space of possible supertrees and, for a collection of tree fragments with different leaf sets, to measure their compatibility.
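A small sketch of the standard split-compatibility test that underlies such comparisons (not the authors' full extension algorithm): two splits $A|B$ and $C|D$ of a leaf set are compatible iff at least one of the four pairwise intersections is empty.

```python
def compatible(split1, split2, leaves):
    """Standard compatibility test for two bipartitions of the same leaf set."""
    A, C = set(split1), set(split2)
    B, D = leaves - A, leaves - C
    return any(not (X & Y) for X in (A, B) for Y in (C, D))

leaves = {"a", "b", "c", "d", "e"}
print(compatible({"a", "b"}, {"a", "b", "c"}, leaves))  # True: nested splits
print(compatible({"a", "b"}, {"b", "c"}, leaves))       # False: conflicting splits
```

Roughly speaking, extending a tree on a leaf subset asks which full-leaf-set topologies restrict to the observed tree; the metric version studied here adds branch lengths on top of this combinatorial core.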
2005.03961
Ramin Hasibi
Ramin Hasibi, Tom Michoel
A Graph Feature Auto-Encoder for the Prediction of Unobserved Node Features on Biological Networks
null
null
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Molecular interaction networks summarize complex biological processes as graphs, whose structure is informative of biological function at multiple scales. Simultaneously, omics technologies measure the variation or activity of genes, proteins, or metabolites across individuals or experimental conditions. Integrating the complementary viewpoints of biological networks and omics data is an important task in bioinformatics, but existing methods treat networks as discrete structures, which are intrinsically difficult to integrate with continuous node features or activity measures. Graph neural networks map graph nodes into a low-dimensional vector space representation, and can be trained to preserve both the local graph structure and the similarity between node features. Results: We studied the representation of transcriptional, protein-protein and genetic interaction networks in E. coli and mouse using graph neural networks. We found that such representations explain a large proportion of variation in gene expression data, and that using gene expression data as node features improves the reconstruction of the graph from the embedding. We further proposed a new end-to-end graph feature auto-encoder which is trained on the feature reconstruction task, and showed that it performs better at predicting unobserved node features than auto-encoders that are trained on the graph reconstruction task before learning to predict node features. When applied to the problem of imputing missing data in single-cell RNAseq data, our graph feature auto-encoder outperformed a state-of-the-art imputation method that does not use protein interaction information, showing the benefit of integrating biological networks and omics data using graph representation learning.
[ { "created": "Fri, 8 May 2020 11:23:04 GMT", "version": "v1" }, { "created": "Wed, 23 Dec 2020 09:18:25 GMT", "version": "v2" } ]
2020-12-24
[ [ "Hasibi", "Ramin", "" ], [ "Michoel", "Tom", "" ] ]
Motivation: Molecular interaction networks summarize complex biological processes as graphs, whose structure is informative of biological function at multiple scales. Simultaneously, omics technologies measure the variation or activity of genes, proteins, or metabolites across individuals or experimental conditions. Integrating the complementary viewpoints of biological networks and omics data is an important task in bioinformatics, but existing methods treat networks as discrete structures, which are intrinsically difficult to integrate with continuous node features or activity measures. Graph neural networks map graph nodes into a low-dimensional vector space representation, and can be trained to preserve both the local graph structure and the similarity between node features. Results: We studied the representation of transcriptional, protein-protein and genetic interaction networks in E. coli and mouse using graph neural networks. We found that such representations explain a large proportion of variation in gene expression data, and that using gene expression data as node features improves the reconstruction of the graph from the embedding. We further proposed a new end-to-end graph feature auto-encoder which is trained on the feature reconstruction task, and showed that it performs better at predicting unobserved node features than auto-encoders that are trained on the graph reconstruction task before learning to predict node features. When applied to the problem of imputing missing data in single-cell RNAseq data, our graph feature auto-encoder outperformed a state-of-the-art imputation method that does not use protein interaction information, showing the benefit of integrating biological networks and omics data using graph representation learning.
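A minimal sketch of a graph feature auto-encoder (the architecture and hyperparameters are assumptions, not the authors' model): propagate node features over a normalized adjacency matrix, compress to an embedding, and train on reconstruction of the node features themselves.

```python
import torch
import torch.nn as nn

n, d, h = 200, 32, 16                               # nodes, feature dim, embedding dim
A = (torch.rand(n, n) < 0.05).float()
A = ((A + A.t() + torch.eye(n)) > 0).float()        # symmetrize, add self-loops
deg = A.sum(1)
A_hat = A / torch.sqrt(deg[:, None] * deg[None, :]) # symmetric normalization

X = torch.randn(n, d)                               # stand-in expression features
enc, dec = nn.Linear(d, h), nn.Linear(h, d)
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-2)

for step in range(200):
    Z = torch.relu(A_hat @ enc(X))                    # one graph-convolution layer
    loss = nn.functional.mse_loss(dec(A_hat @ Z), X)  # feature reconstruction loss
    opt.zero_grad(); loss.backward(); opt.step()
```

Training on feature reconstruction, rather than on reconstructing the adjacency matrix, is the distinction the abstract draws between this model and graph auto-encoders trained on the graph reconstruction task.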
1805.01098
Matthieu Vignes
Olivia Angelin-Bonnet, Patrick J. Biggs and Matthieu Vignes
Gene regulatory networks: a primer in biological processes and statistical modelling
This chapter will appear in the forthcoming book "Gene Regulatory Networks: Methods and Protocols", published by Springer Nature
null
null
null
q-bio.QM q-bio.MN stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modelling gene regulatory networks not only requires a thorough understanding of the biological system depicted but also the ability to accurately represent this system from a mathematical perspective. Throughout this chapter, we aim to familiarise the reader with the biological processes and molecular factors at play in the process of gene expression regulation. We first describe the different interactions controlling each step of the expression process, from transcription to mRNA and protein decay. In the second section, we provide statistical tools to accurately represent this biological complexity in the form of mathematical models. Amongst other considerations, we discuss the topological properties of biological networks, the application of deterministic and stochastic frameworks and the quantitative modelling of regulation. We particularly focus on the use of such models for the simulation of expression data that can serve as a benchmark for the testing of network inference algorithms.
[ { "created": "Thu, 3 May 2018 03:31:16 GMT", "version": "v1" } ]
2018-05-04
[ [ "Angelin-Bonnet", "Olivia", "" ], [ "Biggs", "Patrick J.", "" ], [ "Vignes", "Matthieu", "" ] ]
Modelling gene regulatory networks not only requires a thorough understanding of the biological system depicted but also the ability to accurately represent this system from a mathematical perspective. Throughout this chapter, we aim to familiarise the reader with the biological processes and molecular factors at play in the process of gene expression regulation. We first describe the different interactions controlling each step of the expression process, from transcription to mRNA and protein decay. In the second section, we provide statistical tools to accurately represent this biological complexity in the form of mathematical models. Amongst other considerations, we discuss the topological properties of biological networks, the application of deterministic and stochastic frameworks and the quantitative modelling of regulation. We particularly focus on the use of such models for the simulation of expression data that can serve as a benchmark for the testing of network inference algorithms.
1501.01477
Massimo Rizzi
Massimo Rizzi
The emergence of power-law distributions of inter-spike intervals characterizes status epilepticus induced by pilocarpine administration in rats
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective. To ascertain the existence of power-law distributions of inter-spike intervals (ISI) occurring during the progression of status epilepticus (SE), so that the emergence of critical states could be reasonably hypothesized as being part of the intrinsic nature of the SE. Methods. Status epilepticus was induced by pilocarpine administration in post-natal 21-day rats (n=8). For each animal, 24 hours of EEG from the onset of the SE were analyzed according to the analytical procedure suggested by Clauset et al. (2009), which combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov statistics and likelihood ratios. The analytical procedure was implemented by the freely available R software package "poweRlaw". Calculation time was considerably shortened by the exploitation of High-Throughput Computing technology, a.k.a. Grid Computing technology. Results. The progression of the SE is characterized by the emergence of power-law correlations of ISI whose likelihood of occurrence increases as more time elapses from the onset of the SE. Log-normal distributions of ISI are, however, also widely represented. Additionally, undetermined distributions of ISI are represented as well, although confined within a restricted temporal window. The final stage of SE appears dominated only by power-law and log-normal distributions of ISI. Significance. The emergence of power-law correlations of ISI concretely supports the concept of the occurrence of critical states during the progression of SE. It is reasonably speculated, as a working hypothesis, that the occurrence of power-law distributions of ISI within the early stages of the SE could be a hallmark of the establishment of the route to epileptogenesis.
[ { "created": "Wed, 7 Jan 2015 13:01:06 GMT", "version": "v1" } ]
2015-01-08
[ [ "Rizzi", "Massimo", "" ] ]
Objective. To ascertain the existence of power-law distributions of inter-spike intervals (ISI) occurring during the progression of status epilepticus (SE), so that the emergence of critical states could be reasonably hypothesized as being part of the intrinsic nature of the SE. Methods. Status epilepticus was induced by pilocarpine administration in post-natal 21-day rats (n=8). For each animal, 24 hours of EEG from the onset of the SE were analyzed according to the analytical procedure suggested by Clauset et al. (2009), which combines maximum-likelihood fitting methods with goodness-of-fit tests based on the Kolmogorov-Smirnov statistics and likelihood ratios. The analytical procedure was implemented by the freely available R software package "poweRlaw". Calculation time was considerably shortened by the exploitation of High-Throughput Computing technology, a.k.a. Grid Computing technology. Results. The progression of the SE is characterized by the emergence of power-law correlations of ISI whose likelihood of occurrence increases as more time elapses from the onset of the SE. Log-normal distributions of ISI are, however, also widely represented. Additionally, undetermined distributions of ISI are represented as well, although confined within a restricted temporal window. The final stage of SE appears dominated only by power-law and log-normal distributions of ISI. Significance. The emergence of power-law correlations of ISI concretely supports the concept of the occurrence of critical states during the progression of SE. It is reasonably speculated, as a working hypothesis, that the occurrence of power-law distributions of ISI within the early stages of the SE could be a hallmark of the establishment of the route to epileptogenesis.
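The analysis above uses the R package "poweRlaw"; the Python package "powerlaw" (Alstott et al.) implements the same Clauset-style procedure and can serve as an illustration of the fitting and model-comparison steps (the synthetic data below are a stand-in, not the rat EEG recordings).

```python
import numpy as np
import powerlaw

rng = np.random.default_rng(0)
isi = rng.pareto(1.5, size=5000) + 0.01     # stand-in inter-spike intervals

fit = powerlaw.Fit(isi)                     # ML fit with automatic xmin selection
print("alpha = %.2f, xmin = %.3f" % (fit.power_law.alpha, fit.power_law.xmin))

# likelihood-ratio comparison against the log-normal alternative
R, p = fit.distribution_compare("power_law", "lognormal")
print("LLR = %.2f, p = %.3f" % (R, p))      # R > 0 favours the power law
```

A per-animal analysis would run this fit on successive EEG windows and track how often the power law is favoured as the SE progresses.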
1911.06692
Margaret Cheung
Andrei G. Gasic and Margaret S. Cheung
A Tale of Two Desolvation Potentials: An Investigation of Protein Behavior Under High Hydrostatic Pressure
6 figures
Journal of Physical Chemistry B 2020
10.1021/acs.jpcb.9b10734
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hydrostatic pressure is a common perturbation used to probe the conformations of proteins. Two forms of pressure-dependent potentials of mean force (PMFs), derived from hydrophobic molecules, are commonly available for coarse-grained molecular simulations of protein folding and unfolding under hydrostatic pressure. Although both PMFs include a desolvation barrier separating the well of a direct contact and the well of a solvent-mediated contact, how these features vary with hydrostatic pressure is still debated. There is a need for a systematic comparison of these two PMFs on a protein. We investigated the two different pressure dependencies of the desolvation potential in a structure-based protein model using coarse-grained molecular simulations. We compared them to the known behavior of a real protein based on experimental evidence. We showed that the protein's folding transition curve on the pressure-temperature phase diagram depends on the relationship between the potential well minima and pressure. For a protein that reduces its total volume under pressure, it is essential that the PMF render the direct-contact well less stable than the water-mediated-contact well at high pressure. We also comment on the practicality and importance of structure-based minimalist models for understanding the phenomenological behavior of a protein over a wide region of phase space.
[ { "created": "Fri, 15 Nov 2019 15:22:02 GMT", "version": "v1" } ]
2020-11-17
[ [ "Gasic", "Andrei G.", "" ], [ "Cheung", "Margaret S.", "" ] ]
Hydrostatic pressure is a common perturbation used to probe the conformations of proteins. Two forms of pressure-dependent potentials of mean force (PMFs), derived from hydrophobic molecules, are commonly available for coarse-grained molecular simulations of protein folding and unfolding under hydrostatic pressure. Although both PMFs include a desolvation barrier separating the well of a direct contact and the well of a solvent-mediated contact, how these features vary with hydrostatic pressure is still debated. There is a need for a systematic comparison of these two PMFs on a protein. We investigated the two different pressure dependencies of the desolvation potential in a structure-based protein model using coarse-grained molecular simulations. We compared them to the known behavior of a real protein based on experimental evidence. We showed that the protein's folding transition curve on the pressure-temperature phase diagram depends on the relationship between the potential well minima and pressure. For a protein that reduces its total volume under pressure, it is essential that the PMF render the direct-contact well less stable than the water-mediated-contact well at high pressure. We also comment on the practicality and importance of structure-based minimalist models for understanding the phenomenological behavior of a protein over a wide region of phase space.
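To make the scenario concrete, here is a minimal sketch (not the authors' model) of a two-well desolvation PMF whose well depths shift linearly with pressure; the functional form and all coefficients are illustrative assumptions only:

```python
# Direct-contact well, desolvation barrier, and solvent-separated well,
# with hypothetical pressure-dependent depths.
import numpy as np

def pmf(r, P):
    """Illustrative PMF (kcal/mol) at distance r (Angstrom), pressure P (kbar)."""
    e_contact = -1.0 + 0.15 * P      # direct-contact well destabilised by pressure
    e_solvsep = -0.4 - 0.05 * P      # water-mediated well stabilised by pressure
    barrier = 0.6
    return (e_contact * np.exp(-((r - 5.0) / 0.6) ** 2)
            + barrier * np.exp(-((r - 6.5) / 0.5) ** 2)
            + e_solvsep * np.exp(-((r - 8.0) / 0.8) ** 2))

r = np.linspace(4.0, 10.0, 601)
for P in (0.0, 3.0, 6.0):
    v = pmf(r, P)
    print(f"P = {P} kbar: contact min = {v[r < 6.0].min():.2f}, "
          f"solvent-separated min = {v[r > 7.0].min():.2f}")
```

With these toy coefficients the direct-contact well is deeper at ambient pressure, while the water-mediated well becomes the more stable one at high pressure, which is the qualitative feature the abstract identifies as essential.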
q-bio/0309004
Jennifer Ross
Jennifer L. Ross and D. Kuchnir Fygenson
Mobility of Taxol in Microtubule Bundles
preprint format
Biophys. J. 2003 84: 3959-3967
10.1016/S0006-3495(03)75123-6
null
q-bio.SC physics.bio-ph
null
Mobility of taxol inside microtubules was investigated using fluorescence recovery after photobleaching (FRAP) on flow-aligned bundles. Bundles were made of microtubules with either GMPCPP or GTP at the exchangeable site on the tubulin dimer. Recovery times were sensitive to bundle thickness and packing, indicating that taxol molecules are able to move laterally through the bundle. The density of open binding sites along a microtubule was varied by controlling the concentration of taxol in solution for GMPCPP samples. With > 63% sites occupied, recovery times were independent of taxol concentration and, therefore, inversely proportional to the microscopic dissociation rate, k_{off}. It was found that 10*k_{off} (GMPCPP) ~ k_{off} (GTP), consistent with, but not fully accounting for, the difference in equilibrium constants for taxol on GMPCPP and GTP microtubules. With < 63% sites occupied, recovery times decreased as ~ [Tax]^{-1/5} for both types of microtubules. We conclude that the diffusion of taxol along the microtubule interior is hindered by rebinding events when open sites are within ~7 nm of each other.
[ { "created": "Wed, 17 Sep 2003 19:07:19 GMT", "version": "v1" } ]
2009-11-10
[ [ "Ross", "Jennifer L.", "" ], [ "Fygenson", "D. Kuchnir", "" ] ]
Mobility of taxol inside microtubules was investigated using fluorescence recovery after photobleaching (FRAP) on flow-aligned bundles. Bundles were made of microtubules with either GMPCPP or GTP at the exchangeable site on the tubulin dimer. Recovery times were sensitive to bundle thickness and packing, indicating that taxol molecules are able to move laterally through the bundle. The density of open binding sites along a microtubule was varied by controlling the concentration of taxol in solution for GMPCPP samples. With > 63% sites occupied, recovery times were independent of taxol concentration and, therefore, inversely proportional to the microscopic dissociation rate, k_{off}. It was found that 10*k_{off} (GMPCPP) ~ k_{off} (GTP), consistent with, but not fully accounting for, the difference in equilibrium constants for taxol on GMPCPP and GTP microtubules. With < 63% sites occupied, recovery times decreased as ~ [Tax]^{-1/5} for both types of microtubules. We conclude that the diffusion of taxol along the microtubule interior is hindered by rebinding events when open sites are within ~7 nm of each other.
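Recovery times of the kind reported above are conventionally extracted by fitting a single-exponential recovery model to the post-bleach fluorescence. A generic sketch (not necessarily the authors' exact analysis), with synthetic data:

```python
# Fit F(t) = F_inf - (F_inf - F_0) * exp(-t / tau) to a FRAP trace.
import numpy as np
from scipy.optimize import curve_fit

def recovery(t, f0, finf, tau):
    return finf - (finf - f0) * np.exp(-t / tau)

t = np.linspace(0, 120, 60)                       # seconds after bleach
rng = np.random.default_rng(1)
data = recovery(t, 0.2, 0.9, 25.0) + rng.normal(0, 0.02, t.size)

(f0, finf, tau), _ = curve_fit(recovery, t, data, p0=(0.1, 1.0, 10.0))
print(f"recovery time tau = {tau:.1f} s")
# 1/tau approximates k_off when recovery is reaction-limited,
# as in the high-occupancy (>63%) regime described above.
```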
1407.1792
Alex Popinga
Alex Popinga, Tim Vaughan, Tanja Stadler, Alexei Drummond
Inferring epidemiological dynamics with Bayesian coalescent inference: The merits of deterministic and stochastic models
Submitted
null
10.1534/genetics.114.172791
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimation of epidemiological and population parameters from molecular sequence data has become central to the understanding of infectious disease dynamics. Various models have been proposed to infer details of the dynamics that describe epidemic progression. These include inference approaches derived from Kingman's coalescent theory. Here, we use recently described coalescent theory for epidemic dynamics to develop stochastic and deterministic coalescent SIR tree priors. We implement these in a Bayesian phylogenetic inference framework to permit joint estimation of SIR epidemic parameters and the sample genealogy. We assess the performance of the two coalescent models and also juxtapose results obtained with BDSIR, a recently published birth-death-sampling model for epidemic inference. Comparisons are made by analyzing sets of genealogies simulated under precisely known epidemiological parameters. Additionally, we analyze influenza A (H1N1) sequence data sampled in the Canterbury region of New Zealand and HIV-1 sequence data obtained from known UK infection clusters. We show that both coalescent SIR models are effective at estimating epidemiological parameters from data with large fundamental reproductive number $R_0$ and large population size $S_0$. Furthermore, we find that the stochastic variant generally outperforms its deterministic counterpart in terms of error, bias, and highest posterior density coverage, particularly for smaller $R_0$ and $S_0$. However, each of these inference models is shown to have undesirable properties in certain circumstances, especially for epidemic outbreaks with $R_0$ close to one or with small effective susceptible populations.
[ { "created": "Mon, 7 Jul 2014 18:24:10 GMT", "version": "v1" }, { "created": "Fri, 19 Dec 2014 11:17:05 GMT", "version": "v2" } ]
2014-12-25
[ [ "Popinga", "Alex", "" ], [ "Vaughan", "Tim", "" ], [ "Stadler", "Tanja", "" ], [ "Drummond", "Alexei", "" ] ]
Estimation of epidemiological and population parameters from molecular sequence data has become central to the understanding of infectious disease dynamics. Various models have been proposed to infer details of the dynamics that describe epidemic progression. These include inference approaches derived from Kingman's coalescent theory. Here, we use recently described coalescent theory for epidemic dynamics to develop stochastic and deterministic coalescent SIR tree priors. We implement these in a Bayesian phylogenetic inference framework to permit joint estimation of SIR epidemic parameters and the sample genealogy. We assess the performance of the two coalescent models and also juxtapose results obtained with BDSIR, a recently published birth-death-sampling model for epidemic inference. Comparisons are made by analyzing sets of genealogies simulated under precisely known epidemiological parameters. Additionally, we analyze influenza A (H1N1) sequence data sampled in the Canterbury region of New Zealand and HIV-1 sequence data obtained from known UK infection clusters. We show that both coalescent SIR models are effective at estimating epidemiological parameters from data with large fundamental reproductive number $R_0$ and large population size $S_0$. Furthermore, we find that the stochastic variant generally outperforms its deterministic counterpart in terms of error, bias, and highest posterior density coverage, particularly for smaller $R_0$ and $S_0$. However, each of these inference models is shown to have undesirable properties in certain circumstances, especially for epidemic outbreaks with $R_0$ close to one or with small effective susceptible populations.
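The deterministic backbone of a coalescent SIR prior is the classical SIR system $dS/dt = -\beta S I$, $dI/dt = \beta S I - \gamma I$, $dR/dt = \gamma I$. A minimal integration sketch with illustrative rates (generic textbook model, not the BEAST implementation used in the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, S0 = 0.002, 0.3, 999.0   # illustrative rates and population size

def sir(t, y):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

sol = solve_ivp(sir, (0.0, 60.0), [S0, 1.0, 0.0],
                t_eval=np.linspace(0.0, 60.0, 601))
print("R0 =", beta * S0 / gamma)            # fundamental reproductive number
print("peak infected =", round(float(sol.y[1].max()), 1))
```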
1510.08344
Katar\'ina Bo\v{d}ov\'a
Katar\'ina Bo\v{d}ov\'a, Ga\v{s}per Tka\v{c}ik, Nicholas H. Barton
A general approximation for the dynamics of quantitative traits
null
null
10.1534/genetics.115.184127
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Selection, mutation and random drift affect the dynamics of allele frequencies and consequently of quantitative traits. While the macroscopic dynamics of quantitative traits can be measured, the underlying allele frequencies are typically unobserved. Can we understand how the macroscopic observables evolve without following these microscopic processes? The problem has previously been studied by analogy with statistical mechanics: the allele frequency distribution at each time is approximated by the stationary form, which maximizes entropy. We explore the limitations of this method when mutation is small ($4N\!\mu<1$) so that populations are typically close to fixation and we extend the theory in this regime to account for changes in mutation strength. We consider a single diallelic locus under either directional selection, or with over-dominance, and then generalise to multiple unlinked biallelic loci with unequal effects. We find that the maximum entropy approximation is remarkably accurate, even when mutation and selection change rapidly.
[ { "created": "Wed, 28 Oct 2015 15:27:07 GMT", "version": "v1" }, { "created": "Mon, 14 Mar 2016 15:01:30 GMT", "version": "v2" } ]
2016-03-15
[ [ "Boďová", "Katarína", "" ], [ "Tkačik", "Gašper", "" ], [ "Barton", "Nicholas H.", "" ] ]
Selection, mutation and random drift affect the dynamics of allele frequencies and consequently of quantitative traits. While the macroscopic dynamics of quantitative traits can be measured, the underlying allele frequencies are typically unobserved. Can we understand how the macroscopic observables evolve without following these microscopic processes? The problem has previously been studied by analogy with statistical mechanics: the allele frequency distribution at each time is approximated by the stationary form, which maximizes entropy. We explore the limitations of this method when mutation is small ($4N\!\mu<1$) so that populations are typically close to fixation and we extend the theory in this regime to account for changes in mutation strength. We consider a single diallelic locus under either directional selection, or with over-dominance, and then generalise to multiple unlinked biallelic loci with unequal effects. We find that the maximum entropy approximation is remarkably accurate, even when mutation and selection change rapidly.
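The stationary, maximum-entropy form referred to above is Wright's distribution; for a diallelic locus under directional selection it reads $\psi(x) \propto e^{2Nsx}\, x^{4N\mu-1}(1-x)^{4N\mu-1}$. A numeric sketch with illustrative parameters:

```python
import numpy as np
from scipy.integrate import quad

N, mu, s = 1000, 1e-4, 1e-3            # hypothetical population parameters
a = 4 * N * mu                          # = 0.4 < 1: mass piles up near x = 0 and 1

def psi(x):
    return np.exp(2 * N * s * x) * x ** (a - 1) * (1 - x) ** (a - 1)

Z, _ = quad(psi, 0, 1)                  # endpoint singularities are integrable
mean, _ = quad(lambda x: x * psi(x), 0, 1)
print(f"4*N*mu = {a}, mean allele frequency = {mean / Z:.3f}")
```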
1105.2530
Subhadip Raychaudhuri
Joanna Skommer, Somkanya C Das, Arjun Nair, Thomas Brittain, Subhadip Raychaudhuri
Nonlinear regulation of commitment to apoptosis by simultaneous inhibition of Bcl-2 and XIAP in leukemia and lymphoma cells
21 pages; 6 figures
Apoptosis 16:619-26 (2011)
null
null
q-bio.MN cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Apoptosis is a complex pathway regulated by the concerted action of multiple pro- and anti-apoptotic molecules. The intrinsic (mitochondrial) pathway of apoptosis is governed up-stream of mitochondria, by the family of Bcl-2 proteins, and down-stream of mitochondria, by low-probability events, such as apoptosome formation, and by feedback circuits involving caspases and inhibitor of apoptosis proteins (IAPs), such as XIAP. All these regulatory mechanisms ensure that cells only commit to death once a threshold of damage has been reached and the anti-apoptotic reserve of the cell is overcome. As cancer cells are invariably exposed to strong intracellular and extracellular stress stimuli, they are particularly reliant on the expression of anti-apoptotic proteins. Hence, many cancer cells undergo apoptosis when exposed to agents that inhibit anti-apoptotic Bcl-2 molecules, such as BH3 mimetics, while normal cells remain relatively insensitive to single agent treatments with the same class of molecules. Targeting different proteins within the apoptotic network with combinatorial treatment approaches often achieves even greater specificity. This led us to investigate the sensitivity of leukemia and lymphoma cells to a pro-apoptotic action of a BH3 mimetic combined with a small molecule inhibitor of XIAP. Using a computational probabilistic model of the apoptotic pathway, verified by experimental results from human leukemia and lymphoma cell lines, we show that inhibition of XIAP has a non-linear effect on sensitization towards apoptosis induced by the BH3 mimetic HA14-1. This study justifies further ex vivo and animal studies on the potential of the treatment of leukemia and lymphoma with a combination of BH3 mimetics and XIAP inhibitors.
[ { "created": "Thu, 12 May 2011 17:04:52 GMT", "version": "v1" } ]
2011-05-13
[ [ "Skommer", "Joanna", "" ], [ "Das", "Somkanya C", "" ], [ "Nair", "Arjun", "" ], [ "Brittain", "Thomas", "" ], [ "Raychaudhuri", "Subhadip", "" ] ]
Apoptosis is a complex pathway regulated by the concerted action of multiple pro- and anti-apoptotic molecules. The intrinsic (mitochondrial) pathway of apoptosis is governed up-stream of mitochondria, by the family of Bcl-2 proteins, and down-stream of mitochondria, by low-probability events, such as apoptosome formation, and by feedback circuits involving caspases and inhibitor of apoptosis proteins (IAPs), such as XIAP. All these regulatory mechanisms ensure that cells only commit to death once a threshold of damage has been reached and the anti-apoptotic reserve of the cell is overcome. As cancer cells are invariably exposed to strong intracellular and extracellular stress stimuli, they are particularly reliant on the expression of anti-apoptotic proteins. Hence, many cancer cells undergo apoptosis when exposed to agents that inhibit anti-apoptotic Bcl-2 molecules, such as BH3 mimetics, while normal cells remain relatively insensitive to single agent treatments with the same class of molecules. Targeting different proteins within the apoptotic network with combinatorial treatment approaches often achieves even greater specificity. This led us to investigate the sensitivity of leukemia and lymphoma cells to a pro-apoptotic action of a BH3 mimetic combined with a small molecule inhibitor of XIAP. Using a computational probabilistic model of the apoptotic pathway, verified by experimental results from human leukemia and lymphoma cell lines, we show that inhibition of XIAP has a non-linear effect on sensitization towards apoptosis induced by the BH3 mimetic HA14-1. This study justifies further ex vivo and animal studies on the potential of the treatment of leukemia and lymphoma with a combination of BH3 mimetics and XIAP inhibitors.
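A deliberately reduced caricature of such a probabilistic pathway model can be sketched as a Gillespie simulation of a toy caspase switch; the species, rates, and topology below are illustrative assumptions, not the authors' model:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(bcl2, xiap, t_end=1000.0):
    """Return True if the active-caspase count reaches the commitment threshold."""
    casp, t = 0, 0.0
    while t < t_end:
        a_act = 0.5 / (1.0 + 0.1 * bcl2)        # activation slowed by Bcl-2
        a_inh = 0.02 * casp * xiap / 100.0      # deactivation scaled by XIAP
        a_tot = a_act + a_inh
        t += rng.exponential(1.0 / a_tot)       # Gillespie waiting time
        if rng.random() < a_act / a_tot:
            casp += 1
        elif casp > 0:
            casp -= 1
        if casp >= 50:                          # commitment threshold
            return True
    return False

for bcl2, xiap in [(100, 100), (100, 20), (20, 100), (20, 20)]:
    p = np.mean([simulate(bcl2, xiap) for _ in range(50)])
    print(f"Bcl-2={bcl2:3d}, XIAP={xiap:3d}: P(commitment) = {p:.2f}")
```

With these toy rates, commitment is rare unless Bcl-2 and XIAP are lowered together, qualitatively reproducing the nonlinear synergy described above.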
2402.00926
Myriam Bontonou
Myriam Bontonou, Ana\"is Haget, Maria Boulougouri, Benjamin Audit, Pierre Borgnat, Jean-Michel Arbona
A Comparative Analysis of Gene Expression Profiling by Statistical and Machine Learning Approaches
null
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by/4.0/
Many machine learning models have been proposed to classify phenotypes from gene expression data. In addition to their good performance, these models can potentially provide some understanding of phenotypes by extracting explanations for their decisions. These explanations often take the form of a list of genes ranked in order of importance for the predictions, the highest-ranked genes being interpreted as linked to the phenotype. We discuss the biological and the methodological limitations of such explanations. Experiments are performed on several datasets gathering cancer and healthy tissue samples from the TCGA, GTEx and TARGET databases. A collection of machine learning models, including logistic regression, multilayer perceptrons, and graph neural networks, is trained to classify samples according to their cancer type. Gene rankings are obtained from explainability methods adapted to these models, and compared to the ones from classical statistical feature selection methods such as mutual information, DESeq2, and EdgeR. Interestingly, on simple tasks, we observe that the information learned by black-box neural networks is related to the notion of differential expression. In all cases, a small set containing the best-ranked genes is sufficient to achieve a good classification. However, these genes differ significantly between the methods and similar classification performance can be achieved with numerous lower ranked genes. In conclusion, although these methods enable the identification of biomarkers characteristic of certain pathologies, our results question the completeness of the selected gene sets and thus of explainability by the identification of the underlying biological processes.
[ { "created": "Thu, 1 Feb 2024 18:17:36 GMT", "version": "v1" } ]
2024-02-05
[ [ "Bontonou", "Myriam", "" ], [ "Haget", "Anaïs", "" ], [ "Boulougouri", "Maria", "" ], [ "Audit", "Benjamin", "" ], [ "Borgnat", "Pierre", "" ], [ "Arbona", "Jean-Michel", "" ] ]
Many machine learning models have been proposed to classify phenotypes from gene expression data. In addition to their good performance, these models can potentially provide some understanding of phenotypes by extracting explanations for their decisions. These explanations often take the form of a list of genes ranked in order of importance for the predictions, the highest-ranked genes being interpreted as linked to the phenotype. We discuss the biological and the methodological limitations of such explanations. Experiments are performed on several datasets gathering cancer and healthy tissue samples from the TCGA, GTEx and TARGET databases. A collection of machine learning models, including logistic regression, multilayer perceptrons, and graph neural networks, is trained to classify samples according to their cancer type. Gene rankings are obtained from explainability methods adapted to these models, and compared to the ones from classical statistical feature selection methods such as mutual information, DESeq2, and EdgeR. Interestingly, on simple tasks, we observe that the information learned by black-box neural networks is related to the notion of differential expression. In all cases, a small set containing the best-ranked genes is sufficient to achieve a good classification. However, these genes differ significantly between the methods and similar classification performance can be achieved with numerous lower ranked genes. In conclusion, although these methods enable the identification of biomarkers characteristic of certain pathologies, our results question the completeness of the selected gene sets and thus of explainability by the identification of the underlying biological processes.
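A minimal sketch of the ranking comparison with scikit-learn, contrasting a model-derived ranking (logistic-regression coefficient magnitudes) with a statistical one (mutual information) on synthetic data; DESeq2 and EdgeR are R packages and are not reproduced here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.feature_selection import mutual_info_classif

# Synthetic stand-in for an expression matrix: 300 samples, 200 "genes".
X, y = make_classification(n_samples=300, n_features=200,
                           n_informative=10, random_state=0)

lr = LogisticRegression(max_iter=2000).fit(X, y)
rank_model = np.argsort(-np.abs(lr.coef_[0]))            # model-derived ranking
rank_stat = np.argsort(-mutual_info_classif(X, y, random_state=0))

top = 20
overlap = len(set(rank_model[:top]) & set(rank_stat[:top]))
print(f"overlap of top-{top} genes between the two rankings: {overlap}")
```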
1606.05913
Ricardo Martinez-Garcia
Ricardo Martinez-Garcia, Corina E. Tarnita
Lack of ecological context can create the illusion of social success in Dictyostelium discoideum
40 pages, 6 figures + 9 Supplementary figures
PLoS Comput Biol 12(12): e1005246. 2016
10.1371/journal.pcbi.1005246
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Studies of cooperation in microbes often focus on one fitness component, with little information about or attention to the ecological context, and this can lead to paradoxical results. The life cycle of the social amoeba Dictyostelium discoideum includes a multicellular stage in which not necessarily clonal amoebae aggregate upon starvation to form a possibly chimeric (genetically heterogeneous) fruiting body made of dead stalk and spores. The lab-measured reproductive skew in the spores of chimeras indicates strong social antagonism; this should result in low genotypic diversity, which is inconsistent with observations from nature. Two studies have suggested that this inconsistency stems from the one-dimensional assessment of fitness (spore production) and that the solution lies in tradeoffs between multiple traits, e.g., spore size versus viability, and staying vegetative versus becoming dormant. We theoretically explore different tradeoff-implementing mechanisms and provide a unifying ecological framework in which the two tradeoffs above, as well as novel ones, arise collectively in response to characteristics of the environment. We find that spore production comes at the expense of vegetative cell production, time to development, and, depending on the experimental setup, spore size and viability. Furthermore, we find that all existing experimental results regarding chimeric mixes can be qualitatively recapitulated without needing to invoke social interactions, which allows for simple resolutions to previously paradoxical results. We conclude that the complexities of life histories, including social behavior and multicellularity, can only be understood in the appropriate multidimensional ecological context.
[ { "created": "Sun, 19 Jun 2016 21:34:41 GMT", "version": "v1" } ]
2017-05-19
[ [ "Martinez-Garcia", "Ricardo", "" ], [ "Tarnita", "Corina E.", "" ] ]
Studies of cooperation in microbes often focus on one fitness component, with little information about or attention to the ecological context, and this can lead to paradoxical results. The life cycle of the social amoeba Dictyostelium discoideum includes a multicellular stage in which not necessarily clonal amoebae aggregate upon starvation to form a possibly chimeric (genetically heterogeneous) fruiting body made of dead stalk and spores. The lab-measured reproductive skew in the spores of chimeras indicates strong social antagonism; this should result in low genotypic diversity, which is inconsistent with observations from nature. Two studies have suggested that this inconsistency stems from the one-dimensional assessment of fitness (spore production) and that the solution lies in tradeoffs between multiple traits, e.g., spore size versus viability, and staying vegetative versus becoming dormant. We theoretically explore different tradeoff-implementing mechanisms and provide a unifying ecological framework in which the two tradeoffs above, as well as novel ones, arise collectively in response to characteristics of the environment. We find that spore production comes at the expense of vegetative cell production, time to development, and, depending on the experimental setup, spore size and viability. Furthermore, we find that all existing experimental results regarding chimeric mixes can be qualitatively recapitulated without needing to invoke social interactions, which allows for simple resolutions to previously paradoxical results. We conclude that the complexities of life histories, including social behavior and multicellularity, can only be understood in the appropriate multidimensional ecological context.
1809.08504
Mason A. Porter
Bernadette J. Stolz, Tegan Emerson, Satu Nahkuri, Mason A. Porter, and Heather A. Harrington
Topological Data Analysis of Task-Based fMRI Data from Experiments on Schizophrenia
null
null
null
null
q-bio.QM cond-mat.dis-nn math.AT nlin.AO q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We use methods from computational algebraic topology to study functional brain networks, in which nodes represent brain regions and weighted edges encode the similarity of fMRI time series from each region. With these tools, which allow one to characterize topological invariants such as loops in high-dimensional data, we are able to gain understanding into low-dimensional structures in networks in a way that complements traditional approaches that are based on pairwise interactions. In the present paper, we use persistent homology to analyze networks that we construct from task-based fMRI data from schizophrenia patients, healthy controls, and healthy siblings of schizophrenia patients. We thereby explore the persistence of topological structures such as loops at different scales in these networks. We use persistence landscapes and persistence images to create output summaries from our persistent-homology calculations, and we study the persistence landscapes and images using $k$-means clustering and community detection. Based on our analysis of persistence landscapes, we find that the members of the sibling cohort have topological features (specifically, their 1-dimensional loops) that are distinct from the other two cohorts. From the persistence images, we are able to distinguish all three subject groups and to determine the brain regions in the loops (with four or more edges) that allow us to make these distinctions.
[ { "created": "Sat, 22 Sep 2018 23:52:57 GMT", "version": "v1" }, { "created": "Tue, 25 Jun 2019 00:53:46 GMT", "version": "v2" }, { "created": "Tue, 2 Jun 2020 05:30:39 GMT", "version": "v3" }, { "created": "Tue, 25 Aug 2020 22:25:23 GMT", "version": "v4" } ]
2020-08-27
[ [ "Stolz", "Bernadette J.", "" ], [ "Emerson", "Tegan", "" ], [ "Nahkuri", "Satu", "" ], [ "Porter", "Mason A.", "" ], [ "Harrington", "Heather A.", "" ] ]
We use methods from computational algebraic topology to study functional brain networks, in which nodes represent brain regions and weighted edges encode the similarity of fMRI time series from each region. With these tools, which allow one to characterize topological invariants such as loops in high-dimensional data, we are able to gain understanding into low-dimensional structures in networks in a way that complements traditional approaches that are based on pairwise interactions. In the present paper, we use persistent homology to analyze networks that we construct from task-based fMRI data from schizophrenia patients, healthy controls, and healthy siblings of schizophrenia patients. We thereby explore the persistence of topological structures such as loops at different scales in these networks. We use persistence landscapes and persistence images to create output summaries from our persistent-homology calculations, and we study the persistence landscapes and images using $k$-means clustering and community detection. Based on our analysis of persistence landscapes, we find that the members of the sibling cohort have topological features (specifically, their 1-dimensional loops) that are distinct from the other two cohorts. From the persistence images, we are able to distinguish all three subject groups and to determine the brain regions in the loops (with four or more edges) that allow us to make these distinctions.
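The persistent-homology step can be sketched as follows: build a distance matrix from pairwise correlations of regional time series and compute persistence diagrams with the ripser package. Synthetic data stand in for the fMRI time series, and the landscape/image summaries and clustering are not reproduced here:

```python
import numpy as np
from ripser import ripser

rng = np.random.default_rng(0)
ts = rng.normal(size=(90, 200))             # 90 hypothetical regions, 200 volumes
corr = np.corrcoef(ts)
dist = np.sqrt(np.maximum(1.0 - corr, 0.0))  # a common correlation-to-distance map

# Persistence diagrams up to dimension 1 (loops) from the distance matrix.
dgms = ripser(dist, distance_matrix=True, maxdim=1)["dgms"]
lifetimes = dgms[1][:, 1] - dgms[1][:, 0]    # persistence of 1-dimensional loops
print("number of loops:", len(lifetimes), "max persistence:", lifetimes.max())
```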
1606.01401
Alexander Stewart
Alexander J. Stewart, Todd L. Parsons and Joshua B. Plotkin
Evolutionary consequences of behavioral diversity
26 pages, 4 figures
null
10.1073/pnas.1608990113
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Iterated games provide a framework to describe social interactions among groups of individuals. Recent work stimulated by the discovery of "zero-determinant" strategies has rapidly expanded our ability to analyze such interactions. This body of work has primarily focused on games in which players face a simple binary choice, to "cooperate" or "defect". Real individuals, however, often exhibit behavioral diversity, varying their input to a social interaction both qualitatively and quantitatively. Here we explore how access to a greater diversity of behavioral choices impacts the evolution of social dynamics in finite populations. We show that, in public goods games, some two-choice strategies can nonetheless resist invasion by all possible multi-choice invaders, even while engaging in relatively little punishment. We also show that access to greater behavioral choice results in more "rugged" fitness landscapes, with populations able to stabilize cooperation at multiple levels of investment, such that choice facilitates cooperation when returns on investments are low, but hinders cooperation when returns on investments are high. Finally, we analyze iterated rock-paper-scissors games, whose non-transitive payoff structure means unilateral control is difficult and zero-determinant strategies do not exist in general. Despite this, we find that a large portion of multi-choice strategies can invade and resist invasion by strategies that lack behavioral diversity -- so that even well-mixed populations will tend to evolve behavioral diversity.
[ { "created": "Sat, 4 Jun 2016 18:00:20 GMT", "version": "v1" } ]
2022-10-12
[ [ "Stewart", "Alexander J.", "" ], [ "Parsons", "Todd L.", "" ], [ "Plotkin", "Joshua B.", "" ] ]
Iterated games provide a framework to describe social interactions among groups of individuals. Recent work stimulated by the discovery of "zero-determinant" strategies has rapidly expanded our ability to analyze such interactions. This body of work has primarily focused on games in which players face a simple binary choice, to "cooperate" or "defect". Real individuals, however, often exhibit behavioral diversity, varying their input to a social interaction both qualitatively and quantitatively. Here we explore how access to a greater diversity of behavioral choices impacts the evolution of social dynamics in finite populations. We show that, in public goods games, some two-choice strategies can nonetheless resist invasion by all possible multi-choice invaders, even while engaging in relatively little punishment. We also show that access to greater behavioral choice results in more "rugged" fitness landscapes, with populations able to stabilize cooperation at multiple levels of investment, such that choice facilitates cooperation when returns on investments are low, but hinders cooperation when returns on investments are high. Finally, we analyze iterated rock-paper-scissors games, whose non-transitive payoff structure means unilateral control is difficult and zero-determinant strategies do not exist in general. Despite this, we find that a large portion of multi-choice strategies can invade and resist invasion by strategies that lack behavioral diversity -- so that even well-mixed populations will tend to evolve behavioral diversity.
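In the two-choice iterated prisoner's dilemma that this work generalises, the long-run payoff of two memory-one strategies follows from the stationary distribution of a four-state Markov chain over outcomes. A sketch with standard illustrative payoffs (the paper's public-goods and rock-paper-scissors settings extend this construction to more choices):

```python
import numpy as np

def stationary_payoffs(p, q, R=3, S=0, T=5, P=1):
    """Long-run payoffs (focal, opponent) for two memory-one strategies.

    p[i], q[i]: cooperation probability after outcome i in (CC, CD, DC, DD),
    each indexed from that player's own perspective."""
    q_sw = [q[0], q[2], q[1], q[3]]          # re-index q to the focal's view
    M = np.array([[p[i] * q_sw[i], p[i] * (1 - q_sw[i]),
                   (1 - p[i]) * q_sw[i], (1 - p[i]) * (1 - q_sw[i])]
                  for i in range(4)])        # outcome-to-outcome transitions
    w, v = np.linalg.eig(M.T)
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    pi /= pi.sum()                           # stationary outcome distribution
    return pi @ np.array([R, S, T, P]), pi @ np.array([R, T, S, P])

gtft = [1.0, 1 / 3, 1.0, 1 / 3]              # generous tit-for-tat
alld = [0.0, 0.0, 0.0, 0.0]                  # always defect
print(stationary_payoffs(gtft, alld))        # GTFT is exploited by ALLD here
```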
2303.00161
Shota Hodono
Shota Hodono, Reuben Rideaux, Timo van Kerkoerle, Martijn A. Cloos
Initial experiences with Direct Imaging of Neuronal Activity (DIANA) in humans
8 pages, 10 figures
null
null
null
q-bio.NC physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Functional MRI (fMRI) has been widely used to study activity patterns in the human brain. It infers neuronal activity from the associated hemodynamic response, which fundamentally limits its spatial and temporal specificity. In mice, the Direct Imaging of Neuronal Activity (DIANA) method revealed MRI signals that correlated with intracellular voltage changes, showing high spatial and temporal specificity. In this work we attempted DIANA in humans. Four visual paradigms were tested, exploring different stimulus types (flickering noise patterns, and naturalistic images) and stimulus durations (50-200ms). Regions of interest (ROI) were derived from traditional fMRI acquisitions and anatomical scans. When using small manually drawn ROI, signals were detected that resembled possible functional activity. However, increasing the stimulus duration did not lead to corroborating signal changes. Moreover, these signals disappeared when averaged over larger functionally or anatomically derived ROI. Further analysis of the data highlighted DIANA's sensitivity to inflow effects and subject motion. Therefore, care should be taken not to mistake artifacts for neuronal activity. Although we did not yet observe clear DIANA signals in humans, it is possible that a higher spatial resolution is needed to separate signals with opposite signs in different laminar compartments. However, obtaining such data may be particularly challenging because repetitive experiments with short interstimulus intervals can be strenuous for the subjects. To obtain better data, improvements in sequence and stimulus designs are needed to maximize the DIANA signal and minimize confounds. However, without a clear understanding of DIANA's biophysical underpinnings it is difficult to do so. Therefore, it may be more effective to first study DIANA's biophysical underpinnings in a controlled, yet biologically relevant, setting.
[ { "created": "Wed, 1 Mar 2023 01:23:54 GMT", "version": "v1" } ]
2023-03-02
[ [ "Hodono", "Shota", "" ], [ "Rideaux", "Reuben", "" ], [ "van Kerkoerle", "Timo", "" ], [ "Cloos", "Martijn A.", "" ] ]
Functional MRI (fMRI) has been widely used to study activity patterns in the human brain. It infers neuronal activity from the associated hemodynamic response, which fundamentally limits its spatial and temporal specificity. In mice, the Direct Imaging of Neuronal Activity (DIANA) method revealed MRI signals that correlated with intracellular voltage changes, showing high spatial and temporal specificity. In this work we attempted DIANA in humans. Four visual paradigms were tested, exploring different stimulus types (flickering noise patterns, and naturalistic images) and stimulus durations (50-200ms). Regions of interest (ROI) were derived from traditional fMRI acquisitions and anatomical scans. When using small manually drawn ROI, signals were detected that resembled possible functional activity. However, increasing the stimulus duration did not lead to corroborating signal changes. Moreover, these signals disappeared when averaged over larger functionally or anatomically derived ROI. Further analysis of the data highlighted DIANA's sensitivity to inflow effects and subject motion. Therefore, care should be taken not to mistake artifacts for neuronal activity. Although we did not yet observe clear DIANA signals in humans, it is possible that a higher spatial resolution is needed to separate signals with opposite signs in different laminar compartments. However, obtaining such data may be particularly challenging because repetitive experiments with short interstimulus intervals can be strenuous for the subjects. To obtain better data, improvements in sequence and stimulus designs are needed to maximize the DIANA signal and minimize confounds. However, without a clear understanding of DIANA's biophysical underpinnings it is difficult to do so. Therefore, it may be more effective to first study DIANA's biophysical underpinnings in a controlled, yet biologically relevant, setting.
1708.09042
Fabrizio Lombardi
Fabrizio Lombardi, Hans J. Herrmann, Lucilla de Arcangelis
Balance of excitation and inhibition determines 1/f power spectrum in neuronal networks
null
Chaos 27, 047402 (2017)
10.1063/1.4979043
null
q-bio.NC cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The $1/f$-like decay observed in the power spectrum of electro-physiological signals, along with scale-free statistics of the so-called neuronal avalanches, constitutes evidence of criticality in neuronal systems. Recent in vitro studies have shown that avalanche dynamics at criticality corresponds to some specific balance of excitation and inhibition, thus suggesting that this is a basic feature of the critical state of neuronal networks. In particular, a lack of inhibition significantly alters the temporal structure of the spontaneous avalanche activity and leads to an anomalous abundance of large avalanches. Here we study the relationship between network inhibition and the scaling exponent $\beta$ of the power spectral density (PSD) of avalanche activity in a neuronal network model inspired by Self-Organized Criticality (SOC). We find that this scaling exponent depends on the percentage of inhibitory synapses and tends to the value $\beta = 1$ for a percentage of about 30%. More specifically, $\beta$ is close to $2$, namely brownian noise, for purely excitatory networks and decreases towards values in the interval $[1,1.4]$ as the percentage of inhibitory synapses ranges between 20 and 30%, in agreement with experimental findings. These results indicate that the level of inhibition affects the frequency spectrum of resting brain activity and suggest the analysis of the PSD scaling behavior as a possible tool to study pathological conditions.
[ { "created": "Tue, 29 Aug 2017 21:54:17 GMT", "version": "v1" } ]
2017-08-31
[ [ "Lombardi", "Fabrizio", "" ], [ "Herrmann", "Hans J.", "" ], [ "de Arcangelis", "Lucilla", "" ] ]
The $1/f$-like decay observed in the power spectrum of electro-physiological signals, along with scale-free statistics of the so-called neuronal avalanches, constitutes evidence of criticality in neuronal systems. Recent in vitro studies have shown that avalanche dynamics at criticality corresponds to some specific balance of excitation and inhibition, thus suggesting that this is a basic feature of the critical state of neuronal networks. In particular, a lack of inhibition significantly alters the temporal structure of the spontaneous avalanche activity and leads to an anomalous abundance of large avalanches. Here we study the relationship between network inhibition and the scaling exponent $\beta$ of the power spectral density (PSD) of avalanche activity in a neuronal network model inspired by Self-Organized Criticality (SOC). We find that this scaling exponent depends on the percentage of inhibitory synapses and tends to the value $\beta = 1$ for a percentage of about 30%. More specifically, $\beta$ is close to $2$, namely brownian noise, for purely excitatory networks and decreases towards values in the interval $[1,1.4]$ as the percentage of inhibitory synapses ranges between 20 and 30%, in agreement with experimental findings. These results indicate that the level of inhibition affects the frequency spectrum of resting brain activity and suggest the analysis of the PSD scaling behavior as a possible tool to study pathological conditions.
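The exponent $\beta$ is conventionally estimated from a periodogram by a linear fit in log-log coordinates. A sketch on a Brownian test signal ($\beta \approx 2$), standing in for the model's avalanche activity:

```python
import numpy as np
from scipy.signal import welch

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=2 ** 16))            # brownian test signal

f, pxx = welch(x, fs=1.0, nperseg=4096)            # Welch periodogram
band = (f > 1e-3) & (f < 1e-1)                     # fit over a mid-frequency band
beta = -np.polyfit(np.log10(f[band]), np.log10(pxx[band]), 1)[0]
print(f"estimated beta = {beta:.2f} (expected ~2 for brownian noise)")
```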
1901.11008
Christopher Lee
Christopher T. Lee, Justin G. Laughlin, Nils Angliviel de La Beaumelle, Rommie E. Amaro, J. Andrew McCammon, Ravi Ramamoorthi, Michael J. Holst, Padmini Rangamani
3D mesh processing using GAMer 2 to enable reaction-diffusion simulations in realistic cellular geometries
39 pages, 14 figures. High resolution figures and supplemental movies available upon request
null
10.1371/journal.pcbi.1007756
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in electron microscopy have enabled the imaging of single cells in 3D at nanometer length scale resolutions. An uncharted frontier for in silico biology is the ability to simulate cellular processes using these observed geometries. Enabling such simulations requires watertight meshing of electron micrograph images into 3D volume meshes, which can then form the basis of computer simulations of such processes using numerical techniques such as the Finite Element Method. In this paper, we describe the use of our recently rewritten mesh processing software, GAMer 2, to bridge the gap between poorly conditioned meshes generated from segmented micrographs and boundary marked tetrahedral meshes which are compatible with simulation. We demonstrate the application of a workflow using GAMer 2 to a series of electron micrographs of neuronal dendrite morphology explored at three different length scales and show that the resulting meshes are suitable for finite element simulations. This work is an important step towards making physical simulations of biological processes in realistic geometries routine. Innovations in algorithms to reconstruct and simulate cellular length scale phenomena based on emerging structural data will enable realistic physical models and advance discovery at the interface of geometry and cellular processes. We posit that a new frontier at the intersection of computational technologies and single cell biology is now open.
[ { "created": "Tue, 29 Jan 2019 23:08:02 GMT", "version": "v1" }, { "created": "Tue, 23 Jul 2019 23:21:11 GMT", "version": "v2" }, { "created": "Tue, 17 Dec 2019 05:07:21 GMT", "version": "v3" } ]
2020-07-01
[ [ "Lee", "Christopher T.", "" ], [ "Laughlin", "Justin G.", "" ], [ "de La Beaumelle", "Nils Angliviel", "" ], [ "Amaro", "Rommie E.", "" ], [ "McCammon", "J. Andrew", "" ], [ "Ramamoorthi", "Ravi", "" ], [ "Holst", ...
Recent advances in electron microscopy have enabled the imaging of single cells in 3D at nanometer length scale resolutions. An uncharted frontier for in silico biology is the ability to simulate cellular processes using these observed geometries. Enabling such simulations requires watertight meshing of electron micrograph images into 3D volume meshes, which can then form the basis of computer simulations of such processes using numerical techniques such as the Finite Element Method. In this paper, we describe the use of our recently rewritten mesh processing software, GAMer 2, to bridge the gap between poorly conditioned meshes generated from segmented micrographs and boundary marked tetrahedral meshes which are compatible with simulation. We demonstrate the application of a workflow using GAMer 2 to a series of electron micrographs of neuronal dendrite morphology explored at three different length scales and show that the resulting meshes are suitable for finite element simulations. This work is an important step towards making physical simulations of biological processes in realistic geometries routine. Innovations in algorithms to reconstruct and simulate cellular length scale phenomena based on emerging structural data will enable realistic physical models and advance discovery at the interface of geometry and cellular processes. We posit that a new frontier at the intersection of computational technologies and single cell biology is now open.
1705.06010
Hue Sun Chan
Jianhui Song, Gregory-Neal Gomes, Tongfei Shi, Claudiu C. Gradinaru, and Hue Sun Chan
Conformational Heterogeneity and FRET Data Interpretation for Dimensions of Unfolded Proteins
33 pages, 7 figures; 4 supporting figures. Accepted for publication in Biophysical Journal (content same as v2)
Biophysical Journal 113:1012-1024 (2017)
10.1016/j.bpj.2017.07.023
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A mathematico-physically valid formulation is required to infer properties of disordered protein conformations from single-molecule F\"orster resonance energy transfer (smFRET). Conformational dimensions inferred by conventional approaches that presume a homogeneous conformational ensemble can be unphysical. When all possible---heterogeneous as well as homogeneous---conformational distributions are taken into account without prejudgement, a single value of average transfer efficiency $\langle E\rangle$ between dyes at two chain ends is generally consistent with highly diverse, multiple values of the average radius of gyration $\langle R_{\rm g}\rangle$. Here we utilize unbiased conformational statistics from a coarse-grained explicit-chain model to establish a general logical framework to quantify this fundamental ambiguity in smFRET inference. As an application, we address the long-standing controversy regarding the denaturant dependence of $\langle R_{\rm g}\rangle$ of unfolded proteins, focusing on Protein L as an example. Conventional smFRET inference concluded that $\langle R_{\rm g}\rangle$ of unfolded Protein L is highly sensitive to [GuHCl], but data from small-angle X-ray scattering (SAXS) suggested a near-constant $\langle R_{\rm g}\rangle$ irrespective of [GuHCl]. Strikingly, the present analysis indicates that although the reported $\langle E\rangle$ values for Protein L at [GuHCl] = 1 M and 7 M are very different at 0.75 and 0.45, respectively, the Bayesian $R^2_{\rm g}$ distributions consistent with these two $\langle E\rangle$ values overlap by as much as $75\%$. Our findings suggest, in general, that the smFRET-SAXS discrepancy regarding unfolded protein dimensions likely arises from highly heterogeneous conformational ensembles at low or zero denaturant, and that additional experimental probes are needed to ascertain the nature of this heterogeneity.
[ { "created": "Wed, 17 May 2017 05:05:09 GMT", "version": "v1" }, { "created": "Thu, 27 Jul 2017 01:21:59 GMT", "version": "v2" }, { "created": "Tue, 1 Aug 2017 00:18:35 GMT", "version": "v3" } ]
2017-09-13
[ [ "Song", "Jianhui", "" ], [ "Gomes", "Gregory-Neal", "" ], [ "Shi", "Tongfei", "" ], [ "Gradinaru", "Claudiu C.", "" ], [ "Chan", "Hue Sun", "" ] ]
A mathematico-physically valid formulation is required to infer properties of disordered protein conformations from single-molecule F\"orster resonance energy transfer (smFRET). Conformational dimensions inferred by conventional approaches that presume a homogeneous conformational ensemble can be unphysical. When all possible---heterogeneous as well as homogeneous---conformational distributions are taken into account without prejudgement, a single value of average transfer efficiency $\langle E\rangle$ between dyes at two chain ends is generally consistent with highly diverse, multiple values of the average radius of gyration $\langle R_{\rm g}\rangle$. Here we utilize unbiased conformational statistics from a coarse-grained explicit-chain model to establish a general logical framework to quantify this fundamental ambiguity in smFRET inference. As an application, we address the long-standing controversy regarding the denaturant dependence of $\langle R_{\rm g}\rangle$ of unfolded proteins, focusing on Protein L as an example. Conventional smFRET inference concluded that $\langle R_{\rm g}\rangle$ of unfolded Protein L is highly sensitive to [GuHCl], but data from small-angle X-ray scattering (SAXS) suggested a near-constant $\langle R_{\rm g}\rangle$ irrespective of [GuHCl]. Strikingly, the present analysis indicates that although the reported $\langle E\rangle$ values for Protein L at [GuHCl] = 1 M and 7 M are very different at 0.75 and 0.45, respectively, the Bayesian $R^2_{\rm g}$ distributions consistent with these two $\langle E\rangle$ values overlap by as much as $75\%$. Our findings suggest, in general, that the smFRET-SAXS discrepancy regarding unfolded protein dimensions likely arises from highly heterogeneous conformational ensembles at low or zero denaturant, and that additional experimental probes are needed to ascertain the nature of this heterogeneity.
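The conventional inference the abstract critiques maps $\langle E\rangle$ to a single chain size by assuming one homogeneous ensemble, e.g. a Gaussian chain with $E(r) = 1/(1+(r/R_0)^6)$ averaged over the end-to-end distribution $P(r)$. A numeric sketch with an assumed F\"orster radius:

```python
import numpy as np
from scipy.integrate import quad

R0 = 54.0                                      # assumed Forster radius (Angstrom)

def mean_E(ree2):
    """<E> for a Gaussian chain with mean-square end-to-end distance ree2."""
    pr = lambda r: 4.0 * np.pi * r**2 * np.exp(-1.5 * r**2 / ree2)
    hi = 6.0 * np.sqrt(ree2)                   # integration cutoff, >> rms distance
    num = quad(lambda r: pr(r) / (1.0 + (r / R0) ** 6), 0.0, hi)[0]
    den = quad(pr, 0.0, hi)[0]
    return num / den

for ree in (50.0, 70.0, 90.0):                 # rms end-to-end distances (Angstrom)
    print(f"sqrt(<r^2>) = {ree:5.1f} A  ->  <E> = {mean_E(ree**2):.2f}")
# For a Gaussian chain <Rg^2> = <r^2>/6, so each <E> maps to a single Rg
# only under the homogeneous-ensemble assumption the paper questions.
```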
2102.02046
Akke Mats Houben
Akke Mats Houben
Matched transient and steady-state approximation of first-passage-time distributions of coloured noise driven leaky neurons
11 pages, 3 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The first-passage-time distribution of a leaky integrate-and-fire neuron driven by a characteristically coloured noise is approximated by matching a transient and a steady-state solution of the membrane voltage distribution. These approximations follow from a simple manipulation, made possible by the specific `eigen' colouring of the noise, which allows the membrane potential to be expressed as a Gaussian diffusion process on top of a deterministic exponential movement. The method is then extended to the case of arbitrarily coloured noise by factoring out the `eigen' noise and replacing the residue with an equivalent Gaussian process. It is shown that the obtained expressions agree well with numerical simulations for different values of the neuron parameters and noise colouring.
[ { "created": "Wed, 3 Feb 2021 13:14:51 GMT", "version": "v1" } ]
2021-02-04
[ [ "Houben", "Akke Mats", "" ] ]
The first-passage-time distribution of a leaky integrate-and-fire neuron driven by a characteristically coloured noise is approximated by matching a transient and a steady-state solution of the membrane voltage distribution. These approximations follow from a simple manipulation, made possible by the specific `eigen' colouring of the noise, which allows the membrane potential to be expressed as a Gaussian diffusion process on top of a deterministic exponential movement. The method is then extended to the case of arbitrarily coloured noise by factoring out the `eigen' noise and replacing the residue with an equivalent Gaussian process. It is shown that the obtained expressions agree well with numerical simulations for different values of the neuron parameters and noise colouring.
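Such approximations can be checked against direct simulation: an Euler-Maruyama sketch of a leaky integrate-and-fire neuron driven by Ornstein-Uhlenbeck (coloured) noise, collecting an empirical first-passage-time sample; all parameter values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
tau_m, tau_n = 20.0, 5.0          # membrane and noise correlation times (ms)
mu, sigma, vth = 0.8, 0.1, 1.0    # mean drive, noise amplitude, threshold
dt, nsteps, ntrials = 0.05, 40000, 1000

v = np.zeros(ntrials)
eta = np.zeros(ntrials)           # OU (coloured) noise, one value per trial
fpt = np.full(ntrials, np.nan)    # first-passage time, NaN until crossing
for step in range(1, nsteps + 1):
    eta += (-eta / tau_n) * dt + sigma * np.sqrt(2 * dt / tau_n) * rng.normal(size=ntrials)
    v += ((mu - v) / tau_m + eta) * dt
    crossed = (v >= vth) & np.isnan(fpt)
    fpt[crossed] = step * dt

fpt = fpt[~np.isnan(fpt)]
print(f"{fpt.size} of {ntrials} trials crossed; mean FPT = {fpt.mean():.0f} ms")
```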
0801.4684
Luciano da Fontoura Costa
Luciano da Fontoura Costa
Communities in Neuronal Complex Networks Revealed by Activation Patterns
11 pages, 7 figures. Comments and suggestions welcomed
null
null
null
q-bio.NC cond-mat.dis-nn physics.soc-ph
null
Recently, it has been shown that the communities in neuronal networks of the integrate-and-fire type can be identified by considering patterns containing the beginning times for each cell to receive the first non-zero activation. The received activity was integrated in order to facilitate the spiking of each neuron and to constrain the activation inside the communities, but no time decay of such activation was considered. The present article shows that, by taking into account exponential decays of the stored activation, it is possible to identify the communities also in terms of the patterns of activation along the initial steps of the transient dynamics. The potential of this method is illustrated with respect to complex neuronal networks involving four communities, each of a different type (Erd\H{o}s-R\'enyi, Barab\'asi-Albert, Watts-Strogatz as well as a simple geographical model). Though the consideration of activation decay has been found to enhance the separation of the communities, too intense decays tend to yield less discrimination.
[ { "created": "Wed, 30 Jan 2008 14:18:34 GMT", "version": "v1" } ]
2008-01-31
[ [ "Costa", "Luciano da Fontoura", "" ] ]
Recently, it has been shown that the communities in neuronal networks of the integrate-and-fire type can be identified by considering patterns containing the beginning times for each cell to receive the first non-zero activation. The received activity was integrated in order to facilitate the spiking of each neuron and to constrain the activation inside the communities, but no time decay of such activation was considered. The present article shows that, by taking into account exponential decays of the stored activation, it is possible to identify the communities also in terms of the patterns of activation along the initial steps of the transient dynamics. The potential of this method is illustrated with respect to complex neuronal networks involving four communities, each of a different type (Erd\H{o}s-R\'enyi, Barab\'asi-Albert, Watts-Strogatz as well as a simple geographical model). Though the consideration of activation decay has been found to enhance the separation of the communities, too intense decays tend to yield less discrimination.
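A toy sketch of the activation-time idea: a modular network is driven from a seed node, integrated activation decays exponentially, and nodes fire once their store crosses a threshold, so first-firing times separate the seed's community from the rest. The dynamics below are a simplified caricature of the article's integrate-and-fire model, with illustrative parameters:

```python
import numpy as np
import networkx as nx

# Four planted communities of 25 nodes each; consecutive node blocks.
g = nx.planted_partition_graph(4, 25, 0.35, 0.05, seed=1)
A = nx.to_numpy_array(g)
deg = np.maximum(A.sum(axis=1), 1.0)
decay, thresh = 0.8, 1.0

store = np.zeros(100)                  # exponentially decaying activation store
fired = np.full(100, -1)               # first-firing time, -1 = not yet fired
out = np.zeros(100)
for t in range(1, 80):
    out[fired >= 0] = 5.0              # fired nodes keep emitting
    out[0] = 5.0                       # seed node drives the network
    store = decay * store + (A @ out) / deg
    newly = (store >= thresh) & (fired < 0)
    fired[newly] = t

comm = np.repeat(np.arange(4), 25)
for c in range(4):
    print(f"community {c}: median first-firing time = {np.median(fired[comm == c]):.0f}")
```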
2007.06336
Bram Burger
Bram Burger, Marc Vaudel, Harald Barsnes
On the importance of block randomisation when designing proteomics experiments
9 pages, 4 figures
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Randomisation is used in experimental design to reduce the prevalence of unanticipated confounders. Complete randomisation can however create unbalanced designs, for example, grouping all samples of the same condition in the same batch. Block randomisation is an approach that can prevent severe imbalances in sample allocation with respect to both known and unknown confounders. This feature provides the reader with an introduction to blocking and randomisation, insights into how to effectively organise samples during experimental design, with special considerations with respect to proteomics.
[ { "created": "Mon, 13 Jul 2020 12:12:07 GMT", "version": "v1" } ]
2020-07-14
[ [ "Burger", "Bram", "" ], [ "Vaudel", "Marc", "" ], [ "Barsnes", "Harald", "" ] ]
Randomisation is used in experimental design to reduce the prevalence of unanticipated confounders. Complete randomisation can however create unbalanced designs, for example, grouping all samples of the same condition in the same batch. Block randomisation is an approach that can prevent severe imbalances in sample allocation with respect to both known and unknown confounders. This feature provides the reader with an introduction to blocking and randomisation, insights into how to effectively organise samples during experimental design, with special considerations with respect to proteomics.
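A minimal sketch of block randomisation: samples from each condition are dealt as evenly as possible across batches, with run order randomised within each batch (condition labels and batch count are illustrative):

```python
import random

def block_randomise(samples, conditions, n_batches, seed=0):
    """Assign samples to batches so each batch is balanced across conditions."""
    rng = random.Random(seed)
    batches = [[] for _ in range(n_batches)]
    for cond in sorted(set(conditions)):
        group = [s for s, c in zip(samples, conditions) if c == cond]
        rng.shuffle(group)
        for i, s in enumerate(group):       # deal each condition round-robin
            batches[i % n_batches].append(s)
    for batch in batches:
        rng.shuffle(batch)                  # randomise run order within batch
    return batches

samples = [f"s{i:02d}" for i in range(12)]
conditions = ["ctrl"] * 6 + ["treat"] * 6
for k, batch in enumerate(block_randomise(samples, conditions, 3)):
    print(f"batch {k}: {batch}")
```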
2204.09768
Hyunjoong Kim
Hyunjoong Kim, Yoichiro Mori, Joshua B. Plotkin
Optimality of intercellular signaling: direct transport versus diffusion
null
null
10.1103/PhysRevE.106.054411
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intercellular signaling has an important role in organism development, but not all communication occurs using the same mechanism. Here, we analyze the energy efficiency of intercellular signaling by two canonical mechanisms: diffusion of signaling molecules and direct transport mediated by signaling cellular protrusions. We show that efficient contact formation for direct transport can be established by an optimal rate of projecting protrusions, which depends on the availability of information about the location of the target cell. The optimal projection rate also depends on how signaling molecules are transported along the protrusion, in particular the ratio of the energy cost for contact formation and molecule synthesis. Also, we compare the efficiency of the two signaling mechanisms under various model parameters. We find that direct transport is favored over diffusion when a large number of signaling molecules must be transported. There is a critical number of signaling molecules at which the efficiencies of the two mechanisms are the same. The critical number is small when the distance between cells is large, which helps explain why protrusion-based mechanisms are observed in long-range cellular communications.
[ { "created": "Wed, 20 Apr 2022 20:14:28 GMT", "version": "v1" }, { "created": "Mon, 24 Oct 2022 17:53:22 GMT", "version": "v2" } ]
2022-11-30
[ [ "Kim", "Hyunjoong", "" ], [ "Mori", "Yoichiro", "" ], [ "Plotkin", "Joshua B.", "" ] ]
Intercellular signaling has an important role in organism development, but not all communication occurs using the same mechanism. Here, we analyze the energy efficiency of intercellular signaling by two canonical mechanisms: diffusion of signaling molecules and direct transport mediated by signaling cellular protrusions. We show that efficient contact formation for direct transport can be established by an optimal rate of projecting protrusions, which depends on the availability of information about the location of the target cell. The optimal projection rate also depends on how signaling molecules are transported along the protrusion, in particular the ratio of the energy cost for contact formation and molecule synthesis. Also, we compare the efficiency of the two signaling mechanisms under various model parameters. We find that direct transport is favored over diffusion when a large number of signaling molecules must be transported. There is a critical number of signaling molecules at which the efficiencies of the two mechanisms are the same. The critical number is small when the distance between cells is large, which helps explain why protrusion-based mechanisms are observed in long-range cellular communications.
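A toy cost comparison (not the paper's model) illustrating why the critical molecule number falls with distance: diffusion pays a per-molecule loss that grows with the target distance $L$, while direct transport pays a one-off protrusion cost that grows with $L$ but delivers molecules without loss. The cost functions and constants are illustrative assumptions:

```python
c_syn, c_prot, capture = 1.0, 40.0, 5.0    # hypothetical cost constants

def cost_diffusion(n, L):
    # each delivered molecule requires ~L/capture syntheses (3D capture ~ 1/L)
    return n * c_syn * (L / capture)

def cost_direct(n, L):
    return c_prot * L + n * c_syn           # one-off protrusion + lossless delivery

for L in (10.0, 20.0, 80.0):
    # critical n solves cost_diffusion(n, L) == cost_direct(n, L)
    n_crit = c_prot * L / (c_syn * (L / capture - 1.0))
    assert abs(cost_diffusion(n_crit, L) - cost_direct(n_crit, L)) < 1e-9
    print(f"L = {L:5.1f}: critical n ~ {n_crit:.0f} "
          "(direct transport cheaper above this)")
```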
2110.08575
Shiladitya Banerjee
Diana Serbanescu, Nikola Ojkic, Shiladitya Banerjee
Cellular resource allocation strategies for cell size and shape control in bacteria
Review article, 13 pages, 5 figures,
The FEBS Journal (2021)
10.1111/febs.16234
null
q-bio.CB physics.bio-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Bacteria are highly adaptive microorganisms that thrive in a wide range of growth conditions via changes in cell morphology and macromolecular composition. How bacterial morphologies are regulated in diverse environmental conditions is a longstanding question. Regulation of cell size and shape implies control mechanisms that couple the growth and division of bacteria to their cellular environment and macromolecular composition. In the past decade, simple quantitative laws have emerged that connect cell growth to proteomic composition and nutrient availability. However, the relationships between cell size, shape, and growth physiology remain challenging to disentangle, and unifying models are lacking. In this review, we focus on regulatory models of cell size control that reveal the connections between bacterial cell morphology and growth physiology. In particular, we discuss how changes in nutrient conditions and translational perturbations regulate cell size, growth rate, and proteome composition. Integrating quantitative models with experimental data, we identify the physiological principles of bacterial size regulation and discuss optimization strategies for cellular resource allocation in size control.
[ { "created": "Sat, 16 Oct 2021 13:55:16 GMT", "version": "v1" } ]
2021-10-26
[ [ "Serbanescu", "Diana", "" ], [ "Ojkic", "Nikola", "" ], [ "Banerjee", "Shiladitya", "" ] ]
Bacteria are highly adaptive microorganisms that thrive in a wide range of growth conditions via changes in cell morphology and macromolecular composition. How bacterial morphologies are regulated in diverse environmental conditions is a longstanding question. Regulation of cell size and shape implies control mechanisms that couple the growth and division of bacteria to their cellular environment and macromolecular composition. In the past decade, simple quantitative laws have emerged that connect cell growth to proteomic composition and nutrient availability. However, the relationships between cell size, shape, and growth physiology remain challenging to disentangle, and unifying models are lacking. In this review, we focus on regulatory models of cell size control that reveal the connections between bacterial cell morphology and growth physiology. In particular, we discuss how changes in nutrient conditions and translational perturbations regulate cell size, growth rate, and proteome composition. Integrating quantitative models with experimental data, we identify the physiological principles of bacterial size regulation and discuss optimization strategies for cellular resource allocation in size control.
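One example of the "simple quantitative laws" connecting growth to proteome composition, assuming the review builds on the standard bacterial growth laws of Scott et al. (2010), is the linear relation between growth rate and ribosomal mass fraction:

```latex
% Translation-limited growth law: growth rate \lambda is linear in the
% ribosomal proteome fraction \phi_R, with translational capacity \kappa_t
% and an inactive offset \phi_R^{\min}.
\lambda = \kappa_t \,\bigl(\phi_R - \phi_R^{\min}\bigr)
% Nutrient limitation enters through a second linear constraint,
% \lambda = \kappa_n \,(\phi_R^{\max} - \phi_R); the intersection of the
% two lines fixes the steady-state allocation and growth rate.
```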
1901.01916
Thomas FitzGerald
Thomas HB FitzGerald, Dorothea Hammerer, Thomas D Sambrook, Will D Penny
Bayesian inference over model-spaces increases the accuracy of model comparison and allows formal testing of hypotheses about model distributions in experimental populations
21 pages, 6 figures
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining the best model or models for a particular data set, a process known as Bayesian model comparison, is a critical part of probabilistic inference. Typically, this process assumes a fixed model-space (that is, a fixed set of candidate models). However, it is also possible to perform Bayesian inference over model-spaces themselves, thus determining which spaces provide the best explanation for observed data. Model-space inference (MSI) allows the effective exclusion of poorly performing models (a process analogous to Automatic Relevance Determination), and thus mitigates the well-known phenomenon of model dilution, resulting in posterior probability estimates that are, on average, more accurate than those produced when using a fixed model-space. We focus on model comparison in the context of multiple independent data sets (as produced, for example, by multi-subject behavioural or neuroimaging studies), and cast our proposal as a development of random-effects Bayesian Model Selection, the current state of the art in the field. We demonstrate the increased accuracy of MSI using simulated behavioural and neuroimaging data, as well as by assessing predictive performance on previously acquired empirical data. Additionally, we explore other applications of MSI, including formal testing for a diversity of models within a population, and comparison of model-spaces between populations. Our approach thus provides an important new tool for model comparison.
[ { "created": "Mon, 7 Jan 2019 16:52:07 GMT", "version": "v1" } ]
2019-01-08
[ [ "FitzGerald", "Thomas HB", "" ], [ "Hammerer", "Dorothea", "" ], [ "Sambrook", "Thomas D", "" ], [ "Penny", "Will D", "" ] ]
Determining the best model or models for a particular data set, a process known as Bayesian model comparison, is a critical part of probabilistic inference. Typically, this process assumes a fixed model-space (that is, a fixed set of candidate models). However, it is also possible to perform Bayesian inference over model-spaces themselves, thus determining which spaces provide the best explanation for observed data. Model-space inference (MSI) allows the effective exclusion of poorly performing models (a process analogous to Automatic Relevance Determination), and thus mitigates the well-known phenomenon of model dilution, resulting in posterior probability estimates that are, on average, more accurate than those produced when using a fixed model-space. We focus on model comparison in the context of multiple independent data sets (as produced, for example, by multi-subject behavioural or neuroimaging studies), and cast our proposal as a development of random-effects Bayesian Model Selection, the current state of the art in the field. We demonstrate the increased accuracy of MSI using simulated behavioural and neuroimaging data, as well as by assessing predictive performance on previously acquired empirical data. Additionally, we explore other applications of MSI, including formal testing for a diversity of models within a population, and comparison of model-spaces between populations. Our approach thus provides an important new tool for model comparison.
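The paper's random-effects machinery is more involved, but the core idea of scoring model-spaces can be illustrated with a simpler fixed-effects sketch: each candidate space is scored by the summed subject-level log evidence of its models under a uniform within-space prior. The log-evidence values below are hypothetical placeholders for the outputs of actual model fits.

```python
import numpy as np
from scipy.special import logsumexp

# log_evidence[s, m]: log marginal likelihood of model m for subject s
# (hypothetical values; in practice obtained from model fitting).
rng = np.random.default_rng(0)
log_evidence = rng.normal(loc=-100.0, scale=3.0, size=(20, 4))

# Candidate model-spaces: subsets of the four models.
model_spaces = {"all": [0, 1, 2, 3], "no_m3": [0, 1, 2], "pair": [0, 1]}

def log_space_evidence(log_ev, space):
    """Fixed-effects evidence for a model-space: per subject, average the
    model likelihoods under a uniform prior within the space, then sum
    log evidence across (independent) subjects."""
    sub = log_ev[:, space]                        # restrict to the space
    per_subject = logsumexp(sub, axis=1) - np.log(len(space))
    return per_subject.sum()

scores = {name: log_space_evidence(log_evidence, sp)
          for name, sp in model_spaces.items()}
# Posterior over model-spaces under a uniform prior on spaces.
names = list(scores)
logp = np.array([scores[n] for n in names])
post = np.exp(logp - logsumexp(logp))
for n, p in zip(names, post):
    print(f"P({n} | data) = {p:.3f}")
```

A smaller space that excludes poorly performing models avoids paying the "dilution" penalty of averaging over them, which is the intuition behind MSI.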
1907.07770
Ingrid Membrillo-Solis
Ingrid Membrillo-Solis, Mariam Pirashvili, Lee Steinberg, Jacek Brodzki, Jeremy G. Frey
Topology and geometry of molecular conformational spaces and energy landscapes
32 pages
null
null
null
q-bio.QM physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the geometry and topology of configuration or conformational spaces of molecules has relevant applications in chemistry and biology, such as the protein folding problem, drug design, and the structure-activity relationship problem. Despite their relevance, configuration spaces of molecules are only partially understood. In this paper we discuss both theoretical and computational approaches to the configuration spaces of molecules and their associated energy landscapes. Our mathematical approach shows that, when the symmetries of molecules are taken into account, configuration spaces of molecules give rise to certain principal bundles and orbifolds. We also make use of a variety of geometric and topological tools for data analysis to study the topology and geometry of these spaces.
[ { "created": "Thu, 18 Jul 2019 11:45:08 GMT", "version": "v1" } ]
2019-07-19
[ [ "Membrillo-Solis", "Ingrid", "" ], [ "Pirashvili", "Mariam", "" ], [ "Steinberg", "Lee", "" ], [ "Brodzki", "Jacek", "" ], [ "Frey", "Jeremy G.", "" ] ]
Understanding the geometry and topology of configuration or conformational spaces of molecules has relevant applications in chemistry and biology, such as the protein folding problem, drug design, and the structure-activity relationship problem. Despite their relevance, configuration spaces of molecules are only partially understood. In this paper we discuss both theoretical and computational approaches to the configuration spaces of molecules and their associated energy landscapes. Our mathematical approach shows that, when the symmetries of molecules are taken into account, configuration spaces of molecules give rise to certain principal bundles and orbifolds. We also make use of a variety of geometric and topological tools for data analysis to study the topology and geometry of these spaces.
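To make the geometric picture concrete, consider a molecule described by two dihedral angles: its conformational space is a flat torus, so distances between conformations must respect the 2*pi-periodicity of each angle. A minimal, self-contained sketch (hypothetical conformations, no molecular symmetries taken into account):

```python
import numpy as np

def torus_distance(a, b):
    """Geodesic distance on the flat torus (S^1 x S^1) between two
    conformations given as arrays of dihedral angles in radians."""
    d = np.abs(a - b) % (2 * np.pi)
    d = np.minimum(d, 2 * np.pi - d)   # wrap each angle into [0, pi]
    return np.linalg.norm(d)

# Hypothetical conformations of a molecule with two rotatable bonds.
conformations = np.random.default_rng(1).uniform(0, 2 * np.pi, size=(50, 2))

# Pairwise distance matrix: the input that a topological data analysis
# tool (e.g. persistent homology) would consume to detect the torus.
n = len(conformations)
D = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        D[i, j] = D[j, i] = torus_distance(conformations[i],
                                           conformations[j])
print("max pairwise geodesic distance:", D.max().round(3))
```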
1512.06475
Richard McMurtrey
Richard J. McMurtrey
Analytic and Numerical Models of Oxygen and Nutrient Diffusion, Metabolism Dynamics, and Architecture Optimization in Three-Dimensional Tissue Constructs with Applications and Insights in Cerebral Organoids
null
null
10.1089/ten.TEC.2015.0375
null
q-bio.TO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Diffusion models are important in tissue engineering as they enable an understanding of molecular delivery to cells in tissue constructs. As three-dimensional (3D) tissue constructs become larger, more intricate, and more clinically applicable, it will be essential to understand internal dynamics and signaling molecule concentrations throughout the tissue. Diffusion characteristics present a significant limitation in many engineered tissues, particularly for avascular tissues and for cells whose viability, differentiation, or function are affected by concentrations of oxygen and nutrients. This paper seeks to provide novel analytic solutions for certain cases of steady-state and non-steady-state diffusion and metabolism in 3D construct designs (planar, cylindrical, and spherical forms), solutions that otherwise require mathematical approximations achieved through numerical methods. This model is applied to cerebral organoids, where it is shown that limitations in diffusion and organoid size can be partially overcome by localizing metabolically active cells to an outer layer of a sphere, a regionalization process known to occur through neuroglial precursor migration both in organoids and in early brain development. The given prototypical solutions include a review of metabolic information for many cell types and can be broadly applied to many forms of tissue constructs. This work enables researchers to model oxygen and nutrient delivery to cells, predict cell viability, design constructs with improved diffusion capabilities, and accurately control molecular concentrations in tissue constructs. Such constructs may be used to study models of development and disease, or to condition cells for enhanced survival after insults like ischemia or implantation into the body, thereby providing a framework for better understanding and exploring the characteristics of engineered tissue constructs.
[ { "created": "Mon, 21 Dec 2015 02:30:48 GMT", "version": "v1" } ]
2015-12-22
[ [ "McMurtrey", "Richard J.", "" ] ]
Diffusion models are important in tissue engineering as they enable an understanding of molecular delivery to cells in tissue constructs. As three-dimensional (3D) tissue constructs become larger, more intricate, and more clinically applicable, it will be essential to understand internal dynamics and signaling molecule concentrations throughout the tissue. Diffusion characteristics present a significant limitation in many engineered tissues, particularly for avascular tissues and for cells whose viability, differentiation, or function are affected by concentrations of oxygen and nutrients. This paper seeks to provide novel analytic solutions for certain cases of steady-state and non-steady-state diffusion and metabolism in 3D construct designs (planar, cylindrical, and spherical forms), solutions that otherwise require mathematical approximations achieved through numerical methods. This model is applied to cerebral organoids, where it is shown that limitations in diffusion and organoid size can be partially overcome by localizing metabolically active cells to an outer layer of a sphere, a regionalization process known to occur through neuroglial precursor migration both in organoids and in early brain development. The given prototypical solutions include a review of metabolic information for many cell types and can be broadly applied to many forms of tissue constructs. This work enables researchers to model oxygen and nutrient delivery to cells, predict cell viability, design constructs with improved diffusion capabilities, and accurately control molecular concentrations in tissue constructs. Such constructs may be used to study models of development and disease, or to condition cells for enhanced survival after insults like ischemia or implantation into the body, thereby providing a framework for better understanding and exploring the characteristics of engineered tissue constructs.
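For the spherical geometry, the steady-state profile under zero-order (concentration-independent) consumption is a standard result of exactly the type discussed above: with diffusivity D, volumetric consumption rate Q, surface concentration C_s, and radius R, the profile is C(r) = C_s - Q(R^2 - r^2)/(6D), and the core becomes anoxic beyond the critical radius R_crit = sqrt(6 D C_s / Q). A sketch with hypothetical, order-of-magnitude parameters:

```python
import numpy as np

# Hypothetical parameters (orders of magnitude typical for oxygen in tissue).
D   = 2.0e-9    # oxygen diffusivity in tissue, m^2/s
Q   = 2.0e-2    # volumetric consumption rate, mol/(m^3 s) [zero-order]
C_s = 0.2       # oxygen concentration at the construct surface, mol/m^3
R   = 300e-6    # construct radius, m

# Steady-state solution of D * (1/r^2) d/dr(r^2 dC/dr) = Q on a sphere:
#   C(r) = C_s - Q * (R^2 - r^2) / (6 D)
r = np.linspace(0.0, R, 200)
C = C_s - Q * (R**2 - r**2) / (6.0 * D)

R_crit = np.sqrt(6.0 * D * C_s / Q)  # radius at which the centre hits C = 0
print(f"centre concentration: {C[0]:.4f} mol/m^3")
print(f"critical radius: {R_crit * 1e6:.0f} um")
```

For R above R_crit the formula predicts negative core concentrations, i.e. an anoxic core, which is the regime the abstract proposes to mitigate by moving metabolically active cells to the outer layer.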
1212.6209
Joshua Shaevitz
Yi Deng, Philip Coen, Mingzhai Sun, Joshua W. Shaevitz
Efficient Multiple Object Tracking Using Mutually Repulsive Active Membranes
18 pages, 6 figures, 1 table
null
10.1371/journal.pone.0065769
null
q-bio.QM cs.CV physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Studies of social and group behavior in interacting organisms require high-throughput analysis of the motion of a large number of individual subjects. Computer vision techniques offer solutions to specific tracking problems and allow automated, efficient tracking with minimal human intervention. In this work, we adopt the open active contour model to track the trajectories of moving objects at high density. We add repulsive interactions between open contours to the original model, treat the trajectories as an extrusion in the temporal dimension, and show applications to two tracking problems. The walking behavior of Drosophila is studied at different population densities and gender compositions. We demonstrate that individual male flies have distinct walking signatures, and that the social interaction between flies in a mixed-gender arena is gender specific. We also apply our model to studies of trajectories of gliding Myxococcus xanthus bacteria at high density. We examine individual gliding behavioral statistics in terms of the gliding speed distribution. Using these two examples at very different spatial scales, we illustrate the use of our algorithm in tracking both short rigid bodies (Drosophila) and long flexible objects (Myxococcus xanthus). Our repulsive active membrane model reaches error rates better than $5\times 10^{-6}$ per fly per second for Drosophila tracking and comparable results for Myxococcus xanthus.
[ { "created": "Wed, 26 Dec 2012 16:30:40 GMT", "version": "v1" } ]
2015-06-12
[ [ "Deng", "Yi", "" ], [ "Coen", "Philip", "" ], [ "Sun", "Mingzhai", "" ], [ "Shaevitz", "Joshua W.", "" ] ]
Studies of social and group behavior in interacting organisms require high-throughput analysis of the motion of a large number of individual subjects. Computer vision techniques offer solutions to specific tracking problems and allow automated, efficient tracking with minimal human intervention. In this work, we adopt the open active contour model to track the trajectories of moving objects at high density. We add repulsive interactions between open contours to the original model, treat the trajectories as an extrusion in the temporal dimension, and show applications to two tracking problems. The walking behavior of Drosophila is studied at different population densities and gender compositions. We demonstrate that individual male flies have distinct walking signatures, and that the social interaction between flies in a mixed-gender arena is gender specific. We also apply our model to studies of trajectories of gliding Myxococcus xanthus bacteria at high density. We examine individual gliding behavioral statistics in terms of the gliding speed distribution. Using these two examples at very different spatial scales, we illustrate the use of our algorithm in tracking both short rigid bodies (Drosophila) and long flexible objects (Myxococcus xanthus). Our repulsive active membrane model reaches error rates better than $5\times 10^{-6}$ per fly per second for Drosophila tracking and comparable results for Myxococcus xanthus.
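A heavily simplified sketch of the core mechanism: each open contour relaxes under internal tension and stiffness forces toward an externally supplied image force, while a short-range pairwise repulsion keeps neighbouring contours apart. The constants, force model, and update rule here are hypothetical simplifications, not the parameters of the published method.

```python
import numpy as np

def evolve_contours(contours, image_force, n_iter=100,
                    alpha=0.2, beta=0.1, k_rep=0.5, r_rep=3.0, step=0.1):
    """Relax several open contours (each an (n, 2) array of points) under
    internal smoothness forces, an external image force, and mutual
    repulsion between contours. Toy sketch, not the published method."""
    contours = [c.astype(float).copy() for c in contours]
    for _ in range(n_iter):
        new = []
        for i, c in enumerate(contours):
            f = image_force(c)                       # external attraction
            # Internal forces: tension (2nd difference) and stiffness
            # (4th difference), with free (open) ends via edge padding.
            pad = np.pad(c, ((2, 2), (0, 0)), mode="edge")
            tension = pad[1:-3] - 2 * c + pad[3:-1]
            stiff = -(pad[:-4] - 4 * pad[1:-3] + 6 * c
                      - 4 * pad[3:-1] + pad[4:])
            # Repulsion from every point of every other contour.
            rep = np.zeros_like(c)
            for j, other in enumerate(contours):
                if j == i:
                    continue
                diff = c[:, None, :] - other[None, :, :]    # (n, m, 2)
                dist = np.linalg.norm(diff, axis=2) + 1e-9
                w = k_rep * np.exp(-dist / r_rep) / dist    # short-range
                rep += (w[:, :, None] * diff).sum(axis=1)
            new.append(c + step * (f + alpha * tension
                                   + beta * stiff + rep))
        contours = new
    return contours
```

Here `image_force` is a caller-supplied function mapping contour points to attraction forces (e.g. the gradient of a smoothed intensity ridge); in practice it would be derived from the video frames.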
q-bio/0507031
Nicolas Destainville
N. Meilhac, L. Le Guyader, L. Salome, N. Destainville
Detection of confinement and jumps in single molecule membrane trajectories
4 pages, 3 figures
Phys. Rev. E 73, 011915 (2006)
10.1103/PhysRevE.73.011915
null
q-bio.QM cond-mat.stat-mech q-bio.SC
null
We propose a novel variant of the algorithm by Simson et al. [R. Simson, E.D. Sheets, K. Jacobson, Biophys. J. 69, 989 (1995)]. Their algorithm was developed to detect transient confinement zones in experimental single-particle tracking trajectories of diffusing membrane proteins or lipids. We show that our algorithm is able to detect confinement in a wider class of confining potential shapes than the original. Furthermore, it enables the detection not only of transient confinement but also of jumps between confinement zones. Such jumps are predicted by membrane-skeleton fence and picket models. In the case of experimental trajectories of $\mu$-opioid receptors, which belong to the family of G-protein-coupled receptors involved in a signal transduction pathway, this algorithm confirms that confinement cannot be explained solely by rigid fences.
[ { "created": "Wed, 20 Jul 2005 13:13:13 GMT", "version": "v1" }, { "created": "Mon, 13 Feb 2006 08:20:25 GMT", "version": "v2" } ]
2009-11-11
[ [ "Meilhac", "N.", "" ], [ "Guyader", "L. Le", "" ], [ "Salome", "L.", "" ], [ "Destainville", "N.", "" ] ]
We propose a novel variant of the algorithm by Simson et al. [R. Simson, E.D. Sheets, K. Jacobson, Biophys. J. 69, 989 (1995)]. Their algorithm was developed to detect transient confinement zones in experimental single-particle tracking trajectories of diffusing membrane proteins or lipids. We show that our algorithm is able to detect confinement in a wider class of confining potential shapes than the original. Furthermore, it enables the detection not only of transient confinement but also of jumps between confinement zones. Such jumps are predicted by membrane-skeleton fence and picket models. In the case of experimental trajectories of $\mu$-opioid receptors, which belong to the family of G-protein-coupled receptors involved in a signal transduction pathway, this algorithm confirms that confinement cannot be explained solely by rigid fences.
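For orientation, the Simson et al. index that this variant builds on can be sketched compactly: over a sliding window of duration t, the probability psi that free Brownian motion with diffusion coefficient D stays within the observed maximal displacement R is estimated via the empirical fit log10(psi) = 0.2048 - 2.5117*D*t/R^2 (constants as reported by Simson et al.), and the confinement index is L = -log10(psi) - 1 where positive. The window length and the test trajectory below are hypothetical choices:

```python
import numpy as np

def confinement_index(xy, dt, D, window=30):
    """Confinement index L along a 2D trajectory, following the empirical
    relation of Simson et al. (1995). Window length is a free choice."""
    n = len(xy)
    L = np.zeros(n)
    half = window // 2
    for i in range(half, n - half):
        seg = xy[i - half:i + half + 1]
        # R: maximal displacement from the window's starting point.
        R = np.linalg.norm(seg - seg[0], axis=1).max()
        t = window * dt
        log_psi = 0.2048 - 2.5117 * D * t / (R**2 + 1e-12)
        L[i] = max(0.0, -log_psi - 1.0)
    return L

# Hypothetical trajectory: free diffusion with a confined stretch.
rng = np.random.default_rng(2)
dt, D = 0.05, 0.01                     # s, um^2/s (hypothetical)
steps = rng.normal(0, np.sqrt(2 * D * dt), size=(600, 2))
steps[200:400] *= 0.2                  # mimic transient confinement
xy = np.cumsum(steps, axis=0)
L = confinement_index(xy, dt, D)
print("mean index, free vs 'confined':",
      L[:150].mean().round(2), L[250:350].mean().round(2))
```

Windows with small maximal displacement R relative to free diffusion yield large L, flagging transient confinement; the published variant extends this logic to broader potential shapes and to jumps.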
1509.02824
Mitchell Newberry
Mitchell G Newberry, Daniel B Ennis, Van M Savage
Testing Foundations of Biological Scaling Theory Using Automated Measurements of Vascular Networks
null
PLoS Comput Biol 11(8): e1004455
10.1371/journal.pcbi.1004455
null
q-bio.TO q-bio.PE q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Scientists have long sought to understand how vascular networks supply blood and oxygen to cells throughout the body. Recent work focuses on principles that constrain how vessel size changes through branching generations from the aorta to capillaries, and uses scaling exponents to quantify these changes. Prominent scaling theories predict that combinations of these exponents explain how metabolic, growth, and other biological rates vary with body size. Nevertheless, direct measurements of individual vessel segments have been limited because existing techniques for measuring vasculature are invasive, time consuming, and technically difficult. We developed software that extracts the length, radius, and connectivity of in vivo vessels from contrast-enhanced 3D Magnetic Resonance Angiography. Using data from 20 human subjects, we calculated scaling exponents by four methods--two derived from local properties of branching junctions and two from whole-network properties. Although these methods are often used interchangeably in the literature, we do not find general agreement between them, particularly for vessel lengths. Measurements of vessel lengths also diverge from theoretical values, but those of radii show stronger agreement. Our results demonstrate that vascular network models cannot ignore certain complexities of real vascular systems and indicate the need to discover new principles regarding vessel lengths.
[ { "created": "Wed, 9 Sep 2015 15:59:57 GMT", "version": "v1" } ]
2015-09-10
[ [ "Newberry", "Mitchell G", "" ], [ "Ennis", "Daniel B", "" ], [ "Savage", "Van M", "" ] ]
Scientists have long sought to understand how vascular networks supply blood and oxygen to cells throughout the body. Recent work focuses on principles that constrain how vessel size changes through branching generations from the aorta to capillaries, and uses scaling exponents to quantify these changes. Prominent scaling theories predict that combinations of these exponents explain how metabolic, growth, and other biological rates vary with body size. Nevertheless, direct measurements of individual vessel segments have been limited because existing techniques for measuring vasculature are invasive, time consuming, and technically difficult. We developed software that extracts the length, radius, and connectivity of in vivo vessels from contrast-enhanced 3D Magnetic Resonance Angiography. Using data from 20 human subjects, we calculated scaling exponents by four methods--two derived from local properties of branching junctions and two from whole-network properties. Although these methods are often used interchangeably in the literature, we do not find general agreement between them, particularly for vessel lengths. Measurements of vessel lengths also diverge from theoretical values, but those of radii show stronger agreement. Our results demonstrate that vascular network models cannot ignore certain complexities of real vascular systems and indicate the need to discover new principles regarding vessel lengths.
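As a concrete example of the junction-level methods mentioned above: at each bifurcation a local radius scaling exponent x can be defined through the conservation relation r_parent^x = sum_i r_child,i^x (x = 2 corresponds to area-preserving branching, x = 3 to Murray's law) and solved numerically. The vessel radii below are hypothetical:

```python
import numpy as np
from scipy.optimize import brentq

def junction_exponent(r_parent, r_children):
    """Solve r_parent^x = sum(r_child^x) for the local scaling exponent x."""
    f = lambda x: sum(rc**x for rc in r_children) - r_parent**x
    return brentq(f, 0.1, 10.0)   # bracket assumes children are thinner

# Hypothetical junctions: (parent radius, [child radii]) in mm.
junctions = [(2.0, [1.6, 1.5]), (1.6, [1.3, 1.2]), (1.5, [1.2, 1.1])]
exponents = [junction_exponent(rp, rc) for rp, rc in junctions]
print("per-junction exponents:", np.round(exponents, 2))
```

Whole-network methods instead regress vessel size against branching generation across the entire extracted tree; the abstract's point is that these two routes to the exponents need not agree on real data.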