Dataset schema (column: dtype, value range):

id: stringlengths, 9 – 13
submitter: stringlengths, 4 – 48
authors: stringlengths, 4 – 9.62k
title: stringlengths, 4 – 343
comments: stringlengths, 2 – 480
journal-ref: stringlengths, 9 – 309
doi: stringlengths, 12 – 138
report-no: stringclasses, 277 values
categories: stringlengths, 8 – 87
license: stringclasses, 9 values
orig_abstract: stringlengths, 27 – 3.76k
versions: listlengths, 1 – 15
update_date: stringlengths, 10 – 10
authors_parsed: listlengths, 1 – 147
abstract: stringlengths, 24 – 3.75k
id: 1206.4209
submitter: Jakub Otwinowski
authors: Jakub Otwinowski, Ilya Nemenman
title: Genotype to phenotype mapping and the fitness landscape of the E. coli lac promoter
comments: null
journal-ref: PloS One, 8(5), e61570 2013
doi: 10.1371/journal.pone.0061570
report-no: null
categories: q-bio.PE q-bio.GN
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Genotype-to-phenotype maps and the related fitness landscapes that include epistatic interactions are difficult to measure because of their high-dimensional structure. Here we construct such a map using the recently collected corpora of high-throughput sequence data from the 75-base-pair mutagenized E. coli lac promoter region, where each sequence is associated with its phenotype, the induced transcriptional activity measured by a fluorescent reporter. We find that the additive (non-epistatic) contributions of individual mutations account for about two-thirds of the explainable phenotype variance, while pairwise epistasis explains about 7% of the variance for the full mutagenized sequence and about 15% for the subsequence associated with protein binding sites. Surprisingly, there is no evidence for third-order epistatic contributions, and our inferred fitness landscape is essentially single-peaked, with a small amount of antagonistic epistasis. There is a significant selective pressure on the wild type, which we deduce to be multi-objective optimal for gene expression in environments with different nutrient sources. We identify transcription factor (CRP) and RNA polymerase binding sites in the promoter region and their interactions without difficult optimization steps. In particular, we observe evidence for previously unexplored genetic regulatory mechanisms, possibly kinetic in nature. We conclude with a cautionary note that inferred properties of fitness landscapes may be severely influenced by biases in the sequence data.
versions: [ { "created": "Tue, 19 Jun 2012 13:55:12 GMT", "version": "v1" }, { "created": "Fri, 25 Jan 2013 01:11:36 GMT", "version": "v2" } ]
update_date: 2014-04-04
authors_parsed: [ [ "Otwinowski", "Jakub", "" ], [ "Nemenman", "Ilya", "" ] ]
Genotype-to-phenotype maps and the related fitness landscapes that include epistatic interactions are difficult to measure because of their high-dimensional structure. Here we construct such a map using the recently collected corpora of high-throughput sequence data from the 75-base-pair mutagenized E. coli lac promoter region, where each sequence is associated with its phenotype, the induced transcriptional activity measured by a fluorescent reporter. We find that the additive (non-epistatic) contributions of individual mutations account for about two-thirds of the explainable phenotype variance, while pairwise epistasis explains about 7% of the variance for the full mutagenized sequence and about 15% for the subsequence associated with protein binding sites. Surprisingly, there is no evidence for third-order epistatic contributions, and our inferred fitness landscape is essentially single-peaked, with a small amount of antagonistic epistasis. There is a significant selective pressure on the wild type, which we deduce to be multi-objective optimal for gene expression in environments with different nutrient sources. We identify transcription factor (CRP) and RNA polymerase binding sites in the promoter region and their interactions without difficult optimization steps. In particular, we observe evidence for previously unexplored genetic regulatory mechanisms, possibly kinetic in nature. We conclude with a cautionary note that inferred properties of fitness landscapes may be severely influenced by biases in the sequence data.
id: 1709.09703
submitter: Thierry Mora
authors: Mikhail V. Pogorelyy, Anastasia A. Minervina, Dmitriy M. Chudakov, Ilgar Z. Mamedov, Yury B. Lebedev, Thierry Mora, Aleksandra M. Walczak
title: Method for identification of condition-associated public antigen receptor sequences
comments: null
journal-ref: eLife 2018;7:e33050
doi: 10.7554/eLife.33050
report-no: null
categories: q-bio.GN
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Diverse repertoires of hypervariable immunoglobulin receptors (TCR and BCR) recognize antigens in the adaptive immune system. The development of immunoglobulin receptor repertoire sequencing methods makes it possible to perform repertoire-wide disease association studies of antigen receptor sequences. We developed a statistical framework for associating receptors to disease from only a small cohort of patients, with no need for a control cohort. Our method successfully identifies previously validated Cytomegalovirus and type 1 diabetes responsive receptors.
versions: [ { "created": "Wed, 27 Sep 2017 19:11:34 GMT", "version": "v1" } ]
update_date: 2018-04-16
authors_parsed: [ [ "Pogorelyy", "Mikhail V.", "" ], [ "Minervina", "Anastasia A.", "" ], [ "Chudakov", "Dmitriy M.", "" ], [ "Mamedov", "Ilgar Z.", "" ], [ "Lebedev", "Yury B.", "" ], [ "Mora", "Thierry", "" ], [ "Walczak", "Alek...
Diverse repertoires of hypervariable immunoglobulin receptors (TCR and BCR) recognize antigens in the adaptive immune system. The development of immunoglobulin receptor repertoire sequencing methods makes it possible to perform repertoire-wide disease association studies of antigen receptor sequences. We developed a statistical framework for associating receptors to disease from only a small cohort of patients, with no need for a control cohort. Our method successfully identifies previously validated Cytomegalovirus and type 1 diabetes responsive receptors.
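The statistical framework in the record above associates receptor sequences with disease without a control cohort; the published method is more elaborate, but its core idea can be sketched as a one-sided binomial test of whether a receptor is shared by more patients than its background sharing probability predicts. All numbers below are hypothetical illustrations, not the paper's data:

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): one-sided tail probability."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical numbers: a receptor sequence observed in 7 of 10 patients,
# while its background sharing probability (e.g. from a repertoire
# generation model) is only 0.1 per individual.
p_value = binom_sf(7, 10, 0.1)
print(p_value < 1e-4)  # True: a strongly condition-associated candidate
```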
id: 2211.09010
submitter: Ryan Wilkinson
authors: Ryan Wilkinson and Marcus Roper
title: Modeling Insights from COVID-19 Incidence Data: Part I -- Comparing COVID-19 Cases Between Different-Sized Populations
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Comparing how different populations have suffered under COVID-19 is a core part of ongoing investigations into how public policy and social inequalities influence the number of and severity of COVID-19 cases. But COVID-19 incidence can vary multifold from one subpopulation to another, including between neighborhoods of the same city, making comparisons of case rates deceptive. At the same time, although epidemiological heterogeneities are increasingly well-represented in mathematical models of disease spread, fitting these models to real data on case numbers presents a tremendous challenge, as does interpreting the models to answer questions such as: Which public health policies achieve the best outcomes? Which social sacrifices are most worth making? Here we compare COVID-19 case-curves between different US states, by clustering case surges between March 2020 and March 2021 into groups with similar dynamics. We advance the hypothesis that each surge is driven by a subpopulation of COVID-19 contacting individuals, and make detecting the size of that population a step within our clustering algorithm. Clustering reveals that case trajectories in each state conform to one of a small number (4-6) of archetypal dynamics. Our results suggest that while the spread of COVID-19 in different states is heterogeneous, there are underlying universalities in the spread of the disease that may yet be predictable by models with reduced mathematical complexity. These universalities also prove to be surprisingly robust to school closures, which we choose as a common, but high social cost, public health measure.
versions: [ { "created": "Mon, 14 Nov 2022 21:12:20 GMT", "version": "v1" } ]
update_date: 2022-11-17
authors_parsed: [ [ "Wilkinson", "Ryan", "" ], [ "Roper", "Marcus", "" ] ]
Comparing how different populations have suffered under COVID-19 is a core part of ongoing investigations into how public policy and social inequalities influence the number of and severity of COVID-19 cases. But COVID-19 incidence can vary multifold from one subpopulation to another, including between neighborhoods of the same city, making comparisons of case rates deceptive. At the same time, although epidemiological heterogeneities are increasingly well-represented in mathematical models of disease spread, fitting these models to real data on case numbers presents a tremendous challenge, as does interpreting the models to answer questions such as: Which public health policies achieve the best outcomes? Which social sacrifices are most worth making? Here we compare COVID-19 case-curves between different US states, by clustering case surges between March 2020 and March 2021 into groups with similar dynamics. We advance the hypothesis that each surge is driven by a subpopulation of COVID-19 contacting individuals, and make detecting the size of that population a step within our clustering algorithm. Clustering reveals that case trajectories in each state conform to one of a small number (4-6) of archetypal dynamics. Our results suggest that while the spread of COVID-19 in different states is heterogeneous, there are underlying universalities in the spread of the disease that may yet be predictable by models with reduced mathematical complexity. These universalities also prove to be surprisingly robust to school closures, which we choose as a common, but high social cost, public health measure.
id: 1610.03627
submitter: Stephan Krohn
authors: Stephan Krohn and Dirk Ostwald
title: Computing Integrated Information
comments: Revised version of the original manuscript
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Integrated information theory (IIT) has established itself as one of the leading theories for the study of consciousness. IIT essentially proposes that quantitative consciousness is identical to maximally integrated conceptual information, quantified by a measure called $\Phi^{max}$, and that phenomenological experience corresponds to the associated set of maximally irreducible cause-effect repertoires of a physical system being in a certain state. However, in order to ultimately apply the theory to experimental data, a sufficiently general formulation is needed. With the current work, we provide this general formulation, which comprehensively and parsimoniously expresses $\Phi^{max}$ in the language of probabilistic models. Here, the stochastic process describing a system under scrutiny corresponds to a first-order time-invariant Markov process, and all necessary mathematical operations for the definition of $\Phi^{max}$ are fully specified by a system's joint probability distribution over two adjacent points in discrete time. We present a detailed constructive rule for the decomposition of a system into two disjoint subsystems based on flexible marginalization and factorization of this joint distribution. Furthermore, we suspend the approach of interventional calculus based on system perturbations, which allows us to omit undefined conditional distributions and virtualization. We validate our formulation in a previously established discrete example system, in which we furthermore address the previously unexplored theoretical issue of quale underdetermination due to non-uniqueness of maximally irreducible cause-effect repertoires, which in turn also entails the sensitivity of $\Phi^{max}$ to the shape of the conceptual structure in qualia space. In constructive spirit, we propose several modifications of the framework in order to address some of these issues.
versions: [ { "created": "Wed, 12 Oct 2016 07:47:28 GMT", "version": "v1" }, { "created": "Fri, 3 Mar 2017 11:01:11 GMT", "version": "v2" } ]
update_date: 2017-03-06
authors_parsed: [ [ "Krohn", "Stephan", "" ], [ "Ostwald", "Dirk", "" ] ]
Integrated information theory (IIT) has established itself as one of the leading theories for the study of consciousness. IIT essentially proposes that quantitative consciousness is identical to maximally integrated conceptual information, quantified by a measure called $\Phi^{max}$, and that phenomenological experience corresponds to the associated set of maximally irreducible cause-effect repertoires of a physical system being in a certain state. However, in order to ultimately apply the theory to experimental data, a sufficiently general formulation is needed. With the current work, we provide this general formulation, which comprehensively and parsimoniously expresses $\Phi^{max}$ in the language of probabilistic models. Here, the stochastic process describing a system under scrutiny corresponds to a first-order time-invariant Markov process, and all necessary mathematical operations for the definition of $\Phi^{max}$ are fully specified by a system's joint probability distribution over two adjacent points in discrete time. We present a detailed constructive rule for the decomposition of a system into two disjoint subsystems based on flexible marginalization and factorization of this joint distribution. Furthermore, we suspend the approach of interventional calculus based on system perturbations, which allows us to omit undefined conditional distributions and virtualization. We validate our formulation in a previously established discrete example system, in which we furthermore address the previously unexplored theoretical issue of quale underdetermination due to non-uniqueness of maximally irreducible cause-effect repertoires, which in turn also entails the sensitivity of $\Phi^{max}$ to the shape of the conceptual structure in qualia space. In constructive spirit, we propose several modifications of the framework in order to address some of these issues.
id: 1404.3944
submitter: Fabio Chalub
authors: Fabio A. C. C. Chalub
title: Asymptotic expression for the fixation probability of a mutant in star graphs
comments: 9 pages, 2 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the Moran process in a graph called the "star" and obtain the asymptotic expression for the fixation probability of a single mutant when the size of the graph is large. The expression obtained corrects the previously known expression announced in reference [E Lieberman, C Hauert, and MA Nowak. Evolutionary dynamics on graphs. Nature, 433(7023):312-316, 2005] and further studied in [M. Broom and J. Rychtar. An analysis of the fixation probability of a mutant on special classes of non-directed graphs. Proc. R. Soc. A-Math. Phys. Eng. Sci., 464(2098):2609-2627, 2008]. We also show that the star graph is an accelerator of evolution, if the graph is large enough.
versions: [ { "created": "Tue, 15 Apr 2014 14:57:53 GMT", "version": "v1" }, { "created": "Fri, 18 Mar 2016 17:36:35 GMT", "version": "v2" } ]
update_date: 2016-03-21
authors_parsed: [ [ "Chalub", "Fabio A. C. C.", "" ] ]
We consider the Moran process in a graph called the "star" and obtain the asymptotic expression for the fixation probability of a single mutant when the size of the graph is large. The expression obtained corrects the previously known expression announced in reference [E Lieberman, C Hauert, and MA Nowak. Evolutionary dynamics on graphs. Nature, 433(7023):312-316, 2005] and further studied in [M. Broom and J. Rychtar. An analysis of the fixation probability of a mutant on special classes of non-directed graphs. Proc. R. Soc. A-Math. Phys. Eng. Sci., 464(2098):2609-2627, 2008]. We also show that the star graph is an accelerator of evolution, if the graph is large enough.
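For context on the record above: the classical closed form for the fixation probability of a single mutant of relative fitness r in a well-mixed Moran process, and the large-N star-graph amplifier approximation of Lieberman et al. (effective fitness r^2), are sketched below. These are the textbook expressions that the paper's asymptotics refines, not the paper's corrected result:

```python
def moran_fixation(r, N):
    """Fixation probability of a single mutant with relative fitness r
    in a well-mixed Moran process on N individuals."""
    if r == 1.0:
        return 1.0 / N          # neutral limit
    return (1 - 1 / r) / (1 - r**-N)

def star_fixation_lieberman(r, N):
    """Large-N amplifier approximation for the star graph from
    Lieberman et al. (2005): the star behaves like a well-mixed
    population with effective fitness r^2."""
    return moran_fixation(r * r, N)

r, N = 1.1, 100
rho_mixed = moran_fixation(r, N)
rho_star = star_fixation_lieberman(r, N)
# The star amplifies selection: an advantageous mutant (r > 1) fixes
# with higher probability than in the well-mixed population.
print(rho_star > rho_mixed)  # True
```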
id: q-bio/0611059
submitter: Dmitri Parkhomchuk
authors: Dmitri Parkhomchuk
title: Di-nucleotide Entropy as a Measure of Genomic Sequence Functionality
comments: 10 pages, 7 figures, grammatical revision
journal-ref: null
doi: null
report-no: null
categories: q-bio.GN
license: null
Considering the vast amounts of genomic sequence of mostly unknown functionality, in-silico prediction of functional regions is an important enterprise. Many genome browsers employ GC content, which is observed to be elevated in gene-rich functional regions. This report shows that the entropy of di- and tri-nucleotide distributions provides a superior measure of genomic sequence functionality, and proposes an explanation of why the GC content must be elevated (closer to 50%) in functional regions. Regions with high entropy strongly co-localize with exons and provide genome-wide evidence of purifying selection acting on non-coding regions, such as decreased SNP density. The observations suggest that functional non-coding regions are optimised for mutation load in such a way that transition mutations have less impact on functionality than transversions, leading to a decreased transversion-to-transition ratio in functional regions.
versions: [ { "created": "Fri, 17 Nov 2006 17:30:19 GMT", "version": "v1" }, { "created": "Wed, 22 Nov 2006 15:20:00 GMT", "version": "v2" }, { "created": "Tue, 19 Dec 2006 12:01:55 GMT", "version": "v3" } ]
update_date: 2007-05-23
authors_parsed: [ [ "Parkhomchuk", "Dmitri", "" ] ]
Considering the vast amounts of genomic sequence of mostly unknown functionality, in-silico prediction of functional regions is an important enterprise. Many genome browsers employ GC content, which is observed to be elevated in gene-rich functional regions. This report shows that the entropy of di- and tri-nucleotide distributions provides a superior measure of genomic sequence functionality, and proposes an explanation of why the GC content must be elevated (closer to 50%) in functional regions. Regions with high entropy strongly co-localize with exons and provide genome-wide evidence of purifying selection acting on non-coding regions, such as decreased SNP density. The observations suggest that functional non-coding regions are optimised for mutation load in such a way that transition mutations have less impact on functionality than transversions, leading to a decreased transversion-to-transition ratio in functional regions.
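The measure described in the record above — the entropy of the dinucleotide distribution of a sequence — is straightforward to compute. A minimal sketch (overlapping dinucleotides, Shannon entropy in bits; the paper's windowing and normalization details are not reproduced here):

```python
from math import log2
from collections import Counter

def dinucleotide_entropy(seq):
    """Shannon entropy (bits) of the distribution of overlapping
    dinucleotides in a DNA sequence; 16 equiprobable dinucleotides
    give the maximum of 4 bits."""
    pairs = [seq[i:i + 2] for i in range(len(seq) - 1)]
    counts = Counter(pairs)
    total = len(pairs)
    return sum((c / total) * log2(total / c) for c in counts.values())

# A homopolymer carries no dinucleotide information ...
print(dinucleotide_entropy("AAAAAAAA"))           # 0.0
# ... while a balanced repeat approaches 2 bits (4 dinucleotide types).
print(round(dinucleotide_entropy("ACGT" * 64), 2))  # 2.0
```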
id: 1309.2087
submitter: Joseph Ryan
authors: Joseph F. Ryan
title: Baa.pl: A tool to evaluate de novo genome assemblies with RNA transcripts
comments: 9 pages, 2 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.GN
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Assessing the correctness of genome assemblies is an important step in any genome project. Several methods exist, but most are computationally intensive and, in some cases, inappropriate. Here I present baa.pl, a fast and easy-to-use program that uses transcript data to evaluate genomic assemblies. Through simulations using human chromosome 22, I show that baa.pl excels at detecting levels of missing sequence and contiguity. The program is freely available at: https://github.com/josephryan/baa.pl
versions: [ { "created": "Mon, 9 Sep 2013 09:48:09 GMT", "version": "v1" }, { "created": "Fri, 7 Feb 2014 19:44:38 GMT", "version": "v2" } ]
update_date: 2014-02-10
authors_parsed: [ [ "Ryan", "Joseph F.", "" ] ]
Assessing the correctness of genome assemblies is an important step in any genome project. Several methods exist, but most are computationally intensive and, in some cases, inappropriate. Here I present baa.pl, a fast and easy-to-use program that uses transcript data to evaluate genomic assemblies. Through simulations using human chromosome 22, I show that baa.pl excels at detecting levels of missing sequence and contiguity. The program is freely available at: https://github.com/josephryan/baa.pl
id: 2007.13115
submitter: Brian A Ference
authors: B.A. Ference, G. Davey Smith, M. V. Holmes, A. L. Catapano, K. K. Ray, S. J. Nicholls
title: Challenges in constructing genetic instruments for pharmacologic therapies
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.QM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The genes that encode the targets of most therapies do not have rare variants with large-effect or common variants with moderate effects on the biomarker reflecting the pharmacologic action of the corresponding therapy. Therefore, providing genetic target validation for most therapies is challenging. Novel methods are being developed to combine multiple variants in the gene encoding the target of a therapy that are weakly associated with the biomarker reflecting the pharmacologic action of that therapy into a genetic score that can be used as an adequate instrumental variable. We describe one approach to solve this important problem.
versions: [ { "created": "Sun, 26 Jul 2020 12:22:16 GMT", "version": "v1" } ]
update_date: 2020-07-28
authors_parsed: [ [ "Ference", "B. A.", "" ], [ "Smith", "G. Davey", "" ], [ "Holmes", "M. V.", "" ], [ "Catapano", "A. L.", "" ], [ "Ray", "K. K.", "" ], [ "Nicholls", "S. J.", "" ] ]
The genes that encode the targets of most therapies do not have rare variants with large-effect or common variants with moderate effects on the biomarker reflecting the pharmacologic action of the corresponding therapy. Therefore, providing genetic target validation for most therapies is challenging. Novel methods are being developed to combine multiple variants in the gene encoding the target of a therapy that are weakly associated with the biomarker reflecting the pharmacologic action of that therapy into a genetic score that can be used as an adequate instrumental variable. We describe one approach to solve this important problem.
id: 1005.0348
submitter: Peter Csermely
authors: Peter Csermely, Robin Palotai and Ruth Nussinov
title: Induced fit, conformational selection and independent dynamic segments: an extended view of binding events
comments: 9 pages, 2 Figures, 1 Table, 2 boxes, Trends in Biochemical Sciences 2010 October issue cover story
journal-ref: Trends in Biochemical Sciences 2010 vol. 35, 539-546
doi: 10.1016/j.tibs.2010.04.009
report-no: null
categories: q-bio.BM nlin.AO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single molecule and NMR measurements of protein dynamics increasingly uncover the complexity of binding scenarios. Here we describe an extended conformational selection model which embraces a repertoire of selection and adjustment processes. Induced fit can be viewed as a subset of this repertoire, whose contribution is affected by the bond-types stabilizing the interaction and the differences between the interacting partners. We argue that protein segments whose dynamics are distinct from the rest of the protein ('discrete breathers') can govern conformational transitions and allosteric propagation that accompany binding processes, and as such may be more sensitive to mutational events. Additionally, we highlight the dynamic complexity of binding scenarios as they relate to events such as aggregation and signalling, and the crowded cellular environment.
versions: [ { "created": "Mon, 3 May 2010 17:14:52 GMT", "version": "v1" }, { "created": "Sat, 2 Oct 2010 08:43:45 GMT", "version": "v2" } ]
update_date: 2010-10-05
authors_parsed: [ [ "Csermely", "Peter", "" ], [ "Palotai", "Robin", "" ], [ "Nussinov", "Ruth", "" ] ]
Single molecule and NMR measurements of protein dynamics increasingly uncover the complexity of binding scenarios. Here we describe an extended conformational selection model which embraces a repertoire of selection and adjustment processes. Induced fit can be viewed as a subset of this repertoire, whose contribution is affected by the bond-types stabilizing the interaction and the differences between the interacting partners. We argue that protein segments whose dynamics are distinct from the rest of the protein ('discrete breathers') can govern conformational transitions and allosteric propagation that accompany binding processes, and as such may be more sensitive to mutational events. Additionally, we highlight the dynamic complexity of binding scenarios as they relate to events such as aggregation and signalling, and the crowded cellular environment.
id: 1910.06119
submitter: Christoph Von Tycowicz
authors: Christoph von Tycowicz
title: Towards Shape-based Knee Osteoarthritis Classification using Graph Convolutional Networks
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.QM eess.IV
license: http://creativecommons.org/licenses/by/4.0/
We present a transductive learning approach for morphometric osteophyte grading based on geometric deep learning. We formulate the grading task as a semi-supervised node classification problem on a graph embedded in shape space. To account for the high dimensionality and non-Euclidean structure of shape space, we employ a combination of an intrinsic dimension reduction together with a graph convolutional neural network. We demonstrate the performance of our derived classifier in comparison to an alternative extrinsic approach.
versions: [ { "created": "Fri, 11 Oct 2019 10:38:03 GMT", "version": "v1" } ]
update_date: 2019-10-15
authors_parsed: [ [ "von Tycowicz", "Christoph", "" ] ]
We present a transductive learning approach for morphometric osteophyte grading based on geometric deep learning. We formulate the grading task as a semi-supervised node classification problem on a graph embedded in shape space. To account for the high dimensionality and non-Euclidean structure of shape space, we employ a combination of an intrinsic dimension reduction together with a graph convolutional neural network. We demonstrate the performance of our derived classifier in comparison to an alternative extrinsic approach.
id: 0910.0429
submitter: Lalit Ponnala
authors: Lalit Ponnala
title: On the clustering of rare codons and its effect on translation
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.GN q-bio.QM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The presence of clusters of rare codons is known to negatively impact the efficiency and accuracy of protein production. In this paper, we demonstrate a statistical method of identifying such clusters in the coding sequence of a gene. Using E. coli as our model organism, we show that genes having denser clusters tend to have lower protein yields.
versions: [ { "created": "Fri, 2 Oct 2009 15:35:21 GMT", "version": "v1" } ]
update_date: 2009-10-05
authors_parsed: [ [ "Ponnala", "Lalit", "" ] ]
The presence of clusters of rare codons is known to negatively impact the efficiency and accuracy of protein production. In this paper, we demonstrate a statistical method of identifying such clusters in the coding sequence of a gene. Using E. coli as our model organism, we show that genes having denser clusters tend to have lower protein yields.
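The cluster detection described in the record above can be illustrated with a sliding-window density scan. The rare-codon set and the threshold idea below are purely illustrative assumptions, not the paper's statistical method or its codon list:

```python
# Hypothetical set of rare E. coli codons (illustrative only) and a
# simple sliding-window cluster score over a coding sequence.
RARE = {"AGG", "AGA", "CTA", "CGA", "CCC"}

def rare_codon_density(cds, window=10):
    """Fraction of rare codons in each window of `window` codons;
    a 'cluster' is a window whose density clears some threshold."""
    codons = [cds[i:i + 3] for i in range(0, len(cds) - 2, 3)]
    return [sum(c in RARE for c in codons[i:i + window]) / window
            for i in range(len(codons) - window + 1)]

# Toy CDS with a rare-codon run near the start, then common codons.
cds = "ATG" + "AGGAGA" * 3 + "GAA" * 20
dens = rare_codon_density(cds)
print(max(dens))  # 0.6 -- the densest window sits over the AGG/AGA run
```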
id: 0802.0556
submitter: Tijana Milenković
authors: Tijana Milenkovic and Natasa Przulj
title: Uncovering Biological Network Function via Graphlet Degree Signatures
comments: First submitted to Nature Biotechnology on July 16, 2007. Presented at BioPathways'07 pre-conference of ISMB/ECCB'07, July 19-20, 2007, Vienna, Austria. Published in full in the Posters section of the Schedule of the RECOMB Satellite Conference on Systems Biology, November 30 - December 1, 2007, University of California, San Diego, USA
journal-ref: null
doi: null
report-no: Technical Report No. 08-01
categories: q-bio.MN
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proteins are essential macromolecules of life and thus understanding their function is of great importance. The number of functionally unclassified proteins is large even for simple and well studied organisms such as baker's yeast. Methods for determining protein function have shifted their focus from targeting specific proteins based solely on sequence homology to analyses of the entire proteome based on protein-protein interaction (PPI) networks. Since proteins aggregate to perform a certain function, analyzing structural properties of PPI networks may provide useful clues about the biological function of individual proteins, protein complexes they participate in, and even larger subcellular machines. We design a sensitive graph theoretic method for comparing local structures of node neighborhoods that demonstrates that in PPI networks, biological function of a node and its local network structure are closely related. The method groups topologically similar proteins under this measure in a PPI network and shows that these protein groups belong to the same protein complexes, perform the same biological functions, are localized in the same subcellular compartments, and have the same tissue expressions. Moreover, we apply our technique on a proteome-scale network data and infer biological function of yet unclassified proteins demonstrating that our method can provide valuable guidelines for future experimental research.
versions: [ { "created": "Tue, 5 Feb 2008 07:35:24 GMT", "version": "v1" } ]
update_date: 2008-02-06
authors_parsed: [ [ "Milenkovic", "Tijana", "" ], [ "Przulj", "Natasa", "" ] ]
Proteins are essential macromolecules of life and thus understanding their function is of great importance. The number of functionally unclassified proteins is large even for simple and well studied organisms such as baker's yeast. Methods for determining protein function have shifted their focus from targeting specific proteins based solely on sequence homology to analyses of the entire proteome based on protein-protein interaction (PPI) networks. Since proteins aggregate to perform a certain function, analyzing structural properties of PPI networks may provide useful clues about the biological function of individual proteins, protein complexes they participate in, and even larger subcellular machines. We design a sensitive graph theoretic method for comparing local structures of node neighborhoods that demonstrates that in PPI networks, biological function of a node and its local network structure are closely related. The method groups topologically similar proteins under this measure in a PPI network and shows that these protein groups belong to the same protein complexes, perform the same biological functions, are localized in the same subcellular compartments, and have the same tissue expressions. Moreover, we apply our technique on a proteome-scale network data and infer biological function of yet unclassified proteins demonstrating that our method can provide valuable guidelines for future experimental research.
id: 1811.07341
submitter: Rajesh Karmakar
authors: Rajesh Karmakar
title: Noise in gene expression may be a choice of cellular system
comments: Accepted for publication in International Journal of Modern Physics C
journal-ref: null
doi: 10.1142/S0129183119500141
report-no: null
categories: q-bio.MN
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene expression and its regulation is a nonequilibrium stochastic process. Different molecules are involved in the several biochemical steps of this process, in low copy numbers. It is observed that the stochasticity in biochemical processes is mainly due to the low copy numbers of the molecules present in the system. Several studies also show that nonequilibrium biochemical processes carry an energy cost. But the cellular system has evolved to minimize energy cost for optimum output. Here we study the role of stochasticity qualitatively in a network of two genes using a stochastic simulation method, and approximately measure the energy consumption of the gene expression process. We find that noise in the gene expression process reduces the energy cost of protein synthesis. Therefore, we argue that stochasticity in gene expression may be a choice of the cellular system for protein synthesis with minimum energy cost.
versions: [ { "created": "Sun, 18 Nov 2018 15:44:34 GMT", "version": "v1" }, { "created": "Mon, 25 Feb 2019 13:52:50 GMT", "version": "v2" }, { "created": "Sun, 31 Mar 2019 16:40:35 GMT", "version": "v3" } ]
update_date: 2019-05-22
authors_parsed: [ [ "Karmakar", "Rajesh", "" ] ]
Gene expression and its regulation is a nonequilibrium stochastic process. Different molecules are involved in the several biochemical steps of this process, in low copy numbers. It is observed that the stochasticity in biochemical processes is mainly due to the low copy numbers of the molecules present in the system. Several studies also show that nonequilibrium biochemical processes carry an energy cost. But the cellular system has evolved to minimize energy cost for optimum output. Here we study the role of stochasticity qualitatively in a network of two genes using a stochastic simulation method, and approximately measure the energy consumption of the gene expression process. We find that noise in the gene expression process reduces the energy cost of protein synthesis. Therefore, we argue that stochasticity in gene expression may be a choice of the cellular system for protein synthesis with minimum energy cost.
id: q-bio/0310002
submitter: Danilo Menicucci
authors: R. Balocchi, D. Menicucci, E. Santarcangelo, L. Sebastiani, A. Gemignani, B. Ghelarducci, M. Varanini
title: Deriving the respiratory sinus arrhythmia from the heartbeat time series using Empirical Mode Decomposition
comments: 12 pages, 6 figures. Will be published on "Chaos, Solitons and Fractals"
journal-ref: null
doi: 10.1016/S0960-0779(03)00441-7
report-no: null
categories: q-bio.TO q-bio.QM
license: null
Heart rate variability (HRV) is a well-known phenomenon whose characteristics are of great clinical relevance in pathophysiologic investigations. In particular, respiration is a powerful modulator of HRV contributing to the oscillations at highest frequency. Like almost all natural phenomena, HRV is the result of many nonlinearly interacting processes; therefore any linear analysis has the potential risk of underestimating, or even missing, a great amount of information content. Recently the technique of Empirical Mode Decomposition (EMD) has been proposed as a new tool for the analysis of nonlinear and nonstationary data. We applied EMD analysis to decompose the heartbeat intervals series, derived from one electrocardiographic (ECG) signal of 13 subjects, into their components in order to identify the modes associated with breathing. After each decomposition the mode showing the highest frequency and the corresponding respiratory signal were Hilbert transformed and the instantaneous phases extracted were then compared. The results obtained indicate a synchronization of order 1:1 between the two series proving the existence of phase and frequency coupling between the component associated with breathing and the respiratory signal itself in all subjects.
[ { "created": "Wed, 1 Oct 2003 20:14:27 GMT", "version": "v1" } ]
2015-06-26
[ [ "Balocchi", "R.", "" ], [ "Menicucci", "D.", "" ], [ "Santarcangelo", "E.", "" ], [ "Sebastiani", "L.", "" ], [ "Gemignani", "A.", "" ], [ "Ghelarducci", "B.", "" ], [ "Varanini", "M.", "" ] ]
Heart rate variability (HRV) is a well-known phenomenon whose characteristics are of great clinical relevance in pathophysiologic investigations. In particular, respiration is a powerful modulator of HRV contributing to the oscillations at highest frequency. Like almost all natural phenomena, HRV is the result of many nonlinearly interacting processes; therefore any linear analysis has the potential risk of underestimating, or even missing, a great amount of information content. Recently the technique of Empirical Mode Decomposition (EMD) has been proposed as a new tool for the analysis of nonlinear and nonstationary data. We applied EMD analysis to decompose the heartbeat intervals series, derived from one electrocardiographic (ECG) signal of 13 subjects, into their components in order to identify the modes associated with breathing. After each decomposition the mode showing the highest frequency and the corresponding respiratory signal were Hilbert transformed and the instantaneous phases extracted were then compared. The results obtained indicate a synchronization of order 1:1 between the two series proving the existence of phase and frequency coupling between the component associated with breathing and the respiratory signal itself in all subjects.
1708.02626
Ruth Davidson
Yingying Ren, Sihan Zha, Jingwen Bi, Jos\'e A. Sanchez, Cara Monical, Michelle Delcourt, Rosemary K. Guzman, and Ruth Davidson
A combinatorial method for connecting BHV spaces representing different numbers of taxa
Updated section on applications and link to github software release
null
null
null
q-bio.QM math.CO q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The phylogenetic tree space introduced by Billera, Holmes, and Vogtmann (BHV tree space) is a CAT(0) continuous space that represents trees with edge weights, with an intrinsic geodesic distance measure. The geodesic distance measure unique to BHV tree space is well known to be computable in polynomial time, which makes it a potentially powerful tool for optimization problems in phylogenetics and phylogenomics. Specifically, there is significant interest in comparing and combining phylogenetic trees. For example, BHV tree space has been shown to be potentially useful in tree summary and consensus methods, which require combining trees with different numbers of leaves. Yet an open problem is to transition between BHV tree spaces of different maximal dimension, where each maximal dimension corresponds to the complete set of edge-weighted trees with a fixed number of leaves. We present a combinatorial method for transitioning between copies of BHV tree spaces in which trees with different numbers of taxa can be studied, derived from the space's topological structure and geometric properties. This method removes obstacles for embedding problems such as supertree and consensus methods in the BHV tree space framework.
[ { "created": "Tue, 8 Aug 2017 19:51:03 GMT", "version": "v1" }, { "created": "Sun, 3 Dec 2017 20:44:10 GMT", "version": "v2" } ]
2021-10-29
[ [ "Ren", "Yingying", "" ], [ "Zha", "Sihan", "" ], [ "Bi", "Jingwen", "" ], [ "Sanchez", "José A.", "" ], [ "Monical", "Cara", "" ], [ "Delcourt", "Michelle", "" ], [ "Guzman", "Rosemary K.", "" ], [ ...
The phylogenetic tree space introduced by Billera, Holmes, and Vogtmann (BHV tree space) is a CAT(0) continuous space that represents trees with edge weights, with an intrinsic geodesic distance measure. The geodesic distance measure unique to BHV tree space is well known to be computable in polynomial time, which makes it a potentially powerful tool for optimization problems in phylogenetics and phylogenomics. Specifically, there is significant interest in comparing and combining phylogenetic trees. For example, BHV tree space has been shown to be potentially useful in tree summary and consensus methods, which require combining trees with different numbers of leaves. Yet an open problem is to transition between BHV tree spaces of different maximal dimension, where each maximal dimension corresponds to the complete set of edge-weighted trees with a fixed number of leaves. We present a combinatorial method for transitioning between copies of BHV tree spaces in which trees with different numbers of taxa can be studied, derived from the space's topological structure and geometric properties. This method removes obstacles for embedding problems such as supertree and consensus methods in the BHV tree space framework.
0905.3262
Kavita Jain
Kavita Jain
Time to fixation in the presence of recombination
Corrected definition of linkage disequilibrium, new Section 5 added; Version to appear in Theo. Pop. Biol
Theo. Pop. Biol. 77, 23 (2010)
null
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the evolutionary dynamics of a haploid population of infinite size recombining with a probability $r$ in a two-locus model. Starting from a low-fitness locus, the population is evolved under mutation, selection and recombination until a finite fraction of the population reaches the fittest locus. An analytical method is developed to calculate the fixation time $T$ to the fittest locus for various choices of epistasis. We find that (1) for negative epistasis, $T$ decreases slowly for small $r$ but decays fast at larger $r$; (2) for positive epistasis, $T$ increases linearly for small $r$ and mildly for large $r$; (3) for compensatory mutation, $T$ diverges as a power law with logarithmic corrections as the recombination fraction approaches a critical value. Our calculations are seen to be in good agreement with the exact numerical results.
[ { "created": "Wed, 20 May 2009 10:01:59 GMT", "version": "v1" }, { "created": "Thu, 22 Oct 2009 08:21:59 GMT", "version": "v2" } ]
2010-05-04
[ [ "Jain", "Kavita", "" ] ]
We study the evolutionary dynamics of a haploid population of infinite size recombining with a probability $r$ in a two-locus model. Starting from a low-fitness locus, the population is evolved under mutation, selection and recombination until a finite fraction of the population reaches the fittest locus. An analytical method is developed to calculate the fixation time $T$ to the fittest locus for various choices of epistasis. We find that (1) for negative epistasis, $T$ decreases slowly for small $r$ but decays fast at larger $r$; (2) for positive epistasis, $T$ increases linearly for small $r$ and mildly for large $r$; (3) for compensatory mutation, $T$ diverges as a power law with logarithmic corrections as the recombination fraction approaches a critical value. Our calculations are seen to be in good agreement with the exact numerical results.
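The setup above can be sketched numerically: a deterministic (infinite-population) recurrence for genotype frequencies at two loci, evolved under selection, one-way mutation (a→A, b→B) and recombination with probability r until the fittest genotype AB reaches a finite frequency. The parameter values (mutation rate u, fitnesses w, threshold) are hypothetical, and this is not the paper's analytical method.

```python
# Illustrative sketch, not the paper's calculation. Genotype order: ab, Ab, aB, AB.
def fixation_time(r, u=1e-4, w=(1.0, 1.02, 1.02, 1.10), threshold=0.5):
    x = [1.0, 0.0, 0.0, 0.0]   # start from the low-fitness genotype ab
    t = 0
    while x[3] < threshold:
        # Selection: weight by fitness and renormalise.
        mean_w = sum(xi * wi for xi, wi in zip(x, w))
        x = [xi * wi / mean_w for xi, wi in zip(x, w)]
        # Recombination: D = x_ab*x_AB - x_Ab*x_aB is the linkage disequilibrium.
        D = x[0] * x[3] - x[1] * x[2]
        x = [x[0] - r * D, x[1] + r * D, x[2] + r * D, x[3] - r * D]
        # One-way mutation at each locus with probability u.
        ab, Ab, aB, AB = x
        x = [ab * (1 - u) ** 2,
             Ab * (1 - u) + ab * u * (1 - u),
             aB * (1 - u) + ab * u * (1 - u),
             AB + Ab * u + aB * u + ab * u * u]
        t += 1
    return t
```

With these placeholder fitnesses the epistasis is positive (w_AB > w_Ab * w_aB), the regime in which the abstract reports T increasing with r.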
1205.6010
Filippo Utro
Davide Corona, Valeria Di Benedetto, Raffaele Giancarlo and Filippo Utro
The Chromatin Organization of an Eukaryotic Genome : Sequence Specific+ Statistical=Combinatorial (Extended Abstract)
Work presented at the 8th SIBBM Seminar (Annual Conference Meeting of the Italian Biophysics and Molecular Biology Society)- May 24-26 2012, Palermo, Italy
null
null
null
q-bio.GN cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nucleosome organization in eukaryotic genomes has a deep impact on gene function. Although progress has recently been made in identifying various concurring factors that influence nucleosome positioning, it is still unclear whether nucleosome positions are dictated by sequence or determined by a random process. It has long been postulated that, in the proximity of the TSS, a barrier determines the position of the +1 nucleosome, and geometric constraints then alter the random positioning process, determining nucleosomal phasing. Such a pattern fades out as one moves away from the barrier, again becoming a random positioning process. Although this statistical model is widely accepted, the molecular nature of the barrier is still unknown. Moreover, we are far from identifying a set of sequence rules able to account for genome-wide nucleosome organization; to explain the nature of the barriers on which the statistical mechanism hinges; and to allow for a smooth transition from sequence-dictated to statistical positioning and back. We show that sequence complexity, quantified via various methods, can be a rule that at least partially accounts for all of the above. In particular, we have conducted our analyses on four high-resolution nucleosome maps of model eukaryotes and found that nucleosome-depleted regions can be well distinguished from nucleosome-enriched regions by sequence complexity measures. Specifically, (a) the depleted regions are less complex than the enriched ones, and (b) around the TSS, complexity measures alone are in striking agreement with in vivo nucleosome occupancy, precisely indicating the positions of the +1 and -1 nucleosomes. These findings indicate that the intrinsic richness of subsequences within sequences plays a role in nucleosome formation in genomes, and that sequence complexity constitutes the molecular nature of the nucleosome barrier.
[ { "created": "Sun, 27 May 2012 23:49:33 GMT", "version": "v1" } ]
2012-05-29
[ [ "Corona", "Davide", "" ], [ "Di Benedetto", "Valeria", "" ], [ "Giancarlo", "Raffaele", "" ], [ "Utro", "Filippo", "" ] ]
Nucleosome organization in eukaryotic genomes has a deep impact on gene function. Although progress has recently been made in identifying various concurring factors that influence nucleosome positioning, it is still unclear whether nucleosome positions are dictated by sequence or determined by a random process. It has long been postulated that, in the proximity of the TSS, a barrier determines the position of the +1 nucleosome, and geometric constraints then alter the random positioning process, determining nucleosomal phasing. Such a pattern fades out as one moves away from the barrier, again becoming a random positioning process. Although this statistical model is widely accepted, the molecular nature of the barrier is still unknown. Moreover, we are far from identifying a set of sequence rules able to account for genome-wide nucleosome organization; to explain the nature of the barriers on which the statistical mechanism hinges; and to allow for a smooth transition from sequence-dictated to statistical positioning and back. We show that sequence complexity, quantified via various methods, can be a rule that at least partially accounts for all of the above. In particular, we have conducted our analyses on four high-resolution nucleosome maps of model eukaryotes and found that nucleosome-depleted regions can be well distinguished from nucleosome-enriched regions by sequence complexity measures. Specifically, (a) the depleted regions are less complex than the enriched ones, and (b) around the TSS, complexity measures alone are in striking agreement with in vivo nucleosome occupancy, precisely indicating the positions of the +1 and -1 nucleosomes. These findings indicate that the intrinsic richness of subsequences within sequences plays a role in nucleosome formation in genomes, and that sequence complexity constitutes the molecular nature of the nucleosome barrier.
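One concrete sequence-complexity measure of the kind the abstract above invokes (the authors use several; this particular k-mer measure is our illustrative choice, not necessarily theirs): the fraction of distinct k-mers observed out of the maximum possible, so low-complexity windows score lower.

```python
# Hypothetical example measure: distinct k-mer fraction of a DNA window.
def kmer_complexity(seq, k=3):
    n_windows = len(seq) - k + 1
    if n_windows <= 0:
        return 0.0
    distinct = len({seq[i:i + k] for i in range(n_windows)})
    # Normalise by the maximum number of distinct k-mers this window allows.
    return distinct / min(n_windows, 4 ** k)

# A homopolymer run scores far lower than a mixed sequence.
assert kmer_complexity("AAAAAAAA") < kmer_complexity("ACGTACGA")
```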
1308.0718
Ling Xue Ms
Ling Xue and Caterina Scoglio
The network-level reproduction number and extinction threshold for vector-borne diseases
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/3.0/
The reproduction number of deterministic models is an essential quantity for predicting whether an epidemic will spread or die out. Thresholds for disease extinction contribute crucial knowledge on the control, elimination, and mitigation of infectious diseases. Relationships between the basic reproduction numbers of two network-based ordinary differential equation vector-host models and the extinction thresholds of the corresponding continuous-time Markov chain models are derived under some assumptions. Numerical simulation results for malaria and Rift Valley fever transmission on heterogeneous networks agree with the analytical results even without those assumptions, suggesting that the relationships may always hold and posing the mathematical problem of proving their existence in general. Moreover, numerical simulations show that the reproduction number is not monotonically increasing or decreasing with the extinction threshold. Key parameters in predicting the uncertainty of extinction thresholds are identified using Latin Hypercube Sampling/Partial Rank Correlation Coefficients. Consistent trends in extinction probability observed through numerical simulations provide novel insights into mitigation strategies that increase the disease extinction probability. These findings may improve understanding of thresholds for disease persistence and thereby help control vector-borne diseases.
[ { "created": "Sat, 3 Aug 2013 16:43:23 GMT", "version": "v1" }, { "created": "Tue, 6 Aug 2013 21:23:45 GMT", "version": "v2" } ]
2013-08-08
[ [ "Xue", "Ling", "" ], [ "Scoglio", "Caterina", "" ] ]
The reproduction number of deterministic models is an essential quantity for predicting whether an epidemic will spread or die out. Thresholds for disease extinction contribute crucial knowledge on the control, elimination, and mitigation of infectious diseases. Relationships between the basic reproduction numbers of two network-based ordinary differential equation vector-host models and the extinction thresholds of the corresponding continuous-time Markov chain models are derived under some assumptions. Numerical simulation results for malaria and Rift Valley fever transmission on heterogeneous networks agree with the analytical results even without those assumptions, suggesting that the relationships may always hold and posing the mathematical problem of proving their existence in general. Moreover, numerical simulations show that the reproduction number is not monotonically increasing or decreasing with the extinction threshold. Key parameters in predicting the uncertainty of extinction thresholds are identified using Latin Hypercube Sampling/Partial Rank Correlation Coefficients. Consistent trends in extinction probability observed through numerical simulations provide novel insights into mitigation strategies that increase the disease extinction probability. These findings may improve understanding of thresholds for disease persistence and thereby help control vector-borne diseases.
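As background to the abstract above: for the simplest (non-network) vector-host model, the basic reproduction number is the spectral radius of the next-generation matrix K = F V^{-1}; because hosts infect only vectors and vice versa, K has zero diagonal and the spectral radius reduces to a square root. A generic sketch with placeholder rates, not the paper's network model:

```python
# Hedged illustration: Ross-Macdonald-style R0 for a two-compartment
# vector-host model. beta_hv/beta_vh are cross-transmission rates,
# gamma_h the host recovery rate, mu_v the vector mortality rate.
def r0_vector_host(beta_hv, beta_vh, gamma_h, mu_v):
    # Next-generation matrix K = [[0, beta_hv/mu_v], [beta_vh/gamma_h, 0]];
    # its spectral radius is the square root of the off-diagonal product.
    return ((beta_hv / mu_v) * (beta_vh / gamma_h)) ** 0.5
```

In the deterministic model, R0 > 1 predicts spread and R0 < 1 predicts die-out; the paper relates such R0 values to stochastic extinction thresholds.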
0803.0367
Reginald Smith
Reginald D. Smith
Plant-Mycorrhiza Percent Infection as Evidence of Coupled Metabolism
11 pages; accepted by the Journal of Theoretical Biology (in press)
Journal of Theoretical Biology 259 (2009), pp. 172-175
10.1016/j.jtbi.2009.02.020
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A common feature of mycorrhizal observations is to measure the growth of the infection on the plant root as a percentage of the infected root or root tip length. Often, this follows a logistic curve with an eventual, though usually transient, plateau. It is shown in this paper that the periods of stable percent infection in the mycorrhizal growth cycle correspond to periods where the plant and mycorrhiza growth rates, and likely their metabolisms, are tightly coupled.
[ { "created": "Tue, 4 Mar 2008 04:41:44 GMT", "version": "v1" }, { "created": "Sun, 14 Dec 2008 03:14:42 GMT", "version": "v2" }, { "created": "Sat, 28 Feb 2009 05:13:37 GMT", "version": "v3" } ]
2009-05-26
[ [ "Smith", "Reginald D.", "" ] ]
A common feature of mycorrhizal observations is to measure the growth of the infection on the plant root as a percentage of the infected root or root tip length. Often, this follows a logistic curve with an eventual, though usually transient, plateau. It is shown in this paper that the periods of stable percent infection in the mycorrhizal growth cycle correspond to periods where the plant and mycorrhiza growth rates, and likely their metabolisms, are tightly coupled.
2405.07295
Rajat Saxena
Rajat Saxena, Bruce L. McNaughton
Bridging Neuroscience and AI: Environmental Enrichment as a Model for Forward Knowledge Transfer
23 pages, 5 figures
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Continual learning (CL) refers to an agent's capability to learn from a continuous stream of data and transfer knowledge without forgetting old information. One crucial aspect of CL is forward transfer, i.e., improved and faster learning on a new task by leveraging information from prior knowledge. While this ability comes naturally to biological brains, it poses a significant challenge for artificial intelligence (AI). Here, we suggest that environmental enrichment (EE) can be used as a biological model for studying forward transfer, inspiring human-like AI development. EE refers to animal studies that enhance cognitive, social, motor, and sensory stimulation and is a model for what, in humans, is referred to as 'cognitive reserve'. Enriched animals show significant improvement in learning speed and performance on new tasks, typically exhibiting forward transfer. We explore anatomical, molecular, and neuronal changes post-EE and discuss how artificial neural networks (ANNs) can be used to predict neural computation changes after enriched experiences. Finally, we provide a synergistic way of combining neuroscience and AI research that paves the path toward developing AI capable of rapid and efficient new task learning.
[ { "created": "Sun, 12 May 2024 14:33:50 GMT", "version": "v1" }, { "created": "Wed, 12 Jun 2024 23:59:48 GMT", "version": "v2" } ]
2024-06-14
[ [ "Saxena", "Rajat", "" ], [ "McNaughton", "Bruce L.", "" ] ]
Continual learning (CL) refers to an agent's capability to learn from a continuous stream of data and transfer knowledge without forgetting old information. One crucial aspect of CL is forward transfer, i.e., improved and faster learning on a new task by leveraging information from prior knowledge. While this ability comes naturally to biological brains, it poses a significant challenge for artificial intelligence (AI). Here, we suggest that environmental enrichment (EE) can be used as a biological model for studying forward transfer, inspiring human-like AI development. EE refers to animal studies that enhance cognitive, social, motor, and sensory stimulation and is a model for what, in humans, is referred to as 'cognitive reserve'. Enriched animals show significant improvement in learning speed and performance on new tasks, typically exhibiting forward transfer. We explore anatomical, molecular, and neuronal changes post-EE and discuss how artificial neural networks (ANNs) can be used to predict neural computation changes after enriched experiences. Finally, we provide a synergistic way of combining neuroscience and AI research that paves the path toward developing AI capable of rapid and efficient new task learning.
2007.00212
Ryuta Mizutani
Ryuta Mizutani, Rino Saiga, Yoshiro Yamamoto, Masayuki Uesugi, Akihisa Takeuchi, Kentaro Uesugi, Yasuko Terada, Yoshio Suzuki, Vincent De Andrade, Francesco De Carlo, Susumu Takekoshi, Chie Inomoto, Naoya Nakamura, Youta Torii, Itaru Kushima, Shuji Iritani, Norio Ozaki, Kenichi Oshima, Masanari Itokawa, Makoto Arai
Structural diverseness of neurons between brain areas and between cases
3 figures
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The cerebral cortex is composed of multiple cortical areas that exert a wide variety of brain functions. Although human brain neurons are genetically and areally mosaic, the three-dimensional structural differences between neurons in different brain areas or between the neurons of different individuals have not been delineated. Here, we report a nanometer-scale geometric analysis of brain tissues of the superior temporal gyrus of 4 schizophrenia and 4 control cases by using synchrotron radiation nanotomography. The results of the analysis and a comparison with results for the anterior cingulate cortex indicated that 1) neuron structures are dissimilar between brain areas and that 2) the dissimilarity varies from case to case. The structural diverseness was mainly observed in terms of the neurite curvature that inversely correlates with the diameters of the neurites and spines. The analysis also revealed the geometric differences between the neurons of the schizophrenia and control cases, suggesting that neuron structure is associated with brain function. The area dependency of the neuron structure and its diverseness between individuals should represent the individuality of brain functions.
[ { "created": "Wed, 1 Jul 2020 03:50:21 GMT", "version": "v1" } ]
2020-07-02
[ [ "Mizutani", "Ryuta", "" ], [ "Saiga", "Rino", "" ], [ "Yamamoto", "Yoshiro", "" ], [ "Uesugi", "Masayuki", "" ], [ "Takeuchi", "Akihisa", "" ], [ "Uesugi", "Kentaro", "" ], [ "Terada", "Yasuko", "" ], [...
The cerebral cortex is composed of multiple cortical areas that exert a wide variety of brain functions. Although human brain neurons are genetically and areally mosaic, the three-dimensional structural differences between neurons in different brain areas or between the neurons of different individuals have not been delineated. Here, we report a nanometer-scale geometric analysis of brain tissues of the superior temporal gyrus of 4 schizophrenia and 4 control cases by using synchrotron radiation nanotomography. The results of the analysis and a comparison with results for the anterior cingulate cortex indicated that 1) neuron structures are dissimilar between brain areas and that 2) the dissimilarity varies from case to case. The structural diverseness was mainly observed in terms of the neurite curvature that inversely correlates with the diameters of the neurites and spines. The analysis also revealed the geometric differences between the neurons of the schizophrenia and control cases, suggesting that neuron structure is associated with brain function. The area dependency of the neuron structure and its diverseness between individuals should represent the individuality of brain functions.
2404.13888
Sang-Yoon Kim
Sang-Yoon Kim and Woochang Lim
Functions of Direct and Indirect Pathways for Action Selection Are Quantitatively Analyzed in A Spiking Neural Network of The Basal Ganglia
null
null
null
null
q-bio.NC physics.bio-ph
http://creativecommons.org/publicdomain/zero/1.0/
We are concerned with action selection in the basal ganglia (BG). We quantitatively analyze the functions of the direct pathway (DP) and the indirect pathway (IP) for action selection in a spiking neural network with 3 competing channels. For such quantitative analysis, in each channel we obtain the competition degree ${\cal C}_d$, given by the ratio of the strength of the DP (${\cal S}_{DP}$) to the strength of the IP (${\cal S}_{IP}$) (i.e., ${\cal C}_d = {\cal S}_{DP} / {\cal S}_{IP}$). A desired action is then selected in the channel with the largest ${\cal C}_d$. Desired action selection occurs mainly due to the strong focused inhibitory projection to the output nucleus, the SNr (substantia nigra pars reticulata), via the DP in the corresponding channel. Unlike the case of the DP, there are two types of IPs, the intra-channel IP and the inter-channel IP, due to widespread diffusive excitation from the STN (subthalamic nucleus). The intra-channel IP serves as a brake to suppress the desired action selection. In contrast, the inter-channel IP to the SNr in the neighboring channels suppresses competing actions, thereby highlighting the desired action selection. In this way, the function of the inter-channel IP is opposite to that of the intra-channel IP. However, to the best of our knowledge, no quantitative analysis of these functions of the DP and the two IPs has been made. Here, through direct calculations of the DP and the intra- and inter-channel IP presynaptic currents into the SNr in each channel, we obtain the competition degree of each channel to determine a desired action, and the functions of the DP and the intra- and inter-channel IPs are thereby quantitatively clarified.
[ { "created": "Mon, 22 Apr 2024 05:31:40 GMT", "version": "v1" }, { "created": "Wed, 3 Jul 2024 07:10:26 GMT", "version": "v2" } ]
2024-07-04
[ [ "Kim", "Sang-Yoon", "" ], [ "Lim", "Woochang", "" ] ]
We are concerned with action selection in the basal ganglia (BG). We quantitatively analyze the functions of the direct pathway (DP) and the indirect pathway (IP) for action selection in a spiking neural network with 3 competing channels. For such quantitative analysis, in each channel we obtain the competition degree ${\cal C}_d$, given by the ratio of the strength of the DP (${\cal S}_{DP}$) to the strength of the IP (${\cal S}_{IP}$) (i.e., ${\cal C}_d = {\cal S}_{DP} / {\cal S}_{IP}$). A desired action is then selected in the channel with the largest ${\cal C}_d$. Desired action selection occurs mainly due to the strong focused inhibitory projection to the output nucleus, the SNr (substantia nigra pars reticulata), via the DP in the corresponding channel. Unlike the case of the DP, there are two types of IPs, the intra-channel IP and the inter-channel IP, due to widespread diffusive excitation from the STN (subthalamic nucleus). The intra-channel IP serves as a brake to suppress the desired action selection. In contrast, the inter-channel IP to the SNr in the neighboring channels suppresses competing actions, thereby highlighting the desired action selection. In this way, the function of the inter-channel IP is opposite to that of the intra-channel IP. However, to the best of our knowledge, no quantitative analysis of these functions of the DP and the two IPs has been made. Here, through direct calculations of the DP and the intra- and inter-channel IP presynaptic currents into the SNr in each channel, we obtain the competition degree of each channel to determine a desired action, and the functions of the DP and the intra- and inter-channel IPs are thereby quantitatively clarified.
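The selection rule in the abstract above is simple to state numerically: compute each channel's competition degree C_d = S_DP / S_IP and select the channel with the largest value. The strength values below are made up for illustration.

```python
# Minimal numeric illustration of the competition-degree selection rule.
def select_action(s_dp, s_ip):
    # C_d per channel: ratio of direct-pathway to indirect-pathway strength.
    c_d = [dp / ip for dp, ip in zip(s_dp, s_ip)]
    winner = max(range(len(c_d)), key=lambda i: c_d[i])
    return winner, c_d

channel, degrees = select_action(s_dp=[5.0, 2.0, 1.5], s_ip=[2.0, 2.5, 3.0])
# channel 0 is selected: C_d = [2.5, 0.8, 0.5]
```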
1712.05597
Philipp Schubert R
Britta Munkes and Philipp R Schubert, Rolf Karez, Thorsten BH Reusch
Experimental assessment of critical anthropogenic sediment burial in eelgrass Zostera marina
26 pages, 5 figures, 1 table
Marine Pollution Bulletin, Volume 100, Issue 1, 15 November 2015, Pages 144-153
10.1016/j.marpolbul.2015.09.013
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Seagrass meadows, one of the world's most important and productive coastal habitats, are threatened by a range of anthropogenic actions. Burial of seagrass plants due to coastal activities is one important anthropogenic pressure leading to the decline of local populations. In our study, we assessed the response of eelgrass Zostera marina to sediment burial in terms of physiological, morphological, and population parameters. In a full-factorial field experiment, burial level (5-20 cm) and burial duration (4-16 weeks) were manipulated. Negative effects were visible even at the lowest burial level (5 cm) and shortest duration (4 weeks), with effects increasing over time and with burial level. Buried seagrasses showed higher shoot mortality, delayed growth and flowering, and lower carbohydrate storage. The observed effects will likely have an impact on the next year's survival of buried plants. Our results have implications for the management of this important coastal plant.
[ { "created": "Fri, 15 Dec 2017 10:05:37 GMT", "version": "v1" } ]
2017-12-18
[ [ "Munkes", "Britta", "" ], [ "Schubert", "Philipp R", "" ], [ "Karez", "Rolf", "" ], [ "Reusch", "Thorsten BH", "" ] ]
Seagrass meadows, one of the world's most important and productive coastal habitats, are threatened by a range of anthropogenic actions. Burial of seagrass plants due to coastal activities is one important anthropogenic pressure leading to the decline of local populations. In our study, we assessed the response of eelgrass Zostera marina to sediment burial in terms of physiological, morphological, and population parameters. In a full-factorial field experiment, burial level (5-20 cm) and burial duration (4-16 weeks) were manipulated. Negative effects were visible even at the lowest burial level (5 cm) and shortest duration (4 weeks), with effects increasing over time and with burial level. Buried seagrasses showed higher shoot mortality, delayed growth and flowering, and lower carbohydrate storage. The observed effects will likely have an impact on the next year's survival of buried plants. Our results have implications for the management of this important coastal plant.
q-bio/0501019
Alexander Gorban
A.N. Gorban, M. Kudryashev, T. Popova
Informational Way to Protein Alphabet: Entropic Classification of Amino Acids
13 p. 6 Tabs, 3 Figs, Reduced version with additional study of membrane and globular proteins
null
null
null
q-bio.BM physics.bio-ph q-bio.QM
null
What are proteins made from, as the working parts of the living cell's protein machines? To answer this question, we need a technology to disassemble proteins into elementary functional details and to prepare a lumped description of such details. This lumped description might have multiple material realizations (in amino acids). Our hypothesis is that an informational approach to this problem is possible. We propose a way of hierarchical classification that makes the primary structure of a protein maximally non-random. The first steps of the suggested research program are realized: the method and the analysis of an optimal informational protein binary alphabet. The general method is used to answer several specific questions, for example: (i) Is there a syntactic difference between globular and membrane proteins? (ii) Are proteins random sequences of amino acids (a long-standing discussion)? For these questions, the answers are as follows: (i) there is a significant syntactic difference between globular and membrane proteins, and this difference is described; (ii) amino acid sequences in proteins are definitely not random.
[ { "created": "Thu, 13 Jan 2005 12:52:16 GMT", "version": "v1" }, { "created": "Sun, 16 Jan 2005 14:28:56 GMT", "version": "v2" }, { "created": "Wed, 30 Nov 2005 15:43:37 GMT", "version": "v3" }, { "created": "Mon, 5 Nov 2007 17:15:03 GMT", "version": "v4" } ]
2007-11-05
[ [ "Gorban", "A. N.", "" ], [ "Kudryashev", "M.", "" ], [ "Popova", "T.", "" ] ]
What are proteins made from, as the working parts of the living cell's protein machines? To answer this question, we need a technology to disassemble proteins into elementary functional details and to prepare a lumped description of such details. This lumped description might have multiple material realizations (in amino acids). Our hypothesis is that an informational approach to this problem is possible. We propose a way of hierarchical classification that makes the primary structure of a protein maximally non-random. The first steps of the suggested research program are realized: the method and the analysis of an optimal informational protein binary alphabet. The general method is used to answer several specific questions, for example: (i) Is there a syntactic difference between globular and membrane proteins? (ii) Are proteins random sequences of amino acids (a long-standing discussion)? For these questions, the answers are as follows: (i) there is a significant syntactic difference between globular and membrane proteins, and this difference is described; (ii) amino acid sequences in proteins are definitely not random.
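One way to score a candidate binary amino-acid alphabet of the kind discussed above is by an entropy of the recoded sequence; the digram-entropy scorer and the grouping below are our own hypothetical illustration, not the criterion or the alphabet the authors derive.

```python
import math
from collections import Counter

# Toy "entropic" scorer: Shannon entropy of digram frequencies after
# recoding a protein sequence through a candidate binary grouping.
# Lower digram entropy means the recoded sequence is less random.
def digram_entropy(seq, group):
    coded = [group[a] for a in seq]
    counts = Counter(zip(coded, coded[1:]))   # digram statistics
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

# Hypothetical grouping: hydrophobic residues -> 0, the rest -> 1.
grouping = {a: 0 for a in "AVLIMFWC"}
grouping.update({a: 1 for a in "GSTYNQDEKRHP"})
score = digram_entropy("AVLDEKAVLDEK", grouping)
```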
1505.03711
Yedidyah Dordek
Yedidyah Dordek, Ron Meir, Dori Derdikman
Extracting grid characteristics from spatially distributed place cell inputs using non-negative PCA
35 pages, 14 figures, 1 table
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many recent models study the downstream projection from grid cells to place cells, while recent data have pointed to the importance of the feedback projection. We thus asked how grid cells are affected by the nature of the input from the place cells. We propose a two-layered neural network with feedforward weights connecting place-like input cells to grid cell outputs. Place-to-grid weights were learned via a generalized Hebbian rule. The architecture of this network closely resembles neural networks used to perform Principal Component Analysis (PCA). Our results indicate that if the components of the feedforward neural network were non-negative, the output converged to a hexagonal lattice; without the non-negativity constraint, the output converged to a square lattice. Consistent with experiments, grid alignment to walls was ~7{\deg} and the grid-spacing ratio between consecutive modules was ~1.4. Our results suggest a possible link between place-cell-to-grid-cell interactions and PCA, suggesting that grid cells represent a process of constrained dimensionality reduction, which can also be viewed as maximization of the variance of the information from place cells.
[ { "created": "Thu, 14 May 2015 13:08:37 GMT", "version": "v1" }, { "created": "Fri, 19 Jun 2015 15:29:14 GMT", "version": "v2" }, { "created": "Tue, 14 Jul 2015 21:26:27 GMT", "version": "v3" } ]
2015-07-16
[ [ "Dordek", "Yedidyah", "" ], [ "Meir", "Ron", "" ], [ "Derdikman", "Dori", "" ] ]
Many recent models study the downstream projection from grid cells to place cells, while recent data have pointed to the importance of the feedback projection. We thus asked how grid cells are affected by the nature of the input from the place cells. We propose a two-layered neural network with feedforward weights connecting place-like input cells to grid cell outputs. Place-to-grid weights were learned via a generalized Hebbian rule. The architecture of this network closely resembles neural networks used to perform Principal Component Analysis (PCA). Our results indicate that if the components of the feedforward neural network were non-negative, the output converged to a hexagonal lattice; without the non-negativity constraint, the output converged to a square lattice. Consistent with experiments, grid alignment to walls was ~7{\deg} and the grid-spacing ratio between consecutive modules was ~1.4. Our results suggest a possible link between place-cell-to-grid-cell interactions and PCA, suggesting that grid cells represent a process of constrained dimensionality reduction, which can also be viewed as maximization of the variance of the information from place cells.
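The learning scheme in the abstract above can be sketched in a few lines: a single linear unit trained with Oja's rule (a normalized Hebbian rule, standing in here for the paper's generalized Hebbian learning) plus a non-negativity clamp on the weights converges to a non-negative approximation of the data's leading principal component. The 2-D input statistics, learning rate, and iteration count are illustrative assumptions, not the paper's place-cell setup.

```python
import random

rng = random.Random(42)

# Correlated 2-D "place cell" activity: both inputs share a common source s,
# so the leading principal component is approximately (1, 1)/sqrt(2).
def sample():
    s = rng.gauss(0, 1)
    return (s + 0.3 * rng.gauss(0, 1), s + 0.3 * rng.gauss(0, 1))

w = [0.5, 0.1]          # initial feedforward weights
eta = 0.01              # learning rate (illustrative)
for _ in range(20000):
    x = sample()
    y = w[0] * x[0] + w[1] * x[1]                       # output activity
    # Oja's rule: Hebbian term with implicit weight normalisation
    w = [w[i] + eta * y * (x[i] - y * w[i]) for i in range(2)]
    w = [max(wi, 0.0) for wi in w]                      # non-negativity clamp

norm = (w[0] ** 2 + w[1] ** 2) ** 0.5
print([round(wi / norm, 2) for wi in w])   # close to [0.71, 0.71]
```

Because the leading eigenvector here is already non-negative, the clamp does not bind at the fixed point; the interesting regime in the paper is when the constraint does bind and reshapes the solution (hexagonal versus square lattices).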
1510.04245
Rostislav Serota
Z. Liu, O. Pavlov Garcia, J. G. Holden, R. A. Serota
Modeling response time with power-law distributions
null
null
null
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the properties of response time distributions is a long-standing problem in cognitive science. We provide a tutorial overview of several contemporary models that assume power-law scaling is a plausible description of the skewed heavy tails typically expressed in response time distributions. We discuss several properties and markers of these distribution functions that have implications for the cognitive and neurophysiological organization supporting a given cognitive activity. We illustrate how a power-law assumption suggests that collecting larger samples and combining individual subjects' data into a single set for distribution-function analysis allows for a better comparison of a group of interest to a control group. We demonstrate our techniques in contrasts of response time measurements of children with and without dyslexia.
[ { "created": "Wed, 14 Oct 2015 19:27:00 GMT", "version": "v1" } ]
2015-10-15
[ [ "Liu", "Z.", "" ], [ "Garcia", "O. Pavlov", "" ], [ "Holden", "J. G.", "" ], [ "Serota", "R. A.", "" ] ]
Understanding the properties of response time distributions is a long-standing problem in cognitive science. We provide a tutorial overview of several contemporary models that assume power-law scaling is a plausible description of the skewed heavy tails typically expressed in response time distributions. We discuss several properties and markers of these distribution functions that have implications for the cognitive and neurophysiological organization supporting a given cognitive activity. We illustrate how a power-law assumption suggests that collecting larger samples and combining individual subjects' data into a single set for distribution-function analysis allows for a better comparison of a group of interest to a control group. We demonstrate our techniques in contrasts of response time measurements of children with and without dyslexia.
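The pooling idea in the abstract above can be illustrated on synthetic data: draw pure power-law "response times" for several subjects, estimate the tail exponent per subject and for the combined set with a maximum-likelihood (Hill-type) estimator, and compare. The exponent, sample sizes, and cutoff are illustrative assumptions; this sketch does not touch the model-selection subtleties the paper discusses.

```python
import math
import random

rng = random.Random(1)

def pareto_sample(alpha, xmin, n):
    # Inverse-CDF sampling for density p(x) proportional to x**(-alpha), x >= xmin
    return [xmin * rng.random() ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

def hill_alpha(data, xmin):
    # Maximum-likelihood (Hill) estimate of the tail exponent above xmin
    tail = [x for x in data if x >= xmin]
    return 1.0 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Ten "subjects", 200 response times each, all sharing the same true exponent 2.5:
subjects = [pareto_sample(2.5, 1.0, 200) for _ in range(10)]
pooled = [x for s in subjects for x in s]

per_subject = [hill_alpha(s, 1.0) for s in subjects]
print(round(hill_alpha(pooled, 1.0), 2))              # near the true value 2.5
print(round(max(per_subject) - min(per_subject), 2))  # spread of small-sample fits
```

The pooled estimate has a standard error roughly sqrt(10) times smaller than any single subject's, which is the statistical content of the "combine the data" recommendation.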
1912.13005
Brooks Emerick
Brooks Emerick and Abhyudai Singh
Global redistribution and local migration in semi-discrete host-parasitoid population dynamic models
27 pages, 8 figures
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Host-parasitoid population dynamics is often probed using a semi-discrete/hybrid modeling framework. Here, the update functions in the discrete-time model connecting year-to-year changes in the population densities are obtained by solving ordinary differential equations that mechanistically describe interactions when hosts become vulnerable to parasitoid attacks. We use this semi-discrete formalism to study two key spatial effects: local movement (migration) of parasitoids between patches during the vulnerable period; and yearly redistribution of populations across patches outside the vulnerable period. Our results show that in the absence of any redistribution, constant density-independent migration and parasitoid attack rates are unable to stabilize an otherwise unstable host-parasitoid population dynamics. Interestingly, inclusion of host redistribution (but not parasitoid redistribution) before the start of the vulnerable period can lead to stable coexistence of both species. Next, we consider a Type-III functional response (parasitoid attack rate increases with host density), where the absence of any spatial effects leads to a neutrally stable host-parasitoid equilibrium. As before, density-independent parasitoid migration by itself is again insufficient to stabilize the population dynamics and host redistribution provides a stabilizing influence. Finally, we show that a Type-III functional response combined with density-dependent parasitoid migration leads to stable coexistence, even in the absence of population redistributions. In summary, we have systematically characterized parameter regimes leading to stable/unstable population dynamics with different forms of spatial heterogeneity coupled to the parasitoid's functional response using mechanistically formulated semi-discrete models.
[ { "created": "Mon, 30 Dec 2019 16:57:44 GMT", "version": "v1" } ]
2020-01-01
[ [ "Emerick", "Brooks", "" ], [ "Singh", "Abhyudai", "" ] ]
Host-parasitoid population dynamics is often probed using a semi-discrete/hybrid modeling framework. Here, the update functions in the discrete-time model connecting year-to-year changes in the population densities are obtained by solving ordinary differential equations that mechanistically describe interactions when hosts become vulnerable to parasitoid attacks. We use this semi-discrete formalism to study two key spatial effects: local movement (migration) of parasitoids between patches during the vulnerable period; and yearly redistribution of populations across patches outside the vulnerable period. Our results show that in the absence of any redistribution, constant density-independent migration and parasitoid attack rates are unable to stabilize an otherwise unstable host-parasitoid population dynamics. Interestingly, inclusion of host redistribution (but not parasitoid redistribution) before the start of the vulnerable period can lead to stable coexistence of both species. Next, we consider a Type-III functional response (parasitoid attack rate increases with host density), where the absence of any spatial effects leads to a neutrally stable host-parasitoid equilibrium. As before, density-independent parasitoid migration by itself is again insufficient to stabilize the population dynamics and host redistribution provides a stabilizing influence. Finally, we show that a Type-III functional response combined with density-dependent parasitoid migration leads to stable coexistence, even in the absence of population redistributions. In summary, we have systematically characterized parameter regimes leading to stable/unstable population dynamics with different forms of spatial heterogeneity coupled to the parasitoid's functional response using mechanistically formulated semi-discrete models.
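In its simplest case (constant attack rate, parasitoid density constant within the vulnerable period, no migration or redistribution), the semi-discrete formalism described above reduces to a classical map: solving dH/dτ = -aHP over a season of unit length gives the escape fraction exp(-aP), and the year-to-year update is the Nicholson-Bailey model, whose growing oscillations illustrate the baseline instability the abstract refers to. Parameter values below are illustrative.

```python
import math

# Within the vulnerable period (duration T = 1), dH/dtau = -a*H*P with P held
# constant; integrating gives the escape fraction f = exp(-a*P). The
# year-to-year update is then the classical Nicholson-Bailey map.
R, a, k = 2.0, 1.0, 1.0   # host growth factor, attack rate, conversion efficiency

def season(H, P):
    f = math.exp(-a * P)                      # fraction of hosts escaping parasitism
    return R * H * f, k * H * (1.0 - f)

H, P = 1.5, 0.8          # start near the equilibrium H* = 2 ln 2, P* = ln 2
Hs = []
for _ in range(40):
    H, P = season(H, P)
    Hs.append(H)

# Oscillations around the equilibrium grow: unstable coexistence without
# any of the stabilizing spatial mechanisms studied in the paper.
print(round(max(Hs[:20]), 2), round(max(Hs[20:]), 2))
```

Adding host redistribution across patches or density-dependent parasitoid migration, as the abstract describes, modifies these update functions and can stabilize the map.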
1611.09542
Felix Xiaofeng Ye
Jeremiah Li, Felix X.-F. Ye, Hong Qian and Sui Huang
Time Dependent Saddle Node Bifurcation: Breaking Time and the Point of No Return in a Non-Autonomous Model of Critical Transitions
null
null
10.1016/j.physd.2019.02.005
null
q-bio.QM math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is a growing awareness that catastrophic phenomena in biology and medicine can be mathematically represented in terms of saddle-node bifurcations. In particular, the term `tipping', or critical transition, has in recent years entered the discourse of the general public in relation to ecology, medicine, and public health. The saddle-node bifurcation and its associated theory of catastrophe as put forth by Thom and Zeeman has seen applications in a wide range of fields including molecular biophysics, mesoscopic physics, and climate science. In this paper, we investigate a simple model of a non-autonomous system with a time-dependent parameter $p(\tau)$ and its corresponding `dynamic' (time-dependent) saddle-node bifurcation by the modern theory of non-autonomous dynamical systems. We show that the actual point of no return for a system undergoing tipping can be significantly delayed in comparison to the {\em breaking time} $\hat{\tau}$ at which the corresponding autonomous system with a time-independent parameter $p_{a}= p(\hat{\tau})$ undergoes a bifurcation. A dimensionless parameter $\alpha=\lambda p_0^3V^{-2}$ is introduced, in which $\lambda$ is the curvature of the autonomous saddle-node bifurcation according to parameter $p(\tau)$, which has an initial value of $p_{0}$ and a constant rate of change $V$. We find that the breaking time $\hat{\tau}$ is always less than the actual point of no return $\tau^*$ after which the critical transition is irreversible; specifically, the relation $\tau^*-\hat{\tau}\simeq 2.338(\lambda V)^{-\frac{1}{3}}$ is analytically obtained. For a system with a small $\lambda V$, there exists a significant window of opportunity $(\hat{\tau},\tau^*)$ during which rapid reversal of the environment can save the system from catastrophe.
[ { "created": "Tue, 29 Nov 2016 09:55:54 GMT", "version": "v1" }, { "created": "Wed, 11 Jan 2017 11:36:14 GMT", "version": "v2" }, { "created": "Thu, 3 Jan 2019 17:44:10 GMT", "version": "v3" } ]
2019-10-29
[ [ "Li", "Jeremiah", "" ], [ "Ye", "Felix X. -F.", "" ], [ "Qian", "Hong", "" ], [ "Huang", "Sui", "" ] ]
There is a growing awareness that catastrophic phenomena in biology and medicine can be mathematically represented in terms of saddle-node bifurcations. In particular, the term `tipping', or critical transition, has in recent years entered the discourse of the general public in relation to ecology, medicine, and public health. The saddle-node bifurcation and its associated theory of catastrophe as put forth by Thom and Zeeman has seen applications in a wide range of fields including molecular biophysics, mesoscopic physics, and climate science. In this paper, we investigate a simple model of a non-autonomous system with a time-dependent parameter $p(\tau)$ and its corresponding `dynamic' (time-dependent) saddle-node bifurcation by the modern theory of non-autonomous dynamical systems. We show that the actual point of no return for a system undergoing tipping can be significantly delayed in comparison to the {\em breaking time} $\hat{\tau}$ at which the corresponding autonomous system with a time-independent parameter $p_{a}= p(\hat{\tau})$ undergoes a bifurcation. A dimensionless parameter $\alpha=\lambda p_0^3V^{-2}$ is introduced, in which $\lambda$ is the curvature of the autonomous saddle-node bifurcation according to parameter $p(\tau)$, which has an initial value of $p_{0}$ and a constant rate of change $V$. We find that the breaking time $\hat{\tau}$ is always less than the actual point of no return $\tau^*$ after which the critical transition is irreversible; specifically, the relation $\tau^*-\hat{\tau}\simeq 2.338(\lambda V)^{-\frac{1}{3}}$ is analytically obtained. For a system with a small $\lambda V$, there exists a significant window of opportunity $(\hat{\tau},\tau^*)$ during which rapid reversal of the environment can save the system from catastrophe.
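The quoted delay $\tau^*-\hat{\tau}\simeq 2.338(\lambda V)^{-1/3}$ (2.338 being minus the first zero of the Airy function Ai) can be checked numerically on the saddle-node normal form $\dot x = p(\tau) + \lambda x^2$ with a linear ramp $p(\tau)=p_0+V\tau$. The values $p_0=-1$, $V=0.01$, $\lambda=1$ below are illustrative, chosen only to make the delay visible.

```python
# Dynamic saddle-node normal form: x' = p(t) + x**2 with a slow ramp
# p(t) = p0 + V*t (lambda = 1). The frozen system loses its equilibria when
# p crosses 0 (breaking time t_hat = -p0/V), but the trajectory only blows
# up (the point of no return) a delay of about 2.338*V**(-1/3) later.

p0, V, dt = -1.0, 0.01, 1e-3
t_hat = -p0 / V                        # breaking time: 100.0

def f(t, x):
    return p0 + V * t + x * x

t, x = 0.0, -1.0                       # start on the stable branch x = -sqrt(-p0)
while x < 10.0 and t < 200.0:          # x > 10 used as a blow-up proxy
    # One RK4 step
    k1 = f(t, x)
    k2 = f(t + dt / 2, x + dt * k1 / 2)
    k3 = f(t + dt / 2, x + dt * k2 / 2)
    k4 = f(t + dt, x + dt * k3)
    x += dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    t += dt

print(round(t - t_hat, 2))             # compare with 2.338 * V**(-1/3) ~ 10.85
```

The measured delay sits close to the asymptotic prediction; the small discrepancy comes from the finite blow-up threshold and higher-order corrections in $V$.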
2003.10017
Luis Alvarez
Luis Alvarez
An empirical algorithm to forecast the evolution of the number of COVID-19 symptomatic patients after social distancing interventions
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an empirical algorithm to forecast the evolution of the number of COVID-19 symptomatic patients in the early stages of the pandemic spread and after strict social distancing interventions. The algorithm is based on a low-dimensional model for the variation of the exponential growth rate, which decreases after the implementation of strict social distancing measures. From the observable data given by the number of tested positive, our model estimates the number of infected as a hindcast, introducing the incubation time into the model formulation. We also use the model to follow the number of infected patients who later die, using the registered number of deaths and the distribution of times from infection to death. The relationship of the proposed model with SIR models is studied. Model parameter fitting is done by minimizing a quadratic error between the data and the model forecast. An extended model is also proposed that allows a longer-term forecast. An online implementation of the model is available at www.ctim.es/covid19
[ { "created": "Sun, 22 Mar 2020 22:47:50 GMT", "version": "v1" }, { "created": "Thu, 19 Nov 2020 10:20:28 GMT", "version": "v2" } ]
2020-11-20
[ [ "Alvarez", "Luis", "" ] ]
We present an empirical algorithm to forecast the evolution of the number of COVID-19 symptomatic patients in the early stages of the pandemic spread and after strict social distancing interventions. The algorithm is based on a low-dimensional model for the variation of the exponential growth rate, which decreases after the implementation of strict social distancing measures. From the observable data given by the number of tested positive, our model estimates the number of infected as a hindcast, introducing the incubation time into the model formulation. We also use the model to follow the number of infected patients who later die, using the registered number of deaths and the distribution of times from infection to death. The relationship of the proposed model with SIR models is studied. Model parameter fitting is done by minimizing a quadratic error between the data and the model forecast. An extended model is also proposed that allows a longer-term forecast. An online implementation of the model is available at www.ctim.es/covid19
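The core fitting idea above, an exponential growth rate that decays after an intervention, with parameters chosen by minimizing a quadratic error, can be sketched as follows. The exponential decay law for the rate, the known intervention day, and the grid-search fit are illustrative simplifications, not the paper's exact algorithm.

```python
import math

T_INT = 10                 # day of the social-distancing intervention (assumed known)

def model(n0, k0, tau, days):
    # Cumulative cases with growth rate k0 before the intervention and
    # k0 * exp(-(t - T_INT)/tau) afterwards (an illustrative decay law).
    n, out = n0, []
    for t in range(days):
        k = k0 if t < T_INT else k0 * math.exp(-(t - T_INT) / tau)
        n *= math.exp(k)
        out.append(n)
    return out

# Synthetic "observed" series generated with a known decay time tau = 5:
observed = model(10.0, 0.3, 5.0, 30)

# Fit tau by least squares on log-counts (grid search, for transparency):
def sse(tau):
    return sum((math.log(m) - math.log(o)) ** 2
               for m, o in zip(model(10.0, 0.3, tau, 30), observed))

best_tau = min((g / 10 for g in range(10, 101)), key=sse)
print(best_tau)            # recovers the true value 5.0
```

Fitting on log-counts keeps the quadratic error from being dominated by the last few (largest) data points, a common choice for exponentially growing series.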
2004.07827
Pietro Battiston
Pietro Battiston, Simona Gamba
COVID-19: $R_0$ is lower where outbreak is larger
Data and code are available upon request
null
null
null
q-bio.PE econ.GN physics.soc-ph q-fin.EC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We use daily data from Lombardy, the Italian region most affected by the COVID-19 outbreak, to calibrate a SIR model individually on each municipality. These are all covered by the same health system and, in the post-lockdown phase we focus on, all subject to the same social distancing regulations. We find that municipalities with a higher number of cases at the beginning of the period analyzed have a lower rate of diffusion, which cannot be imputed to herd immunity. In particular, there is a robust and strongly significant negative correlation between the estimated basic reproduction number ($R_0$) and the initial outbreak size, in contrast with the role of $R_0$ as a \emph{predictor} of outbreak size. We explore different possible explanations for this phenomenon and conclude that a higher number of cases causes changes of behavior, such as a more strict adoption of social distancing measures among the population, that reduce the spread. This result calls for a transparent, real-time distribution of detailed epidemiological data, as such data affects the behavior of populations in areas affected by the outbreak.
[ { "created": "Thu, 16 Apr 2020 09:49:15 GMT", "version": "v1" } ]
2020-04-20
[ [ "Battiston", "Pietro", "" ], [ "Gamba", "Simona", "" ] ]
We use daily data from Lombardy, the Italian region most affected by the COVID-19 outbreak, to calibrate a SIR model individually on each municipality. These are all covered by the same health system and, in the post-lockdown phase we focus on, all subject to the same social distancing regulations. We find that municipalities with a higher number of cases at the beginning of the period analyzed have a lower rate of diffusion, which cannot be imputed to herd immunity. In particular, there is a robust and strongly significant negative correlation between the estimated basic reproduction number ($R_0$) and the initial outbreak size, in contrast with the role of $R_0$ as a \emph{predictor} of outbreak size. We explore different possible explanations for this phenomenon and conclude that a higher number of cases causes changes of behavior, such as a more strict adoption of social distancing measures among the population, that reduce the spread. This result calls for a transparent, real-time distribution of detailed epidemiological data, as such data affects the behavior of populations in areas affected by the outbreak.
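A toy version of the per-municipality calibration described above: simulate a discrete-time SIR epidemic with a known transmission rate, then recover $R_0=\beta/\gamma$ by grid search minimizing the squared error against the "observed" series. All values are fabricated for illustration; the paper fits real daily data for each Lombardy municipality.

```python
def sir(beta, gamma, s0, i0, days):
    # Discrete-time SIR; returns the daily infected fractions
    s, i, series = s0, i0, []
    for _ in range(days):
        new_inf = beta * s * i
        s -= new_inf
        i += new_inf - gamma * i
        series.append(i)
    return series

GAMMA = 0.1
# Synthetic "municipality" generated with beta = 0.25, i.e. R0 = 2.5:
observed = sir(0.25, GAMMA, 0.99, 0.01, 60)

def sse(beta):
    return sum((m - o) ** 2 for m, o in zip(sir(beta, GAMMA, 0.99, 0.01, 60), observed))

best_beta = min((b / 100 for b in range(10, 51)), key=sse)
print(best_beta / GAMMA)   # estimated R0, close to 2.5
```

Repeating this fit across many units and correlating the estimated $R_0$ with each unit's initial case count is the comparison the abstract describes; the toy example only shows the single-unit calibration step.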
1910.11311
M. Ali Al-Radhawi
M. Ali Al-Radhawi and Eduardo D. Sontag
Analysis of a reduced model of epithelial-mesenchymal fate determination in cancer metastasis as a singularly-perturbed monotone system
null
null
null
null
q-bio.MN math.DS math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tumor metastasis is one of the main factors responsible for the high fatality rate of cancer. Metastasis can occur after malignant cells transition from the epithelial phenotype to the mesenchymal phenotype. This transformation allows cells to migrate via the circulatory system and subsequently settle in distant organs after undergoing the reverse transition from the mesenchymal to the epithelial phenotypes. The core gene regulatory network controlling these transitions consists of a system made up of coupled SNAIL/miRNA-34 and ZEB1/miRNA-200 subsystems. In this work, we formulate a mathematical model of the core regulatory motif and analyze its long-term behavior. We start by developing a detailed reaction network with 24 state variables. Assuming fast promoter and mRNA kinetics, we then show how to reduce our model to a monotone four-dimensional system. For the reduced system, monotone dynamical systems theory can be used to prove generic convergence to the set of equilibria for all bounded trajectories. The theory does not apply to the full model, which is not monotone, but we briefly discuss results for singularly-perturbed monotone systems that provide a tool to extend convergence results from reduced to full systems, under appropriate time separation assumptions.
[ { "created": "Thu, 24 Oct 2019 17:42:51 GMT", "version": "v1" }, { "created": "Thu, 16 Jan 2020 22:05:51 GMT", "version": "v2" }, { "created": "Thu, 14 May 2020 05:16:15 GMT", "version": "v3" } ]
2020-05-15
[ [ "Al-Radhawi", "M. Ali", "" ], [ "Sontag", "Eduardo D.", "" ] ]
Tumor metastasis is one of the main factors responsible for the high fatality rate of cancer. Metastasis can occur after malignant cells transition from the epithelial phenotype to the mesenchymal phenotype. This transformation allows cells to migrate via the circulatory system and subsequently settle in distant organs after undergoing the reverse transition from the mesenchymal to the epithelial phenotypes. The core gene regulatory network controlling these transitions consists of a system made up of coupled SNAIL/miRNA-34 and ZEB1/miRNA-200 subsystems. In this work, we formulate a mathematical model of the core regulatory motif and analyze its long-term behavior. We start by developing a detailed reaction network with 24 state variables. Assuming fast promoter and mRNA kinetics, we then show how to reduce our model to a monotone four-dimensional system. For the reduced system, monotone dynamical systems theory can be used to prove generic convergence to the set of equilibria for all bounded trajectories. The theory does not apply to the full model, which is not monotone, but we briefly discuss results for singularly-perturbed monotone systems that provide a tool to extend convergence results from reduced to full systems, under appropriate time separation assumptions.
1903.06499
C. Soule
Marcelle Kaufman (ULB), Christophe Soul\'e (IHES)
On the multistationarity of chemical reaction networks
null
Journal of Theoretical Biology, Elsevier, In press, 465, pp.126-133
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a new conjecture about a necessary condition that a (bio)chemical network has to satisfy for it to exhibit multistationarity. According to a Theorem of Feliu and Wiuf [27, 12], the conjecture is known for strictly monotonic kinetics. We give several examples illustrating our conjecture.
[ { "created": "Fri, 15 Mar 2019 12:47:20 GMT", "version": "v1" } ]
2019-03-18
[ [ "Kaufman", "Marcelle", "", "ULB" ], [ "Soulé", "Christophe", "", "IHES" ] ]
We present a new conjecture about a necessary condition that a (bio)chemical network has to satisfy for it to exhibit multistationarity. According to a Theorem of Feliu and Wiuf [27, 12], the conjecture is known for strictly monotonic kinetics. We give several examples illustrating our conjecture.
1707.04980
Conrad Burden
Conrad J. Burden and Yi Wei
Mutation in Populations Governed by a Galton-Watson Branching Process
30 pages, 5 figures, Now matches published version, changes are mainly to Introduction and references
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A population genetics model based on a multitype branching process, or equivalently a Galton-Watson branching process for multiple alleles, is presented. The diffusion limit forward Kolmogorov equation is derived for the case of neutral mutations. The asymptotic stationary solution is obtained and has the property that the extant population partitions into subpopulations whose relative sizes are determined by mutation rates. An approximate time-dependent solution is obtained in the limit of low mutation rates. This solution has the property that the system undergoes a rapid transition from a drift-dominated phase to a mutation-dominated phase in which the distribution collapses onto the asymptotic stationary distribution. The changeover point of the transition is determined by the per-generation growth factor and mutation rate. The approximate solution is confirmed using numerical simulations.
[ { "created": "Mon, 17 Jul 2017 02:28:18 GMT", "version": "v1" }, { "created": "Mon, 19 Feb 2018 22:53:28 GMT", "version": "v2" } ]
2018-02-21
[ [ "Burden", "Conrad J.", "" ], [ "Wei", "Yi", "" ] ]
A population genetics model based on a multitype branching process, or equivalently a Galton-Watson branching process for multiple alleles, is presented. The diffusion limit forward Kolmogorov equation is derived for the case of neutral mutations. The asymptotic stationary solution is obtained and has the property that the extant population partitions into subpopulations whose relative sizes are determined by mutation rates. An approximate time-dependent solution is obtained in the limit of low mutation rates. This solution has the property that the system undergoes a rapid transition from a drift-dominated phase to a mutation-dominated phase in which the distribution collapses onto the asymptotic stationary distribution. The changeover point of the transition is determined by the per-generation growth factor and mutation rate. The approximate solution is confirmed using numerical simulations.
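A minimal individual-based simulation of the two-allele case described above: each individual leaves a Poisson number of offspring (supercritical mean), and each offspring mutates to the other allele with an allele-dependent probability. The subpopulation fractions settle near the stationary partition set by the mutation rates, here $u_{10}/(u_{01}+u_{10}) = 1/3$ for allele 0. The mutation rates are deliberately exaggerated relative to the paper's low-mutation limit so the collapse onto the stationary partition is visible in a short run.

```python
import math
import random

rng = random.Random(7)

def poisson(lam):
    # Knuth's method (adequate for small lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p < L:
            return k
        k += 1

m = 1.4                    # mean offspring number per individual (supercritical)
u = {0: 0.2, 1: 0.1}       # mutation probabilities 0 -> 1 and 1 -> 0 (exaggerated)

pop = {0: 100, 1: 0}       # start with allele 0 only
for _ in range(20):
    new = {0: 0, 1: 0}
    for allele, n in pop.items():
        for _ in range(n):
            for _ in range(poisson(m)):
                target = 1 - allele if rng.random() < u[allele] else allele
                new[target] += 1
    pop = new

total = pop[0] + pop[1]
# Predicted stationary partition for allele 0: u[1]/(u[0]+u[1]) = 1/3
print(total, round(pop[0] / total, 3))
```

Early generations, when the population is small, are drift-dominated; as the population grows geometrically, mutation takes over and the composition collapses onto the stationary partition, mirroring the two-phase behaviour in the abstract.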
q-bio/0511004
Pablo Echenique
Pablo Echenique and J. L. Alonso
Definition of Systematic, Approximately Separable and Modular Internal Coordinates (SASMIC) for macromolecular simulation
27 pages, 10 figures, LaTeX, mcite package
J. Comp. Chem. 27 (2006) 1076-1087
10.1002/jcc.20424
null
q-bio.BM cond-mat.soft
null
A set of rules is defined to systematically number the groups and the atoms of organic molecules and, particularly, of polypeptides in a modular manner. Supported by this numeration, a set of internal coordinates is defined. These coordinates (termed Systematic, Approximately Separable and Modular Internal Coordinates, SASMIC) are straightforwardly written in Z-matrix form and may be directly implemented in typical Quantum Chemistry packages. A number of Perl scripts that automatically generate the Z-matrix files for polypeptides are provided as supplementary material. The main difference with other Z-matrix-like coordinates normally used in the literature is that normal dihedral angles (``principal dihedrals'' in this work) are only used to fix the orientation of whole groups and a somewhat non-standard type of dihedrals, termed ``phase dihedrals'', are used to describe the covalent structure inside the groups. This physical approach makes it possible to approximately separate soft and hard movements of the molecule using only topological information and to directly implement constraints. As an application, we use the coordinates defined and ab initio quantum mechanical calculations to assess the commonly assumed approximation of the free energy, obtained from ``integrating out'' the side chain degree of freedom chi, by the Potential Energy Surface (PES) in the protected dipeptide HCO-L-Ala-NH2. We also present a sub-box of the Hessian matrix in two different sets of coordinates to illustrate the approximate separation of soft and hard movements when the coordinates defined in this work are used.
[ { "created": "Thu, 3 Nov 2005 10:36:44 GMT", "version": "v1" }, { "created": "Mon, 4 Dec 2006 17:41:48 GMT", "version": "v2" } ]
2007-12-19
[ [ "Echenique", "Pablo", "" ], [ "Alonso", "J. L.", "" ] ]
A set of rules is defined to systematically number the groups and the atoms of organic molecules and, particularly, of polypeptides in a modular manner. Supported by this numeration, a set of internal coordinates is defined. These coordinates (termed Systematic, Approximately Separable and Modular Internal Coordinates, SASMIC) are straightforwardly written in Z-matrix form and may be directly implemented in typical Quantum Chemistry packages. A number of Perl scripts that automatically generate the Z-matrix files for polypeptides are provided as supplementary material. The main difference with other Z-matrix-like coordinates normally used in the literature is that normal dihedral angles (``principal dihedrals'' in this work) are only used to fix the orientation of whole groups and a somewhat non-standard type of dihedrals, termed ``phase dihedrals'', are used to describe the covalent structure inside the groups. This physical approach makes it possible to approximately separate soft and hard movements of the molecule using only topological information and to directly implement constraints. As an application, we use the coordinates defined and ab initio quantum mechanical calculations to assess the commonly assumed approximation of the free energy, obtained from ``integrating out'' the side chain degree of freedom chi, by the Potential Energy Surface (PES) in the protected dipeptide HCO-L-Ala-NH2. We also present a sub-box of the Hessian matrix in two different sets of coordinates to illustrate the approximate separation of soft and hard movements when the coordinates defined in this work are used.
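The three coordinate types a Z-matrix row is built from (bond length, bond angle, dihedral) can be computed from Cartesian positions as below. This is generic molecular geometry, not the SASMIC numbering rules themselves; the four example "atoms" are placed by hand to give a 90-degree torsion.

```python
import math

def sub(a, b):   return [a[i] - b[i] for i in range(3)]
def dot(a, b):   return sum(a[i] * b[i] for i in range(3))
def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                         a[2]*b[0] - a[0]*b[2],
                         a[0]*b[1] - a[1]*b[0]]
def norm(a):     return math.sqrt(dot(a, a))

def bond(a, b):
    # Bond length |b - a|
    return norm(sub(b, a))

def angle(a, b, c):
    # Bond angle a-b-c in degrees
    u, v = sub(a, b), sub(c, b)
    return math.degrees(math.acos(dot(u, v) / (norm(u) * norm(v))))

def dihedral(a, b, c, d):
    # Torsion about the b-c axis in degrees (one common sign convention)
    b1, b2, b3 = sub(b, a), sub(c, b), sub(d, c)
    n1, n2 = cross(b1, b2), cross(b2, b3)
    m1 = cross(n1, [x / norm(b2) for x in b2])
    return math.degrees(math.atan2(dot(m1, n2), dot(n1, n2)))

# Example: four atoms forming a 90-degree torsion
a, b, c, d = [1, 0, 0], [0, 0, 0], [0, 1, 0], [0, 1, 1]
print(bond(b, c), angle(a, b, c), dihedral(a, b, c, d))
```

In a Z-matrix, each atom after the third is specified by exactly one bond, one angle, and one dihedral relative to previously defined atoms; the paper's principal versus phase dihedrals differ only in which reference atoms those torsions use.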
2306.16292
Kwon-Seok Chae
Kwon-Seok Chae, In-Taek Oh, Soo Hyun Jeong, Yong-Hwan Kim, Soo-Chan Kim, Yongkuk Kim
Geomagnetic field influences probabilistic abstract decision-making in humans
32 pages, 5 figures, 4 supplementary figures, 2 supplementary tables, and separate 15 raw data files
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
To resolve disputes or determine the order of things, people commonly use binary choices such as tossing a coin, even though it is unclear whether the empirical probability equals the theoretical probability. The geomagnetic field (GMF) is broadly used as a sensory cue for various movements in many organisms, including humans, although our understanding of this sense is limited. Here we reveal GMF-modulated probabilistic abstract decision-making in humans and the underlying mechanism, exploiting the zero-sum binary stone choice of the game of Go as a proof of principle. Large-scale data analyses of professional Go matches and in situ stone choice games showed that the empirical probabilities of the stone selections were remarkably different from the theoretical probability. In laboratory experiments, the experimental probability in decision-making was significantly influenced by GMF conditions and a specific magnetic resonance frequency. Time series and stepwise systematic analyses pinpointed intentionally uncontrollable decision-making as the primary modulating target. Notably, the continuum of GMF lines and the anisotropic magnetic interplay between players were crucial in influencing the magnetic field resonance-mediated abstract decision-making. Our findings provide unique insights into the impact of sensing the GMF on decision-making at tipping points and into the quantum mechanical mechanism manifesting the gap between theoretical and empirical probability in 3-dimensional living space.
[ { "created": "Wed, 28 Jun 2023 15:11:05 GMT", "version": "v1" }, { "created": "Sat, 22 Jul 2023 04:58:49 GMT", "version": "v2" } ]
2023-07-25
[ [ "Chae", "Kwon-Seok", "" ], [ "Oh", "In-Taek", "" ], [ "Jeong", "Soo Hyun", "" ], [ "Kim", "Yong-Hwan", "" ], [ "Kim", "Soo-Chan", "" ], [ "Kim", "Yongkuk", "" ] ]
To resolve disputes or determine the order of things, people commonly use binary choices such as tossing a coin, even though it is unclear whether the empirical probability equals the theoretical probability. The geomagnetic field (GMF) is broadly used as a sensory cue for various movements in many organisms, including humans, although our understanding of this sense is limited. Here we reveal GMF-modulated probabilistic abstract decision-making in humans and the underlying mechanism, exploiting the zero-sum binary stone choice of the game of Go as a proof of principle. Large-scale data analyses of professional Go matches and in situ stone choice games showed that the empirical probabilities of the stone selections were remarkably different from the theoretical probability. In laboratory experiments, the experimental probability in decision-making was significantly influenced by GMF conditions and a specific magnetic resonance frequency. Time series and stepwise systematic analyses pinpointed intentionally uncontrollable decision-making as the primary modulating target. Notably, the continuum of GMF lines and the anisotropic magnetic interplay between players were crucial in influencing the magnetic field resonance-mediated abstract decision-making. Our findings provide unique insights into the impact of sensing the GMF on decision-making at tipping points and into the quantum mechanical mechanism manifesting the gap between theoretical and empirical probability in 3-dimensional living space.
2005.00106
Wahab Abdul Iddrisu
Wahab Abdul Iddrisu, Peter Appiahene, Justice A. Kessie
Effects of weather and policy intervention on COVID-19 infection in Ghana
14 pages, 5 figures, 3 tables
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Even though laboratory and epidemiological studies have demonstrated the effects of ambient temperature on the transmission and survival of coronaviruses, not much has been done on the effects of weather on the spread of COVID-19. This study investigates the effects of temperature, humidity, precipitation, wind speed and the specific government policy intervention of partial lockdown on the new cases of COVID-19 infection in Ghana. Daily data on confirmed cases of COVID-19 from March 13, 2020 to April 21, 2020 were obtained from the official website of Our World in Data (OWID) dedicated to COVID-19 while satellite climate data for the same period was obtained from the official website of NASA's Prediction of Worldwide Energy Resources (POWER) project. Considering the nature of the data and the objectives of the study, a time series generalized linear model which allows for regressing on past observations of the response variable and covariates was used for model fitting. The results indicate significant effects of maximum temperature, relative humidity and precipitation in predicting new cases of the disease. Also, results of the intervention analysis indicate that the null hypothesis of no significant effect of the specific policy intervention of partial lockdown should be rejected (p-value=0.0164) at a 5\% level of significance. These findings provide useful insights for policymakers and the public.
[ { "created": "Tue, 28 Apr 2020 20:15:05 GMT", "version": "v1" } ]
2020-05-04
[ [ "Iddrisu", "Wahab Abdul", "" ], [ "Appiahene", "Peter", "" ], [ "Kessie", "Justice A.", "" ] ]
Even though laboratory and epidemiological studies have demonstrated the effects of ambient temperature on the transmission and survival of coronaviruses, not much has been done on the effects of weather on the spread of COVID-19. This study investigates the effects of temperature, humidity, precipitation, wind speed and the specific government policy intervention of partial lockdown on the new cases of COVID-19 infection in Ghana. Daily data on confirmed cases of COVID-19 from March 13, 2020 to April 21, 2020 were obtained from the official website of Our World in Data (OWID) dedicated to COVID-19 while satellite climate data for the same period was obtained from the official website of NASA's Prediction of Worldwide Energy Resources (POWER) project. Considering the nature of the data and the objectives of the study, a time series generalized linear model which allows for regressing on past observations of the response variable and covariates was used for model fitting. The results indicate significant effects of maximum temperature, relative humidity and precipitation in predicting new cases of the disease. Also, results of the intervention analysis indicate that the null hypothesis of no significant effect of the specific policy intervention of partial lockdown should be rejected (p-value=0.0164) at a 5\% level of significance. These findings provide useful insights for policymakers and the public.
1305.7478
Matteo Cavaliere
Attila Csik\'asz-Nagy, Luis M. Escudero, Martial Guillaud, Sean Sedwards, Buzz Baum, Matteo Cavaliere
Cooperation and competition in the dynamics of tissue architecture during homeostasis and tumorigenesis
11 pages, 5 figures; Seminars in Cancer Biology (2013)
null
10.1016/j.semcancer.2013.05.009
null
q-bio.TO q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The construction of a network of cell-to-cell contacts makes it possible to characterize the patterns and spatial organisation of tissues. Such networks are highly dynamic, depending on the changes of the tissue architecture caused by cell division, death and migration. Local competitive and cooperative cell-to-cell interactions influence the choices cells make. We review the literature on quantitative data of epithelial tissue topology and present a dynamical network model that can be used to explore the evolutionary dynamics of a two dimensional tissue architecture with arbitrary cell-to-cell interactions. In particular, we show that various forms of experimentally observed types of interactions can be modelled using game theory. We discuss a model of cooperative and non-cooperative cell-to-cell communication that can capture the interplay between cellular competition and tissue dynamics. We conclude with an outlook on the possible uses of this approach in modelling tumorigenesis and tissue homeostasis.
[ { "created": "Fri, 31 May 2013 16:26:14 GMT", "version": "v1" }, { "created": "Mon, 24 Jun 2013 07:57:35 GMT", "version": "v2" } ]
2013-06-25
[ [ "Csikász-Nagy", "Attila", "" ], [ "Escudero", "Luis M.", "" ], [ "Guillaud", "Martial", "" ], [ "Sedwards", "Sean", "" ], [ "Baum", "Buzz", "" ], [ "Cavaliere", "Matteo", "" ] ]
The construction of a network of cell-to-cell contacts makes it possible to characterize the patterns and spatial organisation of tissues. Such networks are highly dynamic, depending on the changes of the tissue architecture caused by cell division, death and migration. Local competitive and cooperative cell-to-cell interactions influence the choices cells make. We review the literature on quantitative data of epithelial tissue topology and present a dynamical network model that can be used to explore the evolutionary dynamics of a two dimensional tissue architecture with arbitrary cell-to-cell interactions. In particular, we show that various forms of experimentally observed types of interactions can be modelled using game theory. We discuss a model of cooperative and non-cooperative cell-to-cell communication that can capture the interplay between cellular competition and tissue dynamics. We conclude with an outlook on the possible uses of this approach in modelling tumorigenesis and tissue homeostasis.
1802.07568
Sebastian Kmiecik
Aleksander Kuriata, Aleksandra Maria Gierut, Tymoteusz Oleniecki, Maciej Pawel Ciemny, Andrzej Kolinski, Mateusz Kurcinski and Sebastian Kmiecik
CABS-flex 2.0: a web server for fast simulations of flexibility of protein structures
null
Nucleic Acids Research, gky356, 2018
10.1093/nar/gky356
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classical simulations of protein flexibility remain computationally expensive, especially for large proteins. A few years ago, we developed a fast method for predicting protein structure fluctuations that uses a single protein model as the input. The method has been made available as the CABS-flex web server and applied in numerous studies of protein structure-function relationships. Here, we present a major update of the CABS-flex web server to version 2.0. The new features include: extension of the method to significantly larger and multimeric proteins, customizable distance restraints and simulation parameters, contact maps and a new, enhanced web server interface. CABS-flex 2.0 is freely available at http://biocomp.chem.uw.edu.pl/CABSflex2
[ { "created": "Wed, 21 Feb 2018 13:51:17 GMT", "version": "v1" } ]
2018-05-17
[ [ "Kuriata", "Aleksander", "" ], [ "Gierut", "Aleksandra Maria", "" ], [ "Oleniecki", "Tymoteusz", "" ], [ "Ciemny", "Maciej Pawel", "" ], [ "Kolinski", "Andrzej", "" ], [ "Kurcinski", "Mateusz", "" ], [ "Kmiecik", ...
Classical simulations of protein flexibility remain computationally expensive, especially for large proteins. A few years ago, we developed a fast method for predicting protein structure fluctuations that uses a single protein model as the input. The method has been made available as the CABS-flex web server and applied in numerous studies of protein structure-function relationships. Here, we present a major update of the CABS-flex web server to version 2.0. The new features include: extension of the method to significantly larger and multimeric proteins, customizable distance restraints and simulation parameters, contact maps and a new, enhanced web server interface. CABS-flex 2.0 is freely available at http://biocomp.chem.uw.edu.pl/CABSflex2
1006.0519
Badri Padhukasahasram
Badri Padhukasahasram
Probability that a chromosome is lost without trace under the neutral Wright-Fisher model with recombination
Additional Information, Padhukasahasram et al. 2008, Genetics, FORWSIM algorithm
Methodology and Computing in Applied Probability, Online First, 11 May 2012
10.1007/s11009-012-9288-5
null
q-bio.PE math.PR q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I describe an analytical approximation for calculating the short-term probability of loss of a chromosome under the neutral Wright-Fisher model with recombination. I also present an upper and lower bound for this probability. Exact analytical calculation of this quantity is difficult and computationally expensive because the number of different ways in which a chromosome can be lost grows very large in the presence of recombination. Simulations indicate that the probabilities obtained using my approximate formula are always comparable to the true expectations provided that the number of generations remains small. These results are useful in the context of an algorithm that we recently developed for simulating Wright-Fisher populations forward in time. C++ programs that can efficiently calculate these formulas are available on request.
[ { "created": "Wed, 2 Jun 2010 23:06:16 GMT", "version": "v1" }, { "created": "Thu, 10 Jun 2010 04:44:53 GMT", "version": "v2" }, { "created": "Tue, 20 Jul 2010 04:17:45 GMT", "version": "v3" }, { "created": "Sat, 16 Oct 2010 18:43:54 GMT", "version": "v4" }, { "cr...
2012-05-21
[ [ "Padhukasahasram", "Badri", "" ] ]
I describe an analytical approximation for calculating the short-term probability of loss of a chromosome under the neutral Wright-Fisher model with recombination. I also present an upper and lower bound for this probability. Exact analytical calculation of this quantity is difficult and computationally expensive because the number of different ways in which a chromosome can be lost grows very large in the presence of recombination. Simulations indicate that the probabilities obtained using my approximate formula are always comparable to the true expectations provided that the number of generations remains small. These results are useful in the context of an algorithm that we recently developed for simulating Wright-Fisher populations forward in time. C++ programs that can efficiently calculate these formulas are available on request.
q-bio/0605048
Thomas R. Weikl
Thomas R. Weikl and Ken A. Dill
Transition States in Protein Folding Kinetics: The Structural Interpretation of Phi-values
26 pages, 7 figures, 5 tables
null
null
null
q-bio.BM
null
Phi-values are experimental measures of the effects of mutations on the folding kinetics of a protein. A central question is which structural information Phi-values contain about the transition state of folding. Traditionally, a Phi-value is interpreted as the 'nativeness' of a mutated residue in the transition state. However, this interpretation is often problematic because it assumes a linear relation between the nativeness of the residue and its free-energy contribution. We present here a better structural interpretation of Phi-values for mutations within a given helix. Our interpretation is based on a simple physical model that distinguishes between secondary and tertiary free-energy contributions of helical residues. From a linear fit of our model to the experimental data, we obtain two structural parameters: the extent of helix formation in the transition state, and the nativeness of tertiary interactions in the transition state. We apply our model to all proteins with well-characterized helices for which more than 10 Phi-values are available: protein A, CI2, and protein L. The model captures nonclassical Phi-values <0 or >1 in these helices, and explains how different mutations at a given site can lead to different Phi-values.
[ { "created": "Tue, 30 May 2006 10:04:49 GMT", "version": "v1" } ]
2007-05-23
[ [ "Weikl", "Thomas R.", "" ], [ "Dill", "Ken A.", "" ] ]
Phi-values are experimental measures of the effects of mutations on the folding kinetics of a protein. A central question is which structural information Phi-values contain about the transition state of folding. Traditionally, a Phi-value is interpreted as the 'nativeness' of a mutated residue in the transition state. However, this interpretation is often problematic because it assumes a linear relation between the nativeness of the residue and its free-energy contribution. We present here a better structural interpretation of Phi-values for mutations within a given helix. Our interpretation is based on a simple physical model that distinguishes between secondary and tertiary free-energy contributions of helical residues. From a linear fit of our model to the experimental data, we obtain two structural parameters: the extent of helix formation in the transition state, and the nativeness of tertiary interactions in the transition state. We apply our model to all proteins with well-characterized helices for which more than 10 Phi-values are available: protein A, CI2, and protein L. The model captures nonclassical Phi-values <0 or >1 in these helices, and explains how different mutations at a given site can lead to different Phi-values.
2212.12646
Chen-Gia Tsai
Chen-Gia Tsai, Yi-Fan Fu, and Chia-Wei Li
Reward prediction errors arising from switches between major and minor modes in music: An fMRI study
submitted to Psychophysiology
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Evidence has accumulated that prediction error processing plays a role in the enjoyment of music listening. The present study examined listeners' neural responses to the signed reward prediction errors (RPEs) arising from switches between major and minor modes in music. We manipulated the final chord of J. S. Bach's keyboard pieces so that each major-mode passage ended with either the major (Major-Major) or minor (Major-Minor) tonic chord, and each minor-mode passage ended with either the minor (Minor-Minor) or major (Minor-Major) tonic chord. In Western music, the major and minor modes have positive and negative connotations, respectively. Therefore, the outcome of the final chord in Major-Minor stimuli was associated with negative RPE, whereas that in Minor-Major was associated with positive RPE. Twenty-three musically experienced adults underwent functional magnetic resonance imaging while listening to Major-Major, Major-Minor, Minor-Minor, and Minor-Major stimuli. We found that activity in the subgenual anterior cingulate cortex (extending into the ventromedial prefrontal cortex) during the final chord for Major-Major was significantly higher than that for Major-Minor. Conversely, a frontoparietal network for Major-Minor exhibited significantly increased activity compared to Major-Major. The contrasts between Minor-Minor and Minor-Major yielded regions implicated in interoception. We discuss our results in relation to executive functions and the emotional connotations of major versus minor mode.
[ { "created": "Sat, 24 Dec 2022 03:42:43 GMT", "version": "v1" } ]
2022-12-27
[ [ "Tsai", "Chen-Gia", "" ], [ "Fu", "Yi-Fan", "" ], [ "Li", "Chia-Wei", "" ] ]
Evidence has accumulated that prediction error processing plays a role in the enjoyment of music listening. The present study examined listeners' neural responses to the signed reward prediction errors (RPEs) arising from switches between major and minor modes in music. We manipulated the final chord of J. S. Bach's keyboard pieces so that each major-mode passage ended with either the major (Major-Major) or minor (Major-Minor) tonic chord, and each minor-mode passage ended with either the minor (Minor-Minor) or major (Minor-Major) tonic chord. In Western music, the major and minor modes have positive and negative connotations, respectively. Therefore, the outcome of the final chord in Major-Minor stimuli was associated with negative RPE, whereas that in Minor-Major was associated with positive RPE. Twenty-three musically experienced adults underwent functional magnetic resonance imaging while listening to Major-Major, Major-Minor, Minor-Minor, and Minor-Major stimuli. We found that activity in the subgenual anterior cingulate cortex (extending into the ventromedial prefrontal cortex) during the final chord for Major-Major was significantly higher than that for Major-Minor. Conversely, a frontoparietal network for Major-Minor exhibited significantly increased activity compared to Major-Major. The contrasts between Minor-Minor and Minor-Major yielded regions implicated in interoception. We discuss our results in relation to executive functions and the emotional connotations of major versus minor mode.
2012.00094
Lagnajit Pattanaik
Lagnajit Pattanaik, Octavian-Eugen Ganea, Ian Coley, Klavs F. Jensen, William H. Green, Connor W. Coley
Message Passing Networks for Molecules with Tetrahedral Chirality
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Molecules with identical graph connectivity can exhibit different physical and biological properties if they exhibit stereochemistry, a spatial structural characteristic. However, modern neural architectures designed for learning structure-property relationships from molecular structures treat molecules as graph-structured data and therefore are invariant to stereochemistry. Here, we develop two custom aggregation functions for message passing neural networks to learn properties of molecules with tetrahedral chirality, one common form of stereochemistry. We evaluate performance on synthetic data as well as a newly-proposed protein-ligand docking dataset with relevance to drug discovery. Results show modest improvements over a baseline sum aggregator, highlighting opportunities for further architecture development.
[ { "created": "Tue, 24 Nov 2020 03:03:09 GMT", "version": "v1" }, { "created": "Fri, 4 Dec 2020 15:43:10 GMT", "version": "v2" } ]
2020-12-07
[ [ "Pattanaik", "Lagnajit", "" ], [ "Ganea", "Octavian-Eugen", "" ], [ "Coley", "Ian", "" ], [ "Jensen", "Klavs F.", "" ], [ "Green", "William H.", "" ], [ "Coley", "Connor W.", "" ] ]
Molecules with identical graph connectivity can exhibit different physical and biological properties if they exhibit stereochemistry, a spatial structural characteristic. However, modern neural architectures designed for learning structure-property relationships from molecular structures treat molecules as graph-structured data and therefore are invariant to stereochemistry. Here, we develop two custom aggregation functions for message passing neural networks to learn properties of molecules with tetrahedral chirality, one common form of stereochemistry. We evaluate performance on synthetic data as well as a newly-proposed protein-ligand docking dataset with relevance to drug discovery. Results show modest improvements over a baseline sum aggregator, highlighting opportunities for further architecture development.
1403.1236
Boryana Doyle
Boryana Doyle, Geoffrey Fudenberg, Maxim Imakaev, Leonid A. Mirny
Chromatin Loops as Allosteric Modulators of Enhancer-Promoter Interactions
Main text and all figures combined
PLoS Computational Biology 10(10): e1003867 (2014)
10.1371/journal.pcbi.1003867
null
q-bio.GN q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The classic model of eukaryotic gene expression requires direct spatial contact between a distal enhancer and a proximal promoter. Recent Chromosome Conformation Capture (3C) studies show that enhancers and promoters are embedded in a complex network of looping interactions. Here we use a polymer model of chromatin fiber to investigate whether, and to what extent, looping interactions between elements in the vicinity of an enhancer-promoter pair can influence their contact frequency. Our equilibrium polymer simulations show that a chromatin loop, formed by elements flanking either an enhancer or a promoter, suppresses enhancer-promoter interactions, working as an insulator. A loop formed by elements located in the region between an enhancer and a promoter, on the contrary, facilitates their interactions. We find that different mechanisms underlie insulation and facilitation; insulation occurs due to steric exclusion by the loop, and is a global effect, while facilitation occurs due to an effective shortening of the enhancer-promoter genomic distance, and is a local effect. Consistently, we find that these effects manifest quite differently for in silico 3C and microscopy. Our results show that looping interactions that do not directly involve an enhancer-promoter pair can nevertheless significantly modulate their interactions. This phenomenon is analogous to allosteric regulation in proteins, where a conformational change triggered by binding of a regulatory molecule to one site affects the state of another site.
[ { "created": "Wed, 5 Mar 2014 19:53:08 GMT", "version": "v1" }, { "created": "Sat, 22 Nov 2014 22:21:16 GMT", "version": "v2" } ]
2014-11-25
[ [ "Doyle", "Boryana", "" ], [ "Fudenberg", "Geoffrey", "" ], [ "Imakaev", "Maxim", "" ], [ "Mirny", "Leonid A.", "" ] ]
The classic model of eukaryotic gene expression requires direct spatial contact between a distal enhancer and a proximal promoter. Recent Chromosome Conformation Capture (3C) studies show that enhancers and promoters are embedded in a complex network of looping interactions. Here we use a polymer model of chromatin fiber to investigate whether, and to what extent, looping interactions between elements in the vicinity of an enhancer-promoter pair can influence their contact frequency. Our equilibrium polymer simulations show that a chromatin loop, formed by elements flanking either an enhancer or a promoter, suppresses enhancer-promoter interactions, working as an insulator. A loop formed by elements located in the region between an enhancer and a promoter, on the contrary, facilitates their interactions. We find that different mechanisms underlie insulation and facilitation; insulation occurs due to steric exclusion by the loop, and is a global effect, while facilitation occurs due to an effective shortening of the enhancer-promoter genomic distance, and is a local effect. Consistently, we find that these effects manifest quite differently for in silico 3C and microscopy. Our results show that looping interactions that do not directly involve an enhancer-promoter pair can nevertheless significantly modulate their interactions. This phenomenon is analogous to allosteric regulation in proteins, where a conformational change triggered by binding of a regulatory molecule to one site affects the state of another site.
2004.12902
Linus Schumacher
Liam J. Ruske, Jochen Kursawe, Anestis Tsakiridis, Valerie Wilson, Alexander G. Fletcher, Richard A. Blythe, Linus J. Schumacher
Coupled differentiation and division of embryonic stem cells inferred from clonal snapshots
null
null
10.1088/1478-3975/aba041
null
q-bio.QM physics.bio-ph q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The deluge of single-cell data obtained by sequencing, imaging and epigenetic markers has led to an increasingly detailed description of cell state. However, it remains challenging to identify how cells transition between different states, in part because data are typically limited to snapshots in time. A prerequisite for inferring cell state transitions from such snapshots is to distinguish whether transitions are coupled to cell divisions. To address this, we present two minimal branching process models of cell division and differentiation in a well-mixed population. These models describe dynamics where differentiation and division are coupled or uncoupled. For each model, we derive analytic expressions for each subpopulation's mean and variance and for the likelihood, allowing exact Bayesian parameter inference and model selection in the idealised case of fully observed trajectories of differentiation and division events. In the case of snapshots, we present a sample path algorithm and use this to predict optimal temporal spacing of measurements for experimental design. We then apply this methodology to an \textit{in vitro} dataset assaying the clonal growth of epiblast stem cells in culture conditions promoting self-renewal or differentiation. Here, the larger number of cell states necessitates approximate Bayesian computation. For both culture conditions, our inference supports the model where cell state transitions are coupled to division. For culture conditions promoting differentiation, our analysis indicates a possible shift in dynamics, with these processes becoming more coupled over time.
[ { "created": "Mon, 27 Apr 2020 16:04:08 GMT", "version": "v1" }, { "created": "Thu, 25 Jun 2020 08:18:58 GMT", "version": "v2" } ]
2020-06-30
[ [ "Ruske", "Liam J.", "" ], [ "Kursawe", "Jochen", "" ], [ "Tsakiridis", "Anestis", "" ], [ "Wilson", "Valerie", "" ], [ "Fletcher", "Alexander G.", "" ], [ "Blythe", "Richard A.", "" ], [ "Schumacher", "Linus J....
The deluge of single-cell data obtained by sequencing, imaging and epigenetic markers has led to an increasingly detailed description of cell state. However, it remains challenging to identify how cells transition between different states, in part because data are typically limited to snapshots in time. A prerequisite for inferring cell state transitions from such snapshots is to distinguish whether transitions are coupled to cell divisions. To address this, we present two minimal branching process models of cell division and differentiation in a well-mixed population. These models describe dynamics where differentiation and division are coupled or uncoupled. For each model, we derive analytic expressions for each subpopulation's mean and variance and for the likelihood, allowing exact Bayesian parameter inference and model selection in the idealised case of fully observed trajectories of differentiation and division events. In the case of snapshots, we present a sample path algorithm and use this to predict optimal temporal spacing of measurements for experimental design. We then apply this methodology to an \textit{in vitro} dataset assaying the clonal growth of epiblast stem cells in culture conditions promoting self-renewal or differentiation. Here, the larger number of cell states necessitates approximate Bayesian computation. For both culture conditions, our inference supports the model where cell state transitions are coupled to division. For culture conditions promoting differentiation, our analysis indicates a possible shift in dynamics, with these processes becoming more coupled over time.
2212.12392
Sakuntala Chatterjee
Sakuntala Chatterjee
Short time extremal response to step stimulus for a single cell {\sl E. coli}
15 pages, 11 figures
J. Stat. Mech. (2022) 123503
10.1088/1742-5468/aca589
null
q-bio.CB cond-mat.stat-mech physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
After application of a step stimulus, in the form of a sudden change in the attractant environment, the receptor activity and tumbling bias of an {\sl E. coli} cell change sharply to reach their extremal values before they gradually relax to their post-stimulus adapted levels in the long time limit. We perform numerical simulations and exact calculations to investigate the short time response of the cell. For both activity and tumbling bias, we exactly derive the condition for extremal response and find good agreement with simulations. We also make the experimentally verifiable prediction that there is an optimum size of the step stimulus at which the extremal response is reached in the shortest possible time.
[ { "created": "Fri, 23 Dec 2022 15:16:40 GMT", "version": "v1" } ]
2022-12-26
[ [ "Chatterjee", "Sakuntala", "" ] ]
After application of a step stimulus, in the form of a sudden change in the attractant environment, the receptor activity and tumbling bias of an {\sl E. coli} cell change sharply to reach their extremal values before they gradually relax to their post-stimulus adapted levels in the long time limit. We perform numerical simulations and exact calculations to investigate the short time response of the cell. For both activity and tumbling bias, we exactly derive the condition for extremal response and find good agreement with simulations. We also make the experimentally verifiable prediction that there is an optimum size of the step stimulus at which the extremal response is reached in the shortest possible time.
1302.4559
Szymon {\L}{\ke}ski
Szymon {\L}\k{e}ski, Henrik Lind\'en, Tom Tetzlaff, Klas H. Pettersen, Gaute T. Einevoll
Frequency dependence of signal power and spatial reach of the local field potential
null
null
10.1371/journal.pcbi.1003137
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The first recording of electrical potential from brain activity was reported as early as 1875, but the interpretation of the signal is still debated. To take full advantage of the new generation of microelectrodes with hundreds or even thousands of electrode contacts, an accurate quantitative link between what is measured and the underlying neural circuit activity is needed. Here we address the question of how the observed frequency dependence of recorded local field potentials (LFPs) should be interpreted. By use of a well-established biophysical modeling scheme, combined with detailed reconstructed neuronal morphologies, we find that correlations in the synaptic inputs onto a population of pyramidal cells may significantly boost the low-frequency components of the generated LFP. We further find that these low-frequency components may be less `local' than the high-frequency LFP components in the sense that (1) the size of the signal-generation region of the LFP recorded at an electrode is larger and (2) the LFP generated by a synaptically activated population spreads further outside the population edge due to volume conduction.
[ { "created": "Tue, 19 Feb 2013 09:52:38 GMT", "version": "v1" } ]
2022-05-17
[ [ "Łęski", "Szymon", "" ], [ "Lindén", "Henrik", "" ], [ "Tetzlaff", "Tom", "" ], [ "Pettersen", "Klas H.", "" ], [ "Einevoll", "Gaute T.", "" ] ]
The first recording of electrical potential from brain activity was reported as early as 1875, but the interpretation of the signal is still debated. To take full advantage of the new generation of microelectrodes with hundreds or even thousands of electrode contacts, an accurate quantitative link between what is measured and the underlying neural circuit activity is needed. Here we address the question of how the observed frequency dependence of recorded local field potentials (LFPs) should be interpreted. By use of a well-established biophysical modeling scheme, combined with detailed reconstructed neuronal morphologies, we find that correlations in the synaptic inputs onto a population of pyramidal cells may significantly boost the low-frequency components of the generated LFP. We further find that these low-frequency components may be less `local' than the high-frequency LFP components in the sense that (1) the size of the signal-generation region of the LFP recorded at an electrode is larger and (2) the LFP generated by a synaptically activated population spreads further outside the population edge due to volume conduction.
1806.04634
Daniel Moyer
Daniel Moyer, Paul M. Thompson, Greg Ver Steeg
Measures of Tractography Convergence
11 pages
null
null
null
q-bio.QM cs.LG q-bio.TO stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the present work, we use information theory to understand the empirical convergence rate of tractography, a widely-used approach to reconstruct anatomical fiber pathways in the living brain. Based on diffusion MRI data, tractography is the starting point for many methods to study brain connectivity. Of the available methods to perform tractography, most reconstruct a finite set of streamlines, or 3D curves, representing probable connections between anatomical regions, yet relatively little is known about how the sampling of this set of streamlines affects downstream results, and how exhaustive the sampling should be. Here we provide a method to measure the information theoretic surprise (self-cross entropy) for tract sampling schema. We then empirically assess four streamline methods. We demonstrate that the relative information gain is very low after a moderate number of streamlines have been generated for each tested method. The results give rise to several guidelines for optimal sampling in brain connectivity analyses.
[ { "created": "Tue, 12 Jun 2018 16:30:25 GMT", "version": "v1" } ]
2018-06-13
[ [ "Moyer", "Daniel", "" ], [ "Thompson", "Paul M.", "" ], [ "Steeg", "Greg Ver", "" ] ]
In the present work, we use information theory to understand the empirical convergence rate of tractography, a widely-used approach to reconstruct anatomical fiber pathways in the living brain. Based on diffusion MRI data, tractography is the starting point for many methods to study brain connectivity. Of the available methods to perform tractography, most reconstruct a finite set of streamlines, or 3D curves, representing probable connections between anatomical regions, yet relatively little is known about how the sampling of this set of streamlines affects downstream results, and how exhaustive the sampling should be. Here we provide a method to measure the information-theoretic surprise (self-cross entropy) for tract sampling schemes. We then empirically assess four streamline methods. We demonstrate that the relative information gain is very low after a moderate number of streamlines have been generated for each tested method. The results give rise to several guidelines for optimal sampling in brain connectivity analyses.
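The self-cross entropy idea in the abstract above can be sketched as a toy computation (an illustrative assumption, not the authors' implementation: streamlines are collapsed into hypothetical discrete "tract bins" with made-up probabilities, and the empirical distribution at sample size n is compared against a large reference sample):

```python
import math
import random

random.seed(0)
TRUE_P = [0.4, 0.3, 0.15, 0.1, 0.05]  # hypothetical tract-bin probabilities

def empirical(n):
    """Empirical bin distribution from n sampled streamlines (add-one smoothed)."""
    counts = [0] * len(TRUE_P)
    for _ in range(n):
        r, acc = random.random(), 0.0
        for i, p in enumerate(TRUE_P):
            acc += p
            if r < acc:
                counts[i] += 1
                break
    return [(c + 1) / (n + len(TRUE_P)) for c in counts]

def cross_entropy(p, q):
    """Cross entropy H(p, q) in bits; equals H(p) when q == p."""
    return -sum(pi * math.log2(qi) for pi, qi in zip(p, q))

ref = empirical(100000)  # large reference sample stands in for "exhaustive" tracking
for n in [100, 1000, 10000]:
    print(n, round(cross_entropy(empirical(n), ref), 3))
```

As n grows, the surprise of the reference sample under the size-n estimate settles quickly, mirroring the paper's finding that information gain is low after a moderate number of streamlines.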
2308.15597
David Cutler
David J. Cutler, Kiana Jodeiry, Andrew J. Bass and Michael P. Epstein
The Quantitative Genetics of Human Disease: 1. Foundations
null
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this, the first of an anticipated four-paper series, fundamental results of quantitative genetics are presented from a first-principles approach. While none of these results are in any sense new, they are presented in extended detail to precisely distinguish between definition and assumption, with a further emphasis on distinguishing quantities from their usual approximations. Terminology frequently encountered in the field of human genetic disease studies will be defined in terms of its quantitative genetics form. Methods for estimation of both quantitative genetics and the related human genetics quantities will be demonstrated. While practitioners in the field of human quantitative disease studies may find this work pedantic in detail, the principal target audience for this work is trainees reasonably familiar with population genetics theory, but with less experience in its application to human disease studies. We introduce much of this formalism because in later papers in this series, we demonstrate that common areas of confusion in human disease studies can be resolved by appealing directly to these formal definitions. The second paper in this series will discuss polygenic risk scores. The third paper will concern the question of "missing" heritability and the role interactions may play. The fourth paper will discuss sexually dimorphic disease and the potential role of the X chromosome.
[ { "created": "Tue, 29 Aug 2023 19:42:18 GMT", "version": "v1" } ]
2023-08-31
[ [ "Cutler", "David J.", "" ], [ "Jodeiry", "Kiana", "" ], [ "Bass", "Andrew J.", "" ], [ "Epstein", "Michael P.", "" ] ]
In this, the first of an anticipated four-paper series, fundamental results of quantitative genetics are presented from a first-principles approach. While none of these results are in any sense new, they are presented in extended detail to precisely distinguish between definition and assumption, with a further emphasis on distinguishing quantities from their usual approximations. Terminology frequently encountered in the field of human genetic disease studies will be defined in terms of its quantitative genetics form. Methods for estimation of both quantitative genetics and the related human genetics quantities will be demonstrated. While practitioners in the field of human quantitative disease studies may find this work pedantic in detail, the principal target audience for this work is trainees reasonably familiar with population genetics theory, but with less experience in its application to human disease studies. We introduce much of this formalism because in later papers in this series, we demonstrate that common areas of confusion in human disease studies can be resolved by appealing directly to these formal definitions. The second paper in this series will discuss polygenic risk scores. The third paper will concern the question of "missing" heritability and the role interactions may play. The fourth paper will discuss sexually dimorphic disease and the potential role of the X chromosome.
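As a concrete illustration of the kind of estimation the abstract above promises, here is a toy midparent-offspring regression for narrow-sense heritability, a standard textbook quantitative-genetics method (the variance parameters, sample size, and simulation itself are assumptions for illustration, not from the paper):

```python
import random

random.seed(1)
VA, VE = 1.0, 1.0          # assumed additive genetic and environmental variances
true_h2 = VA / (VA + VE)   # narrow-sense heritability = 0.5

pairs = []
for _ in range(5000):
    # parental breeding values and phenotypes
    am, af = random.gauss(0, VA ** 0.5), random.gauss(0, VA ** 0.5)
    pm = am + random.gauss(0, VE ** 0.5)
    pf = af + random.gauss(0, VE ** 0.5)
    # offspring breeding value: midparent average plus Mendelian segregation noise
    ao = 0.5 * (am + af) + random.gauss(0, (VA / 2) ** 0.5)
    po = ao + random.gauss(0, VE ** 0.5)
    pairs.append((0.5 * (pm + pf), po))

mx = sum(x for x, _ in pairs) / len(pairs)
my = sum(y for _, y in pairs) / len(pairs)
cov = sum((x - mx) * (y - my) for x, y in pairs) / len(pairs)
var = sum((x - mx) ** 2 for x, _ in pairs) / len(pairs)
h2_hat = cov / var  # midparent-offspring regression slope estimates h^2
print(round(h2_hat, 3), "vs true", true_h2)
```

The regression slope estimates h^2 because cov(midparent, offspring) = VA/2 while var(midparent) = (VA + VE)/2.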
1802.04069
Lars Rothkegel
Lars Oliver Martin Rothkegel, Heiko Herbert Sch\"utt, Hans Arne Trukenbrod, Felix Alexander Wichmann and Ralf Engbert
Searchers adjust their eye movement dynamics to the target characteristics in natural scenes
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When searching for a target in a natural scene, both the target's visual properties and its similarity to the background influence whether (and how fast) humans are able to find it. However, thus far it has been unclear whether searchers adjust the dynamics of their eye movements (e.g., fixation durations, saccade amplitudes) to the target they search for. In our experiment, participants searched natural scenes for six artificial targets with different spatial frequencies throughout eight consecutive sessions. High-spatial-frequency targets led to smaller saccade amplitudes and shorter fixation durations than low-spatial-frequency targets if the target identity was known before the trial. If a saccade was programmed in the same direction as the previous saccade (saccadic momentum), fixation durations and successive saccade amplitudes were not influenced by target type. Visual saliency and empirical density at the endpoints of saccadic momentum saccades were comparatively low, indicating that these saccades were less selective. Our results demonstrate that searchers adjust their eye movement dynamics to the search target in a sensible fashion, since low spatial frequencies are visible farther into the periphery than high spatial frequencies. Additionally, the saccade-direction specificity of our effects suggests a separation of saccades into a default scanning mechanism and a selective, target-dependent mechanism.
[ { "created": "Mon, 12 Feb 2018 14:34:01 GMT", "version": "v1" } ]
2018-02-13
[ [ "Rothkegel", "Lars Oliver Martin", "" ], [ "Schütt", "Heiko Herbert", "" ], [ "Trukenbrod", "Hans Arne", "" ], [ "Wichmann", "Felix Alexander", "" ], [ "Engbert", "Ralf", "" ] ]
When searching for a target in a natural scene, both the target's visual properties and its similarity to the background influence whether (and how fast) humans are able to find it. However, thus far it has been unclear whether searchers adjust the dynamics of their eye movements (e.g., fixation durations, saccade amplitudes) to the target they search for. In our experiment, participants searched natural scenes for six artificial targets with different spatial frequencies throughout eight consecutive sessions. High-spatial-frequency targets led to smaller saccade amplitudes and shorter fixation durations than low-spatial-frequency targets if the target identity was known before the trial. If a saccade was programmed in the same direction as the previous saccade (saccadic momentum), fixation durations and successive saccade amplitudes were not influenced by target type. Visual saliency and empirical density at the endpoints of saccadic momentum saccades were comparatively low, indicating that these saccades were less selective. Our results demonstrate that searchers adjust their eye movement dynamics to the search target in a sensible fashion, since low spatial frequencies are visible farther into the periphery than high spatial frequencies. Additionally, the saccade-direction specificity of our effects suggests a separation of saccades into a default scanning mechanism and a selective, target-dependent mechanism.
1207.7017
Robert Reid
Robert W. Reid, Melanie D. Spencer, Timothy J. Hamp, Anthony A. Fodor
ARISA data from the human gut microbiome can detect individual differences observed by 454 sequencing regardless of binning strategy
22 pages, 4 figures, 2 tables
null
null
null
q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
ARISA (Automated Ribosomal Intergenic Spacer Analysis) is a low-cost technique that allows for the rapid comparison of different microbial environments. In this study, we asked if a set of ARISA profiles can distinguish human microbial environments from one another with the same accuracy as results generated from 454 high-throughput DNA sequencing. Using a set of human microbial communities where the sequencing results cluster by subject, we tested how choices made during ARISA data processing influence clustering. We found that the choice of clustering method had a profound effect, with Ward's clustering generating profiles the most similar to 454 sequencing. Factors such as bin size, the use of presence or absence calls, and technical replicate manipulation had a negligible effect on clustering. In fact, no established bin-sizing method reported in the literature produced results significantly different from simply picking bin intervals at random. We conclude that in an analysis of ARISA data from an ecosystem of sufficient complexity to saturate bins, a careful choice of clustering algorithm is essential, whereas differing strategies for choosing bins are likely to have a much less pronounced effect on the outcome of the analysis. As a tool for distinguishing complex microbial communities, ARISA closely approximates the results obtained from DNA sequencing at a fraction of the cost; however, ARISA fails to reproduce the sequencing results perfectly.
[ { "created": "Mon, 30 Jul 2012 17:56:32 GMT", "version": "v1" } ]
2012-07-31
[ [ "Reid", "Robert W.", "" ], [ "Spencer", "Melanie D.", "" ], [ "Hamp", "Timothy J.", "" ], [ "Fodor", "Anthony A.", "" ] ]
ARISA (Automated Ribosomal Intergenic Spacer Analysis) is a low-cost technique that allows for the rapid comparison of different microbial environments. In this study, we asked if a set of ARISA profiles can distinguish human microbial environments from one another with the same accuracy as results generated from 454 high-throughput DNA sequencing. Using a set of human microbial communities where the sequencing results cluster by subject, we tested how choices made during ARISA data processing influence clustering. We found that the choice of clustering method had a profound effect, with Ward's clustering generating profiles the most similar to 454 sequencing. Factors such as bin size, the use of presence or absence calls, and technical replicate manipulation had a negligible effect on clustering. In fact, no established bin-sizing method reported in the literature produced results significantly different from simply picking bin intervals at random. We conclude that in an analysis of ARISA data from an ecosystem of sufficient complexity to saturate bins, a careful choice of clustering algorithm is essential, whereas differing strategies for choosing bins are likely to have a much less pronounced effect on the outcome of the analysis. As a tool for distinguishing complex microbial communities, ARISA closely approximates the results obtained from DNA sequencing at a fraction of the cost; however, ARISA fails to reproduce the sequencing results perfectly.
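Ward's clustering, which the study above found critical, can be sketched with a naive agglomerative implementation (illustrative only; the toy `profiles` are hypothetical binned abundance vectors, not ARISA data, and real analyses would use an optimized library routine):

```python
def ward_cluster(profiles, k):
    """Agglomerate profiles into k clusters, always merging the pair whose
    union gives the smallest increase in within-cluster sum of squares (Ward)."""
    clusters = [[1, list(p)] for p in profiles]  # (size, centroid) per cluster
    members = [[i] for i in range(len(profiles))]

    def merge_cost(a, b):
        na, ca = a
        nb, cb = b
        d2 = sum((x - y) ** 2 for x, y in zip(ca, cb))
        return na * nb / (na + nb) * d2  # Ward's increase in SSE

    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                c = merge_cost(clusters[i], clusters[j])
                if best is None or c < best[0]:
                    best = (c, i, j)
        _, i, j = best
        na, ca = clusters[i]
        nb, cb = clusters[j]
        clusters[i] = [na + nb,
                       [(na * x + nb * y) / (na + nb) for x, y in zip(ca, cb)]]
        members[i] += members[j]
        del clusters[j]
        del members[j]
    return members

# two synthetic "subjects": profiles 0-1 resemble each other, as do 2-3
profiles = [[5, 1, 0], [4, 2, 0], [0, 1, 6], [0, 2, 5]]
print(ward_cluster(profiles, 2))
```

Under Ward's criterion the two within-subject pairs merge first, so the profiles cluster by subject, the property the paper uses as its benchmark.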
1611.04168
Eugene Katrukha
Eugene Katrukha
Dynamic instabilities in the kinetics of growth and disassembly of microtubules
thesis (in Russian)
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The dynamic instability of microtubules is considered using the frameworks of non-linear thermodynamics and non-equilibrium reaction-diffusion systems. Stochastic assembly/disassembly phases in the polymerization dynamics of microtubules are treated as a result of collective clusterization of microdefects (holes in the structure). The model explains the experimentally observed power-law dependence of catastrophe frequency on the microtubule growth rate. An additional reaction-diffusion-precipitation model is developed to account for kinetic limitations in microtubule dynamics. It is shown that large-scale periodic microtubule length fluctuations are accompanied by concentration autowaves. We build the corresponding parametric diagram mapping areas of stationary, non-stationary and metastable solutions. The loss of stability of the stationary solutions happens through an Andronov-Hopf bifurcation. Using the parametric diagram, we classify the cytostatic effect of microtubule-stabilizing drugs into four major classes and analyze their compatibility and possible synergistic effect for cancer treatment therapy.
[ { "created": "Sun, 13 Nov 2016 18:32:50 GMT", "version": "v1" } ]
2016-11-15
[ [ "Katrukha", "Eugene", "" ] ]
The dynamic instability of microtubules is considered using the frameworks of non-linear thermodynamics and non-equilibrium reaction-diffusion systems. Stochastic assembly/disassembly phases in the polymerization dynamics of microtubules are treated as a result of collective clusterization of microdefects (holes in the structure). The model explains the experimentally observed power-law dependence of catastrophe frequency on the microtubule growth rate. An additional reaction-diffusion-precipitation model is developed to account for kinetic limitations in microtubule dynamics. It is shown that large-scale periodic microtubule length fluctuations are accompanied by concentration autowaves. We build the corresponding parametric diagram mapping areas of stationary, non-stationary and metastable solutions. The loss of stability of the stationary solutions happens through an Andronov-Hopf bifurcation. Using the parametric diagram, we classify the cytostatic effect of microtubule-stabilizing drugs into four major classes and analyze their compatibility and possible synergistic effect for cancer treatment therapy.
q-bio/0603008
Le Zhang
Le Zhang, Chaitanya A. Athale, Thomas S. Deisboeck
Development of a Three-Dimensional Multiscale Agent-Based Tumor Model: Simulating Gene-Protein Interaction Profiles, Cell Phenotypes & Multicellular Patterns in Brain Cancer
40 pages, 6 figures
null
null
null
q-bio.TO q-bio.MN
null
Experimental evidence suggests that epidermal growth factor receptor (EGFR)-mediated activation of the signaling protein phospholipase C gamma plays a critical role in a cancer cell's phenotypic decision to either proliferate or migrate at a given point in time. Here, we present a novel three-dimensional multiscale agent-based model to simulate this cellular decision process in the context of a virtual brain tumor. Each tumor cell is equipped with an EGFR gene-protein interaction network module that also connects to a simplified cell cycle description. The simulation results show that over time proliferative and migratory cell populations not only oscillate but also directly impact the spatio-temporal expansion patterns of the entire cancer system. The percentage change in the concentration of the sub-cellular interaction network's molecular components fluctuates and, for the proliferation-to-migration switch, we find that the phenotype-triggering molecular profile varies to some degree as the tumor system grows and the microenvironment changes. We discuss potential implications of these findings for experimental and clinical cancer research.
[ { "created": "Mon, 6 Mar 2006 22:10:40 GMT", "version": "v1" }, { "created": "Thu, 22 Jun 2006 19:39:36 GMT", "version": "v2" } ]
2007-05-23
[ [ "Zhang", "Le", "" ], [ "Athale", "Chaitanya A.", "" ], [ "Deisboeck", "Thomas S.", "" ] ]
Experimental evidence suggests that epidermal growth factor receptor (EGFR)-mediated activation of the signaling protein phospholipase C gamma plays a critical role in a cancer cell's phenotypic decision to either proliferate or migrate at a given point in time. Here, we present a novel three-dimensional multiscale agent-based model to simulate this cellular decision process in the context of a virtual brain tumor. Each tumor cell is equipped with an EGFR gene-protein interaction network module that also connects to a simplified cell cycle description. The simulation results show that over time proliferative and migratory cell populations not only oscillate but also directly impact the spatio-temporal expansion patterns of the entire cancer system. The percentage change in the concentration of the sub-cellular interaction network's molecular components fluctuates and, for the proliferation-to-migration switch, we find that the phenotype-triggering molecular profile varies to some degree as the tumor system grows and the microenvironment changes. We discuss potential implications of these findings for experimental and clinical cancer research.
2106.03667
So Nakashima
So Nakashima and Tetsuya J. Kobayashi
Acceleration of Evolutionary Processes by Learning and Extended Fisher's Fundamental Theorem
19 pages, 4 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Natural selection is a general and powerful concept, not only for explaining the evolutionary processes of biological organisms but also for designing engineering systems such as genetic algorithms and particle filters. There is a surge of interest, from both biology and engineering, in considering natural selection of intellectual agents that can learn individually. Learning by individual agents of better behaviors for survival may accelerate the evolutionary processes driven by natural selection. There is accumulating evidence that organisms can transmit information to the next generation via epigenetic states or memes, and such ideas are also important for engineering applications. To accelerate the evolutionary process, an agent should change its strategy so that the population fitness increases the most; equivalently, an agent should update its strategy along the gradient of the population fitness. However, it has not yet been clarified whether and how an agent can estimate this gradient and accelerate the evolutionary process. We also lack a methodology to quantify the acceleration, which is needed to understand and predict the impact of learning. In this paper, we address these problems. We show that a learning agent can accelerate the evolutionary process by proposing ancestral learning, which uses the information transmitted from the ancestor (ancestral information). We next show that the ancestral information is sufficient to estimate the gradient; in particular, learning can accelerate the evolutionary process without communication between agents. Finally, to quantify the acceleration, we extend Fisher's fundamental theorem (FF-thm) of natural selection to ancestral learning. Our extended FF-thm relates the acceleration of the evolutionary process to the variation in the individual fitness of the agents. By the theorem, we can quantitatively understand when and why learning is beneficial.
[ { "created": "Mon, 7 Jun 2021 14:45:28 GMT", "version": "v1" }, { "created": "Tue, 8 Jun 2021 03:44:08 GMT", "version": "v2" } ]
2021-06-09
[ [ "Nakashima", "So", "" ], [ "Kobayashi", "Tetsuya J.", "" ] ]
Natural selection is a general and powerful concept, not only for explaining the evolutionary processes of biological organisms but also for designing engineering systems such as genetic algorithms and particle filters. There is a surge of interest, from both biology and engineering, in considering natural selection of intellectual agents that can learn individually. Learning by individual agents of better behaviors for survival may accelerate the evolutionary processes driven by natural selection. There is accumulating evidence that organisms can transmit information to the next generation via epigenetic states or memes, and such ideas are also important for engineering applications. To accelerate the evolutionary process, an agent should change its strategy so that the population fitness increases the most; equivalently, an agent should update its strategy along the gradient of the population fitness. However, it has not yet been clarified whether and how an agent can estimate this gradient and accelerate the evolutionary process. We also lack a methodology to quantify the acceleration, which is needed to understand and predict the impact of learning. In this paper, we address these problems. We show that a learning agent can accelerate the evolutionary process by proposing ancestral learning, which uses the information transmitted from the ancestor (ancestral information). We next show that the ancestral information is sufficient to estimate the gradient; in particular, learning can accelerate the evolutionary process without communication between agents. Finally, to quantify the acceleration, we extend Fisher's fundamental theorem (FF-thm) of natural selection to ancestral learning. Our extended FF-thm relates the acceleration of the evolutionary process to the variation in the individual fitness of the agents. By the theorem, we can quantitatively understand when and why learning is beneficial.
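The classical discrete-generation form of Fisher's fundamental theorem that the paper above extends can be checked numerically: under replicator dynamics with constant fitnesses, the per-generation increase in mean fitness equals the variance in fitness divided by the mean (a minimal sketch of the classical theorem only, not the paper's extension; the fitness values are arbitrary):

```python
def step(freqs, fitness):
    """One generation of discrete replicator dynamics: p_i' = p_i * w_i / wbar."""
    wbar = sum(p * w for p, w in zip(freqs, fitness))
    return [p * w / wbar for p, w in zip(freqs, fitness)], wbar

freqs = [0.5, 0.3, 0.2]        # type frequencies
fitness = [1.0, 1.5, 2.0]      # constant type fitnesses

wbar = sum(p * w for p, w in zip(freqs, fitness))
var = sum(p * (w - wbar) ** 2 for p, w in zip(freqs, fitness))
new_freqs, _ = step(freqs, fitness)
new_wbar = sum(p * w for p, w in zip(new_freqs, fitness))

# Fisher (discrete form): delta wbar = Var(w) / wbar, exactly
print(new_wbar - wbar, var / wbar)
```

The identity is exact here because wbar' = E[w^2]/wbar, so wbar' - wbar = Var(w)/wbar; the paper's contribution is an analogous relation for populations of learning agents.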
1708.03154
Marco Formentin
Chengyi Tu and Samir Suweis and Jacopo Grilli and Marco Formentin and Amos Maritan
Reconciling cooperation, biodiversity and stability in complex ecological communities
25 pages, 10 figures
null
null
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Empirical observations show that ecological communities can contain a huge number of coexisting species, even with a small or limited number of resources. These ecosystems are characterized by multiple types of interactions, in particular displaying cooperative behaviors. However, standard modeling of population dynamics based on Lotka-Volterra-type equations predicts that ecosystem stability should decrease as the number of species in the community increases, and that cooperative systems are less stable than communities with only competitive and/or exploitative interactions. Here we propose a stochastic model of population dynamics, which includes exploitative interactions as well as cooperative interactions induced by cross-feeding. The model is exactly solved and we obtain results for relevant macro-ecological patterns, such as species abundance distributions and correlation functions. In the large system size limit, any number of species can coexist for a very general class of interaction networks, and stability increases as the number of species grows. For purely mutualistic/commensalistic interactions, we determine the topological properties of the network that guarantee species coexistence. We also show that the stationary state is globally stable and that inferring species interactions through species abundance correlation analysis may be misleading. Our theoretical approach thus shows that appropriate models of cooperation naturally lead to a resolution of the long-standing complexity-stability paradox and of the question of how highly biodiverse communities can coexist.
[ { "created": "Thu, 10 Aug 2017 10:24:19 GMT", "version": "v1" }, { "created": "Mon, 18 Dec 2017 10:27:39 GMT", "version": "v2" }, { "created": "Thu, 17 May 2018 09:03:39 GMT", "version": "v3" } ]
2018-05-21
[ [ "Tu", "Chengyi", "" ], [ "Suweis", "Samir", "" ], [ "Grilli", "Jacopo", "" ], [ "Formentin", "Marco", "" ], [ "Maritan", "Amos", "" ] ]
Empirical observations show that ecological communities can contain a huge number of coexisting species, even with a small or limited number of resources. These ecosystems are characterized by multiple types of interactions, in particular displaying cooperative behaviors. However, standard modeling of population dynamics based on Lotka-Volterra-type equations predicts that ecosystem stability should decrease as the number of species in the community increases, and that cooperative systems are less stable than communities with only competitive and/or exploitative interactions. Here we propose a stochastic model of population dynamics, which includes exploitative interactions as well as cooperative interactions induced by cross-feeding. The model is exactly solved and we obtain results for relevant macro-ecological patterns, such as species abundance distributions and correlation functions. In the large system size limit, any number of species can coexist for a very general class of interaction networks, and stability increases as the number of species grows. For purely mutualistic/commensalistic interactions, we determine the topological properties of the network that guarantee species coexistence. We also show that the stationary state is globally stable and that inferring species interactions through species abundance correlation analysis may be misleading. Our theoretical approach thus shows that appropriate models of cooperation naturally lead to a resolution of the long-standing complexity-stability paradox and of the question of how highly biodiverse communities can coexist.
2107.13099
Yue Wang
Yue Wang and Zikun Wang
Inference on the structure of gene regulatory networks
null
Journal of Theoretical Biology 539 (2022): 111055
null
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
In this paper, we conduct theoretical analyses of inferring the structure of gene regulatory networks. Depending on the experimental method and data type, the inference problem is classified into 20 different scenarios. For each scenario, we discuss what can be inferred about the structure given enough data, and under what assumptions. For scenarios that have been covered in the literature, we provide a brief review. For scenarios that have not been covered in the literature, if the structure can be inferred, we propose new mathematical inference methods and evaluate them on simulated data. Otherwise, we prove that the structure cannot be inferred.
[ { "created": "Tue, 27 Jul 2021 22:59:37 GMT", "version": "v1" }, { "created": "Sat, 4 Sep 2021 03:12:46 GMT", "version": "v2" }, { "created": "Fri, 22 Oct 2021 00:38:03 GMT", "version": "v3" }, { "created": "Tue, 7 Dec 2021 21:58:32 GMT", "version": "v4" }, { "cre...
2022-02-18
[ [ "Wang", "Yue", "" ], [ "Wang", "Zikun", "" ] ]
In this paper, we conduct theoretical analyses of inferring the structure of gene regulatory networks. Depending on the experimental method and data type, the inference problem is classified into 20 different scenarios. For each scenario, we discuss what can be inferred about the structure given enough data, and under what assumptions. For scenarios that have been covered in the literature, we provide a brief review. For scenarios that have not been covered in the literature, if the structure can be inferred, we propose new mathematical inference methods and evaluate them on simulated data. Otherwise, we prove that the structure cannot be inferred.
2212.03202
Sebastiano Pilati
F. Pellicani, D. Dal Ben, A. Perali, S. Pilati
Machine Learning Scoring Functions for Drug Discoveries from Experimental and Computer-Generated Protein-Ligand Structures: Towards Per-Target Scoring Functions
22 pages, 8 figures
Molecules 2023, 28, 1661
10.3390/molecules28041661
null
q-bio.QM cond-mat.dis-nn physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, machine learning has been proposed as a promising strategy to build accurate scoring functions for computational docking aimed at computationally assisted drug discovery. However, recent studies have suggested that over-optimistic results have been reported due to the correlations present in the experimental databases used for training and testing. Here, we investigate the performance of an artificial neural network in binding affinity predictions, comparing results obtained using experimental protein-ligand structures as well as larger sets of computer-generated structures created using commercial software. Interestingly, similar performances are obtained on both databases. We find a noticeable performance suppression when moving from random horizontal tests to vertical tests performed on target proteins not included in the training data. The possibility of training the network on relatively easily created computer-generated databases leads us to explore per-target scoring functions, trained and tested ad hoc on complexes including only one target protein. Encouraging results are obtained, depending on the type of protein being addressed.
[ { "created": "Tue, 6 Dec 2022 18:21:56 GMT", "version": "v1" }, { "created": "Wed, 15 Feb 2023 15:46:00 GMT", "version": "v2" } ]
2023-02-17
[ [ "Pellicani", "F.", "" ], [ "Ben", "D. Dal", "" ], [ "Perali", "A.", "" ], [ "Pilati", "S.", "" ] ]
In recent years, machine learning has been proposed as a promising strategy to build accurate scoring functions for computational docking aimed at computationally assisted drug discovery. However, recent studies have suggested that over-optimistic results have been reported due to the correlations present in the experimental databases used for training and testing. Here, we investigate the performance of an artificial neural network in binding affinity predictions, comparing results obtained using experimental protein-ligand structures as well as larger sets of computer-generated structures created using commercial software. Interestingly, similar performances are obtained on both databases. We find a noticeable performance suppression when moving from random horizontal tests to vertical tests performed on target proteins not included in the training data. The possibility of training the network on relatively easily created computer-generated databases leads us to explore per-target scoring functions, trained and tested ad hoc on complexes including only one target protein. Encouraging results are obtained, depending on the type of protein being addressed.
2202.00451
Alessandra Micheletti
Giovanni Bocchi, Patrizio Frosini, Alessandra Micheletti, Alessandro Pedretti, Carmen Gratteri, Filippo Lunghini, Andrea Rosario Beccari, Carmine Talarico
GENEOnet: A new machine learning paradigm based on Group Equivariant Non-Expansive Operators. An application to protein pocket detection
null
null
null
null
q-bio.BM cs.AI cs.LG math.OC
http://creativecommons.org/licenses/by/4.0/
There is currently a strong focus on the development of explainable machine learning techniques. Here we introduce a new computational paradigm based on Group Equivariant Non-Expansive Operators, which can be regarded as the product of an emerging mathematical theory of information-processing observers. This approach, which can be adjusted to different situations, may have many advantages over other common tools such as neural networks, including: knowledge injection and information engineering, selection of relevant features, a small number of parameters, and higher transparency. We chose to test our method, called GENEOnet, on a key problem in drug design: detecting pockets on the surface of proteins that can host ligands. Experimental results confirmed that our method works well even with a quite small training set, thus providing a great computational advantage, while the final comparison with other state-of-the-art methods shows that GENEOnet provides better or comparable results in terms of accuracy.
[ { "created": "Mon, 31 Jan 2022 11:14:51 GMT", "version": "v1" } ]
2022-02-02
[ [ "Bocchi", "Giovanni", "" ], [ "Frosini", "Patrizio", "" ], [ "Micheletti", "Alessandra", "" ], [ "Pedretti", "Alessandro", "" ], [ "Gratteri", "Carmen", "" ], [ "Lunghini", "Filippo", "" ], [ "Beccari", "Andrea...
There is currently a strong focus on the development of explainable machine learning techniques. Here we introduce a new computational paradigm based on Group Equivariant Non-Expansive Operators, which can be regarded as the product of an emerging mathematical theory of information-processing observers. This approach, which can be adjusted to different situations, may have many advantages over other common tools such as neural networks, including: knowledge injection and information engineering, selection of relevant features, a small number of parameters, and higher transparency. We chose to test our method, called GENEOnet, on a key problem in drug design: detecting pockets on the surface of proteins that can host ligands. Experimental results confirmed that our method works well even with a quite small training set, thus providing a great computational advantage, while the final comparison with other state-of-the-art methods shows that GENEOnet provides better or comparable results in terms of accuracy.
q-bio/0312011
Rodrick Wallace
Rodrick Wallace, Deborah N. Wallace
Structured psychosocial stress and the US obesity epidemic
12 pages, 6 figures
null
null
null
q-bio.NC q-bio.QM
null
We examine the accelerating 'obesity epidemic' in the US from the perspective of generalized language-of-thought arguments relating a cognitive hypothalamic-pituitary-adrenal axis to an embedding context of structured psychosocial stress. From a Rate Distortion perspective, the obesity epidemic is an image of ratcheting social pathology -- indexed by massive, policy-driven, deurbanization and deindustrialization -- impressed upon the bodies of American adults and children. The resulting pattern of developmental disorder, while stratified by expected divisions of class and ethnicity, is nonetheless relentlessly engulfing even affluent majority populations.
[ { "created": "Mon, 8 Dec 2003 18:07:06 GMT", "version": "v1" } ]
2007-05-23
[ [ "Wallace", "Rodrick", "" ], [ "Wallace", "Deborah N.", "" ] ]
We examine the accelerating 'obesity epidemic' in the US from the perspective of generalized language-of-thought arguments relating a cognitive hypothalamic-pituitary-adrenal axis to an embedding context of structured psychosocial stress. From a Rate Distortion perspective, the obesity epidemic is an image of ratcheting social pathology -- indexed by massive, policy-driven, deurbanization and deindustrialization -- impressed upon the bodies of American adults and children. The resulting pattern of developmental disorder, while stratified by expected divisions of class and ethnicity, is nonetheless relentlessly engulfing even affluent majority populations.
2108.02742
Mansur Zhussupbekov
Mansur Zhussupbekov, Rodrigo Mendez Rojano, Wei-Tao Wu, Mehrdad Massoudi, James F. Antaki
A continuum model for the unfolding of von Willebrand Factor
null
null
10.1007/s10439-021-02845-5
null
q-bio.QM physics.bio-ph physics.flu-dyn
http://creativecommons.org/licenses/by-nc-nd/4.0/
von Willebrand Factor is a mechano-sensitive protein circulating in blood that mediates platelet adhesion to subendothelial collagen and platelet aggregation at high shear rates. Its hemostatic function and thrombogenic effect, as well as its susceptibility to enzymatic cleavage, are regulated by a conformational change from a collapsed globular state to a stretched state. Therefore, it is essential to account for the conformation of the vWF multimers when modeling vWF-mediated thrombosis or vWF degradation. We introduce a continuum model of vWF unfolding that is developed within the framework of our multi-constituent model of platelet-mediated thrombosis. The model considers two interconvertible vWF species corresponding to the collapsed and stretched conformational states. vWF unfolding takes place via two regimes: tumbling in simple shear and strong unfolding in flows with a dominant extensional component. These two regimes were demonstrated in a Couette flow between parallel plates and an extensional flow in a cross-slot geometry. The vWF unfolding model was then verified in several microfluidic systems designed for inducing high-shear vWF-mediated thrombosis and screening for von Willebrand Disease. The model predicted a high concentration of stretched vWF in key regions where occlusive thrombosis was observed experimentally. Strong unfolding caused by the extensional flow was limited to the center axis or middle plane of the channels, whereas vWF unfolding near the channel walls relied upon the shear tumbling mechanism. The continuum model of vWF unfolding presented in this work can be employed in numerical simulations of vWF-mediated thrombosis or vWF degradation in complex geometries. However, extending the model to 3-D arbitrary flows and turbulent flows will pose considerable challenges.
[ { "created": "Thu, 5 Aug 2021 17:18:47 GMT", "version": "v1" }, { "created": "Wed, 11 Aug 2021 16:52:46 GMT", "version": "v2" } ]
2021-08-19
[ [ "Zhussupbekov", "Mansur", "" ], [ "Rojano", "Rodrigo Mendez", "" ], [ "Wu", "Wei-Tao", "" ], [ "Massoudi", "Mehrdad", "" ], [ "Antaki", "James F.", "" ] ]
von Willebrand Factor is a mechano-sensitive protein circulating in blood that mediates platelet adhesion to subendothelial collagen and platelet aggregation at high shear rates. Its hemostatic function and thrombogenic effect, as well as its susceptibility to enzymatic cleavage, are regulated by a conformational change from a collapsed globular state to a stretched state. Therefore, it is essential to account for the conformation of the vWF multimers when modeling vWF-mediated thrombosis or vWF degradation. We introduce a continuum model of vWF unfolding that is developed within the framework of our multi-constituent model of platelet-mediated thrombosis. The model considers two interconvertible vWF species corresponding to the collapsed and stretched conformational states. vWF unfolding takes place via two regimes: tumbling in simple shear and strong unfolding in flows with a dominant extensional component. These two regimes were demonstrated in a Couette flow between parallel plates and an extensional flow in a cross-slot geometry. The vWF unfolding model was then verified in several microfluidic systems designed for inducing high-shear vWF-mediated thrombosis and screening for von Willebrand Disease. The model predicted a high concentration of stretched vWF in key regions where occlusive thrombosis was observed experimentally. Strong unfolding caused by the extensional flow was limited to the center axis or middle plane of the channels, whereas vWF unfolding near the channel walls relied upon the shear tumbling mechanism. The continuum model of vWF unfolding presented in this work can be employed in numerical simulations of vWF-mediated thrombosis or vWF degradation in complex geometries. However, extending the model to 3-D arbitrary flows and turbulent flows will pose considerable challenges.
1310.1801
Peter beim Graben
Peter beim Graben and Serafim Rodrigues
On the electrodynamics of neural networks
31 pages; 3 figures; to appear in "Neural Fields: Theory and Applications", edited by S. Coombes, P. beim Graben, R. Potthast, and J. J. Wright; Springer Verlag Berlin (2014)
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a microscopic approach for the coupling of cortical activity, as resulting from proper dipole currents of pyramidal neurons, to the electromagnetic field in extracellular fluid in presence of diffusion and Ohmic conduction. Starting from a full-fledged three-compartment model of a single pyramidal neuron, including shunting and dendritic propagation, we derive an observation model for dendritic dipole currents in extracellular space and thereby for the dendritic field potential that contributes to the local field potential of a neural population. Under reasonable simplifications, we then derive a leaky integrate-and-fire model for the dynamics of a neural network, which facilitates comparison with existing neural network and observation models. In particular, we compare our results with a related model by means of numerical simulations. Performing a continuum limit, neural activity becomes represented by a neural field equation, while an observation model for electric field potentials is obtained from the interaction of cortical dipole currents with charge density in non-resistive extracellular space as described by the Nernst-Planck equation. Our work consistently satisfies the widespread dipole assumption discussed in the neuroscientific literature.
[ { "created": "Mon, 7 Oct 2013 14:37:56 GMT", "version": "v1" } ]
2013-10-08
[ [ "Graben", "Peter beim", "" ], [ "Rodrigues", "Serafim", "" ] ]
We present a microscopic approach for the coupling of cortical activity, as resulting from proper dipole currents of pyramidal neurons, to the electromagnetic field in extracellular fluid in presence of diffusion and Ohmic conduction. Starting from a full-fledged three-compartment model of a single pyramidal neuron, including shunting and dendritic propagation, we derive an observation model for dendritic dipole currents in extracellular space and thereby for the dendritic field potential that contributes to the local field potential of a neural population. Under reasonable simplifications, we then derive a leaky integrate-and-fire model for the dynamics of a neural network, which facilitates comparison with existing neural network and observation models. In particular, we compare our results with a related model by means of numerical simulations. Performing a continuum limit, neural activity becomes represented by a neural field equation, while an observation model for electric field potentials is obtained from the interaction of cortical dipole currents with charge density in non-resistive extracellular space as described by the Nernst-Planck equation. Our work consistently satisfies the widespread dipole assumption discussed in the neuroscientific literature.
1104.1355
Cedric Ginestet
Cedric E. Ginestet and Andrew Simmons
Recursive Shortest Path Algorithm with Application to Density-integration of Weighted Graphs
null
null
null
null
q-bio.MN cs.DS q-bio.NC stat.CO
http://creativecommons.org/licenses/by-nc-sa/3.0/
Graph theory is increasingly utilised in genetics, proteomics and neuroimaging. In such fields, the data of interest generally constitute weighted graphs. Analysis of such weighted graphs often requires the integration of topological metrics with respect to the density of the graph. Here, density refers to the proportion of edges present in the graph. When topological metrics based on shortest paths are of interest, such density-integration usually necessitates the iterative application of Dijkstra's algorithm in order to compute the shortest path matrix at each density level. In this short note, we describe a recursive shortest path algorithm based on single-edge updating, which replaces the need for the iterative use of Dijkstra's algorithm. Our proposed procedure is based on pairs of breadth-first searches around each of the vertices incident to the edge added at each recursion. An algorithmic analysis of the proposed technique is provided. When the graph of interest is coded as an adjacency list, our algorithm can be shown to be more efficient than an iterative use of Dijkstra's algorithm.
[ { "created": "Thu, 7 Apr 2011 15:23:24 GMT", "version": "v1" } ]
2011-06-09
[ [ "Ginestet", "Cedric E.", "" ], [ "Simmons", "Andrew", "" ] ]
Graph theory is increasingly utilised in genetics, proteomics and neuroimaging. In such fields, the data of interest generally constitute weighted graphs. Analysis of such weighted graphs often requires the integration of topological metrics with respect to the density of the graph. Here, density refers to the proportion of edges present in the graph. When topological metrics based on shortest paths are of interest, such density-integration usually necessitates the iterative application of Dijkstra's algorithm in order to compute the shortest path matrix at each density level. In this short note, we describe a recursive shortest path algorithm based on single-edge updating, which replaces the need for the iterative use of Dijkstra's algorithm. Our proposed procedure is based on pairs of breadth-first searches around each of the vertices incident to the edge added at each recursion. An algorithmic analysis of the proposed technique is provided. When the graph of interest is coded as an adjacency list, our algorithm can be shown to be more efficient than an iterative use of Dijkstra's algorithm.
2111.02343
Fan Bai
Fan Bai
An age-of-infection model with both symptomatic and asymptomatic infections
22 pages, 10 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
We formulate a general age-of-infection epidemic model with two pathways: symptomatic infections and asymptomatic infections. We then calculate the basic reproduction number $\mathcal{R}_0$ and establish the final size relation. It is shown that the ratio of the accumulated counts of symptomatic and asymptomatic patients is determined by the symptomatic ratio $f$, which is defined as the probability of eventually becoming symptomatic after being infected. We also formulate and study a general age-of-infection model with disease deaths and with two infection pathways. The final size relation is investigated, and upper and lower bounds for the final epidemic size are given. Several numerical simulations are performed to verify the analytical results.
[ { "created": "Wed, 3 Nov 2021 16:52:51 GMT", "version": "v1" } ]
2021-11-04
[ [ "Bai", "Fan", "" ] ]
We formulate a general age-of-infection epidemic model with two pathways: symptomatic infections and asymptomatic infections. We then calculate the basic reproduction number $\mathcal{R}_0$ and establish the final size relation. It is shown that the ratio of the accumulated counts of symptomatic and asymptomatic patients is determined by the symptomatic ratio $f$, which is defined as the probability of eventually becoming symptomatic after being infected. We also formulate and study a general age-of-infection model with disease deaths and with two infection pathways. The final size relation is investigated, and upper and lower bounds for the final epidemic size are given. Several numerical simulations are performed to verify the analytical results.
1411.3191
Michael Schwemmer
Michael A. Schwemmer, Adrienne L. Fairhall, Sophie Den\'eve, and Eric T. Shea-Brown
Constructing precisely computing networks with biophysical spiking neurons
46 pages, 14 figures, Updated to final version
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Den\'eve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output. By postulating that each neuron fires in order to reduce the error in the network's output, it was demonstrated that linear computations can be carried out by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size, or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation.
[ { "created": "Wed, 12 Nov 2014 15:47:56 GMT", "version": "v1" }, { "created": "Thu, 16 Jul 2015 16:49:02 GMT", "version": "v2" } ]
2015-07-17
[ [ "Schwemmer", "Michael A.", "" ], [ "Fairhall", "Adrienne L.", "" ], [ "Denéve", "Sophie", "" ], [ "Shea-Brown", "Eric T.", "" ] ]
While spike timing has been shown to carry detailed stimulus information at the sensory periphery, its possible role in network computation is less clear. Most models of computation by neural networks are based on population firing rates. In equivalent spiking implementations, firing is assumed to be random such that averaging across populations of neurons recovers the rate-based approach. Recently, however, Den\'eve and colleagues have suggested that the spiking behavior of neurons may be fundamental to how neuronal networks compute, with precise spike timing determined by each neuron's contribution to producing the desired output. By postulating that each neuron fires in order to reduce the error in the network's output, it was demonstrated that linear computations can be carried out by networks of integrate-and-fire neurons that communicate through instantaneous synapses. This left open, however, the possibility that realistic networks, with conductance-based neurons with subthreshold nonlinearity and the slower timescales of biophysical synapses, may not fit into this framework. Here, we show how the spike-based approach can be extended to biophysically plausible networks. We then show that our network reproduces a number of key features of cortical networks including irregular and Poisson-like spike times and a tight balance between excitation and inhibition. Lastly, we discuss how the behavior of our model scales with network size, or with the number of neurons "recorded" from a larger computing network. These results significantly increase the biological plausibility of the spike-based approach to network computation.
1902.00951
Jan-Hendrik Schleimer
Jan-Hendrik Schleimer, Janina Hesse, Susana Andrea Contreras, Susanne Schreiber
Firing statistics in the bistable regime of neurons with homoclinic spike generation
null
Phys. Rev. E 103, 012407 (2021)
10.1103/PhysRevE.103.012407
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Neuronal voltage dynamics of regularly firing neurons typically has one stable attractor: either a fixed point (like in the subthreshold regime) or a limit cycle that defines the tonic firing of action potentials (in the suprathreshold regime). In two of the three spike onset bifurcation sequences that are known to give rise to all-or-none type action potentials, however, the resting-state fixpoint and limit cycle spiking can coexist in an intermediate regime, resulting in bistable dynamics. Here, noise can induce switches between the attractors, i.e., between rest and spiking, and thus increase the variability of the spike train compared to neurons with only one stable attractor. Qualitative features of the resulting spike statistics depend on the spike onset bifurcations. This study focuses on the creation of the spiking limit cycle via the saddle-homoclinic orbit (HOM) bifurcation and derives interspike interval (ISI) densities for a conductance-based neuron model in the bistable regime. The ISI densities of bistable homoclinic neurons are found to be unimodal yet distinct from the inverse Gaussian distribution associated with the saddle-node-on-invariant-cycle (SNIC) bifurcation. It is demonstrated that for the HOM bifurcation the transition between rest and spiking is mainly determined along the downstroke of the action potential -- a dynamical feature that is not captured by the commonly used reset neuron models. The deduced spike statistics can help to identify HOM dynamics in experimental data.
[ { "created": "Sun, 3 Feb 2019 18:42:04 GMT", "version": "v1" }, { "created": "Tue, 6 Oct 2020 18:09:10 GMT", "version": "v2" } ]
2021-01-20
[ [ "Schleimer", "Jan-Hendrik", "" ], [ "Hesse", "Janina", "" ], [ "Contreras", "Susana Andrea", "" ], [ "Schreiber", "Susanne", "" ] ]
Neuronal voltage dynamics of regularly firing neurons typically has one stable attractor: either a fixed point (like in the subthreshold regime) or a limit cycle that defines the tonic firing of action potentials (in the suprathreshold regime). In two of the three spike onset bifurcation sequences that are known to give rise to all-or-none type action potentials, however, the resting-state fixpoint and limit cycle spiking can coexist in an intermediate regime, resulting in bistable dynamics. Here, noise can induce switches between the attractors, i.e., between rest and spiking, and thus increase the variability of the spike train compared to neurons with only one stable attractor. Qualitative features of the resulting spike statistics depend on the spike onset bifurcations. This study focuses on the creation of the spiking limit cycle via the saddle-homoclinic orbit (HOM) bifurcation and derives interspike interval (ISI) densities for a conductance-based neuron model in the bistable regime. The ISI densities of bistable homoclinic neurons are found to be unimodal yet distinct from the inverse Gaussian distribution associated with the saddle-node-on-invariant-cycle (SNIC) bifurcation. It is demonstrated that for the HOM bifurcation the transition between rest and spiking is mainly determined along the downstroke of the action potential -- a dynamical feature that is not captured by the commonly used reset neuron models. The deduced spike statistics can help to identify HOM dynamics in experimental data.
1701.07879
Gerard Rinkus
Gerard Rinkus
A Radically New Theory of how the Brain Represents and Computes with Probabilities
33 pages, 10 figures - Sec. explaining single cell tuning fns as artifacts of embedding SDRs in superposition removed (for future paper) - Clarified that a given SDR code represents the whole likelihood distribution over stored hypotheses at a coarsely-ranked level of fidelity (Submitted for review)
null
null
null
q-bio.NC cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The brain is believed to implement probabilistic reasoning and to represent information via population, or distributed, coding. Most previous population-based probabilistic (PPC) theories share several basic properties: 1) continuous-valued neurons; 2) fully(densely)-distributed codes, i.e., all(most) units participate in every code; 3) graded synapses; 4) rate coding; 5) units have innate unimodal tuning functions (TFs); 6) intrinsically noisy units; and 7) noise/correlation is considered harmful. We present a radically different theory that assumes: 1) binary units; 2) only a small subset of units, i.e., a sparse distributed representation (SDR) (cell assembly), comprises any individual code; 3) binary synapses; 4) signaling formally requires only single (i.e., first) spikes; 5) units initially have completely flat TFs (all weights zero); 6) units are far less intrinsically noisy than traditionally thought; rather 7) noise is a resource generated/used to cause similar inputs to map to similar codes, controlling a tradeoff between storage capacity and embedding the input space statistics in the pattern of intersections over stored codes, epiphenomenally determining correlation patterns across neurons. The theory, Sparsey, was introduced 20+ years ago as a canonical cortical circuit/algorithm model achieving efficient sequence learning/recognition, but not elaborated as an alternative to PPC theories. Here, we show that: a) the active SDR simultaneously represents both the most similar/likely input and the entire (coarsely-ranked) similarity likelihood/distribution over all stored inputs (hypotheses); and b) given an input, the SDR code selection algorithm, which underlies both learning and inference, updates both the most likely hypothesis and the entire likelihood distribution (cf. belief update) with a number of steps that remains constant as the number of stored items increases.
[ { "created": "Thu, 26 Jan 2017 21:16:32 GMT", "version": "v1" }, { "created": "Mon, 30 Jan 2017 03:37:42 GMT", "version": "v2" }, { "created": "Thu, 23 Feb 2017 19:16:39 GMT", "version": "v3" }, { "created": "Wed, 21 Feb 2018 23:00:01 GMT", "version": "v4" } ]
2018-02-23
[ [ "Rinkus", "Gerard", "" ] ]
The brain is believed to implement probabilistic reasoning and to represent information via population, or distributed, coding. Most previous population-based probabilistic (PPC) theories share several basic properties: 1) continuous-valued neurons; 2) fully(densely)-distributed codes, i.e., all(most) units participate in every code; 3) graded synapses; 4) rate coding; 5) units have innate unimodal tuning functions (TFs); 6) intrinsically noisy units; and 7) noise/correlation is considered harmful. We present a radically different theory that assumes: 1) binary units; 2) only a small subset of units, i.e., a sparse distributed representation (SDR) (cell assembly), comprises any individual code; 3) binary synapses; 4) signaling formally requires only single (i.e., first) spikes; 5) units initially have completely flat TFs (all weights zero); 6) units are far less intrinsically noisy than traditionally thought; rather 7) noise is a resource generated/used to cause similar inputs to map to similar codes, controlling a tradeoff between storage capacity and embedding the input space statistics in the pattern of intersections over stored codes, epiphenomenally determining correlation patterns across neurons. The theory, Sparsey, was introduced 20+ years ago as a canonical cortical circuit/algorithm model achieving efficient sequence learning/recognition, but not elaborated as an alternative to PPC theories. Here, we show that: a) the active SDR simultaneously represents both the most similar/likely input and the entire (coarsely-ranked) similarity likelihood/distribution over all stored inputs (hypotheses); and b) given an input, the SDR code selection algorithm, which underlies both learning and inference, updates both the most likely hypothesis and the entire likelihood distribution (cf. belief update) with a number of steps that remains constant as the number of stored items increases.
1609.00185
Pierre Magal
Pierre Magal and Zhengyang Zhang
Competition for light in forest population dynamics: from computer simulator to mathematical model
null
null
null
null
q-bio.PE math.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article we build a mathematical model for forest growth and compare it with a computer forest simulator named SORTIE. The main ingredient taken into account in both models is the competition for light between trees. The parameters of the mathematical model are estimated using the SORTIE model, with the parameter values of SORTIE set to those previously evaluated for the Great Mountain Forest in the USA. We construct a size-structured population dynamics model with one and two species and with spatial structure.
[ { "created": "Thu, 1 Sep 2016 11:10:09 GMT", "version": "v1" } ]
2016-09-02
[ [ "Magal", "Pierre", "" ], [ "Zhang", "Zhengyang", "" ] ]
In this article we build a mathematical model for forest growth and compare it with a computer forest simulator named SORTIE. The main ingredient taken into account in both models is the competition for light between trees. The parameters of the mathematical model are estimated using the SORTIE model, with the parameter values of SORTIE set to those previously evaluated for the Great Mountain Forest in the USA. We construct a size-structured population dynamics model with one and two species and with spatial structure.
2107.00806
Zitong Wang
Zitong Jerry Wang, Matt Thomson
Signaling receptor localization maximizes cellular information acquisition in spatially-structured, natural environments
null
null
null
null
q-bio.CB
http://creativecommons.org/licenses/by-nc-nd/4.0/
Cells in natural environments like tissue or soil sense and respond to extracellular ligands with intricately structured and non-monotonic spatial distributions that are sculpted by processes such as fluid flow and substrate adhesion. Nevertheless, traditional approaches to studying cell sensing assume signals are either uniform or monotonic, neglecting spatial structures of natural environments. In this work, we show that spatial sensing and navigation can be optimized by adapting the spatial organization of signaling pathways to the spatial structure of the environment. By viewing cell surface receptors as a sensor network, we develop an information theoretic framework for computing the optimal spatial organization of a sensing system for a given spatial signaling environment. Applying the framework to simulated environments, we find that spatial receptor localization maximizes information acquisition in many natural contexts, including tissue and soil. Receptor localization extends naturally to produce a dynamic protocol for redistributing signaling receptors during cell navigation and can be implemented in a cell using a feedback scheme. In a simulated tissue environment, dynamic receptor localization boosts navigation efficiency by 30-fold. Broadly, our framework readily adapts to studying how the spatial organization of signaling components other than receptors can be modulated to improve cellular information processing.
[ { "created": "Fri, 2 Jul 2021 02:45:08 GMT", "version": "v1" } ]
2021-07-05
[ [ "Wang", "Zitong Jerry", "" ], [ "Thomson", "Matt", "" ] ]
Cells in natural environments like tissue or soil sense and respond to extracellular ligands with intricately structured and non-monotonic spatial distributions that are sculpted by processes such as fluid flow and substrate adhesion. Nevertheless, traditional approaches to studying cell sensing assume signals are either uniform or monotonic, neglecting spatial structures of natural environments. In this work, we show that spatial sensing and navigation can be optimized by adapting the spatial organization of signaling pathways to the spatial structure of the environment. By viewing cell surface receptors as a sensor network, we develop an information theoretic framework for computing the optimal spatial organization of a sensing system for a given spatial signaling environment. Applying the framework to simulated environments, we find that spatial receptor localization maximizes information acquisition in many natural contexts, including tissue and soil. Receptor localization extends naturally to produce a dynamic protocol for redistributing signaling receptors during cell navigation and can be implemented in a cell using a feedback scheme. In a simulated tissue environment, dynamic receptor localization boosts navigation efficiency by 30-fold. Broadly, our framework readily adapts to studying how the spatial organization of signaling components other than receptors can be modulated to improve cellular information processing.
2104.00684
Scott Olesen
Scott W. Olesen, Maxim Imakaev, Claire Duvallet
Defining the lead time of wastewater-based epidemiology for COVID-19
null
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-sa/4.0/
Individuals infected with SARS-CoV-2, the virus that causes COVID-19, may shed the virus in stool before developing symptoms, suggesting that measurements of SARS-CoV-2 concentrations in wastewater could be a "leading indicator" of COVID-19 prevalence. Multiple studies have corroborated the leading indicator concept by showing that the correlation between wastewater measurements and COVID-19 case counts is maximized when case counts are lagged. However, the meaning of "leading indicator" will depend on the specific application of wastewater-based epidemiology, and the correlation analysis is not relevant for all applications. In fact, the quantification of a leading indicator will depend on epidemiological, biological, and health systems factors. Thus, there is no single "lead time" for wastewater-based COVID-19 monitoring. To illustrate this complexity, we enumerate three different applications of wastewater-based epidemiology for COVID-19: a qualitative "early warning" system; an independent, quantitative estimate of disease prevalence; and a quantitative alert of bursts of disease incidence. The leading indicator concept has different definitions and utility in each application.
[ { "created": "Thu, 1 Apr 2021 18:48:59 GMT", "version": "v1" }, { "created": "Wed, 23 Jun 2021 18:16:53 GMT", "version": "v2" } ]
2021-06-25
[ [ "Olesen", "Scott W.", "" ], [ "Imakaev", "Maxim", "" ], [ "Duvallet", "Claire", "" ] ]
Individuals infected with SARS-CoV-2, the virus that causes COVID-19, may shed the virus in stool before developing symptoms, suggesting that measurements of SARS-CoV-2 concentrations in wastewater could be a "leading indicator" of COVID-19 prevalence. Multiple studies have corroborated the leading indicator concept by showing that the correlation between wastewater measurements and COVID-19 case counts is maximized when case counts are lagged. However, the meaning of "leading indicator" will depend on the specific application of wastewater-based epidemiology, and the correlation analysis is not relevant for all applications. In fact, the quantification of a leading indicator will depend on epidemiological, biological, and health systems factors. Thus, there is no single "lead time" for wastewater-based COVID-19 monitoring. To illustrate this complexity, we enumerate three different applications of wastewater-based epidemiology for COVID-19: a qualitative "early warning" system; an independent, quantitative estimate of disease prevalence; and a quantitative alert of bursts of disease incidence. The leading indicator concept has different definitions and utility in each application.
2202.03080
Govind Kaigala
Iago Pereiro, Anna Fomitcheva Khartchenko, Robert D. Lovchik and Govind V. Kaigala
Advection-enhanced kinetics in microtiter plates for improved surface assay quantitation and multiplexing capabilities
12 pages, 5 figures
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Surface assays such as ELISA are pervasive in clinics and research and predominantly standardized in microtiter plates (MTP). MTPs provide many advantages but are often detrimental to surface assay efficiency due to inherent mass transport limitations. Microscale flows can overcome these and largely improve assay kinetics. However, the disruptive nature of microfluidics with existing labware and protocols has narrowed its transformative potential. We present WellProbe, a novel microfluidic concept compatible with MTPs. With it, we show that immunoassays become more sensitive at low concentrations (up to 9-fold signal improvement in 12-fold less time), richer in information with 3-4 different kinetic conditions, and can be used to estimate kinetic parameters, minimize washing steps and non-specific binding, and identify compromised results. We further multiplex single-well assays combining WellProbe's kinetic regions with tailored microarrays. Finally, we demonstrate our system in the context of immunoglobulin subclass evaluation, increasingly regarded as clinically relevant.
[ { "created": "Mon, 7 Feb 2022 11:27:29 GMT", "version": "v1" } ]
2022-02-08
[ [ "Pereiro", "Iago", "" ], [ "Khartchenko", "Anna Fomitcheva", "" ], [ "Lovchik", "Robert D.", "" ], [ "Kaigala", "Govind V.", "" ] ]
Surface assays such as ELISA are pervasive in clinics and research and predominantly standardized in microtiter plates (MTP). MTPs provide many advantages but are often detrimental to surface assay efficiency due to inherent mass transport limitations. Microscale flows can overcome these and largely improve assay kinetics. However, the disruptive nature of microfluidics with existing labware and protocols has narrowed its transformative potential. We present WellProbe, a novel microfluidic concept compatible with MTPs. With it, we show that immunoassays become more sensitive at low concentrations (up to 9-fold signal improvement in 12-fold less time), richer in information with 3-4 different kinetic conditions, and can be used to estimate kinetic parameters, minimize washing steps and non-specific binding, and identify compromised results. We further multiplex single-well assays combining WellProbe's kinetic regions with tailored microarrays. Finally, we demonstrate our system in the context of immunoglobulin subclass evaluation, increasingly regarded as clinically relevant.
1303.0013
Pascal Grange
Pascal Grange, Michael Hawrylycz, Partha P. Mitra
Cell-type-specific microarray data and the Allen atlas: quantitative analysis of brain-wide patterns of correlation and density
V1: 88 pages, 37 figures, 43 tables; V2: 137 pages, new section discussing different fitting panels
null
null
null
q-bio.NC q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Allen Atlas of the adult mouse brain is used to estimate the region-specificity of 64 cell types whose transcriptional profile in the mouse brain has been measured in microarray experiments. We systematically analyze the preliminary results presented in [arXiv:1111.6217], using the techniques implemented in the Brain Gene Expression Analysis toolbox. In particular, for each cell-type-specific sample in the study, we compute a brain-wide correlation profile to the Allen Atlas, and estimate a brain-wide density profile by solving a quadratic optimization problem at each voxel in the mouse brain. We characterize the neuroanatomical properties of the correlation and density profiles by ranking the regions of the left hemisphere delineated in the Allen Reference Atlas. We compare these rankings to prior biological knowledge of the brain region from which the cell-type-specific sample was extracted.
[ { "created": "Thu, 28 Feb 2013 21:02:36 GMT", "version": "v1" }, { "created": "Sat, 12 Oct 2013 01:45:57 GMT", "version": "v2" } ]
2013-10-15
[ [ "Grange", "Pascal", "" ], [ "Hawrylycz", "Michael", "" ], [ "Mitra", "Partha P.", "" ] ]
The Allen Atlas of the adult mouse brain is used to estimate the region-specificity of 64 cell types whose transcriptional profile in the mouse brain has been measured in microarray experiments. We systematically analyze the preliminary results presented in [arXiv:1111.6217], using the techniques implemented in the Brain Gene Expression Analysis toolbox. In particular, for each cell-type-specific sample in the study, we compute a brain-wide correlation profile to the Allen Atlas, and estimate a brain-wide density profile by solving a quadratic optimization problem at each voxel in the mouse brain. We characterize the neuroanatomical properties of the correlation and density profiles by ranking the regions of the left hemisphere delineated in the Allen Reference Atlas. We compare these rankings to prior biological knowledge of the brain region from which the cell-type-specific sample was extracted.
1109.1221
Antti Niemi
Andrei Krokhotin, Antti J. Niemi
Protein Regge Trajectories, Phase Coexistence and Physics of Alzheimer's Disease
4 figures
null
null
null
q-bio.BM cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Alzheimer's disease causes severe neurodegeneration in the brain that leads to certain death. The defining factor is the formation of extracellular senile amyloid plaques in the brain. However, therapeutic approaches to remove them have not been effective in humans, and so our understanding of the cause of Alzheimer's disease remains incomplete. Here we investigate physical processes that might relate to its onset. Instead of the extracellular amyloid, we scrutinize the intracellular domain of its precursor protein. We argue for a phenomenon that has never before been discussed in the context of polymer physics: Like ice and water together, the intracellular domain of the amyloid precursor protein forms a state of phase coexistence with another protein. This leads to an inherent instability that could well be among the missing pieces in the puzzle of Alzheimer's disease.
[ { "created": "Tue, 6 Sep 2011 16:20:26 GMT", "version": "v1" } ]
2011-09-07
[ [ "Krokhotin", "Andrei", "" ], [ "Niemi", "Antti J.", "" ] ]
Alzheimer's disease causes severe neurodegeneration in the brain that leads to certain death. The defining factor is the formation of extracellular senile amyloid plaques in the brain. However, therapeutic approaches to remove them have not been effective in humans, and so our understanding of the cause of Alzheimer's disease remains incomplete. Here we investigate physical processes that might relate to its onset. Instead of the extracellular amyloid, we scrutinize the intracellular domain of its precursor protein. We argue for a phenomenon that has never before been discussed in the context of polymer physics: Like ice and water together, the intracellular domain of the amyloid precursor protein forms a state of phase coexistence with another protein. This leads to an inherent instability that could well be among the missing pieces in the puzzle of Alzheimer's disease.
2407.13892
Michael Fuchs
Michael Fuchs and Bernhard Gittenberger
Sackin Indices for Labeled and Unlabeled Classes of Galled Trees
Dedicated to Michael Drmota on the occasion of his 60th birthday. 25 pages
null
null
null
q-bio.PE math.CO
http://creativecommons.org/licenses/by/4.0/
The Sackin index is an important measure for the balance of phylogenetic trees. We investigate two extensions of the Sackin index to the class of galled trees and two of its subclasses (simplex galled trees and normal galled trees) where we consider both labeled and unlabeled galled trees. In all cases, we show that the mean of the Sackin index for a network which is uniformly sampled from its class is asymptotic to $\mu n^{3/2}$ for an explicit constant $\mu$. In addition, we show that the scaled Sackin index converges weakly and with all its moments to the Airy distribution.
[ { "created": "Thu, 18 Jul 2024 20:40:10 GMT", "version": "v1" } ]
2024-07-22
[ [ "Fuchs", "Michael", "" ], [ "Gittenberger", "Bernhard", "" ] ]
The Sackin index is an important measure for the balance of phylogenetic trees. We investigate two extensions of the Sackin index to the class of galled trees and two of its subclasses (simplex galled trees and normal galled trees) where we consider both labeled and unlabeled galled trees. In all cases, we show that the mean of the Sackin index for a network which is uniformly sampled from its class is asymptotic to $\mu n^{3/2}$ for an explicit constant $\mu$. In addition, we show that the scaled Sackin index converges weakly and with all its moments to the Airy distribution.
q-bio/0701054
Georgy Karev
Georgy P. Karev
Inhomogeneous maps and mathematical theory of selection
35 pages, 5 figures; submitted to JDEA
null
null
null
q-bio.PE q-bio.QM
null
In this paper we develop a theory of general selection systems with discrete time and explore the evolution of selection systems, in particular, inhomogeneous populations. We show that the knowledge of the initial distribution of the selection system allows us to determine explicitly the system distribution at the entire time interval. All statistical characteristics of interest, such as mean values of the fitness or any trait, can be predicted effectively for indefinite time, and these predictions dramatically depend on the initial distribution. The Fisher Fundamental theorem of natural selection (FTNS) and, more generally, the Price equations are the famous results of the mathematical selection theory. We show that the problem of dynamic insufficiency for the Price equations and for the FTNS can be resolved within the framework of selection systems. Effective formulas for solutions of the Price equations and for the FTNS are derived. Applications of the developed theory to some other problems of mathematical biology (dynamics of inhomogeneous logistic and Ricker model, selection in rotifer populations) are also given. Complex behavior of the total population size, the mean fitness (in contrast to the plain FTNS) and other traits is possible for inhomogeneous populations with density-dependent fitness. The temporal dynamics of these quantities can be investigated with the help of the suggested methods.
[ { "created": "Wed, 31 Jan 2007 18:52:52 GMT", "version": "v1" } ]
2007-05-23
[ [ "Karev", "Georgy P.", "" ] ]
In this paper we develop a theory of general selection systems with discrete time and explore the evolution of selection systems, in particular, inhomogeneous populations. We show that the knowledge of the initial distribution of the selection system allows us to determine explicitly the system distribution at the entire time interval. All statistical characteristics of interest, such as mean values of the fitness or any trait, can be predicted effectively for indefinite time, and these predictions dramatically depend on the initial distribution. The Fisher Fundamental theorem of natural selection (FTNS) and, more generally, the Price equations are the famous results of the mathematical selection theory. We show that the problem of dynamic insufficiency for the Price equations and for the FTNS can be resolved within the framework of selection systems. Effective formulas for solutions of the Price equations and for the FTNS are derived. Applications of the developed theory to some other problems of mathematical biology (dynamics of inhomogeneous logistic and Ricker model, selection in rotifer populations) are also given. Complex behavior of the total population size, the mean fitness (in contrast to the plain FTNS) and other traits is possible for inhomogeneous populations with density-dependent fitness. The temporal dynamics of these quantities can be investigated with the help of the suggested methods.
2406.18439
Thierry Mora
Gabriel Mahuas, Thomas Buffet, Olivier Marre, Ulisse Ferrari, Thierry Mora
Strong, but not weak, noise correlations are beneficial for population coding
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural correlations play a critical role in sensory information coding. They are of two kinds: signal correlations, when neurons have overlapping sensitivities, and noise correlations from network effects and shared noise. It is commonly thought that signal and noise correlations should have opposite signs to improve coding. However, experiments from early sensory systems and cortex typically show the opposite effect, with many pairs of neurons showing both types of correlations to be positive and large. Here, we develop a theory of information coding by correlated neurons which resolves this paradox. We show that noise correlations are always beneficial if they are strong enough. Extensive tests on retinal recordings under different visual stimuli confirm our predictions. Finally, using neuronal recordings and modeling, we show that for high dimensional stimuli noise correlations benefit the encoding of fine-grained details of visual stimuli, at the expense of large-scale features, which are already well encoded.
[ { "created": "Wed, 26 Jun 2024 15:43:13 GMT", "version": "v1" } ]
2024-06-27
[ [ "Mahuas", "Gabriel", "" ], [ "Buffet", "Thomas", "" ], [ "Marre", "Olivier", "" ], [ "Ferrari", "Ulisse", "" ], [ "Mora", "Thierry", "" ] ]
Neural correlations play a critical role in sensory information coding. They are of two kinds: signal correlations, when neurons have overlapping sensitivities, and noise correlations from network effects and shared noise. It is commonly thought that signal and noise correlations should have opposite signs to improve coding. However, experiments from early sensory systems and cortex typically show the opposite effect, with many pairs of neurons showing both types of correlations to be positive and large. Here, we develop a theory of information coding by correlated neurons which resolves this paradox. We show that noise correlations are always beneficial if they are strong enough. Extensive tests on retinal recordings under different visual stimuli confirm our predictions. Finally, using neuronal recordings and modeling, we show that for high dimensional stimuli noise correlations benefit the encoding of fine-grained details of visual stimuli, at the expense of large-scale features, which are already well encoded.
2206.13910
Sinnu Thomas
Edilson F. Arruda, Tarun Sharma, Rodrigo e A. Alexandre, Sinnu Susan Thomas
Epidemic Control Modeling using Parsimonious Models and Markov Decision Processes
null
null
null
null
q-bio.PE cs.LG math.OC physics.soc-ph
http://creativecommons.org/licenses/by/4.0/
Many countries have experienced at least two waves of the COVID-19 pandemic. The second wave is far more dangerous as distinct strains appear more harmful to human health, but it stems from the complacency about the first wave. This paper introduces a parsimonious yet representative stochastic epidemic model that simulates the uncertain spread of the disease regardless of the latency and recovery time distributions. We also propose a Markov decision process to seek an optimal trade-off between the usage of the healthcare system and the economic costs of an epidemic. We apply the model to COVID-19 data from New Delhi, India and simulate the epidemic spread with different policy review times. The results show that the optimal policy acts swiftly to curb the epidemic in the first wave, thus avoiding the collapse of the healthcare system and the future costs of posterior outbreaks. An analysis of the recent collapse of the healthcare system of India during the second COVID-19 wave suggests that many lives could have been preserved if swift mitigation was promoted after the first wave.
[ { "created": "Thu, 23 Jun 2022 12:45:13 GMT", "version": "v1" } ]
2022-06-29
[ [ "Arruda", "Edilson F.", "" ], [ "Sharma", "Tarun", "" ], [ "Alexandre", "Rodrigo e A.", "" ], [ "Thomas", "Sinnu Susan", "" ] ]
Many countries have experienced at least two waves of the COVID-19 pandemic. The second wave is far more dangerous as distinct strains appear more harmful to human health, but it stems from the complacency about the first wave. This paper introduces a parsimonious yet representative stochastic epidemic model that simulates the uncertain spread of the disease regardless of the latency and recovery time distributions. We also propose a Markov decision process to seek an optimal trade-off between the usage of the healthcare system and the economic costs of an epidemic. We apply the model to COVID-19 data from New Delhi, India and simulate the epidemic spread with different policy review times. The results show that the optimal policy acts swiftly to curb the epidemic in the first wave, thus avoiding the collapse of the healthcare system and the future costs of posterior outbreaks. An analysis of the recent collapse of the healthcare system of India during the second COVID-19 wave suggests that many lives could have been preserved if swift mitigation was promoted after the first wave.
1002.1054
Carsten Conradi
Carsten Conradi, Dietrich Flockerzi
Switching in mass action networks based on linear inequalities
in revision SIAM Journal on Applied Dynamical Systems
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many biochemical processes can successfully be described by dynamical systems allowing some form of switching when, depending on their initial conditions, solutions of the dynamical system end up in different regions of state space (associated with different biochemical functions). Switching is often realized by a bistable system (i.e. a dynamical system allowing two stable steady state solutions) and, in the majority of cases, bistability is established numerically. From our point of view this approach is too restrictive, as, on the one hand, due to predominant parameter uncertainty numerical methods are generally difficult to apply to realistic models originating in Systems Biology. On the other hand, switching already arises with the occurrence of a saddle type steady state (characterized by a Jacobian where exactly one eigenvalue is positive and the remaining eigenvalues have negative real part). Consequently we derive conditions based on linear inequalities that allow the analytic computation of states and parameters where the Jacobian derived from a mass action network has a defective zero eigenvalue so that -- under certain genericity conditions -- a saddle-node bifurcation occurs. Our conditions are applicable to general mass action networks involving at least one conservation relation, however, they are only sufficient (as infeasibility of linear inequalities does not exclude defective zero eigenvalues).
[ { "created": "Thu, 4 Feb 2010 18:22:15 GMT", "version": "v1" }, { "created": "Mon, 26 Sep 2011 09:56:37 GMT", "version": "v2" } ]
2011-09-27
[ [ "Conradi", "Carsten", "" ], [ "Flockerzi", "Dietrich", "" ] ]
Many biochemical processes can successfully be described by dynamical systems allowing some form of switching when, depending on their initial conditions, solutions of the dynamical system end up in different regions of state space (associated with different biochemical functions). Switching is often realized by a bistable system (i.e. a dynamical system allowing two stable steady state solutions) and, in the majority of cases, bistability is established numerically. From our point of view this approach is too restrictive, as, on the one hand, due to predominant parameter uncertainty numerical methods are generally difficult to apply to realistic models originating in Systems Biology. On the other hand, switching already arises with the occurrence of a saddle type steady state (characterized by a Jacobian where exactly one eigenvalue is positive and the remaining eigenvalues have negative real part). Consequently we derive conditions based on linear inequalities that allow the analytic computation of states and parameters where the Jacobian derived from a mass action network has a defective zero eigenvalue so that -- under certain genericity conditions -- a saddle-node bifurcation occurs. Our conditions are applicable to general mass action networks involving at least one conservation relation, however, they are only sufficient (as infeasibility of linear inequalities does not exclude defective zero eigenvalues).
1312.6209
Prasanna Bhogale
Prasanna M. Bhogale, Robin A. Sorg, Jan-Willem Veening, Johannes Berg
What makes the lac-pathway switch: identifying the fluctuations that trigger phenotype switching in gene regulatory systems
Version 2
Nucl. Acids Res. (13 October 2014) 42 (18): 11321-11328
10.1093/nar/gku839
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multistable gene regulatory systems sustain different levels of gene expression under identical external conditions. Such multistability is used to encode phenotypic states in processes including nutrient uptake and persistence in bacteria, fate selection in viral infection, cell cycle control, and development. Stochastic switching between different phenotypes can occur as the result of random fluctuations in molecular copy numbers of mRNA and proteins arising in transcription, translation, transport, and binding. However, which component of a pathway triggers such a transition is generally not known. By linking single-cell experiments on the lactose-uptake pathway in E. coli to molecular simulations, we devise a general method to pinpoint the particular fluctuation driving phenotype switching and apply this method to the transition between the uninduced and induced states of the lac genes. We find that the transition to the induced state is not caused only by the single event of lac-repressor unbinding, but depends crucially on the time period over which the repressor remains unbound from the lac-operon. We confirm this notion in strains with a high expression level of the repressor (leading to shorter periods over which the lac-operon remains unbound), which show a reduced switching rate. Our techniques apply to multi-stable gene regulatory systems in general and allow us to identify the molecular mechanisms behind stochastic transitions in gene regulatory circuits.
[ { "created": "Sat, 21 Dec 2013 06:05:22 GMT", "version": "v1" }, { "created": "Fri, 12 Sep 2014 14:41:34 GMT", "version": "v2" } ]
2014-12-05
[ [ "Bhogale", "Prasanna M.", "" ], [ "Sorg", "Robin A.", "" ], [ "Veening", "Jan-Willem", "" ], [ "Berg", "Johannes", "" ] ]
Multistable gene regulatory systems sustain different levels of gene expression under identical external conditions. Such multistability is used to encode phenotypic states in processes including nutrient uptake and persistence in bacteria, fate selection in viral infection, cell cycle control, and development. Stochastic switching between different phenotypes can occur as the result of random fluctuations in molecular copy numbers of mRNA and proteins arising in transcription, translation, transport, and binding. However, which component of a pathway triggers such a transition is generally not known. By linking single-cell experiments on the lactose-uptake pathway in E. coli to molecular simulations, we devise a general method to pinpoint the particular fluctuation driving phenotype switching and apply this method to the transition between the uninduced and induced states of the lac genes. We find that the transition to the induced state is not caused only by the single event of lac-repressor unbinding, but depends crucially on the time period over which the repressor remains unbound from the lac-operon. We confirm this notion in strains with a high expression level of the repressor (leading to shorter periods over which the lac-operon remains unbound), which show a reduced switching rate. Our techniques apply to multi-stable gene regulatory systems in general and allow us to identify the molecular mechanisms behind stochastic transitions in gene regulatory circuits.
1403.1920
Alan Stocker
Alexander Tank and Alan A. Stocker
Biased perception leads to biased action: Validating a Bayesian model of interception
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We tested whether and how biases in visual perception might influence motor actions. To do so, we designed an interception task in which subjects had to indicate the time when a moving object, whose trajectory was occluded, would reach a target area. Subjects made their judgments based on a brief display of the object's initial motion at a given starting point. Based on the known illusion that low contrast stimuli appear to move slower than high contrast ones, we predict that if perception directly influences motor actions, subjects would show delayed interception times for low contrast objects. In order to provide a more quantitative prediction, we developed a Bayesian model for the complete sensory-motor interception task. Using fit parameters for the prior and likelihood on visual speed from a previous study we were able to predict not only the expected interception times but also the precise characteristics of response variability. Psychophysical experiments confirm the model's predictions. Individual differences in subjects' timing responses can be accounted for by individual differences in the perceptual priors on visual speed. Taken together, our behavioral and model results show that biases in perception percolate downstream and cause action biases that are fully predictable.
[ { "created": "Sat, 8 Mar 2014 02:23:05 GMT", "version": "v1" } ]
2014-03-11
[ [ "Tank", "Alexander", "" ], [ "Stocker", "Alan A.", "" ] ]
We tested whether and how biases in visual perception might influence motor actions. To do so, we designed an interception task in which subjects had to indicate the time when a moving object, whose trajectory was occluded, would reach a target area. Subjects made their judgments based on a brief display of the object's initial motion at a given starting point. Based on the known illusion that low contrast stimuli appear to move slower than high contrast ones, we predict that if perception directly influences motor actions, subjects would show delayed interception times for low contrast objects. In order to provide a more quantitative prediction, we developed a Bayesian model for the complete sensory-motor interception task. Using fit parameters for the prior and likelihood on visual speed from a previous study we were able to predict not only the expected interception times but also the precise characteristics of response variability. Psychophysical experiments confirm the model's predictions. Individual differences in subjects' timing responses can be accounted for by individual differences in the perceptual priors on visual speed. Taken together, our behavioral and model results show that biases in perception percolate downstream and cause action biases that are fully predictable.
2209.13591
Alain Zemkoho
Hanyu Wang and Emmanuel K. Tsinda and Anthony J. Dunn and Francis Chikweto and Nusreen Ahmed and Emanuela Pelosi and Alain B. Zemkoho
Deep learning forward and reverse primer design to detect SARS-CoV-2 emerging variants
6 figures and 2 tables
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Surges that have been observed at different periods in the number of COVID-19 cases are associated with the emergence of multiple SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) variants. The design of methods to support laboratory detection is crucial in the monitoring of these variants. Hence, in this paper, we develop a semi-automated method to design both forward and reverse primer sets to detect SARS-CoV-2 variants. To proceed, we train deep Convolutional Neural Networks (CNNs) to classify labelled SARS-CoV-2 variants and identify partial genomic features needed for the forward and reverse Polymerase Chain Reaction (PCR) primer design. Our proposed approach supplements existing ones while promoting the emerging concept of neural network assisted primer design for PCR. Our CNN model was trained using a database of SARS-CoV-2 full-length genomes from GISAID and tested on a separate dataset from NCBI, with 98\% accuracy for the classification of variants. This result is based on the development of three different methods of feature extraction, and the selected primer sequences for each SARS-CoV-2 variant detection (except Omicron) were present in more than 95\% of sequences in an independent set of 5000 same variant sequences, and below 5\% in other independent datasets with 5000 sequences of each variant. In total, we obtain 22 forward and reverse primer pairs with flexible length sizes (18-25 base pairs) with an expected amplicon length ranging between 42 and 3322 nucleotides. Besides the feature appearance, in-silico primer checks confirmed that the identified primer pairs are suitable for accurate SARS-CoV-2 variant detection by means of PCR tests.
[ { "created": "Sun, 25 Sep 2022 20:09:22 GMT", "version": "v1" } ]
2022-09-29
[ [ "Wang", "Hanyu", "" ], [ "Tsinda", "Emmanuel K.", "" ], [ "Dunn", "Anthony J.", "" ], [ "Chikweto", "Francis", "" ], [ "Ahmed", "Nusreen", "" ], [ "Pelosi", "Emanuela", "" ], [ "Zemkoho", "Alain B.", "" ]...
Surges that have been observed at different periods in the number of COVID-19 cases are associated with the emergence of multiple SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2) variants. The design of methods to support laboratory detection is crucial in the monitoring of these variants. Hence, in this paper, we develop a semi-automated method to design both forward and reverse primer sets to detect SARS-CoV-2 variants. To proceed, we train deep Convolutional Neural Networks (CNNs) to classify labelled SARS-CoV-2 variants and identify partial genomic features needed for the forward and reverse Polymerase Chain Reaction (PCR) primer design. Our proposed approach supplements existing ones while promoting the emerging concept of neural network assisted primer design for PCR. Our CNN model was trained using a database of SARS-CoV-2 full-length genomes from GISAID and tested on a separate dataset from NCBI, with 98\% accuracy for the classification of variants. This result is based on the development of three different methods of feature extraction, and the selected primer sequences for each SARS-CoV-2 variant detection (except Omicron) were present in more than 95\% of sequences in an independent set of 5000 same variant sequences, and below 5\% in other independent datasets with 5000 sequences of each variant. In total, we obtain 22 forward and reverse primer pairs with flexible length sizes (18-25 base pairs) with an expected amplicon length ranging between 42 and 3322 nucleotides. Besides the feature appearance, in-silico primer checks confirmed that the identified primer pairs are suitable for accurate SARS-CoV-2 variant detection by means of PCR tests.
1012.0062
Steffen Klaere
Steffen Klaere and Volkmar Liebscher
An algebraic analysis of the two state Markov model on tripod trees
32 pages, four figures
null
null
null
q-bio.PE math.AG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Methods of phylogenetic inference use increasingly complex models to generate trees from data. However, even simple models and their implications are not fully understood. Here, we investigate the two-state Markov model on a tripod tree, inferring conditions under which a given set of observations gives rise to such a model. This type of investigation has been undertaken before by several scientists from different fields of research. In contrast to other work, we fully analyse the model, presenting conditions under which one can infer a model from the observations, or at least obtain support for the tree-shaped interdependence of the leaves considered. We also present all conditions under which the results can be extended from tripod trees to quartet trees, a step necessary to reconstruct at least a topology. Apart from finding conditions under which such an extension works, we discuss example cases for which it does not.
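The object under study can be sketched directly: the joint leaf distribution generated by a two-state Markov model on a tripod tree, marginalizing the hidden root. The uniform root distribution and symmetric edge matrices below are illustrative choices, not the paper's general setting.

```python
import itertools

def edge_matrix(e):
    """Symmetric two-state transition matrix: flip with probability e."""
    return [[1.0 - e, e], [e, 1.0 - e]]

def tripod_leaf_distribution(pi, M1, M2, M3):
    """Joint distribution of the three leaf states of a tripod tree,
    summing over the two states of the unobserved root."""
    dist = {}
    for leaves in itertools.product((0, 1), repeat=3):
        dist[leaves] = sum(
            pi[r] * M1[r][leaves[0]] * M2[r][leaves[1]] * M3[r][leaves[2]]
            for r in (0, 1)
        )
    return dist

M = edge_matrix(0.1)
dist = tripod_leaf_distribution([0.5, 0.5], M, M, M)
```

The inference problem the abstract describes runs in the opposite direction: given such an eight-entry table, decide whether some root distribution and edge matrices could have produced it.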
[ { "created": "Tue, 30 Nov 2010 23:23:51 GMT", "version": "v1" }, { "created": "Sat, 18 Dec 2010 05:16:33 GMT", "version": "v2" }, { "created": "Sat, 3 Dec 2011 00:18:23 GMT", "version": "v3" } ]
2011-12-06
[ [ "Klaere", "Steffen", "" ], [ "Liebscher", "Volkmar", "" ] ]
1902.08470
Thomas R. Weikl
Francesco Bonazzi and Thomas R. Weikl
Membrane morphologies induced by arc-shaped scaffolds are determined by arc angle and coverage
16 pages, 6 figures + 8 supplementary figures, to appear in Biophysical Journal
null
10.1016/j.bpj.2019.02.017
null
q-bio.SC cond-mat.soft physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The intricate shapes of biological membranes such as tubules and membrane stacks are induced by proteins. In this article, we systematically investigate the membrane shapes induced by arc-shaped scaffolds such as proteins and protein complexes with coarse-grained modeling and simulations. We find that arc-shaped scaffolds induce membrane tubules at membrane coverages larger than a threshold of about 40%, irrespective of their arc angle. The membrane morphologies at intermediate coverages below this tubulation threshold, in contrast, strongly depend on the arc angle. Scaffolds with arc angles of about 60 degrees akin to N-BAR domains do not change the membrane shape at coverages below the tubulation threshold, while scaffolds with arc angles larger than about 120 degrees induce double-membrane stacks at intermediate coverages. The scaffolds stabilize the curved membrane edges that connect the membrane stacks, as suggested for complexes of reticulon proteins. Our results provide general insights into the determinants of membrane shaping by arc-shaped scaffolds.
[ { "created": "Fri, 22 Feb 2019 12:43:13 GMT", "version": "v1" } ]
2019-05-01
[ [ "Bonazzi", "Francesco", "" ], [ "Weikl", "Thomas R.", "" ] ]
0705.3724
Liu Quanxing
Quan-Xing Liu, Bai-Lian Li and Zhen Jin
Resonance and frequency-locking phenomena in spatially extended phytoplankton-zooplankton system with additive noise and periodic forces
Some typos are corrected, and some closely related references are added
J. Stat. Mech. (2008) P05011
10.1088/1742-5468/2008/05/P05011
null
q-bio.PE cond-mat.stat-mech nlin.PS q-bio.OT
null
In this paper, we present a spatial version of a phytoplankton-zooplankton model that includes several important factors, such as external periodic forces, noise, and diffusion processes. The spatially extended phytoplankton-zooplankton system is based on the original study by Scheffer [M. Scheffer, Fish and nutrients interplay determines algal biomass: a minimal model, Oikos \textbf{62} (1991) 271-282]. Our results show that the spatially extended system exhibits resonant patterns and frequency-locking phenomena. The system also shows that noise and external periodic forces play a constructive role in Scheffer's model: first, noise can enhance the oscillation of the phytoplankton species' density and form large clusters in space when the noise intensity lies within a certain interval. Second, the external periodic forces can induce 4:1 and 1:1 frequency locking and spatially homogeneous oscillations. Finally, resonant patterns are observed in the system when the spatial noise and external periodic forces are both turned on. Moreover, we find that the 4:1 frequency locking transforms into 1:1 frequency locking as the noise intensity increases. In addition to elucidating our results outside the domain of Turing instability, we provide further analysis of Turing linear stability with the help of numerical calculation using the Maple software. Significantly, oscillations are enhanced in the system when the noise term is present. These results indicate that oceanic plankton blooms may be partly due to the interplay between stochastic factors and external forces rather than to deterministic factors alone. They may also help us to understand the effects of the random fluctuations to which oceanic plankton blooms are inevitably subject.
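The interplay of noise and periodic forcing described here can be sketched with a generic stochastically forced predator-prey system integrated by the Euler-Maruyama method. The equations and parameter values below are a toy stand-in for illustration, not Scheffer's model as used in the paper.

```python
import math
import random

def simulate(steps=2000, dt=0.01, sigma=0.05, A=0.1, omega=0.5, seed=1):
    """Euler-Maruyama integration of a toy periodically forced, noisy
    phytoplankton (P) - zooplankton (Z) system; all parameters illustrative."""
    random.seed(seed)
    P, Z = 0.5, 0.5
    traj = []
    for k in range(steps):
        t = k * dt
        force = A * math.sin(omega * t)          # external periodic forcing
        dP = P * (1.0 - P) - P * Z / (P + 0.3) + force
        dZ = 0.5 * P * Z / (P + 0.3) - 0.2 * Z
        P += dP * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)  # additive noise
        Z += dZ * dt
        P = max(P, 0.0)  # densities stay non-negative
        Z = max(Z, 0.0)
        traj.append((P, Z))
    return traj

traj = simulate()
```

Frequency locking would be diagnosed from such trajectories by comparing the dominant response frequency of P against the forcing frequency omega.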
[ { "created": "Fri, 25 May 2007 11:38:18 GMT", "version": "v1" }, { "created": "Fri, 7 Dec 2007 08:30:41 GMT", "version": "v2" } ]
2008-05-23
[ [ "Liu", "Quan-Xing", "" ], [ "Li", "Bai-Lian", "" ], [ "Jin", "Zhen", "" ] ]
1512.04591
Jeffrey West
Jeffrey West, Zaki Hasnain, Jeremy Mason, Paul K. Newton
The Prisoner's dilemma as a cancer model
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tumor development is an evolutionary process in which a heterogeneous population of cells with differential growth capabilities competes for resources in order to gain a proliferative advantage. What are the minimal ingredients needed to recreate some of the emergent features of such a developing complex ecosystem? What is a tumor doing before we can detect it? We outline a mathematical model, driven by a stochastic Moran process, in which cancer cells and healthy cells compete for dominance in the population. Each is assigned payoffs according to a Prisoner's Dilemma evolutionary game in which the healthy cells are the cooperators and the cancer cells are the defectors. With point mutational dynamics, heredity, and a fitness landscape controlling birth and death rates, natural selection acts on the cell population, and simulated "cancer-like" features emerge, such as Gompertzian tumor growth driven by heterogeneity, the log-kill law, which (linearly) relates therapeutic dose density to the (log) probability of cancer cell survival, and the Norton-Simon hypothesis, which (linearly) relates tumor regression rates to tumor growth rates. We highlight the utility, clarity, and power that such models provide, despite (and because of) their simplicity and built-in assumptions.
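The core ingredients -- a stochastic Moran birth-death process with Prisoner's Dilemma payoffs, cooperators as healthy cells and defectors as cancer cells -- can be sketched as follows. Population size, payoff values, and the omission of mutation are simplifying assumptions, not the paper's parameterization.

```python
import random

# Standard Prisoner's Dilemma payoffs (T > R > P > S); values illustrative.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def fitness(pop, i):
    """Average payoff of individual i playing against everyone else."""
    me = pop[i]
    return sum(PAYOFF[(me, pop[j])] for j in range(len(pop)) if j != i) / (len(pop) - 1)

def moran_step(pop):
    """One birth-death step: fitness-proportional birth, uniform death."""
    w = [fitness(pop, i) for i in range(len(pop))]
    born = random.choices(range(len(pop)), weights=w)[0]
    dead = random.randrange(len(pop))
    pop = pop[:]
    pop[dead] = pop[born]
    return pop

random.seed(0)
pop = ["C"] * 9 + ["D"]  # one defector (cancer cell) invading cooperators
for _ in range(200):
    pop = moran_step(pop)
```

Because defectors earn the higher payoff in mixed interactions, selection typically drives them toward fixation, mirroring the competitive advantage of cancer cells in the model.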
[ { "created": "Mon, 14 Dec 2015 22:40:37 GMT", "version": "v1" }, { "created": "Tue, 22 Dec 2015 06:07:27 GMT", "version": "v2" }, { "created": "Tue, 5 Jan 2016 00:03:39 GMT", "version": "v3" }, { "created": "Sat, 16 Jan 2016 23:26:01 GMT", "version": "v4" } ]
2016-01-19
[ [ "West", "Jeffrey", "" ], [ "Hasnain", "Zaki", "" ], [ "Mason", "Jeremy", "" ], [ "Newton", "Paul K.", "" ] ]
2208.02433
Jiahao Ma
Jinhuan Ke, Jiahao Ma, Xiyu Yin, Robin Singh
Simulation and application of COVID-19 compartment model using physics-informed neural network
null
null
null
null
q-bio.QM cs.LG physics.soc-ph q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The COVID-19 pandemic has had a disruptive and irreversible impact globally, yet traditional epidemiological modeling approaches such as the susceptible-infected-recovered (SIR) model have exhibited limited effectiveness in forecasting the evolving pandemic situation. In this work, the susceptible-vaccinated-exposed-infected-dead-recovered (SVEIDR) model and its variants -- age- and vaccination-structured SVEIDR models -- are introduced to encode the effect of social contact for different age groups and vaccination statuses. Then, we implement the physics-informed neural network (PiNN) on both simulated and real-world data. The PiNN model enables robust analysis of the dynamic spread, prediction, and parameter optimization of the COVID-19 compartmental models. The models exhibit a relative root mean square error (RRMSE) of $<4\%$ for all components and provide incubation, death, and recovery rates of $\gamma= 0.0130$, $\lambda=0.0001$, and $\rho=0.0037$, respectively, for the first 310 days of the epidemic in the US, with an RRMSE of $<0.35\%$ for all components. To further improve the model performance, temporally varying parameters can be included, such as vaccination, transmission, and incubation rates. Our implementation highlights PiNN as a reliable candidate approach for forecasting real-world data and can be applied to other compartmental model variants of interest.
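For orientation, here is a forward simulation of the plain SIR model that SVEIDR extends, using simple Euler stepping rather than a physics-informed neural network; population size and rates are illustrative, not the fitted values from the paper.

```python
def sir_step(S, I, R, beta, gamma, dt):
    """One Euler step of the SIR equations; dS + dI + dR = 0 conserves N."""
    N = S + I + R
    dS = -beta * S * I / N
    dI = beta * S * I / N - gamma * I
    dR = gamma * I
    return S + dS * dt, I + dI * dt, R + dR * dt

def simulate_sir(S0=990.0, I0=10.0, R0=0.0, beta=0.3, gamma=0.1,
                 days=160, steps_per_day=10):
    """Integrate SIR forward in time; returns the full (S, I, R) trajectory."""
    S, I, R = S0, I0, R0
    out = [(S, I, R)]
    dt = 1.0 / steps_per_day
    for _ in range(days * steps_per_day):
        S, I, R = sir_step(S, I, R, beta, gamma, dt)
        out.append((S, I, R))
    return out

traj = simulate_sir()
peak_I = max(I for _, I, _ in traj)
```

A PiNN approach would instead treat beta and gamma as unknowns and penalize the network's outputs for violating these same differential equations while fitting observed case counts.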
[ { "created": "Thu, 4 Aug 2022 03:59:37 GMT", "version": "v1" }, { "created": "Tue, 13 Sep 2022 09:15:42 GMT", "version": "v2" }, { "created": "Sat, 17 Sep 2022 13:17:40 GMT", "version": "v3" }, { "created": "Wed, 12 Oct 2022 04:25:51 GMT", "version": "v4" } ]
2022-10-13
[ [ "Ke", "Jinhuan", "" ], [ "Ma", "Jiahao", "" ], [ "Yin", "Xiyu", "" ], [ "Singh", "Robin", "" ] ]
1907.12030
Macoto Kikuchi
Shintaro Nagata and Macoto Kikuchi
Emergence of cooperative bistability and robustness of gene regulatory networks
null
PLoS Comput Biol 16(6): e1007969 (2020)
10.1371/journal.pcbi.1007969
null
q-bio.MN cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gene regulatory networks (GRNs) are complex systems in which many genes regulate mutually to adapt the cell state to environmental conditions. In addition to function, the GRNs possess several kinds of robustness. This robustness means that systems do not lose their functionality when exposed to disturbances such as mutations or noise, and is widely observed at many levels in living systems. Both function and robustness have been acquired through evolution. In this respect, GRNs utilized in living systems are rare among all possible GRNs. In this study, we explored the fitness landscape of GRNs and investigated how robustness emerged in highly-fit GRNs. We considered a toy model of GRNs with one input gene and one output gene. The difference in the expression level of the output gene between two input states, "on" and "off", was considered as fitness. Thus, the determination of the fitness of a GRN was based on how sensitively it responded to the input. We employed the multicanonical Monte Carlo method, which can sample GRNs randomly in a wide range of fitness levels, and classified the GRNs according to their fitness. As a result, the following properties were found: (1) Highly-fit GRNs exhibited bistability for intermediate input between "on" and "off". This bistability emerges necessarily as fitness increases. (2) These highly-fit GRNs were robust against noise because of their bistability. (3) GRNs that were robust against mutations were not extremely rare among the highly-fit GRNs. This implies that mutational robustness is readily acquired through the evolutionary process.
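The bistability described in finding (1) can be illustrated with a one-gene toy: a self-activating gene with Hill-function regulation develops two stable expression levels, a low "off" state and a high "on" state. The model and all parameters below are a generic sketch, not the paper's GRN ensemble or multicanonical sampling scheme.

```python
def dxdt(x, basal=0.1, b=1.0, K=0.5, n=4, deg=1.0):
    """Rate of change of expression x for a self-activating gene:
    basal production + Hill-function activation - linear degradation."""
    return basal + b * x**n / (K**n + x**n) - deg * x

def stable_fixed_points(lo=0.0, hi=2.0, steps=20000):
    """Scan for sign changes of dx/dt from + to - (stable equilibria)."""
    fps = []
    dx = (hi - lo) / steps
    prev = dxdt(lo)
    for k in range(1, steps + 1):
        x = lo + k * dx
        cur = dxdt(x)
        if prev > 0 >= cur:
            fps.append(x)
        prev = cur
    return fps

fps = stable_fixed_points()  # expect a low "off" state and a high "on" state
```

With these parameters the flow crosses zero downward twice, so the gene rests at either a low or a high level depending on its history, which is the noise-robust switching behavior the abstract attributes to highly-fit GRNs.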
[ { "created": "Sun, 28 Jul 2019 07:07:48 GMT", "version": "v1" }, { "created": "Thu, 2 Apr 2020 06:24:08 GMT", "version": "v2" }, { "created": "Mon, 4 Dec 2023 06:12:45 GMT", "version": "v3" } ]
2023-12-05
[ [ "Nagata", "Shintaro", "" ], [ "Kikuchi", "Macoto", "" ] ]
0906.0125
Vesna Memisevic
Vesna Memisevic, Tijana Milenkovic, and Natasa Przulj
An integrative approach to modeling biological networks
10 pages, 3 tables, 4 figures
Vesna Memisevic, Tijana Milenkovic and Natasa Przulj. An integrative approach to modeling biological networks. Journal of Integrative Bioinformatics, 7(3):120, 2010.
10.2390/biecoll-jib-2010-120
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since proteins carry out biological processes by interacting with other proteins, analyzing the structure of protein-protein interaction (PPI) networks could explain complex biological mechanisms, evolution, and disease. Similarly, studying protein structure networks, residue interaction graphs (RIGs), might provide insights into protein folding, stability, and function. The first step towards understanding these networks is finding an adequate network model that closely replicates their structure. Evaluating the fit of a model to the data requires comparing the model with real-world networks. Since exhaustive network comparisons are computationally infeasible, they rely on heuristics, or "network properties." We show that it is difficult to assess the reliability of the fit of a model with any individual network property. Thus, our approach integrates a variety of network properties and further combines these with a series of probabilistic methods to predict an appropriate network model for biological networks. We find geometric random graphs, which model spatial relationships between objects, to be the best-fitting model for RIGs. This validates the correctness of our method, since RIGs have previously been shown to be geometric. We apply our approach to noisy PPI networks and demonstrate that their structure is also consistent with geometric random graphs.
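A minimal sketch of the best-fitting model class: a two-dimensional geometric random graph, together with one of the heuristic "network properties" (global clustering) used for model-versus-data comparisons. The dimension, radius, and sizes are illustrative choices.

```python
import math
import random

def geometric_random_graph(n, r, seed=0):
    """Drop n points uniformly in the unit square; connect pairs closer than r."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(pts[i], pts[j]) < r]
    return pts, edges

def global_clustering(n, edges):
    """Global clustering coefficient: closed triples / connected triples."""
    adj = [set() for _ in range(n)]
    for i, j in edges:
        adj[i].add(j)
        adj[j].add(i)
    closed = sum(len(adj[i] & adj[j]) for i, j in edges)  # counts each triangle 3x
    triples = sum(len(a) * (len(a) - 1) // 2 for a in adj)
    return closed / triples if triples else 0.0

pts, edges = geometric_random_graph(n=50, r=0.3, seed=1)
cc = global_clustering(50, edges)
```

Geometric random graphs typically show high clustering, because two neighbors of a point are themselves likely to be close, which is one reason this property helps discriminate them from, say, Erdos-Renyi graphs.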
[ { "created": "Sun, 31 May 2009 00:32:57 GMT", "version": "v1" }, { "created": "Thu, 24 Sep 2009 06:10:06 GMT", "version": "v2" } ]
2010-04-22
[ [ "Memisevic", "Vesna", "" ], [ "Milenkovic", "Tijana", "" ], [ "Przulj", "Natasa", "" ] ]
q-bio/0403018
Yisroel Brumer
Yisroel Brumer and Eugene I. Shakhnovich
The Importance of DNA Repair in Tumor Suppression
7 pages, 5 figures; Approximation replaced with exact calculation; Minor error corrected; Minor changes to model system
null
10.1103/PhysRevE.70.061912
null
q-bio.GN cond-mat.other q-bio.OT
null
The transition from a normal to cancerous cell requires a number of highly specific mutations that affect cell cycle regulation, apoptosis, differentiation, and many other cell functions. One hallmark of cancerous genomes is genomic instability, with mutation rates far greater than those of normal cells. In microsatellite instability (MIN tumors), these are often caused by damage to mismatch repair genes, allowing further mutation of the genome and tumor progression. These mutation rates may lie near the error catastrophe found in the quasispecies model of adaptive RNA genomes, suggesting that further increasing mutation rates will destroy cancerous genomes. However, recent results have demonstrated that DNA genomes exhibit an error threshold at mutation rates far lower than their conservative counterparts. Furthermore, while the maximum viable mutation rate in conservative systems increases indefinitely with increasing master sequence fitness, the semiconservative threshold plateaus at a relatively low value. This implies a paradox, wherein inaccessible mutation rates are found in viable tumor cells. In this paper, we address this paradox, demonstrating an isomorphism between the conservatively replicating (RNA) quasispecies model and the semiconservative (DNA) model with post-methylation DNA repair mechanisms impaired. Thus, as DNA repair becomes inactivated, the maximum viable mutation rate increases smoothly to that of a conservatively replicating system on a transformed landscape, with an upper bound that is dependent on replication rates. We postulate that inactivation of post-methylation repair mechanisms is fundamental to the progression of a tumor cell, and hence that these mechanisms act as a method for the prevention and destruction of cancerous genomes.
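The error catastrophe invoked here has a simple closed form in the conservative (Eigen) quasispecies model: with per-base copying error rate u, sequence length L, and master-sequence selective advantage sigma, the master sequence persists while (1 - u)^L * sigma > 1. The numbers below are illustrative, not values from the paper.

```python
import math

def max_mutation_rate(L, sigma):
    """Error-threshold mutation rate for a conservative replicator:
    the master sequence persists while (1 - u)**L * sigma > 1,
    giving u_crit = 1 - sigma**(-1/L), approximately ln(sigma)/L for small u."""
    return 1.0 - sigma ** (-1.0 / L)

# Illustrative genome length and selective advantage:
u_crit = max_mutation_rate(L=1000, sigma=10.0)
```

Note that u_crit grows only logarithmically with sigma, which is the conservative-model behavior the abstract contrasts with the plateauing semiconservative threshold.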
[ { "created": "Mon, 15 Mar 2004 14:38:24 GMT", "version": "v1" }, { "created": "Tue, 4 May 2004 15:01:31 GMT", "version": "v2" } ]
2009-11-10
[ [ "Brumer", "Yisroel", "" ], [ "Shakhnovich", "Eugene I.", "" ] ]
1508.03526
Marco Antoniotti
Andrea Paroni, Alex Graudenzi, Giulio Caravagna, Chiara Damiani, Giancarlo Mauri, Marco Antoniotti
CABeRNET: a Cytoscape app for Augmented Boolean models of gene Regulatory NETworks
18 pages, 3 figures
null
null
null
q-bio.MN q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background. Dynamical models of gene regulatory networks (GRNs) are highly effective in describing complex biological phenomena and processes, such as cell differentiation and cancer development. Yet, the topological and functional characterization of real GRNs is often still partial, and an exhaustive picture of their functioning is missing. Motivation. We here introduce CABeRNET, a Cytoscape app for the generation, simulation and analysis of Boolean models of GRNs, specifically focused on their augmentation when only a partial topological and functional characterization of the network is available. By generating large ensembles of networks in which user-defined entities and relations are added to the original core, CABeRNET allows one to formulate hypotheses on the missing portions of real networks, as well as to investigate their generic properties, in the spirit of complexity science. Results. CABeRNET offers a series of innovative simulation and modeling functions and tools, including (but not limited to) the dynamical characterization of the gene activation patterns ruling cell types and differentiation fates, and sophisticated robustness assessments, as in the case of gene knockouts. The integration within the widely used Cytoscape framework for the visualization and analysis of biological networks makes CABeRNET a new essential instrument for both the bioinformatician and the computational biologist, as well as a computational support for the experimentalist. An example application concerning the analysis of an augmented T-helper cell GRN is provided.
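The kind of object such a tool simulates can be sketched as a random Boolean network with synchronous updates and attractor detection. The topology, update rules, and sizes below are randomly generated illustrations, not output of the CABeRNET app.

```python
import random

def random_boolean_network(n, k, seed=0):
    """Each of n genes reads k randomly chosen regulators through a
    randomly drawn Boolean truth table (a classic random Boolean network)."""
    rng = random.Random(seed)
    inputs = [tuple(rng.sample(range(n), k)) for _ in range(n)]
    tables = [tuple(rng.randint(0, 1) for _ in range(2 ** k)) for _ in range(n)]
    return inputs, tables

def step(state, inputs, tables):
    """Synchronous update: each gene looks up its regulators' bits."""
    nxt = []
    for inp, tab in zip(inputs, tables):
        idx = 0
        for g in inp:
            idx = (idx << 1) | state[g]
        nxt.append(tab[idx])
    return tuple(nxt)

def attractor_length(state, inputs, tables, max_steps=1000):
    """Iterate until a previously seen state recurs; return the cycle length."""
    seen = {}
    for t in range(max_steps):
        if state in seen:
            return t - seen[state]
        seen[state] = t
        state = step(state, inputs, tables)
    return None

inputs, tables = random_boolean_network(n=8, k=2, seed=42)
cycle = attractor_length((0,) * 8, inputs, tables)
```

In the Boolean-GRN picture, each attractor of such a network is interpreted as a cell type or differentiation fate, which is what the dynamical characterization mentioned in the abstract examines.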
[ { "created": "Sun, 26 Jul 2015 15:13:58 GMT", "version": "v1" } ]
2015-08-17
[ [ "Paroni", "Andrea", "" ], [ "Graudenzi", "Alex", "" ], [ "Caravagna", "Giulio", "" ], [ "Damiani", "Chiara", "" ], [ "Mauri", "Giancarlo", "" ], [ "Antoniotti", "Marco", "" ] ]
2408.06563
Shila Ghazanfar
Martin Hemberg, Federico Marini, Shila Ghazanfar, Ahmad Al Ajami, Najla Abassi, Benedict Anchang, B\'er\'enice A. Benayoun, Yue Cao, Ken Chen, Yesid Cuesta-Astroz, Zach DeBruine, Calliope A. Dendrou, Iwijn De Vlaminck, Katharina Imkeller, Ilya Korsunsky, Alex R. Lederer, Pieter Meysman, Clint Miller, Kerry Mullan, Uwe Ohler, Nikolaos Patikas, Jonas Schuck, Jacqueline HY Siu, Timothy J. Triche Jr., Alex Tsankov, Sander W. van der Laan, Masanao Yajima, Jean Yang, Fabio Zanini, Ivana Jelic
Insights, opportunities and challenges provided by large cell atlases
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by-nc-nd/4.0/
The field of single-cell biology is growing rapidly and is generating large amounts of data from a variety of species, disease conditions, tissues, and organs. Coordinated efforts such as CZI CELLxGENE, HuBMAP, Broad Institute Single Cell Portal, and DISCO, allow researchers to access large volumes of curated datasets. Although the majority of the data is from scRNAseq experiments, a wide range of other modalities are represented as well. These resources have created an opportunity to build and expand the computational biology ecosystem to develop tools necessary for data reuse, and for extracting novel biological insights. Here, we highlight achievements made so far, areas where further development is needed, and specific challenges that need to be overcome.
[ { "created": "Tue, 13 Aug 2024 01:55:06 GMT", "version": "v1" } ]
2024-08-14
[ [ "Hemberg", "Martin", "" ], [ "Marini", "Federico", "" ], [ "Ghazanfar", "Shila", "" ], [ "Ajami", "Ahmad Al", "" ], [ "Abassi", "Najla", "" ], [ "Anchang", "Benedict", "" ], [ "Benayoun", "Bérénice A.", "" ...
The field of single-cell biology is growing rapidly and is generating large amounts of data from a variety of species, disease conditions, tissues, and organs. Coordinated efforts such as CZI CELLxGENE, HuBMAP, Broad Institute Single Cell Portal, and DISCO, allow researchers to access large volumes of curated datasets. Although the majority of the data is from scRNAseq experiments, a wide range of other modalities are represented as well. These resources have created an opportunity to build and expand the computational biology ecosystem to develop tools necessary for data reuse, and for extracting novel biological insights. Here, we highlight achievements made so far, areas where further development is needed, and specific challenges that need to be overcome.
2107.11364
Andrea Radtke
Andrea J. Radtke, Colin J. Chu, Ziv Yaniv, Li Yao, James Marr, Rebecca T. Beuschel, Hiroshi Ichise, Anita Gola, Juraj Kabat, Bradley Lowekamp, Emily Speranza, Joshua Croteau, Nishant Thakur, Danny Jonigk, Jeremy Davis, Jonathan M. Hernandez, and Ronald N. Germain
IBEX: An open and extensible method for high content multiplex imaging of diverse tissues
null
null
null
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
High content imaging is needed to catalogue the variety of cellular phenotypes and multi-cellular ecosystems present in metazoan tissues. We recently developed Iterative Bleaching Extends multi-pleXity (IBEX), an iterative immunolabeling and chemical bleaching method that enables multiplexed imaging (>65 parameters) in diverse tissues, including human organs relevant for international consortia efforts. IBEX is compatible with over 250 commercially available antibodies, 16 unique fluorophores, and can be easily adapted to different imaging platforms using slides and non-proprietary imaging chambers. The overall protocol consists of iterative cycles of antibody labelling, imaging, and chemical bleaching that can be completed at relatively low cost in 2-5 days by biologists with basic laboratory skills. To support widespread adoption, we provide extensive details on tissue processing, curated lists of validated antibodies, and tissue-specific panels for multiplex imaging. Furthermore, instructions are included on how to automate the method using competitively priced instruments and reagents. Finally, we present a software solution for image alignment that can be executed by individuals without programming experience using open source software and freeware. In summary, IBEX is an open and versatile method that can be readily implemented by academic laboratories and scaled to achieve high content mapping of diverse tissues in support of a Human Reference Atlas or other such applications.
[ { "created": "Fri, 23 Jul 2021 17:29:46 GMT", "version": "v1" } ]
2021-07-26
[ [ "Radtke", "Andrea J.", "" ], [ "Chu", "Colin J.", "" ], [ "Yaniv", "Ziv", "" ], [ "Yao", "Li", "" ], [ "Marr", "James", "" ], [ "Beuschel", "Rebecca T.", "" ], [ "Ichise", "Hiroshi", "" ], [ "Gola",...
High content imaging is needed to catalogue the variety of cellular phenotypes and multi-cellular ecosystems present in metazoan tissues. We recently developed Iterative Bleaching Extends multi-pleXity (IBEX), an iterative immunolabeling and chemical bleaching method that enables multiplexed imaging (>65 parameters) in diverse tissues, including human organs relevant for international consortia efforts. IBEX is compatible with over 250 commercially available antibodies, 16 unique fluorophores, and can be easily adapted to different imaging platforms using slides and non-proprietary imaging chambers. The overall protocol consists of iterative cycles of antibody labelling, imaging, and chemical bleaching that can be completed at relatively low cost in 2-5 days by biologists with basic laboratory skills. To support widespread adoption, we provide extensive details on tissue processing, curated lists of validated antibodies, and tissue-specific panels for multiplex imaging. Furthermore, instructions are included on how to automate the method using competitively priced instruments and reagents. Finally, we present a software solution for image alignment that can be executed by individuals without programming experience using open source software and freeware. In summary, IBEX is an open and versatile method that can be readily implemented by academic laboratories and scaled to achieve high content mapping of diverse tissues in support of a Human Reference Atlas or other such applications.
2210.02996
Alexandra Proca
Alexandra M. Proca, Fernando E. Rosas, Andrea I. Luppi, Daniel Bor, Matthew Crosby, Pedro A.M. Mediano
Synergistic information supports modality integration and flexible learning in neural networks solving multiple tasks
33 pages, 15 figures
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
Striking progress has recently been made in understanding human cognition by analyzing how its neuronal underpinnings are engaged in different modes of information processing. Specifically, neural information can be decomposed into synergistic, redundant, and unique features, with synergistic components being particularly aligned with complex cognition. However, two fundamental questions remain unanswered: (a) precisely how and why a cognitive system can become highly synergistic; and (b) how these informational states map onto artificial neural networks in various learning modes. To address these questions, here we employ an information-decomposition framework to investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks in both supervised and reinforcement learning settings. Our results show that synergy increases as neural networks learn multiple diverse tasks. Furthermore, performance in tasks requiring integration of multiple information sources critically relies on synergistic neurons. Finally, randomly turning off neurons during training through dropout increases network redundancy, corresponding to an increase in robustness. Overall, our results suggest that while redundant information is required for robustness to perturbations in the learning process, synergistic information is used to combine information from multiple modalities -- and more generally for flexible and efficient learning. These findings open the door to new ways of investigating how and why learning systems employ specific information-processing strategies, and support the principle that the capacity for general-purpose learning critically relies in the system's information dynamics.
[ { "created": "Thu, 6 Oct 2022 15:36:27 GMT", "version": "v1" } ]
2022-10-07
[ [ "Proca", "Alexandra M.", "" ], [ "Rosas", "Fernando E.", "" ], [ "Luppi", "Andrea I.", "" ], [ "Bor", "Daniel", "" ], [ "Crosby", "Matthew", "" ], [ "Mediano", "Pedro A. M.", "" ] ]
Striking progress has recently been made in understanding human cognition by analyzing how its neuronal underpinnings are engaged in different modes of information processing. Specifically, neural information can be decomposed into synergistic, redundant, and unique features, with synergistic components being particularly aligned with complex cognition. However, two fundamental questions remain unanswered: (a) precisely how and why a cognitive system can become highly synergistic; and (b) how these informational states map onto artificial neural networks in various learning modes. To address these questions, here we employ an information-decomposition framework to investigate the information processing strategies adopted by simple artificial neural networks performing a variety of cognitive tasks in both supervised and reinforcement learning settings. Our results show that synergy increases as neural networks learn multiple diverse tasks. Furthermore, performance in tasks requiring integration of multiple information sources critically relies on synergistic neurons. Finally, randomly turning off neurons during training through dropout increases network redundancy, corresponding to an increase in robustness. Overall, our results suggest that while redundant information is required for robustness to perturbations in the learning process, synergistic information is used to combine information from multiple modalities -- and more generally for flexible and efficient learning. These findings open the door to new ways of investigating how and why learning systems employ specific information-processing strategies, and support the principle that the capacity for general-purpose learning critically relies in the system's information dynamics.
q-bio/0501030
Dietrich Stauffer
Kerstin Hoef-Emden
Molecular Phylogenetic Analyses and Real Life Data
12 pages including 4 figs., for Computing in Science and Engineering (Review), abstract change, 1 paragraph changed in section 5 and 1 reference added, correction of Fig. 3 (text moved away from arrow), modification of conclusions
null
null
null
q-bio.GN
null
In molecular phylogeny, relationships among organisms are reconstructed using DNA or protein sequences and are displayed as trees. A linear increase in the number of sequences results in an exponential increase of possible trees. Thus, inferring trees from molecular data was shown to be NP-hard. This causes problems if large data sets are used. This review gives an introduction to molecular phylogenetic methods and to the problems biologists are facing in molecular phylogenetic analyses.
[ { "created": "Sun, 23 Jan 2005 08:48:29 GMT", "version": "v1" }, { "created": "Sat, 29 Jan 2005 18:08:21 GMT", "version": "v2" } ]
2007-05-23
[ [ "Hoef-Emden", "Kerstin", "" ] ]
In molecular phylogeny, relationships among organisms are reconstructed using DNA or protein sequences and are displayed as trees. A linear increase in the number of sequences results in an exponential increase of possible trees. Thus, inferring trees from molecular data was shown to be NP-hard. This causes problems if large data sets are used. This review gives an introduction to molecular phylogenetic methods and to the problems biologists are facing in molecular phylogenetic analyses.
1009.4478
Sergei Maslov
Tin Yau Pang and Sergei Maslov
Toolbox model of evolution of metabolic pathways on networks of arbitrary topology
34 pages, 9 figures, 2 tables
null
null
null
q-bio.MN q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In prokaryotic genomes the number of transcriptional regulators is known to quadratically scale with the total number of protein-coding genes. The toolbox model was recently proposed to explain this scaling for metabolic enzymes and their regulators. According to its rules, the metabolic network of an organism evolves by horizontal transfer of pathways from other species. These pathways are part of a larger "universal" network formed by the union of all species-specific networks. It remained to be understood, however, how the topological properties of this universal network influence the scaling law of functional content of genomes. In this study we answer this question by first analyzing the scaling properties of the toolbox model on arbitrary tree-like universal networks. We mathematically prove that the critical branching topology, in which the average number of upstream neighbors of a node is equal to one, is both necessary and sufficient for the quadratic scaling. Conversely, the toolbox model on trees with exponentially expanding, supercritical topology is characterized by the linear scaling with logarithmic corrections. We further generalize our model to include reactions with multiple substrates/products as well as branched or cyclic metabolic pathways. Unlike the original model, the new version employs evolutionary optimized pathways with the smallest number of reactions necessary to achieve their metabolic tasks. Numerical simulations of this most realistic model on the universal network from the KEGG database again produced approximately quadratic scaling. Our results demonstrate why, in spite of their "small-world" topology, real-life metabolic networks are characterized by a broad distribution of pathway lengths and sizes of metabolic regulons in regulatory networks.
[ { "created": "Wed, 22 Sep 2010 21:07:32 GMT", "version": "v1" } ]
2010-09-24
[ [ "Pang", "Tin Yau", "" ], [ "Maslov", "Sergei", "" ] ]
In prokaryotic genomes the number of transcriptional regulators is known to quadratically scale with the total number of protein-coding genes. The toolbox model was recently proposed to explain this scaling for metabolic enzymes and their regulators. According to its rules, the metabolic network of an organism evolves by horizontal transfer of pathways from other species. These pathways are part of a larger "universal" network formed by the union of all species-specific networks. It remained to be understood, however, how the topological properties of this universal network influence the scaling law of functional content of genomes. In this study we answer this question by first analyzing the scaling properties of the toolbox model on arbitrary tree-like universal networks. We mathematically prove that the critical branching topology, in which the average number of upstream neighbors of a node is equal to one, is both necessary and sufficient for the quadratic scaling. Conversely, the toolbox model on trees with exponentially expanding, supercritical topology is characterized by the linear scaling with logarithmic corrections. We further generalize our model to include reactions with multiple substrates/products as well as branched or cyclic metabolic pathways. Unlike the original model, the new version employs evolutionary optimized pathways with the smallest number of reactions necessary to achieve their metabolic tasks. Numerical simulations of this most realistic model on the universal network from the KEGG database again produced approximately quadratic scaling. Our results demonstrate why, in spite of their "small-world" topology, real-life metabolic networks are characterized by a broad distribution of pathway lengths and sizes of metabolic regulons in regulatory networks.
1012.1521
Benjamin Pfeuty
Benjamin Pfeuty (PhLAM), Quentin Thommen (PhLAM), Marc Lefranc (PhLAM)
Robust entrainment of circadian oscillators requires specific phase response curves
null
null
10.1016/j.bpj.2011.04.043
null
q-bio.QM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The circadian clocks keeping time of day in many living organisms rely on self-sustained biochemical oscillations which can be entrained by external cues, such as light, to the 24-hour cycle induced by Earth's rotation. However, environmental cues are unreliable due to the variability of habitats, weather conditions or cue-sensing mechanisms among individuals. A tempting hypothesis is that circadian clocks have evolved so as to be robust to fluctuations in daylight or other cues when entrained by the day/night cycle. To test this hypothesis, we analyze the synchronization behavior of weakly and periodically forced oscillators in terms of their phase response curve (PRC), which measures phase changes induced by a perturbation applied at different phases. We establish a general relationship between, on the one side, the robustness of key entrainment properties such as stability and phase shift and, on the other side, the shape of the PRC as characterized by a specific curvature or the existence of a dead zone. This result can be applied to computational models of circadian clocks where it accounts for the disparate robustness properties of various forcing schemes. Finally, the analysis of PRCs measured experimentally in several organisms strongly suggests a case of convergent evolution toward an optimal strategy for maintaining a clock that is accurate and robust to environmental fluctuations.
[ { "created": "Tue, 7 Dec 2010 14:44:24 GMT", "version": "v1" } ]
2015-05-20
[ [ "Pfeuty", "Benjamin", "", "PhLAM" ], [ "Thommen", "Quentin", "", "PhLAM" ], [ "Lefranc", "Marc", "", "PhLAM" ] ]
The circadian clocks keeping time of day in many living organisms rely on self-sustained biochemical oscillations which can be entrained by external cues, such as light, to the 24-hour cycle induced by Earth's rotation. However, environmental cues are unreliable due to the variability of habitats, weather conditions or cue-sensing mechanisms among individuals. A tempting hypothesis is that circadian clocks have evolved so as to be robust to fluctuations in daylight or other cues when entrained by the day/night cycle. To test this hypothesis, we analyze the synchronization behavior of weakly and periodically forced oscillators in terms of their phase response curve (PRC), which measures phase changes induced by a perturbation applied at different phases. We establish a general relationship between, on the one side, the robustness of key entrainment properties such as stability and phase shift and, on the other side, the shape of the PRC as characterized by a specific curvature or the existence of a dead zone. This result can be applied to computational models of circadian clocks where it accounts for the disparate robustness properties of various forcing schemes. Finally, the analysis of PRCs measured experimentally in several organisms strongly suggests a case of convergent evolution toward an optimal strategy for maintaining a clock that is accurate and robust to environmental fluctuations.
1302.7075
Austin Meyer
Austin G. Meyer, Sara L. Sawyer, Andrew D. Ellington, Claus O. Wilke
Analyzing Machupo virus-receptor binding by molecular dynamics simulations
33 pages, 8 figures, 5 tables
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In many biological applications, we would like to be able to computationally predict mutational effects on affinity in protein-protein interactions. However, many commonly used methods to predict these effects perform poorly in important test cases. In particular, the effects of multiple mutations, non-alanine substitutions, and flexible loops are difficult to predict with available tools and protocols. We present here an existing method applied in a novel way to a new test case; we interrogate affinity differences resulting from mutations in a host-virus protein-protein interface. We use steered molecular dynamics (SMD) to computationally pull the Machupo virus (MACV) spike glycoprotein (GP1) away from the human transferrin receptor (hTfR1). We then approximate affinity using the maximum applied force of separation and the area under the force-versus-distance curve. We find, even without the rigor and planning required for free energy calculations, that these quantities can provide novel biophysical insight into the GP1/hTfR1 interaction. First, with no prior knowledge of the system we can differentiate among wild type and mutant complexes. Moreover, we show that this simple SMD scheme correlates well with relative free energy differences computed via free energy perturbation. Second, although the static co-crystal structure shows two large hydrogen-bonding networks in the GP1/hTfR1 interface, our simulations indicate that one of them may not be important for tight binding. Third, one viral site known to be critical for infection may mark an important evolutionary suppressor site for infection-resistant hTfR1 mutants. Finally, our approach provides a framework to compare the effects of multiple mutations, individually and jointly, on protein-protein interactions.
[ { "created": "Thu, 28 Feb 2013 03:59:45 GMT", "version": "v1" }, { "created": "Wed, 26 Jun 2013 23:16:28 GMT", "version": "v2" }, { "created": "Tue, 14 Jan 2014 02:16:13 GMT", "version": "v3" } ]
2014-01-15
[ [ "Meyer", "Austin G.", "" ], [ "Sawyer", "Sara L.", "" ], [ "Ellington", "Andrew D.", "" ], [ "Wilke", "Claus O.", "" ] ]
In many biological applications, we would like to be able to computationally predict mutational effects on affinity in protein-protein interactions. However, many commonly used methods to predict these effects perform poorly in important test cases. In particular, the effects of multiple mutations, non-alanine substitutions, and flexible loops are difficult to predict with available tools and protocols. We present here an existing method applied in a novel way to a new test case; we interrogate affinity differences resulting from mutations in a host-virus protein-protein interface. We use steered molecular dynamics (SMD) to computationally pull the Machupo virus (MACV) spike glycoprotein (GP1) away from the human transferrin receptor (hTfR1). We then approximate affinity using the maximum applied force of separation and the area under the force-versus-distance curve. We find, even without the rigor and planning required for free energy calculations, that these quantities can provide novel biophysical insight into the GP1/hTfR1 interaction. First, with no prior knowledge of the system we can differentiate among wild type and mutant complexes. Moreover, we show that this simple SMD scheme correlates well with relative free energy differences computed via free energy perturbation. Second, although the static co-crystal structure shows two large hydrogen-bonding networks in the GP1/hTfR1 interface, our simulations indicate that one of them may not be important for tight binding. Third, one viral site known to be critical for infection may mark an important evolutionary suppressor site for infection-resistant hTfR1 mutants. Finally, our approach provides a framework to compare the effects of multiple mutations, individually and jointly, on protein-protein interactions.
2005.01696
Bernard Offmann
Surbhi Dhingra, Ramanathan Sowdhamini, Yves-Henri Sanejouand, Fr\'ed\'eric Cadet, Bernard Offmann
Customised fragment libraries for ab initio protein structure prediction using a structural alphabet
18 pages, 7 figures, 2 Tables, 2 Supplementary Figures, 2 Supplementary Tables
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Motivation: Computational protein structure prediction has taken over the structural community in the past few decades, mostly focusing on the development of Template-Free modelling (TFM) or ab initio modelling protocols. Fragment-based assembly (FBA) falls under this category and is by far the most popular approach to solve the spatial arrangements of proteins. FBA approaches usually rely on sequence-based profile comparison to generate fragments from a representative structural database. Here we report the use of Protein Blocks (PBs), a structural alphabet (SA), to perform such sequence comparison and to build customised fragment libraries for TFM. Results: We demonstrate that predicted PB sequences for a query protein can be used to search for high-quality fragments that overall cover above 90% of the query. The fragments generated are of minimum length of 11 residues, and fragments that cover more than 30% of the query length were often obtained. Our work shows that PBs can serve as a good way to extract structurally similar fragments from a database of representatives of non-homologous structures and of proteins that contain less ordered regions.
[ { "created": "Mon, 4 May 2020 17:50:38 GMT", "version": "v1" } ]
2020-05-05
[ [ "Dhingra", "Surbhi", "" ], [ "Sowdhamini", "Ramanathan", "" ], [ "Sanejouand", "Yves-Henri", "" ], [ "Cadet", "Frédéric", "" ], [ "Offmann", "Bernard", "" ] ]
Motivation: Computational protein structure prediction has taken over the structural community in the past few decades, mostly focusing on the development of Template-Free modelling (TFM) or ab initio modelling protocols. Fragment-based assembly (FBA) falls under this category and is by far the most popular approach to solve the spatial arrangements of proteins. FBA approaches usually rely on sequence-based profile comparison to generate fragments from a representative structural database. Here we report the use of Protein Blocks (PBs), a structural alphabet (SA), to perform such sequence comparison and to build customised fragment libraries for TFM. Results: We demonstrate that predicted PB sequences for a query protein can be used to search for high-quality fragments that overall cover above 90% of the query. The fragments generated are of minimum length of 11 residues, and fragments that cover more than 30% of the query length were often obtained. Our work shows that PBs can serve as a good way to extract structurally similar fragments from a database of representatives of non-homologous structures and of proteins that contain less ordered regions.
1204.4762
Jeremy Sumner
Barbara R. Holland, Peter D. Jarvis, and Jeremy G. Sumner
Low-parameter phylogenetic estimation under the general Markov model
22 pages, 6 figures
null
null
null
q-bio.QM math.ST q-bio.PE stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In their 2008 and 2009 papers, Sumner and colleagues introduced the "squangles" - a small set of Markov invariants for phylogenetic quartets. The squangles are consistent with the general Markov model (GM) and can be used to infer quartets without the need to explicitly estimate all parameters. As GM is inhomogeneous and hence non-stationary, the squangles are expected to perform well compared to standard approaches when there are changes in base-composition amongst species. However, GM includes the IID assumption, so the squangles should be confounded by data generated with invariant sites or with rate-variation across sites. Here we implement the squangles in a least-squares setting that returns quartets weighted by either confidence or internal edge lengths; and use these as input into a variety of quartet-based supertree methods. For the first time, we quantitatively investigate the robustness of the squangles to the breaking of IID assumptions on both simulated and real data sets; and we suggest a modification that improves the performance of the squangles in the presence of invariant sites. Our conclusion is that the squangles provide a novel tool for phylogenetic estimation that is complementary to methods that explicitly account for rate-variation across sites, but rely on homogeneous - and hence stationary - models.
[ { "created": "Fri, 20 Apr 2012 23:33:42 GMT", "version": "v1" } ]
2012-04-24
[ [ "Holland", "Barbara R.", "" ], [ "Jarvis", "Peter D.", "" ], [ "Sumner", "Jeremy G.", "" ] ]
In their 2008 and 2009 papers, Sumner and colleagues introduced the "squangles" - a small set of Markov invariants for phylogenetic quartets. The squangles are consistent with the general Markov model (GM) and can be used to infer quartets without the need to explicitly estimate all parameters. As GM is inhomogeneous and hence non-stationary, the squangles are expected to perform well compared to standard approaches when there are changes in base-composition amongst species. However, GM includes the IID assumption, so the squangles should be confounded by data generated with invariant sites or with rate-variation across sites. Here we implement the squangles in a least-squares setting that returns quartets weighted by either confidence or internal edge lengths; and use these as input into a variety of quartet-based supertree methods. For the first time, we quantitatively investigate the robustness of the squangles to the breaking of IID assumptions on both simulated and real data sets; and we suggest a modification that improves the performance of the squangles in the presence of invariant sites. Our conclusion is that the squangles provide a novel tool for phylogenetic estimation that is complementary to methods that explicitly account for rate-variation across sites, but rely on homogeneous - and hence stationary - models.
1703.00357
Ver\'onica Mir\'o-Pina
Ver\'onica Mir\'o Pina, Emmanuel Schertzer
How does geographical distance translate into genetic distance?
null
Stochastic Processes and their Applications, Volume 129, Issue 10, October 2019, Pages 3893-3921
10.1016/j.spa.2018.11.004
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Geographic structure can affect patterns of genetic differentiation and speciation rates. In this article, we investigate the dynamics of genetic distances in a geographically structured metapopulation. We model the metapopulation as a weighted directed graph, with d vertices corresponding to d subpopulations that evolve according to an individual based model. The dynamics of the genetic distances is then controlled by two types of transitions - mutation and migration events. We show that, under a rare mutation - rare migration regime, intra-subpopulation diversity can be neglected and our model can be approximated by a population based model. We show that under a large population - large number of loci limit, the genetic distance between two subpopulations converges to a deterministic quantity that can asymptotically be expressed in terms of the hitting time between two random walks in the metapopulation graph. Our result shows that the genetic distance between two subpopulations does not only depend on the direct migration rates between them but on the whole metapopulation structure.
[ { "created": "Wed, 1 Mar 2017 15:56:41 GMT", "version": "v1" }, { "created": "Mon, 19 Feb 2018 17:49:20 GMT", "version": "v2" }, { "created": "Wed, 26 Sep 2018 15:24:30 GMT", "version": "v3" } ]
2020-02-25
[ [ "Pina", "Verónica Miró", "" ], [ "Schertzer", "Emmanuel", "" ] ]
Geographic structure can affect patterns of genetic differentiation and speciation rates. In this article, we investigate the dynamics of genetic distances in a geographically structured metapopulation. We model the metapopulation as a weighted directed graph, with d vertices corresponding to d subpopulations that evolve according to an individual based model. The dynamics of the genetic distances is then controlled by two types of transitions - mutation and migration events. We show that, under a rare mutation - rare migration regime, intra-subpopulation diversity can be neglected and our model can be approximated by a population based model. We show that under a large population - large number of loci limit, the genetic distance between two subpopulations converges to a deterministic quantity that can asymptotically be expressed in terms of the hitting time between two random walks in the metapopulation graph. Our result shows that the genetic distance between two subpopulations does not only depend on the direct migration rates between them but on the whole metapopulation structure.
1211.4543
Bernard Fendler
Ying Cai, Bernard Fendler, Gurinder S. Atwal
Utilizing RNA-Seq Data for Cancer Network Inference
4 pages, 2 figures, 2 tables, conference GENSIPS' 12
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An important challenge in cancer systems biology is to uncover the complex network of interactions between genes (tumor suppressor genes and oncogenes) implicated in cancer. Next generation sequencing provides unparalleled ability to probe the expression levels of the entire set of cancer genes and their transcript isoforms. However, there are onerous statistical and computational issues in interpreting high-dimensional sequencing data and inferring the underlying genetic network. In this study, we analyzed RNA-Seq data from lymphoblastoid cell lines derived from a population of 69 human individuals and implemented a probabilistic framework to construct biologically-relevant genetic networks. In particular, we employed a graphical lasso analysis, motivated by considerations of the maximum entropy formalism, to estimate the sparse inverse covariance matrix of RNA-Seq data. Gene ontology, pathway enrichment and protein-protein path length analysis were all carried out to validate the biological context of the predicted network of interacting cancer gene isoforms.
[ { "created": "Mon, 19 Nov 2012 19:45:59 GMT", "version": "v1" }, { "created": "Thu, 6 Dec 2012 19:06:26 GMT", "version": "v2" } ]
2012-12-10
[ [ "Cai", "Ying", "" ], [ "Fendler", "Bernard", "" ], [ "Atwal", "Gurinder S.", "" ] ]
An important challenge in cancer systems biology is to uncover the complex network of interactions between genes (tumor suppressor genes and oncogenes) implicated in cancer. Next generation sequencing provides unparalleled ability to probe the expression levels of the entire set of cancer genes and their transcript isoforms. However, there are onerous statistical and computational issues in interpreting high-dimensional sequencing data and inferring the underlying genetic network. In this study, we analyzed RNA-Seq data from lymphoblastoid cell lines derived from a population of 69 human individuals and implemented a probabilistic framework to construct biologically-relevant genetic networks. In particular, we employed a graphical lasso analysis, motivated by considerations of the maximum entropy formalism, to estimate the sparse inverse covariance matrix of RNA-Seq data. Gene ontology, pathway enrichment and protein-protein path length analysis were all carried out to validate the biological context of the predicted network of interacting cancer gene isoforms.
2408.06794
Ari Rappoport
Ari Rappoport
A Polyunsaturated Fatty Acid (PUFA) Theory of Schizophrenia
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
I present a theory of schizophrenia (SZ) that mechanistically explains its etiology, symptoms, pathophysiology, and treatment. SZ involves the chronic release of membrane polyunsaturated fatty acids (PUFAs) and their utilization for the synthesis of stress-induced plasticity agents such as endocannabinoids (ECBs). The causal event in SZ is prolonged stress during a sensitive period, which can induce prolonged and heritable changes. The physiological effect of the released PUFAs and their products is to disconnect neurons from their inputs and promote intrinsic excitability. I show that these effects can explain the positive, negative, cognitive, and mood symptoms of SZ, as well as the mechanisms of many known triggers of psychosis. The theory is supported by overwhelming evidence addressing lipids, immunity, ECBs, neuromodulators, hormones, neurotransmitters, and cortical parameters in SZ. It explains why antipsychotic drugs are effective against positive symptoms, and why they do not affect the other symptoms. Finally, I present promising treatment directions implied by the theory, including some that are immediately available.
[ { "created": "Tue, 13 Aug 2024 10:25:25 GMT", "version": "v1" } ]
2024-08-14
[ [ "Rappoport", "Ari", "" ] ]
I present a theory of schizophrenia (SZ) that mechanistically explains its etiology, symptoms, pathophysiology, and treatment. SZ involves the chronic release of membrane polyunsaturated fatty acids (PUFAs) and their utilization for the synthesis of stress-induced plasticity agents such as endocannabinoids (ECBs). The causal event in SZ is prolonged stress during a sensitive period, which can induce prolonged and heritable changes. The physiological effect of the released PUFAs and their products is to disconnect neurons from their inputs and promote intrinsic excitability. I show that these effects can explain the positive, negative, cognitive, and mood symptoms of SZ, as well as the mechanisms of many known triggers of psychosis. The theory is supported by overwhelming evidence addressing lipids, immunity, ECBs, neuromodulators, hormones, neurotransmitters, and cortical parameters in SZ. It explains why antipsychotic drugs are effective against positive symptoms, and why they do not affect the other symptoms. Finally, I present promising treatment directions implied by the theory, including some that are immediately available.