column           type            min    max
id               stringlengths   9      13
submitter        stringlengths   4      48
authors          stringlengths   4      9.62k
title            stringlengths   4      343
comments         stringlengths   2      480
journal-ref      stringlengths   9      309
doi              stringlengths   12     138
report-no        stringclasses   277 values
categories       stringlengths   8      87
license          stringclasses   9 values
orig_abstract    stringlengths   27     3.76k
versions         listlengths     1      15
update_date      stringlengths   10     10
authors_parsed   listlengths     1      147
abstract         stringlengths   24     3.75k
2211.14373
Fernando Bastos
Jaime Barros Silva Filho, Fernando de Souza Bastos, Diogo da Silva Machado, Maria Luiza Ferreira Delfim
Application of Molecular Topology to the Prediction of Antioxidant Activity in a Group of Phenolic Compounds
null
null
null
null
q-bio.BM stat.AP
http://creativecommons.org/licenses/by/4.0/
The study of compounds with antioxidant capabilities is of great interest to the scientific community, as it has implications in several areas, from Agricultural Sciences to Biological Sciences, including Food Engineering, Medicine and Pharmacy. In applications related to human health, it is known that antioxidant activity can delay or inhibit oxidative damage to cells, reducing damage caused by free radicals, helping in the treatment of various diseases, or even preventing or postponing their onset. Among the compounds that have antioxidant properties are several classes of Phenolic Compounds, which include molecules with diverse chemical structures. In this work, based on the molecular branching of compounds and their intramolecular charge distributions, and using Molecular Topology, we propose a significant topological-mathematical model to evaluate the potential of candidate compounds to have an antioxidant function.
[ { "created": "Fri, 25 Nov 2022 21:06:08 GMT", "version": "v1" } ]
2022-11-29
[ [ "Filho", "Jaime Barros Silva", "" ], [ "Bastos", "Fernando de Souza", "" ], [ "Machado", "Diogo da Silva", "" ], [ "Delfim", "Maria Luiza Ferreira", "" ] ]
The study of compounds with antioxidant capabilities is of great interest to the scientific community, as it has implications in several areas, from Agricultural Sciences to Biological Sciences, including Food Engineering, Medicine and Pharmacy. In applications related to human health, it is known that antioxidant activity can delay or inhibit oxidative damage to cells, reducing damage caused by free radicals, helping in the treatment of various diseases, or even preventing or postponing their onset. Among the compounds that have antioxidant properties are several classes of Phenolic Compounds, which include molecules with diverse chemical structures. In this work, based on the molecular branching of compounds and their intramolecular charge distributions, and using Molecular Topology, we propose a significant topological-mathematical model to evaluate the potential of candidate compounds to have an antioxidant function.
1910.14339
Luis F Seoane PhD
Seoane LF, Sol\'e R
How Turing parasites expand the computational landscape of digital life
16 pages, 10 figures in main paper, supporting material not included
Physical Review E. 2023 Oct 23;108(4):044407
10.1103/PhysRevE.108.044407
null
q-bio.PE cond-mat.dis-nn nlin.AO nlin.CG q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Why are living systems complex? Why does the biosphere contain living beings with complexity features beyond those of the simplest replicators? What kind of evolutionary pressures result in more complex life forms? These are key questions that pervade the problem of how complexity arises in evolution. One particular way of tackling this is grounded in an algorithmic description of life: living organisms can be seen as systems that extract and process information from their surroundings in order to reduce uncertainty. Here we take this computational approach using a simple bit string model of coevolving agents and their parasites. While agents try to predict their worlds, parasites do the same with their hosts. The result of this process is that, in order to escape their parasites, the host agents expand their computational complexity despite the cost of maintaining it. This, in turn, is followed by increasingly complex parasitic counterparts. Such arms races display several qualitative phases, from monotonous to punctuated evolution or even ecological collapse. Our minimal model illustrates the relevance of parasites in providing an active mechanism for expanding living complexity beyond simple replicators, suggesting that parasitic agents are likely to be a major evolutionary driver for biological complexity.
[ { "created": "Thu, 31 Oct 2019 10:02:20 GMT", "version": "v1" }, { "created": "Thu, 30 Apr 2020 16:59:48 GMT", "version": "v2" }, { "created": "Wed, 14 Oct 2020 14:24:03 GMT", "version": "v3" }, { "created": "Fri, 10 Nov 2023 18:06:43 GMT", "version": "v4" } ]
2023-11-13
[ [ "Seoane", "LF", "" ], [ "Solé", "R", "" ] ]
Why are living systems complex? Why does the biosphere contain living beings with complexity features beyond those of the simplest replicators? What kind of evolutionary pressures result in more complex life forms? These are key questions that pervade the problem of how complexity arises in evolution. One particular way of tackling this is grounded in an algorithmic description of life: living organisms can be seen as systems that extract and process information from their surroundings in order to reduce uncertainty. Here we take this computational approach using a simple bit string model of coevolving agents and their parasites. While agents try to predict their worlds, parasites do the same with their hosts. The result of this process is that, in order to escape their parasites, the host agents expand their computational complexity despite the cost of maintaining it. This, in turn, is followed by increasingly complex parasitic counterparts. Such arms races display several qualitative phases, from monotonous to punctuated evolution or even ecological collapse. Our minimal model illustrates the relevance of parasites in providing an active mechanism for expanding living complexity beyond simple replicators, suggesting that parasitic agents are likely to be a major evolutionary driver for biological complexity.
2110.07069
Bryan He
Bryan He, Matthew Thomson, Meena Subramaniam, Richard Perez, Chun Jimmie Ye, James Zou
CloudPred: Predicting Patient Phenotypes From Single-cell RNA-seq
Preprint of an article published in Pacific Symposium on Biocomputing \copyright\ 2021 World Scientific Publishing Co., Singapore, http://psb.stanford.edu/
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Single-cell RNA sequencing (scRNA-seq) has the potential to provide powerful, high-resolution signatures to inform disease prognosis and precision medicine. This paper takes an important first step towards this goal by developing an interpretable machine learning algorithm, CloudPred, to predict individuals' disease phenotypes from their scRNA-seq data. Predicting phenotype from scRNA-seq is challenging for standard machine learning methods -- the number of cells measured can vary by orders of magnitude across individuals and the cell populations are also highly heterogeneous. Typical analysis creates pseudo-bulk samples which are biased toward prior annotations and also lose the single cell resolution. CloudPred addresses these challenges via a novel end-to-end differentiable learning algorithm which is coupled with a biologically informed mixture of cell types model. CloudPred automatically infers the cell subpopulations that are salient for the phenotype without prior annotations. We developed a systematic simulation platform to evaluate the performance of CloudPred and several alternative methods we propose, and found that CloudPred outperforms the alternative methods across several settings. We further validated CloudPred on a real scRNA-seq dataset of 142 lupus patients and controls. CloudPred achieves an AUROC of 0.98 while identifying a specific subpopulation of CD4 T cells whose presence is highly indicative of lupus. CloudPred is a powerful new framework to predict clinical phenotypes from scRNA-seq data and to identify relevant cells.
[ { "created": "Wed, 13 Oct 2021 22:41:30 GMT", "version": "v1" } ]
2021-10-15
[ [ "He", "Bryan", "" ], [ "Thomson", "Matthew", "" ], [ "Subramaniam", "Meena", "" ], [ "Perez", "Richard", "" ], [ "Ye", "Chun Jimmie", "" ], [ "Zou", "James", "" ] ]
Single-cell RNA sequencing (scRNA-seq) has the potential to provide powerful, high-resolution signatures to inform disease prognosis and precision medicine. This paper takes an important first step towards this goal by developing an interpretable machine learning algorithm, CloudPred, to predict individuals' disease phenotypes from their scRNA-seq data. Predicting phenotype from scRNA-seq is challenging for standard machine learning methods -- the number of cells measured can vary by orders of magnitude across individuals and the cell populations are also highly heterogeneous. Typical analysis creates pseudo-bulk samples which are biased toward prior annotations and also lose the single cell resolution. CloudPred addresses these challenges via a novel end-to-end differentiable learning algorithm which is coupled with a biologically informed mixture of cell types model. CloudPred automatically infers the cell subpopulations that are salient for the phenotype without prior annotations. We developed a systematic simulation platform to evaluate the performance of CloudPred and several alternative methods we propose, and found that CloudPred outperforms the alternative methods across several settings. We further validated CloudPred on a real scRNA-seq dataset of 142 lupus patients and controls. CloudPred achieves an AUROC of 0.98 while identifying a specific subpopulation of CD4 T cells whose presence is highly indicative of lupus. CloudPred is a powerful new framework to predict clinical phenotypes from scRNA-seq data and to identify relevant cells.
2104.09182
Elisabeth Rens
Elisabeth G. Rens and Leah Edelstein-Keshet
Cellular tango: How extracellular matrix adhesion choreographs Rac-Rho signaling and cell movement
null
null
10.1088/1478-3975/ac2888
null
q-bio.CB math.DS
http://creativecommons.org/licenses/by/4.0/
The small GTPases Rac and Rho are known to regulate eukaryotic cell shape, promoting front protrusion (Rac) or rear retraction (Rho) of the cell edge. Such cell deformation changes the contact and adhesion of the cell to the extracellular matrix (ECM), while ECM signaling through integrin receptors also affects GTPase activity. We develop and investigate a model for this three-way feedback loop in 1D and 2D spatial domains, as well as in a fully deforming 2D cell shape. The model consists of reaction-diffusion equations solved numerically with open-source software, Morpheus, and with custom-built cellular Potts model simulations. We find a variety of patterns and cell behaviors, including persistent polarity, flipped front-back cell polarity oscillations, and random protrusion-retraction. We show that the observed spatial patterns depend on the cell shape, and vice versa. The cell stiffness and biophysical properties also affect patterning and overall cell migration phenotypes.
[ { "created": "Mon, 19 Apr 2021 10:18:13 GMT", "version": "v1" } ]
2021-12-15
[ [ "Rens", "Elisabeth G.", "" ], [ "Edelstein-Keshet", "Leah", "" ] ]
The small GTPases Rac and Rho are known to regulate eukaryotic cell shape, promoting front protrusion (Rac) or rear retraction (Rho) of the cell edge. Such cell deformation changes the contact and adhesion of the cell to the extracellular matrix (ECM), while ECM signaling through integrin receptors also affects GTPase activity. We develop and investigate a model for this three-way feedback loop in 1D and 2D spatial domains, as well as in a fully deforming 2D cell shape. The model consists of reaction-diffusion equations solved numerically with open-source software, Morpheus, and with custom-built cellular Potts model simulations. We find a variety of patterns and cell behaviors, including persistent polarity, flipped front-back cell polarity oscillations, and random protrusion-retraction. We show that the observed spatial patterns depend on the cell shape, and vice versa. The cell stiffness and biophysical properties also affect patterning and overall cell migration phenotypes.
2302.01019
Serge Kernbach
Serge Kernbach, Olga Kernbach, Andreas Kernbach
Biophysical aspects of neurocognitive modeling with long-term sustained temperature variations
null
null
null
null
q-bio.NC physics.bio-ph physics.med-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Long-term focused attention with visualization and breathing exercises is at the core of various Eastern traditions. Neurocognitive and psychosomatic phenomena demonstrated during such exercises were instrumentally explored with EEG and other sensors. Neurocognitive modeling in the form of meditative visualization produced persistent temperature effects in the body long after the exercise finished; this raises the question about their psychosomatic or biophysical origin. The work explores this question by comparing experiments with focusing attention inside and outside the body. EEG, temperature, heart and breathing sensors monitor internal body conditions, while high-resolution differential calorimetric sensors are used to detect thermal effects outside the body. Experiments with 159 attempts (2427 operator-sensor sessions) were carried out over five months; control measurements were run in the same conditions in parallel to the experimental series. Increases of body temperature up to the moderate fever zone of 38.5 C and intentional control of up and down trends of core temperature by 1.6 C are demonstrated. Persistent temperature variations last >60 min. Experiments also demonstrated induced thermal fluctuations at the 10^-3 C level in external calorimetric systems with 15 ml of water for 60-90 min. Repeatability of these attempts is over 90%; statistical Chi-square and Mann-Whitney tests reject the null hypothesis that the outcomes are random. Thus, the obtained data confirm the persistent thermal effects reported in previous publications and indicate their biophysical dimension. To explain these results we refer to a new model in neuroscience that involves spin phenomena in biochemical and physical systems. These experiments demonstrate complex biophysical mechanisms of altered states of consciousness; their function in the body's neurohumoral regulation and non-classical brain functions is discussed.
[ { "created": "Thu, 2 Feb 2023 11:14:43 GMT", "version": "v1" } ]
2023-02-03
[ [ "Kernbach", "Serge", "" ], [ "Kernbach", "Olga", "" ], [ "Kernbach", "Andreas", "" ] ]
Long-term focused attention with visualization and breathing exercises is at the core of various Eastern traditions. Neurocognitive and psychosomatic phenomena demonstrated during such exercises were instrumentally explored with EEG and other sensors. Neurocognitive modeling in the form of meditative visualization produced persistent temperature effects in the body long after the exercise finished; this raises the question about their psychosomatic or biophysical origin. The work explores this question by comparing experiments with focusing attention inside and outside the body. EEG, temperature, heart and breathing sensors monitor internal body conditions, while high-resolution differential calorimetric sensors are used to detect thermal effects outside the body. Experiments with 159 attempts (2427 operator-sensor sessions) were carried out over five months; control measurements were run in the same conditions in parallel to the experimental series. Increases of body temperature up to the moderate fever zone of 38.5 C and intentional control of up and down trends of core temperature by 1.6 C are demonstrated. Persistent temperature variations last >60 min. Experiments also demonstrated induced thermal fluctuations at the 10^-3 C level in external calorimetric systems with 15 ml of water for 60-90 min. Repeatability of these attempts is over 90%; statistical Chi-square and Mann-Whitney tests reject the null hypothesis that the outcomes are random. Thus, the obtained data confirm the persistent thermal effects reported in previous publications and indicate their biophysical dimension. To explain these results we refer to a new model in neuroscience that involves spin phenomena in biochemical and physical systems. These experiments demonstrate complex biophysical mechanisms of altered states of consciousness; their function in the body's neurohumoral regulation and non-classical brain functions is discussed.
1407.4824
Joseph Crawford
Joseph Crawford, Yihan Sun, Tijana Milenkovi\'c
Fair Evaluation of Global Network Aligners
19 pages. 10 figures. Presented at the 2014 ISMB Conference, July 13-15, Boston, MA
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological network alignment identifies topologically and functionally conserved regions between networks of different species. It encompasses two algorithmic steps: node cost function (NCF), which measures similarities between nodes in different networks, and alignment strategy (AS), which uses these similarities to rapidly identify high-scoring alignments. Different methods use both different NCFs and different ASs. Thus, it is unclear whether the superiority of a method comes from its NCF, its AS, or both. We already showed on MI-GRAAL and IsoRankN that combining NCF of one method and AS of another method can lead to a new superior method. Here, we evaluate MI-GRAAL against newer GHOST to potentially further improve alignment quality. Also, we approach several important questions that have not been asked systematically thus far. First, we ask how much of the node similarity information in NCF should come from sequence data compared to topology data. Existing methods determine this more or less arbitrarily, which could affect the resulting alignment(s). Second, when topology is used in NCF, we ask how large the size of the neighborhoods of the compared nodes should be. Existing methods assume that larger neighborhood sizes are better. We find that MI-GRAAL's NCF is superior to GHOST's NCF, while the performance of the methods' ASs is data-dependent. Thus, the combination of MI-GRAAL's NCF and GHOST's AS could be a new superior method for certain data. Also, the amount of sequence information used within NCF does not affect alignment quality, while the inclusion of topological information is crucial. Finally, larger neighborhood sizes are preferred, but often, it is the second largest size that is superior, and using this size would decrease computational complexity. Together, our results give several general recommendations for a fair evaluation of network alignment methods.
[ { "created": "Thu, 17 Jul 2014 20:23:00 GMT", "version": "v1" } ]
2014-07-21
[ [ "Crawford", "Joseph", "" ], [ "Sun", "Yihan", "" ], [ "Milenković", "Tijana", "" ] ]
Biological network alignment identifies topologically and functionally conserved regions between networks of different species. It encompasses two algorithmic steps: node cost function (NCF), which measures similarities between nodes in different networks, and alignment strategy (AS), which uses these similarities to rapidly identify high-scoring alignments. Different methods use both different NCFs and different ASs. Thus, it is unclear whether the superiority of a method comes from its NCF, its AS, or both. We already showed on MI-GRAAL and IsoRankN that combining NCF of one method and AS of another method can lead to a new superior method. Here, we evaluate MI-GRAAL against newer GHOST to potentially further improve alignment quality. Also, we approach several important questions that have not been asked systematically thus far. First, we ask how much of the node similarity information in NCF should come from sequence data compared to topology data. Existing methods determine this more or less arbitrarily, which could affect the resulting alignment(s). Second, when topology is used in NCF, we ask how large the size of the neighborhoods of the compared nodes should be. Existing methods assume that larger neighborhood sizes are better. We find that MI-GRAAL's NCF is superior to GHOST's NCF, while the performance of the methods' ASs is data-dependent. Thus, the combination of MI-GRAAL's NCF and GHOST's AS could be a new superior method for certain data. Also, the amount of sequence information used within NCF does not affect alignment quality, while the inclusion of topological information is crucial. Finally, larger neighborhood sizes are preferred, but often, it is the second largest size that is superior, and using this size would decrease computational complexity. Together, our results give several general recommendations for a fair evaluation of network alignment methods.
1406.3762
Amir Toor
Max Jameson-Lee, Vishal Koparde, Phil Griffith, Allison F. Scalora, Juliana K. Sampson, Haniya Khalid, Nihar U. Sheth, Michael Batalo, Myrna G. Serrano, Catherine H. Roberts, Michael L. Hess, Gregory A. Buck, Michael C. Neale, Masoud H. Manjili, Amir A. Toor
In Silico Derivation of HLA-Specific Alloreactivity Potential from Whole Exome Sequencing of Stem Cell Transplant Donors and Recipients: Understanding the Quantitative Immuno-biology of Allogeneic Transplantation
Abstract: 235, Words: 6422, Figures: 7, Tables: 3, Supplementary figures: 2, Supplementary tables: 2
Frontiers in Immunology 2014. 5:529
10.3389/fimmu.2014.00529
null
q-bio.QM q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Donor T cell mediated graft vs. host effects may result from the aggregate alloreactivity to minor histocompatibility antigens (mHA) presented by the HLA in each donor-recipient pair (DRP) undergoing stem cell transplantation (SCT). Whole exome sequencing has demonstrated extensive nucleotide sequence variation in HLA-matched DRP. Non-synonymous single nucleotide polymorphisms (nsSNPs) in the GVH direction (polymorphisms present in recipient and absent in donor) were identified in 4 HLA-matched related and 5 unrelated DRP. The nucleotide sequence flanking each SNP was obtained utilizing the ANNOVAR software package. All possible nonameric peptides encoded by the non-synonymous SNPs were then interrogated in silico for their likelihood to be presented by the HLA class I molecules in individual DRP, using the Immune-Epitope Database (IEDB) SMM algorithm. The IEDB-SMM algorithm predicted a median 18,396 peptides/DRP which bound HLA with an IC50 of <500nM, and 2254 peptides/DRP with an IC50 of <50nM. Unrelated donors generally had higher numbers of peptides presented by the HLA. A similarly large library of presented peptides was identified when the data was interrogated using the NetMHCpan algorithm. These peptides were uniformly distributed in the various organ systems. The bioinformatic algorithm presented here demonstrates that there may be a high level of minor histocompatibility antigen variation in HLA-matched individuals, constituting an HLA-specific alloreactivity potential. These data provide a possible explanation for how relatively minor adjustments in GVHD prophylaxis yield relatively similar outcomes in HLA matched and mismatched SCT recipients.
[ { "created": "Sat, 14 Jun 2014 18:29:58 GMT", "version": "v1" } ]
2014-12-16
[ [ "Jameson-Lee", "Max", "" ], [ "Koparde", "Vishal", "" ], [ "Griffith", "Phil", "" ], [ "Scalora", "Allison F.", "" ], [ "Sampson", "Juliana K.", "" ], [ "Khalid", "Haniya", "" ], [ "Sheth", "Nihar U.", "" ], [ "Batalo", "Michael", "" ], [ "Serrano", "Myrna G.", "" ], [ "Roberts", "Catherine H.", "" ], [ "Hess", "Michael L.", "" ], [ "Buck", "Gregory A.", "" ], [ "Neale", "Michael C.", "" ], [ "Manjili", "Masoud H.", "" ], [ "Toor", "Amir A.", "" ] ]
Donor T cell mediated graft vs. host effects may result from the aggregate alloreactivity to minor histocompatibility antigens (mHA) presented by the HLA in each donor-recipient pair (DRP) undergoing stem cell transplantation (SCT). Whole exome sequencing has demonstrated extensive nucleotide sequence variation in HLA-matched DRP. Non-synonymous single nucleotide polymorphisms (nsSNPs) in the GVH direction (polymorphisms present in recipient and absent in donor) were identified in 4 HLA-matched related and 5 unrelated DRP. The nucleotide sequence flanking each SNP was obtained utilizing the ANNOVAR software package. All possible nonameric peptides encoded by the non-synonymous SNPs were then interrogated in silico for their likelihood to be presented by the HLA class I molecules in individual DRP, using the Immune-Epitope Database (IEDB) SMM algorithm. The IEDB-SMM algorithm predicted a median 18,396 peptides/DRP which bound HLA with an IC50 of <500nM, and 2254 peptides/DRP with an IC50 of <50nM. Unrelated donors generally had higher numbers of peptides presented by the HLA. A similarly large library of presented peptides was identified when the data was interrogated using the NetMHCpan algorithm. These peptides were uniformly distributed in the various organ systems. The bioinformatic algorithm presented here demonstrates that there may be a high level of minor histocompatibility antigen variation in HLA-matched individuals, constituting an HLA-specific alloreactivity potential. These data provide a possible explanation for how relatively minor adjustments in GVHD prophylaxis yield relatively similar outcomes in HLA matched and mismatched SCT recipients.
q-bio/0608001
Gyorgy Korniss
L. O'Malley, B. Kozma, G. Korniss, Z. Racz, T. Caraco
Fisher Waves and Front Roughening in a Two-Species Invasion Model with Preemptive Competition
8 pages, 5 figures; Papers on related work can be found at http://www.rpi.edu/~korniss/Research
Phys. Rev. E 74, 041116 (2006) .
10.1103/PhysRevE.74.041116
null
q-bio.PE cond-mat.stat-mech
null
We study front propagation when an invading species competes with a resident; we assume nearest-neighbor preemptive competition for resources in an individual-based, two-dimensional lattice model. The asymptotic front velocity exhibits power-law dependence on the difference between the two species' clonal propagation rates (key ecological parameters). The mean-field approximation behaves similarly, but the power law's exponent slightly differs from the individual-based model's result. We also study roughening of the front, using the framework of non-equilibrium interface growth. Our analysis indicates that initially flat, linear invading fronts exhibit Kardar-Parisi-Zhang (KPZ) roughening in one transverse dimension. Further, this finding implies, and is also confirmed by simulations, that the temporal correction to the asymptotic front velocity is of ${\cal O}(t^{-2/3})$.
[ { "created": "Tue, 1 Aug 2006 14:41:42 GMT", "version": "v1" } ]
2007-05-23
[ [ "O'Malley", "L.", "" ], [ "Kozma", "B.", "" ], [ "Korniss", "G.", "" ], [ "Racz", "Z.", "" ], [ "Caraco", "T.", "" ] ]
We study front propagation when an invading species competes with a resident; we assume nearest-neighbor preemptive competition for resources in an individual-based, two-dimensional lattice model. The asymptotic front velocity exhibits power-law dependence on the difference between the two species' clonal propagation rates (key ecological parameters). The mean-field approximation behaves similarly, but the power law's exponent slightly differs from the individual-based model's result. We also study roughening of the front, using the framework of non-equilibrium interface growth. Our analysis indicates that initially flat, linear invading fronts exhibit Kardar-Parisi-Zhang (KPZ) roughening in one transverse dimension. Further, this finding implies, and is also confirmed by simulations, that the temporal correction to the asymptotic front velocity is of ${\cal O}(t^{-2/3})$.
2011.06515
Adriano de Albuquerque Batista
Adriano A. Batista and Severino Hor\'acio da Silva
An epidemiological compartmental model with automated parameter estimation and forecasting of the spread of COVID-19 with analysis of data from Germany and Brazil
37 pages, 9 figures
Front. Appl. Math. Stat., 13 April 2022
10.3389/fams.2022.645614
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we adapt the epidemiological SIR model to study the evolution of the dissemination of COVID-19 in Germany and Brazil (nationally, in the State of Paraiba, and in the City of Campina Grande). We prove the well-posedness and the continuous dependence of the model dynamics on its parameters. We also propose a simple probabilistic method for the evolution of the active cases that is instrumental for the automatic estimation of parameters of the epidemiological model. We obtained statistical estimates of the active cases based on the probabilistic method and on the confirmed cases data. From this estimated time series we obtained a time-dependent contagion rate, which reflects a lower or higher adherence to social distancing by the involved populations. By also analysing the data on daily deaths, we obtained the daily lethality and recovery rates. We then integrate the equations of motion of the model using these time-dependent parameters. We validate our epidemiological model by fitting the official data of confirmed, recovered, death, and active cases due to the pandemic with the theoretical predictions. We obtained very good fits of the data with this method. The automated procedure developed here could be used for basically any population with a minimum of extra work. Finally, we also propose and validate a forecasting method based on Markov chains for the evolution of the epidemiological data for up to two weeks.
[ { "created": "Wed, 11 Nov 2020 17:54:01 GMT", "version": "v1" } ]
2022-04-20
[ [ "Batista", "Adriano A.", "" ], [ "da Silva", "Severino Horácio", "" ] ]
In this work, we adapt the epidemiological SIR model to study the evolution of the dissemination of COVID-19 in Germany and Brazil (nationally, in the State of Paraiba, and in the City of Campina Grande). We prove the well-posedness and the continuous dependence of the model dynamics on its parameters. We also propose a simple probabilistic method for the evolution of the active cases that is instrumental for the automatic estimation of parameters of the epidemiological model. We obtained statistical estimates of the active cases based on the probabilistic method and on the confirmed cases data. From this estimated time series we obtained a time-dependent contagion rate, which reflects a lower or higher adherence to social distancing by the involved populations. By also analysing the data on daily deaths, we obtained the daily lethality and recovery rates. We then integrate the equations of motion of the model using these time-dependent parameters. We validate our epidemiological model by fitting the official data of confirmed, recovered, death, and active cases due to the pandemic with the theoretical predictions. We obtained very good fits of the data with this method. The automated procedure developed here could be used for basically any population with a minimum of extra work. Finally, we also propose and validate a forecasting method based on Markov chains for the evolution of the epidemiological data for up to two weeks.
1308.6074
Sohan Seth
Sohan Seth, Niko V\"alim\"aki, Samuel Kaski, Antti Honkela
Exploration and retrieval of whole-metagenome sequencing samples
16 pages; additional results
null
null
null
q-bio.GN cs.CE cs.IR
http://creativecommons.org/licenses/by/3.0/
In recent years, the field of whole metagenome shotgun sequencing has witnessed significant growth due to high-throughput sequencing technologies that allow genomic samples to be sequenced more cheaply, faster, and with better coverage than before. This technical advancement has initiated the trend of sequencing multiple samples in different conditions or environments to explore the similarities and dissimilarities of the microbial communities. Examples include the Human Microbiome Project and various studies of the human intestinal tract. With the availability of ever larger databases of such measurements, finding samples similar to a given query sample is becoming a central operation. In this paper, we develop a content-based exploration and retrieval method for whole metagenome sequencing samples. We apply a distributed string mining framework to efficiently extract all informative sequence $k$-mers from a pool of metagenomic samples and use them to measure the dissimilarity between two samples. We evaluate the performance of the proposed approach on two human gut metagenome data sets as well as Human Microbiome Project metagenomic samples. We observe significant enrichment for diseased gut samples in results of queries with another diseased sample and very high accuracy in discriminating between different body sites even though the method is unsupervised. A software implementation of the DSM framework is available at https://github.com/HIITMetagenomics/dsm-framework
[ { "created": "Wed, 28 Aug 2013 07:28:35 GMT", "version": "v1" }, { "created": "Thu, 3 Apr 2014 09:57:56 GMT", "version": "v2" } ]
2014-04-04
[ [ "Seth", "Sohan", "" ], [ "Välimäki", "Niko", "" ], [ "Kaski", "Samuel", "" ], [ "Honkela", "Antti", "" ] ]
Over recent years, the field of whole-metagenome shotgun sequencing has witnessed significant growth due to high-throughput sequencing technologies that allow genomic samples to be sequenced more cheaply, quickly, and with better coverage than before. This technical advancement has initiated the trend of sequencing multiple samples in different conditions or environments to explore the similarities and dissimilarities of the microbial communities. Examples include the human microbiome project and various studies of the human intestinal tract. With the availability of ever larger databases of such measurements, finding samples similar to a given query sample is becoming a central operation. In this paper, we develop a content-based exploration and retrieval method for whole metagenome sequencing samples. We apply a distributed string mining framework to efficiently extract all informative sequence $k$-mers from a pool of metagenomic samples and use them to measure the dissimilarity between two samples. We evaluate the performance of the proposed approach on two human gut metagenome data sets as well as human microbiome project metagenomic samples. We observe significant enrichment for diseased gut samples in results of queries with another diseased sample and very high accuracy in discriminating between different body sites even though the method is unsupervised. A software implementation of the DSM framework is available at https://github.com/HIITMetagenomics/dsm-framework
1312.2827
Christian Althaus Ph.D.
Nicola Low, Janneke C.M. Heijne, Sereina A. Herzog, Christian L. Althaus
Re-infection by untreated partners of people treated for Chlamydia trachomatis and Neisseria gonorrhoeae: mathematical modelling study
Short report, 1 figure
Sex Transm Infect. 2014 May;90(3):254-6
10.1136/sextrans-2013-051279
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objectives: Re-infection after treatment for Chlamydia trachomatis or Neisseria gonorrhoeae reduces the effect of control interventions. We explored the impact of delays in partner treatment on the expected probability of re-infection of index cases using a mathematical model. Methods: We used previously reported parameter distributions to calculate the probability that index cases would be re-infected by their untreated partners. We then assumed different delays between index case and partner treatment to calculate the probabilities of re-infection. Results: In the absence of partner treatment, the medians of the expected re-infection probabilities are 19.4% (interquartile range (IQR) 9.2-31.6%) for chlamydia and 12.5% (IQR 5.6-22.2%) for gonorrhoea. If all current partners receive treatment three days after the index case, the expected re-infection probabilities are 4.2% (IQR 2.1-6.9%) for chlamydia and 5.5% (IQR 2.6-9.5%) for gonorrhoea. Conclusions: Quicker partner referral and treatment can substantially reduce re-infection rates for chlamydia and gonorrhoea. The formula we used to calculate re-infection rates can be used to inform the design of randomised controlled trials of novel partner notification technologies like accelerated partner therapy.
[ { "created": "Tue, 10 Dec 2013 15:10:13 GMT", "version": "v1" } ]
2014-08-20
[ [ "Low", "Nicola", "" ], [ "Heijne", "Janneke C. M.", "" ], [ "Herzog", "Sereina A.", "" ], [ "Althaus", "Christian L.", "" ] ]
Objectives: Re-infection after treatment for Chlamydia trachomatis or Neisseria gonorrhoeae reduces the effect of control interventions. We explored the impact of delays in partner treatment on the expected probability of re-infection of index cases using a mathematical model. Methods: We used previously reported parameter distributions to calculate the probability that index cases would be re-infected by their untreated partners. We then assumed different delays between index case and partner treatment to calculate the probabilities of re-infection. Results: In the absence of partner treatment, the medians of the expected re-infection probabilities are 19.4% (interquartile range (IQR) 9.2-31.6%) for chlamydia and 12.5% (IQR 5.6-22.2%) for gonorrhoea. If all current partners receive treatment three days after the index case, the expected re-infection probabilities are 4.2% (IQR 2.1-6.9%) for chlamydia and 5.5% (IQR 2.6-9.5%) for gonorrhoea. Conclusions: Quicker partner referral and treatment can substantially reduce re-infection rates for chlamydia and gonorrhoea. The formula we used to calculate re-infection rates can be used to inform the design of randomised controlled trials of novel partner notification technologies like accelerated partner therapy.
q-bio/0501021
Michael Stiber
Michael Stiber
Spike timing precision and neural error correction: local behavior
23 pages, 27 figures, to be published in Neural Computation
Neural Computation, v. 17, n. 7, 1577-1601, 2005
null
null
q-bio.NC cs.NE math.DS
null
The effects of spike timing precision and dynamical behavior on error correction in spiking neurons were investigated. Stationary discharges -- phase locked, quasiperiodic, or chaotic -- were induced in a simulated neuron by presenting pacemaker presynaptic spike trains across a model of a prototypical inhibitory synapse. Reduced timing precision was modeled by jittering presynaptic spike times. Aftereffects of errors -- in this communication, missed presynaptic spikes -- were determined by comparing postsynaptic spike times between simulations identical except for the presence or absence of errors. Results show that the effects of an error vary greatly depending on the ongoing dynamical behavior. In the case of phase lockings, a high degree of presynaptic spike timing precision can provide significantly faster error recovery. For non-locked behaviors, isolated missed spikes can have little or no discernible aftereffects (or even serve to paradoxically reduce uncertainty in postsynaptic spike timing), regardless of presynaptic imprecision. This suggests two possible categories of error correction: high-precision locking with rapid recovery and low-precision non-locked with error immunity.
[ { "created": "Fri, 14 Jan 2005 13:25:42 GMT", "version": "v1" } ]
2007-05-23
[ [ "Stiber", "Michael", "" ] ]
The effects of spike timing precision and dynamical behavior on error correction in spiking neurons were investigated. Stationary discharges -- phase locked, quasiperiodic, or chaotic -- were induced in a simulated neuron by presenting pacemaker presynaptic spike trains across a model of a prototypical inhibitory synapse. Reduced timing precision was modeled by jittering presynaptic spike times. Aftereffects of errors -- in this communication, missed presynaptic spikes -- were determined by comparing postsynaptic spike times between simulations identical except for the presence or absence of errors. Results show that the effects of an error vary greatly depending on the ongoing dynamical behavior. In the case of phase lockings, a high degree of presynaptic spike timing precision can provide significantly faster error recovery. For non-locked behaviors, isolated missed spikes can have little or no discernible aftereffects (or even serve to paradoxically reduce uncertainty in postsynaptic spike timing), regardless of presynaptic imprecision. This suggests two possible categories of error correction: high-precision locking with rapid recovery and low-precision non-locked with error immunity.
1707.08284
Johannes M\"uller
Lukas Heinrich, Johannes M\"uller, Aur\'elien Tellier, Daniel Zivkovi\'c
Effects of population- and seed bank noise on neutral evolution and efficacy of natural selection
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Population genetics models typically consider a fixed population size and a unique selection coefficient. However, population dynamics inherently generate noise in numbers of individuals, and selection acts on various components of the individuals' fitness. In plant species with seed banks, the sizes of both the above- and below-ground compartments present noise depending on seed production and the state of the seed bank. We investigate whether this noise has consequences on 1)~the rate of genetic drift, and 2)~the efficacy of selection. We consider four variants of two-allele Moran-type models defined by combinations of presence and absence of noise in above-ground and seed bank compartments. Time scale analysis and dimension reduction methods allow us to reduce the corresponding Fokker-Planck equation to a one-dimensional diffusion approximation of a Moran model. We first show that while the above-ground noise classically affects the rate of genetic drift, below-ground noise reduces the diversity storage effect of the seed bank. Second, we consider that selection can act on four different components of the plant fitness: plant or seed death rate, seed production or seed germination. Our striking result is that the efficacy of selection for seed death rate or germination rate is reduced by seed bank noise, whereas selection occurring on plant death rate or seed production is not affected. We derive the expected site-frequency spectrum reflecting this heterogeneity in selection efficacy between genes underpinning different plant fitness components. Our results highlight the importance of considering the effect of ecological noise when predicting the impact of seed banks on neutral and selective evolution.
[ { "created": "Wed, 26 Jul 2017 04:16:12 GMT", "version": "v1" }, { "created": "Mon, 11 Dec 2017 12:46:26 GMT", "version": "v2" } ]
2017-12-12
[ [ "Heinrich", "Lukas", "" ], [ "Müller", "Johannes", "" ], [ "Tellier", "Aurélien", "" ], [ "Zivković", "Daniel", "" ] ]
Population genetics models typically consider a fixed population size and a unique selection coefficient. However, population dynamics inherently generate noise in numbers of individuals, and selection acts on various components of the individuals' fitness. In plant species with seed banks, the sizes of both the above- and below-ground compartments present noise depending on seed production and the state of the seed bank. We investigate whether this noise has consequences on 1)~the rate of genetic drift, and 2)~the efficacy of selection. We consider four variants of two-allele Moran-type models defined by combinations of presence and absence of noise in above-ground and seed bank compartments. Time scale analysis and dimension reduction methods allow us to reduce the corresponding Fokker-Planck equation to a one-dimensional diffusion approximation of a Moran model. We first show that while the above-ground noise classically affects the rate of genetic drift, below-ground noise reduces the diversity storage effect of the seed bank. Second, we consider that selection can act on four different components of the plant fitness: plant or seed death rate, seed production or seed germination. Our striking result is that the efficacy of selection for seed death rate or germination rate is reduced by seed bank noise, whereas selection occurring on plant death rate or seed production is not affected. We derive the expected site-frequency spectrum reflecting this heterogeneity in selection efficacy between genes underpinning different plant fitness components. Our results highlight the importance of considering the effect of ecological noise when predicting the impact of seed banks on neutral and selective evolution.
1911.06107
Joe Kileel
Nathan Zelesko, Amit Moscovich, Joe Kileel, Amit Singer
Earthmover-based manifold learning for analyzing molecular conformation spaces
5 pages, 4 figures, 1 table
IEEE 17th International Symposium on Biomedical Imaging (ISBI) 2020
10.1109/ISBI45749.2020.9098723
null
q-bio.BM cs.LG eess.IV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel approach for manifold learning that combines the Earthmover's distance (EMD) with the diffusion maps method for dimensionality reduction. We demonstrate the potential benefits of this approach for learning shape spaces of proteins and other flexible macromolecules using a simulated dataset of 3-D density maps that mimic the non-uniform rotary motion of ATP synthase. Our results show that EMD-based diffusion maps require far fewer samples to recover the intrinsic geometry than the standard diffusion maps algorithm that is based on the Euclidean distance. To reduce the computational burden of calculating the EMD for all volume pairs, we employ a wavelet-based approximation to the EMD which reduces the computation of the pairwise EMDs to a computation of pairwise weighted-$\ell_1$ distances between wavelet coefficient vectors.
[ { "created": "Wed, 16 Oct 2019 01:38:52 GMT", "version": "v1" } ]
2022-05-24
[ [ "Zelesko", "Nathan", "" ], [ "Moscovich", "Amit", "" ], [ "Kileel", "Joe", "" ], [ "Singer", "Amit", "" ] ]
In this paper, we propose a novel approach for manifold learning that combines the Earthmover's distance (EMD) with the diffusion maps method for dimensionality reduction. We demonstrate the potential benefits of this approach for learning shape spaces of proteins and other flexible macromolecules using a simulated dataset of 3-D density maps that mimic the non-uniform rotary motion of ATP synthase. Our results show that EMD-based diffusion maps require far fewer samples to recover the intrinsic geometry than the standard diffusion maps algorithm that is based on the Euclidean distance. To reduce the computational burden of calculating the EMD for all volume pairs, we employ a wavelet-based approximation to the EMD which reduces the computation of the pairwise EMDs to a computation of pairwise weighted-$\ell_1$ distances between wavelet coefficient vectors.
1908.10166
Casimiro Adays Curbelo Monta\~nez
Casimiro Aday Curbelo Monta\~nez, Paul Fergus, Carl Chalmers, Nurul Ahamed Hassain Malim, Basma Abdulaimma, Denis Reilly, and Francesco Falciani
SAERMA: Stacked Autoencoder Rule Mining Algorithm for the Interpretation of Epistatic Interactions in GWAS for Extreme Obesity
12 pages, 6 figures, 12 tables, 9 equations, journal
null
null
null
q-bio.GN cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the most important challenges in the analysis of high-throughput genetic data is the development of efficient computational methods to identify statistically significant Single Nucleotide Polymorphisms (SNPs). Genome-wide association studies (GWAS) use single-locus analysis where each SNP is independently tested for association with phenotypes. The limitation with this approach, however, is its inability to explain genetic variation in complex diseases. Alternative approaches are required to model the intricate relationships between SNPs. Our proposed approach extends GWAS by combining deep learning stacked autoencoders (SAEs) and association rule mining (ARM) to identify epistatic interactions between SNPs. Following traditional GWAS quality control and association analysis, the most significant SNPs are selected and used in the subsequent analysis to investigate epistasis. SAERMA controls the classification results produced in the final fully connected multi-layer feedforward artificial neural network (MLP) by manipulating the interestingness measures, support and confidence, in the rule generation process. The best classification results were achieved with 204 SNPs compressed to 100 units (77% AUC, 77% SE, 68% SP, 53% Gini, logloss=0.58, and MSE=0.20), although it was possible to achieve 73% AUC (77% SE, 63% SP, 45% Gini, logloss=0.62, and MSE=0.21) with 50 hidden units - both supported by close model interpretation.
[ { "created": "Tue, 27 Aug 2019 12:49:05 GMT", "version": "v1" } ]
2019-08-28
[ [ "Montañez", "Casimiro Aday Curbelo", "" ], [ "Fergus", "Paul", "" ], [ "Chalmers", "Carl", "" ], [ "Malim", "Nurul Ahamed Hassain", "" ], [ "Abdulaimma", "Basma", "" ], [ "Reilly", "Denis", "" ], [ "Falciani", "Francesco", "" ] ]
One of the most important challenges in the analysis of high-throughput genetic data is the development of efficient computational methods to identify statistically significant Single Nucleotide Polymorphisms (SNPs). Genome-wide association studies (GWAS) use single-locus analysis where each SNP is independently tested for association with phenotypes. The limitation with this approach, however, is its inability to explain genetic variation in complex diseases. Alternative approaches are required to model the intricate relationships between SNPs. Our proposed approach extends GWAS by combining deep learning stacked autoencoders (SAEs) and association rule mining (ARM) to identify epistatic interactions between SNPs. Following traditional GWAS quality control and association analysis, the most significant SNPs are selected and used in the subsequent analysis to investigate epistasis. SAERMA controls the classification results produced in the final fully connected multi-layer feedforward artificial neural network (MLP) by manipulating the interestingness measures, support and confidence, in the rule generation process. The best classification results were achieved with 204 SNPs compressed to 100 units (77% AUC, 77% SE, 68% SP, 53% Gini, logloss=0.58, and MSE=0.20), although it was possible to achieve 73% AUC (77% SE, 63% SP, 45% Gini, logloss=0.62, and MSE=0.21) with 50 hidden units - both supported by close model interpretation.
1802.01211
Casper Beentjes
Casper H. L. Beentjes and Ruth E. Baker
Quasi-Monte Carlo methods applied to tau-leaping in stochastic biological systems
null
Bull. Math. Biol. (2019) 81: 2931
10.1007/s11538-018-0442-2
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from typical slow $\mathcal{O}(N^{-1/2})$ convergence rates as a function of the number of sample paths $N$. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely $\tau$-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
[ { "created": "Sun, 4 Feb 2018 22:32:21 GMT", "version": "v1" }, { "created": "Tue, 1 May 2018 10:41:02 GMT", "version": "v2" } ]
2019-12-12
[ [ "Beentjes", "Casper H. L.", "" ], [ "Baker", "Ruth E.", "" ] ]
Quasi-Monte Carlo methods have proven to be effective extensions of traditional Monte Carlo methods in, amongst others, problems of quadrature and the sample path simulation of stochastic differential equations. By replacing the random number input stream in a simulation procedure by a low-discrepancy number input stream, variance reductions of several orders have been observed in financial applications. Analysis of stochastic effects in well-mixed chemical reaction networks often relies on sample path simulation using Monte Carlo methods, even though these methods suffer from typical slow $\mathcal{O}(N^{-1/2})$ convergence rates as a function of the number of sample paths $N$. This paper investigates the combination of (randomised) quasi-Monte Carlo methods with an efficient sample path simulation procedure, namely $\tau$-leaping. We show that this combination is often more effective than traditional Monte Carlo simulation in terms of the decay of statistical errors. The observed convergence rate behaviour is, however, non-trivial due to the discrete nature of the models of chemical reactions. We explain how this affects the performance of quasi-Monte Carlo methods by looking at a test problem in standard quadrature.
1610.01898
Christian Geier
Christian Geier, Alexander Rothkegel, Christian E. Elger, Klaus Lehnertz
Bursting and Synchrony in Networks of Model Neurons
null
R. Tetzlaff and C. E. Elger and K. Lehnertz (2013), Recent Advances in Predicting and Preventing Epileptic Seizures, page 108-116, Singapore, World Scientific
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bursting neurons are considered to be a potential cause of over-excitability and seizure susceptibility. The functional influence of these neurons in extended epileptic networks is still poorly understood. There is mounting evidence that the dynamics of neuronal networks is influenced not only by neuronal and synaptic properties but also by network topology. We investigate numerically the influence of different neuron dynamics on global synchrony in neuronal networks with complex connection topologies.
[ { "created": "Thu, 6 Oct 2016 14:52:22 GMT", "version": "v1" } ]
2016-10-07
[ [ "Geier", "Christian", "" ], [ "Rothkegel", "Alexander", "" ], [ "Elger", "Christian E.", "" ], [ "Lehnertz", "Klaus", "" ] ]
Bursting neurons are considered to be a potential cause of over-excitability and seizure susceptibility. The functional influence of these neurons in extended epileptic networks is still poorly understood. There is mounting evidence that the dynamics of neuronal networks is influenced not only by neuronal and synaptic properties but also by network topology. We investigate numerically the influence of different neuron dynamics on global synchrony in neuronal networks with complex connection topologies.
1810.04412
Ankit Kumar Shukla
Ashutosh Gupta, Somya Mani, and Ankit Shukla
Synthesis for Vesicle Traffic Systems
18 pages, 2 figures, 1 table
null
10.1007/978-3-319-99429-1_6
null
q-bio.SC cs.CE cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vesicle Traffic Systems (VTSs) are the material transport mechanisms among the compartments inside biological cells. The compartments are viewed as nodes labeled with the chemicals they contain, and the transport channels are similarly viewed as labeled edges between the nodes. Understanding VTSs is an ongoing area of research, and for many cells they are only partially known. For example, there may be undiscovered edges, nodes, or labels in the VTS of a cell. It has been speculated that there are properties that VTSs must satisfy, for example stability, i.e., every chemical that leaves a compartment comes back. Many synthesis questions arise in this scenario, where we want to complete a partially known VTS subject to a given property. In this paper, we present novel encodings of the above questions as QBF (quantified Boolean formula) satisfiability problems. We have implemented the encodings in a highly configurable tool and applied it to a couple of naturally occurring VTSs and several synthetic graphs. Our results demonstrate that our method can scale up to the graphs of interest.
[ { "created": "Wed, 10 Oct 2018 08:34:50 GMT", "version": "v1" } ]
2018-10-12
[ [ "Gupta", "Ashutosh", "" ], [ "Mani", "Somya", "" ], [ "Shukla", "Ankit", "" ] ]
Vesicle Traffic Systems (VTSs) are the material transport mechanisms among the compartments inside biological cells. The compartments are viewed as nodes labeled with the chemicals they contain, and the transport channels are similarly viewed as labeled edges between the nodes. Understanding VTSs is an ongoing area of research, and for many cells they are only partially known. For example, there may be undiscovered edges, nodes, or labels in the VTS of a cell. It has been speculated that there are properties that VTSs must satisfy, for example stability, i.e., every chemical that leaves a compartment comes back. Many synthesis questions arise in this scenario, where we want to complete a partially known VTS subject to a given property. In this paper, we present novel encodings of the above questions as QBF (quantified Boolean formula) satisfiability problems. We have implemented the encodings in a highly configurable tool and applied it to a couple of naturally occurring VTSs and several synthetic graphs. Our results demonstrate that our method can scale up to the graphs of interest.
2012.10295
Andrew Whitwham
Petr Danecek, James K. Bonfield, Jennifer Liddle, John Marshall, Valeriu Ohan, Martin O Pollard, Andrew Whitwham, Thomas Keane, Shane A. McCarthy, Robert M. Davies, Heng Li
Twelve years of SAMtools and BCFtools
null
null
null
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
Background: SAMtools and BCFtools are widely used programs for processing and analysing high-throughput sequencing data. Findings: The first version appeared online twelve years ago and has been maintained and further developed ever since, with many new features and improvements added over the years. The SAMtools and BCFtools packages represent a unique collection of tools that have been used in numerous other software projects and countless genomic pipelines. Conclusion: Both SAMtools and BCFtools are freely available on GitHub under the permissive MIT licence, free for both non-commercial and commercial use. Both packages have been installed over a million times via Bioconda. The source code and documentation are available from http://www.htslib.org.
[ { "created": "Fri, 18 Dec 2020 15:19:36 GMT", "version": "v1" }, { "created": "Tue, 2 Feb 2021 11:19:20 GMT", "version": "v2" } ]
2021-02-03
[ [ "Danecek", "Petr", "" ], [ "Bonfield", "James K.", "" ], [ "Liddle", "Jennifer", "" ], [ "Marshall", "John", "" ], [ "Ohan", "Valeriu", "" ], [ "Pollard", "Martin O", "" ], [ "Whitwham", "Andrew", "" ], [ "Keane", "Thomas", "" ], [ "McCarthy", "Shane A.", "" ], [ "Davies", "Robert M.", "" ], [ "Li", "Heng", "" ] ]
Background: SAMtools and BCFtools are widely used programs for processing and analysing high-throughput sequencing data. Findings: The first version appeared online twelve years ago and has been maintained and further developed ever since, with many new features and improvements added over the years. The SAMtools and BCFtools packages represent a unique collection of tools that have been used in numerous other software projects and countless genomic pipelines. Conclusion: Both SAMtools and BCFtools are freely available on GitHub under the permissive MIT licence, free for both non-commercial and commercial use. Both packages have been installed over a million times via Bioconda. The source code and documentation are available from http://www.htslib.org.
2001.10535
Jinsong Meng
Jinsong Meng
Bridging the Gap Between Consciousness and Matter: Recurrent Out-of-Body Projection of Visual Awareness Revealed by the Law of Non-Identity
20 pages, 9 figures, and 1 table. Comments: 21 pages, 9 figures, and 1 table: typos corrected, references added, a figure moved from the supplementary part to the main text, typesetting changed from double columns to one column. Comments: 28 pages, 10 figures, and 1 table: style changed to APA, title changed, typos corrected, references added, a figure added
Integrative Psychological and Behavioral Science,Volume 57 Issue 2, June 2023
10.1007/s12124-023-09775-y
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Consciousness is an explicit outcome of brain activity. However, the link between consciousness and the material world remains to be explored. We applied a new logic tool, the non-identity law, to the analysis of the visual dynamics related to the naturalistic observation of a night-shot still life. We show that visual awareness possesses a postponed, recurrent out-of-body projection pathway and that the out-of-body projection is superimposed onto the original, which is reciprocally verified by vision and touch. This suggests that the visual system instinctually not only represents the subjective image (brain-generated imagery) but also projects the image back onto the original or to a specific place according to the cues of the manipulated afferent messenger light signaling pathway. This finding provides a foundation for understanding the subjectivity and intentionality of consciousness from the perspective of visual awareness and the isomorphic relations between an unknowable original, private experience, and shareable expression. The result paves the way for scientific research on consciousness and facilitates the integration of humanities and natural science.
[ { "created": "Wed, 29 Jan 2020 18:20:37 GMT", "version": "v1" }, { "created": "Wed, 18 Nov 2020 18:55:10 GMT", "version": "v2" }, { "created": "Thu, 15 Jul 2021 14:35:15 GMT", "version": "v3" } ]
2023-05-25
[ [ "Meng", "Jinsong", "" ] ]
Consciousness is an explicit outcome of brain activity. However, the link between consciousness and the material world remains to be explored. We applied a new logic tool, the non-identity law, to the analysis of the visual dynamics related to the naturalistic observation of a night-shot still life. We show that visual awareness possesses a postponed, recurrent out-of-body projection pathway and that the out-of-body projection is superimposed onto the original, which is reciprocally verified by vision and touch. This suggests that the visual system instinctually not only represents the subjective image (brain-generated imagery) but also projects the image back onto the original or to a specific place according to the cues of the manipulated afferent messenger light signaling pathway. This finding provides a foundation for understanding the subjectivity and intentionality of consciousness from the perspective of visual awareness and the isomorphic relations between an unknowable original, private experience, and shareable expression. The result paves the way for scientific research on consciousness and facilitates the integration of humanities and natural science.
q-bio/0409010
Bernd Burghardt
Bernd Burghardt and Alexander K. Hartmann
Dependence of RNA secondary structure on the energy model
8 pages, 9 figures
null
10.1103/PhysRevE.71.021913
null
q-bio.QM
null
We analyze a microscopic RNA model which includes two widely used models as limiting cases; namely, it contains terms for bond as well as stacking energies. We numerically investigate possible changes in the qualitative and quantitative behaviour while going from one model to the other; in particular, we test whether a transition occurs when continuously moving from one model to the other. For this we calculate various thermodynamic quantities, both at zero temperature and at finite temperatures. All calculations can be done efficiently in polynomial time by a dynamic programming algorithm. We do not find a sign of a transition between the models, but the critical exponent $\nu$ of the correlation length, describing the phase transition in all models to an ordered low-temperature phase, seems to depend continuously on the model. Finally, we apply the epsilon-coupling method to study low-energy excitations. The exponent $\theta$ describing the energy scaling of the excitations seems not to depend much on the energy model.
[ { "created": "Tue, 7 Sep 2004 09:04:35 GMT", "version": "v1" } ]
2009-11-10
[ [ "Burghardt", "Bernd", "" ], [ "Hartmann", "Alexander K.", "" ] ]
We analyze a microscopic RNA model which includes two widely used models as limiting cases; namely, it contains terms for bond as well as stacking energies. We numerically investigate possible changes in the qualitative and quantitative behaviour while going from one model to the other; in particular, we test whether a transition occurs when continuously moving from one model to the other. For this we calculate various thermodynamic quantities, both at zero temperature and at finite temperatures. All calculations can be done efficiently in polynomial time by a dynamic programming algorithm. We do not find a sign of a transition between the models, but the critical exponent $\nu$ of the correlation length, describing the phase transition in all models to an ordered low-temperature phase, seems to depend continuously on the model. Finally, we apply the epsilon-coupling method to study low-energy excitations. The exponent $\theta$ describing the energy scaling of the excitations seems not to depend much on the energy model.
1410.1836
Stuart Kauffman
Stuart Kauffman
A Holistic, Non-algorithmic View of Cultural Evolution: Commentary on Review Article by Prof. Liane Gabora
3 pages
Physics of Life Reviews, 10(2), 154-155
10.1016/j.plrev.2013.05.005
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is surely some truth to the notion that culture evolves, but the Darwinian view of culture is trivial. Gabora does two things in this paper. First, she levels a reasoned and devastating attack on the adequacy of a Darwinian theory of cultural evolution, showing that cultural evolution violates virtually all prerequisites to be encompassed by Darwin's standard theory. Second, she advances the central concept that it is whole world views that evolve. A world view emerges when the capacity of memories to evoke one another surpasses a phase transition yielding a richly interconnected conceptual web, a world view. She proposes that culture evolves not through a Darwinian process such as meme theory, but through communal exchange of facets of world views. Each section of her argument is completely convincing.
[ { "created": "Tue, 7 Oct 2014 18:25:05 GMT", "version": "v1" } ]
2015-06-23
[ [ "Kauffman", "Stuart", "" ] ]
There is surely some truth to the notion that culture evolves, but the Darwinian view of culture is trivial. Gabora does two things in this paper. First, she levels a reasoned and devastating attack on the adequacy of a Darwinian theory of cultural evolution, showing that cultural evolution violates virtually all prerequisites to be encompassed by Darwin's standard theory. Second, she advances the central concept that it is whole world views that evolve. A world view emerges when the capacity of memories to evoke one another surpasses a phase transition yielding a richly interconnected conceptual web, a world view. She proposes that culture evolves not through a Darwinian process such as meme theory, but through communal exchange of facets of world views. Each section of her argument is completely convincing.
2405.02524
Danilo Bernardo
Danilo Bernardo, Xihe Xie, Parul Verma, Jonathan Kim, Virginia Liu, Adam L. Numis, Ye Wu, Hannah C. Glass, Pew-Thian Yap, Srikantan S. Nagarajan, Ashish Raj
Simulation-based Inference of Developmental EEG Maturation with the Spectral Graph Model
40 pages, 6 figures, 19 supplementary figures
Commun Phys 7, 255 (2024)
10.1038/s42005-024-01748-w
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
The spectral content of macroscopic neural activity evolves throughout development, yet how this maturation relates to underlying brain network formation and dynamics remains unknown. Here, we assess the developmental maturation of electroencephalogram spectra via Bayesian model inversion of the spectral graph model, a parsimonious whole-brain model of spatiospectral neural activity derived from linearized neural field models coupled by the structural connectome. Simulation-based inference was used to estimate age-varying spectral graph model parameter posterior distributions from electroencephalogram spectra spanning the developmental period. This model-fitting approach accurately captures observed developmental electroencephalogram spectral maturation via a neurobiologically consistent progression of key neural parameters: long-range coupling, axonal conduction speed, and excitatory:inhibitory balance. These results suggest that the spectral maturation of macroscopic neural activity observed during typical development is supported by age-dependent functional adaptations in localized neural dynamics and their long-range coupling across the macroscopic structural network.
[ { "created": "Fri, 3 May 2024 23:44:30 GMT", "version": "v1" }, { "created": "Thu, 11 Jul 2024 21:48:14 GMT", "version": "v2" }, { "created": "Fri, 26 Jul 2024 08:34:40 GMT", "version": "v3" } ]
2024-08-05
[ [ "Bernardo", "Danilo", "" ], [ "Xie", "Xihe", "" ], [ "Verma", "Parul", "" ], [ "Kim", "Jonathan", "" ], [ "Liu", "Virginia", "" ], [ "Numis", "Adam L.", "" ], [ "Wu", "Ye", "" ], [ "Glass", "Hannah C.", "" ], [ "Yap", "Pew-Thian", "" ], [ "Nagarajan", "Srikantan S.", "" ], [ "Raj", "Ashish", "" ] ]
The spectral content of macroscopic neural activity evolves throughout development, yet how this maturation relates to underlying brain network formation and dynamics remains unknown. Here, we assess the developmental maturation of electroencephalogram spectra via Bayesian model inversion of the spectral graph model, a parsimonious whole-brain model of spatiospectral neural activity derived from linearized neural field models coupled by the structural connectome. Simulation-based inference was used to estimate age-varying spectral graph model parameter posterior distributions from electroencephalogram spectra spanning the developmental period. This model-fitting approach accurately captures observed developmental electroencephalogram spectral maturation via a neurobiologically consistent progression of key neural parameters: long-range coupling, axonal conduction speed, and excitatory:inhibitory balance. These results suggest that the spectral maturation of macroscopic neural activity observed during typical development is supported by age-dependent functional adaptations in localized neural dynamics and their long-range coupling across the macroscopic structural network.
1808.07785
Karen Petrosyan
K.G. Petrosyan
Inter-state switching in stochastic gene expression: Exact solution, an adiabatic limit and oscillations in molecular distributions
6 pages, 6 figures
null
null
null
q-bio.MN cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the stochastic gene expression process with inter-state flip-flops. An exact steady-state solution to the master equation is calculated. One of the main goals in this paper is to investigate whether the probability distribution of gene copies contains even-odd number oscillations. A master equation previously derived in the adiabatic limit of fast switching by Kepler and Elston \cite{kepler} suggests that the oscillations should be present. However our analysis demonstrates that the oscillations not only fail to appear in the adiabatic case but are entirely absent. We discuss the adiabatic approximation in detail. The other goal is to establish a master equation that takes into account external fluctuations and is similar to the master equation in the adiabatic approximation. The equation allows even-odd oscillations. The reason the behaviour occurs is an underlying interference of Poisson and Gaussian processes. The master equation contains an extra term that describes the gene copy number unconventional diffusion and is responsible for the oscillations. We also point to a similar phenomenon in quantum physics.
[ { "created": "Thu, 23 Aug 2018 14:53:13 GMT", "version": "v1" }, { "created": "Thu, 28 Mar 2019 08:44:58 GMT", "version": "v2" }, { "created": "Thu, 24 Sep 2020 07:57:20 GMT", "version": "v3" } ]
2020-09-25
[ [ "Petrosyan", "K. G.", "" ] ]
We consider the stochastic gene expression process with inter-state flip-flops. An exact steady-state solution to the master equation is calculated. One of the main goals in this paper is to investigate whether the probability distribution of gene copies contains even-odd number oscillations. A master equation previously derived in the adiabatic limit of fast switching by Kepler and Elston \cite{kepler} suggests that the oscillations should be present. However our analysis demonstrates that the oscillations not only fail to appear in the adiabatic case but are entirely absent. We discuss the adiabatic approximation in detail. The other goal is to establish a master equation that takes into account external fluctuations and is similar to the master equation in the adiabatic approximation. The equation allows even-odd oscillations. The reason the behaviour occurs is an underlying interference of Poisson and Gaussian processes. The master equation contains an extra term that describes the gene copy number unconventional diffusion and is responsible for the oscillations. We also point to a similar phenomenon in quantum physics.
1805.10892
Eero Satuvuori
Eero Satuvuori, Mario Mulansky, Andreas Daffertshofer, Thomas Kreuz
Using spike train distances to identify the most discriminative neuronal subpopulation
14 pages, 9 Figures
null
10.1016/j.jneumeth.2018.09.008
null
q-bio.NC physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Spike trains of multiple neurons can be analyzed following the summed population (SP) or the labeled line (LL) hypothesis. Responses to external stimuli are generated by a neuronal population as a whole or the individual neurons have encoding capacities of their own. The SPIKE-distance estimated either for a single, pooled spike train over a population or for each neuron separately can serve to quantify these responses. New Method: For the SP case we compare three algorithms that search for the most discriminative subpopulation over all stimulus pairs. For the LL case we introduce a new algorithm that combines neurons that individually separate different pairs of stimuli best. Results: The best approach for SP is a brute force search over all possible subpopulations. However, it is only feasible for small populations. For more realistic settings, simulated annealing clearly outperforms gradient algorithms with only a limited increase in computational load. Our novel LL approach can handle very involved coding scenarios despite its computational ease. Comparison with Existing Methods: Spike train distances have been extended to the analysis of neural populations interpolating between SP and LL coding. This includes parametrizing the importance of distinguishing spikes being fired in different neurons. Yet, these approaches only consider the population as a whole. The explicit focus on subpopulations renders our algorithms complementary. Conclusions: The spectrum of encoding possibilities in neural populations is broad. The SP and LL cases are two extremes for which our algorithms provide correct identification results.
[ { "created": "Mon, 28 May 2018 12:40:10 GMT", "version": "v1" }, { "created": "Fri, 14 Sep 2018 09:10:05 GMT", "version": "v2" } ]
2018-09-17
[ [ "Satuvuori", "Eero", "" ], [ "Mulansky", "Mario", "" ], [ "Daffertshofer", "Andreas", "" ], [ "Kreuz", "Thomas", "" ] ]
Background: Spike trains of multiple neurons can be analyzed following the summed population (SP) or the labeled line (LL) hypothesis. Responses to external stimuli are generated by a neuronal population as a whole or the individual neurons have encoding capacities of their own. The SPIKE-distance estimated either for a single, pooled spike train over a population or for each neuron separately can serve to quantify these responses. New Method: For the SP case we compare three algorithms that search for the most discriminative subpopulation over all stimulus pairs. For the LL case we introduce a new algorithm that combines neurons that individually separate different pairs of stimuli best. Results: The best approach for SP is a brute force search over all possible subpopulations. However, it is only feasible for small populations. For more realistic settings, simulated annealing clearly outperforms gradient algorithms with only a limited increase in computational load. Our novel LL approach can handle very involved coding scenarios despite its computational ease. Comparison with Existing Methods: Spike train distances have been extended to the analysis of neural populations interpolating between SP and LL coding. This includes parametrizing the importance of distinguishing spikes being fired in different neurons. Yet, these approaches only consider the population as a whole. The explicit focus on subpopulations renders our algorithms complementary. Conclusions: The spectrum of encoding possibilities in neural populations is broad. The SP and LL cases are two extremes for which our algorithms provide correct identification results.
1108.0673
Mois\'es Santill\'an Dr.
Eder Zavala-L\'opez, Mois\'es Santill\'an
Oscillation arrest in the mouse somitogenesis clock presumably takes place via an infinite period bifurcation
13 pages, 2 figures
null
null
null
q-bio.CB physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work we address the question of how oscillations are arrested in the mouse somitogenesis clock when the determination front reaches presomitic cells. Based upon available experimental evidence we hypothesize that the mechanism underlying such a phenomenon involves the interaction between a limit cycle (originated by a delayed negative feedback loop) and a bistable switch (originated by a positive feedback loop). With this hypothesis in mind we construct the simplest possible model comprising both negative and positive feedback loops and show that (with a suitable choice of parameters): 1) it can show an oscillatory behavior, 2) oscillations are arrested via an infinite-period bifurcation whenever the different gene-expression regulator-inputs act together in an additive rather than in a multiplicative fashion, and 3) this mechanism for oscillation arrest is compatible with plentiful experimental observations.
[ { "created": "Tue, 2 Aug 2011 20:05:23 GMT", "version": "v1" } ]
2015-03-13
[ [ "Zavala-López", "Eder", "" ], [ "Santillán", "Moisés", "" ] ]
In this work we address the question of how oscillations are arrested in the mouse somitogenesis clock when the determination front reaches presomitic cells. Based upon available experimental evidence we hypothesize that the mechanism underlying such a phenomenon involves the interaction between a limit cycle (originated by a delayed negative feedback loop) and a bistable switch (originated by a positive feedback loop). With this hypothesis in mind we construct the simplest possible model comprising both negative and positive feedback loops and show that (with a suitable choice of parameters): 1) it can show an oscillatory behavior, 2) oscillations are arrested via an infinite-period bifurcation whenever the different gene-expression regulator-inputs act together in an additive rather than in a multiplicative fashion, and 3) this mechanism for oscillation arrest is compatible with plentiful experimental observations.
0707.0804
Brigitte Gaillard
M. Boos (DEPE-Iphc), C. Zimmer (DEPE-Iphc), A. Carriere (DEPE-Iphc), J.P. Robin (DEPE-Iphc), O. Petit (DEPE-Iphc)
Post-hatching parental care behaviour and hormonal status in a precocial bird
null
Behavioural Processes (18/05/2007) sous presse
10.1016/j.beproc.2007.05.003
null
q-bio.PE
null
In birds, the link between parental care behaviour and prolactin release during incubation persists after hatching in altricial birds, but has never been precisely studied during the whole rearing period in precocial species, such as ducks. The present study aims to understand how changes in parental care after hatching are related to circulating prolactin levels in mallard hens rearing ducklings. Blood was sampled in hens over at least 13 post-hatching weeks and the behaviour of the hens and the ducklings was recorded daily until fledging. Contacts between hens and the ducklings, leadership of the ducklings and gathering of them steadily decreased over post-hatching time. Conversely, resting, preening and agonistic behaviour of hens towards ducklings increased. Plasma prolactin concentrations remained at high levels after hatching and then fell after week 6 when body mass and structural size of the young were close to those of the hen. Parental care behaviour declined linearly with brood age, showed a disruption of the hen-brood bond at week 6 post-hatching and was related to prolactin concentration according to a sigmoid function. Our results suggest that a definite threshold in circulating prolactin is necessary to promote and/or to maintain post-hatching parental care in ducks.
[ { "created": "Thu, 5 Jul 2007 15:12:49 GMT", "version": "v1" } ]
2007-07-06
[ [ "Boos", "M.", "", "DEPE-Iphc" ], [ "Zimmer", "C.", "", "DEPE-Iphc" ], [ "Carriere", "A.", "", "DEPE-Iphc" ], [ "Robin", "J. P.", "", "DEPE-Iphc" ], [ "Petit", "O.", "", "DEPE-Iphc" ] ]
In birds, the link between parental care behaviour and prolactin release during incubation persists after hatching in altricial birds, but has never been precisely studied during the whole rearing period in precocial species, such as ducks. The present study aims to understand how changes in parental care after hatching are related to circulating prolactin levels in mallard hens rearing ducklings. Blood was sampled in hens over at least 13 post-hatching weeks and the behaviour of the hens and the ducklings was recorded daily until fledging. Contacts between hens and the ducklings, leadership of the ducklings and gathering of them steadily decreased over post-hatching time. Conversely, resting, preening and agonistic behaviour of hens towards ducklings increased. Plasma prolactin concentrations remained at high levels after hatching and then fell after week 6 when body mass and structural size of the young were close to those of the hen. Parental care behaviour declined linearly with brood age, showed a disruption of the hen-brood bond at week 6 post-hatching and was related to prolactin concentration according to a sigmoid function. Our results suggest that a definite threshold in circulating prolactin is necessary to promote and/or to maintain post-hatching parental care in ducks.
1603.05343
Roberto D. Pascual-Marqui
RD Pascual-Marqui, P Faber, T Kinoshita, Y Kitaura, K Kochi, P Milz, K Nishida, M Yoshimura
The dual frequency RV-coupling coefficient: a novel measure for quantifying cross-frequency information transactions in the brain
technical report, pre-print, 2016-03-16
null
null
null
q-bio.NC stat.ME
http://creativecommons.org/licenses/by-nc-sa/4.0/
Identifying dynamic transactions between brain regions has become increasingly important. Measurements within and across brain structures, demonstrating the occurrence of bursts of beta/gamma oscillations only during one specific phase of each theta/alpha cycle, have motivated the need to advance beyond linear and stationary time series models. Here we offer a novel measure, namely, the "dual frequency RV-coupling coefficient", for assessing different types of frequency-frequency interactions that subserve information flow in the brain. This is a measure of coherence between two complex-valued vectors, consisting of the set of Fourier coefficients for two different frequency bands, within or across two brain regions. RV-coupling is expressed in terms of instantaneous and lagged components. Furthermore, by using normalized Fourier coefficients (unit modulus), phase-type couplings can also be measured. The dual frequency RV-coupling coefficient is based on previous work: the second order bispectrum, i.e. the dual-frequency coherence (Thomson 1982; Haykin & Thomson 1998); the RV-coefficient (Escoufier 1973); Gorrostieta et al (2012); and Pascual-Marqui et al (2011). This paper presents the new measure, and outlines relevant statistical tests. The novel aspects of the "dual frequency RV-coupling coefficient" are: (1) it can be applied to two multivariate time series; (2) the method is not limited to single discrete frequencies, and in addition, the frequency bands are treated by means of appropriate multivariate statistical methodology; (3) the method makes use of a novel generalization of the RV-coefficient for complex-valued multivariate data; (4) real and imaginary covariance contributions to the RV-coherence are obtained, allowing the definition of a "lagged-coupling" measure that is minimally affected by the low spatial resolution of estimated cortical electric neuronal activity.
[ { "created": "Thu, 17 Mar 2016 03:02:50 GMT", "version": "v1" }, { "created": "Fri, 18 Mar 2016 00:52:48 GMT", "version": "v2" } ]
2016-03-21
[ [ "Pascual-Marqui", "RD", "" ], [ "Faber", "P", "" ], [ "Kinoshita", "T", "" ], [ "Kitaura", "Y", "" ], [ "Kochi", "K", "" ], [ "Milz", "P", "" ], [ "Nishida", "K", "" ], [ "Yoshimura", "M", "" ] ]
Identifying dynamic transactions between brain regions has become increasingly important. Measurements within and across brain structures, demonstrating the occurrence of bursts of beta/gamma oscillations only during one specific phase of each theta/alpha cycle, have motivated the need to advance beyond linear and stationary time series models. Here we offer a novel measure, namely, the "dual frequency RV-coupling coefficient", for assessing different types of frequency-frequency interactions that subserve information flow in the brain. This is a measure of coherence between two complex-valued vectors, consisting of the set of Fourier coefficients for two different frequency bands, within or across two brain regions. RV-coupling is expressed in terms of instantaneous and lagged components. Furthermore, by using normalized Fourier coefficients (unit modulus), phase-type couplings can also be measured. The dual frequency RV-coupling coefficient is based on previous work: the second order bispectrum, i.e. the dual-frequency coherence (Thomson 1982; Haykin & Thomson 1998); the RV-coefficient (Escoufier 1973); Gorrostieta et al (2012); and Pascual-Marqui et al (2011). This paper presents the new measure, and outlines relevant statistical tests. The novel aspects of the "dual frequency RV-coupling coefficient" are: (1) it can be applied to two multivariate time series; (2) the method is not limited to single discrete frequencies, and in addition, the frequency bands are treated by means of appropriate multivariate statistical methodology; (3) the method makes use of a novel generalization of the RV-coefficient for complex-valued multivariate data; (4) real and imaginary covariance contributions to the RV-coherence are obtained, allowing the definition of a "lagged-coupling" measure that is minimally affected by the low spatial resolution of estimated cortical electric neuronal activity.
1107.2223
Marc de Lussanet H.E.
Marc H. E. de Lussanet
A hexamer origin of the echinoderms' five rays
10 pages, 6 figures
Evolution and Development 2011, 13(2):228-238
10.1111/j.1525-142x.2011.00472.x
null
q-bio.PE q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Of the major deuterostome groups, the echinoderms with their multiple forms and complex development are arguably the most mysterious. Although larval echinoderms are bilaterally symmetric, the adult body seems to abandon the larval body plan and to develop independently a new structure with different symmetries. The prevalent pentamer structure, the asymmetry of Loven's rule and the variable location of the periproct and madrepore present enormous difficulties in homologizing structures across the major clades, despite the excellent fossil record. This irregularity in body forms seems to place echinoderms outside the other deuterostomes. Here I propose that the predominant five-ray structure is derived from a hexamer structure that is grounded directly in the structure of the bilaterally symmetric larva. This hypothesis implies that the adult echinoderm body can be derived directly from the larval bilateral symmetry and thus firmly ranks even the adult echinoderms among the bilaterians. In order to test the hypothesis rigorously, a model is developed in which one ray is missing between rays IV-V (Loven's schema) or rays C-D (Carpenter's schema). The model is used to make predictions, which are tested and verified for the process of metamorphosis and for the morphology of recent and fossil forms. The theory provides fundamental insight into the M-plane and the Ubisch', Loven's and Carpenter's planes and generalizes them for all echinoderms. The theory also makes robust predictions about the evolution of the pentamer structure and its developmental basis. *** including corrections (see footnotes) ***
[ { "created": "Tue, 12 Jul 2011 09:32:28 GMT", "version": "v1" }, { "created": "Mon, 1 Apr 2013 19:45:10 GMT", "version": "v2" } ]
2013-04-02
[ [ "de Lussanet", "Marc H. E.", "" ] ]
Of the major deuterostome groups, the echinoderms with their multiple forms and complex development are arguably the most mysterious. Although larval echinoderms are bilaterally symmetric, the adult body seems to abandon the larval body plan and to develop independently a new structure with different symmetries. The prevalent pentamer structure, the asymmetry of Loven's rule and the variable location of the periproct and madrepore present enormous difficulties in homologizing structures across the major clades, despite the excellent fossil record. This irregularity in body forms seems to place echinoderms outside the other deuterostomes. Here I propose that the predominant five-ray structure is derived from a hexamer structure that is grounded directly in the structure of the bilaterally symmetric larva. This hypothesis implies that the adult echinoderm body can be derived directly from the larval bilateral symmetry and thus firmly ranks even the adult echinoderms among the bilaterians. In order to test the hypothesis rigorously, a model is developed in which one ray is missing between rays IV-V (Loven's schema) or rays C-D (Carpenter's schema). The model is used to make predictions, which are tested and verified for the process of metamorphosis and for the morphology of recent and fossil forms. The theory provides fundamental insight into the M-plane and the Ubisch', Loven's and Carpenter's planes and generalizes them for all echinoderms. The theory also makes robust predictions about the evolution of the pentamer structure and its developmental basis. *** including corrections (see footnotes) ***
2005.05595
Yves Dumont
Maria Soledad Aronna (FGV), Yves Dumont (UMR AMAP)
On nonlinear pest/vector control via the Sterile Insect Technique: impact of residual fertility
null
null
null
null
q-bio.PE math.DS math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider a minimalist model for the Sterile Insect Technique (SIT), assuming that residual fertility can occur in the sterile male population. Taking into account that we are able to get regular measurements from the biological system along the control duration, such as the size of the wild insect population, we study different control strategies that involve either continuous or periodic impulsive releases. We show that a combination of open-loop control with constant large releases and closed-loop nonlinear control, i.e. when releases are adjusted according to the wild population size estimates, leads to the best strategy both in terms of the number of releases and the total quantity of sterile males to be released. Last but not least, we show that SIT can be successful only if the residual fertility is less than a threshold value that depends on the wild population biological parameters. However, even for small values, the residual fertility induces the use of such large releases that SIT alone is not always reasonable from a practical point of view and thus must be combined with other control tools. We provide applications against a mosquito species, \textit{Aedes albopictus}, and a fruit fly, \textit{Bactrocera dorsalis}, and discuss the possibility of using SIT when residual fertility, among the sterile males, can occur.
[ { "created": "Tue, 12 May 2020 08:04:01 GMT", "version": "v1" } ]
2020-05-13
[ [ "Aronna", "Maria Soledad", "", "FGV" ], [ "Dumont", "Yves", "", "UMR AMAP" ] ]
We consider a minimalist model for the Sterile Insect Technique (SIT), assuming that residual fertility can occur in the sterile male population. Taking into account that we are able to get regular measurements from the biological system along the control duration, such as the size of the wild insect population, we study different control strategies that involve either continuous or periodic impulsive releases. We show that a combination of open-loop control with constant large releases and closed-loop nonlinear control, i.e. when releases are adjusted according to the wild population size estimates, leads to the best strategy both in terms of the number of releases and the total quantity of sterile males to be released. Last but not least, we show that SIT can be successful only if the residual fertility is less than a threshold value that depends on the wild population biological parameters. However, even for small values, the residual fertility induces the use of such large releases that SIT alone is not always reasonable from a practical point of view and thus must be combined with other control tools. We provide applications against a mosquito species, \textit{Aedes albopictus}, and a fruit fly, \textit{Bactrocera dorsalis}, and discuss the possibility of using SIT when residual fertility, among the sterile males, can occur.
2107.08010
Rebekah Rogers
Rebekah L. Rogers, Stephanie L. Grizzard, and Jeffrey T. Garner
Strong, recent selective sweeps reshape genetic diversity in freshwater bivalve Megalonaias nervosa
7 figures, 6 supplementary tables, 21 supplementary figures. 60 pages total
null
null
null
q-bio.PE q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Freshwater Unionid bivalves have recently faced ecological upheaval through pollution, barriers to dispersal, human harvesting, and changes in fish-host prevalence. Currently, over 70% of species are threatened, endangered or extinct. To characterize the genetic response to these recent selective pressures, we collected population genetic data for one successful bivalve species, Megalonaias nervosa. We identify megabase-sized regions that are nearly monomorphic across the population, a signal of strong, recent selection reshaping genetic diversity. These signatures of selection encompass a total of 73Mb, greater response to selection than is commonly seen in population genetic models. We observe 102 duplicate genes with high dN/dS on terminal branches among regions with sweeps, suggesting that gene duplication is a causative mechanism of recent adaptation in M. nervosa. Genes in sweeps reflect functional classes known to be important for Unionid survival, including anticoagulation genes important for fish host parasitization, detox genes, mitochondria management, and shell formation. We identify selective sweeps in regions with no known functional impacts, suggesting mechanisms of adaptation that deserve greater attention in future work on species survival. In contrast, polymorphic transposable element insertions appear to be detrimental and underrepresented among regions with sweeps. TE site frequency spectra are skewed toward singleton variants, and TEs among regions with sweeps are present only at low frequency. Our work suggests that duplicate genes are an essential source of genetic novelty that has helped this successful species succeed in environments where others have struggled. These results suggest that gene duplications deserve greater attention in non-model population genomics, especially in species that have recently faced sudden environmental challenges.
[ { "created": "Fri, 16 Jul 2021 16:56:48 GMT", "version": "v1" }, { "created": "Fri, 27 May 2022 21:17:31 GMT", "version": "v2" }, { "created": "Thu, 17 Nov 2022 16:35:08 GMT", "version": "v3" } ]
2022-11-18
[ [ "Rogers", "Rebekah L.", "" ], [ "Grizzard", "Stephanie L.", "" ], [ "Garner", "Jeffrey T.", "" ] ]
Freshwater Unionid bivalves have recently faced ecological upheaval through pollution, barriers to dispersal, human harvesting, and changes in fish-host prevalence. Currently, over 70% of species are threatened, endangered or extinct. To characterize the genetic response to these recent selective pressures, we collected population genetic data for one successful bivalve species, Megalonaias nervosa. We identify megabase-sized regions that are nearly monomorphic across the population, a signal of strong, recent selection reshaping genetic diversity. These signatures of selection encompass a total of 73Mb, greater response to selection than is commonly seen in population genetic models. We observe 102 duplicate genes with high dN/dS on terminal branches among regions with sweeps, suggesting that gene duplication is a causative mechanism of recent adaptation in M. nervosa. Genes in sweeps reflect functional classes known to be important for Unionid survival, including anticoagulation genes important for fish host parasitization, detox genes, mitochondria management, and shell formation. We identify selective sweeps in regions with no known functional impacts, suggesting mechanisms of adaptation that deserve greater attention in future work on species survival. In contrast, polymorphic transposable element insertions appear to be detrimental and underrepresented among regions with sweeps. TE site frequency spectra are skewed toward singleton variants, and TEs among regions with sweeps are present only at low frequency. Our work suggests that duplicate genes are an essential source of genetic novelty that has helped this successful species succeed in environments where others have struggled. These results suggest that gene duplications deserve greater attention in non-model population genomics, especially in species that have recently faced sudden environmental challenges.
1410.0566
Nadya Morozova
Nadya Morozova and Robert Penner
Geometry of Morphogenesis
null
null
null
null
q-bio.OT math.MG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a formalism for the geometry of eukaryotic cells and organisms. Cells are taken to be star-convex with good biological reason. This allows for a convenient description of their extent in space as well as all manner of cell surface gradients. We assume that a spectrum of such cell surface markers determines an epigenetic code for organism shape. The union of cells in space at a moment in time is by definition the organism taken as a metric subspace of Euclidean space, which can be further equipped with an arbitrary measure. Each cell determines a point in space thus assigning a finite configuration of distinct points in space to an organism, and a bundle over this configuration space is introduced with fiber a Hilbert space recording specific epigenetic data. On this bundle, a Lagrangian formulation of morphogenetic dynamics is proposed based on Gromov-Hausdorff distance which at once describes both embryo development and regenerative growth.
[ { "created": "Wed, 1 Oct 2014 10:23:46 GMT", "version": "v1" } ]
2014-10-03
[ [ "Morozova", "Nadya", "" ], [ "Penner", "Robert", "" ] ]
We introduce a formalism for the geometry of eukaryotic cells and organisms. Cells are taken to be star-convex with good biological reason. This allows for a convenient description of their extent in space as well as all manner of cell surface gradients. We assume that a spectrum of such cell surface markers determines an epigenetic code for organism shape. The union of cells in space at a moment in time is by definition the organism taken as a metric subspace of Euclidean space, which can be further equipped with an arbitrary measure. Each cell determines a point in space thus assigning a finite configuration of distinct points in space to an organism, and a bundle over this configuration space is introduced with fiber a Hilbert space recording specific epigenetic data. On this bundle, a Lagrangian formulation of morphogenetic dynamics is proposed based on Gromov-Hausdorff distance which at once describes both embryo development and regenerative growth.
1303.3963
Ovidiu Radulescu
Vincent Noel, Dima Grigoriev, Sergei Vakulenko and Ovidiu Radulescu
Tropicalization and tropical equilibration of chemical reactions
13 pages, 1 figure, workshop Tropical-12, Moskow, August 26-31, 2012; in press Contemporary Mathematics
null
null
null
q-bio.MN math.AG
http://creativecommons.org/licenses/publicdomain/
Systems biology uses large networks of biochemical reactions to model the functioning of biological cells from the molecular to the cellular scale. The dynamics of dissipative reaction networks with many well separated time scales can be described as a sequence of successive equilibrations of different subsets of variables of the system. Polynomial systems with separation are equilibrated when at least two monomials, of opposite signs, have the same order of magnitude and dominate the others. These equilibrations and the corresponding truncated dynamics, obtained by eliminating the dominated terms, find a natural formulation in tropical analysis and can be used for model reduction.
[ { "created": "Sat, 16 Mar 2013 10:17:45 GMT", "version": "v1" }, { "created": "Mon, 27 May 2013 10:07:03 GMT", "version": "v2" } ]
2013-05-28
[ [ "Noel", "Vincent", "" ], [ "Grigoriev", "Dima", "" ], [ "Vakulenko", "Sergei", "" ], [ "Radulescu", "Ovidiu", "" ] ]
Systems biology uses large networks of biochemical reactions to model the functioning of biological cells from the molecular to the cellular scale. The dynamics of dissipative reaction networks with many well separated time scales can be described as a sequence of successive equilibrations of different subsets of variables of the system. Polynomial systems with separation are equilibrated when at least two monomials, of opposite signs, have the same order of magnitude and dominate the others. These equilibrations and the corresponding truncated dynamics, obtained by eliminating the dominated terms, find a natural formulation in tropical analysis and can be used for model reduction.
1711.08548
Mohd Almie Alias
Mohd Almie Alias and Pascal R Buenzli
Osteoblasts infill irregular pores under curvature and porosity controls: A hypothesis-testing analysis of cell behaviours
14 pages, 11 figures, Appendix
Biomech Model Mechanobiol (2018) 17:1357-1371
10.1007/s10237-018-1031-x
null
q-bio.CB physics.bio-ph q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The geometric control of bone tissue growth plays a significant role in bone remodelling, age-related bone loss, and tissue engineering. However, how exactly geometry influences the behaviour of bone-forming cells remains elusive. Geometry modulates cell populations collectively through the evolving space available to the cells, but it may also modulate the individual behaviours of cells. To factor out the collective influence of geometry and gain access to the geometric regulation of individual cell behaviours, we develop a mathematical model of the infilling of cortical bone pores and use it with available experimental data on cortical infilling rates. Testing different possible modes of geometric controls of individual cell behaviours consistent with the experimental data, we find that efficient smoothing of irregular pores only occurs when cell secretory rate is controlled by porosity rather than curvature. This porosity control suggests the convergence of a large scale of intercellular signalling to single bone-forming cells, consistent with that provided by the osteocyte network in response to mechanical stimulus. After validating the mathematical model with the histological record of a real cortical pore infilling, we explore the infilling of a population of randomly generated initial pore shapes. We find that amongst all the geometric regulations considered, the collective influence of curvature on cell crowding is a dominant factor for how fast cortical bone pores infill, and we suggest that the irregularity of cement lines thereby explains some of the variability in double labelling data as well as the overall speed of osteon infilling.
[ { "created": "Thu, 23 Nov 2017 01:15:35 GMT", "version": "v1" }, { "created": "Fri, 6 Apr 2018 14:40:53 GMT", "version": "v2" } ]
2020-05-28
[ [ "Alias", "Mohd Almie", "" ], [ "Buenzli", "Pascal R", "" ] ]
The geometric control of bone tissue growth plays a significant role in bone remodelling, age-related bone loss, and tissue engineering. However, how exactly geometry influences the behaviour of bone-forming cells remains elusive. Geometry modulates cell populations collectively through the evolving space available to the cells, but it may also modulate the individual behaviours of cells. To factor out the collective influence of geometry and gain access to the geometric regulation of individual cell behaviours, we develop a mathematical model of the infilling of cortical bone pores and use it with available experimental data on cortical infilling rates. Testing different possible modes of geometric controls of individual cell behaviours consistent with the experimental data, we find that efficient smoothing of irregular pores only occurs when cell secretory rate is controlled by porosity rather than curvature. This porosity control suggests the convergence of a large scale of intercellular signalling to single bone-forming cells, consistent with that provided by the osteocyte network in response to mechanical stimulus. After validating the mathematical model with the histological record of a real cortical pore infilling, we explore the infilling of a population of randomly generated initial pore shapes. We find that amongst all the geometric regulations considered, the collective influence of curvature on cell crowding is a dominant factor for how fast cortical bone pores infill, and we suggest that the irregularity of cement lines thereby explains some of the variability in double labelling data as well as the overall speed of osteon infilling.
1503.07396
Delfim F. M. Torres
Amira Rachah, Delfim F. M. Torres
Mathematical Modelling, Simulation, and Optimal Control of the 2014 Ebola Outbreak in West Africa
This is a preprint of a paper whose final and definite form is Discrete Dynamics in Nature and Society (Print ISSN: 1026-0226; Online ISSN: 1607-887X) 2015, Article ID 842792, 9 pp. See http://dx.doi.org/10.1155/2015/842792 Submitted 26-Dec-2014; revised 27-Feb-2015; accepted 28-Feb-2015
Discrete Dyn. Nat. Soc. 2015 (2015), Art. ID 842792, 9 pp
10.1155/2015/842792
null
q-bio.PE math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Ebola virus is currently one of the most virulent pathogens for humans. The latest major outbreak occurred in Guinea, Sierra Leone and Liberia in 2014. With the aim of understanding the spread of infection in the affected countries, it is crucial to model the virus and simulate it. In this paper, we begin by studying a simple mathematical model that describes the 2014 Ebola outbreak in Liberia. Then, we use numerical simulations and available data provided by the World Health Organization to validate the obtained mathematical model. Moreover, we develop a new mathematical model including vaccination of individuals. We discuss different cases of vaccination in order to predict the effect of vaccination on the infected individuals over time. Finally, we apply optimal control to study the impact of vaccination on the spread of the Ebola virus. The optimal control problem is solved numerically by using a direct multi-shooting method.
[ { "created": "Fri, 13 Mar 2015 01:02:33 GMT", "version": "v1" } ]
2015-03-26
[ [ "Rachah", "Amira", "" ], [ "Torres", "Delfim F. M.", "" ] ]
The Ebola virus is currently one of the most virulent pathogens for humans. The latest major outbreak occurred in Guinea, Sierra Leone and Liberia in 2014. With the aim of understanding the spread of infection in the affected countries, it is crucial to model the virus and simulate it. In this paper, we begin by studying a simple mathematical model that describes the 2014 Ebola outbreak in Liberia. Then, we use numerical simulations and available data provided by the World Health Organization to validate the obtained mathematical model. Moreover, we develop a new mathematical model including vaccination of individuals. We discuss different cases of vaccination in order to predict the effect of vaccination on the infected individuals over time. Finally, we apply optimal control to study the impact of vaccination on the spread of the Ebola virus. The optimal control problem is solved numerically by using a direct multi-shooting method.
1612.08550
Giovanni Bussi
Richard A. Cunha and Giovanni Bussi
Unravelling Mg$^{2+}$-RNA binding with atomistic molecular dynamics
null
RNA 2017, 23, 628-638
10.1261/rna.060079.116
null
q-bio.BM physics.bio-ph physics.chem-ph physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interaction with divalent cations is of paramount importance for RNA structural stability and function. We here report a detailed molecular dynamics study of all the possible binding sites for Mg$^{2+}$ on a RNA duplex, including both direct (inner sphere) and indirect (outer sphere) binding. In order to tackle sampling issues, we develop a modified version of bias-exchange metadynamics which allows us to simultaneously compute affinities with previously unreported statistical accuracy. Results correctly reproduce trends observed in crystallographic databases. Based on this, we simulate a carefully chosen set of models that allows us to quantify the effects of competition with monovalent cations, RNA flexibility, and RNA hybridization. Our simulations reproduce the decrease and increase of Mg$^{2+}$ affinity due to ion competition and hybridization respectively, and predict that RNA flexibility has a site dependent effect. This suggests a non trivial interplay between RNA conformational entropy and divalent cation binding.
[ { "created": "Tue, 27 Dec 2016 09:28:58 GMT", "version": "v1" } ]
2017-04-19
[ [ "Cunha", "Richard A.", "" ], [ "Bussi", "Giovanni", "" ] ]
Interaction with divalent cations is of paramount importance for RNA structural stability and function. We here report a detailed molecular dynamics study of all the possible binding sites for Mg$^{2+}$ on a RNA duplex, including both direct (inner sphere) and indirect (outer sphere) binding. In order to tackle sampling issues, we develop a modified version of bias-exchange metadynamics which allows us to simultaneously compute affinities with previously unreported statistical accuracy. Results correctly reproduce trends observed in crystallographic databases. Based on this, we simulate a carefully chosen set of models that allows us to quantify the effects of competition with monovalent cations, RNA flexibility, and RNA hybridization. Our simulations reproduce the decrease and increase of Mg$^{2+}$ affinity due to ion competition and hybridization respectively, and predict that RNA flexibility has a site dependent effect. This suggests a non trivial interplay between RNA conformational entropy and divalent cation binding.
1509.07918
Daniele Ramazzotti
Giulio Caravagna, Alex Graudenzi, Daniele Ramazzotti, Rebeca Sanz-Pamplona, Luca De Sano, Giancarlo Mauri, Victor Moreno, Marco Antoniotti, Bud Mishra
Algorithmic Methods to Infer the Evolutionary Trajectories in Cancer Progression
null
null
10.1073/pnas.1520213113
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The genomic evolution inherent to cancer relates directly to a renewed focus on the voluminous next generation sequencing (NGS) data, and machine learning for the inference of explanatory models of how the (epi)genomic events are choreographed in cancer initiation and development. However, despite the increasing availability of multiple additional -omics data, this quest has been frustrated by various theoretical and technical hurdles, mostly stemming from the dramatic heterogeneity of the disease. In this paper, we build on our recent works on "selective advantage" relation among driver mutations in cancer progression and investigate its applicability to the modeling problem at the population level. Here, we introduce PiCnIc (Pipeline for Cancer Inference), a versatile, modular and customizable pipeline to extract ensemble-level progression models from cross-sectional sequenced cancer genomes. The pipeline has many translational implications as it combines state-of-the-art techniques for sample stratification, driver selection, identification of fitness-equivalent exclusive alterations and progression model inference. We demonstrate PiCnIc's ability to reproduce much of the current knowledge on colorectal cancer progression, as well as to suggest novel experimentally verifiable hypotheses.
[ { "created": "Fri, 25 Sep 2015 23:01:37 GMT", "version": "v1" }, { "created": "Tue, 6 Oct 2015 15:13:30 GMT", "version": "v2" }, { "created": "Thu, 19 May 2016 17:49:26 GMT", "version": "v3" }, { "created": "Wed, 8 Mar 2017 21:08:33 GMT", "version": "v4" } ]
2017-03-10
[ [ "Caravagna", "Giulio", "" ], [ "Graudenzi", "Alex", "" ], [ "Ramazzotti", "Daniele", "" ], [ "Sanz-Pamplona", "Rebeca", "" ], [ "De Sano", "Luca", "" ], [ "Mauri", "Giancarlo", "" ], [ "Moreno", "Victor", "" ], [ "Antoniotti", "Marco", "" ], [ "Mishra", "Bud", "" ] ]
The genomic evolution inherent to cancer relates directly to a renewed focus on the voluminous next generation sequencing (NGS) data, and machine learning for the inference of explanatory models of how the (epi)genomic events are choreographed in cancer initiation and development. However, despite the increasing availability of multiple additional -omics data, this quest has been frustrated by various theoretical and technical hurdles, mostly stemming from the dramatic heterogeneity of the disease. In this paper, we build on our recent works on "selective advantage" relation among driver mutations in cancer progression and investigate its applicability to the modeling problem at the population level. Here, we introduce PiCnIc (Pipeline for Cancer Inference), a versatile, modular and customizable pipeline to extract ensemble-level progression models from cross-sectional sequenced cancer genomes. The pipeline has many translational implications as it combines state-of-the-art techniques for sample stratification, driver selection, identification of fitness-equivalent exclusive alterations and progression model inference. We demonstrate PiCnIc's ability to reproduce much of the current knowledge on colorectal cancer progression, as well as to suggest novel experimentally verifiable hypotheses.
1805.01349
Mansoor Sheikh
A Santaolalla, M Sheikh, M Van Hemelrijck, A Portieri, ACC Coolen
Improved resection margins in breast-conserving surgery using Terahertz Pulsed imaging data
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
New statistical methods were employed to improve the ability to distinguish benign from malignant breast tissue ex vivo in a recent study. The ultimate aim was to improve the intraoperative assessment of positive tumour margins in breast-conserving surgery (BCS), potentially reducing patient re-operation rates. A multivariate Bayesian classifier was applied to the waveform samples produced by a Terahertz Pulsed Imaging (TPI) handheld probe system in order to discriminate tumour from benign breast tissue, obtaining a sensitivity of 96% and specificity of 95%. We compare these results to traditional and to state-of-the-art methods for determining resection margins. Given the general nature of the classifier, it is expected that this method can be applied to other tumour types where resection margins are also critical.
[ { "created": "Thu, 3 May 2018 15:02:40 GMT", "version": "v1" } ]
2018-05-04
[ [ "Santaolalla", "A", "" ], [ "Sheikh", "M", "" ], [ "Van Hemelrijck", "M", "" ], [ "Portieri", "A", "" ], [ "Coolen", "ACC", "" ] ]
New statistical methods were employed to improve the ability to distinguish benign from malignant breast tissue ex vivo in a recent study. The ultimate aim was to improve the intraoperative assessment of positive tumour margins in breast-conserving surgery (BCS), potentially reducing patient re-operation rates. A multivariate Bayesian classifier was applied to the waveform samples produced by a Terahertz Pulsed Imaging (TPI) handheld probe system in order to discriminate tumour from benign breast tissue, obtaining a sensitivity of 96% and specificity of 95%. We compare these results to traditional and to state-of-the-art methods for determining resection margins. Given the general nature of the classifier, it is expected that this method can be applied to other tumour types where resection margins are also critical.
1404.7842
Howard Deutsch
Howard M. Deutsch, Xiaocong Michael Ye and Margaret M. Schweri
Synthesis and Pharmacology of Ester Modified (+/-)-threo-Methylphenidate Analogs
15 pages, 1 figure
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As part of a program to develop compounds with potential to treat cocaine abuse, eleven (+/-)-threo-methylphenidate (TMP; Ritalin) derivatives were synthesized and tested in rat striatal tissue preparations for inhibitory potency against [3H]WIN 35,428 binding (WIN) to the dopamine (DA) transporter, [3H]citalopram binding (CIT) to the serotonin transporter, and [3H]DA uptake. The ester function was replaced by other functional groups in all of the compounds; some also contained substituents on the phenyl ring and/or the piperidine nitrogen. Potencies against WIN, measured as IC50, ranged from 27 nM to 7,000 nM, compared to an IC50 of 83 nM for TMP itself. Potency against [3H]DA uptake was approximately two-fold less than that against WIN, but generally exhibited the same rank order. With one exception, the compounds were significantly less potent against CIT than WIN. The one exception, which has a rigid planar conformation at the altered ester position, is unique in that it also is much less potent against [3H]DA uptake relative to WIN, compared to the other derivatives. The three compounds with dichloro groups on the phenyl ring did not exhibit positive cooperativity, as has been observed with several previously synthesized halogenated TMP derivatives. Taken together, these compounds should help to further our understanding of the stimulant binding sites on both the dopamine and serotonin transporters.
[ { "created": "Wed, 30 Apr 2014 19:24:18 GMT", "version": "v1" } ]
2014-05-01
[ [ "Deutsch", "Howard M.", "" ], [ "Ye", "Xiaocong Michael", "" ], [ "Schweri", "Margaret M.", "" ] ]
As part of a program to develop compounds with potential to treat cocaine abuse, eleven (+/-)-threo-methylphenidate (TMP; Ritalin) derivatives were synthesized and tested in rat striatal tissue preparations for inhibitory potency against [3H]WIN 35,428 binding (WIN) to the dopamine (DA) transporter, [3H]citalopram binding (CIT) to the serotonin transporter, and [3H]DA uptake. The ester function was replaced by other functional groups in all of the compounds; some also contained substituents on the phenyl ring and/or the piperidine nitrogen. Potencies against WIN, measured as IC50, ranged from 27 nM to 7,000 nM, compared to an IC50 of 83 nM for TMP itself. Potency against [3H]DA uptake was approximately two-fold less than that against WIN, but generally exhibited the same rank order. With one exception, the compounds were significantly less potent against CIT than WIN. The one exception, which has a rigid planar conformation at the altered ester position, is unique in that it also is much less potent against [3H]DA uptake relative to WIN, compared to the other derivatives. The three compounds with dichloro groups on the phenyl ring did not exhibit positive cooperativity, as has been observed with several previously synthesized halogenated TMP derivatives. Taken together, these compounds should help to further our understanding of the stimulant binding sites on both the dopamine and serotonin transporters.
0801.2982
John Rhodes
Elizabeth S. Allman, John A. Rhodes
The Identifiability of Covarion Models in Phylogenetics
12 pages, 2 figures; Final version
null
null
null
q-bio.PE
null
Covarion models of character evolution describe inhomogeneities in substitution processes through time. In phylogenetics, such models are used to describe changing functional constraints or selection regimes during the evolution of biological sequences. In this work the identifiability of such models for generic parameters on a known phylogenetic tree is established, provided the number of covarion classes does not exceed the size of the observable state space. `Generic parameters' as used here means all parameters except possibly those in a set of measure zero within the parameter space. Combined with earlier results, this implies both the tree and generic numerical parameters are identifiable if the number of classes is strictly smaller than the number of observable states.
[ { "created": "Fri, 18 Jan 2008 21:37:45 GMT", "version": "v1" }, { "created": "Mon, 26 May 2008 22:43:41 GMT", "version": "v2" } ]
2008-05-27
[ [ "Allman", "Elizabeth S.", "" ], [ "Rhodes", "John A.", "" ] ]
Covarion models of character evolution describe inhomogeneities in substitution processes through time. In phylogenetics, such models are used to describe changing functional constraints or selection regimes during the evolution of biological sequences. In this work the identifiability of such models for generic parameters on a known phylogenetic tree is established, provided the number of covarion classes does not exceed the size of the observable state space. `Generic parameters' as used here means all parameters except possibly those in a set of measure zero within the parameter space. Combined with earlier results, this implies both the tree and generic numerical parameters are identifiable if the number of classes is strictly smaller than the number of observable states.
1707.03922
Peter Clote
Peter Clote and Amir H. Bayegan
RNA folding kinetics using Monte Carlo and Gillespie algorithms?
30 pages, 10 figures, with appendix
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny (~ 20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to <N> times that of the Gillespie algorithm, where <N> denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to MFPT computed by the Gillespie algorithm multiplied by <N>; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics, as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence (see http://bioinformatics.bc.edu/clote/RNAexpNumNbors).
[ { "created": "Wed, 12 Jul 2017 22:08:16 GMT", "version": "v1" } ]
2017-07-14
[ [ "Clote", "Peter", "" ], [ "Bayegan", "Amir H.", "" ] ]
RNA secondary structure folding kinetics is known to be important for the biological function of certain processes, such as the hok/sok system in E. coli. Although linear algebra provides an exact computational solution of secondary structure folding kinetics with respect to the Turner energy model for tiny (~ 20 nt) RNA sequences, the folding kinetics for larger sequences can only be approximated by binning structures into macrostates in a coarse-grained model, or by repeatedly simulating secondary structure folding with either the Monte Carlo algorithm or the Gillespie algorithm. Here we investigate the relation between the Monte Carlo algorithm and the Gillespie algorithm. We prove that asymptotically, the expected time for a K-step trajectory of the Monte Carlo algorithm is equal to <N> times that of the Gillespie algorithm, where <N> denotes the Boltzmann expected network degree. If the network is regular (i.e. every node has the same degree), then the mean first passage time (MFPT) computed by the Monte Carlo algorithm is equal to MFPT computed by the Gillespie algorithm multiplied by <N>; however, this is not true for non-regular networks. In particular, RNA secondary structure folding kinetics, as computed by the Monte Carlo algorithm, is not equal to the folding kinetics, as computed by the Gillespie algorithm, although the mean first passage times are roughly correlated. Simulation software for RNA secondary structure folding according to the Monte Carlo and Gillespie algorithms is publicly available, as is our software to compute the expected degree of the network of secondary structures of a given RNA sequence (see http://bioinformatics.bc.edu/clote/RNAexpNumNbors).
1805.02124
Hiroshi Isshiki
Hiroshi Isshiki
Genetic Drift and Mutation
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In genetic drift of a small population, it is well known that even when the ratio of alleles is 0.5, specific genes are fixed in or disappear from the population. This seems to be the reason why inbreeding is avoided. On the other hand, this phenomenon suggests an interesting possibility. The mutant gene does not increase the number of genes at once in a large population. A gene is partially fixed by increasing the number within a small population because of inbreeding, and the gene increases in a large group by Darwin's natural selection. It would be more reasonable to think in this way. We studied this mathematically based on the concept of genetic drift. This suggested that inbreeding could be useful as a trigger for fixation of mutation.
[ { "created": "Sat, 5 May 2018 23:21:49 GMT", "version": "v1" } ]
2018-05-08
[ [ "Isshiki", "Hiroshi", "" ] ]
In genetic drift of a small population, it is well known that even when the ratio of alleles is 0.5, specific genes are fixed in or disappear from the population. This seems to be the reason why inbreeding is avoided. On the other hand, this phenomenon suggests an interesting possibility. The mutant gene does not increase the number of genes at once in a large population. A gene is partially fixed by increasing the number within a small population because of inbreeding, and the gene increases in a large group by Darwin's natural selection. It would be more reasonable to think in this way. We studied this mathematically based on the concept of genetic drift. This suggested that inbreeding could be useful as a trigger for fixation of mutation.
2111.06115
Katharina Huber
Katharina T. Huber and Vincent Moulton and Megan Owen and Andreas Spillner and Katherine St. John
The space of equidistant phylogenetic cactuses
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-nd/4.0/
We introduce and investigate the space of \emph{equidistant} $X$-\emph{cactuses}. These are rooted, arc weighted, phylogenetic networks with leaf set $X$, where $X$ is a finite set of species, and all leaves have the same distance from the root. The space contains as a subset the space of ultrametric trees on $X$ that was introduced by Gavryushkin and Drummond. We show that equidistant-cactus space is a CAT(0)-metric space which implies, for example, that there are unique geodesic paths between points. As a key step to proving this, we present a combinatorial result concerning \emph{ranked} rooted $X$-cactuses. In particular, we show that such networks can be encoded in terms of a pairwise compatibility condition arising from a poset of collections of pairs of subsets of $X$ that satisfy certain set-theoretic properties. As a corollary, we also obtain an encoding of ranked, rooted $X$-trees in terms of partitions of $X$, which provides an alternative proof that the space of ultrametric trees on $X$ is CAT(0). As with spaces of phylogenetic trees, we expect that our results should provide the basis for and new directions in performing statistical analyses for collections of phylogenetic networks with arc lengths.
[ { "created": "Thu, 11 Nov 2021 09:27:12 GMT", "version": "v1" } ]
2021-11-12
[ [ "Huber", "Katharina T.", "" ], [ "Moulton", "Vincent", "" ], [ "Owen", "Megan", "" ], [ "Spillner", "Andreas", "" ], [ "John", "Katherine St.", "" ] ]
We introduce and investigate the space of \emph{equidistant} $X$-\emph{cactuses}. These are rooted, arc weighted, phylogenetic networks with leaf set $X$, where $X$ is a finite set of species, and all leaves have the same distance from the root. The space contains as a subset the space of ultrametric trees on $X$ that was introduced by Gavryushkin and Drummond. We show that equidistant-cactus space is a CAT(0)-metric space which implies, for example, that there are unique geodesic paths between points. As a key step to proving this, we present a combinatorial result concerning \emph{ranked} rooted $X$-cactuses. In particular, we show that such networks can be encoded in terms of a pairwise compatibility condition arising from a poset of collections of pairs of subsets of $X$ that satisfy certain set-theoretic properties. As a corollary, we also obtain an encoding of ranked, rooted $X$-trees in terms of partitions of $X$, which provides an alternative proof that the space of ultrametric trees on $X$ is CAT(0). As with spaces of phylogenetic trees, we expect that our results should provide the basis for and new directions in performing statistical analyses for collections of phylogenetic networks with arc lengths.
q-bio/0412032
Marek Cieplak
Marek Cieplak
Mechanical Stretching of Proteins: Calmodulin and Titin
To be published in a special bio-issue of Physica A; 14 figures
null
10.1016/j.physa.2004.12.032
null
q-bio.BM cond-mat.stat-mech
null
Mechanical unfolding of several domains of calmodulin and titin is studied using a Go-like model with a realistic contact map and Lennard-Jones contact interactions. It is shown that this simple model captures the experimentally observed difference between the two proteins: titin is a spring that is tough and strong whereas calmodulin acts like a weak spring with featureless force-displacement curves. The difference is related to the dominance of the alpha secondary structures in the native structure of calmodulin. The tandem arrangements of calmodulin unwind simultaneously in each domain whereas the domains in titin unravel in a serial fashion. The sequences of contact events during unraveling are correlated with the contact order, i.e. with the separation between contact making amino acids along the backbone in the native state. Temperature is found to affect stretching in a profound way.
[ { "created": "Thu, 16 Dec 2004 16:32:37 GMT", "version": "v1" } ]
2009-11-10
[ [ "Cieplak", "Marek", "" ] ]
Mechanical unfolding of several domains of calmodulin and titin is studied using a Go-like model with a realistic contact map and Lennard-Jones contact interactions. It is shown that this simple model captures the experimentally observed difference between the two proteins: titin is a spring that is tough and strong whereas calmodulin acts like a weak spring with featureless force-displacement curves. The difference is related to the dominance of the alpha secondary structures in the native structure of calmodulin. The tandem arrangements of calmodulin unwind simultaneously in each domain whereas the domains in titin unravel in a serial fashion. The sequences of contact events during unraveling are correlated with the contact order, i.e. with the separation between contact making amino acids along the backbone in the native state. Temperature is found to affect stretching in a profound way.
2203.06665
Moo K. Chung
Moo K. Chung, Jamie L. Hanson, Seth D. Pollak
Statistical Analysis on Brain Surfaces
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we review widely used statistical analysis frameworks for data defined along cortical and subcortical surfaces that have been developed in the last two decades. The cerebral cortex has the topology of a 2D highly convoluted sheet. For data obtained along curved non-Euclidean surfaces, traditional statistical analysis and smoothing techniques based on the Euclidean metric structure are inefficient. To increase the signal-to-noise ratio (SNR) and to boost the sensitivity of the analysis, it is necessary to smooth out noisy surface data. However, this requires smoothing data on curved cortical manifolds and assigning smoothing weights based on the geodesic distance along the surface. Thus, many cortical surface data analysis frameworks are differential geometric in nature. The smoothed surface data is then treated as smooth random fields and statistical inferences can be performed within Keith Worsley's random field theory. The methods described in this paper are illustrated with the hippocampus surface data set. Using this case study, we determine in detail whether there is an effect of family income on the growth of the hippocampus in children. There are a total of 124 children and 82 of them have repeat magnetic resonance images (MRI) two years later.
[ { "created": "Sun, 13 Mar 2022 14:35:21 GMT", "version": "v1" } ]
2022-03-15
[ [ "Chung", "Moo K.", "" ], [ "Hanson", "Jamie L.", "" ], [ "Pollak", "Seth D.", "" ] ]
In this paper, we review widely used statistical analysis frameworks for data defined along cortical and subcortical surfaces that have been developed in the last two decades. The cerebral cortex has the topology of a 2D highly convoluted sheet. For data obtained along curved non-Euclidean surfaces, traditional statistical analysis and smoothing techniques based on the Euclidean metric structure are inefficient. To increase the signal-to-noise ratio (SNR) and to boost the sensitivity of the analysis, it is necessary to smooth out noisy surface data. However, this requires smoothing data on curved cortical manifolds and assigning smoothing weights based on the geodesic distance along the surface. Thus, many cortical surface data analysis frameworks are differential geometric in nature. The smoothed surface data is then treated as smooth random fields and statistical inferences can be performed within Keith Worsley's random field theory. The methods described in this paper are illustrated with the hippocampus surface data set. Using this case study, we determine in detail whether there is an effect of family income on the growth of the hippocampus in children. There are a total of 124 children and 82 of them have repeat magnetic resonance images (MRI) two years later.
2009.01060
Andrew Eckford
Hadeel Elayan, Andrew W. Eckford, and Raviraj Adve
Information Rates of Controlled Protein Interactions Using Terahertz Communication
Accepted for publication in IEEE Transactions on Nanobioscience
null
null
null
q-bio.MN cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we present a paradigm bridging electromagnetic (EM) and molecular communication through a stimuli-responsive intra-body model. It has been established that protein molecules, which play a key role in governing cell behavior, can be selectively stimulated using Terahertz (THz) band frequencies. By triggering protein vibrational modes using THz waves, we induce changes in protein conformation, resulting in the activation of a controlled cascade of biochemical and biomechanical events. To analyze such an interaction, we formulate a communication system composed of a nanoantenna transmitter and a protein receiver. We adopt a Markov chain model to account for protein stochasticity with transition rates governed by the nanoantenna force. Both two-state and multi-state protein models are presented to depict different biological configurations. Closed form expressions for the mutual information of each scenario is derived and maximized to find the capacity between the input nanoantenna force and the protein state. The results we obtain indicate that controlled protein signaling provides a communication platform for information transmission between the nanoantenna and the protein with a clear physical significance. The analysis reported in this work should further research into the EM-based control of protein networks.
[ { "created": "Wed, 2 Sep 2020 13:34:10 GMT", "version": "v1" } ]
2020-09-03
[ [ "Elayan", "Hadeel", "" ], [ "Eckford", "Andrew W.", "" ], [ "Adve", "Raviraj", "" ] ]
In this work, we present a paradigm bridging electromagnetic (EM) and molecular communication through a stimuli-responsive intra-body model. It has been established that protein molecules, which play a key role in governing cell behavior, can be selectively stimulated using Terahertz (THz) band frequencies. By triggering protein vibrational modes using THz waves, we induce changes in protein conformation, resulting in the activation of a controlled cascade of biochemical and biomechanical events. To analyze such an interaction, we formulate a communication system composed of a nanoantenna transmitter and a protein receiver. We adopt a Markov chain model to account for protein stochasticity with transition rates governed by the nanoantenna force. Both two-state and multi-state protein models are presented to depict different biological configurations. Closed form expressions for the mutual information of each scenario is derived and maximized to find the capacity between the input nanoantenna force and the protein state. The results we obtain indicate that controlled protein signaling provides a communication platform for information transmission between the nanoantenna and the protein with a clear physical significance. The analysis reported in this work should further research into the EM-based control of protein networks.
1304.2468
Takumi Tsutaya
Takumi Tsutaya and Minoru Yoneda
WARN: an R package for quantitative reconstruction of weaning ages in archaeological populations using bone collagen nitrogen isotope ratios
18 pages, 1 table, and 3 figures
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nitrogen isotope analysis of bone collagen has been used to reconstruct the breastfeeding practices of archaeological human populations. However, weaning ages have been estimated subjectively because of a lack of both information on subadult bone collagen turnover rates and appropriate analytical models. Here, we present a model for analyzing cross-sectional delta-15N data of subadult bone collagen, which incorporates newly estimated bone collagen turnover rates and a framework of approximate Bayesian computation. Temporal changes in human subadult bone collagen turnover rates were estimated anew from data on tissue-level bone metabolism reported in previous studies. A model for reconstructing precise weaning ages was then developed, incorporating the estimated turnover rates. The model is presented as a new open source R package, WARN (Weaning Age Reconstruction with Nitrogen isotope analysis), which computes the age at the start and end of weaning, 15N-enrichment through maternal to infant tissue, and delta-15N value of collagen synthesized entirely from weaning foods with their posterior probabilities. A precise reconstruction of past breastfeeding and weaning practices over a wide range of time periods and geographic regions could make it possible to understand this unique feature of human life history and cultural diversity in infant feeding practices.
[ { "created": "Tue, 9 Apr 2013 06:44:37 GMT", "version": "v1" } ]
2013-04-10
[ [ "Tsutaya", "Takumi", "" ], [ "Yoneda", "Minoru", "" ] ]
Nitrogen isotope analysis of bone collagen has been used to reconstruct the breastfeeding practices of archaeological human populations. However, weaning ages have been estimated subjectively because of a lack of both information on subadult bone collagen turnover rates and appropriate analytical models. Here, we present a model for analyzing cross-sectional delta-15N data of subadult bone collagen, which incorporates newly estimated bone collagen turnover rates and a framework of approximate Bayesian computation. Temporal changes in human subadult bone collagen turnover rates were estimated anew from data on tissue-level bone metabolism reported in previous studies. A model for reconstructing precise weaning ages was then developed, incorporating the estimated turnover rates. The model is presented as a new open source R package, WARN (Weaning Age Reconstruction with Nitrogen isotope analysis), which computes the age at the start and end of weaning, 15N-enrichment through maternal to infant tissue, and delta-15N value of collagen synthesized entirely from weaning foods with their posterior probabilities. A precise reconstruction of past breastfeeding and weaning practices over a wide range of time periods and geographic regions could make it possible to understand this unique feature of human life history and cultural diversity in infant feeding practices.
1505.06956
Nadav M. Shnerb
Matan Danino and Nadav M. Shnerb
Age-abundance relationships for neutral communities
null
null
10.1103/PhysRevE.92.042706
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neutral models for the dynamics of a system of competing species are used, nowadays, to describe a wide variety of empirical communities. These models are used in many situations, ranging from population genetics and ecological biodiversity to macroevolution and cancer tumors. One of the main issues discussed within this framework is the relationships between the abundance of a species and its age. Here we provide a comprehensive analysis of the age-abundance relationships for fixed-size and growing communities. Explicit formulas for the average and the most likely age of a species with abundance $n$ are given, together with the full probability distribution function. We further discuss the universality of these results and their applicability to the tropical forest community.
[ { "created": "Tue, 26 May 2015 14:00:16 GMT", "version": "v1" } ]
2015-10-28
[ [ "Danino", "Matan", "" ], [ "Shnerb", "Nadav M.", "" ] ]
Neutral models for the dynamics of a system of competing species are used, nowadays, to describe a wide variety of empirical communities. These models are used in many situations, ranging from population genetics and ecological biodiversity to macroevolution and cancer tumors. One of the main issues discussed within this framework is the relationships between the abundance of a species and its age. Here we provide a comprehensive analysis of the age-abundance relationships for fixed-size and growing communities. Explicit formulas for the average and the most likely age of a species with abundance $n$ are given, together with the full probability distribution function. We further discuss the universality of these results and their applicability to the tropical forest community.
1606.00897
Stefan Bauer
Stefan Bauer and Nicolas Carion and Peter Sch\"uffler and Thomas Fuchs and Peter Wild and Joachim M. Buhmann
Multi-Organ Cancer Classification and Survival Analysis
null
null
null
null
q-bio.QM cs.LG q-bio.TO stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate and robust cell nuclei classification is the cornerstone for a wider range of tasks in digital and Computational Pathology. However, most machine learning systems require extensive labeling from expert pathologists for each individual problem at hand, with no or limited abilities for knowledge transfer between datasets and organ sites. In this paper we implement and evaluate a variety of deep neural network models and model ensembles for nuclei classification in renal cell cancer (RCC) and prostate cancer (PCa). We propose a convolutional neural network system based on residual learning which significantly improves over the state-of-the-art in cell nuclei classification. Finally, we show that the combination of tissue types during training improves not only classification accuracy but also overall survival analysis.
[ { "created": "Thu, 2 Jun 2016 21:09:00 GMT", "version": "v1" }, { "created": "Fri, 2 Dec 2016 20:06:14 GMT", "version": "v2" } ]
2016-12-05
[ [ "Bauer", "Stefan", "" ], [ "Carion", "Nicolas", "" ], [ "Schüffler", "Peter", "" ], [ "Fuchs", "Thomas", "" ], [ "Wild", "Peter", "" ], [ "Buhmann", "Joachim M.", "" ] ]
Accurate and robust cell nuclei classification is the cornerstone for a wider range of tasks in digital and Computational Pathology. However, most machine learning systems require extensive labeling from expert pathologists for each individual problem at hand, with no or limited abilities for knowledge transfer between datasets and organ sites. In this paper we implement and evaluate a variety of deep neural network models and model ensembles for nuclei classification in renal cell cancer (RCC) and prostate cancer (PCa). We propose a convolutional neural network system based on residual learning which significantly improves over the state-of-the-art in cell nuclei classification. Finally, we show that the combination of tissue types during training improves not only classification accuracy but also overall survival analysis.
1011.4134
John Rhodes
John A. Rhodes and Seth Sullivant
Identifiability of Large Phylogenetic Mixture Models
15 pages
null
null
null
q-bio.PE math.AG math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Phylogenetic mixture models are statistical models of character evolution allowing for heterogeneity. Each of the classes in some unknown partition of the characters may evolve by different processes, or even along different trees. The fundamental question of whether parameters of such a model are identifiable is difficult to address, due to the complexity of the parameterization. We analyze mixture models on large trees, with many mixture components, showing that both numerical and tree parameters are indeed identifiable in these models when all trees are the same. We also explore the extent to which our algebraic techniques can be employed to extend the result to mixtures on different trees.
[ { "created": "Thu, 18 Nov 2010 04:47:20 GMT", "version": "v1" } ]
2010-11-19
[ [ "Rhodes", "John A.", "" ], [ "Sullivant", "Seth", "" ] ]
Phylogenetic mixture models are statistical models of character evolution allowing for heterogeneity. Each of the classes in some unknown partition of the characters may evolve by different processes, or even along different trees. The fundamental question of whether parameters of such a model are identifiable is difficult to address, due to the complexity of the parameterization. We analyze mixture models on large trees, with many mixture components, showing that both numerical and tree parameters are indeed identifiable in these models when all trees are the same. We also explore the extent to which our algebraic techniques can be employed to extend the result to mixtures on different trees.
2107.05386
Herbert Sauro Dr
Herbert M. Sauro
The Practice of Ensuring Repeatable and Reproducible Computational Models
# figures
null
null
null
q-bio.OT q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent studies have shown that the majority of published computational models in systems biology and physiology are not repeatable or reproducible. There are a variety of reasons for this. One of the most likely reasons is that given how busy modern researchers are and the fact that no credit is given to authors for publishing repeatable work, it is inevitable that this will be the case. The situation can only be rectified when government agencies, universities and other research institutions change policies and that journals begin to insist that published work is in fact at least repeatable if not reproducible. In this chapter guidelines are described that can be used by researchers to help make sure their work is repeatable. A scoring system is suggested that authors can use to determine how well they are doing.
[ { "created": "Wed, 7 Jul 2021 19:59:04 GMT", "version": "v1" } ]
2021-07-13
[ [ "Sauro", "Herbert M.", "" ] ]
Recent studies have shown that the majority of published computational models in systems biology and physiology are not repeatable or reproducible. There are a variety of reasons for this. One of the most likely reasons is that given how busy modern researchers are and the fact that no credit is given to authors for publishing repeatable work, it is inevitable that this will be the case. The situation can only be rectified when government agencies, universities and other research institutions change policies and that journals begin to insist that published work is in fact at least repeatable if not reproducible. In this chapter guidelines are described that can be used by researchers to help make sure their work is repeatable. A scoring system is suggested that authors can use to determine how well they are doing.
1905.01540
Justin Yeakel
Uttam Bhat and Christopher P. Kempes and Justin D. Yeakel
Scaling of the risk landscape drives optimal life history strategies and the evolution of grazing
9 pages, 5 figures, 3 Supplementary Appendices, 2 Supplementary Figures
null
10.1073/pnas.1907998117
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Consumers face numerous risks that can be minimized by incorporating different life-history strategies. How much and when a consumer adds to its energetic reserves or invests in reproduction are key behavioral and physiological adaptations that structure much of how organisms interact. Here we develop a theoretical framework that explicitly accounts for stochastic fluctuations of an individual consumer's energetic reserves while foraging and reproducing on a landscape with resources that range from uniformly distributed to highly clustered. First, we show that optimal life-history strategies vary in response to changes in the mean productivity of the resource landscape, where depleted environments promote reproduction at lower energetic states, greater investment in each reproduction event, and smaller litter sizes. We then show that if resource variance scales with body size due to landscape clustering, consumers that forage for clustered foods are susceptible to strong Allee effects, increasing extinction risk. Finally, we show that the proposed relationship between consumer body size, resource clustering, and Allee effect-induced population instability offers key ecological insights into the evolution of large-bodied grazing herbivores from small-bodied browsing ancestors.
[ { "created": "Sat, 4 May 2019 18:53:29 GMT", "version": "v1" } ]
2022-10-12
[ [ "Bhat", "Uttam", "" ], [ "Kempes", "Christopher P.", "" ], [ "Yeakel", "Justin D.", "" ] ]
Consumers face numerous risks that can be minimized by incorporating different life-history strategies. How much and when a consumer adds to its energetic reserves or invests in reproduction are key behavioral and physiological adaptations that structure much of how organisms interact. Here we develop a theoretical framework that explicitly accounts for stochastic fluctuations of an individual consumer's energetic reserves while foraging and reproducing on a landscape with resources that range from uniformly distributed to highly clustered. First, we show that optimal life-history strategies vary in response to changes in the mean productivity of the resource landscape, where depleted environments promote reproduction at lower energetic states, greater investment in each reproduction event, and smaller litter sizes. We then show that if resource variance scales with body size due to landscape clustering, consumers that forage for clustered foods are susceptible to strong Allee effects, increasing extinction risk. Finally, we show that the proposed relationship between consumer body size, resource clustering, and Allee effect-induced population instability offers key ecological insights into the evolution of large-bodied grazing herbivores from small-bodied browsing ancestors.
2212.00741
Benjamin Hayden
Nisarg Desai, Praneet Bala, Rebecca Richardson, Jessica Raper, Jan Zimmermann, Benjamin Hayden
OpenApePose: a database of annotated ape photographs for pose estimation
null
null
null
null
q-bio.QM cs.CV
http://creativecommons.org/licenses/by/4.0/
Because of their close relationship with humans, non-human apes (chimpanzees, bonobos, gorillas, orangutans, and gibbons, including siamangs) are of great scientific interest. The goal of understanding their complex behavior would be greatly advanced by the ability to perform video-based pose tracking. Tracking, however, requires high-quality annotated datasets of ape photographs. Here we present OpenApePose, a new public dataset of 71,868 photographs, annotated with 16 body landmarks, of six ape species in naturalistic contexts. We show that a standard deep net (HRNet-W48) trained on ape photos can reliably track out-of-sample ape photos better than networks trained on monkeys (specifically, the OpenMonkeyPose dataset) and on humans (COCO) can. This trained network can track apes almost as well as the other networks can track their respective taxa, and models trained without one of the six ape species can track the held out species better than the monkey and human models can. Ultimately, the results of our analyses highlight the importance of large specialized databases for animal tracking systems and confirm the utility of our new ape database.
[ { "created": "Wed, 30 Nov 2022 16:53:18 GMT", "version": "v1" }, { "created": "Fri, 22 Sep 2023 14:53:04 GMT", "version": "v2" } ]
2023-09-25
[ [ "Desai", "Nisarg", "" ], [ "Bala", "Praneet", "" ], [ "Richardson", "Rebecca", "" ], [ "Raper", "Jessica", "" ], [ "Zimmermann", "Jan", "" ], [ "Hayden", "Benjamin", "" ] ]
Because of their close relationship with humans, non-human apes (chimpanzees, bonobos, gorillas, orangutans, and gibbons, including siamangs) are of great scientific interest. The goal of understanding their complex behavior would be greatly advanced by the ability to perform video-based pose tracking. Tracking, however, requires high-quality annotated datasets of ape photographs. Here we present OpenApePose, a new public dataset of 71,868 photographs, annotated with 16 body landmarks, of six ape species in naturalistic contexts. We show that a standard deep net (HRNet-W48) trained on ape photos can reliably track out-of-sample ape photos better than networks trained on monkeys (specifically, the OpenMonkeyPose dataset) and on humans (COCO) can. This trained network can track apes almost as well as the other networks can track their respective taxa, and models trained without one of the six ape species can track the held out species better than the monkey and human models can. Ultimately, the results of our analyses highlight the importance of large specialized databases for animal tracking systems and confirm the utility of our new ape database.
2201.01855
Xu Wang
XU Wang and Huan Zhao and Weiwei TU and Hao Li and Yu Sun and Xiaochen Bo
Graph Neural Networks for Double-Strand DNA Breaks Prediction
null
null
null
null
q-bio.QM cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Double-strand DNA breaks (DSBs) are a form of DNA damage that can cause abnormal chromosomal rearrangements. Recent technologies based on high-throughput experiments have obvious high costs and technical challenges. Therefore, we design a graph neural network based method to predict DSBs (GraphDSB), using DNA sequence features and chromosome structure information. In order to improve the expression ability of the model, we introduce Jumping Knowledge architecture and several effective structural encoding methods. The contribution of structural information to the prediction of DSBs is verified by the experiments on datasets from normal human epidermal keratinocytes (NHEK) and chronic myeloid leukemia cell line (K562), and the ablation studies further demonstrate the effectiveness of the designed components in the proposed GraphDSB framework. Finally, we use GNNExplainer to analyze the contribution of node features and topology to DSBs prediction, and prove the high contribution of 5-mer DNA sequence features and two chromatin interaction modes.
[ { "created": "Tue, 4 Jan 2022 08:40:08 GMT", "version": "v1" } ]
2022-01-07
[ [ "Wang", "XU", "" ], [ "Zhao", "Huan", "" ], [ "TU", "Weiwei", "" ], [ "Li", "Hao", "" ], [ "Sun", "Yu", "" ], [ "Bo", "Xiaochen", "" ] ]
Double-strand DNA breaks (DSBs) are a form of DNA damage that can cause abnormal chromosomal rearrangements. Recent technologies based on high-throughput experiments have obvious high costs and technical challenges. Therefore, we design a graph neural network based method to predict DSBs (GraphDSB), using DNA sequence features and chromosome structure information. In order to improve the expression ability of the model, we introduce Jumping Knowledge architecture and several effective structural encoding methods. The contribution of structural information to the prediction of DSBs is verified by the experiments on datasets from normal human epidermal keratinocytes (NHEK) and chronic myeloid leukemia cell line (K562), and the ablation studies further demonstrate the effectiveness of the designed components in the proposed GraphDSB framework. Finally, we use GNNExplainer to analyze the contribution of node features and topology to DSBs prediction, and prove the high contribution of 5-mer DNA sequence features and two chromatin interaction modes.
1204.3683
Balaji Sriram
Balaji Sriram and Pamela Reinagel
Strong surround antagonism in the dLGN of the awake rat
29 pages, 6 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classical center-surround antagonism in the early visual system is thought to serve important functions such as enhancing edge detection and increasing sparseness. The relative strength of the center and surround determine the specific computation achieved. For example, weak surrounds achieve low-pass spatial frequency filtering and are optimal for denoising when signal-to-noise ratio (SNR) is low. Balanced surrounds achieve band-pass spatial frequency filtering and are optimal for decorrelation of responses when SNR is high. Surround strength has been measured in the retina and dorsal Lateral Geniculate Nucleus (dLGN), primarily in anesthetized or ex vivo preparations. Here we revisit the center-surround architecture of dLGN neurons in the un-anesthetized rat. We report the spatial frequency tuning responses of N=47 neurons. We fit these tuning curves to a difference-of-Gaussians (DOG) model of the spatial receptive field. We find that some dLGN neurons in the awake rat (N=8/47) have weak surrounds. The majority of cells in our sample (N=29/47), however, have well-balanced center and surround strengths and band-pass tuning curves. We also observed several neurons (N=10/47) with notched or dual-band-pass tuning curves, a response class that has not been described previously. Within the space of circularly concentric DOG models, strong surrounds were necessary and sufficient to explain the dual-band-pass spatial frequency tuning of these cells. It remains to be determined what advantage if any is conferred by this novel response class, or by the heterogeneity of surround strength as such. We conclude that surround antagonism can be strong in the dLGN of the awake rat.
[ { "created": "Tue, 17 Apr 2012 01:46:27 GMT", "version": "v1" } ]
2012-04-18
[ [ "Sriram", "Balaji", "" ], [ "Reinagel", "Pamela", "" ] ]
Classical center-surround antagonism in the early visual system is thought to serve important functions such as enhancing edge detection and increasing sparseness. The relative strength of the center and surround determine the specific computation achieved. For example, weak surrounds achieve low-pass spatial frequency filtering and are optimal for denoising when signal-to-noise ratio (SNR) is low. Balanced surrounds achieve band-pass spatial frequency filtering and are optimal for decorrelation of responses when SNR is high. Surround strength has been measured in the retina and dorsal Lateral Geniculate Nucleus (dLGN), primarily in anesthetized or ex vivo preparations. Here we revisit the center-surround architecture of dLGN neurons in the un-anesthetized rat. We report the spatial frequency tuning responses of N=47 neurons. We fit these tuning curves to a difference-of-Gaussians (DOG) model of the spatial receptive field. We find that some dLGN neurons in the awake rat (N=8/47) have weak surrounds. The majority of cells in our sample (N=29/47), however, have well-balanced center and surround strengths and band-pass tuning curves. We also observed several neurons (N=10/47) with notched or dual-band-pass tuning curves, a response class that has not been described previously. Within the space of circularly concentric DOG models, strong surrounds were necessary and sufficient to explain the dual-band-pass spatial frequency tuning of these cells. It remains to be determined what advantage if any is conferred by this novel response class, or by the heterogeneity of surround strength as such. We conclude that surround antagonism can be strong in the dLGN of the awake rat.
2207.03523
Mikail Khona
Mikail Khona, Sarthak Chandra, Joy J. Ma, Ila Fiete
Winning the lottery with neural connectivity constraints: faster learning across cognitive tasks with spatially constrained sparse RNNs
12 pages, 5 main text figures
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Recurrent neural networks (RNNs) are often used to model circuits in the brain, and can solve a variety of difficult computational problems requiring memory, error-correction, or selection [Hopfield, 1982, Maass et al., 2002, Maass, 2011]. However, fully-connected RNNs contrast structurally with their biological counterparts, which are extremely sparse (~0.1%). Motivated by the neocortex, where neural connectivity is constrained by physical distance along cortical sheets and other synaptic wiring costs, we introduce locality masked RNNs (LM-RNNs) that utilize task-agnostic predetermined graphs with sparsity as low as 4%. We study LM-RNNs in a multitask learning setting relevant to cognitive systems neuroscience with a commonly used set of tasks, 20-Cog-tasks [Yang et al., 2019]. We show through reductio ad absurdum that 20-Cog-tasks can be solved by a small pool of separated autapses that we can mechanistically analyze and understand. Thus, these tasks fall short of the goal of inducing complex recurrent dynamics and modular structure in RNNs. We next contribute a new cognitive multi-task battery, Mod-Cog, consisting of up to 132 tasks that expands by 7-fold the number of tasks and task-complexity of 20-Cog-tasks. Importantly, while autapses can solve the simple 20-Cog-tasks, the expanded task-set requires richer neural architectures and continuous attractor dynamics. On these tasks, we show that LM-RNNs with an optimal sparsity result in faster training and better data-efficiency than fully connected networks.
[ { "created": "Thu, 7 Jul 2022 18:37:29 GMT", "version": "v1" }, { "created": "Mon, 29 May 2023 19:49:38 GMT", "version": "v2" } ]
2023-05-31
[ [ "Khona", "Mikail", "" ], [ "Chandra", "Sarthak", "" ], [ "Ma", "Joy J.", "" ], [ "Fiete", "Ila", "" ] ]
Recurrent neural networks (RNNs) are often used to model circuits in the brain, and can solve a variety of difficult computational problems requiring memory, error-correction, or selection [Hopfield, 1982, Maass et al., 2002, Maass, 2011]. However, fully-connected RNNs contrast structurally with their biological counterparts, which are extremely sparse (~0.1%). Motivated by the neocortex, where neural connectivity is constrained by physical distance along cortical sheets and other synaptic wiring costs, we introduce locality masked RNNs (LM-RNNs) that utilize task-agnostic predetermined graphs with sparsity as low as 4%. We study LM-RNNs in a multitask learning setting relevant to cognitive systems neuroscience with a commonly used set of tasks, 20-Cog-tasks [Yang et al., 2019]. We show through reductio ad absurdum that 20-Cog-tasks can be solved by a small pool of separated autapses that we can mechanistically analyze and understand. Thus, these tasks fall short of the goal of inducing complex recurrent dynamics and modular structure in RNNs. We next contribute a new cognitive multi-task battery, Mod-Cog, consisting of up to 132 tasks that expands by 7-fold the number of tasks and task-complexity of 20-Cog-tasks. Importantly, while autapses can solve the simple 20-Cog-tasks, the expanded task-set requires richer neural architectures and continuous attractor dynamics. On these tasks, we show that LM-RNNs with an optimal sparsity result in faster training and better data-efficiency than fully connected networks.
2306.16819
Kai Yi
Kai Yi, Bingxin Zhou, Yiqing Shen, Pietro Li\`o, Yu Guang Wang
Graph Denoising Diffusion for Inverse Protein Folding
NeurIPS 2023
null
null
null
q-bio.QM cs.AI
http://creativecommons.org/licenses/by/4.0/
Inverse protein folding is challenging due to its inherent one-to-many mapping characteristic, where numerous possible amino acid sequences can fold into a single, identical protein backbone. This task involves not only identifying viable sequences but also representing the sheer diversity of potential solutions. However, existing discriminative models, such as transformer-based auto-regressive models, struggle to encapsulate the diverse range of plausible solutions. In contrast, diffusion probabilistic models, as an emerging genre of generative approaches, offer the potential to generate a diverse set of sequence candidates for determined protein backbones. We propose a novel graph denoising diffusion model for inverse protein folding, where a given protein backbone guides the diffusion process on the corresponding amino acid residue types. The model infers the joint distribution of amino acids conditioned on the nodes' physiochemical properties and local environment. Moreover, we utilize amino acid replacement matrices for the diffusion forward process, encoding the biologically-meaningful prior knowledge of amino acids from their spatial and sequential neighbors as well as themselves, which reduces the sampling space of the generative process. Our model achieves state-of-the-art performance over a set of popular baseline methods in sequence recovery and exhibits great potential in generating diverse protein sequences for a determined protein backbone structure.
[ { "created": "Thu, 29 Jun 2023 09:55:30 GMT", "version": "v1" }, { "created": "Tue, 7 Nov 2023 08:28:11 GMT", "version": "v2" } ]
2023-11-08
[ [ "Yi", "Kai", "" ], [ "Zhou", "Bingxin", "" ], [ "Shen", "Yiqing", "" ], [ "Liò", "Pietro", "" ], [ "Wang", "Yu Guang", "" ] ]
Inverse protein folding is challenging due to its inherent one-to-many mapping characteristic, where numerous possible amino acid sequences can fold into a single, identical protein backbone. This task involves not only identifying viable sequences but also representing the sheer diversity of potential solutions. However, existing discriminative models, such as transformer-based auto-regressive models, struggle to encapsulate the diverse range of plausible solutions. In contrast, diffusion probabilistic models, as an emerging genre of generative approaches, offer the potential to generate a diverse set of sequence candidates for determined protein backbones. We propose a novel graph denoising diffusion model for inverse protein folding, where a given protein backbone guides the diffusion process on the corresponding amino acid residue types. The model infers the joint distribution of amino acids conditioned on the nodes' physiochemical properties and local environment. Moreover, we utilize amino acid replacement matrices for the diffusion forward process, encoding the biologically-meaningful prior knowledge of amino acids from their spatial and sequential neighbors as well as themselves, which reduces the sampling space of the generative process. Our model achieves state-of-the-art performance over a set of popular baseline methods in sequence recovery and exhibits great potential in generating diverse protein sequences for a determined protein backbone structure.
1603.03238
Pierre Casadebaig
Victor Picheny and Pierre Casadebaig and Ronan Tr\'epos and Robert Faivre and David Da Silva and Patrick Vincourt and Evelyne Costes
Using numerical plant models and phenotypic correlation space to design achievable ideotypes
25 pages, 5 figures, 2017, Plant, Cell and Environment
null
10.1111/pce.13001
null
q-bio.QM q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Numerical plant models can predict the outcome of plant trait modifications resulting from genetic variations, on plant performance, by simulating physiological processes and their interaction with the environment. Optimization methods complement those models to design ideotypes, i.e. ideal values of a set of plant traits resulting in optimal adaptation for given combinations of environment and management, mainly through the maximization of a performance criterion (e.g. yield, light interception). As use of simulation models gains momentum in plant breeding, numerical experiments must be carefully engineered to provide accurate and attainable results, rooting them in biological reality. Here, we propose a multi-objective optimization formulation that includes a metric of performance, returned by the numerical model, and a metric of feasibility, accounting for correlations between traits based on field observations. We applied this approach to two contrasting models: a process-based crop model of sunflower and a functional-structural plant model of apple trees. In both cases, the method successfully characterized key plant traits and identified a continuum of optimal solutions, ranging from the most feasible to the most efficient. The present study thus provides successful proof of concept for this enhanced modeling approach, which identified paths for desirable trait modification, including direction and intensity.
[ { "created": "Thu, 10 Mar 2016 12:43:04 GMT", "version": "v1" }, { "created": "Thu, 15 Sep 2016 12:33:11 GMT", "version": "v2" }, { "created": "Mon, 19 Jun 2017 07:42:58 GMT", "version": "v3" } ]
2017-06-20
[ [ "Picheny", "Victor", "" ], [ "Casadebaig", "Pierre", "" ], [ "Trépos", "Ronan", "" ], [ "Faivre", "Robert", "" ], [ "Da Silva", "David", "" ], [ "Vincourt", "Patrick", "" ], [ "Costes", "Evelyne", "" ] ]
Numerical plant models can predict the outcome of plant trait modifications resulting from genetic variations, on plant performance, by simulating physiological processes and their interaction with the environment. Optimization methods complement those models to design ideotypes, i.e. ideal values of a set of plant traits resulting in optimal adaptation for given combinations of environment and management, mainly through the maximization of a performance criterion (e.g. yield, light interception). As use of simulation models gains momentum in plant breeding, numerical experiments must be carefully engineered to provide accurate and attainable results, rooting them in biological reality. Here, we propose a multi-objective optimization formulation that includes a metric of performance, returned by the numerical model, and a metric of feasibility, accounting for correlations between traits based on field observations. We applied this approach to two contrasting models: a process-based crop model of sunflower and a functional-structural plant model of apple trees. In both cases, the method successfully characterized key plant traits and identified a continuum of optimal solutions, ranging from the most feasible to the most efficient. The present study thus provides successful proof of concept for this enhanced modeling approach, which identified paths for desirable trait modification, including direction and intensity.
2206.11233
Navid Ghassemi
Parisa Moridian, Navid Ghassemi, Mahboobeh Jafari, Salam Salloum-Asfar, Delaram Sadeghi, Marjane Khodatars, Afshin Shoeibi, Abbas Khosravi, Sai Ho Ling, Abdulhamit Subasi, Roohallah Alizadehsani, Juan M. Gorriz, Sara A Abdulla, U. Rajendra Acharya
Automatic autism spectrum disorder detection using artificial intelligence methods with MRI neuroimaging: A review
null
Moridian, et. al., Automatic autism spectrum disorder detection using artificial intelligence methods with MRI neuroimaging: A review, Frontiers in Molecular Neuroscience, Volume 15, 2022
10.3389/fnmol.2022.999605
null
q-bio.NC cs.LG eess.IV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Autism spectrum disorder (ASD) is a brain condition characterized by diverse signs and symptoms that appear in early childhood. ASD is also associated with communication deficits and repetitive behavior in affected individuals. Various ASD detection methods have been developed, including neuroimaging modalities and psychological tests. Among these methods, magnetic resonance imaging (MRI) imaging modalities are of paramount importance to physicians. Clinicians rely on MRI modalities to diagnose ASD accurately. The MRI modalities are non-invasive methods that include functional (fMRI) and structural (sMRI) neuroimaging methods. However, diagnosing ASD with fMRI and sMRI for specialists is often laborious and time-consuming; therefore, several computer-aided design systems (CADS) based on artificial intelligence (AI) have been developed to assist specialist physicians. Conventional machine learning (ML) and deep learning (DL) are the most popular schemes of AI used for diagnosing ASD. This study aims to review the automated detection of ASD using AI. We review several CADS that have been developed using ML techniques for the automated diagnosis of ASD using MRI modalities. There has been very limited work on the use of DL techniques to develop automated diagnostic models for ASD. A summary of the studies developed using DL is provided in the Supplementary Appendix. Then, the challenges encountered during the automated diagnosis of ASD using MRI and AI techniques are described in detail. Additionally, a graphical comparison of studies using ML and DL to diagnose ASD automatically is discussed. We suggest future approaches to detecting ASDs using AI techniques and MRI neuroimaging.
[ { "created": "Mon, 20 Jun 2022 16:14:21 GMT", "version": "v1" }, { "created": "Sun, 17 Jul 2022 09:39:33 GMT", "version": "v2" }, { "created": "Thu, 6 Oct 2022 15:58:56 GMT", "version": "v3" } ]
2022-10-07
[ [ "Moridian", "Parisa", "" ], [ "Ghassemi", "Navid", "" ], [ "Jafari", "Mahboobeh", "" ], [ "Salloum-Asfar", "Salam", "" ], [ "Sadeghi", "Delaram", "" ], [ "Khodatars", "Marjane", "" ], [ "Shoeibi", "Afshin", "" ], [ "Khosravi", "Abbas", "" ], [ "Ling", "Sai Ho", "" ], [ "Subasi", "Abdulhamit", "" ], [ "Alizadehsani", "Roohallah", "" ], [ "Gorriz", "Juan M.", "" ], [ "Abdulla", "Sara A", "" ], [ "Acharya", "U. Rajendra", "" ] ]
Autism spectrum disorder (ASD) is a brain condition characterized by diverse signs and symptoms that appear in early childhood. ASD is also associated with communication deficits and repetitive behavior in affected individuals. Various ASD detection methods have been developed, including neuroimaging modalities and psychological tests. Among these methods, magnetic resonance imaging (MRI) imaging modalities are of paramount importance to physicians. Clinicians rely on MRI modalities to diagnose ASD accurately. The MRI modalities are non-invasive methods that include functional (fMRI) and structural (sMRI) neuroimaging methods. However, diagnosing ASD with fMRI and sMRI for specialists is often laborious and time-consuming; therefore, several computer-aided design systems (CADS) based on artificial intelligence (AI) have been developed to assist specialist physicians. Conventional machine learning (ML) and deep learning (DL) are the most popular schemes of AI used for diagnosing ASD. This study aims to review the automated detection of ASD using AI. We review several CADS that have been developed using ML techniques for the automated diagnosis of ASD using MRI modalities. There has been very limited work on the use of DL techniques to develop automated diagnostic models for ASD. A summary of the studies developed using DL is provided in the Supplementary Appendix. Then, the challenges encountered during the automated diagnosis of ASD using MRI and AI techniques are described in detail. Additionally, a graphical comparison of studies using ML and DL to diagnose ASD automatically is discussed. We suggest future approaches to detecting ASDs using AI techniques and MRI neuroimaging.
2106.02785
Kenji Doya
Kenji Doya
Canonical Cortical Circuits and the Duality of Bayesian Inference and Optimal Control
13 pages, 3 figure
Current Opinion in Behavioral Sciences, 41, 160-166 (2021)
10.1016/j.cobeha.2021.07.003
null
q-bio.NC
http://creativecommons.org/licenses/by-sa/4.0/
The duality of sensory inference and motor control has been known since the 1960s and has recently been recognized as the commonality in computations required for the posterior distributions in Bayesian inference and the value functions in optimal control. Meanwhile, an intriguing question about the brain is why the entire neocortex shares a canonical six-layer architecture while its posterior and anterior halves are engaged in sensory processing and motor control, respectively. Here we consider the hypothesis that the sensory and motor cortical circuits implement the dual computations for Bayesian inference and optimal control, or perceptual and value-based decision making, respectively. We first review the classic duality of inference and control in linear quadratic systems and then review the correspondence between dynamic Bayesian inference and optimal control. Based on the architecture of the canonical cortical circuit, we explore how different cortical neurons may represent variables and implement computations.
[ { "created": "Sat, 5 Jun 2021 03:23:13 GMT", "version": "v1" }, { "created": "Sat, 3 Jul 2021 22:13:31 GMT", "version": "v2" } ]
2021-10-12
[ [ "Doya", "Kenji", "" ] ]
The duality of sensory inference and motor control has been known since the 1960s and has recently been recognized as the commonality in computations required for the posterior distributions in Bayesian inference and the value functions in optimal control. Meanwhile, an intriguing question about the brain is why the entire neocortex shares a canonical six-layer architecture while its posterior and anterior halves are engaged in sensory processing and motor control, respectively. Here we consider the hypothesis that the sensory and motor cortical circuits implement the dual computations for Bayesian inference and optimal control, or perceptual and value-based decision making, respectively. We first review the classic duality of inference and control in linear quadratic systems and then review the correspondence between dynamic Bayesian inference and optimal control. Based on the architecture of the canonical cortical circuit, we explore how different cortical neurons may represent variables and implement computations.
1810.04726
Sergei Maslov
Veronika Dubinkina, Yulia Fridman, Parth Pandey, and Sergei Maslov
Alternative stable states in a model of microbial community limited by multiple essential nutrients
null
null
null
null
q-bio.PE physics.bio-ph
http://creativecommons.org/licenses/by/4.0/
Microbial communities routinely have several alternative stable states observed for the same environmental parameters. Sudden and irreversible transitions between these states make external manipulation of these systems more complicated. To better understand the mechanisms and origins of multistability in microbial communities, we introduce and study a model of a microbial ecosystem colonized by multiple specialist species selected from a fixed pool. Growth of each species can be limited by essential nutrients of two types, e.g. carbon and nitrogen, each represented in the environment by multiple metabolites. We demonstrate that our model has an exponentially large number of potential stable states realized for different environmental parameters. Using game theoretical methods adapted from the stable marriage problem we predict all of these states based only on ranked lists of competitive abilities of species for each of the nutrients. We show that for every set of nutrient influxes, several mutually uninvadable stable states are generally feasible and we distinguish them based upon their dynamic stability. We further explore an intricate network of discontinuous transitions (regime shifts) between these alternative states both in the course of community assembly, or upon changes of nutrient influxes.
[ { "created": "Wed, 10 Oct 2018 19:44:47 GMT", "version": "v1" } ]
2018-10-12
[ [ "Dubinkina", "Veronika", "" ], [ "Fridman", "Yulia", "" ], [ "Pandey", "Parth", "" ], [ "Maslov", "Sergei", "" ] ]
Microbial communities routinely have several alternative stable states observed for the same environmental parameters. Sudden and irreversible transitions between these states make external manipulation of these systems more complicated. To better understand the mechanisms and origins of multistability in microbial communities, we introduce and study a model of a microbial ecosystem colonized by multiple specialist species selected from a fixed pool. Growth of each species can be limited by essential nutrients of two types, e.g. carbon and nitrogen, each represented in the environment by multiple metabolites. We demonstrate that our model has an exponentially large number of potential stable states realized for different environmental parameters. Using game theoretical methods adapted from the stable marriage problem we predict all of these states based only on ranked lists of competitive abilities of species for each of the nutrients. We show that for every set of nutrient influxes, several mutually uninvadable stable states are generally feasible and we distinguish them based upon their dynamic stability. We further explore an intricate network of discontinuous transitions (regime shifts) between these alternative states both in the course of community assembly, or upon changes of nutrient influxes.
0803.0635
Luca Sbano
Markus Kirkilionis and Luca Sbano
An Averaging Principle for Combined Interaction Graphs. Part I: Connectivity and Applications to Genetic Switches
To appear in Advances in Complex Systems
null
null
null
q-bio.MN math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Time-continuous dynamical systems defined on graphs are often used to model complex systems with many interacting components in a non-spatial context. In the reverse sense attaching meaningful dynamics to given 'interaction diagrams' is a central bottleneck problem in many application areas, especially in cell biology where various such diagrams with different conventions describing molecular regulation are presently in use. In most situations these diagrams can only be interpreted by the use of both discrete and continuous variables during the modelling process, corresponding to both deterministic and stochastic hybrid dynamics. The conventions in genetics are well-known, and therefore we use this field for illustration purposes. In [25] and [26] the authors showed that with the help of a multi-scale analysis stochastic systems with both continuous variables and finite state spaces can be approximated by dynamical systems whose leading order time evolution is given by a combination of ordinary differential equations (ODEs) and Markov chains. The leading order term in these dynamical systems is called average dynamics and turns out to be an adequate concept to analyse a class of simplified hybrid systems. Once the dynamics is defined the mutual interaction of both ODEs and Markov chains can be analysed through the (reverse) introduction of the so called Interaction Graph, a concept originally invented for time-continuous dynamical systems, see [5]. Here we transfer this graph concept to the average dynamics, which itself is introduced as an heuristic tool to construct models of reaction or contact networks. The graphical concepts introduced form the basis for any subsequent study of the qualitative properties of hybrid models in terms of connectivity and (feedback) loop formation.
[ { "created": "Wed, 5 Mar 2008 11:14:55 GMT", "version": "v1" }, { "created": "Tue, 18 Mar 2008 13:57:55 GMT", "version": "v2" }, { "created": "Thu, 1 Jul 2010 10:04:29 GMT", "version": "v3" } ]
2010-07-02
[ [ "Kirkilionis", "Markus", "" ], [ "Sbano", "Luca", "" ] ]
Time-continuous dynamical systems defined on graphs are often used to model complex systems with many interacting components in a non-spatial context. In the reverse sense attaching meaningful dynamics to given 'interaction diagrams' is a central bottleneck problem in many application areas, especially in cell biology where various such diagrams with different conventions describing molecular regulation are presently in use. In most situations these diagrams can only be interpreted by the use of both discrete and continuous variables during the modelling process, corresponding to both deterministic and stochastic hybrid dynamics. The conventions in genetics are well-known, and therefore we use this field for illustration purposes. In [25] and [26] the authors showed that with the help of a multi-scale analysis stochastic systems with both continuous variables and finite state spaces can be approximated by dynamical systems whose leading order time evolution is given by a combination of ordinary differential equations (ODEs) and Markov chains. The leading order term in these dynamical systems is called average dynamics and turns out to be an adequate concept to analyse a class of simplified hybrid systems. Once the dynamics is defined the mutual interaction of both ODEs and Markov chains can be analysed through the (reverse) introduction of the so called Interaction Graph, a concept originally invented for time-continuous dynamical systems, see [5]. Here we transfer this graph concept to the average dynamics, which itself is introduced as an heuristic tool to construct models of reaction or contact networks. The graphical concepts introduced form the basis for any subsequent study of the qualitative properties of hybrid models in terms of connectivity and (feedback) loop formation.
2303.15552
Christine Heitsch
Forrest Hurley and Christine Heitsch
RNAprofiling 2.0: Enhanced cluster analysis of structural ensembles
9 pages, 2 figures; supplement 6 pages, 3 figures, 1 table
null
10.1016/j.jmb.2023.168047
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Understanding the base pairing of an RNA sequence provides insight into its molecular structure. By mining suboptimal sampling data, RNAprofiling 1.0 identifies the dominant helices in low-energy secondary structures as features, organizes them into profiles which partition the Boltzmann sample, and highlights key similarities/differences among the most informative, i.e. selected, profiles in a graphical format. Version 2.0 enhances every step of this approach. First, the featured substructures are expanded from helices to stems. Second, profile selection includes low-frequency pairings similar to featured ones. In conjunction, these updates extend the utility of the method to sequences up to length 600, as evaluated over a sizable dataset. Third, relationships are visualized in a decision tree which highlights the most important structural differences. Finally, this cluster analysis is made accessible to experimental researchers in a portable format as an interactive webpage, permitting a much greater understanding of trade-offs among different possible base pairing combinations.
[ { "created": "Mon, 27 Mar 2023 19:10:37 GMT", "version": "v1" } ]
2023-03-29
[ [ "Hurley", "Forrest", "" ], [ "Heitsch", "Christine", "" ] ]
Understanding the base pairing of an RNA sequence provides insight into its molecular structure. By mining suboptimal sampling data, RNAprofiling 1.0 identifies the dominant helices in low-energy secondary structures as features, organizes them into profiles which partition the Boltzmann sample, and highlights key similarities/differences among the most informative, i.e. selected, profiles in a graphical format. Version 2.0 enhances every step of this approach. First, the featured substructures are expanded from helices to stems. Second, profile selection includes low-frequency pairings similar to featured ones. In conjunction, these updates extend the utility of the method to sequences up to length 600, as evaluated over a sizable dataset. Third, relationships are visualized in a decision tree which highlights the most important structural differences. Finally, this cluster analysis is made accessible to experimental researchers in a portable format as an interactive webpage, permitting a much greater understanding of trade-offs among different possible base pairing combinations.
1406.3828
Conor Smyth
Conor Smyth, Iva \v{S}pakulova, Owen Cotton-Barratt, Sajjad Rafiq, William Tapper, Rosanna Upstill-Goddard, John L. Hopper, Enes Makalic, Daniel F. Schmidt, Miroslav Kapuscinski, J\"org Fliege, Andrew Collins, Jacek Brodzki, Diana M. Eccles, Ben D. MacArthur
Genome disorder and breast cancer susceptibility
null
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many common diseases have a complex genetic basis in which large numbers of genetic variations combine with environmental and lifestyle factors to determine risk. However, quantifying such polygenic effects and their relationship to disease risk has been challenging. In order to address these difficulties we developed a global measure of the information content of an individual's genome relative to a reference population, which may be used to assess differences in global genome structure between cases and appropriate controls. Informally this measure, which we call relative genome information (RGI), quantifies the relative "disorder" of an individual's genome. In order to test its ability to predict disease risk we used RGI to compare single nucleotide polymorphism genotypes from two independent samples of women with early-onset breast cancer with three independent sets of controls. We found that RGI was significantly elevated in both sets of breast cancer cases in comparison with all three sets of controls, with disease risk rising sharply with RGI (odds ratio greater than 12 for the highest percentile RGI). Furthermore, we found that these differences are not due to associations with common variants at a small number of disease-associated loci, but rather are due to the combined associations of thousands of markers distributed throughout the genome. Our results indicate that the information content of an individual's genome may be used to measure the risk of a complex disease, and suggest that early-onset breast cancer has a strongly polygenic basis.
[ { "created": "Sun, 15 Jun 2014 15:47:35 GMT", "version": "v1" } ]
2014-06-17
[ [ "Smyth", "Conor", "" ], [ "Špakulova", "Iva", "" ], [ "Cotton-Barratt", "Owen", "" ], [ "Rafiq", "Sajjad", "" ], [ "Tapper", "William", "" ], [ "Upstill-Goddard", "Rosanna", "" ], [ "Hopper", "John L.", "" ], [ "Makalic", "Enes", "" ], [ "Schmidt", "Daniel F.", "" ], [ "Kapuscinski", "Miroslav", "" ], [ "Fliege", "Jörg", "" ], [ "Collins", "Andrew", "" ], [ "Brodzki", "Jacek", "" ], [ "Eccles", "Diana M.", "" ], [ "MacArthur", "Ben D.", "" ] ]
Many common diseases have a complex genetic basis in which large numbers of genetic variations combine with environmental and lifestyle factors to determine risk. However, quantifying such polygenic effects and their relationship to disease risk has been challenging. In order to address these difficulties we developed a global measure of the information content of an individual's genome relative to a reference population, which may be used to assess differences in global genome structure between cases and appropriate controls. Informally this measure, which we call relative genome information (RGI), quantifies the relative "disorder" of an individual's genome. In order to test its ability to predict disease risk we used RGI to compare single nucleotide polymorphism genotypes from two independent samples of women with early-onset breast cancer with three independent sets of controls. We found that RGI was significantly elevated in both sets of breast cancer cases in comparison with all three sets of controls, with disease risk rising sharply with RGI (odds ratio greater than 12 for the highest percentile RGI). Furthermore, we found that these differences are not due to associations with common variants at a small number of disease-associated loci, but rather are due to the combined associations of thousands of markers distributed throughout the genome. Our results indicate that the information content of an individual's genome may be used to measure the risk of a complex disease, and suggest that early-onset breast cancer has a strongly polygenic basis.
2211.01313
Michael Levin
Joel Grodstein, Michael Levin
Closing the Loop on Morphogenesis: A Mathematical Model of Morphogenesis by Closed-Loop Reaction-Diffusion
20 pages, 3 tables, 5 figures
null
null
null
q-bio.MN q-bio.CB q-bio.QM q-bio.SC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Morphogenesis, the establishment and repair of emergent complex anatomy by groups of cells, is a fascinating and biomedically-relevant problem. One of its most fascinating aspects is that a developing embryo can reliably recover from disturbances, such as splitting into twins. While this reliability implies some type of goal-seeking error minimization over a morphogenic field, there are many gaps with respect to detailed, constructive models of such a process being used to implement the collective intelligence of cellular swarms. We describe a closed-loop negative-feedback system for creating reaction-diffusion (RD) patterns with high reliability. It uses a cellular automaton to characterize a morphogen pattern, then compares it to a goal and adjusts accordingly, providing a framework for modeling anatomical homeostasis and robust generation of target morphologies. Specifically, we create an RD pattern with N repetitions, where N is easily changeable. Furthermore, the individual repetitions of the RD pattern can be easily stretched or shrunk under genetic control to create, e.g., some morphological features larger than others. Finally, the cellular automaton uses a computation wave that scans the morphogen pattern unidirectionally to characterize the features that the negative feedback then controls. By taking advantage of a prior process asymmetrically establishing planar polarity (e.g., head vs. tail), our automaton is greatly simplified. This work contributes to the exciting effort of understanding design principles of morphological computation, which can be used to understand evolved developmental mechanisms, manipulate them in regenerative medicine settings, or embed a degree of synthetic intelligence into novel bioengineered constructs.
[ { "created": "Wed, 2 Nov 2022 17:35:53 GMT", "version": "v1" } ]
2022-11-03
[ [ "Grodstein", "Joel", "" ], [ "Levin", "Michael", "" ] ]
Morphogenesis, the establishment and repair of emergent complex anatomy by groups of cells, is a fascinating and biomedically-relevant problem. One of its most fascinating aspects is that a developing embryo can reliably recover from disturbances, such as splitting into twins. While this reliability implies some type of goal-seeking error minimization over a morphogenic field, there are many gaps with respect to detailed, constructive models of such a process being used to implement the collective intelligence of cellular swarms. We describe a closed-loop negative-feedback system for creating reaction-diffusion (RD) patterns with high reliability. It uses a cellular automaton to characterize a morphogen pattern, then compares it to a goal and adjusts accordingly, providing a framework for modeling anatomical homeostasis and robust generation of target morphologies. Specifically, we create an RD pattern with N repetitions, where N is easily changeable. Furthermore, the individual repetitions of the RD pattern can be easily stretched or shrunk under genetic control to create, e.g., some morphological features larger than others. Finally, the cellular automaton uses a computation wave that scans the morphogen pattern unidirectionally to characterize the features that the negative feedback then controls. By taking advantage of a prior process asymmetrically establishing planar polarity (e.g., head vs. tail), our automaton is greatly simplified. This work contributes to the exciting effort of understanding design principles of morphological computation, which can be used to understand evolved developmental mechanisms, manipulate them in regenerative medicine settings, or embed a degree of synthetic intelligence into novel bioengineered constructs.
1805.09774
Dorian Florescu Dr
Dorian Florescu, Daniel Coca
Learning with precise spike times: A new decoding algorithm for liquid state machines
34 pages, 7 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is extensive evidence that biological neural networks encode information in the precise timing of the spikes generated and transmitted by neurons, which offers several advantages over rate-based codes. Here we adopt a vector space formulation of spike train sequences and introduce a new liquid state machine (LSM) network architecture and a new forward orthogonal regression algorithm to learn an input-output signal mapping or to decode the brain activity. The proposed algorithm uses precise spike timing to select the presynaptic neurons relevant to each learning task. We show that using precise spike timing to train the LSM and selecting the readout neurons leads to a significant increase in performance on binary classification tasks as well as in decoding neural activity from multielectrode array recordings, compared with what is achieved using the standard architecture and training method.
[ { "created": "Thu, 24 May 2018 16:52:59 GMT", "version": "v1" }, { "created": "Sat, 13 Jul 2019 22:50:09 GMT", "version": "v2" } ]
2019-07-16
[ [ "Florescu", "Dorian", "" ], [ "Coca", "Daniel", "" ] ]
There is extensive evidence that biological neural networks encode information in the precise timing of the spikes generated and transmitted by neurons, which offers several advantages over rate-based codes. Here we adopt a vector space formulation of spike train sequences and introduce a new liquid state machine (LSM) network architecture and a new forward orthogonal regression algorithm to learn an input-output signal mapping or to decode the brain activity. The proposed algorithm uses precise spike timing to select the presynaptic neurons relevant to each learning task. We show that using precise spike timing to train the LSM and selecting the readout neurons leads to a significant increase in performance on binary classification tasks as well as in decoding neural activity from multielectrode array recordings, compared with what is achieved using the standard architecture and training method.
2108.04820
Thi Ngan Dong
Thi Ngan Dong and Megha Khosla
MuCoMiD: A Multitask Convolutional Learning Framework for miRNA-Disease Association Prediction
null
null
null
null
q-bio.QM cs.LG
http://creativecommons.org/licenses/by/4.0/
Growing evidence from recent studies implies that microRNA or miRNA could serve as biomarkers in various complex human diseases. Since wet-lab experiments are expensive and time-consuming, computational techniques for miRNA-disease association prediction have attracted a lot of attention in recent years. Data scarcity is one of the major challenges in building reliable machine learning models. Data scarcity combined with the use of precalculated hand-crafted input features has led to problems of overfitting and data leakage. We overcome the limitations of existing works by proposing a novel multi-tasking graph convolution-based approach, which we refer to as MuCoMiD. MuCoMiD allows automatic feature extraction while incorporating knowledge from five heterogeneous biological information sources (interactions between miRNA/diseases and protein-coding genes (PCG), interactions between protein-coding genes, miRNA family information, and disease ontology) in a multi-task setting which is a novel perspective and has not been studied before. To effectively test the generalization capability of our model, we construct large-scale experiments on standard benchmark datasets as well as our proposed larger independent test sets and case studies. MuCoMiD shows an improvement of at least 3% in 5-fold CV evaluation on HMDDv2.0 and HMDDv3.0 datasets and at least 35% on larger independent test sets with unseen miRNA and diseases over state-of-the-art approaches. We share our code for reproducibility and future research at https://git.l3s.uni-hannover.de/dong/cmtt.
[ { "created": "Sun, 8 Aug 2021 10:01:46 GMT", "version": "v1" }, { "created": "Sun, 21 Nov 2021 13:57:22 GMT", "version": "v2" }, { "created": "Mon, 29 Nov 2021 09:37:28 GMT", "version": "v3" } ]
2021-11-30
[ [ "Dong", "Thi Ngan", "" ], [ "Khosla", "Megha", "" ] ]
Growing evidence from recent studies implies that microRNA or miRNA could serve as biomarkers in various complex human diseases. Since wet-lab experiments are expensive and time-consuming, computational techniques for miRNA-disease association prediction have attracted a lot of attention in recent years. Data scarcity is one of the major challenges in building reliable machine learning models. Data scarcity combined with the use of precalculated hand-crafted input features has led to problems of overfitting and data leakage. We overcome the limitations of existing works by proposing a novel multi-tasking graph convolution-based approach, which we refer to as MuCoMiD. MuCoMiD allows automatic feature extraction while incorporating knowledge from five heterogeneous biological information sources (interactions between miRNA/diseases and protein-coding genes (PCG), interactions between protein-coding genes, miRNA family information, and disease ontology) in a multi-task setting which is a novel perspective and has not been studied before. To effectively test the generalization capability of our model, we construct large-scale experiments on standard benchmark datasets as well as our proposed larger independent test sets and case studies. MuCoMiD shows an improvement of at least 3% in 5-fold CV evaluation on HMDDv2.0 and HMDDv3.0 datasets and at least 35% on larger independent test sets with unseen miRNA and diseases over state-of-the-art approaches. We share our code for reproducibility and future research at https://git.l3s.uni-hannover.de/dong/cmtt.
1401.3587
Mahashweta Basu
Mahashweta Basu
Communities of dense weighted networks: MicroRNA co-target network as an example
14 pages, 6 eps figures
null
null
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Complex networks are intrinsically modular. Resolving small modules is particularly difficult when the network is densely connected; wide variation of link weights invites additional complexities. In this article we present an algorithm to detect community structure in densely connected weighted networks. First, modularity of the network is calculated by erasing the links having weights smaller than a cutoff $q.$ Then one takes all the disjoint components obtained at $q=q_c,$ where the modularity is maximum, and modularizes the components individually using Newman and Girvan's algorithm for weighted networks. We show, taking the microRNA (miRNA) co-target network of Homo sapiens as an example, that this algorithm can reveal miRNA modules which are known to be relevant in a biological context.
[ { "created": "Wed, 15 Jan 2014 13:52:47 GMT", "version": "v1" } ]
2014-01-16
[ [ "Basu", "Mahashweta", "" ] ]
Complex networks are intrinsically modular. Resolving small modules is particularly difficult when the network is densely connected; wide variation of link weights invites additional complexities. In this article we present an algorithm to detect community structure in densely connected weighted networks. First, modularity of the network is calculated by erasing the links having weights smaller than a cutoff $q.$ Then one takes all the disjoint components obtained at $q=q_c,$ where the modularity is maximum, and modularizes the components individually using Newman and Girvan's algorithm for weighted networks. We show, taking the microRNA (miRNA) co-target network of Homo sapiens as an example, that this algorithm can reveal miRNA modules which are known to be relevant in a biological context.
2302.05338
Cameron Smith
Ximo Pechuan-Jorge, Raymond S. Puzio, Cameron Smith
Algebraic structure of hierarchic first-order reaction networks applicable to models of clone size distribution and stochastic gene expression
9 pages
null
null
null
q-bio.MN math-ph math.MP math.PR q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In biology, stochastic branching processes with a two-stage, hierarchical structure arise in the study of population dynamics, gene expression, and phylogenetic inference. These models have been commonly analyzed using generating functions, the method of characteristics and various perturbative approximations. Here we describe a general method for analyzing hierarchic first-order reaction networks using Lie theory. Crucially, we identify the fact that the Lie group associated to hierarchic reaction networks decomposes as a wreath product of the groups associated to the subnetworks of the independent and dependent types. After explaining the general method, we illustrate it on a model of population dynamics and the so-called two-state or telegraph model of single-gene transcription. Solutions to such processes provide essential input to downstream methods designed to attempt to infer parameters of these and related models.
[ { "created": "Thu, 2 Feb 2023 21:19:12 GMT", "version": "v1" } ]
2023-02-13
[ [ "Pechuan-Jorge", "Ximo", "" ], [ "Puzio", "Raymond S.", "" ], [ "Smith", "Cameron", "" ] ]
In biology, stochastic branching processes with a two-stage, hierarchical structure arise in the study of population dynamics, gene expression, and phylogenetic inference. These models have been commonly analyzed using generating functions, the method of characteristics and various perturbative approximations. Here we describe a general method for analyzing hierarchic first-order reaction networks using Lie theory. Crucially, we identify the fact that the Lie group associated to hierarchic reaction networks decomposes as a wreath product of the groups associated to the subnetworks of the independent and dependent types. After explaining the general method, we illustrate it on a model of population dynamics and the so-called two-state or telegraph model of single-gene transcription. Solutions to such processes provide essential input to downstream methods designed to attempt to infer parameters of these and related models.
1409.1838
Christopher Lester
Christopher Lester, Christian A. Yates, Michael B. Giles, Ruth E. Baker
An adaptive multi-level simulation algorithm for stochastic biological systems
23 pages
J. Chem. Phys. 142, 024113 (2015)
10.1063/1.4904980
null
q-bio.QM q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Though potentially more efficient computationally, these generate system statistics that suffer from significant bias unless $\tau$ is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method (Anderson and Higham, Multiscale Model. Simul. 2012) tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of $\tau$. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel, adaptive time-stepping approach where $\tau$ is chosen according to the stochastic behaviour of each sample path we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
[ { "created": "Fri, 5 Sep 2014 15:31:17 GMT", "version": "v1" }, { "created": "Fri, 12 Dec 2014 15:18:47 GMT", "version": "v2" } ]
2016-05-20
[ [ "Lester", "Christopher", "" ], [ "Yates", "Christian A.", "" ], [ "Giles", "Michael B.", "" ], [ "Baker", "Ruth E.", "" ] ]
Discrete-state, continuous-time Markov models are widely used in the modeling of biochemical reaction networks. Their complexity often precludes analytic solution, and we rely on stochastic simulation algorithms to estimate system statistics. The Gillespie algorithm is exact, but computationally costly as it simulates every single reaction. As such, approximate stochastic simulation algorithms such as the tau-leap algorithm are often used. Though potentially more efficient computationally, these generate system statistics that suffer from significant bias unless $\tau$ is relatively small, in which case the computational time can be comparable to that of the Gillespie algorithm. The multi-level method (Anderson and Higham, Multiscale Model. Simul. 2012) tackles this problem. A base estimator is computed using many (cheap) sample paths at low accuracy. The bias inherent in this estimator is then reduced using a number of corrections. Each correction term is estimated using a collection of paired sample paths where one path of each pair is generated at a higher accuracy compared to the other (and so more expensive). By sharing random variables between these paired paths the variance of each correction estimator can be reduced. This renders the multi-level method very efficient as only a relatively small number of paired paths are required to calculate each correction term. In the original multi-level method, each sample path is simulated using the tau-leap algorithm with a fixed value of $\tau$. This approach can result in poor performance when the reaction activity of a system changes substantially over the timescale of interest. By introducing a novel, adaptive time-stepping approach where $\tau$ is chosen according to the stochastic behaviour of each sample path we extend the applicability of the multi-level method to such cases. We demonstrate the efficiency of our method using a number of examples.
1403.6171
Timothy Phan
Timothy S. Phan, John K-J. Li
Propagation of Uncertainty and Analysis of Signal-to-Noise in Nonlinear Compliance Estimations of an Arterial System Model
Conference on Information Sciences and Systems (CISS) 2014
null
null
null
q-bio.QM q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The arterial system dynamically loads the heart through changes in arterial compliance. The pressure-volume relation of arteries is known to be nonlinear, but arterial compliance is often modeled as a constant value, due to ease of estimation and interpretation. Incorporating nonlinear arterial compliance affords insight into the continuous variations of arterial compliance in a cardiac cycle and its effects on the heart, as the arterial system is coupled with the left ventricle. We recently proposed a method for estimating nonlinear compliance parameters that yielded good results under various vasoactive states. This study examines the performance of the proposed method by quantifying the uncertainty of the method in the presence of noise and propagating the uncertainty through the system model to analyze its effects on model predictions. Kernel density estimation used within a bootstrap Monte Carlo simulation showed the method to be stable for various vasoactive states.
[ { "created": "Mon, 24 Mar 2014 22:24:57 GMT", "version": "v1" } ]
2014-03-26
[ [ "Phan", "Timothy S.", "" ], [ "Li", "John K-J.", "" ] ]
The arterial system dynamically loads the heart through changes in arterial compliance. The pressure-volume relation of arteries is known to be nonlinear, but arterial compliance is often modeled as a constant value, due to ease of estimation and interpretation. Incorporating nonlinear arterial compliance affords insight into the continuous variations of arterial compliance in a cardiac cycle and its effects on the heart, as the arterial system is coupled with the left ventricle. We recently proposed a method for estimating nonlinear compliance parameters that yielded good results under various vasoactive states. This study examines the performance of the proposed method by quantifying the uncertainty of the method in the presence of noise and propagating the uncertainty through the system model to analyze its effects on model predictions. Kernel density estimation used within a bootstrap Monte Carlo simulation showed the method to be stable for various vasoactive states.
1405.1611
Karsten Kruse
L. Wettmann, M. Bonny, K. Kruse
Bistable protein distributions in rod-shaped bacteria
13 pages, 6 figures
null
null
null
q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The distributions of many proteins in rod-shaped bacteria are far from homogeneous. Often they accumulate at the cell poles or in the cell center. At the same time, the copy number of proteins in a single cell is relatively small, making the patterns noisy. To explore limits to protein patterns due to molecular noise, we studied a generic mechanism for spontaneous polar protein assemblies in rod-shaped bacteria, which is based on cooperative binding of proteins to the cytoplasmic membrane. For mono-polar assemblies, we find that the switching time between the two poles increases exponentially with the cell length and with the protein number.
[ { "created": "Wed, 7 May 2014 14:16:24 GMT", "version": "v1" } ]
2014-05-08
[ [ "Wettmann", "L.", "" ], [ "Bonny", "M.", "" ], [ "Kruse", "K.", "" ] ]
The distributions of many proteins in rod-shaped bacteria are far from homogeneous. Often they accumulate at the cell poles or in the cell center. At the same time, the copy number of proteins in a single cell is relatively small, making the patterns noisy. To explore limits to protein patterns due to molecular noise, we studied a generic mechanism for spontaneous polar protein assemblies in rod-shaped bacteria, which is based on cooperative binding of proteins to the cytoplasmic membrane. For mono-polar assemblies, we find that the switching time between the two poles increases exponentially with the cell length and with the protein number.
2206.14706
Diego Ferreiro
Ignacio E. S\'anchez, Ezequiel A. Galpern, Mart\'in M. Garibaldi, Diego U. Ferreiro
Molecular information theory meets protein folding
33pages, 2 figures, plus supporting information
null
null
null
q-bio.BM cs.IT math.IT
http://creativecommons.org/licenses/by-nc-sa/4.0/
We propose an application of molecular information theory to analyze the folding of single domain proteins. We analyze results from various areas of protein science, such as sequence-based potentials, reduced amino acid alphabets, backbone configurational entropy, secondary structure content, residue burial layers, and mutational studies of protein stability changes. We found that the average information contained in the sequences of evolved proteins is very close to the average information needed to specify a fold ~2.2 $\pm$ 0.3 bits/(site operation). The effective alphabet size in evolved proteins equals the effective number of conformations of a residue in the compact unfolded state at around 5. We calculated an energy-to-information conversion efficiency upon folding of around 50%, lower than the theoretical limit of 70%, but much higher than that of human-built macroscopic machines. We propose a simple mapping between molecular information theory and energy landscape theory and explore the connections between sequence evolution, configurational entropy and the energetics of protein folding.
[ { "created": "Wed, 29 Jun 2022 15:16:10 GMT", "version": "v1" } ]
2022-06-30
[ [ "Sánchez", "Ignacio E.", "" ], [ "Galpern", "Ezequiel A.", "" ], [ "Garibaldi", "Martín M.", "" ], [ "Ferreiro", "Diego U.", "" ] ]
We propose an application of molecular information theory to analyze the folding of single domain proteins. We analyze results from various areas of protein science, such as sequence-based potentials, reduced amino acid alphabets, backbone configurational entropy, secondary structure content, residue burial layers, and mutational studies of protein stability changes. We found that the average information contained in the sequences of evolved proteins is very close to the average information needed to specify a fold ~2.2 $\pm$ 0.3 bits/(site operation). The effective alphabet size in evolved proteins equals the effective number of conformations of a residue in the compact unfolded state at around 5. We calculated an energy-to-information conversion efficiency upon folding of around 50%, lower than the theoretical limit of 70%, but much higher than that of human-built macroscopic machines. We propose a simple mapping between molecular information theory and energy landscape theory and explore the connections between sequence evolution, configurational entropy and the energetics of protein folding.
2004.06916
Edilson Arruda
L. Tarrataca, C.M. Dias, D. B. Haddad, and E. F. Arruda
Flattening the curves: on-off lock-down strategies for COVID-19 with an application to Brazil
null
null
10.1186/s13362-020-00098-w
null
q-bio.PE cs.LG cs.SY eess.SY q-bio.QM stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The current COVID-19 pandemic is affecting different countries in different ways. The assortment of reporting techniques alongside other issues, such as underreporting and budgetary constraints, makes predicting the spread and lethality of the virus a challenging task. This work attempts to gain a better understanding of how COVID-19 will affect one of the least studied countries, namely Brazil. Currently, several Brazilian states are in a state of lock-down. However, there is political pressure for these measures to be lifted. This work considers the impact that such a termination would have on how the virus evolves locally. This was done by extending the SEIR model with an on/off strategy. Given the simplicity of SEIR we also attempted to gain more insight by developing a neural regressor. We chose to employ features that current clinical studies have pinpointed as having a connection to the lethality of COVID-19. We discuss how this data can be processed in order to obtain a robust assessment.
[ { "created": "Wed, 15 Apr 2020 07:37:08 GMT", "version": "v1" } ]
2021-01-08
[ [ "Tarrataca", "L.", "" ], [ "Dias", "C. M.", "" ], [ "Haddad", "D. B.", "" ], [ "Arruda", "E. F.", "" ] ]
The current COVID-19 pandemic is affecting different countries in different ways. The assortment of reporting techniques alongside other issues, such as underreporting and budgetary constraints, makes predicting the spread and lethality of the virus a challenging task. This work attempts to gain a better understanding of how COVID-19 will affect one of the least studied countries, namely Brazil. Currently, several Brazilian states are in a state of lock-down. However, there is political pressure for these measures to be lifted. This work considers the impact that such a termination would have on how the virus evolves locally. This was done by extending the SEIR model with an on/off strategy. Given the simplicity of SEIR we also attempted to gain more insight by developing a neural regressor. We chose to employ features that current clinical studies have pinpointed as having a connection to the lethality of COVID-19. We discuss how this data can be processed in order to obtain a robust assessment.
1806.08634
Juan Eugenio Iglesias
Juan Eugenio Iglesias, Ricardo Insausti, Garikoitz Lerma-Usabiaga, Martina Bocchetta, Koen Van Leemput, Douglas N Greve, Andre van der Kouwe, Bruce Fischl, Cesar Caballero-Gaudes, Pedro M Paz-Alonso
A probabilistic atlas of the human thalamic nuclei combining ex vivo MRI and histology
null
null
null
null
q-bio.NC cs.CV physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human thalamus is a brain structure that comprises numerous, highly specific nuclei. Since these nuclei are known to have different functions and to be connected to different areas of the cerebral cortex, it is of great interest for the neuroimaging community to study their volume, shape and connectivity in vivo with MRI. In this study, we present a probabilistic atlas of the thalamic nuclei built using ex vivo brain MRI scans and histological data, as well as the application of the atlas to in vivo MRI segmentation. The atlas was built using manual delineation of 26 thalamic nuclei on the serial histology of 12 whole thalami from six autopsy samples, combined with manual segmentations of the whole thalamus and surrounding structures (caudate, putamen, hippocampus, etc.) made on in vivo brain MR data from 39 subjects. The 3D structure of the histological data and corresponding manual segmentations was recovered using the ex vivo MRI as reference frame, and stacks of blockface photographs acquired during the sectioning as intermediate target. The atlas, which was encoded as an adaptive tetrahedral mesh, shows a good agreement with previous histological studies of the thalamus in terms of volumes of representative nuclei. When applied to segmentation of in vivo scans using Bayesian inference, the atlas shows excellent test-retest reliability, robustness to changes in input MRI contrast, and ability to detect differential thalamic effects in subjects with Alzheimer's disease. The probabilistic atlas and companion segmentation tool are publicly available as part of the neuroimaging package FreeSurfer.
[ { "created": "Fri, 22 Jun 2018 12:42:37 GMT", "version": "v1" } ]
2018-06-25
[ [ "Iglesias", "Juan Eugenio", "" ], [ "Insausti", "Ricardo", "" ], [ "Lerma-Usabiaga", "Garikoitz", "" ], [ "Bocchetta", "Martina", "" ], [ "Van Leemput", "Koen", "" ], [ "Greve", "Douglas N", "" ], [ "van der Kouwe", "Andre", "" ], [ "Fischl", "Bruce", "" ], [ "Caballero-Gaudes", "Cesar", "" ], [ "Paz-Alonso", "Pedro M", "" ] ]
The human thalamus is a brain structure that comprises numerous, highly specific nuclei. Since these nuclei are known to have different functions and to be connected to different areas of the cerebral cortex, it is of great interest for the neuroimaging community to study their volume, shape and connectivity in vivo with MRI. In this study, we present a probabilistic atlas of the thalamic nuclei built using ex vivo brain MRI scans and histological data, as well as the application of the atlas to in vivo MRI segmentation. The atlas was built using manual delineation of 26 thalamic nuclei on the serial histology of 12 whole thalami from six autopsy samples, combined with manual segmentations of the whole thalamus and surrounding structures (caudate, putamen, hippocampus, etc.) made on in vivo brain MR data from 39 subjects. The 3D structure of the histological data and corresponding manual segmentations was recovered using the ex vivo MRI as reference frame, and stacks of blockface photographs acquired during the sectioning as intermediate target. The atlas, which was encoded as an adaptive tetrahedral mesh, shows a good agreement with previous histological studies of the thalamus in terms of volumes of representative nuclei. When applied to segmentation of in vivo scans using Bayesian inference, the atlas shows excellent test-retest reliability, robustness to changes in input MRI contrast, and ability to detect differential thalamic effects in subjects with Alzheimer's disease. The probabilistic atlas and companion segmentation tool are publicly available as part of the neuroimaging package FreeSurfer.
1804.01203
Min Xu
Yixiu Zhao, Xiangrui Zeng, Qiang Guo, Min Xu
An integration of fast alignment and maximum-likelihood methods for electron subtomogram averaging and classification
17 pages
Intelligent Systems for Molecular Biology (ISMB) 2018, Bioinformatics
10.1093/bioinformatics/bty267
null
q-bio.QM cs.CV stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motivation: Cellular Electron CryoTomography (CECT) is an emerging 3D imaging technique that visualizes subcellular organization of single cells at submolecular resolution and in near-native state. CECT captures large numbers of macromolecular complexes of highly diverse structures and abundances. However, the structural complexity and imaging limits complicate the systematic de novo structural recovery and recognition of these macromolecular complexes. Efficient and accurate reference-free subtomogram averaging and classification represent the most critical tasks for such analysis. Existing subtomogram alignment based methods are prone to the missing wedge effects and low signal-to-noise ratio (SNR). Moreover, existing maximum-likelihood based methods rely on integration operations, which are in principle computationally infeasible for accurate calculation. Results: Building on existing work, we propose an integrated method, the Fast Alignment Maximum Likelihood method (FAML), which uses fast subtomogram alignment to sample sub-optimal rigid transformations. The transformations are then used to approximate integrals for the maximum-likelihood update of subtomogram averages through an expectation-maximization algorithm. Our tests on simulated and experimental subtomograms showed that, compared to our previously developed fast alignment method (FA), FAML is significantly more robust to noise and missing wedge effects with moderate increases in computation cost. Besides, FAML performs well with significantly fewer input subtomograms when the FA method fails. Therefore, FAML can serve as a key component for improved construction of initial structural models from macromolecules captured by CECT.
[ { "created": "Wed, 4 Apr 2018 01:16:20 GMT", "version": "v1" } ]
2018-05-16
[ [ "Zhao", "Yixiu", "" ], [ "Zeng", "Xiangrui", "" ], [ "Guo", "Qiang", "" ], [ "Xu", "Min", "" ] ]
Motivation: Cellular Electron CryoTomography (CECT) is an emerging 3D imaging technique that visualizes subcellular organization of single cells at submolecular resolution and in near-native state. CECT captures large numbers of macromolecular complexes of highly diverse structures and abundances. However, the structural complexity and imaging limits complicate the systematic de novo structural recovery and recognition of these macromolecular complexes. Efficient and accurate reference-free subtomogram averaging and classification represent the most critical tasks for such analysis. Existing subtomogram alignment based methods are prone to the missing wedge effects and low signal-to-noise ratio (SNR). Moreover, existing maximum-likelihood based methods rely on integration operations, which are in principle computationally infeasible for accurate calculation. Results: Building on existing work, we propose an integrated method, the Fast Alignment Maximum Likelihood method (FAML), which uses fast subtomogram alignment to sample sub-optimal rigid transformations. The transformations are then used to approximate integrals for the maximum-likelihood update of subtomogram averages through an expectation-maximization algorithm. Our tests on simulated and experimental subtomograms showed that, compared to our previously developed fast alignment method (FA), FAML is significantly more robust to noise and missing wedge effects with moderate increases in computation cost. Besides, FAML performs well with significantly fewer input subtomograms when the FA method fails. Therefore, FAML can serve as a key component for improved construction of initial structural models from macromolecules captured by CECT.
1411.2136
Alexander Mathis
Alexander Mathis and Martin B. Stemmler and Andreas V.M. Herz
Probable nature of higher-dimensional symmetries underlying mammalian grid-cell activity patterns
12 pages, 6 figures
null
10.7554/eLife.05979
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lattices abound in nature - from the crystal structure of minerals to the honeycomb organization of ommatidia in the compound eye of insects. Such regular arrangements provide solutions for optimally dense packings, efficient resource distribution and cryptographic schemes, highlighting the importance of lattice theory in mathematics and physics, biology and economics, and computer science and coding theory. Do lattices also play a role in how the brain represents information? To answer this question, we focus on higher-dimensional stimulus domains, with particular emphasis on neural representations of the physical space explored by an animal. Using information theory, we ask how to optimize the spatial resolution of neuronal lattice codes. We show that the hexagonal activity patterns of grid cells found in the hippocampal formation of mammals navigating on a flat surface lead to the highest spatial resolution in a two-dimensional world. For species that move freely in a three-dimensional environment, the firing fields should be arranged along a face-centered cubic (FCC) lattice or an equally dense non-lattice variant thereof known as hexagonal close packing (HCP). This quantitative prediction could be tested experimentally in flying bats, arboreal monkeys, or cetaceans. More generally, our theoretical results suggest that the brain encodes higher-dimensional sensory or cognitive variables with populations of grid-cell-like neurons whose activity patterns exhibit lattice structures at multiple, nested scales.
[ { "created": "Sat, 8 Nov 2014 16:37:00 GMT", "version": "v1" } ]
2015-05-12
[ [ "Mathis", "Alexander", "" ], [ "Stemmler", "Martin B.", "" ], [ "Herz", "Andreas V. M.", "" ] ]
Lattices abound in nature - from the crystal structure of minerals to the honeycomb organization of ommatidia in the compound eye of insects. Such regular arrangements provide solutions for optimally dense packings, efficient resource distribution and cryptographic schemes, highlighting the importance of lattice theory in mathematics and physics, biology and economics, and computer science and coding theory. Do lattices also play a role in how the brain represents information? To answer this question, we focus on higher-dimensional stimulus domains, with particular emphasis on neural representations of the physical space explored by an animal. Using information theory, we ask how to optimize the spatial resolution of neuronal lattice codes. We show that the hexagonal activity patterns of grid cells found in the hippocampal formation of mammals navigating on a flat surface lead to the highest spatial resolution in a two-dimensional world. For species that move freely in a three-dimensional environment, the firing fields should be arranged along a face-centered cubic (FCC) lattice or an equally dense non-lattice variant thereof known as hexagonal close packing (HCP). This quantitative prediction could be tested experimentally in flying bats, arboreal monkeys, or cetaceans. More generally, our theoretical results suggest that the brain encodes higher-dimensional sensory or cognitive variables with populations of grid-cell-like neurons whose activity patterns exhibit lattice structures at multiple, nested scales.
0911.1066
Mireille Regnier
Anatoly Ivashchenko, Galina Boldina, Aizhan Turmagambetova, Mireille R\'egnier
Using profiles based on hydropathy properties to define essential regions for splicing
null
International Journal of Biological Sciences (2009) 10 p
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We define new profiles based on hydropathy properties and point out specific profiles for regions surrounding splice sites. We built a set T of flanking regions of genes with 1-3 introns from chromosomes 21 and 22. These genes contained 313 introns and 385 exons and were extracted from GenBank. They were used in order to define hydropathy profiles. Most human introns, around 99.66%, are likely to be U2-type introns. They have highly degenerate sequence motifs and many different sequences can function as U2-type splice sites. Our new profiles allow us to identify regions which have conserved biochemical features that are essential for recognition by the spliceosome. We have also found differences between hydropathy profiles for U2- and U12-type introns on sets of splice sites extracted from the SpliceRack database, in order to distinguish GT-AG introns belonging to the U2 and U12 types. Indeed, intron type cannot be simply determined by the dinucleotide termini. We show that there is a similarity of hydropathy profiles inside intron types. On the one hand, GT-AG and GC-AG introns belonging to the U2 type have similar hydropathy profiles, as do AT-AC and GT-AG introns belonging to the U12 type. On the other hand, hydropathy profiles of U2- and U12-type GT-AG introns are completely different. Finally, we define and compute a p-value; we compare our profiles with the profiles provided by a classical method, Pictogram.
[ { "created": "Thu, 5 Nov 2009 16:04:23 GMT", "version": "v1" } ]
2009-11-06
[ [ "Ivashchenko", "Anatoly", "" ], [ "Boldina", "Galina", "" ], [ "Turmagambetova", "Aizhan", "" ], [ "Régnier", "Mireille", "" ] ]
We define new profiles based on hydropathy properties and point out specific profiles for regions surrounding splice sites. We built a set T of flanking regions of genes with 1-3 introns from chromosomes 21 and 22. These genes contained 313 introns and 385 exons and were extracted from GenBank. They were used in order to define hydropathy profiles. Most human introns, around 99.66%, are likely to be U2-type introns. They have highly degenerate sequence motifs and many different sequences can function as U2-type splice sites. Our new profiles allow us to identify regions which have conserved biochemical features that are essential for recognition by the spliceosome. We have also found differences between hydropathy profiles for U2- and U12-type introns on sets of splice sites extracted from the SpliceRack database, in order to distinguish GT-AG introns belonging to the U2 and U12 types. Indeed, intron type cannot be simply determined by the dinucleotide termini. We show that there is a similarity of hydropathy profiles inside intron types. On the one hand, GT-AG and GC-AG introns belonging to the U2 type have similar hydropathy profiles, as do AT-AC and GT-AG introns belonging to the U12 type. On the other hand, hydropathy profiles of U2- and U12-type GT-AG introns are completely different. Finally, we define and compute a p-value; we compare our profiles with the profiles provided by a classical method, Pictogram.
1911.08188
Matteo De Rosa
Michela Bollati, Emanuele Scalone, Francesco Bon\`i, Eloise Mastrangelo, Toni Giorgino, Mario Milani, Matteo de Rosa
High-resolution crystal structure of gelsolin domain 2 in complex with the physiological calcium ion
null
Biochem Biophys Res Commun. 2019 Oct 8;518(1):94-99
10.1016/j.bbrc.2019.08.013
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
The second domain of gelsolin (G2) hosts mutations responsible for a hereditary form of amyloidosis. The active form of gelsolin is Ca2+-bound; it is also a dynamic protein, hence structural biologists often rely on the study of the isolated G2. However, the wild-type G2 structure that has been used so far in comparative studies is bound to a crystallographic Cd2+, in lieu of the physiological calcium. Here, we report the wild-type structure of G2 in complex with Ca2+, highlighting subtle ion-dependent differences. Previous findings on different G2 mutations are also briefly revisited in light of these results.
[ { "created": "Tue, 19 Nov 2019 10:08:27 GMT", "version": "v1" } ]
2019-11-20
[ [ "Bollati", "Michela", "" ], [ "Scalone", "Emanuele", "" ], [ "Bonì", "Francesco", "" ], [ "Mastrangelo", "Eloise", "" ], [ "Giorgino", "Toni", "" ], [ "Milani", "Mario", "" ], [ "de Rosa", "Matteo", "" ] ]
The second domain of gelsolin (G2) hosts mutations responsible for a hereditary form of amyloidosis. The active form of gelsolin is Ca2+-bound; it is also a dynamic protein, hence structural biologists often rely on the study of the isolated G2. However, the wild-type G2 structure that has been used so far in comparative studies is bound to a crystallographic Cd2+, in lieu of the physiological calcium. Here, we report the wild-type structure of G2 in complex with Ca2+, highlighting subtle ion-dependent differences. Previous findings on different G2 mutations are also briefly revisited in light of these results.
0810.4099
Vahid Rezania
Vahid Rezania and Jack Tuszynski
A first principle (3+1) dimensional model for microtubule polymerization
12 pages, 2 figures. Accepted in Physics Letters A
Physics Letters A 372 (2008) 7051--7056
10.1016/j.physleta.2008.10.038
null
q-bio.QM cond-mat.stat-mech q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we propose a microscopic model to study the polymerization of microtubules (MTs). Starting from fundamental reactions during MT assembly and disassembly processes, we systematically derive a nonlinear system of equations that determines the dynamics of microtubules in 3D. We found that the dynamics of a MT is mathematically expressed via a cubic-quintic nonlinear Schrodinger (NLS) equation. Interestingly, the generic 3D solution of the NLS equation exhibits linear growth and shortening in time as well as temporal fluctuations about a mean value which are qualitatively similar to the dynamic instability of MTs observed experimentally. By solving the equations numerically, we have found spatio-temporal patterns consistent with experimental observations.
[ { "created": "Wed, 22 Oct 2008 15:57:35 GMT", "version": "v1" } ]
2009-11-13
[ [ "Rezania", "Vahid", "" ], [ "Tuszynski", "Jack", "" ] ]
In this paper we propose a microscopic model to study the polymerization of microtubules (MTs). Starting from fundamental reactions during MT assembly and disassembly processes, we systematically derive a nonlinear system of equations that determines the dynamics of microtubules in 3D. We found that the dynamics of a MT is mathematically expressed via a cubic-quintic nonlinear Schrodinger (NLS) equation. Interestingly, the generic 3D solution of the NLS equation exhibits linear growth and shortening in time as well as temporal fluctuations about a mean value which are qualitatively similar to the dynamic instability of MTs observed experimentally. By solving the equations numerically, we have found spatio-temporal patterns consistent with experimental observations.
1312.1673
David McCandlish
David M. McCandlish, Charles L. Epstein, and Joshua B. Plotkin
Formal properties of the probability of fixation: identities, inequalities and approximations
Minor edits; added appendix
Theoretical Population Biology, 99: 98-113 (2015)
10.1016/j.tpb.2014.11.004
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The formula for the probability of fixation of a new mutation is widely used in theoretical population genetics and molecular evolution. Here we derive a series of identities, inequalities and approximations for the exact probability of fixation of a new mutation under the Moran process (equivalent results hold for the approximate probability of fixation for the Wright-Fisher process after an appropriate change of variables). We show that the behavior of the logarithm of the probability of fixation is particularly simple when the selection coefficient is measured as a difference of Malthusian fitnesses, and we exploit this simplicity to derive several inequalities and approximations. We also present a comprehensive comparison of both existing and new approximations for the probability of fixation, highlighting in particular approximations that result in a reversible Markov chain when used to model the dynamics of evolution under weak mutation.
[ { "created": "Thu, 5 Dec 2013 20:27:01 GMT", "version": "v1" }, { "created": "Tue, 28 Jan 2014 20:13:47 GMT", "version": "v2" }, { "created": "Wed, 26 Feb 2014 19:48:58 GMT", "version": "v3" }, { "created": "Thu, 10 Apr 2014 22:00:44 GMT", "version": "v4" } ]
2015-03-10
[ [ "McCandlish", "David M.", "" ], [ "Epstein", "Charles L.", "" ], [ "Plotkin", "Joshua B.", "" ] ]
The formula for the probability of fixation of a new mutation is widely used in theoretical population genetics and molecular evolution. Here we derive a series of identities, inequalities and approximations for the exact probability of fixation of a new mutation under the Moran process (equivalent results hold for the approximate probability of fixation for the Wright-Fisher process after an appropriate change of variables). We show that the behavior of the logarithm of the probability of fixation is particularly simple when the selection coefficient is measured as a difference of Malthusian fitnesses, and we exploit this simplicity to derive several inequalities and approximations. We also present a comprehensive comparison of both existing and new approximations for the probability of fixation, highlighting in particular approximations that result in a reversible Markov chain when used to model the dynamics of evolution under weak mutation.
1412.2786
Guowei Wei
Kristopher Opron and Kelin Xia and Guo-Wei Wei
Fast and Anisotropic Flexibility-Rigidity Index
10 figures and 50 references
Journal of Chemical Physics, 140(23), 234105, (2014)
10.1063/1.4882258
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The flexibility-rigidity index (FRI) is a newly proposed method for the construction of atomic rigidity functions. The FRI method analyzes protein rigidity and flexibility and is capable of predicting protein B-factors without resorting to matrix diagonalization. A fundamental assumption used in the FRI is that protein structures are uniquely determined by various internal and external interactions, while the protein functions, such as stability and flexibility, are solely determined by the structure. As such, one can predict protein flexibility without resorting to the protein interaction Hamiltonian. Consequently, bypassing the matrix diagonalization, the original FRI has a computational complexity of O(N^2). This work introduces a fast FRI (fFRI) algorithm for the flexibility analysis of large macromolecules. The proposed fFRI further reduces the computational complexity to O(N). Additionally, we propose anisotropic FRI (aFRI) algorithms for the analysis of protein collective dynamics. The aFRI algorithms admit adaptive Hessian matrices, from a completely global 3N*3N matrix to completely local 3*3 matrices. However, these local 3*3 matrices have much non-local correlation built in. Furthermore, we compare the accuracy and efficiency of FRI with some established approaches to flexibility analysis, namely, normal mode analysis (NMA) and the Gaussian network model (GNM). The accuracy of the FRI method is tested. The FRI, particularly the fFRI, is orders of magnitude more efficient and about 10% more accurate overall than some of the most popular methods in the field. The proposed fFRI is able to predict B-factors for alpha-carbons of the HIV virus capsid (313,236 residues) in less than 30 seconds on a single processor using only one core. Finally, we demonstrate the application of FRI and aFRI to protein domain analysis.
[ { "created": "Mon, 8 Dec 2014 21:46:23 GMT", "version": "v1" } ]
2014-12-10
[ [ "Opron", "Kristopher", "" ], [ "Xia", "Kelin", "" ], [ "Wei", "Guo-Wei", "" ] ]
The flexibility-rigidity index (FRI) is a newly proposed method for the construction of atomic rigidity functions. The FRI method analyzes protein rigidity and flexibility and is capable of predicting protein B-factors without resorting to matrix diagonalization. A fundamental assumption used in the FRI is that protein structures are uniquely determined by various internal and external interactions, while the protein functions, such as stability and flexibility, are solely determined by the structure. As such, one can predict protein flexibility without resorting to the protein interaction Hamiltonian. Consequently, bypassing the matrix diagonalization, the original FRI has a computational complexity of O(N^2). This work introduces a fast FRI (fFRI) algorithm for the flexibility analysis of large macromolecules. The proposed fFRI further reduces the computational complexity to O(N). Additionally, we propose anisotropic FRI (aFRI) algorithms for the analysis of protein collective dynamics. The aFRI algorithms admit adaptive Hessian matrices, from a completely global 3N*3N matrix to completely local 3*3 matrices. However, these local 3*3 matrices have much non-local correlation built in. Furthermore, we compare the accuracy and efficiency of FRI with some established approaches to flexibility analysis, namely, normal mode analysis (NMA) and the Gaussian network model (GNM). The accuracy of the FRI method is tested. The FRI, particularly the fFRI, is orders of magnitude more efficient and about 10% more accurate overall than some of the most popular methods in the field. The proposed fFRI is able to predict B-factors for alpha-carbons of the HIV virus capsid (313,236 residues) in less than 30 seconds on a single processor using only one core. Finally, we demonstrate the application of FRI and aFRI to protein domain analysis.
1609.07068
Yujiang Wang
Yujiang Wang, Andrew J Trevelyan, Antonio Valentin, Gonzalo Alarcon, Peter N Taylor, Marcus Kaiser
Mechanisms underlying different onset patterns of focal seizures
null
PLOS Computational Biology 2017 13(5): e1005475
10.1371/journal.pcbi.1005475
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Focal seizures are episodes of pathological brain activity that appear to arise from a localised area of the brain. The onset patterns of focal seizure activity have been studied intensively, and they have largely been distinguished into two types - low amplitude fast oscillations (LAF), or high amplitude spikes (HAS). Here we explore whether these two patterns arise from fundamentally different mechanisms. To this end, we use a previously established computational model of neocortical tissue, and validate it as an adequate model using clinical recordings of focal seizures. We then reproduce the two onset patterns in their most defining properties and investigate the possible mechanisms underlying the different focal seizure onset patterns in the model. We show that the two patterns are associated with different mechanisms at the spatial scale of a single ECoG electrode. The LAF onset is initiated by independent patches of localised activity, which slowly invade the surrounding tissue and coalesce over time. In contrast, the HAS onset is a global, systemic transition to a coexisting seizure state triggered by a local event. We find that such a global transition is enabled by an increase in the excitability of the "healthy" surrounding tissue, which by itself does not generate seizures, but can support seizure activity when incited. In our simulations, the difference in surrounding tissue excitability also offers a simple explanation of the clinically reported difference in surgical outcomes. Finally, we demonstrate in the model how changes in tissue excitability could be elucidated, in principle, using active stimulation. Taken together, our modelling results suggest that the excitability of the tissue surrounding the seizure core may play a determining role in the seizure onset pattern, as well as in the surgical outcome.
[ { "created": "Thu, 22 Sep 2016 17:03:08 GMT", "version": "v1" }, { "created": "Fri, 5 May 2017 13:20:15 GMT", "version": "v2" } ]
2017-05-08
[ [ "Wang", "Yujiang", "" ], [ "Trevelyan", "Andrew J", "" ], [ "Valentin", "Antonio", "" ], [ "Alarcon", "Gonzalo", "" ], [ "Taylor", "Peter N", "" ], [ "Kaiser", "Marcus", "" ] ]
Focal seizures are episodes of pathological brain activity that appear to arise from a localised area of the brain. The onset patterns of focal seizure activity have been studied intensively, and they have largely been distinguished into two types - low amplitude fast oscillations (LAF), or high amplitude spikes (HAS). Here we explore whether these two patterns arise from fundamentally different mechanisms. To this end, we use a previously established computational model of neocortical tissue, and validate it as an adequate model using clinical recordings of focal seizures. We then reproduce the two onset patterns in their most defining properties and investigate the possible mechanisms underlying the different focal seizure onset patterns in the model. We show that the two patterns are associated with different mechanisms at the spatial scale of a single ECoG electrode. The LAF onset is initiated by independent patches of localised activity, which slowly invade the surrounding tissue and coalesce over time. In contrast, the HAS onset is a global, systemic transition to a coexisting seizure state triggered by a local event. We find that such a global transition is enabled by an increase in the excitability of the "healthy" surrounding tissue, which by itself does not generate seizures, but can support seizure activity when incited. In our simulations, the difference in surrounding tissue excitability also offers a simple explanation of the clinically reported difference in surgical outcomes. Finally, we demonstrate in the model how changes in tissue excitability could be elucidated, in principle, using active stimulation. Taken together, our modelling results suggest that the excitability of the tissue surrounding the seizure core may play a determining role in the seizure onset pattern, as well as in the surgical outcome.
2104.08989
Sara Clifton
Kylie J Landa, Lauren M Mossman, Rachel J Whitaker, Zoi Rapti, Sara M Clifton
Phage-antibiotic synergy inhibited by temperate and chronic virus competition
23 pages, 5 figures, 3 tables. arXiv admin note: text overlap with arXiv:1911.07233
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
As antibiotic resistance grows more frequent for common bacterial infections, alternative treatment strategies such as phage therapy have become more widely studied in the medical field. While many studies have explored the efficacy of antibiotics, phage therapy, or synergistic combinations of phages and antibiotics, the impact of virus competition on the efficacy of antibiotic treatment has not yet been considered. Here, we model the synergy between antibiotics and two viral types, temperate and chronic, in controlling bacterial infections. We demonstrate that while combinations of antibiotics and temperate viruses exhibit synergy, competition between temperate and chronic viruses inhibits bacterial control with antibiotics. In fact, our model reveals that antibiotic treatment may counterintuitively increase the bacterial load when a large fraction of the bacteria develop antibiotic resistance.
[ { "created": "Mon, 19 Apr 2021 00:43:21 GMT", "version": "v1" } ]
2021-04-20
[ [ "Landa", "Kylie J", "" ], [ "Mossman", "Lauren M", "" ], [ "Whitaker", "Rachel J", "" ], [ "Rapti", "Zoi", "" ], [ "Clifton", "Sara M", "" ] ]
As antibiotic resistance grows more frequent for common bacterial infections, alternative treatment strategies such as phage therapy have become more widely studied in the medical field. While many studies have explored the efficacy of antibiotics, phage therapy, or synergistic combinations of phages and antibiotics, the impact of virus competition on the efficacy of antibiotic treatment has not yet been considered. Here, we model the synergy between antibiotics and two viral types, temperate and chronic, in controlling bacterial infections. We demonstrate that while combinations of antibiotics and temperate viruses exhibit synergy, competition between temperate and chronic viruses inhibits bacterial control with antibiotics. In fact, our model reveals that antibiotic treatment may counterintuitively increase the bacterial load when a large fraction of the bacteria develop antibiotic resistance.
1411.6341
Brandon Barker
Brandon Barker, Lin Xu, Zhenglong Gu
Dynamic Epistasis under Varying Environmental Perturbations
22 pages, 9 figures
PLoS ONE 10(1): e0114911
10.1371/journal.pone.0114911
null
q-bio.MN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Epistasis describes the phenomenon that mutations at different loci do not have independent effects with regard to certain phenotypes. Understanding the global epistatic landscape is vital for many genetic and evolutionary theories. Current knowledge of epistatic dynamics under multiple conditions is limited by the technological difficulties in experimentally screening epistatic relations among genes. We explored this issue by applying flux balance analysis to simulate epistatic landscapes under various environmental perturbations. Specifically, we looked at gene-gene epistatic interactions, where the mutations were assumed to occur in different genes. We predicted that epistasis tends to become more positive from glucose-abundant to nutrient-limiting conditions, indicating that selection might be less effective in removing deleterious mutations in the latter. We also observed a stable core of epistatic interactions in all tested conditions, as well as many epistatic interactions unique to each condition. Interestingly, genes in the stable epistatic interaction network are directly linked to most other genes, whereas genes with condition-specific epistasis form a scale-free network. Furthermore, genes with stable epistasis tend to have similar evolutionary rates, whereas this co-evolving relationship does not hold for genes with condition-specific epistasis. Our findings provide a novel genome-wide picture of epistatic dynamics under environmental perturbations.
[ { "created": "Mon, 24 Nov 2014 03:36:11 GMT", "version": "v1" } ]
2015-02-11
[ [ "Barker", "Brandon", "" ], [ "Xu", "Lin", "" ], [ "Gu", "Zhenglong", "" ] ]
Epistasis describes the phenomenon that mutations at different loci do not have independent effects with regard to certain phenotypes. Understanding the global epistatic landscape is vital for many genetic and evolutionary theories. Current knowledge of epistatic dynamics under multiple conditions is limited by the technological difficulties in experimentally screening epistatic relations among genes. We explored this issue by applying flux balance analysis to simulate epistatic landscapes under various environmental perturbations. Specifically, we looked at gene-gene epistatic interactions, where the mutations were assumed to occur in different genes. We predicted that epistasis tends to become more positive from glucose-abundant to nutrient-limiting conditions, indicating that selection might be less effective in removing deleterious mutations in the latter. We also observed a stable core of epistatic interactions in all tested conditions, as well as many epistatic interactions unique to each condition. Interestingly, genes in the stable epistatic interaction network are directly linked to most other genes, whereas genes with condition-specific epistasis form a scale-free network. Furthermore, genes with stable epistasis tend to have similar evolutionary rates, whereas this co-evolving relationship does not hold for genes with condition-specific epistasis. Our findings provide a novel genome-wide picture of epistatic dynamics under environmental perturbations.
2109.08898
Deeptajyoti Sen Dr.
Deeptajyoti Sen, Andrew Morozov, S. Ghorai and Malay Banerjee
Bifurcation analysis of the predator-prey model with the Allee effect in the predator
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
The use of predator-prey models in theoretical ecology has a long history, and the model equations have largely evolved since the original Lotka-Volterra system towards more realistic descriptions of the processes of predation, reproduction and mortality. One important aspect is the recognition of the fact that the growth of a population can be subject to an Allee effect, where the per capita growth rate increases with the population density. Including an Allee effect has been shown to fundamentally change predator-prey dynamics and strongly impact species persistence, but previous studies mostly focused on scenarios of an Allee effect in the prey population. Here we explore a predator-prey model with an ecologically important case of the Allee effect in the predator population, where it occurs in the numerical response of the predator without affecting its functional response. Biologically, this can result from various scenarios such as a lack of mating partners, sperm limitation and cooperative breeding mechanisms, among others. Unlike previous studies, we consider here a generic mathematical formulation of the Allee effect without specifying a concrete parameterisation of the functional form, and analyse the possible local bifurcations in the system. Further, we explore the global bifurcation structure of the model and its possible dynamical regimes for three different concrete parameterisations of the Allee effect. The model possesses a complex bifurcation structure: there can be multiple coexistence states including two stable limit cycles. Inclusion of the Allee effect in the predator generally has a destabilising effect on the coexistence equilibrium. We also show that regardless of the parameterisation of the Allee effect, enrichment of the environment will eventually result in extinction of the predator population.
[ { "created": "Sat, 18 Sep 2021 10:44:35 GMT", "version": "v1" } ]
2021-09-21
[ [ "Sen", "Deeptajyoti", "" ], [ "Morozov", "Andrew", "" ], [ "Ghorai", "S.", "" ], [ "Banerjee", "Malay", "" ] ]
The use of predator-prey models in theoretical ecology has a long history, and the model equations have largely evolved since the original Lotka-Volterra system towards more realistic descriptions of the processes of predation, reproduction and mortality. One important aspect is the recognition of the fact that the growth of a population can be subject to an Allee effect, where the per capita growth rate increases with the population density. Including an Allee effect has been shown to fundamentally change predator-prey dynamics and strongly impact species persistence, but previous studies mostly focused on scenarios of an Allee effect in the prey population. Here we explore a predator-prey model with an ecologically important case of the Allee effect in the predator population where it occurs in the numerical response of the predator without affecting its functional response. Biologically, this can result from various scenarios such as a lack of mating partners, sperm limitation and cooperative breeding mechanisms, among others. Unlike previous studies, we consider here a generic mathematical formulation of the Allee effect without specifying a concrete parameterisation of the functional form, and analyse the possible local bifurcations in the system. Further, we explore the global bifurcation structure of the model and its possible dynamical regimes for three different concrete parameterisations of the Allee effect. The model possesses a complex bifurcation structure: there can be multiple coexistence states including two stable limit cycles. Inclusion of the Allee effect in the predator generally has a destabilising effect on the coexistence equilibrium. We also show that regardless of the parameterisation of the Allee effect, enrichment of the environment will eventually result in extinction of the predator population.
1410.0930
Octavio Miramontes
Octavio Miramontes, Og DeSouza, Leticia Ribeiro Paiva, Alessandra Marins and Sirio Orozco
L\'evy flights and self-similar exploratory behaviour of termite workers: beyond model fitting
13 pages, 11 figures. Unrevised version. Final version to appear in Plos ONE
null
10.1371/journal.pone.0111183
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties --including L\'evy flights-- in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.
[ { "created": "Fri, 3 Oct 2014 18:06:02 GMT", "version": "v1" } ]
2015-06-23
[ [ "Miramontes", "Octavio", "" ], [ "DeSouza", "Og", "" ], [ "Paiva", "Leticia Ribeiro", "" ], [ "Marins", "Alessandra", "" ], [ "Orozco", "Sirio", "" ] ]
Animal movements have been related to optimal foraging strategies where self-similar trajectories are central. Most of the experimental studies done so far have focused mainly on fitting statistical models to data in order to test for movement patterns described by power-laws. Here we show by analyzing over half a million movement displacements that isolated termite workers actually exhibit a range of very interesting dynamical properties --including L\'evy flights-- in their exploratory behaviour. Going beyond the current trend of statistical model fitting alone, our study analyses anomalous diffusion and structure functions to estimate values of the scaling exponents describing displacement statistics. We evince the fractal nature of the movement patterns and show how the scaling exponents describing termite space exploration intriguingly comply with mathematical relations found in the physics of transport phenomena. By doing this, we rescue a rich variety of physical and biological phenomenology that can be potentially important and meaningful for the study of complex animal behavior and, in particular, for the study of how patterns of exploratory behaviour of individual social insects may impact not only their feeding demands but also nestmate encounter patterns and, hence, their dynamics at the social scale.
1411.6684
Feng Fu
Feng Fu, Martin A. Nowak, and Sebastian Bonhoeffer
Spatial heterogeneity in drug concentrations can facilitate the emergence of resistance to cancer therapy
Collaborations on further followup work are extremely welcome. Please see contact details at http://www.tb.ethz.ch/people/person-detail.html?persid=189998
null
10.1371/journal.pcbi.1004142
null
q-bio.PE q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Acquired resistance is one of the major barriers to successful cancer therapy. The development of resistance is commonly attributed to genetic heterogeneity. However, heterogeneity of drug penetration of the tumor microenvironment both on the microscopic level within solid tumors as well as on the macroscopic level across metastases may also contribute to acquired drug resistance. Here we use mathematical models to investigate the effect of drug heterogeneity on the probability of escape from treatment and time to resistance. Specifically we address scenarios with sufficiently efficient therapies that suppress growth of all preexisting genetic variants in the compartment with highest drug concentration. To study the joint effect of drug heterogeneity, growth rate, and evolution of resistance we analyze a multitype stochastic branching process describing growth of cancer cells in two compartments with different drug concentration and limited migration between compartments. We show that resistance is more likely to arise first in the low drug compartment and from there populate the high drug compartment. Moreover, we show that only below a threshold rate of cell migration does spatial heterogeneity accelerate resistance evolution, otherwise deterring drug resistance with excessively high migration rates. Our results provide new insights into understanding why cancers tend to quickly become resistant, and that cell migration and the presence of sanctuary sites with little drug exposure are essential to this end.
[ { "created": "Mon, 24 Nov 2014 23:19:50 GMT", "version": "v1" } ]
2015-08-19
[ [ "Fu", "Feng", "" ], [ "Nowak", "Martin A.", "" ], [ "Bonhoeffer", "Sebastian", "" ] ]
Acquired resistance is one of the major barriers to successful cancer therapy. The development of resistance is commonly attributed to genetic heterogeneity. However, heterogeneity of drug penetration of the tumor microenvironment both on the microscopic level within solid tumors as well as on the macroscopic level across metastases may also contribute to acquired drug resistance. Here we use mathematical models to investigate the effect of drug heterogeneity on the probability of escape from treatment and time to resistance. Specifically we address scenarios with sufficiently efficient therapies that suppress growth of all preexisting genetic variants in the compartment with highest drug concentration. To study the joint effect of drug heterogeneity, growth rate, and evolution of resistance we analyze a multitype stochastic branching process describing growth of cancer cells in two compartments with different drug concentration and limited migration between compartments. We show that resistance is more likely to arise first in the low drug compartment and from there populate the high drug compartment. Moreover, we show that only below a threshold rate of cell migration does spatial heterogeneity accelerate resistance evolution, otherwise deterring drug resistance with excessively high migration rates. Our results provide new insights into understanding why cancers tend to quickly become resistant, and that cell migration and the presence of sanctuary sites with little drug exposure are essential to this end.
2008.12473
Xianggen Liu
Xianggen Liu, Yunan Luo, Sen Song and Jian Peng
Pre-training of Graph Neural Network for Modeling Effects of Mutations on Protein-Protein Binding Affinity
null
null
10.1371/journal.pcbi.1009284
null
q-bio.BM cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modeling the effects of mutations on the binding affinity plays a crucial role in protein engineering and drug design. In this study, we develop a novel deep learning based framework, named GraphPPI, to predict the binding affinity changes upon mutations based on the features provided by a graph neural network (GNN). In particular, GraphPPI first employs a well-designed pre-training scheme to enforce the GNN to capture the features that are predictive of the effects of mutations on binding affinity in an unsupervised manner and then integrates these graphical features with gradient-boosting trees to perform the prediction. Experiments showed that, without any annotated signals, GraphPPI can capture meaningful patterns of the protein structures. Also, GraphPPI achieved new state-of-the-art performance in predicting the binding affinity changes upon both single- and multi-point mutations on five benchmark datasets. In-depth analyses also showed GraphPPI can accurately estimate the effects of mutations on the binding affinity between SARS-CoV-2 and its neutralizing antibodies. These results have established GraphPPI as a powerful and useful computational tool in the studies of protein design.
[ { "created": "Fri, 28 Aug 2020 04:07:39 GMT", "version": "v1" } ]
2021-09-15
[ [ "Liu", "Xianggen", "" ], [ "Luo", "Yunan", "" ], [ "Song", "Sen", "" ], [ "Peng", "Jian", "" ] ]
Modeling the effects of mutations on the binding affinity plays a crucial role in protein engineering and drug design. In this study, we develop a novel deep learning based framework, named GraphPPI, to predict the binding affinity changes upon mutations based on the features provided by a graph neural network (GNN). In particular, GraphPPI first employs a well-designed pre-training scheme to enforce the GNN to capture the features that are predictive of the effects of mutations on binding affinity in an unsupervised manner and then integrates these graphical features with gradient-boosting trees to perform the prediction. Experiments showed that, without any annotated signals, GraphPPI can capture meaningful patterns of the protein structures. Also, GraphPPI achieved new state-of-the-art performance in predicting the binding affinity changes upon both single- and multi-point mutations on five benchmark datasets. In-depth analyses also showed GraphPPI can accurately estimate the effects of mutations on the binding affinity between SARS-CoV-2 and its neutralizing antibodies. These results have established GraphPPI as a powerful and useful computational tool in the studies of protein design.
1603.01794
Pu Tian
Kai Wang, Lanru Liu and Pu Tian
Utility of potential energy span as an approximate free energy proxy
18 pages, 8 figures
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Free energy calculation is critical in predictive tasks such as protein folding, docking and design. However, rigorous calculation of free energy change is prohibitively expensive in these practical applications. The minimum potential energy is therefore widely utilized to approximate free energy. In this study, based on analysis of extensive molecular dynamics (MD) simulation trajectories of a few native globular proteins, we found that change of minimum and corresponding maximum potential energy terms exhibit a similar level of correlation with change of free energy. More importantly, we demonstrated that change of span (maximum - minimum) of potential energy terms, which engenders negligible additional computational cost, exhibits considerably stronger correlations with change of free energy than the corresponding change of minimum and maximum potential energy terms. Therefore, potential energy span may serve as an alternative efficient approximate free energy proxy.
[ { "created": "Sun, 6 Mar 2016 06:26:28 GMT", "version": "v1" } ]
2016-03-08
[ [ "Wang", "Kai", "" ], [ "Liu", "Lanru", "" ], [ "Tian", "Pu", "" ] ]
Free energy calculation is critical in predictive tasks such as protein folding, docking and design. However, rigorous calculation of free energy change is prohibitively expensive in these practical applications. The minimum potential energy is therefore widely utilized to approximate free energy. In this study, based on analysis of extensive molecular dynamics (MD) simulation trajectories of a few native globular proteins, we found that change of minimum and corresponding maximum potential energy terms exhibit a similar level of correlation with change of free energy. More importantly, we demonstrated that change of span (maximum - minimum) of potential energy terms, which engenders negligible additional computational cost, exhibits considerably stronger correlations with change of free energy than the corresponding change of minimum and maximum potential energy terms. Therefore, potential energy span may serve as an alternative efficient approximate free energy proxy.
2304.14799
Agustina Fragueiro
Agustina Fragueiro (EMPENN), Giorgia Committeri (Ud'A), Claire Cury (EMPENN)
Incomplete hippocampal inversion and hippocampal subfield volumes: Implementation and inter-reliability of automatic segmentation
null
null
null
null
q-bio.NC eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The incomplete hippocampal inversion (IHI) is an atypical anatomical pattern of the hippocampus. However, the hippocampus is not a homogeneous structure, as it consists of segregated subfields with specific characteristics. While IHI is not related to whole hippocampal volume, higher IHI scores have been associated with smaller CA1 in aging. Although the segmentation of hippocampal subfields is challenging due to their small size, there are algorithms allowing their automatic segmentation. By using a Human Connectome Project dataset of healthy young adults, we first tested the inter-reliability of two methods for automatic segmentation of hippocampal subfields, and secondly, we explored the relationship between IHI and subfield volumes. Results evidenced strong correlations between volumes obtained through both segmentation methods. Furthermore, higher IHI scores were associated with bigger subiculum and smaller CA1 volumes. Here, we provide new insights regarding IHI subfields volumetry, and we offer support for automatic segmentation inter-method reliability.
[ { "created": "Fri, 28 Apr 2023 12:16:56 GMT", "version": "v1" } ]
2023-05-01
[ [ "Fragueiro", "Agustina", "", "EMPENN" ], [ "Committeri", "Giorgia", "", "Ud'A" ], [ "Cury", "Claire", "", "EMPENN" ] ]
The incomplete hippocampal inversion (IHI) is an atypical anatomical pattern of the hippocampus. However, the hippocampus is not a homogeneous structure, as it consists of segregated subfields with specific characteristics. While IHI is not related to whole hippocampal volume, higher IHI scores have been associated with smaller CA1 in aging. Although the segmentation of hippocampal subfields is challenging due to their small size, there are algorithms allowing their automatic segmentation. By using a Human Connectome Project dataset of healthy young adults, we first tested the inter-reliability of two methods for automatic segmentation of hippocampal subfields, and secondly, we explored the relationship between IHI and subfield volumes. Results evidenced strong correlations between volumes obtained through both segmentation methods. Furthermore, higher IHI scores were associated with bigger subiculum and smaller CA1 volumes. Here, we provide new insights regarding IHI subfields volumetry, and we offer support for automatic segmentation inter-method reliability.
0807.4765
Corey S. O'Hern
Aitziber L. Cortajarena, Gregg Lois, Eilon Sherman, Corey S. O'Hern, Lynne Regan, and Gilad Haran
Non-random coil behavior as a consequence of extensive PPII structure in the denatured state
32 pages, 4 figures, 1 table
J. Mol. Biol. 382, 203 (2008)
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unfolded proteins may contain native or non-native residual structure, which has important implications for the thermodynamics and kinetics of folding as well as for misfolding and aggregation diseases. However, it has been universally accepted that residual structure should not affect the global size scaling of the denatured chain, which obeys the statistics of random coil polymers. Here we use a single-molecule optical technique, fluorescence correlation spectroscopy, to probe the denatured state of a set of repeat proteins containing an increasing number of identical domains, from two to twenty. The availability of this set allows us to obtain the scaling law for the unfolded state of these proteins, which turns out to be unusually compact, strongly deviating from random-coil statistics. The origin of this unexpected behavior is traced to the presence of extensive non-native polyproline II helical structure, which we localize to specific segments of the polypeptide chain. We show that the experimentally observed effects of PPII on the size scaling of the denatured state can be well-described by simple polymer models. Our findings suggest a hitherto unforeseen potential of non-native structure to induce significant compaction of denatured proteins, significantly affecting folding pathways and kinetics.
[ { "created": "Wed, 30 Jul 2008 01:26:19 GMT", "version": "v1" } ]
2008-09-12
[ [ "Cortajarena", "Aitziber L.", "" ], [ "Lois", "Gregg", "" ], [ "Sherman", "Eilon", "" ], [ "O'Hern", "Corey S.", "" ], [ "Regan", "Lynne", "" ], [ "Haran", "Gilad", "" ] ]
Unfolded proteins may contain native or non-native residual structure, which has important implications for the thermodynamics and kinetics of folding as well as for misfolding and aggregation diseases. However, it has been universally accepted that residual structure should not affect the global size scaling of the denatured chain, which obeys the statistics of random coil polymers. Here we use a single-molecule optical technique, fluorescence correlation spectroscopy, to probe the denatured state of a set of repeat proteins containing an increasing number of identical domains, from two to twenty. The availability of this set allows us to obtain the scaling law for the unfolded state of these proteins, which turns out to be unusually compact, strongly deviating from random-coil statistics. The origin of this unexpected behavior is traced to the presence of extensive non-native polyproline II helical structure, which we localize to specific segments of the polypeptide chain. We show that the experimentally observed effects of PPII on the size scaling of the denatured state can be well-described by simple polymer models. Our findings suggest a hitherto unforeseen potential of non-native structure to induce significant compaction of denatured proteins, significantly affecting folding pathways and kinetics.
2206.15158
Gerald Cooray PhD
Gerald Cooray, Richard Rosch, Karl Friston
Global dynamics of neural mass models
40 pages, 8 figures
null
null
null
q-bio.NC q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Neural mass models are used to simulate cortical dynamics and to explain the electrical and magnetic fields measured using electro- and magnetoencephalography. Simulations evince a complex phase-space structure for these kinds of models; including stationary points and limit cycles and the possibility for bifurcations and transitions among different modes of activity. This complexity allows neural mass models to describe the itinerant features of brain dynamics. However, expressive, nonlinear neural mass models are often difficult to fit to empirical data without additional simplifying assumptions: e.g., that the system can be modelled as linear perturbations around a fixed point. In this study we offer a mathematical analysis of neural mass models, specifically the canonical microcircuit model, providing analytical solutions describing dynamical itinerancy. We derive a perturbation analysis up to second order of the phase flow, together with adiabatic approximations. This allows us to describe amplitude modulations as gradient flows on a potential function of intrinsic connectivity. These results provide analytic proof-of-principle for the existence of semi-stable states of cortical dynamics at the scale of a cortical column. Crucially, this work allows for model inversion of neural mass models, not only around fixed points, but over regions of phase space that encompass transitions among semi- or multi-stable states of oscillatory activity. In principle, this formulation of cortical dynamics may improve our understanding of the itinerancy that underwrites measures of cortical activity (through EEG or MEG). Crucially, these theoretical results speak to model inversion in the context of multiple semi-stable brain states, such as onset of seizure activity in epilepsy or beta bursts in Parkinson's disease.
[ { "created": "Thu, 30 Jun 2022 09:44:15 GMT", "version": "v1" }, { "created": "Tue, 23 Aug 2022 19:16:54 GMT", "version": "v2" } ]
2022-08-25
[ [ "Cooray", "Gerald", "" ], [ "Rosch", "Richard", "" ], [ "Friston", "Karl", "" ] ]
Neural mass models are used to simulate cortical dynamics and to explain the electrical and magnetic fields measured using electro- and magnetoencephalography. Simulations evince a complex phase-space structure for these kinds of models; including stationary points and limit cycles and the possibility for bifurcations and transitions among different modes of activity. This complexity allows neural mass models to describe the itinerant features of brain dynamics. However, expressive, nonlinear neural mass models are often difficult to fit to empirical data without additional simplifying assumptions: e.g., that the system can be modelled as linear perturbations around a fixed point. In this study we offer a mathematical analysis of neural mass models, specifically the canonical microcircuit model, providing analytical solutions describing dynamical itinerancy. We derive a perturbation analysis up to second order of the phase flow, together with adiabatic approximations. This allows us to describe amplitude modulations as gradient flows on a potential function of intrinsic connectivity. These results provide analytic proof-of-principle for the existence of semi-stable states of cortical dynamics at the scale of a cortical column. Crucially, this work allows for model inversion of neural mass models, not only around fixed points, but over regions of phase space that encompass transitions among semi- or multi-stable states of oscillatory activity. In principle, this formulation of cortical dynamics may improve our understanding of the itinerancy that underwrites measures of cortical activity (through EEG or MEG). Crucially, these theoretical results speak to model inversion in the context of multiple semi-stable brain states, such as onset of seizure activity in epilepsy or beta bursts in Parkinson's disease.
2405.07858
Prajwal Ghimire
Prajwal Ghimire, Ben Kinnersley, Golestan Karami, Prabhu Arumugam, Richard Houlston, Keyoumars Ashkan, Marc Modat, Thomas C Booth
Radiogenomic biomarkers for immunotherapy in glioblastoma: A systematic review of magnetic resonance imaging studies
Published in Neuro-Oncology Advances 2024
null
10.1093/noajnl/vdae055
null
q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Immunotherapy is an effective precision medicine treatment for several cancers. Imaging signatures of the underlying genome (radiogenomics) in glioblastoma patients may serve as preoperative biomarkers of the tumor-host immune apparatus. Validated biomarkers would have the potential to stratify patients during immunotherapy clinical trials, and if trials are beneficial, facilitate personalized neo-adjuvant treatment. The increased use of whole genome sequencing data, and the advances in bioinformatics and machine learning make such developments plausible. We performed a systematic review to determine the extent of development and validation of immune-related radiogenomic biomarkers for glioblastoma. A systematic review was performed following PRISMA guidelines using the PubMed, Medline, and Embase databases. Qualitative analysis was performed by incorporating the QUADAS 2 tool and CLAIM checklist. PROSPERO registered CRD42022340968. Extracted data were insufficiently homogeneous to perform a meta-analysis. Results: Nine studies, all retrospective, were included. Biomarkers extracted from magnetic resonance imaging volumes of interest included apparent diffusion coefficient values, relative cerebral blood volume values, and image-derived features. These biomarkers correlated with genomic markers from tumor cells or immune cells or with patient survival. The majority of studies had a high risk of bias and applicability concerns regarding the index test performed. Radiogenomic immune biomarkers have the potential to provide early treatment options to patients with glioblastoma. Targeted immunotherapy, stratified by these biomarkers, has the potential to allow individualized neo-adjuvant precision treatment options in clinical trials. However, there are no prospective studies validating these biomarkers, and interpretation is limited due to study bias with little evidence of generalizability.
[ { "created": "Mon, 13 May 2024 15:44:40 GMT", "version": "v1" } ]
2024-05-14
[ [ "Ghimire", "Prajwal", "" ], [ "Kinnersley", "Ben", "" ], [ "Karami", "Golestan", "" ], [ "Arumugam", "Prabhu", "" ], [ "Houlston", "Richard", "" ], [ "Ashkan", "Keyoumars", "" ], [ "Modat", "Marc", "" ], [ "Booth", "Thomas C", "" ] ]
Immunotherapy is an effective precision medicine treatment for several cancers. Imaging signatures of the underlying genome (radiogenomics) in glioblastoma patients may serve as preoperative biomarkers of the tumor-host immune apparatus. Validated biomarkers would have the potential to stratify patients during immunotherapy clinical trials, and if trials are beneficial, facilitate personalized neo-adjuvant treatment. The increased use of whole genome sequencing data, and the advances in bioinformatics and machine learning make such developments plausible. We performed a systematic review to determine the extent of development and validation of immune-related radiogenomic biomarkers for glioblastoma. A systematic review was performed following PRISMA guidelines using the PubMed, Medline, and Embase databases. Qualitative analysis was performed by incorporating the QUADAS 2 tool and CLAIM checklist. PROSPERO registered CRD42022340968. Extracted data were insufficiently homogeneous to perform a meta-analysis. Results: Nine studies, all retrospective, were included. Biomarkers extracted from magnetic resonance imaging volumes of interest included apparent diffusion coefficient values, relative cerebral blood volume values, and image-derived features. These biomarkers correlated with genomic markers from tumor cells or immune cells or with patient survival. The majority of studies had a high risk of bias and applicability concerns regarding the index test performed. Radiogenomic immune biomarkers have the potential to provide early treatment options to patients with glioblastoma. Targeted immunotherapy, stratified by these biomarkers, has the potential to allow individualized neo-adjuvant precision treatment options in clinical trials. However, there are no prospective studies validating these biomarkers, and interpretation is limited due to study bias with little evidence of generalizability.
0710.5195
Herbert Sauro Dr
Herbert M Sauro and Brian Ingalls
MAPK Cascades as Feedback Amplifiers
21 pages and 8 figures
null
null
null
q-bio.MN q-bio.SC
null
Interconvertible enzyme cascades, exemplified by the mitogen activated protein kinase (MAPK) cascade, are a frequent mechanism in signal transduction pathways. There has been much speculation as to the role of these pathways, and how their structure is related to their function. A common conclusion is that the cascades serve to amplify biochemical signals so that a single bound ligand molecule might produce a multitude of second messengers. Some recent work has focused on a particular feature present in some MAPK pathways -- a negative feedback loop which spans the length of the cascade. This is a feature that is shared by a man-made engineering device, the feedback amplifier. We propose a novel interpretation: that by wrapping a feedback loop around an amplifier, these cascades may be acting as biochemical feedback amplifiers, which impart i) increased robustness with respect to internal perturbations; ii) a linear graded response over an extended operating range; iii) insulation from external perturbation, resulting in functional modularization. We also report on the growing list of experimental evidence which supports a graded response of MAPK with respect to Epidermal Growth Factor. This evidence supports our hypothesis that in these circumstances the MAPK cascade may be acting as a feedback amplifier.
[ { "created": "Fri, 26 Oct 2007 23:38:14 GMT", "version": "v1" } ]
2007-10-30
[ [ "Sauro", "Herbert M", "" ], [ "Ingalls", "Brian", "" ] ]
Interconvertible enzyme cascades, exemplified by the mitogen activated protein kinase (MAPK) cascade, are a frequent mechanism in signal transduction pathways. There has been much speculation as to the role of these pathways, and how their structure is related to their function. A common conclusion is that the cascades serve to amplify biochemical signals so that a single bound ligand molecule might produce a multitude of second messengers. Some recent work has focused on a particular feature present in some MAPK pathways -- a negative feedback loop which spans the length of the cascade. This is a feature that is shared by a man-made engineering device, the feedback amplifier. We propose a novel interpretation: that by wrapping a feedback loop around an amplifier, these cascades may be acting as biochemical feedback amplifiers, which impart i) increased robustness with respect to internal perturbations; ii) a linear graded response over an extended operating range; iii) insulation from external perturbation, resulting in functional modularization. We also report on the growing list of experimental evidence which supports a graded response of MAPK with respect to Epidermal Growth Factor. This evidence supports our hypothesis that in these circumstances the MAPK cascade may be acting as a feedback amplifier.
1906.07917
Vikas Desai
Sai Vikas Desai, Vineeth N Balasubramanian, Tokihiro Fukatsu, Seishi Ninomiya and Wei Guo
Automatic estimation of heading date of paddy rice using deep learning
null
null
10.1186/s13007-019-0457-1
null
q-bio.QM cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate estimation of heading date of paddy rice greatly helps the breeders to understand the adaptability of different crop varieties in a given location. The heading date also plays a vital role in determining grain yield for research experiments. Visual examination of the crop is laborious and time consuming. Therefore, quick and precise estimation of heading date of paddy rice is highly essential. In this work, we propose a simple pipeline to detect regions containing flowering panicles from ground level RGB images of paddy rice. Given a fixed region size for an image, the number of regions containing flowering panicles is directly proportional to the number of flowering panicles present. Consequently, we use the flowering panicle region counts to estimate the heading date of the crop. The method is based on image classification using Convolutional Neural Networks (CNNs). We evaluated the performance of our algorithm on five time series image sequences of three different varieties of rice crops. When compared to the previous work on this dataset, the accuracy and general versatility of the method has been improved and heading date has been estimated with a mean absolute error of less than 1 day.
[ { "created": "Wed, 19 Jun 2019 05:02:43 GMT", "version": "v1" } ]
2019-08-08
[ [ "Desai", "Sai Vikas", "" ], [ "Balasubramanian", "Vineeth N", "" ], [ "Fukatsu", "Tokihiro", "" ], [ "Ninomiya", "Seishi", "" ], [ "Guo", "Wei", "" ] ]
Accurate estimation of heading date of paddy rice greatly helps the breeders to understand the adaptability of different crop varieties in a given location. The heading date also plays a vital role in determining grain yield for research experiments. Visual examination of the crop is laborious and time consuming. Therefore, quick and precise estimation of heading date of paddy rice is highly essential. In this work, we propose a simple pipeline to detect regions containing flowering panicles from ground level RGB images of paddy rice. Given a fixed region size for an image, the number of regions containing flowering panicles is directly proportional to the number of flowering panicles present. Consequently, we use the flowering panicle region counts to estimate the heading date of the crop. The method is based on image classification using Convolutional Neural Networks (CNNs). We evaluated the performance of our algorithm on five time series image sequences of three different varieties of rice crops. When compared to the previous work on this dataset, the accuracy and general versatility of the method has been improved and heading date has been estimated with a mean absolute error of less than 1 day.
1012.1808
Joachim Krug
Alexander Altland, Andrej Fischer, Joachim Krug, and Ivan G. Szendro
Rare events in population genetics: Stochastic tunneling in a two-locus model with recombination
4 pages, 3 figures
Physical Review Letters 106 (2011) 088101
10.1103/PhysRevLett.106.088101
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the evolution of a population in a two-locus genotype space, in which the negative effects of two single mutations are overcompensated in a high fitness double mutant. We discuss how the interplay of finite population size, $N$, and sexual recombination at rate $r$ affects the escape times $t_\mathrm{esc}$ to the double mutant. For small populations demographic noise generates massive fluctuations in $t_\mathrm{esc}$. The mean escape time varies non-monotonically with $r$, and grows exponentially as $\ln t_{\mathrm{esc}} \sim N(r - r^\ast)^{3/2}$ beyond a critical value $r^\ast$.
[ { "created": "Wed, 8 Dec 2010 17:05:13 GMT", "version": "v1" }, { "created": "Tue, 22 Feb 2011 16:41:38 GMT", "version": "v2" } ]
2011-02-23
[ [ "Altland", "Alexander", "" ], [ "Fischer", "Andrej", "" ], [ "Krug", "Joachim", "" ], [ "Szendro", "Ivan G.", "" ] ]
We study the evolution of a population in a two-locus genotype space, in which the negative effects of two single mutations are overcompensated in a high fitness double mutant. We discuss how the interplay of finite population size, $N$, and sexual recombination at rate $r$ affects the escape times $t_\mathrm{esc}$ to the double mutant. For small populations demographic noise generates massive fluctuations in $t_\mathrm{esc}$. The mean escape time varies non-monotonically with $r$, and grows exponentially as $\ln t_{\mathrm{esc}} \sim N(r - r^\ast)^{3/2}$ beyond a critical value $r^\ast$.
1609.01015
Arian Ashourvan
Arian Ashourvan, Shi Gu, Marcelo G. Mattar, Jean M. Vettel, Danielle S. Bassett
The Energy Landscape Underpinning Module Dynamics in the Human Brain Connectome
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human brain dynamics can be profitably viewed through the lens of statistical mechanics, where neurophysiological activity evolves around and between local attractors representing preferred mental states. Many physically-inspired models of these dynamics define the state of the brain based on instantaneous measurements of regional activity. Yet, recent work in network neuroscience has provided initial evidence that the brain might also be well-characterized by time-varying states composed of locally coherent activity or functional modules. Here we study this network-based notion of brain state to understand how functional modules dynamically interact with one another to perform cognitive functions. We estimate the functional relationships between regions of interest (ROIs) by fitting a pair-wise maximum entropy model to each ROI's pattern of allegiance to functional modules. Local minima in this model represent attractor states characterized by specific patterns of modular structure. The clustering of local minima highlights three classes of ROIs with similar patterns of allegiance to community states. Visual, attention, sensorimotor, and subcortical ROIs tend to form a single functional community. The remaining ROIs tend to form a putative executive control community or a putative default mode and salience community. We simulate the brain's dynamic transitions between these community states using a Markov Chain Monte Carlo random walk. We observe that simulated transition probabilities between basins resemble empirically observed transitions between community allegiance states in resting state fMRI data. These results collectively offer a view of the brain as a dynamical system that transitions between basins of attraction characterized by coherent activity in small groups of brain regions, and that the strength of these attractors depends on the cognitive computations being performed.
[ { "created": "Mon, 5 Sep 2016 01:51:59 GMT", "version": "v1" } ]
2016-09-06
[ [ "Ashourvan", "Arian", "" ], [ "Gu", "Shi", "" ], [ "Mattar", "Marcelo G.", "" ], [ "Vettel", "Jean M.", "" ], [ "Bassett", "Danielle S.", "" ] ]
Human brain dynamics can be profitably viewed through the lens of statistical mechanics, where neurophysiological activity evolves around and between local attractors representing preferred mental states. Many physically-inspired models of these dynamics define the state of the brain based on instantaneous measurements of regional activity. Yet, recent work in network neuroscience has provided initial evidence that the brain might also be well-characterized by time-varying states composed of locally coherent activity or functional modules. Here we study this network-based notion of brain state to understand how functional modules dynamically interact with one another to perform cognitive functions. We estimate the functional relationships between regions of interest (ROIs) by fitting a pair-wise maximum entropy model to each ROI's pattern of allegiance to functional modules. Local minima in this model represent attractor states characterized by specific patterns of modular structure. The clustering of local minima highlights three classes of ROIs with similar patterns of allegiance to community states. Visual, attention, sensorimotor, and subcortical ROIs tend to form a single functional community. The remaining ROIs tend to form a putative executive control community or a putative default mode and salience community. We simulate the brain's dynamic transitions between these community states using a Markov Chain Monte Carlo random walk. We observe that simulated transition probabilities between basins resemble empirically observed transitions between community allegiance states in resting state fMRI data. These results collectively offer a view of the brain as a dynamical system that transitions between basins of attraction characterized by coherent activity in small groups of brain regions, and that the strength of these attractors depends on the cognitive computations being performed.
2208.02564
Tongyue Shi
Tongyue Shi and Haining Wang
Mathematical Modeling Analysis and Optimization of Fungal Diversity Growth
19 pages
null
null
null
q-bio.PE math.OC
http://creativecommons.org/licenses/by/4.0/
This paper studied the relationship between the decomposition rate of fungi and temperature, humidity, fungal elongation, moisture tolerance, and fungal density in a given volume in the presence of a variety of fungi, and established a series of models to describe the decomposition of fungi in different states. Since the volume of soil was given in this case, the latter two characteristics could be attributed to the influence of the size of the fungal population on the decomposition rate. Based on the logistic model, the relationship between population size and time was established, and finally the number of fungi in the steady state was obtained. The interaction between different species of fungi was analyzed with the Lotka-Volterra model, and the decomposition rate of various fungal combinations in different environments was obtained. After studying the one- and two-species cases, we can extrapolate from one to the other: a community consisting of n fungal populations will be similar to a community consisting of n+1 fungal populations. After the study, we substituted the collected data into the model and found that, for the same substance, a fungal community composed of two kinds of fungi had a lower decomposition rate of ground litter or woody fiber than a single kind of fungus. We found that fungi in a warm and humid environment have the highest decomposition rate; atmospheric changes cause the growth rates of some fungal populations to decrease and of others to increase, which is associated with the nature of the fungi. We analyzed the influence of environmental factors, namely temperature and humidity, on the model.
[ { "created": "Thu, 4 Aug 2022 10:13:52 GMT", "version": "v1" } ]
2022-08-05
[ [ "Shi", "Tongyue", "" ], [ "Wang", "Haining", "" ] ]
This paper studied the relationship between the decomposition rate of fungi and temperature, humidity, fungal elongation, moisture tolerance, and fungal density in a given volume in the presence of a variety of fungi, and established a series of models to describe the decomposition of fungi in different states. Since the volume of soil was given in this case, the latter two characteristics could be attributed to the influence of the size of the fungal population on the decomposition rate. Based on the logistic model, the relationship between population size and time was established, and finally the number of fungi in the steady state was obtained. The interaction between different species of fungi was analyzed with the Lotka-Volterra model, and the decomposition rate of various fungal combinations in different environments was obtained. After studying the one- and two-species cases, we can extrapolate from one to the other: a community consisting of n fungal populations will be similar to a community consisting of n+1 fungal populations. After the study, we substituted the collected data into the model and found that, for the same substance, a fungal community composed of two kinds of fungi had a lower decomposition rate of ground litter or woody fiber than a single kind of fungus. We found that fungi in a warm and humid environment have the highest decomposition rate; atmospheric changes cause the growth rates of some fungal populations to decrease and of others to increase, which is associated with the nature of the fungi. We analyzed the influence of environmental factors, namely temperature and humidity, on the model.
1507.04416
Iaroslav Ispolatov
Iaroslav Ispolatov, Vaibhav Madhok, and Michael Doebeli
Individual-Based models for adaptive diversification in high-dimensional phenotype spaces
23 pages, 7 figures, please open pdf with Acrobat to see movies
J. Theor. Biol, 390 (2016) 97-105
10.1016/j.jtbi.2015.10.009
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most theories of evolutionary diversification are based on equilibrium assumptions: they are either based on optimality arguments involving static fitness landscapes, or they assume that populations first evolve to an equilibrium state before diversification occurs, as exemplified by the concept of evolutionary branching points in adaptive dynamics theory. Recent results indicate that adaptive dynamics may often not converge to equilibrium points and instead generate complicated trajectories if evolution takes place in high-dimensional phenotype spaces. Even though some analytical results on diversification in complex phenotype spaces are available, to study this problem in general we need to reconstruct individual-based models from the adaptive dynamics generating the non-equilibrium dynamics. Here we first provide a method to construct individual-based models such that they faithfully reproduce the given adaptive dynamics attractor without diversification. We then show that a propensity to diversify can be introduced by adding Gaussian competition terms that generate frequency dependence while still preserving the same adaptive dynamics. For sufficiently strong competition, the disruptive selection generated by frequency dependence overcomes the directional evolution along the selection gradient and leads to diversification in phenotypic directions that are orthogonal to the selection gradient.
[ { "created": "Thu, 16 Jul 2015 00:07:19 GMT", "version": "v1" } ]
2017-02-07
[ [ "Ispolatov", "Iaroslav", "" ], [ "Madhok", "Vaibhav", "" ], [ "Doebeli", "Michael", "" ] ]
Most theories of evolutionary diversification are based on equilibrium assumptions: they are either based on optimality arguments involving static fitness landscapes, or they assume that populations first evolve to an equilibrium state before diversification occurs, as exemplified by the concept of evolutionary branching points in adaptive dynamics theory. Recent results indicate that adaptive dynamics may often not converge to equilibrium points and instead generate complicated trajectories if evolution takes place in high-dimensional phenotype spaces. Even though some analytical results on diversification in complex phenotype spaces are available, to study this problem in general we need to reconstruct individual-based models from the adaptive dynamics generating the non-equilibrium dynamics. Here we first provide a method to construct individual-based models such that they faithfully reproduce the given adaptive dynamics attractor without diversification. We then show that a propensity to diversify can be introduced by adding Gaussian competition terms that generate frequency dependence while still preserving the same adaptive dynamics. For sufficiently strong competition, the disruptive selection generated by frequency dependence overcomes the directional evolution along the selection gradient and leads to diversification in phenotypic directions that are orthogonal to the selection gradient.