column          type            stats
--------------  --------------  ----------------------
id              stringlengths   9 to 13 chars
submitter       stringlengths   4 to 48 chars
authors         stringlengths   4 to 9.62k chars
title           stringlengths   4 to 343 chars
comments        stringlengths   2 to 480 chars
journal-ref     stringlengths   9 to 309 chars
doi             stringlengths   12 to 138 chars
report-no       stringclasses   277 distinct values
categories      stringlengths   8 to 87 chars
license         stringclasses   9 distinct values
orig_abstract   stringlengths   27 to 3.76k chars
versions        listlengths     1 to 15 entries
update_date     stringlengths   10 chars (fixed)
authors_parsed  listlengths     1 to 147 entries
abstract        stringlengths   24 to 3.75k chars

Each record below lists these fields in this order; "null" marks an empty field.
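A minimal sketch of working with one record that follows the schema above. The values are abridged from the first record in this listing; the loading step (file name, serialization format) is deliberately omitted, since the source file is not named here.

```python
# One record with the fields listed above (abstract fields omitted for
# brevity; "versions" and "authors_parsed" are nested lists).
record = {
    "id": "1504.06781",
    "submitter": "Christopher Marriott",
    "authors": "Chris Marriott and Jobran Chebib",
    "title": "Finding a Mate With No Social Skills",
    "comments": "8 pages, 5 figures, GECCO'15",
    "journal-ref": None,
    "doi": "10.1145/2739480.2754804",
    "report-no": None,
    "categories": "q-bio.PE",
    "license": "http://arxiv.org/licenses/nonexclusive-distrib/1.0/",
    "versions": [{"created": "Sun, 26 Apr 2015 01:49:05 GMT", "version": "v1"}],
    "update_date": "2015-04-28",
    "authors_parsed": [["Marriott", "Chris", ""], ["Chebib", "Jobran", ""]],
}

# Derive convenient views: surnames, the primary category, the latest version.
surnames = [last for last, first, suffix in record["authors_parsed"]]
primary_category = record["categories"].split()[0]
latest_version = record["versions"][-1]["version"]

print(surnames)           # ['Marriott', 'Chebib']
print(primary_category)   # q-bio.PE
print(latest_version)     # v1
```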
1504.06781
Christopher Marriott
Chris Marriott and Jobran Chebib
Finding a Mate With No Social Skills
8 pages, 5 figures, GECCO'15
null
10.1145/2739480.2754804
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sexual reproductive behavior has a necessary social coordination component, as willing and capable partners must both be in the right place at the right time. While there are many known social behavioral adaptations that support solutions to this problem, we explore the possibility and likelihood of solutions that rely only on non-social mechanisms. We find that three kinds of social organization that help solve this social coordination problem (herding, assortative mating, and natal philopatry) emerge in populations of simulated agents with no social mechanisms available to support these organizations. We conclude that the non-social origins of these social organizations around sexual reproduction may provide the environment for the development of social solutions to the same and different problems.
[ { "created": "Sun, 26 Apr 2015 01:49:05 GMT", "version": "v1" } ]
2015-04-28
[ [ "Marriott", "Chris", "" ], [ "Chebib", "Jobran", "" ] ]
Sexual reproductive behavior has a necessary social coordination component, as willing and capable partners must both be in the right place at the right time. While there are many known social behavioral adaptations that support solutions to this problem, we explore the possibility and likelihood of solutions that rely only on non-social mechanisms. We find that three kinds of social organization that help solve this social coordination problem (herding, assortative mating, and natal philopatry) emerge in populations of simulated agents with no social mechanisms available to support these organizations. We conclude that the non-social origins of these social organizations around sexual reproduction may provide the environment for the development of social solutions to the same and different problems.
1109.5423
Frederick Matsen IV
Frederick A. Matsen and Aaron Gallagher
Reconciling taxonomy and phylogenetic inference: formalism and algorithms for describing discord and inferring taxonomic roots
Version submitted to Algorithms for Molecular Biology. A number of fixes from previous version
null
null
null
q-bio.PE cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although taxonomy is often used informally to evaluate the results of phylogenetic inference and find the root of phylogenetic trees, algorithmic methods to do so are lacking. In this paper we formalize these procedures and develop algorithms to solve the relevant problems. In particular, we introduce a new algorithm that solves a "subcoloring" problem for expressing the difference between the taxonomy and phylogeny at a given rank. This algorithm improves upon the current best algorithm in terms of asymptotic complexity for the parameter regime of interest; we also describe a branch-and-bound algorithm that saves orders of magnitude in computation on real data sets. We also develop a formalism and an algorithm for rooting phylogenetic trees according to a taxonomy. All of these algorithms are implemented in freely-available software.
[ { "created": "Mon, 26 Sep 2011 01:00:52 GMT", "version": "v1" }, { "created": "Sat, 1 Oct 2011 22:54:31 GMT", "version": "v2" } ]
2011-10-04
[ [ "Matsen", "Frederick A.", "" ], [ "Gallagher", "Aaron", "" ] ]
Although taxonomy is often used informally to evaluate the results of phylogenetic inference and find the root of phylogenetic trees, algorithmic methods to do so are lacking. In this paper we formalize these procedures and develop algorithms to solve the relevant problems. In particular, we introduce a new algorithm that solves a "subcoloring" problem for expressing the difference between the taxonomy and phylogeny at a given rank. This algorithm improves upon the current best algorithm in terms of asymptotic complexity for the parameter regime of interest; we also describe a branch-and-bound algorithm that saves orders of magnitude in computation on real data sets. We also develop a formalism and an algorithm for rooting phylogenetic trees according to a taxonomy. All of these algorithms are implemented in freely-available software.
1208.3894
Matthew Hufford
Matthew B. Hufford, Pesach Lubinksy, Tanja Pyh\"aj\"arvi, Michael T. Devengenzo, Norman C. Ellstrand, and Jeffrey Ross-Ibarra
The Genomic Signature of Crop-Wild Introgression in Maize
null
PLoS Genetics 2013 9(5): e1003477
10.1371/journal.pgen.1003477
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The evolutionary significance of hybridization and subsequent introgression has long been appreciated, but evaluation of the genome-wide effects of these phenomena has only recently become possible. Crop-wild study systems represent ideal opportunities to examine evolution through hybridization. For example, maize and the conspecific wild teosinte Zea mays ssp. mexicana (hereafter, mexicana) are known to hybridize in the fields of highland Mexico. Despite widespread evidence of gene flow, maize and mexicana maintain distinct morphologies and have done so in sympatry for thousands of years. Neither the genomic extent nor the evolutionary importance of introgression between these taxa is understood. In this study we assessed patterns of genome-wide introgression based on 39,029 single nucleotide polymorphisms genotyped in 189 individuals from nine sympatric maize-mexicana populations and reference allopatric populations. While portions of the maize and mexicana genomes were particularly resistant to introgression (notably near known cross-incompatibility and domestication loci), we detected widespread evidence for introgression in both directions of gene flow. Through further characterization of these regions and preliminary growth chamber experiments, we found evidence suggestive of the incorporation of adaptive mexicana alleles into maize during its expansion to the highlands of central Mexico. In contrast, very little evidence was found for adaptive introgression from maize to mexicana. The methods we have applied here can be replicated widely, and such analyses have the potential to greatly inform our understanding of evolution through introgressive hybridization. Crop species, due to their exceptional genomic resources and frequent histories of spread into sympatry with relatives, should be particularly influential in these studies.
[ { "created": "Sun, 19 Aug 2012 20:48:08 GMT", "version": "v1" }, { "created": "Tue, 21 Aug 2012 19:58:12 GMT", "version": "v2" }, { "created": "Wed, 2 Jan 2013 17:13:13 GMT", "version": "v3" } ]
2013-07-30
[ [ "Hufford", "Matthew B.", "" ], [ "Lubinksy", "Pesach", "" ], [ "Pyhäjärvi", "Tanja", "" ], [ "Devengenzo", "Michael T.", "" ], [ "Ellstrand", "Norman C.", "" ], [ "Ross-Ibarra", "Jeffrey", "" ] ]
The evolutionary significance of hybridization and subsequent introgression has long been appreciated, but evaluation of the genome-wide effects of these phenomena has only recently become possible. Crop-wild study systems represent ideal opportunities to examine evolution through hybridization. For example, maize and the conspecific wild teosinte Zea mays ssp. mexicana (hereafter, mexicana) are known to hybridize in the fields of highland Mexico. Despite widespread evidence of gene flow, maize and mexicana maintain distinct morphologies and have done so in sympatry for thousands of years. Neither the genomic extent nor the evolutionary importance of introgression between these taxa is understood. In this study we assessed patterns of genome-wide introgression based on 39,029 single nucleotide polymorphisms genotyped in 189 individuals from nine sympatric maize-mexicana populations and reference allopatric populations. While portions of the maize and mexicana genomes were particularly resistant to introgression (notably near known cross-incompatibility and domestication loci), we detected widespread evidence for introgression in both directions of gene flow. Through further characterization of these regions and preliminary growth chamber experiments, we found evidence suggestive of the incorporation of adaptive mexicana alleles into maize during its expansion to the highlands of central Mexico. In contrast, very little evidence was found for adaptive introgression from maize to mexicana. The methods we have applied here can be replicated widely, and such analyses have the potential to greatly inform our understanding of evolution through introgressive hybridization. Crop species, due to their exceptional genomic resources and frequent histories of spread into sympatry with relatives, should be particularly influential in these studies.
2108.04289
Georg Krempl
Krempl, Georg and Kottke, Daniel and Pham Minh, Tuan
ACE: A Novel Approach for the Statistical Analysis of Pairwise Connectivity
Draft of an extended version of Krempl, Kottke, Pham Minh. Statistical Analysis of Pairwise Connectivity. To appear in Discovery Science 2021
null
null
null
q-bio.NC cs.LG stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analysing correlations between streams of events is an important problem. It arises, for example, in neuroscience, when the connectivity of neurons must be inferred from spike trains that record neurons' individual spiking activity. While some approaches for inferring delayed synaptic connections have recently been proposed, they are limited in the types of connectivities and delays they are able to handle, or require computation-intensive procedures. This paper proposes a faster and more flexible approach for analysing such delayed correlated activity: a statistical approach for the Analysis of Connectivity in spiking Events (ACE), based on the idea of hypothesis testing. It first computes, for any pair of a source and a target neuron, the inter-spike delays between subsequent source- and target-spikes. Then, it derives a null model for the distribution of inter-spike delays for \emph{uncorrelated}~neurons. Finally, it compares the observed distribution of inter-spike delays to this null model and infers pairwise connectivity based on the Pearson's Chi-squared test statistic. Thus, ACE is capable of detecting connections with a priori unknown, non-discrete (and potentially large) inter-spike delays, which might vary between pairs of neurons. Since ACE works incrementally, it has the potential to be used in online processing. In our experiments, we visualise the advantages of ACE in varying experimental scenarios (except for one special case) and on a state-of-the-art dataset that was generated for neuroscientific research under highly realistic conditions.
[ { "created": "Mon, 9 Aug 2021 18:27:52 GMT", "version": "v1" } ]
2021-08-11
[ [ "Krempl", "Georg", "" ], [ "Kottke", "Daniel", "" ], [ "Pham Minh", "Tuan", "" ] ]
Analysing correlations between streams of events is an important problem. It arises, for example, in neuroscience, when the connectivity of neurons must be inferred from spike trains that record neurons' individual spiking activity. While some approaches for inferring delayed synaptic connections have recently been proposed, they are limited in the types of connectivities and delays they are able to handle, or require computation-intensive procedures. This paper proposes a faster and more flexible approach for analysing such delayed correlated activity: a statistical approach for the Analysis of Connectivity in spiking Events (ACE), based on the idea of hypothesis testing. It first computes, for any pair of a source and a target neuron, the inter-spike delays between subsequent source- and target-spikes. Then, it derives a null model for the distribution of inter-spike delays for \emph{uncorrelated}~neurons. Finally, it compares the observed distribution of inter-spike delays to this null model and infers pairwise connectivity based on the Pearson's Chi-squared test statistic. Thus, ACE is capable of detecting connections with a priori unknown, non-discrete (and potentially large) inter-spike delays, which might vary between pairs of neurons. Since ACE works incrementally, it has the potential to be used in online processing. In our experiments, we visualise the advantages of ACE in varying experimental scenarios (except for one special case) and on a state-of-the-art dataset that was generated for neuroscientific research under highly realistic conditions.
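The procedure the ACE abstract describes (pairwise inter-spike delays, a null model, a Pearson chi-squared comparison) can be sketched roughly as follows. This is a simplified illustration only: the binning, the uniform null model, and the synthetic spike trains are assumptions, not the authors' implementation.

```python
import random

def interspike_delays(src, tgt):
    """For each source spike, the delay to the next target spike (ACE step 1)."""
    delays = []
    j = 0
    tgt = sorted(tgt)
    for s in sorted(src):
        while j < len(tgt) and tgt[j] <= s:
            j += 1
        if j < len(tgt):
            delays.append(tgt[j] - s)
    return delays

def chi2_vs_uniform(delays, bins=10, max_delay=1.0):
    """Pearson chi-squared statistic of binned delays against a
    uniform null distribution (simplified stand-in for steps 2-3)."""
    observed = [0] * bins
    for d in delays:
        if d < max_delay:
            observed[int(d / max_delay * bins)] += 1
    n = sum(observed)
    expected = n / bins
    return sum((o - expected) ** 2 / expected for o in observed)

random.seed(0)
# Uncorrelated pair: independent random spike times.
src = [random.random() * 100 for _ in range(500)]
uncorr = [random.random() * 100 for _ in range(500)]
# Connected pair: the target fires shortly after each source spike,
# so the delays pile up in a single bin and the statistic explodes.
conn = [s + 0.005 for s in src]

print(chi2_vs_uniform(interspike_delays(src, uncorr)) <
      chi2_vs_uniform(interspike_delays(src, conn)))  # True
```

A real implementation would compare the statistic against a chi-squared critical value to make the accept/reject decision; here only the relative magnitude is shown.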
1206.5945
Ivan Junier
Ivan Junier, Ryan Dale, Chunhui Hou, Fran\c{c}ois K\'ep\`es and Ann Dean
CTCF-mediated transcriptional regulation through cell type-specific chromosome organization in the {\beta}-globin locus
Full article, including Supp. Mat., is available at Nucleic Acids Research, doi: 10.1093/nar/gks536
null
10.1093/nar/gks536
null
q-bio.GN cond-mat.soft physics.bio-ph q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The principles underlying the architectural landscape of chromatin beyond the nucleosome level in living cells remain largely unknown despite its potential to play a role in mammalian gene regulation. We investigated the 3-dimensional folding of a 1 Mbp region of human chromosome 11 containing the {\beta}-globin genes by integrating looping interactions of the insulator protein CTCF determined comprehensively by chromosome conformation capture (3C) into a polymer model of chromatin. We find that CTCF-mediated cell type-specific interactions in erythroid cells are organized to favor contacts known to occur in vivo between the {\beta}-globin locus control region (LCR) and genes. In these cells, the modeled {\beta}-globin domain folds into a globule with the LCR and the active globin genes on the periphery. By contrast, in non-erythroid cells, the globule is less compact with few but dominant CTCF interactions driving the genes away from the LCR. This leads to a decrease in contact frequencies that can exceed 1000-fold depending on the stiffness of the chromatin and the exact positioning of the genes. Our findings show that an ensemble of CTCF contacts functionally affects spatial distances between control elements and target genes contributing to chromosomal organization required for transcription.
[ { "created": "Tue, 26 Jun 2012 10:44:10 GMT", "version": "v1" } ]
2012-06-27
[ [ "Junier", "Ivan", "" ], [ "Dale", "Ryan", "" ], [ "Hou", "Chunhui", "" ], [ "Képès", "François", "" ], [ "Dean", "Ann", "" ] ]
The principles underlying the architectural landscape of chromatin beyond the nucleosome level in living cells remain largely unknown despite its potential to play a role in mammalian gene regulation. We investigated the 3-dimensional folding of a 1 Mbp region of human chromosome 11 containing the {\beta}-globin genes by integrating looping interactions of the insulator protein CTCF determined comprehensively by chromosome conformation capture (3C) into a polymer model of chromatin. We find that CTCF-mediated cell type-specific interactions in erythroid cells are organized to favor contacts known to occur in vivo between the {\beta}-globin locus control region (LCR) and genes. In these cells, the modeled {\beta}-globin domain folds into a globule with the LCR and the active globin genes on the periphery. By contrast, in non-erythroid cells, the globule is less compact with few but dominant CTCF interactions driving the genes away from the LCR. This leads to a decrease in contact frequencies that can exceed 1000-fold depending on the stiffness of the chromatin and the exact positioning of the genes. Our findings show that an ensemble of CTCF contacts functionally affects spatial distances between control elements and target genes contributing to chromosomal organization required for transcription.
2007.13526
Salvador Chuli\'an
Salvador Chuli\'an, Alvaro Mart\'inez-Rubio, Anna Marciniak-Czochra, Thomas Stiehl, Cristina Bl\'azquez Go\~ni, Juan Francisco Rodr\'iguez Guti\'errez, Manuel Ramirez Orellana, Ana Castillo Robleda, V\'ictor M. P\'erez-Garc\'ia, Mar\'ia Rosa
Dynamical properties of feedback signalling in B lymphopoiesis: A mathematical modelling approach
Submitted to Journal of Theoretical Biology
null
10.1016/j.jtbi.2021.110685
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Haematopoiesis is the process of generation of blood cells. Lymphopoiesis generates lymphocytes, the cells in charge of the adaptive immune response. Disruptions of this process are associated with diseases like leukaemia, which is especially prevalent in children. The self-regulating character of this process makes it suitable for mathematical study. In this paper we develop mathematical models of lymphopoiesis using currently available data. We do this by drawing inspiration from existing structured models of cell lineage development and integrating them with paediatric bone marrow data, with special focus on regulatory mechanisms. A formal analysis of the models is carried out, giving steady states and their stability conditions. We use this analysis to obtain biologically relevant regions of the parameter space and to understand the dynamical behaviour of B-cell renewal. Finally, we use numerical simulations to obtain further insight into the influence of proliferation and maturation rates on the reconstitution of the cells in the B line. We conclude that a model including feedback regulation of cell proliferation represents a biologically plausible depiction of B-cell reconstitution in bone marrow. Research into haematological disorders could benefit from a precise dynamical description of B lymphopoiesis.
[ { "created": "Wed, 8 Jul 2020 08:12:18 GMT", "version": "v1" } ]
2022-05-13
[ [ "Chulián", "Salvador", "" ], [ "Martínez-Rubio", "Alvaro", "" ], [ "Marciniak-Czochra", "Anna", "" ], [ "Stiehl", "Thomas", "" ], [ "Goñi", "Cristina Blázquez", "" ], [ "Gutiérrez", "Juan Francisco Rodríguez", "" ], [ "Orellana", "Manuel Ramirez", "" ], [ "Robleda", "Ana Castillo", "" ], [ "Pérez-García", "Víctor M.", "" ], [ "Rosa", "María", "" ] ]
Haematopoiesis is the process of generation of blood cells. Lymphopoiesis generates lymphocytes, the cells in charge of the adaptive immune response. Disruptions of this process are associated with diseases like leukaemia, which is especially prevalent in children. The self-regulating character of this process makes it suitable for mathematical study. In this paper we develop mathematical models of lymphopoiesis using currently available data. We do this by drawing inspiration from existing structured models of cell lineage development and integrating them with paediatric bone marrow data, with special focus on regulatory mechanisms. A formal analysis of the models is carried out, giving steady states and their stability conditions. We use this analysis to obtain biologically relevant regions of the parameter space and to understand the dynamical behaviour of B-cell renewal. Finally, we use numerical simulations to obtain further insight into the influence of proliferation and maturation rates on the reconstitution of the cells in the B line. We conclude that a model including feedback regulation of cell proliferation represents a biologically plausible depiction of B-cell reconstitution in bone marrow. Research into haematological disorders could benefit from a precise dynamical description of B lymphopoiesis.
1403.6034
Danielle Bassett
Danielle S. Bassett, Muzhi Yang, Nicholas F. Wymbs, Scott T. Grafton
Learning-Induced Autonomy of Sensorimotor Systems
6 figures, 2 tables, and Supplement
null
null
null
q-bio.NC nlin.AO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distributed networks of brain areas interact with one another in a time-varying fashion to enable complex cognitive and sensorimotor functions. Here we use novel network analysis algorithms to test the recruitment and integration of large-scale functional neural circuitry during learning. Using functional magnetic resonance imaging (fMRI) data acquired from healthy human participants, from initial training through mastery of a simple motor skill, we investigate changes in the architecture of functional connectivity patterns that promote learning. Our results reveal that learning induces an autonomy of sensorimotor systems and that the release of cognitive control hubs in frontal and cingulate cortices predicts individual differences in the rate of learning on other days of practice. Our general statistical approach is applicable across other cognitive domains and provides a key to understanding time-resolved interactions between distributed neural circuits that enable task performance.
[ { "created": "Mon, 24 Mar 2014 16:49:31 GMT", "version": "v1" } ]
2014-03-25
[ [ "Bassett", "Danielle S.", "" ], [ "Yang", "Muzhi", "" ], [ "Wymbs", "Nicholas F.", "" ], [ "Grafton", "Scott T.", "" ] ]
Distributed networks of brain areas interact with one another in a time-varying fashion to enable complex cognitive and sensorimotor functions. Here we use novel network analysis algorithms to test the recruitment and integration of large-scale functional neural circuitry during learning. Using functional magnetic resonance imaging (fMRI) data acquired from healthy human participants, from initial training through mastery of a simple motor skill, we investigate changes in the architecture of functional connectivity patterns that promote learning. Our results reveal that learning induces an autonomy of sensorimotor systems and that the release of cognitive control hubs in frontal and cingulate cortices predicts individual differences in the rate of learning on other days of practice. Our general statistical approach is applicable across other cognitive domains and provides a key to understanding time-resolved interactions between distributed neural circuits that enable task performance.
1809.03824
Horacio Lopez-Menendez
Horacio Lopez-Menendez and Joseph D'Alessandro
Unjamming and nematic flocks in endothelial monolayers during angiogenesis: theoretical and experimental analysis
null
null
10.1016/j.jmps.2018.11.022
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Angiogenesis is the complex process by which new blood vessels develop from an existing vasculature in order to supply nutrients and/or metabolites to tissues, playing a fundamental role in many physiological and pathological conditions such as embryogenesis, tissue repair and tumour growth. Here we analysed the \textit{in-vitro} angiogenic process from the perspective of the monolayer, to understand the interplay between the surrounding endothelial monolayer, the sprouting and the mechanics. First, by measuring the shape index, we observed that VEGF (Vascular Endothelial Growth Factor) promotes a jamming/unjamming transition that allows the fluidisation of the monolayer. Next, we measured the density field over the monolayer and found that the flow of cells shows strong similarities with an evacuation process, in which the flock of cells can either flow or get stuck, defining a convergent channel. Based on these novel observations we propose a mathematical model to describe the unjamming and reorientation of a flock of cells flowing from the monolayer towards the early capillary structure. This model is developed within the framework of continuum mechanics, in which we consider the endothelial monolayer as an active biopolymer film where nematic order emerges in the unjammed phase promoted by VEGF activation. To test the proposed ideas we implement the coupled equations in a finite element code and describe, with a simplified geometry, the flow and cell orientation from the monolayer to the capillary. In this work we thus propose an interpretation of angiogenesis, viewed from the endothelial monolayer, grounded in experimental observations and theoretical arguments.
[ { "created": "Tue, 11 Sep 2018 12:40:33 GMT", "version": "v1" } ]
2019-01-30
[ [ "Lopez-Menendez", "Horacio", "" ], [ "D'Alessandro", "Joseph", "" ] ]
Angiogenesis is the complex process by which new blood vessels develop from an existing vasculature in order to supply nutrients and/or metabolites to tissues, playing a fundamental role in many physiological and pathological conditions such as embryogenesis, tissue repair and tumour growth. Here we analysed the \textit{in-vitro} angiogenic process from the perspective of the monolayer, to understand the interplay between the surrounding endothelial monolayer, the sprouting and the mechanics. First, by measuring the shape index, we observed that VEGF (Vascular Endothelial Growth Factor) promotes a jamming/unjamming transition that allows the fluidisation of the monolayer. Next, we measured the density field over the monolayer and found that the flow of cells shows strong similarities with an evacuation process, in which the flock of cells can either flow or get stuck, defining a convergent channel. Based on these novel observations we propose a mathematical model to describe the unjamming and reorientation of a flock of cells flowing from the monolayer towards the early capillary structure. This model is developed within the framework of continuum mechanics, in which we consider the endothelial monolayer as an active biopolymer film where nematic order emerges in the unjammed phase promoted by VEGF activation. To test the proposed ideas we implement the coupled equations in a finite element code and describe, with a simplified geometry, the flow and cell orientation from the monolayer to the capillary. In this work we thus propose an interpretation of angiogenesis, viewed from the endothelial monolayer, grounded in experimental observations and theoretical arguments.
2205.10704
Tom Chou
Tom Chou and Maria D'Orsogna
A mathematical model of reward-mediated learning in drug addiction
12 pages, 7 figures
Chaos 32, 021102 (2022)
10.1063/5.0082997
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Substances of abuse are known to activate and disrupt neuronal circuits in the brain reward system. We propose a simple and easily interpretable dynamical systems model to describe the neurobiology of drug addiction that incorporates the psychiatric concepts of reward prediction error (RPE), drug-induced incentive salience (IST), and opponent process theory (OPT). Drug-induced dopamine releases activate a biphasic reward response with pleasurable, positive "a-processes" (euphoria, rush) followed by unpleasant, negative "b-processes" (cravings, withdrawal). Neuroadaptive processes triggered by successive intakes enhance the negative component of the reward response, which the user compensates for by increasing drug dose and/or intake frequency. This positive feedback between physiological changes and drug self-administration leads to habituation, tolerance and eventually to full addiction. Our model gives rise to qualitatively different pathways to addiction that can represent a diverse set of user profiles (genetics, age) and drug potencies. We find that users who have, or neuroadaptively develop, a strong b-process response to drug consumption are most at risk for addiction. Finally, we include possible mechanisms to mitigate withdrawal symptoms, such as through the use of methadone or other auxiliary drugs used in detoxification.
[ { "created": "Sun, 22 May 2022 01:45:52 GMT", "version": "v1" } ]
2022-05-24
[ [ "Chou", "Tom", "" ], [ "D'Orsogna", "Maria", "" ] ]
Substances of abuse are known to activate and disrupt neuronal circuits in the brain reward system. We propose a simple and easily interpretable dynamical systems model to describe the neurobiology of drug addiction that incorporates the psychiatric concepts of reward prediction error (RPE), drug-induced incentive salience (IST), and opponent process theory (OPT). Drug-induced dopamine releases activate a biphasic reward response with pleasurable, positive "a-processes" (euphoria, rush) followed by unpleasant, negative "b-processes" (cravings, withdrawal). Neuroadaptive processes triggered by successive intakes enhance the negative component of the reward response, which the user compensates for by increasing drug dose and/or intake frequency. This positive feedback between physiological changes and drug self-administration leads to habituation, tolerance and eventually to full addiction. Our model gives rise to qualitatively different pathways to addiction that can represent a diverse set of user profiles (genetics, age) and drug potencies. We find that users who have, or neuroadaptively develop, a strong b-process response to drug consumption are most at risk for addiction. Finally, we include possible mechanisms to mitigate withdrawal symptoms, such as through the use of methadone or other auxiliary drugs used in detoxification.
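The opponent-process picture in this abstract (a fast positive "a-process", a slower negative "b-process" that strengthens with repeated use) can be caricatured in a few lines. These equations and constants are deliberately minimal illustrative assumptions, not the model of Chou and D'Orsogna.

```python
# Hypothetical two-process sketch: each dose triggers a fast euphoric
# response "a" and a slow aversive response "b"; neuroadaptation makes
# "b" grow with repeated intake, so the net reward per dose declines.
def simulate(doses, steps=2000, dt=0.01):
    a = b = sensitization = 0.0
    net = []
    for t in range(steps):
        drug = 1.0 if t in doses else 0.0
        a += dt * (-5.0 * a) + drug                   # fast a-process, quick decay
        b += dt * (-0.5 * b) + drug * sensitization   # slow b-process
        sensitization += 0.02 * drug                  # neuroadaptation with each dose
        net.append(a - b)
    return net

# Equally spaced doses: the peak net reward of the last dose is lower
# than that of the first (tolerance).
doses = set(range(0, 2000, 200))
net = simulate(doses)
first_peak = max(net[0:200])
last_peak = max(net[1800:2000])
print(last_peak < first_peak)  # True
```

The qualitative point (habituation through a strengthening b-process) survives very different parameter choices; only the feedback structure matters here.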
0908.0419
Jan H. Meinke
Jan H Meinke and Ulrich H E Hansmann
Protein simulations combining an all-atom force field with a Go term
9 pages, 8 figures
J.H. Meinke & U.H.E. Hansmann. Protein simulations combining an all-atom force field with a Go term. J. Phys. - Condens. Mat 19, 285215(2007)
10.1088/0953-8984/19/28/285215
null
q-bio.BM q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Using a variant of parallel tempering, we study the changes in sampling within a simulation when the all-atom model is coupled to a Go-like potential. We find that the native structure is not the lowest-energy configuration in the all-atom force field. Adding a Go-term deforms the energy landscape in a way that the native configuration becomes the global minimum.
[ { "created": "Tue, 4 Aug 2009 09:48:09 GMT", "version": "v1" } ]
2009-08-05
[ [ "Meinke", "Jan H", "" ], [ "Hansmann", "Ulrich H E", "" ] ]
Using a variant of parallel tempering, we study the changes in sampling within a simulation when the all-atom model is coupled to a Go-like potential. We find that the native structure is not the lowest-energy configuration in the all-atom force field. Adding a Go-term deforms the energy landscape in a way that the native configuration becomes the global minimum.
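Parallel tempering, the sampling method named in this abstract, exchanges configurations between replicas at neighbouring temperatures using the standard Metropolis swap criterion. The sketch below shows that generic criterion plus an assumed additive coupling weight `lam` for the Go term; neither the weight nor the energies here come from the paper.

```python
import math
import random

def swap_accept(beta_i, beta_j, E_i, E_j, rng):
    """Standard replica-exchange (parallel tempering) swap criterion:
    accept with probability min(1, exp((beta_i - beta_j) * (E_i - E_j)))."""
    return rng.random() < min(1.0, math.exp((beta_i - beta_j) * (E_i - E_j)))

def total_energy(e_allatom, e_go, lam=1.0):
    """Combined energy: an all-atom force-field term plus a Go-like bias
    favouring native contacts (lam is an assumed coupling weight)."""
    return e_allatom + lam * e_go

rng = random.Random(1)
# A cold replica (large beta) stuck in a high-energy state and a hot
# replica (small beta) holding a low-energy state should swap readily.
print(swap_accept(beta_i=2.0, beta_j=0.5, E_i=10.0, E_j=-5.0, rng=rng))  # True
```

Deforming the landscape with the Go term changes only `total_energy`; the exchange machinery is unchanged, which is what makes the coupling easy to add to an existing sampler.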
2312.17629
Silvina Ponce Dawson
Alan Givr\'e and Silvina Ponce Dawson
Cell information processing via frequency encoding and excitability
19 pages, 6 figures
null
null
null
q-bio.CB
http://creativecommons.org/licenses/by/4.0/
Cells continuously interact with their environment mediating their responses through signaling cascades. Very often, external stimuli induce pulsatile behaviors in intermediaries of the cascade of increasing frequency with the stimulus strength. This is characteristic of intracellular Ca$^{2+}$ signals involving Ca$^{2+}$ release through Inositol Trisphosphate Receptors (IP$_3$Rs). The mean frequency of IP$_3$R-mediated Ca$^{2+}$ pulses has been observed to scale exponentially with the stimulus strength in many cell types. In this paper we use a simple ODE model of the intracellular Ca$^{2+}$ dynamics for parameters for which there is one excitable fixed point. Including fluctuations through an additive noise term, we derive the mean escape rate from the stationary state and, thus, the mean interpulse time, as a function of the fraction, $\beta$, of readily openable IP$_3$Rs. Using an IP$_3$R kinetic model, experimental observations of spatially resolved Ca$^{2+}$ signals and previous estimates of the IP$_3$ produced upon stimulation we quantify the fluctuations and relate $\beta$ to [IP$_3$] and the stimulus strength. In this way we determine that the mean interpulse time can be approximated by an exponential function of the latter for ranges such that the covered mean time intervals are similar or larger than those observed experimentally. The study thus provides an easily interpretable explanation, applicable to other pulsatile signaling intermediaries, of the observed exponential dependence between frequency and stimulus, a key feature that makes frequency encoding qualitatively different from other ways commonly used by cells to "read" their environment.
[ { "created": "Fri, 29 Dec 2023 14:44:10 GMT", "version": "v1" }, { "created": "Fri, 16 Feb 2024 13:50:56 GMT", "version": "v2" }, { "created": "Thu, 18 Apr 2024 18:41:15 GMT", "version": "v3" } ]
2024-04-22
[ [ "Givré", "Alan", "" ], [ "Dawson", "Silvina Ponce", "" ] ]
Cells continuously interact with their environment mediating their responses through signaling cascades. Very often, external stimuli induce pulsatile behaviors in intermediaries of the cascade of increasing frequency with the stimulus strength. This is characteristic of intracellular Ca$^{2+}$ signals involving Ca$^{2+}$ release through Inositol Trisphosphate Receptors (IP$_3$Rs). The mean frequency of IP$_3$R-mediated Ca$^{2+}$ pulses has been observed to scale exponentially with the stimulus strength in many cell types. In this paper we use a simple ODE model of the intracellular Ca$^{2+}$ dynamics for parameters for which there is one excitable fixed point. Including fluctuations through an additive noise term, we derive the mean escape rate from the stationary state and, thus, the mean interpulse time, as a function of the fraction, $\beta$, of readily openable IP$_3$Rs. Using an IP$_3$R kinetic model, experimental observations of spatially resolved Ca$^{2+}$ signals and previous estimates of the IP$_3$ produced upon stimulation we quantify the fluctuations and relate $\beta$ to [IP$_3$] and the stimulus strength. In this way we determine that the mean interpulse time can be approximated by an exponential function of the latter for ranges such that the covered mean time intervals are similar or larger than those observed experimentally. The study thus provides an easily interpretable explanation, applicable to other pulsatile signaling intermediaries, of the observed exponential dependence between frequency and stimulus, a key feature that makes frequency encoding qualitatively different from other ways commonly used by cells to "read" their environment.
1307.1586
Blaise Li
Blaise Li
Evaluating strategies of phylogenetic analyses by the coherence of their results
6 pages, 3 figures, accepted for publication in Comptes Rendus Palevol, based on a work presented at the "Journ\'ees d'automne 2012 de la Soci\'et\'e Fran\c{c}aise de Syst\'ematique" (http://www.normalesup.org/~bli/Papers/SFS_2012_BL.pdf)
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
I propose an approach to identify, among several strategies of phylogenetic analysis, those producing the most accurate results. This approach is based on the hypothesis that the more a result is reproduced from independent data, the more it reflects the historical signal common to the analysed data. Under this hypothesis, the capacity of an analytical strategy to extract historical signal should correlate positively with the coherence of the obtained results. I apply this approach to a series of analyses on empirical data, basing the coherence measure on the Robinson-Foulds distances between the obtained trees. At first approximation, the analytical strategies most suitable for the data produce the most coherent results. However, risks of false positives and false negatives are identified, which are difficult to rule out.
[ { "created": "Fri, 5 Jul 2013 11:36:40 GMT", "version": "v1" } ]
2013-07-08
[ [ "Li", "Blaise", "" ] ]
I propose an approach to identify, among several strategies of phylogenetic analysis, those producing the most accurate results. This approach is based on the hypothesis that the more a result is reproduced from independent data, the more it reflects the historical signal common to the analysed data. Under this hypothesis, the capacity of an analytical strategy to extract historical signal should correlate positively with the coherence of the obtained results. I apply this approach to a series of analyses on empirical data, basing the coherence measure on the Robinson-Foulds distances between the obtained trees. At first approximation, the analytical strategies most suitable for the data produce the most coherent results. However, risks of false positives and false negatives are identified, which are difficult to rule out.
1906.07777
Luca Mazzucato
Giancarlo La Camera, Alfredo Fontanini and Luca Mazzucato
Cortical computations via metastable activity
15 pages, 3 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Metastable brain dynamics are characterized by abrupt, jump-like modulations so that the neural activity in single trials appears to unfold as a sequence of discrete, quasi-stationary states. Evidence that cortical neural activity unfolds as a sequence of metastable states is accumulating at fast pace. Metastable activity occurs both in response to an external stimulus and during ongoing, self-generated activity. These spontaneous metastable states are increasingly found to subserve internal representations that are not locked to external triggers, including states of deliberations, attention and expectation. Moreover, decoding stimuli or decisions via metastable states can be carried out trial-by-trial. Focusing on metastability will allow us to shift our perspective on neural coding from traditional concepts based on trial-averaging to models based on dynamic ensemble representations. Recent theoretical work has started to characterize the mechanistic origin and potential roles of metastable representations. In this article we review recent findings on metastable activity, how it may arise in biologically realistic models, and its potential role for representing internal states as well as relevant task variables.
[ { "created": "Tue, 18 Jun 2019 19:25:08 GMT", "version": "v1" } ]
2019-06-20
[ [ "La Camera", "Giancarlo", "" ], [ "Fontanini", "Alfredo", "" ], [ "Mazzucato", "Luca", "" ] ]
Metastable brain dynamics are characterized by abrupt, jump-like modulations so that the neural activity in single trials appears to unfold as a sequence of discrete, quasi-stationary states. Evidence that cortical neural activity unfolds as a sequence of metastable states is accumulating at fast pace. Metastable activity occurs both in response to an external stimulus and during ongoing, self-generated activity. These spontaneous metastable states are increasingly found to subserve internal representations that are not locked to external triggers, including states of deliberations, attention and expectation. Moreover, decoding stimuli or decisions via metastable states can be carried out trial-by-trial. Focusing on metastability will allow us to shift our perspective on neural coding from traditional concepts based on trial-averaging to models based on dynamic ensemble representations. Recent theoretical work has started to characterize the mechanistic origin and potential roles of metastable representations. In this article we review recent findings on metastable activity, how it may arise in biologically realistic models, and its potential role for representing internal states as well as relevant task variables.
1601.05117
Sriganesh Srihari Dr
Sriganesh Srihari, Murugan Kalimutho, Samir Lal, Jitin Singla, Dhaval Patel, Peter T. Simpson, Kum Kum Khanna and Mark A. Ragan
Understanding the functional impact of copy number alterations in breast cancer using a network modeling approach
23 pages, 2 tables, 7 figures
null
10.1039/C5MB00655D
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Copy number alterations (CNAs) are thought to account for 85% of the variation in gene expression observed among breast tumours. The expression of cis-associated genes is impacted by CNAs occurring at proximal loci of these genes, whereas the expression of trans-associated genes is impacted by CNAs occurring at distal loci. While a majority of these CNA-driven genes responsible for breast tumourigenesis are cis-associated, trans-associated genes are thought to further abet the development of cancer and influence disease outcomes in patients. Here we present a network-based approach that integrates copy-number and expression profiles to identify putative cis- and trans-associated genes in breast cancer pathogenesis. We validate these cis- and trans-associated genes by employing them to subtype a large cohort of breast tumours obtained from the METABRIC consortium, and demonstrate that these genes accurately reconstruct the ten subtypes of breast cancer. We observe that individual breast cancer subtypes are driven by distinct sets of cis- and trans-associated genes. Among the cis-associated genes, we recover several known drivers of breast cancer (e.g. CCND1, ERRB2, MDM2 and ZNF703) and some novel putative drivers (e.g. BRF2 and SF3B3). siRNA-mediated knockdown of BRF2 across a panel of breast cancer cell lines showed significant reduction specifically in cell proliferation in HER2+ lines, thereby indicating that BRF2 could be a context-dependent oncogene and potentially targetable in these lines. Among the trans-associated genes, we identify modules of immune-response (CD2, CD19, CD38 and CD79B), mitotic/cell-cycle kinases (e.g. AURKB, MELK, PLK1 and TTK), and DNA-damage response genes (e.g. RFC4 and FEN1).
[ { "created": "Tue, 19 Jan 2016 22:16:53 GMT", "version": "v1" } ]
2016-01-21
[ [ "Srihari", "Sriganesh", "" ], [ "Kalimutho", "Murugan", "" ], [ "Lal", "Samir", "" ], [ "Singla", "Jitin", "" ], [ "Patel", "Dhaval", "" ], [ "Simpson", "Peter T.", "" ], [ "Khanna", "Kum Kum", "" ], [ "Ragan", "Mark A.", "" ] ]
Copy number alterations (CNAs) are thought to account for 85% of the variation in gene expression observed among breast tumours. The expression of cis-associated genes is impacted by CNAs occurring at proximal loci of these genes, whereas the expression of trans-associated genes is impacted by CNAs occurring at distal loci. While a majority of these CNA-driven genes responsible for breast tumourigenesis are cis-associated, trans-associated genes are thought to further abet the development of cancer and influence disease outcomes in patients. Here we present a network-based approach that integrates copy-number and expression profiles to identify putative cis- and trans-associated genes in breast cancer pathogenesis. We validate these cis- and trans-associated genes by employing them to subtype a large cohort of breast tumours obtained from the METABRIC consortium, and demonstrate that these genes accurately reconstruct the ten subtypes of breast cancer. We observe that individual breast cancer subtypes are driven by distinct sets of cis- and trans-associated genes. Among the cis-associated genes, we recover several known drivers of breast cancer (e.g. CCND1, ERRB2, MDM2 and ZNF703) and some novel putative drivers (e.g. BRF2 and SF3B3). siRNA-mediated knockdown of BRF2 across a panel of breast cancer cell lines showed significant reduction specifically in cell proliferation in HER2+ lines, thereby indicating that BRF2 could be a context-dependent oncogene and potentially targetable in these lines. Among the trans-associated genes, we identify modules of immune-response (CD2, CD19, CD38 and CD79B), mitotic/cell-cycle kinases (e.g. AURKB, MELK, PLK1 and TTK), and DNA-damage response genes (e.g. RFC4 and FEN1).
1504.03488
Mark Leake
Adam J. M. Wollman, Helen Miller, Zhaokun Zhou, Mark C. Leake
Probing DNA interactions with proteins using a single-molecule toolbox: inside the cell, in a test tube, and in a computer
null
null
null
null
q-bio.BM physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
DNA-interacting proteins have roles in multiple processes, many operating as molecular machines which undergo dynamic metastable transitions to bring about their biological function. To fully understand this molecular heterogeneity, DNA and the proteins that bind to it must ideally be interrogated at a single molecule level in their native in vivo environments, in a time-resolved manner fast enough to sample the molecular transitions across the free energy landscape. Progress has been made over the past decade in utilising cutting-edge tools of the physical sciences to address challenging biological questions concerning the function and modes of action of several different proteins which bind to DNA. These physiologically relevant assays are technically challenging, but can be complemented by powerful and often more tractable in vitro experiments which confer advantages of the chemical environment with enhanced detection signal-to-noise of molecular signatures and transition events. Here, we discuss a range of techniques we have developed to monitor DNA-protein interactions in vivo, in vitro and in silico. These include bespoke single-molecule fluorescence microscopy techniques to elucidate the architecture and dynamics of the bacterial replisome and the structural maintenance of bacterial chromosomes, as well as new computational tools to extract single-molecule molecular signatures from live cells to monitor stoichiometry, spatial localization and mobility in living cells. We also discuss recent developments from our lab made in vitro, complementing these in vivo studies, which combine optical and magnetic tweezers to manipulate and image single molecules of DNA, with and without bound protein, in a new superresolution fluorescence microscope.
[ { "created": "Tue, 14 Apr 2015 10:40:46 GMT", "version": "v1" } ]
2015-04-15
[ [ "Wollman", "Adam J. M.", "" ], [ "Miller", "Helen", "" ], [ "Zhou", "Zhaokun", "" ], [ "Leake", "Mark C.", "" ] ]
DNA-interacting proteins have roles in multiple processes, many operating as molecular machines which undergo dynamic metastable transitions to bring about their biological function. To fully understand this molecular heterogeneity, DNA and the proteins that bind to it must ideally be interrogated at a single molecule level in their native in vivo environments, in a time-resolved manner fast enough to sample the molecular transitions across the free energy landscape. Progress has been made over the past decade in utilising cutting-edge tools of the physical sciences to address challenging biological questions concerning the function and modes of action of several different proteins which bind to DNA. These physiologically relevant assays are technically challenging, but can be complemented by powerful and often more tractable in vitro experiments which confer advantages of the chemical environment with enhanced detection signal-to-noise of molecular signatures and transition events. Here, we discuss a range of techniques we have developed to monitor DNA-protein interactions in vivo, in vitro and in silico. These include bespoke single-molecule fluorescence microscopy techniques to elucidate the architecture and dynamics of the bacterial replisome and the structural maintenance of bacterial chromosomes, as well as new computational tools to extract single-molecule molecular signatures from live cells to monitor stoichiometry, spatial localization and mobility in living cells. We also discuss recent developments from our lab made in vitro, complementing these in vivo studies, which combine optical and magnetic tweezers to manipulate and image single molecules of DNA, with and without bound protein, in a new superresolution fluorescence microscope.
1811.09167
Sayak Mukherjee
D. V. Arjun, Pallab Basu and Sayak Mukherjee
Anticipation: an effective evolutionary strategy for a sub-optimal population in a cyclic environment
20 pages, 4 main text fig, 3 suppl fig
null
null
null
q-bio.PE q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We built a two-state model of an asexually reproducing organism in a periodic environment endowed with the capability to anticipate an upcoming environmental change and undergo pre-emptive switching. By virtue of these anticipatory transitions, the organism oscillates between its two states with a phase that is a time $\theta$ out of sync with the environmental oscillation. We show that an anticipation-capable organism increases its long-term fitness over an organism that oscillates in-sync with the environment, provided $\theta$ does not exceed a threshold. We also show that the long-term fitness is maximized for an optimal anticipation time that decreases approximately as $1/n$, $n$ being the number of cell divisions in time $T$. Furthermore, we demonstrate that optimal "anticipators" outperform "bet-hedgers" in the range of parameters considered. For a sub-optimal ensemble of anticipators, anticipation performs better than bet-hedging only when the variance in anticipation is small compared to the mean and the rate of pre-emptive transition is high. Taken together, our work suggests that anticipation increases overall fitness of an organism in a periodic environment and it is a viable alternative to bet-hedging provided the error in anticipation is small.
[ { "created": "Thu, 22 Nov 2018 13:31:13 GMT", "version": "v1" } ]
2018-11-26
[ [ "Arjun", "D. V.", "" ], [ "Basu", "Pallab", "" ], [ "Mukherjee", "Sayak", "" ] ]
We built a two-state model of an asexually reproducing organism in a periodic environment endowed with the capability to anticipate an upcoming environmental change and undergo pre-emptive switching. By virtue of these anticipatory transitions, the organism oscillates between its two states with a phase that is a time $\theta$ out of sync with the environmental oscillation. We show that an anticipation-capable organism increases its long-term fitness over an organism that oscillates in-sync with the environment, provided $\theta$ does not exceed a threshold. We also show that the long-term fitness is maximized for an optimal anticipation time that decreases approximately as $1/n$, $n$ being the number of cell divisions in time $T$. Furthermore, we demonstrate that optimal "anticipators" outperform "bet-hedgers" in the range of parameters considered. For a sub-optimal ensemble of anticipators, anticipation performs better than bet-hedging only when the variance in anticipation is small compared to the mean and the rate of pre-emptive transition is high. Taken together, our work suggests that anticipation increases overall fitness of an organism in a periodic environment and it is a viable alternative to bet-hedging provided the error in anticipation is small.
0708.2124
Mike Steel Prof.
Mike Steel and Allen Rodrigo
Maximum Likelihood Supertrees
13 pages, 0 figures
null
null
null
q-bio.PE q-bio.QM
null
We analyse a maximum-likelihood approach for combining phylogenetic trees into a larger `supertree'. This is based on a simple exponential model of phylogenetic error, which ensures that ML supertrees have a simple combinatorial description (as a median tree, minimising a weighted sum of distances to the input trees). We show that this approach to ML supertree reconstruction is statistically consistent (it converges on the true species supertree as more input trees are combined), in contrast to the widely-used MRP method, which we show can be statistically inconsistent under the exponential error model. We also show that this statistical consistency extends to an ML approach for constructing species supertrees from gene trees. In this setting, incomplete lineage sorting (due to coalescence rates of homologous genes being lower than speciation rates) has been shown to lead to gene trees that are frequently different from species trees, and this can confound efforts to reconstruct the species phylogeny correctly.
[ { "created": "Thu, 16 Aug 2007 03:11:01 GMT", "version": "v1" } ]
2007-08-17
[ [ "Steel", "Mike", "" ], [ "Rodrigo", "Allen", "" ] ]
We analyse a maximum-likelihood approach for combining phylogenetic trees into a larger `supertree'. This is based on a simple exponential model of phylogenetic error, which ensures that ML supertrees have a simple combinatorial description (as a median tree, minimising a weighted sum of distances to the input trees). We show that this approach to ML supertree reconstruction is statistically consistent (it converges on the true species supertree as more input trees are combined), in contrast to the widely-used MRP method, which we show can be statistically inconsistent under the exponential error model. We also show that this statistical consistency extends to an ML approach for constructing species supertrees from gene trees. In this setting, incomplete lineage sorting (due to coalescence rates of homologous genes being lower than speciation rates) has been shown to lead to gene trees that are frequently different from species trees, and this can confound efforts to reconstruct the species phylogeny correctly.
2107.05438
Noor Sajid
Noor Sajid and Francesco Faccio and Lancelot Da Costa and Thomas Parr and J\"urgen Schmidhuber and Karl Friston
Bayesian brains and the R\'enyi divergence
23 pages, 5 figures
null
null
null
q-bio.NC cs.AI
http://creativecommons.org/licenses/by/4.0/
Under the Bayesian brain hypothesis, behavioural variations can be attributed to different priors over generative model parameters. This provides a formal explanation for why individuals exhibit inconsistent behavioural preferences when confronted with similar choices. For example, greedy preferences are a consequence of confident (or precise) beliefs over certain outcomes. Here, we offer an alternative account of behavioural variability using R\'enyi divergences and their associated variational bounds. R\'enyi bounds are analogous to the variational free energy (or evidence lower bound) and can be derived under the same assumptions. Importantly, these bounds provide a formal way to establish behavioural differences through an $\alpha$ parameter, given fixed priors. This rests on changes in $\alpha$ that alter the bound (on a continuous scale), inducing different posterior estimates and consequent variations in behaviour. Thus, it looks as if individuals have different priors, and have reached different conclusions. More specifically, $\alpha \to 0^{+}$ optimisation leads to mass-covering variational estimates and increased variability in choice behaviour. Furthermore, $\alpha \to + \infty$ optimisation leads to mass-seeking variational posteriors and greedy preferences. We exemplify this formulation through simulations of the multi-armed bandit task. We note that these $\alpha$ parameterisations may be especially relevant, i.e., shape preferences, when the true posterior is not in the same family of distributions as the assumed (simpler) approximate density, which may be the case in many real-world scenarios. The ensuing departure from vanilla variational inference provides a potentially useful explanation for differences in behavioural preferences of biological (or artificial) agents under the assumption that the brain performs variational Bayesian inference.
[ { "created": "Mon, 12 Jul 2021 14:14:36 GMT", "version": "v1" } ]
2021-07-13
[ [ "Sajid", "Noor", "" ], [ "Faccio", "Francesco", "" ], [ "Da Costa", "Lancelot", "" ], [ "Parr", "Thomas", "" ], [ "Schmidhuber", "Jürgen", "" ], [ "Friston", "Karl", "" ] ]
Under the Bayesian brain hypothesis, behavioural variations can be attributed to different priors over generative model parameters. This provides a formal explanation for why individuals exhibit inconsistent behavioural preferences when confronted with similar choices. For example, greedy preferences are a consequence of confident (or precise) beliefs over certain outcomes. Here, we offer an alternative account of behavioural variability using R\'enyi divergences and their associated variational bounds. R\'enyi bounds are analogous to the variational free energy (or evidence lower bound) and can be derived under the same assumptions. Importantly, these bounds provide a formal way to establish behavioural differences through an $\alpha$ parameter, given fixed priors. This rests on changes in $\alpha$ that alter the bound (on a continuous scale), inducing different posterior estimates and consequent variations in behaviour. Thus, it looks as if individuals have different priors, and have reached different conclusions. More specifically, $\alpha \to 0^{+}$ optimisation leads to mass-covering variational estimates and increased variability in choice behaviour. Furthermore, $\alpha \to + \infty$ optimisation leads to mass-seeking variational posteriors and greedy preferences. We exemplify this formulation through simulations of the multi-armed bandit task. We note that these $\alpha$ parameterisations may be especially relevant, i.e., shape preferences, when the true posterior is not in the same family of distributions as the assumed (simpler) approximate density, which may be the case in many real-world scenarios. The ensuing departure from vanilla variational inference provides a potentially useful explanation for differences in behavioural preferences of biological (or artificial) agents under the assumption that the brain performs variational Bayesian inference.
0806.3734
Pietro Faccioli
Pietro Faccioli
Characterization of Protein Folding by Dominant Reaction Pathways
14 pages, 11 figures
null
null
null
q-bio.BM cond-mat.soft
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We assess the reliability of the recently developed approach denominated Dominant Reaction Pathways (DRP) by studying the folding of a 16-residue beta-hairpin, within a coarse-grained Go-type model. We show that the DRP predictions are in quantitative agreement with the results of Molecular Dynamics simulations, performed in the same model. On the other hand, in the DRP approach, the computational difficulties associated to the decoupling of time scales are rigorously bypassed. The analysis of the important transition pathways supports a picture of the beta-hairpin folding in which the reaction is initiated by the collapse of the hydrophobic cluster.
[ { "created": "Mon, 23 Jun 2008 18:59:35 GMT", "version": "v1" } ]
2008-06-24
[ [ "Faccioli", "Pietro", "" ] ]
We assess the reliability of the recently developed approach denominated Dominant Reaction Pathways (DRP) by studying the folding of a 16-residue beta-hairpin, within a coarse-grained Go-type model. We show that the DRP predictions are in quantitative agreement with the results of Molecular Dynamics simulations, performed in the same model. On the other hand, in the DRP approach, the computational difficulties associated to the decoupling of time scales are rigorously bypassed. The analysis of the important transition pathways supports a picture of the beta-hairpin folding in which the reaction is initiated by the collapse of the hydrophobic cluster.
1808.00598
Nikolai Slavov
Bogdan Budnik, Ezra Levy, Guillaume Harmange, Nikolai Slavov
Mass-spectrometry of single mammalian cells quantifies proteome heterogeneity during cell differentiation
null
Genome Biology, 2018
10.1186/s13059-018-1547-5
19:161
q-bio.GN q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Cellular heterogeneity is important to biological processes, including cancer and development. However, proteome heterogeneity is largely unexplored because of the limitations of existing methods for quantifying protein levels in single cells. To alleviate these limitations, we developed Single Cell ProtEomics by Mass Spectrometry (SCoPE-MS), and validated its ability to identify distinct human cancer cell types based on their proteomes. We used SCoPE-MS to quantify over a thousand proteins in differentiating mouse embryonic stem (ES) cells. The single-cell proteomes enabled us to deconstruct cell populations and infer protein abundance relationships. Comparison between single-cell proteomes and transcriptomes indicated coordinated mRNA and protein covariation. Yet many genes exhibited functionally concerted and distinct regulatory patterns at the mRNA and the protein levels, suggesting that post-transcriptional regulatory mechanisms contribute to proteome remodeling during lineage specification, especially for developmental genes. SCoPE-MS is broadly applicable to measuring proteome configurations of single cells and linking them to functional phenotypes, such as cell type and differentiation potentials.
[ { "created": "Wed, 1 Aug 2018 23:50:47 GMT", "version": "v1" } ]
2018-10-29
[ [ "Budnik", "Bogdan", "" ], [ "Levy", "Ezra", "" ], [ "Harmange", "Guillaume", "" ], [ "Slavov", "Nikolai", "" ] ]
Cellular heterogeneity is important to biological processes, including cancer and development. However, proteome heterogeneity is largely unexplored because of the limitations of existing methods for quantifying protein levels in single cells. To alleviate these limitations, we developed Single Cell ProtEomics by Mass Spectrometry (SCoPE-MS), and validated its ability to identify distinct human cancer cell types based on their proteomes. We used SCoPE-MS to quantify over a thousand proteins in differentiating mouse embryonic stem (ES) cells. The single-cell proteomes enabled us to deconstruct cell populations and infer protein abundance relationships. Comparison between single-cell proteomes and transcriptomes indicated coordinated mRNA and protein covariation. Yet many genes exhibited functionally concerted and distinct regulatory patterns at the mRNA and the protein levels, suggesting that post-transcriptional regulatory mechanisms contribute to proteome remodeling during lineage specification, especially for developmental genes. SCoPE-MS is broadly applicable to measuring proteome configurations of single cells and linking them to functional phenotypes, such as cell type and differentiation potentials.
1902.09148
Hyunjin Shim
Hyunjin Shim
Futuristic methods in virus genome evolution using the Third-Generation DNA sequencing and artificial neural networks
29 pages, 5 figures, 1 table. An invited book chapter to appear in "Global Virology Series" P. Shapshak et al. (Eds) 2019
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Third-Generation DNA sequencing has emerged in the last few years through new technologies that allow the production of long-read sequences. Applications of Third-Generation sequencing enable real-time and on-site data production, changing the research paradigms in environmental and medical sampling in virology. To take full advantage of large-scale data generated from long-read sequencing, an innovation in the downstream data analysis is necessary. Here, we discuss futuristic methods using machine learning approaches to analyze big genetic data. Machine learning combines pattern recognition and computational learning to perform predictive and exploratory data analysis. In particular, deep learning is a field of machine learning that is used to solve complex problems through artificial neural networks. Unlike other methods, features can be learned using neural networks entirely from data without manual specifications. We discuss the future of 21st-century virology by presenting futuristic approaches for virus studies using real-time data production and on-site data analysis with the Third-Generation Sequencing and machine learning methods. We first introduce the basic concepts in conventional statistical models and methods in virology, building gradually into the necessity of innovating the downstream data analysis to meet the advances in sequencing technologies. We argue that artificial neural networks can innovate the downstream data analysis, as they can learn from big datasets without model assumptions or feature specifications, as opposed to the current data analysis in bioinformatics. Furthermore, we discuss how futuristic methods using artificial neural networks combined with long-read sequences can revolutionize virus studies, using specific examples in supervised and unsupervised settings.
[ { "created": "Mon, 25 Feb 2019 09:05:00 GMT", "version": "v1" } ]
2019-02-26
[ [ "Shim", "Hyunjin", "" ] ]
Third-Generation DNA sequencing has emerged in the last few years through new technologies that allow the production of long-read sequences. Applications of Third-Generation sequencing enable real-time and on-site data production, changing the research paradigms in environmental and medical sampling in virology. To take full advantage of large-scale data generated from long-read sequencing, an innovation in the downstream data analysis is necessary. Here, we discuss futuristic methods using machine learning approaches to analyze big genetic data. Machine learning combines pattern recognition and computational learning to perform predictive and exploratory data analysis. In particular, deep learning is a field of machine learning that is used to solve complex problems through artificial neural networks. Unlike other methods, features can be learned using neural networks entirely from data without manual specifications. We discuss the future of 21st-century virology by presenting futuristic approaches for virus studies using real-time data production and on-site data analysis with the Third-Generation Sequencing and machine learning methods. We first introduce the basic concepts in conventional statistical models and methods in virology, building gradually into the necessity of innovating the downstream data analysis to meet the advances in sequencing technologies. We argue that artificial neural networks can innovate the downstream data analysis, as they can learn from big datasets without model assumptions or feature specifications, as opposed to the current data analysis in bioinformatics. Furthermore, we discuss how futuristic methods using artificial neural networks combined with long-read sequences can revolutionize virus studies, using specific examples in supervised and unsupervised settings.
2107.03383
Simone Marini
Mattia Prosperi, Simone Marini, Christina Boucher, Jiang Bian
Assessing putative bias in prediction of anti-microbial resistance from real-world genotyping data under explicit causal assumptions
In DSHealth '21: Joint KDD 2021 Health Day and 2021 KDD Workshop on Applied Data Science for Healthcare, Aug 14--18, 2021, Virtual, 5 pages
null
null
null
q-bio.GN cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Whole genome sequencing (WGS) is quickly becoming the customary means for identification of antimicrobial resistance (AMR) due to its ability to obtain high resolution information about the genes and mechanisms that are causing resistance and driving pathogen mobility. By contrast, traditional phenotypic (antibiogram) testing cannot easily elucidate such information. Yet development of AMR prediction tools from genotype-phenotype data can be biased, since sampling is non-randomized. Sample provenience, period of collection, and species representation can confound the association of genetic traits with AMR. Thus, prediction models can perform poorly on new data with sampling distribution shifts. In this work -- under an explicit set of causal assumptions -- we evaluate the effectiveness of propensity-based rebalancing and confounding adjustment on AMR prediction using genotype-phenotype AMR data from the Pathosystems Resource Integration Center (PATRIC). We select bacterial genotypes (encoded as k-mer signatures, i.e. DNA fragments of length k), country, year, species, and AMR phenotypes for the tetracycline drug class, preparing test data with recent genomes coming from a single country. We test boosted logistic regression (BLR) and random forests (RF) with/without bias-handling. On 10,936 instances, we find evidence of species, location and year imbalance with respect to the AMR phenotype. The crude versus bias-adjusted change in effect of genetic signatures on AMR varies but only moderately (selecting the top 20,000 out of 40+ million k-mers). The area under the receiver operating characteristic (AUROC) of the RF (0.95) is comparable to that of BLR (0.94) on both out-of-bag samples from bootstrap and the external test (n=1,085), where AUROCs do not decrease. We observe a 1%-5% gain in AUROC with bias-handling compared to the sole use of genetic signatures. ...
[ { "created": "Tue, 6 Jul 2021 21:19:21 GMT", "version": "v1" }, { "created": "Fri, 23 Jul 2021 19:59:22 GMT", "version": "v2" } ]
2021-07-27
[ [ "Prosperi", "Mattia", "" ], [ "Marini", "Simone", "" ], [ "Boucher", "Christina", "" ], [ "Bian", "Jiang", "" ] ]
Whole genome sequencing (WGS) is quickly becoming the customary means for identification of antimicrobial resistance (AMR) due to its ability to obtain high resolution information about the genes and mechanisms that are causing resistance and driving pathogen mobility. By contrast, traditional phenotypic (antibiogram) testing cannot easily elucidate such information. Yet development of AMR prediction tools from genotype-phenotype data can be biased, since sampling is non-randomized. Sample provenience, period of collection, and species representation can confound the association of genetic traits with AMR. Thus, prediction models can perform poorly on new data with sampling distribution shifts. In this work -- under an explicit set of causal assumptions -- we evaluate the effectiveness of propensity-based rebalancing and confounding adjustment on AMR prediction using genotype-phenotype AMR data from the Pathosystems Resource Integration Center (PATRIC). We select bacterial genotypes (encoded as k-mer signatures, i.e. DNA fragments of length k), country, year, species, and AMR phenotypes for the tetracycline drug class, preparing test data with recent genomes coming from a single country. We test boosted logistic regression (BLR) and random forests (RF) with/without bias-handling. On 10,936 instances, we find evidence of species, location and year imbalance with respect to the AMR phenotype. The crude versus bias-adjusted change in effect of genetic signatures on AMR varies but only moderately (selecting the top 20,000 out of 40+ million k-mers). The area under the receiver operating characteristic (AUROC) of the RF (0.95) is comparable to that of BLR (0.94) on both out-of-bag samples from bootstrap and the external test (n=1,085), where AUROCs do not decrease. We observe a 1%-5% gain in AUROC with bias-handling compared to the sole use of genetic signatures. ...
1307.5833
Biplab Chattopadhyay
Nirmalendu Hui and Biplab Chattopadhyay
Realizing Bone-mass Generation Through a Density Type Theoretical Archetype
15 pages, 11 figures
null
null
null
q-bio.TO physics.bio-ph q-bio.QM
http://creativecommons.org/licenses/by/3.0/
The dynamic process of the formation of bone-mass in case of humans is studied to gather precise understanding about the same with the motivation of applying these concepts for healing of bone-fracture and non-unions in non-invasive manner. Three cellular ingredients, osteoblasts, osteoclasts and osteocytes are predominant players in generating new bone-mass in which a host of hormones, proteins and minerals have potential supportive role. Considering population density of these three biological cells osteoblasts, osteoclasts and osteocytes as variables, we formulate a theoretical model, in the form of a set of time differential equations in order to emulate the dynamic process of bone-mass creation. High relative abundance of osteocytes at asymptotic scale together with moderate level values of osteoblasts and osteoclasts cell populations signifies formation of bone-matter in our theoretical archetype. The archetype has been studied through both analytical and numerical channels, significant results are emphasized and relevant conclusions are drawn. Certain predictive statements are also made which could be put to future in-vivo clinical test or in-vitro physiological experimentation as well.
[ { "created": "Mon, 22 Jul 2013 19:48:33 GMT", "version": "v1" } ]
2013-07-23
[ [ "Hui", "Nirmalendu", "" ], [ "Chattopadhyay", "Biplab", "" ] ]
The dynamic process of bone-mass formation in humans is studied to gain a precise understanding of it, with the motivation of applying these concepts to the non-invasive healing of bone fractures and non-unions. Three cellular ingredients, osteoblasts, osteoclasts and osteocytes, are the predominant players in generating new bone-mass, in which a host of hormones, proteins and minerals play potential supportive roles. Taking the population densities of these three biological cells, osteoblasts, osteoclasts and osteocytes, as variables, we formulate a theoretical model, in the form of a set of time-differential equations, in order to emulate the dynamic process of bone-mass creation. A high relative abundance of osteocytes at the asymptotic scale, together with moderate values of the osteoblast and osteoclast cell populations, signifies the formation of bone matter in our theoretical archetype. The archetype has been studied through both analytical and numerical channels; significant results are emphasized and relevant conclusions are drawn. Certain predictive statements are also made, which could be put to future in-vivo clinical tests or in-vitro physiological experimentation.
2003.09002
Iqbal H. Sarker
Sohrab Hossain, Dhiman Sarma, Rana Joyti Chakma, Wahidul Alam, Mohammed Moshiul Hoque and Iqbal H. Sarker
A Rule Based Expert System to Assess Coronary Artery Disease under Uncertainty
International Conference on Computing Science, Communication and Security (COMS2), Springer, 2020
null
null
null
q-bio.QM stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The coronary artery disease (CAD) involves narrowing and damaging the major blood vessels has become the most life threating disease in the world especially in south Asian reason. Although outstanding medical facilities are available in Singapore and India for CAD patients, early detection of CAD stages are necessary to minimize the patients' sufferings and expenses. It is really challenging for doctors to incorporate numerous factors for details analysis and CAD detections are expensive as it needs expensive medical facilities. Clinical Decision Support Systems (CDSS) may assist to analyze numerous factors for patients. In this paper, a Rule Based Expert System (RBES) is proposed which can predict five different stages of CAD. RBES contains five different Belief Rule Based (BRB) systems and the final output is produced by combining all BRBs using the Evidential Reasoning (ER). Success, Error, Failure, False Omission rates are calculated to measures the performance of the RBES. The Success Rate and False Omission Rate show better performance comparing to existing CDSS.
[ { "created": "Mon, 16 Mar 2020 15:53:20 GMT", "version": "v1" } ]
2020-03-23
[ [ "Hossain", "Sohrab", "" ], [ "Sarma", "Dhiman", "" ], [ "Chakma", "Rana Joyti", "" ], [ "Alam", "Wahidul", "" ], [ "Hoque", "Mohammed Moshiul", "" ], [ "Sarker", "Iqbal H.", "" ] ]
Coronary artery disease (CAD), which involves narrowing and damage of the major blood vessels, has become the most life-threatening disease in the world, especially in the South Asian region. Although outstanding medical facilities are available in Singapore and India for CAD patients, early detection of CAD stages is necessary to minimize patients' suffering and expenses. It is challenging for doctors to incorporate numerous factors into a detailed analysis, and CAD detection is expensive as it requires costly medical facilities. Clinical Decision Support Systems (CDSS) may assist in analyzing numerous factors for patients. In this paper, a Rule Based Expert System (RBES) is proposed which can predict five different stages of CAD. The RBES contains five different Belief Rule Based (BRB) systems, and the final output is produced by combining all BRBs using Evidential Reasoning (ER). Success, Error, Failure, and False Omission rates are calculated to measure the performance of the RBES. The Success Rate and False Omission Rate show better performance compared to existing CDSSs.
1802.08723
Thierry Dufour
T. Dufour, S. Zhang, S. Simon, A. Rousseau
Reactive species involved in higher seeds germination and shoots vigor through direct plasma exposure and plasma-activated liquids
ISPC-23, Montreal, Canada, July 30 - August 4, 2017, Y-5-9, proceeding Ref. 129
null
null
null
q-bio.OT physics.app-ph physics.plasm-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cold atmospheric plasma treatments have been applied on lenses seeds and shoots to improve their germination and vigor rates. Two approaches have been considered: direct plasma exposure and plasma activation of liquids (tap water, demineralized water and liquid fertilizer). A special focus has been drawn on reactive oxygen species generated in the plasma phase but also in plasma activated media to understand their impact on germination process as well as on plants growth.
[ { "created": "Tue, 6 Feb 2018 23:16:31 GMT", "version": "v1" } ]
2018-02-27
[ [ "Dufour", "T.", "" ], [ "Zhang", "S.", "" ], [ "Simon", "S.", "" ], [ "Rousseau", "A.", "" ] ]
Cold atmospheric plasma treatments have been applied to lentil seeds and shoots to improve their germination and vigor rates. Two approaches have been considered: direct plasma exposure and plasma activation of liquids (tap water, demineralized water and liquid fertilizer). A special focus has been placed on the reactive oxygen species generated in the plasma phase, but also in the plasma-activated media, to understand their impact on the germination process as well as on plant growth.
1312.4744
Frederic Bois
J\'er\'emy Hamon, Paul Jennings, Frederic Y. Bois
Integration of Omics Data and Systems Biology Modeling: Effect of Cyclosporine A on the Nrf2 Pathway in Human Renal Kidneys Cells
Six figures, 25 pages (45 with the supplemental material included)
BMC Systems Biology, 2014, 8:76
10.1186/1752-0509-8-76
null
q-bio.QM q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a recent paper, Wilmes et al. demonstrated a qualitative integration of omics data streams to gain a mechanistic understanding of cyclosporine A toxicity. One of their major conclusions was that cyclosporine A strongly activates the nuclear factor (erythroid-derived 2)-like 2 pathway (Nrf2) in renal proximal tubular epithelial cells exposed in vitro. We pursue here the analysis of those data with a quantitative integration of omics data with a differential equation model of the Nrf2 pathway. That was done in two steps: (i) Modeling the in vitro pharmacokinetics of cyclosporine A (exchange between cells, culture medium and vial walls) with a minimal distribution model. (ii) Modeling the time course of omics markers in response to cyclosporine A exposure at the cell level with a coupled PK-systems biology model. Posterior statistical distributions of the parameter values were obtained by Markov chain Monte Carlo sampling. Data were well simulated, and the known in vitro toxic effect EC50 was well matched by model predictions. The integration of in vitro pharmacokinetics and systems biology modeling gives us a quantitative insight into mechanisms of cyclosporine A oxidative-stress induction, and a way to predict such a stress for a variety of exposure conditions.
[ { "created": "Tue, 17 Dec 2013 12:28:58 GMT", "version": "v1" } ]
2017-05-03
[ [ "Hamon", "Jérémy", "" ], [ "Jennings", "Paul", "" ], [ "Bois", "Frederic Y.", "" ] ]
In a recent paper, Wilmes et al. demonstrated a qualitative integration of omics data streams to gain a mechanistic understanding of cyclosporine A toxicity. One of their major conclusions was that cyclosporine A strongly activates the nuclear factor (erythroid-derived 2)-like 2 pathway (Nrf2) in renal proximal tubular epithelial cells exposed in vitro. We pursue here the analysis of those data with a quantitative integration of omics data with a differential equation model of the Nrf2 pathway. That was done in two steps: (i) Modeling the in vitro pharmacokinetics of cyclosporine A (exchange between cells, culture medium and vial walls) with a minimal distribution model. (ii) Modeling the time course of omics markers in response to cyclosporine A exposure at the cell level with a coupled PK-systems biology model. Posterior statistical distributions of the parameter values were obtained by Markov chain Monte Carlo sampling. Data were well simulated, and the known in vitro toxic effect EC50 was well matched by model predictions. The integration of in vitro pharmacokinetics and systems biology modeling gives us a quantitative insight into mechanisms of cyclosporine A oxidative-stress induction, and a way to predict such a stress for a variety of exposure conditions.
2206.03806
Antonio Selva Casta\~neda MSc.
Antonio Rafael Selva Casta\~neda, Erick Eduardo Ramirez-Torres, Luis Eugenio Vald\'es-Garc\'ia, Hilda Mar\'ia Morandeira-Padr\'on, Diana Sedal Yanez, Juan I. Montijano, and Luis Enrique Bergues Cabrales
Model for prognostic of symptomatic, asymptomatic and hospitalized COVID-19 cases with correct demography evolution
null
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
The aim of this study is to propose a modified Susceptible-Exposed-Infectious-Removed (SEIR) model that describes the behaviour of symptomatic, asymptomatic and hospitalized patients of COVID-19 epidemic, including the effect of demographic variation of population. It is shown that considering a population growth proportional to the total population leads to solutions with a qualitative behaviour different from the behaviour obtained in many studies, where constant growth ratio is assumed. An exhaustive theoretical study is carried out and the basic reproduction number $R_0$ is computed from the model equations. It is proved that if $R_0<1$ then the disease-free manifold is globally asymptotically stable, that is, the epidemics remits. Global and local stability of the equilibrium points is also studied. Numerical simulations are used to show the agreement between numerical results and theoretical properties. The model is fitted to experimental data corresponding to the pandemic evolution in the Rep\'ublica de Cuba, showing a proper behaviour of infected cases which let us think that can provide a correct estimation of asymptomatic cases. In conclusion, the model seems to be an adequate tool for the study and control of infectious diseases in particular the COVID-19 disease transmission.
[ { "created": "Wed, 8 Jun 2022 11:08:05 GMT", "version": "v1" } ]
2022-06-09
[ [ "Castañeda", "Antonio Rafael Selva", "" ], [ "Ramirez-Torres", "Erick Eduardo", "" ], [ "Valdés-García", "Luis Eugenio", "" ], [ "Morandeira-Padrón", "Hilda María", "" ], [ "Yanez", "Diana Sedal", "" ], [ "Montijano", "Juan I.", "" ], [ "Cabrales", "Luis Enrique Bergues", "" ] ]
The aim of this study is to propose a modified Susceptible-Exposed-Infectious-Removed (SEIR) model that describes the behaviour of symptomatic, asymptomatic and hospitalized patients in the COVID-19 epidemic, including the effect of demographic variation of the population. It is shown that considering a population growth proportional to the total population leads to solutions with a qualitative behaviour different from that obtained in many studies, where a constant growth ratio is assumed. An exhaustive theoretical study is carried out and the basic reproduction number $R_0$ is computed from the model equations. It is proved that if $R_0<1$ then the disease-free manifold is globally asymptotically stable, that is, the epidemic remits. Global and local stability of the equilibrium points is also studied. Numerical simulations are used to show the agreement between numerical results and theoretical properties. The model is fitted to experimental data corresponding to the evolution of the pandemic in the Rep\'ublica de Cuba, showing proper behaviour of the infected cases, which suggests that it can provide a correct estimation of asymptomatic cases. In conclusion, the model seems to be an adequate tool for the study and control of infectious diseases, in particular COVID-19 disease transmission.
1608.05166
Tobias Galla
Tobias Galla, Vicente P\'erez-Mu\~nuzuri
Time scales and species coexistence in chaotic flows
5 pages, 4 figures
EPL, 117 (2017) 68001
10.1209/0295-5075/117/68001
null
q-bio.PE cond-mat.stat-mech physics.ao-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Empirical observations in marine ecosystems have suggested a balance of biological and advection time scales as a possible explanation of species coexistence. To characterise this scenario, we measure the time to fixation in neutrally evolving populations in chaotic flows. Contrary to intuition the variation of time scales does not interpolate straightforwardly between the no-flow and well-mixed limits; instead we find that fixation is the slowest at intermediate Damk\"ohler numbers, indicating long-lasting coexistence of species. Our analysis shows that this slowdown is due to spatial organisation on an increasingly modularised network. We also find that diffusion can either slow down or speed up fixation, depending on the relative time scales of flow and evolution.
[ { "created": "Thu, 18 Aug 2016 04:14:44 GMT", "version": "v1" } ]
2018-04-09
[ [ "Galla", "Tobias", "" ], [ "Pérez-Muñuzuri", "Vicente", "" ] ]
Empirical observations in marine ecosystems have suggested a balance of biological and advection time scales as a possible explanation of species coexistence. To characterise this scenario, we measure the time to fixation in neutrally evolving populations in chaotic flows. Contrary to intuition the variation of time scales does not interpolate straightforwardly between the no-flow and well-mixed limits; instead we find that fixation is the slowest at intermediate Damk\"ohler numbers, indicating long-lasting coexistence of species. Our analysis shows that this slowdown is due to spatial organisation on an increasingly modularised network. We also find that diffusion can either slow down or speed up fixation, depending on the relative time scales of flow and evolution.
1910.10332
Mette Olufsen
Justen Geddes, Jesper Mehlsen, Mette S. Olufsen
Characterization of blood pressure and heart rate oscillations of POTS patients via uniform phase empirical mode decomposition
null
null
null
null
q-bio.QM eess.SP math.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Objective: Postural Orthostatic Tachycardia Syndrome (POTS) is associated with the onset of tachycardia upon postural change. The current diagnosis involves the measurement of heart rate (HR) and blood pressure (BP) during head-up tilt (HUT) or active standing test. A positive diagnosis is made if HR changes with more than 30 bpm (40 bpm in patients aged 12-19 years), ignoring all of the BP and most of the HR signals. This study examines 0.1 Hz oscillations in systolic arterial blood pressure (SBP) and HR signals providing additional metrics characterizing the dynamics of the baroreflex. Methods: We analyze data from 28 control subjects and 28 POTS patients who underwent HUT. We extract beat-to-beat HR and SBP during a 10 min interval including 5 minutes of baseline and 5 minutes of HUT. We employ Uniform Phase Empirical Mode Decomposition (UPEMD) to extract 0.1 Hz stationary modes from both signals and use random forest machine learning and k-means clustering to analyze the outcomes. Results show that the amplitude of the 0.1 Hz oscillations is higher in POTS patients and that the phase response between the two signals is shorter (p < 0.005). Conclusion: POTS is associated with an increase in the amplitude of SBP and HR 0.1 Hz oscillation and a shortening of the phase between the two signals. Significance: The 0.1 Hz phase response and oscillation amplitude metrics provide new markers that can improve POTS diagnostic augmenting the existing diagnosis protocol only analyzing the change in heart rate.
[ { "created": "Wed, 23 Oct 2019 03:38:34 GMT", "version": "v1" }, { "created": "Tue, 19 Nov 2019 20:05:07 GMT", "version": "v2" }, { "created": "Fri, 24 Jan 2020 15:39:46 GMT", "version": "v3" } ]
2020-01-27
[ [ "Geddes", "Justen", "" ], [ "Mehlsen", "Jesper", "" ], [ "Olufsen", "Mette S.", "" ] ]
Objective: Postural Orthostatic Tachycardia Syndrome (POTS) is associated with the onset of tachycardia upon postural change. The current diagnosis involves the measurement of heart rate (HR) and blood pressure (BP) during a head-up tilt (HUT) or active standing test. A positive diagnosis is made if HR changes by more than 30 bpm (40 bpm in patients aged 12-19 years), ignoring all of the BP signal and most of the HR signal. This study examines 0.1 Hz oscillations in the systolic arterial blood pressure (SBP) and HR signals, providing additional metrics characterizing the dynamics of the baroreflex. Methods: We analyze data from 28 control subjects and 28 POTS patients who underwent HUT. We extract beat-to-beat HR and SBP during a 10 min interval including 5 minutes of baseline and 5 minutes of HUT. We employ Uniform Phase Empirical Mode Decomposition (UPEMD) to extract 0.1 Hz stationary modes from both signals and use random forest machine learning and k-means clustering to analyze the outcomes. Results show that the amplitude of the 0.1 Hz oscillations is higher in POTS patients and that the phase response between the two signals is shorter (p < 0.005). Conclusion: POTS is associated with an increase in the amplitude of the SBP and HR 0.1 Hz oscillations and a shortening of the phase between the two signals. Significance: The 0.1 Hz phase response and oscillation amplitude metrics provide new markers that can improve POTS diagnosis, augmenting the existing diagnostic protocol, which analyzes only the change in heart rate.
q-bio/0402047
Bernhard Mehlig
A. Eriksson and B. Mehlig
Gene-history correlation and population structure
Revised and extended version: 26 pages, 5 figures, 1 table
Physical Biology 1, 220-228 (2004)
10.1088/1478-3967/1/4/004
null
q-bio.GN q-bio.PE
null
Correlation of gene histories in the human genome determines the patterns of genetic variation (haplotype structure) and is crucial to understanding genetic factors in common diseases. We derive closed analytical expressions for the correlation of gene histories in established demographic models for genetic evolution and show how to extend the analysis to more realistic (but more complicated) models of demographic structure. We identify two contributions to the correlation of gene histories in divergent populations: linkage disequilibrium, and differences in the demographic history of individuals in the sample. These two factors contribute to correlations at different length scales: the former at small, and the latter at large scales. We show that recent mixing events in divergent populations limit the range of correlations and compare our findings to empirical results on the correlation of gene histories in the human genome.
[ { "created": "Fri, 27 Feb 2004 21:31:40 GMT", "version": "v1" }, { "created": "Tue, 28 Sep 2004 14:24:23 GMT", "version": "v2" } ]
2009-11-10
[ [ "Eriksson", "A.", "" ], [ "Mehlig", "B.", "" ] ]
Correlation of gene histories in the human genome determines the patterns of genetic variation (haplotype structure) and is crucial to understanding genetic factors in common diseases. We derive closed analytical expressions for the correlation of gene histories in established demographic models for genetic evolution and show how to extend the analysis to more realistic (but more complicated) models of demographic structure. We identify two contributions to the correlation of gene histories in divergent populations: linkage disequilibrium, and differences in the demographic history of individuals in the sample. These two factors contribute to correlations at different length scales: the former at small, and the latter at large scales. We show that recent mixing events in divergent populations limit the range of correlations and compare our findings to empirical results on the correlation of gene histories in the human genome.
1504.05612
Saulo Alves Aflitos
Saulo Alves Aflitos, Gabino Sanchez-Perez, Dick de Ridder, Paul Fransz, Eric Schranz, Hans de Jong, Sander Peters
Introgression Browser: High throughput whole-genome SNP visualization
33 pages, 4 figures, 4 Supplementary Figures This is the pre-peer reviewed version of the following article: Plant J. 2015 Apr;82(1):174-82, which has been published in final form at http://doi.org/10.1111/tpj.12800
Plant J. 2015 Apr;82(1):174-82
10.1111/tpj.12800
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Breeding by introgressive hybridization is a pivotal strategy to broaden the genetic basis of crops. Usually, the desired traits are monitored in consecutive crossing generations by marker-assisted selection, but their analyses fail in chromosome regions where crossover recombinants are rare or not viable. Here, we present the Introgression Browser (IBROWSER), a novel bioinformatics tool aimed at visualizing introgressions at nucleotide or SNP accuracy. The software selects homozygous SNPs from Variant Call Format (VCF) information and filters out heterozygous SNPs, Multi-Nucleotide Polymorphisms (MNPs) and insertion-deletions (InDels). For data analysis IBROWSER makes use of sliding windows, but if needed it can generate any desired fragmentation pattern through General Feature Format (GFF) information. In an example of tomato (Solanum lycopersicum) accessions we visualize SNP patterns and elucidate both position and boundaries of the introgressions. We also show that our tool is capable of identifying alien DNA in a panel of the closely related S. pimpinellifolium by examining phylogenetic relationships of the introgressed segments in tomato. In a third example, we demonstrate the power of the IBROWSER in a panel of 600 Arabidopsis accessions, detecting the boundaries of a SNP-free region around a polymorphic 1.17 Mbp inverted segment on the short arm of chromosome 4. The architecture and functionality of IBROWSER makes the software appropriate for a broad set of analyses including SNP mining, genome structure analysis, and pedigree analysis. Its functionality, together with the capability to process large data sets and efficient visualization of sequence variation, makes IBROWSER a valuable breeding tool.
[ { "created": "Tue, 21 Apr 2015 20:58:47 GMT", "version": "v1" } ]
2015-11-18
[ [ "Aflitos", "Saulo Alves", "" ], [ "Sanchez-Perez", "Gabino", "" ], [ "de Ridder", "Dick", "" ], [ "Fransz", "Paul", "" ], [ "Schranz", "Eric", "" ], [ "de Jong", "Hans", "" ], [ "Peters", "Sander", "" ] ]
Breeding by introgressive hybridization is a pivotal strategy to broaden the genetic basis of crops. Usually, the desired traits are monitored in consecutive crossing generations by marker-assisted selection, but their analyses fail in chromosome regions where crossover recombinants are rare or not viable. Here, we present the Introgression Browser (IBROWSER), a novel bioinformatics tool aimed at visualizing introgressions at nucleotide or SNP accuracy. The software selects homozygous SNPs from Variant Call Format (VCF) information and filters out heterozygous SNPs, Multi-Nucleotide Polymorphisms (MNPs) and insertion-deletions (InDels). For data analysis IBROWSER makes use of sliding windows, but if needed it can generate any desired fragmentation pattern through General Feature Format (GFF) information. In an example of tomato (Solanum lycopersicum) accessions we visualize SNP patterns and elucidate both position and boundaries of the introgressions. We also show that our tool is capable of identifying alien DNA in a panel of the closely related S. pimpinellifolium by examining phylogenetic relationships of the introgressed segments in tomato. In a third example, we demonstrate the power of the IBROWSER in a panel of 600 Arabidopsis accessions, detecting the boundaries of a SNP-free region around a polymorphic 1.17 Mbp inverted segment on the short arm of chromosome 4. The architecture and functionality of IBROWSER makes the software appropriate for a broad set of analyses including SNP mining, genome structure analysis, and pedigree analysis. Its functionality, together with the capability to process large data sets and efficient visualization of sequence variation, makes IBROWSER a valuable breeding tool.
1612.00396
Pengxing Cao
Pengxing Cao, Nectarios Klonis, Sophie Zaloumis, Con Dogovski, Stanley C. Xie, Sompob Saralamba, Lisa J. White, Freya J. I. Fowkes, Leann Tilley, Julie A. Simpson, James M. McCaw
A dynamic stress model explains the delayed drug effect in artemisinin treatment of Plasmodium falciparum
24 Pages, 9 figures, 2 tables
null
10.1128/AAC.00618-17
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artemisinin resistance constitutes a major threat to the continued success of control programs for malaria. With alternative antimalarial drugs not yet available, improving our understanding of how artemisinin-based drugs act and how resistance manifests is essential to enable optimisation of dosing regimens in order to prolong the lifespan of current first-line treatment options. Here, through introduction of a novel model of the dynamics of the parasites' response to drug, we explore how artemisinin-based therapies may be adjusted to maintain efficacy and how artemisinin resistance may manifest and be overcome. We introduce a dynamic mathematical model, extending on the traditional pharmacokinetic-pharmacodynamic framework, to capture the time-dependent development of a stress response in parasites. We fit the model to in vitro data and establish that the parasites' stress response explains the recently identified complex interplay between drug concentration, exposure time and parasite viability. Our model demonstrates that the previously reported hypersensitivity of early ring stage parasites of the 3D7 strain to dihydroartemisinin (DHA) is primarily due to the rapid development of stress, rather than any change in the maximum achievable killing rate. Of direct clinical relevance, we demonstrate that the complex temporal features of artemisinin action observed in vitro have a significant impact on predictions of in vivo parasite clearance using PK-PD models. Given the important role that such models play in the design and evaluation of clinical trials for alternative drug dosing regimens, our model contributes an enhanced predictive platform for the continued efforts to minimise the burden of malaria.
[ { "created": "Thu, 1 Dec 2016 19:46:51 GMT", "version": "v1" } ]
2017-10-24
[ [ "Cao", "Pengxing", "" ], [ "Klonis", "Nectarios", "" ], [ "Zaloumis", "Sophie", "" ], [ "Dogovski", "Con", "" ], [ "Xie", "Stanley C.", "" ], [ "Saralamba", "Sompob", "" ], [ "White", "Lisa J.", "" ], [ "Fowkes", "Freya J. I.", "" ], [ "Tilley", "Leann", "" ], [ "Simpson", "Julie A.", "" ], [ "McCaw", "James M.", "" ] ]
Artemisinin resistance constitutes a major threat to the continued success of control programs for malaria. With alternative antimalarial drugs not yet available, improving our understanding of how artemisinin-based drugs act and how resistance manifests is essential to enable optimisation of dosing regimens in order to prolong the lifespan of current first-line treatment options. Here, through introduction of a novel model of the dynamics of the parasites' response to drug, we explore how artemisinin-based therapies may be adjusted to maintain efficacy and how artemisinin resistance may manifest and be overcome. We introduce a dynamic mathematical model, extending the traditional pharmacokinetic-pharmacodynamic framework, to capture the time-dependent development of a stress response in parasites. We fit the model to in vitro data and establish that the parasites' stress response explains the recently identified complex interplay between drug concentration, exposure time and parasite viability. Our model demonstrates that the previously reported hypersensitivity of early ring stage parasites of the 3D7 strain to dihydroartemisinin (DHA) is primarily due to the rapid development of stress, rather than any change in the maximum achievable killing rate. Of direct clinical relevance, we demonstrate that the complex temporal features of artemisinin action observed in vitro have a significant impact on predictions of in vivo parasite clearance using PK-PD models. Given the important role that such models play in the design and evaluation of clinical trials for alternative drug dosing regimens, our model contributes an enhanced predictive platform for the continued efforts to minimise the burden of malaria.
q-bio/0411006
Kwang-Il Goh
C.-M. Ghim, K.-I. Goh, B. Kahng
Lethality and synthetic lethality in the genome-wide metabolic network of Escherichia coli
15 pages, 7 figures, 1 table, final version published in J. Theor. Biol
J. Theor. Biol. 237, 401 (2005)
null
null
q-bio.MN cond-mat.stat-mech
null
Recent genomic analyses of the cellular metabolic network show that reaction fluxes across enzymes are diverse and exhibit power-law behavior in their distribution. While one may guess that the reactions with larger fluxes are more likely to be lethal under the blockade of their catalyzing gene products or gene knockouts, we find, by in silico flux analysis, that lethality rarely correlates with the flux level owing to the widespread backup pathways innate in the genome-wide metabolism of \textit{Escherichia coli}. Lethal reactions, whose deletion generates a cascading failure of subsequent reactions up to the biomass reaction, are identified in terms of the Boolean network scheme as well as the flux balance analysis. The avalanche size of a reaction, defined as the number of subsequently blocked reactions after its removal, turns out to be a useful measure of lethality. As a means to elucidate phenotypic robustness to a single deletion, we investigate synthetic lethality at the reaction level, where simultaneous deletion of a pair of nonlethal reactions leads to the failure of the biomass reaction. Synthetic lethals identified via the flux balance and Boolean schemes are consistently shown to act in parallel pathways, working in such a way that the backup machinery is compromised.
[ { "created": "Mon, 1 Nov 2004 10:21:12 GMT", "version": "v1" }, { "created": "Fri, 13 Jan 2006 17:04:13 GMT", "version": "v2" } ]
2007-05-23
[ [ "Ghim", "C. -M.", "" ], [ "Goh", "K. -I.", "" ], [ "Kahng", "B.", "" ] ]
Recent genomic analyses of the cellular metabolic network show that reaction fluxes across enzymes are diverse and exhibit power-law behavior in their distribution. While one may guess that the reactions with larger fluxes are more likely to be lethal under the blockade of their catalyzing gene products or gene knockouts, we find, by in silico flux analysis, that lethality rarely correlates with the flux level owing to the widespread backup pathways innate in the genome-wide metabolism of \textit{Escherichia coli}. Lethal reactions, whose deletion generates a cascading failure of subsequent reactions up to the biomass reaction, are identified in terms of the Boolean network scheme as well as the flux balance analysis. The avalanche size of a reaction, defined as the number of subsequently blocked reactions after its removal, turns out to be a useful measure of lethality. As a means to elucidate phenotypic robustness to a single deletion, we investigate synthetic lethality at the reaction level, where simultaneous deletion of a pair of nonlethal reactions leads to the failure of the biomass reaction. Synthetic lethals identified via the flux balance and Boolean schemes are consistently shown to act in parallel pathways, working in such a way that the backup machinery is compromised.
1712.03259
Clemence Dubois
Cl\'emence Dubois (IMoST, IUT de Clermont-Ferrand), Dufour Robin (IMoST, IUT de Clermont-Ferrand), Daumar Pierre (IUT de Clermont-Ferrand, IMoST), Aubel Corinne (IMoST), Szczepaniak Claire (CICS), Blavignac Christelle (CICS), Mounetou Emmanuelle (IMoST, IUT de Clermont-Ferrand), Penault-Llorca Fr\'ed\'erique (IMoST), Mahchid Bamdad (IMoST, IUT de Clermont-Ferrand)
Development and cytotoxic response of two proliferative MDA- MB-231 and non-proliferative SUM1315 three-dimensional cell culture models of triple-negative basal-like breast cancer cell lines
Oncotarget, Impact journals, 2017
null
10.18632/oncotarget.20517
null
q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Triple-Negative Basal-Like tumors, representing 15 to 20% of breast cancers, are very aggressive and carry a poor prognosis. Targeted therapies have been developed extensively in preclinical and clinical studies to open the way for new treatment strategies. The present study has focused on developing 3D cell cultures from SUM1315 and MDA-MB-231, two triple-negative basal-like (TNBL) breast cancer cell lines, using the liquid overlay technique. Extracellular matrix concentration, cell density, proliferation, cell viability, topology and ultrastructure parameters were determined. The results showed that for both cell lines, the best conditioning regimen for compact and homogeneous spheroid formation was to use 1000 cells per well and 2% Geltrex. This conditioning regimen highlighted two 3D cell models: non-proliferative SUM1315 spheroids and proliferative MDA-MB-231 spheroids. In both cell lines, the comparison of 2D vs 3D cell culture viability in the presence of increasing concentrations of chemotherapeutic agents i.e. cisplatin, docetaxel and epirubicin, showed that spheroids were clearly less sensitive than monolayer cell cultures. Moreover, a proliferative or non-proliferative 3D cell line property would enable determination of cytotoxic and/or cytostatic drug activity. 3D cell culture could be an excellent tool in addition to the arsenal of techniques currently used in preclinical studies. http://www.impactjournals.com/oncotarget/ Oncotarget, Advance Publications 2017
[ { "created": "Wed, 29 Nov 2017 16:15:37 GMT", "version": "v1" } ]
2017-12-12
[ [ "Dubois", "Clémence", "", "IMoST, IUT de Clermont-Ferrand" ], [ "Robin", "Dufour", "", "IMoST, IUT de Clermont-Ferrand" ], [ "Pierre", "Daumar", "", "IUT de Clermont-Ferrand,\n IMoST" ], [ "Corinne", "Aubel", "", "IMoST" ], [ "Claire", "Szczepaniak", "", "CICS" ], [ "Christelle", "Blavignac", "", "CICS" ], [ "Emmanuelle", "Mounetou", "", "IMoST, IUT de Clermont-Ferrand" ], [ "Frédérique", "Penault-Llorca", "", "IMoST" ], [ "Bamdad", "Mahchid", "", "IMoST, IUT de\n Clermont-Ferrand" ] ]
Triple-Negative Basal-Like tumors, representing 15 to 20% of breast cancers, are very aggressive and carry a poor prognosis. Targeted therapies have been developed extensively in preclinical and clinical studies to open the way for new treatment strategies. The present study has focused on developing 3D cell cultures from SUM1315 and MDA-MB-231, two triple-negative basal-like (TNBL) breast cancer cell lines, using the liquid overlay technique. Extracellular matrix concentration, cell density, proliferation, cell viability, topology and ultrastructure parameters were determined. The results showed that for both cell lines, the best conditioning regimen for compact and homogeneous spheroid formation was to use 1000 cells per well and 2% Geltrex. This conditioning regimen highlighted two 3D cell models: non-proliferative SUM1315 spheroids and proliferative MDA-MB-231 spheroids. In both cell lines, the comparison of 2D vs 3D cell culture viability in the presence of increasing concentrations of chemotherapeutic agents i.e. cisplatin, docetaxel and epirubicin, showed that spheroids were clearly less sensitive than monolayer cell cultures. Moreover, a proliferative or non-proliferative 3D cell line property would enable determination of cytotoxic and/or cytostatic drug activity. 3D cell culture could be an excellent tool in addition to the arsenal of techniques currently used in preclinical studies. http://www.impactjournals.com/oncotarget/ Oncotarget, Advance Publications 2017
1906.12243
Laurent Mombaerts
Laurent Mombaerts, Atte Aalto, Johan Markdahl, Jorge Goncalves
A multifactorial evaluation framework for gene regulatory network reconstruction
Preprint version of the paper. Accepted for Publication to Foundations of Systems Biology in Engineering (FOSBE) 2019
null
null
null
q-bio.MN math.DS q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the past years, many computational methods have been developed to infer the structure of gene regulatory networks from time-series data. However, the applicability and accuracy presumptions of such algorithms remain unclear due to experimental heterogeneity. This paper assesses the performance of recent and successful network inference strategies under a novel, multifactorial evaluation framework in order to highlight pragmatic tradeoffs in experimental design. The effects of data quantity and systems perturbations are addressed, thereby formulating guidelines for efficient resource management. Realistic data were generated from six widely used benchmark models of rhythmic and non-rhythmic gene regulatory systems with random perturbations mimicking the effect of gene knock-out or chemical treatments. Then, time-series data of increasing lengths were provided to five state-of-the-art network inference algorithms representing distinctive mathematical paradigms. The performances of such network reconstruction methodologies are uncovered under various experimental conditions. We report that the algorithms do not benefit equally from data increments. Furthermore, for rhythmic systems, it is more profitable for network inference strategies to be run on long time-series rather than short time-series with multiple perturbations. By contrast, for the non-rhythmic systems, increasing the number of perturbation experiments yielded better results than increasing the sampling frequency. We expect that future benchmark and algorithm design would integrate such multifactorial considerations to promote their widespread and conscientious usage.
[ { "created": "Fri, 28 Jun 2019 14:36:04 GMT", "version": "v1" } ]
2019-07-01
[ [ "Mombaerts", "Laurent", "" ], [ "Aalto", "Atte", "" ], [ "Markdahl", "Johan", "" ], [ "Goncalves", "Jorge", "" ] ]
In the past years, many computational methods have been developed to infer the structure of gene regulatory networks from time-series data. However, the applicability and accuracy presumptions of such algorithms remain unclear due to experimental heterogeneity. This paper assesses the performance of recent and successful network inference strategies under a novel, multifactorial evaluation framework in order to highlight pragmatic tradeoffs in experimental design. The effects of data quantity and systems perturbations are addressed, thereby formulating guidelines for efficient resource management. Realistic data were generated from six widely used benchmark models of rhythmic and non-rhythmic gene regulatory systems with random perturbations mimicking the effect of gene knock-out or chemical treatments. Then, time-series data of increasing lengths were provided to five state-of-the-art network inference algorithms representing distinctive mathematical paradigms. The performances of such network reconstruction methodologies are uncovered under various experimental conditions. We report that the algorithms do not benefit equally from data increments. Furthermore, for rhythmic systems, it is more profitable for network inference strategies to be run on long time-series rather than short time-series with multiple perturbations. By contrast, for the non-rhythmic systems, increasing the number of perturbation experiments yielded better results than increasing the sampling frequency. We expect that future benchmark and algorithm design would integrate such multifactorial considerations to promote their widespread and conscientious usage.
1804.04623
Valeriy Grytsay Dr
V.I. Grytsay, I.V. Musatenko
Nonlinear Self-organization Dynamics of a Metabolic Process of the Krebs Cycle
15 pages, 5 figures. arXiv admin note: substantial text overlap with arXiv:1602.09054, arXiv:1710.09252
null
null
null
q-bio.OT nlin.CD
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The present work continues studies of the mathematical model of a metabolic process of the Krebs cycle. We study the dependence of its cyclicity on the cell respiration intensity determined by the formation level of carbon dioxide. We constructed the phase-parametric characteristic of the consumption of a substrate by a cell depending on the intensity of the metabolic process of formation of the final product of the oxidation. The scenarios of all possible oscillatory modes of the system are constructed and studied. The bifurcations with period doubling and with formation of chaotic modes are found. Their attractors are constructed. The full spectra of indices and divergencies for the obtained modes, the values of KS-entropies, horizons of predictability, and Lyapunov dimensions of strange attractors are calculated. Some conclusions about the structural-functional connections of the cycle of tricarboxylic acids and their influence on the stability of the metabolic process in a cell are presented.
[ { "created": "Tue, 13 Mar 2018 18:40:06 GMT", "version": "v1" } ]
2018-04-13
[ [ "Grytsay", "V. I.", "" ], [ "Musatenko", "I. V.", "" ] ]
The present work continues studies of the mathematical model of a metabolic process of the Krebs cycle. We study the dependence of its cyclicity on the cell respiration intensity determined by the formation level of carbon dioxide. We constructed the phase-parametric characteristic of the consumption of a substrate by a cell depending on the intensity of the metabolic process of formation of the final product of the oxidation. The scenarios of all possible oscillatory modes of the system are constructed and studied. The bifurcations with period doubling and with formation of chaotic modes are found. Their attractors are constructed. The full spectra of indices and divergencies for the obtained modes, the values of KS-entropies, horizons of predictability, and Lyapunov dimensions of strange attractors are calculated. Some conclusions about the structural-functional connections of the cycle of tricarboxylic acids and their influence on the stability of the metabolic process in a cell are presented.
1501.00414
Vasileios Basios
Yukio-Pegio Gunji, Kohei Sonoda and Vasileios Basios
Quantum Cognition based on an Ambiguous Representation Derived from a Rough Set Approximation
23 pages, 8 figures, original research paper
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Over the last years, in a series of papers by Arecchi and others, a model for the cognitive processes involved in decision making has been proposed and investigated. The key element of this model is the expression of apprehension and judgement, basic cognitive processes of decision making, as an inverse Bayes inference classifying the information content of neuron spike trains. For successive plural stimuli, it has been shown that this inference, equipped with basic non-algorithmic jumps, is affected by quantum-like characteristics. We show here that such a decision making process is related consistently with ambiguous representation by an observer within a universe of discourse. In our work ambiguous representation of an object or a stimulus is defined by a pair of maps from objects of a set to their representations, where these two maps are interrelated in a particular structure. The a priori and a posteriori hypotheses in Bayes inference are replaced by the upper and lower approximations, respectively, for the initial data sets, each derived with respect to a map. We show further that due to the particular structural relation between the two maps, the logical structure of such combined approximations can only be expressed as an orthomodular lattice and therefore can be represented by a quantum rather than a Boolean logic. To our knowledge, this is the first investigation aiming to reveal the concrete logic structure of inverse Bayes inference in cognitive processes.
[ { "created": "Mon, 29 Dec 2014 21:27:27 GMT", "version": "v1" }, { "created": "Thu, 19 Nov 2015 02:32:08 GMT", "version": "v2" } ]
2015-11-20
[ [ "Gunji", "Yukio-Pegio", "" ], [ "Sonoda", "Kohei", "" ], [ "Basios", "Vasileios", "" ] ]
Over the last years, in a series of papers by Arecchi and others, a model for the cognitive processes involved in decision making has been proposed and investigated. The key element of this model is the expression of apprehension and judgement, basic cognitive processes of decision making, as an inverse Bayes inference classifying the information content of neuron spike trains. For successive plural stimuli, it has been shown that this inference, equipped with basic non-algorithmic jumps, is affected by quantum-like characteristics. We show here that such a decision making process is related consistently with ambiguous representation by an observer within a universe of discourse. In our work ambiguous representation of an object or a stimulus is defined by a pair of maps from objects of a set to their representations, where these two maps are interrelated in a particular structure. The a priori and a posteriori hypotheses in Bayes inference are replaced by the upper and lower approximations, respectively, for the initial data sets, each derived with respect to a map. We show further that due to the particular structural relation between the two maps, the logical structure of such combined approximations can only be expressed as an orthomodular lattice and therefore can be represented by a quantum rather than a Boolean logic. To our knowledge, this is the first investigation aiming to reveal the concrete logic structure of inverse Bayes inference in cognitive processes.
1211.6238
Ruggero Micheletto
Ruggero Micheletto, Maria Fernanda Avila-Ortega
Visual illusion due to the interaction of flickering and acoustic vibrotactile signals
9 pages, 6 pictures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We studied the influence of mechanical vibrotactile signals in the acoustic range on the visual perception of flickering images. These images, shown on a CRT screen flickering at about 75 Hz, are perceived as constant and stable in the absence of external perturbations. However, when presented together with a controlled acoustic vibration, an illusion is perceived: the images appear to float out of the screen, while the rest of the room is still perceived normally. The acoustic signals given to the subjects were of very low frequency (below 100 Hz) and low amplitude (almost inaudible). The stimuli were transmitted by direct contact to the subject's chin through a plastic stick connected to a speaker. The nature of the illusion is described and a basic theoretical model is given.
[ { "created": "Tue, 27 Nov 2012 08:46:30 GMT", "version": "v1" } ]
2012-11-28
[ [ "Micheletto", "Ruggero", "" ], [ "Avila-Ortega", "Maria Fernanda", "" ] ]
We studied the influence of mechanical vibrotactile signals in the acoustic range on the visual perception of flickering images. These images, shown on a CRT screen flickering at about 75 Hz, are perceived as constant and stable in the absence of external perturbations. However, when presented together with a controlled acoustic vibration, an illusion is perceived: the images appear to float out of the screen, while the rest of the room is still perceived normally. The acoustic signals given to the subjects were of very low frequency (below 100 Hz) and low amplitude (almost inaudible). The stimuli were transmitted by direct contact to the subject's chin through a plastic stick connected to a speaker. The nature of the illusion is described and a basic theoretical model is given.
1507.00751
Thierry Mora
Jonathan Desponds, Thierry Mora, Aleksandra M. Walczak
Fluctuating fitness shapes the clone size distribution of immune repertoires
null
PNAS 113 (2) 274-279 (2016)
10.1073/pnas.1512977112
null
q-bio.PE physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The adaptive immune system relies on the diversity of receptors expressed on the surface of B and T-cells to protect the organism from a vast array of pathogenic threats. The proliferation and degradation dynamics of different cell types (B cells, T cells, naive, memory) are governed by a variety of antigenic and environmental signals, yet the observed clone sizes follow a universal power law distribution. Guided by this reproducibility we propose effective models of somatic evolution where cell fate depends on an effective fitness. This fitness is determined by growth factors acting either on clones of cells with the same receptor responding to specific antigens, or directly on single cells with no regard for clones. We identify fluctuations in the fitness acting specifically on clones as the essential ingredient leading to the observed distributions. Combining our models with experiments we characterize the scale of fluctuations in antigenic environments and we provide tools to identify the relevant growth signals in different tissues and organisms. Our results generalize to any evolving population in a fluctuating environment.
[ { "created": "Thu, 2 Jul 2015 20:28:19 GMT", "version": "v1" } ]
2016-02-10
[ [ "Desponds", "Jonathan", "" ], [ "Mora", "Thierry", "" ], [ "Walczak", "Aleksandra M.", "" ] ]
The adaptive immune system relies on the diversity of receptors expressed on the surface of B and T-cells to protect the organism from a vast array of pathogenic threats. The proliferation and degradation dynamics of different cell types (B cells, T cells, naive, memory) are governed by a variety of antigenic and environmental signals, yet the observed clone sizes follow a universal power law distribution. Guided by this reproducibility we propose effective models of somatic evolution where cell fate depends on an effective fitness. This fitness is determined by growth factors acting either on clones of cells with the same receptor responding to specific antigens, or directly on single cells with no regard for clones. We identify fluctuations in the fitness acting specifically on clones as the essential ingredient leading to the observed distributions. Combining our models with experiments we characterize the scale of fluctuations in antigenic environments and we provide tools to identify the relevant growth signals in different tissues and organisms. Our results generalize to any evolving population in a fluctuating environment.
2106.11129
Klara R\"ohrl
Jano\'s Gabler, Tobias Raabe, Klara R\"ohrl, Hans-Martin von Gaudecker
The Effectiveness of Strategies to Contain SARS-CoV-2: Testing, Vaccinations, and NPIs
null
null
null
null
q-bio.PE econ.GN q-fin.EC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to slow the spread of the CoViD-19 pandemic, governments around the world have enacted a wide set of policies limiting the transmission of the disease. Initially, these focused on non-pharmaceutical interventions; more recently, vaccinations and large-scale rapid testing have started to play a major role. The objective of this study is to explain the quantitative effects of these policies on determining the course of the pandemic, allowing for factors like seasonality or virus strains with different transmission profiles. To do so, the study develops an agent-based simulation model, which is estimated using data for the second and the third wave of the CoViD-19 pandemic in Germany. The paper finds that during a period where vaccination rates rose from 5% to 40%, large-scale rapid testing had the largest effect on reducing infection numbers. Frequent large-scale rapid testing should remain part of strategies to contain CoViD-19; it can substitute for many non-pharmaceutical interventions that come at a much larger cost to individuals, society, and the economy.
[ { "created": "Mon, 21 Jun 2021 14:08:30 GMT", "version": "v1" }, { "created": "Tue, 22 Jun 2021 15:05:54 GMT", "version": "v2" } ]
2021-06-24
[ [ "Gabler", "Janoś", "" ], [ "Raabe", "Tobias", "" ], [ "Röhrl", "Klara", "" ], [ "von Gaudecker", "Hans-Martin", "" ] ]
In order to slow the spread of the CoViD-19 pandemic, governments around the world have enacted a wide set of policies limiting the transmission of the disease. Initially, these focused on non-pharmaceutical interventions; more recently, vaccinations and large-scale rapid testing have started to play a major role. The objective of this study is to explain the quantitative effects of these policies on determining the course of the pandemic, allowing for factors like seasonality or virus strains with different transmission profiles. To do so, the study develops an agent-based simulation model, which is estimated using data for the second and the third wave of the CoViD-19 pandemic in Germany. The paper finds that during a period where vaccination rates rose from 5% to 40%, large-scale rapid testing had the largest effect on reducing infection numbers. Frequent large-scale rapid testing should remain part of strategies to contain CoViD-19; it can substitute for many non-pharmaceutical interventions that come at a much larger cost to individuals, society, and the economy.
1103.3827
Giovanni Sena
Giovanni Sena, Zak Frentz, Kenneth D. Birnbaum and Stanislas Leibler
Quantitation of Cellular Dynamics in Growing Arabidopsis Roots with Light Sheet Microscopy
* The first two authors contributed equally to this work
PLoS ONE 6(6): e21303 (2011)
10.1371/journal.pone.0021303
null
q-bio.TO physics.bio-ph q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To understand dynamic developmental processes, living tissues must be imaged frequently and for extended periods of time. Root development is extensively studied at cellular resolution to understand basic mechanisms underlying pattern formation and maintenance in plants. Unfortunately, ensuring continuous specimen access, while preserving physiological conditions and preventing photo-damage, poses major barriers to measurements of cellular dynamics in indeterminately growing organs such as plant roots. We present a system that integrates optical sectioning through light sheet fluorescence microscopy with hydroponic culture that enables us to image at cellular resolution a vertically growing Arabidopsis root every few minutes and for several consecutive days. We describe novel automated routines to track the root tip as it grows, track cellular nuclei and identify cell divisions. We demonstrate the system's capabilities by collecting data on divisions and nuclear dynamics.
[ { "created": "Sun, 20 Mar 2011 03:18:03 GMT", "version": "v1" }, { "created": "Mon, 13 Jun 2011 20:33:57 GMT", "version": "v2" } ]
2015-03-13
[ [ "Sena", "Giovanni", "" ], [ "Frentz", "Zak", "" ], [ "Birnbaum", "Kenneth D.", "" ], [ "Leibler", "Stanislas", "" ] ]
To understand dynamic developmental processes, living tissues must be imaged frequently and for extended periods of time. Root development is extensively studied at cellular resolution to understand basic mechanisms underlying pattern formation and maintenance in plants. Unfortunately, ensuring continuous specimen access, while preserving physiological conditions and preventing photo-damage, poses major barriers to measurements of cellular dynamics in indeterminately growing organs such as plant roots. We present a system that integrates optical sectioning through light sheet fluorescence microscopy with hydroponic culture that enables us to image at cellular resolution a vertically growing Arabidopsis root every few minutes and for several consecutive days. We describe novel automated routines to track the root tip as it grows, track cellular nuclei and identify cell divisions. We demonstrate the system's capabilities by collecting data on divisions and nuclear dynamics.
1606.04698
Andrea Tacchetti
Andrea Tacchetti and Leyla Isik and Tomaso Poggio
Invariant recognition drives neural representations of action sequences
null
null
10.1371/journal.pcbi.1005859
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recognizing the actions of others from visual stimuli is a crucial aspect of human visual perception that allows individuals to respond to social cues. Humans are able to identify similar behaviors and discriminate between distinct actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding motion perception at the neural level have not always translated in precise accounts of the computational principles underlying what representation our visual cortex evolved or learned to compute. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, CNNs, that achieve human level performance in complex discriminative tasks. Within this class of models, architectures that better support invariant object recognition also produce image representations that match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations remains unknown. Here we show that spatiotemporal CNNs appropriately categorize video stimuli into actions, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed by human visual cortex.
[ { "created": "Wed, 15 Jun 2016 09:40:46 GMT", "version": "v1" }, { "created": "Tue, 29 Nov 2016 21:20:44 GMT", "version": "v2" }, { "created": "Thu, 20 Apr 2017 19:12:55 GMT", "version": "v3" } ]
2018-02-07
[ [ "Tacchetti", "Andrea", "" ], [ "Isik", "Leyla", "" ], [ "Poggio", "Tomaso", "" ] ]
Recognizing the actions of others from visual stimuli is a crucial aspect of human visual perception that allows individuals to respond to social cues. Humans are able to identify similar behaviors and discriminate between distinct actions despite transformations, like changes in viewpoint or actor, that substantially alter the visual appearance of a scene. This ability to generalize across complex transformations is a hallmark of human visual intelligence. Advances in understanding motion perception at the neural level have not always translated in precise accounts of the computational principles underlying what representation our visual cortex evolved or learned to compute. Here we test the hypothesis that invariant action discrimination might fill this gap. Recently, the study of artificial systems for static object perception has produced models, CNNs, that achieve human level performance in complex discriminative tasks. Within this class of models, architectures that better support invariant object recognition also produce image representations that match those implied by human and primate neural data. However, whether these models produce representations of action sequences that support recognition across complex transformations and closely follow neural representations remains unknown. Here we show that spatiotemporal CNNs appropriately categorize video stimuli into actions, and that deliberate model modifications that improve performance on an invariant action recognition task lead to data representations that better match human neural recordings. Our results support our hypothesis that performance on invariant discrimination dictates the neural representations of actions computed by human visual cortex.
1705.06481
Carlo Nicolini
C\'ecile Bordier, Carlo Nicolini and Angelo Bifone
Graph analysis and modularity of brain functional connectivity networks: searching for the optimal threshold
15 pages, 7 figures
null
null
null
q-bio.NC physics.data-an physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuroimaging data can be represented as networks of nodes and edges that capture the topological organization of the brain connectivity. Graph theory provides a general and powerful framework to study these networks and their structure at various scales. By way of example, community detection methods have been widely applied to investigate the modular structure of many natural networks, including brain functional connectivity networks. Sparsification procedures are often applied to remove the weakest edges, which are the most affected by experimental noise, and to reduce the density of the graph, thus making it theoretically and computationally more tractable. However, weak links may also contain significant structural information, and procedures to identify the optimal tradeoff are the subject of active research. Here, we explore the use of percolation analysis, a method grounded in statistical physics, to identify the optimal sparsification threshold for community detection in brain connectivity networks. By using synthetic networks endowed with a ground-truth modular structure and realistic topological features typical of human brain functional connectivity networks, we show that percolation analysis can be applied to identify the optimal sparsification threshold that maximizes information on the networks' community structure. We validate this approach using three different community detection methods widely applied to the analysis of brain connectivity networks: Newman's modularity, InfoMap and Asymptotical Surprise. Importantly, we test the effects of noise and data variability, which are critical factors to determine the optimal threshold. This data-driven method should prove particularly useful in the analysis of the community structure of brain networks in populations characterized by different connectivity strengths, such as patients and controls.
[ { "created": "Thu, 18 May 2017 09:05:09 GMT", "version": "v1" } ]
2017-05-19
[ [ "Bordier", "Cécile", "" ], [ "Nicolini", "Carlo", "" ], [ "Bifone", "Angelo", "" ] ]
Neuroimaging data can be represented as networks of nodes and edges that capture the topological organization of the brain connectivity. Graph theory provides a general and powerful framework to study these networks and their structure at various scales. By way of example, community detection methods have been widely applied to investigate the modular structure of many natural networks, including brain functional connectivity networks. Sparsification procedures are often applied to remove the weakest edges, which are the most affected by experimental noise, and to reduce the density of the graph, thus making it theoretically and computationally more tractable. However, weak links may also contain significant structural information, and procedures to identify the optimal tradeoff are the subject of active research. Here, we explore the use of percolation analysis, a method grounded in statistical physics, to identify the optimal sparsification threshold for community detection in brain connectivity networks. By using synthetic networks endowed with a ground-truth modular structure and realistic topological features typical of human brain functional connectivity networks, we show that percolation analysis can be applied to identify the optimal sparsification threshold that maximizes information on the networks' community structure. We validate this approach using three different community detection methods widely applied to the analysis of brain connectivity networks: Newman's modularity, InfoMap and Asymptotical Surprise. Importantly, we test the effects of noise and data variability, which are critical factors to determine the optimal threshold. This data-driven method should prove particularly useful in the analysis of the community structure of brain networks in populations characterized by different connectivity strengths, such as patients and controls.
2305.02369
Davide Coluzzi
Davide Coluzzi, Giuseppe Baselli
Diffuse and Localized Functional Dysconnectivity in Schizophrenia: a Bootstrapped Top-Down Approach
28 pages, 8 figures
Fundamenta Informaticae, Volume 189, Issue 2: Tomography and Applications 2022 (September 21, 2023) fi:11275
10.3233/FI-222157
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Schizophrenia (SZ) is a brain disorder leading to detachment of the mind's normally integrated processes. Hence, the exploration of symptoms in relation to functional connectivity (FC) is of great relevance in the field. FC can be investigated on different levels, going from global features to single edges between regions, revealing diffuse and localized dysconnection patterns. In this context, SZ is characterized by a diverse global integration with reduced connectivity in specific areas of the Default Mode Network (DMN). However, the assessment of FC presents various sources of uncertainty. This study proposes a multi-level approach for more robust group comparison. FC between 74 AAL brain areas of 15 healthy controls (HC) and 12 SZ subjects was used. Multi-level analyses and graph topological index evaluation were carried out by the previously published SPIDER-NET tool. Robustness was augmented by bootstrapped (BOOT) data and stability was evaluated by removing one (RST1) or two subjects (RST2). The DMN subgraph was evaluated, together with overall local indexes and connection weights, to enhance common activations/deactivations. At a global level, expected trends were found. The robustness assessment tests highlighted more stable results for BOOT compared to direct data testing. Conversely, significant results were found in the analyses at lower levels. The DMN highlighted reduced connectivity and strength as well as increased deactivation in the SZ group. At the local level, 13 areas were found to be significantly different ($p<0.05$), highlighting a greater divergence in the frontal lobe. These results were confirmed by analyzing the negative edges, suggesting inverted connectivity between prefronto-temporal areas. In conclusion, multi-level analysis supported by BOOT is highly recommended, especially when diffuse and localized dysconnections must be investigated in limited samples.
[ { "created": "Wed, 3 May 2023 18:14:26 GMT", "version": "v1" }, { "created": "Tue, 18 Jul 2023 09:45:04 GMT", "version": "v2" } ]
2024-02-14
[ [ "Coluzzi", "Davide", "" ], [ "Baselli", "Giuseppe", "" ] ]
Schizophrenia (SZ) is a brain disorder leading to detachment of the mind's normally integrated processes. Hence, the exploration of symptoms in relation to functional connectivity (FC) is of great relevance in the field. FC can be investigated on different levels, going from global features to single edges between regions, revealing diffuse and localized dysconnection patterns. In this context, SZ is characterized by a diverse global integration with reduced connectivity in specific areas of the Default Mode Network (DMN). However, the assessment of FC presents various sources of uncertainty. This study proposes a multi-level approach for more robust group comparison. FC between 74 AAL brain areas of 15 healthy controls (HC) and 12 SZ subjects was used. Multi-level analyses and graph topological index evaluation were carried out by the previously published SPIDER-NET tool. Robustness was augmented by bootstrapped (BOOT) data and stability was evaluated by removing one (RST1) or two subjects (RST2). The DMN subgraph was evaluated, together with overall local indexes and connection weights, to enhance common activations/deactivations. At a global level, expected trends were found. The robustness assessment tests highlighted more stable results for BOOT compared to direct data testing. Conversely, significant results were found in the analyses at lower levels. The DMN highlighted reduced connectivity and strength as well as increased deactivation in the SZ group. At the local level, 13 areas were found to be significantly different ($p<0.05$), highlighting a greater divergence in the frontal lobe. These results were confirmed by analyzing the negative edges, suggesting inverted connectivity between prefronto-temporal areas. In conclusion, multi-level analysis supported by BOOT is highly recommended, especially when diffuse and localized dysconnections must be investigated in limited samples.
1804.10090
Siavash Ghavami
Siavash Ghavami, Vahid Rahmati, Farshad Lahouti, Lars Schwabe
Neuronal Synchronization Can Control the Energy Efficiency of Inter-Spike Interval Coding
47 pages, 14 figures, 2 Tables
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The role of synchronous firing in sensory coding and cognition remains controversial. While studies, focusing on its mechanistic consequences in attentional tasks, suggest that synchronization dynamically boosts sensory processing, others failed to find significant synchronization levels in such tasks. We attempt to understand both lines of evidence within a coherent theoretical framework. We conceptualize synchronization as an independent control parameter to study how the postsynaptic neuron transmits the average firing activity of a presynaptic population, in the presence of synchronization. We apply the Berger-Levy theory of energy efficient information transmission to interpret simulations of a Hodgkin-Huxley-type postsynaptic neuron model, where we varied the firing rate and synchronization level in the presynaptic population independently. We find that for a fixed presynaptic firing rate the simulated postsynaptic interspike interval distribution depends on the synchronization level and is well-described by a generalized extreme value distribution. For synchronization levels of 15% to 50%, we find that the optimal distribution of presynaptic firing rate, maximizing the mutual information per unit cost, is maximized at ~30% synchronization level. These results suggest that the statistics and energy efficiency of neuronal communication channels, through which the input rate is communicated, can be dynamically adapted by the synchronization level.
[ { "created": "Thu, 26 Apr 2018 14:35:39 GMT", "version": "v1" }, { "created": "Tue, 8 May 2018 01:12:38 GMT", "version": "v2" }, { "created": "Sat, 4 May 2019 13:43:24 GMT", "version": "v3" } ]
2019-05-07
[ [ "Ghavami", "Siavash", "" ], [ "Rahmati", "Vahid", "" ], [ "Lahouti", "Farshad", "" ], [ "Schwabe", "Lars", "" ] ]
The role of synchronous firing in sensory coding and cognition remains controversial. While studies, focusing on its mechanistic consequences in attentional tasks, suggest that synchronization dynamically boosts sensory processing, others failed to find significant synchronization levels in such tasks. We attempt to understand both lines of evidence within a coherent theoretical framework. We conceptualize synchronization as an independent control parameter to study how the postsynaptic neuron transmits the average firing activity of a presynaptic population, in the presence of synchronization. We apply the Berger-Levy theory of energy efficient information transmission to interpret simulations of a Hodgkin-Huxley-type postsynaptic neuron model, where we varied the firing rate and synchronization level in the presynaptic population independently. We find that for a fixed presynaptic firing rate the simulated postsynaptic interspike interval distribution depends on the synchronization level and is well-described by a generalized extreme value distribution. For synchronization levels of 15% to 50%, we find that the optimal distribution of presynaptic firing rate, maximizing the mutual information per unit cost, is maximized at ~30% synchronization level. These results suggest that the statistics and energy efficiency of neuronal communication channels, through which the input rate is communicated, can be dynamically adapted by the synchronization level.
1208.1853
J\"urgen Zanghellini
Christian Jungreuthmayer, David E. Ruckerbauer, and J\"urgen Zanghellini
Utilizing gene regulatory information to speed up the calculation of elementary flux modes
7 pages, 2 figures, 13 tables; prepared for submission to Bioinformatics
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the significant progress made in recent years, the computation of the complete set of elementary flux modes of large or even genome-scale metabolic networks is still impossible. We introduce a novel approach to speed up the calculation of elementary flux modes by including transcriptional regulatory information into the analysis of metabolic networks. Taking into account gene regulation dramatically reduces the solution space and allows the presented algorithm to constantly eliminate biologically infeasible modes at an early stage of the computation procedure. Thereby, the computational costs, such as runtime, memory usage and disk space are considerably reduced. Consequently, using the presented mode elimination algorithm pushes the size of metabolic networks that can be studied by elementary flux modes to new limits.
[ { "created": "Thu, 9 Aug 2012 09:29:12 GMT", "version": "v1" } ]
2012-08-10
[ [ "Jungreuthmayer", "Christian", "" ], [ "Ruckerbauer", "David E.", "" ], [ "Zanghellini", "Jürgen", "" ] ]
Despite the significant progress made in recent years, the computation of the complete set of elementary flux modes of large or even genome-scale metabolic networks is still impossible. We introduce a novel approach to speed up the calculation of elementary flux modes by including transcriptional regulatory information into the analysis of metabolic networks. Taking into account gene regulation dramatically reduces the solution space and allows the presented algorithm to constantly eliminate biologically infeasible modes at an early stage of the computation procedure. Thereby, the computational costs, such as runtime, memory usage and disk space are considerably reduced. Consequently, using the presented mode elimination algorithm pushes the size of metabolic networks that can be studied by elementary flux modes to new limits.
1101.1358
Chandrasekar Kuppusamy
V. K. Chandrasekar, Jane H. Sheeba and M. Lakshmanan
Mass synchronization: Occurrence and its control with possible applications to brain dynamics
null
Chaos 20, 045106 (2010)
null
null
q-bio.NC nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Occurrence of strong or mass synchronization of a large number of neuronal populations in the brain characterizes its pathological states. In order to establish an understanding of the mechanism underlying such pathological synchronization we present a model of coupled populations of phase oscillators representing the interacting neuronal populations. Through numerical analysis, we discuss the occurrence of mass synchronization in the model, where a source population which gets strongly synchronized drives the target populations onto mass synchronization. We hypothesize and identify a possible cause for the occurrence of such a synchronization, which is so far unknown: Pathological synchronization is caused not just because of the increase in the strength of coupling between the populations but also because of the strength of the strong synchronization of the drive population. We propose a demand-controlled method to control this pathological synchronization by providing a delayed feedback where the strength and frequency of the synchronization determines the strength and the time delay of the feedback. We provide an analytical explanation for the occurrence of pathological synchronization and its control in the thermodynamic limit.
[ { "created": "Fri, 7 Jan 2011 06:00:17 GMT", "version": "v1" } ]
2011-01-10
[ [ "Chandrasekar", "V. K.", "" ], [ "Sheeba", "Jane H.", "" ], [ "Lakshmanan", "M.", "" ] ]
Occurrence of strong or mass synchronization of a large number of neuronal populations in the brain characterizes its pathological states. In order to establish an understanding of the mechanism underlying such pathological synchronization we present a model of coupled populations of phase oscillators representing the interacting neuronal populations. Through numerical analysis, we discuss the occurrence of mass synchronization in the model, where a source population which gets strongly synchronized drives the target populations onto mass synchronization. We hypothesize and identify a possible cause for the occurrence of such a synchronization, which is so far unknown: Pathological synchronization is caused not just because of the increase in the strength of coupling between the populations but also because of the strength of the strong synchronization of the drive population. We propose a demand-controlled method to control this pathological synchronization by providing a delayed feedback where the strength and frequency of the synchronization determines the strength and the time delay of the feedback. We provide an analytical explanation for the occurrence of pathological synchronization and its control in the thermodynamic limit.
1109.6589
Ximena Celeste Abrevaya
Ximena C. Abrevaya, N.J. Saco, P.J.D. Mauas, E. Corton
Archaea-based Microbial Fuel Cell Operating at High Ionic Strength Conditions
Draft version. Accepted for publication in EXTREMOPHILES. "The final publication is available at springerlink.com"
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work two archaea microorganisms (Haloferax volcanii and Natrialba magadii) used as biocatalyst at a microbial fuel cell (MFC) anode were evaluated. Both archaea are able to grow at high salt concentrations. By increasing the media conductivity, the internal resistance was diminished, improving the MFCs performance. Without any added redox mediator, maximum power (Pmax) and current at Pmax were 11.87 / 4.57 / 0.12 {\mu}W cm-2 and 49.67 / 22.03 / 0.59 {\mu}A cm-2 for H. volcanii, N. magadii and E. coli, respectively. When neutral red was used as redox mediator, Pmax was 50.98 and 5.39 {\mu}W cm-2 for H. volcanii and N. magadii respectively. In this paper an archaea MFC is described and compared with other MFC systems; the high salt concentration assayed here, comparable with that used in Pt-catalyzed alkaline hydrogen fuel cells will open new options when MFC scaling-up is the objective, necessary for practical applications.
[ { "created": "Thu, 29 Sep 2011 17:02:17 GMT", "version": "v1" } ]
2011-09-30
[ [ "Abrevaya", "Ximena C.", "" ], [ "Saco", "N. J.", "" ], [ "Mauas", "P. J. D.", "" ], [ "Corton", "E.", "" ] ]
In this work two archaea microorganisms (Haloferax volcanii and Natrialba magadii) used as biocatalyst at a microbial fuel cell (MFC) anode were evaluated. Both archaea are able to grow at high salt concentrations. By increasing the media conductivity, the internal resistance was diminished, improving the MFCs performance. Without any added redox mediator, maximum power (Pmax) and current at Pmax were 11.87 / 4.57 / 0.12 {\mu}W cm-2 and 49.67 / 22.03 / 0.59 {\mu}A cm-2 for H. volcanii, N. magadii and E. coli, respectively. When neutral red was used as redox mediator, Pmax was 50.98 and 5.39 {\mu}W cm-2 for H. volcanii and N. magadii respectively. In this paper an archaea MFC is described and compared with other MFC systems; the high salt concentration assayed here, comparable with that used in Pt-catalyzed alkaline hydrogen fuel cells will open new options when MFC scaling-up is the objective, necessary for practical applications.
2110.14116
Xianghao Zhan
Xianghao Zhan, Yuzhe Liu, Nicholas J. Cecchi, Olivier Gevaert, Michael M. Zeineh, Gerald A. Grant, David B. Camarillo
Data-driven decomposition of brain dynamics with principal component analysis in different types of head impacts
null
null
10.1109/TBME.2022.3163230
null
q-bio.QM cs.LG eess.SP q-bio.TO
http://creativecommons.org/licenses/by/4.0/
Strain and strain rate are effective traumatic brain injury predictors. Kinematics-based models estimating these metrics suffer from significantly different distributions of both kinematics and the injury metrics across head impact types. To address this, previous studies focus on the kinematics but not the injury metrics. We have previously shown the kinematic features vary largely across head impact types, resulting in different patterns of brain deformation. This study analyzes the spatial distribution of brain deformation and applies principal component analysis (PCA) to extract the representative patterns of injury metrics (maximum principal strain (MPS), MPS rate (MPSR) and MPSXMPSR) in four impact types (simulation, football, mixed martial arts and car crashes). We apply PCA to decompose the patterns of the injury metrics for all impacts in each impact type, and investigate the distributions among brain regions using the first principal component (PC1). Furthermore, we developed a deep learning head model (DLHM) to predict PC1 and then inverse-transform to predict for all brain elements. PC1 explained >80% variance on the datasets. Based on PC1 coefficients, the corpus callosum and midbrain exhibit high variance on all datasets. We found MPSXMPSR to be the most sensitive metric, on which the top 5% of severe impacts further deviate from the mean and there is a higher variance among the severe impacts. Finally, the DLHM reached mean absolute errors of <0.018 for MPS, <3.7 (1/s) for MPSR and <1.1 (1/s) for MPSXMPSR, much smaller than the injury thresholds. The brain injury metric in a dataset can be decomposed into mean components and PC1 with high explained variance. The brain dynamics decomposition enables better interpretation of the patterns in brain injury metrics and the sensitivity of brain injury metrics across impact types. The decomposition also reduces the dimensionality of DLHM.
[ { "created": "Wed, 27 Oct 2021 01:38:01 GMT", "version": "v1" } ]
2022-12-07
[ [ "Zhan", "Xianghao", "" ], [ "Liu", "Yuzhe", "" ], [ "Cecchi", "Nicholas J.", "" ], [ "Gevaert", "Olivier", "" ], [ "Zeineh", "Michael M.", "" ], [ "Grant", "Gerald A.", "" ], [ "Camarillo", "David B.", "" ] ]
Strain and strain rate are effective traumatic brain injury predictors. Kinematics-based models estimating these metrics suffer from significantly different distributions of both kinematics and the injury metrics across head impact types. To address this, previous studies focus on the kinematics but not the injury metrics. We have previously shown the kinematic features vary largely across head impact types, resulting in different patterns of brain deformation. This study analyzes the spatial distribution of brain deformation and applies principal component analysis (PCA) to extract the representative patterns of injury metrics (maximum principal strain (MPS), MPS rate (MPSR) and MPSXMPSR) in four impact types (simulation, football, mixed martial arts and car crashes). We apply PCA to decompose the patterns of the injury metrics for all impacts in each impact type, and investigate the distributions among brain regions using the first principal component (PC1). Furthermore, we developed a deep learning head model (DLHM) to predict PC1 and then inverse-transform to predict for all brain elements. PC1 explained >80% variance on the datasets. Based on PC1 coefficients, the corpus callosum and midbrain exhibit high variance on all datasets. We found MPSXMPSR to be the most sensitive metric, on which the top 5% of severe impacts further deviate from the mean and there is a higher variance among the severe impacts. Finally, the DLHM reached mean absolute errors of <0.018 for MPS, <3.7 (1/s) for MPSR and <1.1 (1/s) for MPSXMPSR, much smaller than the injury thresholds. The brain injury metric in a dataset can be decomposed into mean components and PC1 with high explained variance. The brain dynamics decomposition enables better interpretation of the patterns in brain injury metrics and the sensitivity of brain injury metrics across impact types. The decomposition also reduces the dimensionality of DLHM.
2104.10081
Matthew Andres Moreno
Matthew Andres Moreno, Charles Ofria
Exploring Evolved Multicellular Life Histories in an Open-Ended Digital Evolution System
null
null
10.3389/fevo.2022.750837
null
q-bio.PE cs.NE
http://creativecommons.org/licenses/by/4.0/
Evolutionary transitions occur when previously-independent replicating entities unite to form more complex individuals. Such transitions have profoundly shaped natural evolutionary history and occur in two forms: fraternal transitions involve lower-level entities that are kin (e.g., transitions to multicellularity or to eusocial colonies), while egalitarian transitions involve unrelated individuals (e.g., the origins of mitochondria). The necessary conditions and evolutionary mechanisms for these transitions to arise continue to be fruitful targets of scientific interest. Here, we examine a range of fraternal transitions in populations of open-ended self-replicating computer programs. These digital cells were allowed to form and replicate kin groups by selectively adjoining or expelling daughter cells. The capability to recognize kin-group membership enabled preferential communication and cooperation between cells. We repeatedly observed group-level traits that are characteristic of a fraternal transition. These included reproductive division of labor, resource sharing within kin groups, resource investment in offspring groups, asymmetrical behaviors mediated by messaging, morphological patterning, and adaptive apoptosis. We report eight case studies from replicates where transitions occurred and explore the diverse range of adaptive evolved multicellular strategies.
[ { "created": "Tue, 20 Apr 2021 16:03:09 GMT", "version": "v1" } ]
2023-10-10
[ [ "Moreno", "Matthew Andres", "" ], [ "Ofria", "Charles", "" ] ]
Evolutionary transitions occur when previously-independent replicating entities unite to form more complex individuals. Such transitions have profoundly shaped natural evolutionary history and occur in two forms: fraternal transitions involve lower-level entities that are kin (e.g., transitions to multicellularity or to eusocial colonies), while egalitarian transitions involve unrelated individuals (e.g., the origins of mitochondria). The necessary conditions and evolutionary mechanisms for these transitions to arise continue to be fruitful targets of scientific interest. Here, we examine a range of fraternal transitions in populations of open-ended self-replicating computer programs. These digital cells were allowed to form and replicate kin groups by selectively adjoining or expelling daughter cells. The capability to recognize kin-group membership enabled preferential communication and cooperation between cells. We repeatedly observed group-level traits that are characteristic of a fraternal transition. These included reproductive division of labor, resource sharing within kin groups, resource investment in offspring groups, asymmetrical behaviors mediated by messaging, morphological patterning, and adaptive apoptosis. We report eight case studies from replicates where transitions occurred and explore the diverse range of adaptive evolved multicellular strategies.
2401.10348
Gang Qu
Gang Qu, Anton Orlichenko, Junqi Wang, Gemeng Zhang, Li Xiao, Aiying Zhang, Zhengming Ding, Yu-Ping Wang
Exploring General Intelligence via Gated Graph Transformer in Functional Connectivity Studies
null
null
null
null
q-bio.NC cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Functional connectivity (FC) as derived from fMRI has emerged as a pivotal tool in elucidating the intricacies of various psychiatric disorders and delineating the neural pathways that underpin cognitive and behavioral dynamics inherent to the human brain. While Graph Neural Networks (GNNs) offer a structured approach to represent neuroimaging data, they are limited by their need for a predefined graph structure to depict associations between brain regions, a detail not solely provided by FCs. To bridge this gap, we introduce the Gated Graph Transformer (GGT) framework, designed to predict cognitive metrics based on FCs. Empirical validation on the Philadelphia Neurodevelopmental Cohort (PNC) underscores the superior predictive prowess of our model, further accentuating its potential in identifying pivotal neural connectivities that correlate with human cognitive processes.
[ { "created": "Thu, 18 Jan 2024 19:28:26 GMT", "version": "v1" } ]
2024-01-22
[ [ "Qu", "Gang", "" ], [ "Orlichenko", "Anton", "" ], [ "Wang", "Junqi", "" ], [ "Zhang", "Gemeng", "" ], [ "Xiao", "Li", "" ], [ "Zhang", "Aiying", "" ], [ "Ding", "Zhengming", "" ], [ "Wang", "Yu-Ping", "" ] ]
Functional connectivity (FC) as derived from fMRI has emerged as a pivotal tool in elucidating the intricacies of various psychiatric disorders and delineating the neural pathways that underpin cognitive and behavioral dynamics inherent to the human brain. While Graph Neural Networks (GNNs) offer a structured approach to represent neuroimaging data, they are limited by their need for a predefined graph structure to depict associations between brain regions, a detail not solely provided by FCs. To bridge this gap, we introduce the Gated Graph Transformer (GGT) framework, designed to predict cognitive metrics based on FCs. Empirical validation on the Philadelphia Neurodevelopmental Cohort (PNC) underscores the superior predictive prowess of our model, further accentuating its potential in identifying pivotal neural connectivities that correlate with human cognitive processes.
0909.2417
Bruno Goncalves
Duygu Balcan, Hao Hu, Bruno Goncalves, Paolo Bajardi, Chiara Poletto, Jose J Ramasco, Daniela Paolotti, Nicola Perra, Michele Tizzoni, Wouter Van den Broeck, Vittoria Colizza and Alessandro Vespignani
Seasonal transmission potential and activity peaks of the new influenza A(H1N1): a Monte Carlo likelihood analysis based on human mobility
Paper: 29 Pages, 3 Figures and 5 Tables. Supplementary Information: 29 Pages, 5 Figures and 7 Tables. Print version: http://www.biomedcentral.com/1741-7015/7/45
BMC Medicine 2009, 7:45
10.1186/1741-7015-7-45
null
q-bio.PE cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
On 11 June the World Health Organization officially raised the phase of pandemic alert (with regard to the new H1N1 influenza strain) to level 6. We use a global structured metapopulation model integrating mobility and transportation data worldwide in order to estimate the transmission potential and the relevant model parameters, using data on the chronology of the 2009 novel influenza A(H1N1). The method is based on the maximum likelihood analysis of the arrival time distribution generated by the model in 12 countries seeded by Mexico by using 1M computationally simulated epidemics. An extended chronology including 93 countries worldwide seeded before 18 June was used to ascertain the seasonality effects. We found the best estimate R0 = 1.75 (95% CI 1.64 to 1.88) for the basic reproductive number. Correlation analysis allows the selection of the most probable seasonal behavior based on the observed pattern, leading to the identification of plausible scenarios for the future unfolding of the pandemic and the estimate of pandemic activity peaks in the different hemispheres. We provide estimates for the number of hospitalizations and the attack rate for the next wave as well as an extensive sensitivity analysis on the disease parameter values. We also studied the effect of systematic therapeutic use of antiviral drugs on the epidemic timeline. The analysis shows the potential for an early epidemic peak occurring in October/November in the Northern hemisphere, likely before large-scale vaccination campaigns could be carried out. We suggest that the planning of additional mitigation policies such as systematic antiviral treatments might be the key to delay the activity peak in order to restore the effectiveness of the vaccination programs.
[ { "created": "Mon, 14 Sep 2009 15:16:34 GMT", "version": "v1" } ]
2009-09-15
[ [ "Balcan", "Duygu", "" ], [ "Hu", "Hao", "" ], [ "Goncalves", "Bruno", "" ], [ "Bajardi", "Paolo", "" ], [ "Poletto", "Chiara", "" ], [ "Ramasco", "Jose J", "" ], [ "Paolotti", "Daniela", "" ], [ "Perra", "Nicola", "" ], [ "Tizzoni", "Michele", "" ], [ "Broeck", "Wouter Van den", "" ], [ "Colizza", "Vittoria", "" ], [ "Vespignani", "Alessandro", "" ] ]
On 11 June the World Health Organization officially raised the phase of pandemic alert (with regard to the new H1N1 influenza strain) to level 6. We use a global structured metapopulation model integrating mobility and transportation data worldwide in order to estimate the transmission potential and the relevant model parameters. We used data on the chronology of the 2009 novel influenza A(H1N1). The method is based on the maximum likelihood analysis of the arrival time distribution generated by the model in 12 countries seeded by Mexico by using 1M computationally simulated epidemics. An extended chronology including 93 countries worldwide seeded before 18 June was used to ascertain the seasonality effects. We found the best estimate R0 = 1.75 (95% CI 1.64 to 1.88) for the basic reproductive number. Correlation analysis allows the selection of the most probable seasonal behavior based on the observed pattern, leading to the identification of plausible scenarios for the future unfolding of the pandemic and the estimate of pandemic activity peaks in the different hemispheres. We provide estimates for the number of hospitalizations and the attack rate for the next wave as well as an extensive sensitivity analysis on the disease parameter values. We also studied the effect of systematic therapeutic use of antiviral drugs on the epidemic timeline. The analysis shows the potential for an early epidemic peak occurring in October/November in the Northern hemisphere, likely before large-scale vaccination campaigns could be carried out. We suggest that the planning of additional mitigation policies such as systematic antiviral treatments might be the key to delay the activity peak in order to restore the effectiveness of the vaccination programs.
2104.07257
Jizhao Liu
Jizhao Liu, Jing Lian, J C Sprott, Qidong Liu, Yide Ma
The Butterfly Effect in Primary Visual Cortex
This manuscript has published in IEEE Transactions on Computers, 2022
null
10.1109/TC.2022.3173080
null
q-bio.NC cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Exploring and establishing artificial neural networks with electrophysiological characteristics and high computational efficiency is a popular topic in the field of computer vision. Inspired by the working mechanism of primary visual cortex, pulse-coupled neural network (PCNN) can exhibit the characteristics of synchronous oscillation, refractory period, and exponential decay. However, electrophysiological evidence shows that the neurons exhibit highly complex non-linear dynamics when stimulated by external periodic signals. This chaos phenomenon, also known as the "butterfly effect", cannot be explained by any PCNN model. In this work, we analyze the main obstacle preventing PCNN models from imitating real primary visual cortex. We consider neuronal excitation as a stochastic process. We then propose a novel neural network, called continuous-coupled neural network (CCNN). Theoretical analysis indicates that the dynamic behavior of CCNN is distinct from PCNN. Numerical results show that the CCNN model exhibits periodic behavior under DC stimulus, and exhibits chaotic behavior under AC stimulus, which is consistent with the results of real neurons. Furthermore, the image and video processing mechanisms of the CCNN model are analyzed. Experimental results on image segmentation indicate that the CCNN model has better performance than state-of-the-art visual cortex neural network models.
[ { "created": "Thu, 15 Apr 2021 06:04:31 GMT", "version": "v1" }, { "created": "Sat, 23 Jul 2022 04:19:59 GMT", "version": "v2" } ]
2022-07-26
[ [ "Liu", "Jizhao", "" ], [ "Lian", "Jing", "" ], [ "Sprott", "J C", "" ], [ "Liu", "Qidong", "" ], [ "Ma", "Yide", "" ] ]
Exploring and establishing artificial neural networks with electrophysiological characteristics and high computational efficiency is a popular topic in the field of computer vision. Inspired by the working mechanism of primary visual cortex, pulse-coupled neural network (PCNN) can exhibit the characteristics of synchronous oscillation, refractory period, and exponential decay. However, electrophysiological evidence shows that the neurons exhibit highly complex non-linear dynamics when stimulated by external periodic signals. This chaos phenomenon, also known as the "butterfly effect", cannot be explained by any PCNN model. In this work, we analyze the main obstacle preventing PCNN models from imitating real primary visual cortex. We consider neuronal excitation as a stochastic process. We then propose a novel neural network, called continuous-coupled neural network (CCNN). Theoretical analysis indicates that the dynamic behavior of CCNN is distinct from PCNN. Numerical results show that the CCNN model exhibits periodic behavior under DC stimulus, and exhibits chaotic behavior under AC stimulus, which is consistent with the results of real neurons. Furthermore, the image and video processing mechanisms of the CCNN model are analyzed. Experimental results on image segmentation indicate that the CCNN model has better performance than state-of-the-art visual cortex neural network models.
2402.14617
James Higham
James P Higham and David Colquhoun
The affinity-efficacy problem: an essential part of pharmacology education
16 pages, 3 figures
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by-nc-sa/4.0/
A fundamental mistake in receptor theory has led to an enduring misunderstanding of how to estimate the affinity and efficacy of an agonist. These properties are inextricably linked and cannot be easily separated in any case where the binding of a ligand induces a conformation change in its receptor. Consequently, binding curves and concentration-response relationships for receptor agonists have no straightforward interpretation. This problem, the affinity-efficacy problem, remains overlooked and misunderstood despite it being recognised in 1987. To avoid the further propagation of this misunderstanding, we propose that the affinity-efficacy problem should be included in the core curricula for pharmacology undergraduates proposed by the British Pharmacological Society and IUPHAR.
[ { "created": "Wed, 21 Feb 2024 18:39:04 GMT", "version": "v1" } ]
2024-02-23
[ [ "Higham", "James P", "" ], [ "Colquhoun", "David", "" ] ]
A fundamental mistake in receptor theory has led to an enduring misunderstanding of how to estimate the affinity and efficacy of an agonist. These properties are inextricably linked and cannot be easily separated in any case where the binding of a ligand induces a conformation change in its receptor. Consequently, binding curves and concentration-response relationships for receptor agonists have no straightforward interpretation. This problem, the affinity-efficacy problem, remains overlooked and misunderstood despite it being recognised in 1987. To avoid the further propagation of this misunderstanding, we propose that the affinity-efficacy problem should be included in the core curricula for pharmacology undergraduates proposed by the British Pharmacological Society and IUPHAR.
1611.06152
Nuno Nene
Nuno R. Nen\'e, Ville Mustonen, Christopher J. R. Illingworth
Evaluating genetic drift in time-series evolutionary analysis
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Wright-Fisher model is the most popular population model for describing the behaviour of evolutionary systems with a finite population size. Approximations to the model have commonly been used for the analysis of time-resolved genome sequence data, but the model itself has rarely been tested against genomic data. Here, we evaluate the extent to which it can be inferred as the correct model given experimental data. Given genome-wide data from an evolutionary experiment, we validate the Wright-Fisher model as the better model for variance in a finite population in contrast to a Gaussian model of allele frequency propagation. However, we note a range of circumstances under which the Wright-Fisher model cannot be correctly identified. We discuss the potential for more rapid approximations to the Wright-Fisher model.
[ { "created": "Fri, 18 Nov 2016 16:41:32 GMT", "version": "v1" } ]
2016-11-21
[ [ "Nené", "Nuno R.", "" ], [ "Mustonen", "Ville", "" ], [ "Illingworth", "Christopher J. R.", "" ] ]
The Wright-Fisher model is the most popular population model for describing the behaviour of evolutionary systems with a finite population size. Approximations to the model have commonly been used for the analysis of time-resolved genome sequence data, but the model itself has rarely been tested against genomic data. Here, we evaluate the extent to which it can be inferred as the correct model given experimental data. Given genome-wide data from an evolutionary experiment, we validate the Wright-Fisher model as the better model for variance in a finite population in contrast to a Gaussian model of allele frequency propagation. However, we note a range of circumstances under which the Wright-Fisher model cannot be correctly identified. We discuss the potential for more rapid approximations to the Wright-Fisher model.
2405.05390
Johannes Nauta
Johannes Nauta and Manlio De Domenico
Topological conditions drive stability in meta-ecosystems
17 pages, 13 figures, journal submission
null
null
null
q-bio.PE cond-mat.dis-nn
http://creativecommons.org/licenses/by-nc-sa/4.0/
On a global level, ecological communities are being perturbed at an unprecedented rate by human activities and environmental instabilities. Yet, we understand little about what factors facilitate or impede long-term persistence of these communities. While observational studies indicate that increased biodiversity must, somehow, be driving stability, theoretical studies have argued the exact opposite viewpoint instead. This encouraged many researchers to participate in the ongoing diversity-stability debate. Within this context, however, there has been a severe lack of studies that consider spatial features explicitly, even though nearly all habitats are spatially embedded. To this end, we study here the linear stability of meta-ecosystems on networks that describe how discrete patches are connected by dispersal between them. By combining results from random matrix theory and network theory, we are able to show that there are three distinct features that underlie stability: edge density, tendency to triadic closure, and isolation or fragmentation. Our results appear to further indicate that network sparsity does not necessarily reduce stability, and that connections between patches are just as, if not more, important to consider when studying the stability of large ecological systems.
[ { "created": "Wed, 8 May 2024 19:41:24 GMT", "version": "v1" } ]
2024-05-10
[ [ "Nauta", "Johannes", "" ], [ "De Domenico", "Manlio", "" ] ]
On a global level, ecological communities are being perturbed at an unprecedented rate by human activities and environmental instabilities. Yet, we understand little about what factors facilitate or impede long-term persistence of these communities. While observational studies indicate that increased biodiversity must, somehow, be driving stability, theoretical studies have argued the exact opposite viewpoint instead. This encouraged many researchers to participate in the ongoing diversity-stability debate. Within this context, however, there has been a severe lack of studies that consider spatial features explicitly, even though nearly all habitats are spatially embedded. To this end, we study here the linear stability of meta-ecosystems on networks that describe how discrete patches are connected by dispersal between them. By combining results from random matrix theory and network theory, we are able to show that there are three distinct features that underlie stability: edge density, tendency to triadic closure, and isolation or fragmentation. Our results appear to further indicate that network sparsity does not necessarily reduce stability, and that connections between patches are just as, if not more, important to consider when studying the stability of large ecological systems.
2112.15143
Patrycja Kowalek
Patrycja Kowalek, Hanna Loch-Olszewska, {\L}ukasz {\L}aszczuk, Jaros{\l}aw Opa{\l}a, Janusz Szwabi\'nski
Boosting the performance of anomalous diffusion classifiers with the proper choice of features
36 pages, 6 figures
null
10.1088/1751-8121/ac6d2a
null
q-bio.QM physics.bio-ph physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding and identifying different types of single molecules' diffusion that occur in a broad range of systems (including living matter) is extremely important, as it can provide information on the physical and chemical characteristics of particles' surroundings. In recent years, an ever-growing number of methods have been proposed to overcome some of the limitations of the mean-squared displacements approach to tracer diffusion. In March 2020, the Anomalous Diffusion (AnDi) Challenge was launched by a community of international scientists to provide a framework for an objective comparison of the available methods for anomalous diffusion. In this paper, we introduce a feature-based machine learning method developed in response to Task 2 of the challenge, i.e. the classification of different types of diffusion. We discuss two sets of attributes that may be used for the classification of single-particle tracking data. The first one was proposed as our contribution to the AnDi Challenge. The latter is the result of our attempt to improve the performance of the classifier after the deadline of the competition. Extreme gradient boosting was used as the classification model. Although the deep-learning approach constitutes the state-of-the-art technology for data classification in many domains, we deliberately decided to pick this traditional machine learning algorithm due to its superior interpretability. After the extension of the feature set our classifier achieved an accuracy of 0.83, which is comparable with the top methods based on neural networks.
[ { "created": "Thu, 30 Dec 2021 17:42:20 GMT", "version": "v1" }, { "created": "Sun, 19 Jun 2022 20:12:35 GMT", "version": "v2" }, { "created": "Sun, 5 Mar 2023 14:08:08 GMT", "version": "v3" } ]
2023-03-07
[ [ "Kowalek", "Patrycja", "" ], [ "Loch-Olszewska", "Hanna", "" ], [ "Łaszczuk", "Łukasz", "" ], [ "Opała", "Jarosław", "" ], [ "Szwabiński", "Janusz", "" ] ]
Understanding and identifying different types of single molecules' diffusion that occur in a broad range of systems (including living matter) is extremely important, as it can provide information on the physical and chemical characteristics of particles' surroundings. In recent years, an ever-growing number of methods have been proposed to overcome some of the limitations of the mean-squared displacements approach to tracer diffusion. In March 2020, the Anomalous Diffusion (AnDi) Challenge was launched by a community of international scientists to provide a framework for an objective comparison of the available methods for anomalous diffusion. In this paper, we introduce a feature-based machine learning method developed in response to Task 2 of the challenge, i.e. the classification of different types of diffusion. We discuss two sets of attributes that may be used for the classification of single-particle tracking data. The first one was proposed as our contribution to the AnDi Challenge. The latter is the result of our attempt to improve the performance of the classifier after the deadline of the competition. Extreme gradient boosting was used as the classification model. Although the deep-learning approach constitutes the state-of-the-art technology for data classification in many domains, we deliberately decided to pick this traditional machine learning algorithm due to its superior interpretability. After the extension of the feature set our classifier achieved an accuracy of 0.83, which is comparable with the top methods based on neural networks.
1409.3065
Korinna T Allhoff
Korinna T. Allhoff, Eva Marie Weiel, Tobias Rogge and Barbara Drossel
On the interplay of speciation and dispersal: An evolutionary food web model in space
under review at JTB
Journal of Theoretical Biology, Volume 366, 7 February 2015, Pages 46-56
10.1016/j.jtbi.2014.11.006
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce an evolutionary metacommunity of multitrophic food webs on several habitats coupled by migration. In contrast to previous studies that focus either on evolutionary or on spatial aspects, we include both and investigate the interplay between them. Locally, the species emerge, interact and go extinct according to the rules of the well-known evolutionary food web model proposed by Loeuille and Loreau in 2005. Additionally, species are able to migrate between the habitats. With random migration, we are able to reproduce common trends in diversity-dispersal relationships: Regional diversity decreases with increasing migration rates, whereas local diversity can increase in case of a low level of dispersal. Moreover, we find that the total biomasses in the different patches become similar even when species composition remains different. With adaptive migration, we observe species compositions that differ considerably between patches and contain species that are descendant from ancestors on both patches. This result indicates that the combination of spatial aspects and evolutionary processes affects the structure of food webs in different ways than each of them alone.
[ { "created": "Wed, 10 Sep 2014 13:42:07 GMT", "version": "v1" } ]
2018-04-20
[ [ "Allhoff", "Korinna T.", "" ], [ "Weiel", "Eva Marie", "" ], [ "Rogge", "Tobias", "" ], [ "Drossel", "Barbara", "" ] ]
We introduce an evolutionary metacommunity of multitrophic food webs on several habitats coupled by migration. In contrast to previous studies that focus either on evolutionary or on spatial aspects, we include both and investigate the interplay between them. Locally, the species emerge, interact and go extinct according to the rules of the well-known evolutionary food web model proposed by Loeuille and Loreau in 2005. Additionally, species are able to migrate between the habitats. With random migration, we are able to reproduce common trends in diversity-dispersal relationships: Regional diversity decreases with increasing migration rates, whereas local diversity can increase in case of a low level of dispersal. Moreover, we find that the total biomasses in the different patches become similar even when species composition remains different. With adaptive migration, we observe species compositions that differ considerably between patches and contain species that are descendant from ancestors on both patches. This result indicates that the combination of spatial aspects and evolutionary processes affects the structure of food webs in different ways than each of them alone.
1810.08614
Les Hatton
Les Hatton, Gregory Warr
CoHSI III: Long proteins and implications for protein evolution
20 pages, 12 figures, 3 tables, 37 references
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The length distribution of proteins measured in amino acids follows the CoHSI (Conservation of Hartley-Shannon Information) probability distribution. In previous papers we have verified various predictions of this using the Uniprot database but here we explore a novel predicted relationship between the longest proteins and evolutionary time. We demonstrate from both theory and experiment that the longest protein and the total number of proteins are intimately related by Information Theory and we give a simple formula for this. We stress that no evolutionary explanation is necessary; it is an intrinsic property of a CoHSI system. While the CoHSI distribution favors the appearance of proteins with fewer than 750 amino acids (characteristic of most functional proteins or their constituent domains) its intrinsic asymptotic power-law also favors the appearance of unusually long proteins; we predict that there are as yet undiscovered proteins longer than 45,000 amino acids. In so doing, we draw an analogy between the process of protein folding driven by favorable pathways (or funnels) through the energy landscape of protein conformations, and the preferential information pathways through which CoHSI exerts its constraints in discrete systems. Finally, we show that CoHSI predicts the recent appearance in evolutionary time of the longest proteins, specifically in eukaryotes because of their richer unique alphabet of amino acids, and by merging with independent phylogenetic data, we confirm a predicted consistent relationship between the longest proteins and documented and potential undocumented mass extinctions.
[ { "created": "Fri, 19 Oct 2018 15:33:39 GMT", "version": "v1" } ]
2018-10-23
[ [ "Hatton", "Les", "" ], [ "Warr", "Gregory", "" ] ]
The length distribution of proteins measured in amino acids follows the CoHSI (Conservation of Hartley-Shannon Information) probability distribution. In previous papers we have verified various predictions of this using the Uniprot database but here we explore a novel predicted relationship between the longest proteins and evolutionary time. We demonstrate from both theory and experiment that the longest protein and the total number of proteins are intimately related by Information Theory and we give a simple formula for this. We stress that no evolutionary explanation is necessary; it is an intrinsic property of a CoHSI system. While the CoHSI distribution favors the appearance of proteins with fewer than 750 amino acids (characteristic of most functional proteins or their constituent domains) its intrinsic asymptotic power-law also favors the appearance of unusually long proteins; we predict that there are as yet undiscovered proteins longer than 45,000 amino acids. In so doing, we draw an analogy between the process of protein folding driven by favorable pathways (or funnels) through the energy landscape of protein conformations, and the preferential information pathways through which CoHSI exerts its constraints in discrete systems. Finally, we show that CoHSI predicts the recent appearance in evolutionary time of the longest proteins, specifically in eukaryotes because of their richer unique alphabet of amino acids, and by merging with independent phylogenetic data, we confirm a predicted consistent relationship between the longest proteins and documented and potential undocumented mass extinctions.
1009.4474
Sergei Maslov
Sergei Maslov, Sandeep Krishna, Tin Yau Pang, and Kim Sneppen
Toolbox model of evolution of prokaryotic metabolic networks and their regulation
6 pages, 4 figures
PNAS, 106, 9743-9748 (2009)
10.1073/pnas.0903206106
null
q-bio.MN q-bio.GN q-bio.PE q-bio.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been reported that the number of transcription factors encoded in prokaryotic genomes scales approximately quadratically with their total number of genes. We propose a conceptual explanation of this finding and illustrate it using a simple model in which metabolic and regulatory networks of prokaryotes are shaped by horizontal gene transfer of coregulated metabolic pathways. Adapting to a new environmental condition monitored by a new transcription factor (e.g., learning to use another nutrient) involves both acquiring new enzymes and reusing some of the enzymes already encoded in the genome. As the repertoire of enzymes of an organism (its toolbox) grows larger, it can reuse its enzyme tools more often and thus needs to get fewer new ones to master each new task. From this observation, it logically follows that the number of functional tasks and their regulators increases faster than linearly with the total number of genes encoding enzymes. Genomes can also shrink, e.g., because of a loss of a nutrient from the environment, followed by deletion of its regulator and all enzymes that become redundant. We propose several simple models of network evolution elaborating on this toolbox argument and reproducing the empirically observed quadratic scaling. The distribution of lengths of pathway branches in our model agrees with that of the real-life metabolic network of Escherichia coli. Thus, our model provides a qualitative explanation for broad distributions of regulon sizes in prokaryotes.
[ { "created": "Wed, 22 Sep 2010 20:43:17 GMT", "version": "v1" } ]
2010-09-24
[ [ "Maslov", "Sergei", "" ], [ "Krishna", "Sandeep", "" ], [ "Pang", "Tin Yau", "" ], [ "Sneppen", "Kim", "" ] ]
It has been reported that the number of transcription factors encoded in prokaryotic genomes scales approximately quadratically with their total number of genes. We propose a conceptual explanation of this finding and illustrate it using a simple model in which metabolic and regulatory networks of prokaryotes are shaped by horizontal gene transfer of coregulated metabolic pathways. Adapting to a new environmental condition monitored by a new transcription factor (e.g., learning to use another nutrient) involves both acquiring new enzymes and reusing some of the enzymes already encoded in the genome. As the repertoire of enzymes of an organism (its toolbox) grows larger, it can reuse its enzyme tools more often and thus needs to get fewer new ones to master each new task. From this observation, it logically follows that the number of functional tasks and their regulators increases faster than linearly with the total number of genes encoding enzymes. Genomes can also shrink, e.g., because of a loss of a nutrient from the environment, followed by deletion of its regulator and all enzymes that become redundant. We propose several simple models of network evolution elaborating on this toolbox argument and reproducing the empirically observed quadratic scaling. The distribution of lengths of pathway branches in our model agrees with that of the real-life metabolic network of Escherichia coli. Thus, our model provides a qualitative explanation for broad distributions of regulon sizes in prokaryotes.
2406.08322
Erin Craig
Erin Craig, Timothy Keyes, Jolanda Sarno, Maxim Zaslavsky, Garry Nolan, Kara Davis, Trevor Hastie and Robert Tibshirani
MMIL: A novel algorithm for disease associated cell type discovery
Erin Craig and Timothy Keyes contributed equally to this work
null
null
null
q-bio.QM cs.LG stat.ME
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Single-cell datasets often lack individual cell labels, making it challenging to identify cells associated with disease. To address this, we introduce Mixture Modeling for Multiple Instance Learning (MMIL), an expectation maximization method that enables the training and calibration of cell-level classifiers using patient-level labels. Our approach can be used to train e.g. lasso logistic regression models, gradient boosted trees, and neural networks. When applied to clinically-annotated, primary patient samples in Acute Myeloid Leukemia (AML) and Acute Lymphoblastic Leukemia (ALL), our method accurately identifies cancer cells, generalizes across tissues and treatment timepoints, and selects biologically relevant features. In addition, MMIL is capable of incorporating cell labels into model training when they are known, providing a powerful framework for leveraging both labeled and unlabeled data simultaneously. Mixture Modeling for MIL offers a novel approach for cell classification, with significant potential to advance disease understanding and management, especially in scenarios with unknown gold-standard labels and high dimensionality.
[ { "created": "Wed, 12 Jun 2024 15:22:56 GMT", "version": "v1" } ]
2024-06-13
[ [ "Craig", "Erin", "" ], [ "Keyes", "Timothy", "" ], [ "Sarno", "Jolanda", "" ], [ "Zaslavsky", "Maxim", "" ], [ "Nolan", "Garry", "" ], [ "Davis", "Kara", "" ], [ "Hastie", "Trevor", "" ], [ "Tibshirani", "Robert", "" ] ]
Single-cell datasets often lack individual cell labels, making it challenging to identify cells associated with disease. To address this, we introduce Mixture Modeling for Multiple Instance Learning (MMIL), an expectation maximization method that enables the training and calibration of cell-level classifiers using patient-level labels. Our approach can be used to train e.g. lasso logistic regression models, gradient boosted trees, and neural networks. When applied to clinically-annotated, primary patient samples in Acute Myeloid Leukemia (AML) and Acute Lymphoblastic Leukemia (ALL), our method accurately identifies cancer cells, generalizes across tissues and treatment timepoints, and selects biologically relevant features. In addition, MMIL is capable of incorporating cell labels into model training when they are known, providing a powerful framework for leveraging both labeled and unlabeled data simultaneously. Mixture Modeling for MIL offers a novel approach for cell classification, with significant potential to advance disease understanding and management, especially in scenarios with unknown gold-standard labels and high dimensionality.
q-bio/0402028
Manuel E. Rodriguez-Achach
R. Huerta-Quintanilla and M. Rodriguez-Achach
The role of mutation rate in a simple colonization model
Accepted in Physica A
null
10.1016/j.physa.2004.02.057
null
q-bio.PE
null
We study the effect of mutations in a simple model of colonization, based on Monte Carlo simulations. When the population colonizes the whole available habitat, a maximum population density is reached, which depends on the mutation rate. Depending on the values of other parameters, such as selection pressure, fecundity and mobility, there is an optimal value of the mutation rate for which the colonization reaches the highest density. We also investigate the survival probabilities under different conditions and their relation to the mutation rate.
[ { "created": "Thu, 12 Feb 2004 21:02:52 GMT", "version": "v1" } ]
2009-11-10
[ [ "Huerta-Quintanilla", "R.", "" ], [ "Rodriguez-Achach", "M.", "" ] ]
We study the effect of mutations in a simple model of colonization, based on Monte Carlo simulations. When the population colonizes the whole available habitat, a maximum population density is reached, which depends on the mutation rate. Depending on the values of other parameters, such as selection pressure, fecundity and mobility, there is an optimal value of the mutation rate for which the colonization reaches the highest density. We also investigate the survival probabilities under different conditions and their relation to the mutation rate.
1808.01991
Wouter-Jan Rappel
David Vidmar and Wouter-Jan Rappel
Extinction Dynamics of Cardiac Fibrillation
18 pages, 8 figures
Phys. Rev. E 99, 012407 (2019)
10.1103/PhysRevE.99.012407
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During episodes of atrial fibrillation, the heart's electrical activity becomes disorganized and shows fragmenting spiral waves. To systematically address how this pattern terminates using spatially extended simulations exceeds current computational resources. To circumvent this limitation, we treat the number of spiral waves as a stochastic population with a corresponding birth-death equation and use techniques from statistical physics to determine the mean episode duration of atrial fibrillation. We show that this duration can be computed for arbitrary geometries in minimal computational time and that it depends exponentially on tissue size, consistent with the critical mass hypothesis which states that fibrillation requires a minimal organ size. Our approach can result in efficient and accurate predictions of mean episode duration, thus creating a potentially important step towards improved therapeutic interventions for atrial fibrillation.
[ { "created": "Mon, 6 Aug 2018 16:53:49 GMT", "version": "v1" } ]
2019-01-16
[ [ "Vidmar", "David", "" ], [ "Rappel", "Wouter-Jan", "" ] ]
During episodes of atrial fibrillation, the heart's electrical activity becomes disorganized and shows fragmenting spiral waves. To systematically address how this pattern terminates using spatially extended simulations exceeds current computational resources. To circumvent this limitation, we treat the number of spiral waves as a stochastic population with a corresponding birth-death equation and use techniques from statistical physics to determine the mean episode duration of atrial fibrillation. We show that this duration can be computed for arbitrary geometries in minimal computational time and that it depends exponentially on tissue size, consistent with the critical mass hypothesis which states that fibrillation requires a minimal organ size. Our approach can result in efficient and accurate predictions of mean episode duration, thus creating a potentially important step towards improved therapeutic interventions for atrial fibrillation.
1705.00063
Richard Granger
Ashok Chandrashekar, Richard Granger
Derivation of a novel efficient supervised learning algorithm from cortical-subcortical loops
null
Frontiers in Computational Neuroscience, 5 (2012)
10.3389/fncom.2011.00050
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although brain circuits presumably carry out useful perceptual algorithms, few instances of derived biological methods have been found to compete favorably against algorithms that have been engineered for specific applications. We forward a novel analysis of function of cortico-striatal loops, which constitute more than 80% of the human brain, thus likely underlying a broad range of cognitive functions. We describe a family of operations performed by the derived method, including a nonstandard method for supervised classification, which may underlie some forms of cortically-dependent associative learning. The novel supervised classifier is compared against widely-used algorithms for classification, including support vector machines (SVM) and k-nearest neighbor methods, achieving corresponding classification rates --- at a fraction of the time and space costs. This represents an instance of a biologically-derived algorithm comparing favorably against widely used machine learning methods on well-studied tasks.
[ { "created": "Fri, 28 Apr 2017 20:15:45 GMT", "version": "v1" } ]
2017-05-02
[ [ "Chandrashekar", "Ashok", "" ], [ "Granger", "Richard", "" ] ]
Although brain circuits presumably carry out useful perceptual algorithms, few instances of derived biological methods have been found to compete favorably against algorithms that have been engineered for specific applications. We forward a novel analysis of function of cortico-striatal loops, which constitute more than 80% of the human brain, thus likely underlying a broad range of cognitive functions. We describe a family of operations performed by the derived method, including a nonstandard method for supervised classification, which may underlie some forms of cortically-dependent associative learning. The novel supervised classifier is compared against widely-used algorithms for classification, including support vector machines (SVM) and k-nearest neighbor methods, achieving corresponding classification rates --- at a fraction of the time and space costs. This represents an instance of a biologically-derived algorithm comparing favorably against widely used machine learning methods on well-studied tasks.
1808.05284
Matjaz Perc
Abdorreza Goodarzinick, Mohammad D. Niry, Alireza Valizadeh, Matjaz Perc
Robustness of functional networks at criticality against structural defects
7 two-column pages, 8 figures; accepted for publication in Physical Review E
Phys. Rev. E 98, 022312 (2018)
10.1103/PhysRevE.98.022312
null
q-bio.NC cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The robustness of dynamical properties of neuronal networks against structural damages is a central problem in computational and experimental neuroscience. Research has shown that the cortical network of a healthy brain works near a critical state, and moreover, that functional neuronal networks often have scale-free and small-world properties. In this work, we study how the robustness of simple functional networks at criticality is affected by structural defects. In particular, we consider a 2D Ising model at the critical temperature and investigate how its functional network changes with the increasing degree of structural defects. We show that the scale-free and small-world properties of the functional network at criticality are robust against large degrees of structural lesions while the system remains below the percolation limit. Although the Ising model is only a conceptual description of a two-state neuron, our research reveals fundamental robustness properties of functional networks derived from classical statistical mechanics models.
[ { "created": "Wed, 15 Aug 2018 21:10:33 GMT", "version": "v1" } ]
2018-08-17
[ [ "Goodarzinick", "Abdorreza", "" ], [ "Niry", "Mohammad D.", "" ], [ "Valizadeh", "Alireza", "" ], [ "Perc", "Matjaz", "" ] ]
The robustness of dynamical properties of neuronal networks against structural damages is a central problem in computational and experimental neuroscience. Research has shown that the cortical network of a healthy brain works near a critical state, and moreover, that functional neuronal networks often have scale-free and small-world properties. In this work, we study how the robustness of simple functional networks at criticality is affected by structural defects. In particular, we consider a 2D Ising model at the critical temperature and investigate how its functional network changes with the increasing degree of structural defects. We show that the scale-free and small-world properties of the functional network at criticality are robust against large degrees of structural lesions while the system remains below the percolation limit. Although the Ising model is only a conceptual description of a two-state neuron, our research reveals fundamental robustness properties of functional networks derived from classical statistical mechanics models.
q-bio/0407012
Michael Deem
David J. Earl and Michael W. Deem
Evolvability is a Selectable Trait
21 pages, 4 figures, to appear in Proc. Natl. Acad. Sci. USA
null
10.1073/pnas.0404656101
null
q-bio.PE
null
Concomitant with the evolution of biological diversity must have been the evolution of mechanisms that facilitate evolution, due to the essentially infinite complexity of protein sequence space. We describe how evolvability can be an object of Darwinian selection, emphasizing the collective nature of the process. We quantify our theory with computer simulations of protein evolution. These simulations demonstrate that rapid or dramatic environmental change leads to selection for greater evolvability. The selective pressure for large scale genetic moves, such as DNA exchange, becomes increasingly strong as the environmental conditions become more uncertain. Our results demonstrate that evolvability is a selectable trait and allow for the explanation of a large body of experimental results.
[ { "created": "Wed, 7 Jul 2004 16:58:54 GMT", "version": "v1" } ]
2009-11-10
[ [ "Earl", "David J.", "" ], [ "Deem", "Michael W.", "" ] ]
Concomitant with the evolution of biological diversity must have been the evolution of mechanisms that facilitate evolution, due to the essentially infinite complexity of protein sequence space. We describe how evolvability can be an object of Darwinian selection, emphasizing the collective nature of the process. We quantify our theory with computer simulations of protein evolution. These simulations demonstrate that rapid or dramatic environmental change leads to selection for greater evolvability. The selective pressure for large scale genetic moves, such as DNA exchange, becomes increasingly strong as the environmental conditions become more uncertain. Our results demonstrate that evolvability is a selectable trait and allow for the explanation of a large body of experimental results.
2204.10553
Eugenio Mauri
Eugenio Mauri (LPENS), Simona Cocco (LPENS), R\'emi Monasson (LPENS)
Mutational paths with sequence-based models of proteins: from sampling to mean-field characterisation
null
null
null
null
q-bio.BM cond-mat.stat-mech q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifying and characterizing mutational paths is an important issue in evolutionary biology and in bioengineering. We here introduce a generic description of mutational paths in terms of the goodness of sequences and of the mutational dynamics (how sequences change) along the path. We first propose an algorithm to sample mutational paths, which we benchmark on exactly solvable models of proteins in silico, and apply to data-driven models of natural proteins learned from sequence data with Restricted Boltzmann Machines. We then use mean-field theory to characterize the properties of mutational paths for different mutational dynamics of interest, and show how it can be used to extend Kimura's estimate of evolutionary distances to sequence-based epistatic models of selection.
[ { "created": "Fri, 22 Apr 2022 07:58:19 GMT", "version": "v1" }, { "created": "Mon, 24 Oct 2022 11:23:56 GMT", "version": "v2" }, { "created": "Thu, 16 Feb 2023 15:05:13 GMT", "version": "v3" }, { "created": "Mon, 27 Mar 2023 20:59:25 GMT", "version": "v4" } ]
2023-03-29
[ [ "Mauri", "Eugenio", "", "LPENS" ], [ "Cocco", "Simona", "", "LPENS" ], [ "Monasson", "Rémi", "", "LPENS" ] ]
Identifying and characterizing mutational paths is an important issue in evolutionary biology and in bioengineering. We here introduce a generic description of mutational paths in terms of the goodness of sequences and of the mutational dynamics (how sequences change) along the path. We first propose an algorithm to sample mutational paths, which we benchmark on exactly solvable models of proteins in silico, and apply to data-driven models of natural proteins learned from sequence data with Restricted Boltzmann Machines. We then use mean-field theory to characterize the properties of mutational paths for different mutational dynamics of interest, and show how it can be used to extend Kimura's estimate of evolutionary distances to sequence-based epistatic models of selection.
2209.14631
Nassim Nicholas Taleb
Nassim Nicholas Taleb, Jeffrey West
Working With Convex Responses: Antifragility From Finance to Oncology
arXiv admin note: text overlap with arXiv:1808.00065 Final accepted version in Entropy
null
10.3390/e25020343
null
q-bio.QM q-fin.ST
http://creativecommons.org/licenses/by/4.0/
We extend techniques and learnings about the stochastic properties of nonlinear responses from finance to medicine, particularly oncology where it can inform dosing and intervention. We define antifragility. We propose uses of risk analysis to medical problems, through the properties of nonlinear responses (convex or concave). We 1) link the convexity/concavity of the dose-response function to the statistical properties of the results; 2) define "antifragility" as a mathematical property for local beneficial convex responses and the generalization of "fragility" as its opposite, locally concave in the tails of the statistical distribution; 3) propose mathematically tractable relations between dosage, severity of conditions, and iatrogenics. In short we propose a framework to integrate the necessary consequences of nonlinearities in evidence-based oncology and more general clinical risk management.
[ { "created": "Thu, 29 Sep 2022 08:46:58 GMT", "version": "v1" }, { "created": "Wed, 25 Jan 2023 16:31:54 GMT", "version": "v2" } ]
2023-03-22
[ [ "Taleb", "Nassim Nicholas", "" ], [ "West", "Jeffrey", "" ] ]
We extend techniques and learnings about the stochastic properties of nonlinear responses from finance to medicine, particularly oncology where it can inform dosing and intervention. We define antifragility. We propose uses of risk analysis to medical problems, through the properties of nonlinear responses (convex or concave). We 1) link the convexity/concavity of the dose-response function to the statistical properties of the results; 2) define "antifragility" as a mathematical property for local beneficial convex responses and the generalization of "fragility" as its opposite, locally concave in the tails of the statistical distribution; 3) propose mathematically tractable relations between dosage, severity of conditions, and iatrogenics. In short we propose a framework to integrate the necessary consequences of nonlinearities in evidence-based oncology and more general clinical risk management.
1307.6436
Petter Holme
Petter Holme, Fredrik Liljeros
Birth and death of links control disease spreading in empirical contact networks
null
Scientific Reports 4, 4999 (2014)
10.1038/srep04999
null
q-bio.PE cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate what structural aspects of a collection of twelve empirical temporal networks of human contacts are important to disease spreading. We scan the entire parameter spaces of the two canonical models of infectious disease epidemiology -- the Susceptible-Infectious-Susceptible (SIS) and Susceptible-Infectious-Removed (SIR) models. The results from these simulations are compared to reference data where we eliminate structures in the interevent intervals, the time to the first contact in the data, or the time from the last contact to the end of the sampling. The picture we find is that the birth and death of links, and the total number of contacts over a link, are essential to predict outbreaks. On the other hand, the exact times of contacts between the beginning and end, or the interevent interval distribution, do not matter much. In other words, a simplified picture of these empirical data sets that suffices for epidemiological purposes is that links are born, are active with some intensity, and die.
[ { "created": "Wed, 24 Jul 2013 14:43:49 GMT", "version": "v1" }, { "created": "Sat, 24 May 2014 01:17:39 GMT", "version": "v2" } ]
2014-05-27
[ [ "Holme", "Petter", "" ], [ "Liljeros", "Fredrik", "" ] ]
We investigate what structural aspects of a collection of twelve empirical temporal networks of human contacts are important to disease spreading. We scan the entire parameter spaces of the two canonical models of infectious disease epidemiology -- the Susceptible-Infectious-Susceptible (SIS) and Susceptible-Infectious-Removed (SIR) models. The results from these simulations are compared to reference data where we eliminate structures in the interevent intervals, the time to the first contact in the data, or the time from the last contact to the end of the sampling. The picture we find is that the birth and death of links, and the total number of contacts over a link, are essential to predict outbreaks. On the other hand, the exact times of contacts between the beginning and end, or the interevent interval distribution, do not matter much. In other words, a simplified picture of these empirical data sets that suffices for epidemiological purposes is that links are born, are active with some intensity, and die.
2207.09047
Hai-Jun Zhou
Zhen-Ye Huang, Xin-Yi Fan, Jianwen Zhou, Hai-Jun Zhou
Lateral predictive coding revisited: Internal model, symmetry breaking, and response time
12 pages, including 10 figures. To be published in the journal CTP
null
10.1088/1572-9494/ac7c03
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predictive coding is a promising theoretical framework in neuroscience for understanding information transmission and perception. It posits that the brain perceives the external world through internal models and updates these models under the guidance of prediction errors. Previous studies on predictive coding emphasized top-down feedback interactions in hierarchical multi-layered networks but largely ignored lateral recurrent interactions. We perform analytical and numerical investigations in this work on the effects of single-layer lateral interactions. We consider a simple predictive response dynamics and run it on the MNIST dataset of hand-written digits. We find that learning will generally break the interaction symmetry between peer neurons, and that high input correlation between two neurons does not necessarily bring strong direct interactions between them. The optimized network responds to familiar input signals much faster than to novel or random inputs, and it significantly reduces the correlations between the output states of pairs of neurons.
[ { "created": "Tue, 19 Jul 2022 03:32:18 GMT", "version": "v1" } ]
2022-09-07
[ [ "Huang", "Zhen-Ye", "" ], [ "Fan", "Xin-Yi", "" ], [ "Zhou", "Jianwen", "" ], [ "Zhou", "Hai-Jun", "" ] ]
Predictive coding is a promising theoretical framework in neuroscience for understanding information transmission and perception. It posits that the brain perceives the external world through internal models and updates these models under the guidance of prediction errors. Previous studies on predictive coding emphasized top-down feedback interactions in hierarchical multi-layered networks but largely ignored lateral recurrent interactions. We perform analytical and numerical investigations in this work on the effects of single-layer lateral interactions. We consider a simple predictive response dynamics and run it on the MNIST dataset of hand-written digits. We find that learning will generally break the interaction symmetry between peer neurons, and that high input correlation between two neurons does not necessarily bring strong direct interactions between them. The optimized network responds to familiar input signals much faster than to novel or random inputs, and it significantly reduces the correlations between the output states of pairs of neurons.
1206.2818
Marco Gherardi
Arianna Bottinelli, Bruno Bassetti, Marco Cosentino Lagomarsino, Marco Gherardi
Influence of homology and node-age on the growth of protein-protein interaction networks
14 pages, 7 figures [accepted for publication in PRE]
Phys. Rev. E 86, 041919 (2012)
10.1103/PhysRevE.86.041919
null
q-bio.MN cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proteins participating in a protein-protein interaction network can be grouped into homology classes following their common ancestry. Proteins added to the network correspond to genes added to the classes, so that the dynamics of the two objects are intrinsically linked. Here, we first introduce a statistical model describing the joint growth of the network and the partitioning of nodes into classes, which is studied through a combined mean-field and simulation approach. We then employ this unified framework to address the specific issue of the age dependence of protein interactions, through the definition of three different node wiring/divergence schemes. Comparison with empirical data indicates that an age-dependent divergence move is necessary in order to reproduce the basic topological observables together with the age correlation between interacting nodes visible in empirical data. We also discuss the possibility of nontrivial joint partition/topology observables.
[ { "created": "Wed, 13 Jun 2012 14:27:50 GMT", "version": "v1" }, { "created": "Tue, 16 Oct 2012 20:20:30 GMT", "version": "v2" } ]
2013-05-30
[ [ "Bottinelli", "Arianna", "" ], [ "Bassetti", "Bruno", "" ], [ "Lagomarsino", "Marco Cosentino", "" ], [ "Gherardi", "Marco", "" ] ]
Proteins participating in a protein-protein interaction network can be grouped into homology classes following their common ancestry. Proteins added to the network correspond to genes added to the classes, so that the dynamics of the two objects are intrinsically linked. Here, we first introduce a statistical model describing the joint growth of the network and the partitioning of nodes into classes, which is studied through a combined mean-field and simulation approach. We then employ this unified framework to address the specific issue of the age dependence of protein interactions, through the definition of three different node wiring/divergence schemes. Comparison with empirical data indicates that an age-dependent divergence move is necessary in order to reproduce the basic topological observables together with the age correlation between interacting nodes visible in empirical data. We also discuss the possibility of nontrivial joint partition/topology observables.
2204.05138
Jared Reser
Jared Edward Reser
Artificial Intelligence Software Structured to Simulate Human Working Memory, Mental Imagery, and Mental Continuity
null
null
null
null
q-bio.NC cs.AI cs.LG cs.NE cs.SC
http://creativecommons.org/publicdomain/zero/1.0/
This article presents an artificial intelligence (AI) architecture intended to simulate the human working memory system as well as the manner in which it is updated iteratively. It features several interconnected neural networks designed to emulate the specialized modules of the cerebral cortex. These are structured hierarchically and integrated into a global workspace. They are capable of temporarily maintaining high-level patterns akin to the psychological items maintained in working memory. This maintenance is made possible by persistent neural activity in the form of two modalities: sustained neural firing (resulting in a focus of attention) and synaptic potentiation (resulting in a short-term store). This persistent activity is updated iteratively resulting in incremental changes to the content of the working memory system. As the content stored in working memory gradually evolves, successive states overlap and are continuous with one another. The present article will explore how this architecture can lead to a gradual shift in the distribution of coactive representations, ultimately leading to mental continuity between processing states, and thus to human-like cognition.
[ { "created": "Tue, 29 Mar 2022 22:23:36 GMT", "version": "v1" } ]
2022-04-12
[ [ "Reser", "Jared Edward", "" ] ]
This article presents an artificial intelligence (AI) architecture intended to simulate the human working memory system as well as the manner in which it is updated iteratively. It features several interconnected neural networks designed to emulate the specialized modules of the cerebral cortex. These are structured hierarchically and integrated into a global workspace. They are capable of temporarily maintaining high-level patterns akin to the psychological items maintained in working memory. This maintenance is made possible by persistent neural activity in the form of two modalities: sustained neural firing (resulting in a focus of attention) and synaptic potentiation (resulting in a short-term store). This persistent activity is updated iteratively resulting in incremental changes to the content of the working memory system. As the content stored in working memory gradually evolves, successive states overlap and are continuous with one another. The present article will explore how this architecture can lead to a gradual shift in the distribution of coactive representations, ultimately leading to mental continuity between processing states, and thus to human-like cognition.
1901.04094
Yuval Harel
Yuval Harel, Ron Meir, Manfred Opper
Optimal decoding of dynamic stimuli encoded by heterogeneous populations of spiking neurons - a closed form approximation
This is an extended version of arXiv:1507.07813 and arXiv:1609.03519. Additions relative to the version presented in NIPS (arXiv:1507.07813) are outlined in the introduction
Neural Computation 2018, Vol. 30, 2056-2112
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural decoding may be formulated as dynamic state estimation (filtering) based on point process observations, a generally intractable problem. Numerical sampling techniques are often practically useful for the decoding of real neural data. However, they are less useful as theoretical tools for modeling and understanding sensory neural systems, since they lead to limited conceptual insight about optimal encoding and decoding strategies. We consider sensory neural populations characterized by a distribution over neuron parameters. We develop an analytically tractable Bayesian approximation to optimal filtering based on the observation of spiking activity, that greatly facilitates the analysis of optimal encoding in situations deviating from common assumptions of uniform coding. Continuous distributions are used to approximate large populations with few parameters, resulting in a filter whose complexity does not grow with the population size, and allowing optimization of population parameters rather than individual tuning functions. Numerical comparison with particle filtering demonstrates the quality of the approximation. The analytic framework leads to insights which are difficult to obtain from numerical algorithms, and is consistent with biological observations about the distribution of sensory cells' preferred stimuli.
[ { "created": "Mon, 14 Jan 2019 00:07:53 GMT", "version": "v1" } ]
2019-01-15
[ [ "Harel", "Yuval", "" ], [ "Meir", "Ron", "" ], [ "Opper", "Manfred", "" ] ]
Neural decoding may be formulated as dynamic state estimation (filtering) based on point process observations, a generally intractable problem. Numerical sampling techniques are often practically useful for the decoding of real neural data. However, they are less useful as theoretical tools for modeling and understanding sensory neural systems, since they lead to limited conceptual insight about optimal encoding and decoding strategies. We consider sensory neural populations characterized by a distribution over neuron parameters. We develop an analytically tractable Bayesian approximation to optimal filtering based on the observation of spiking activity, that greatly facilitates the analysis of optimal encoding in situations deviating from common assumptions of uniform coding. Continuous distributions are used to approximate large populations with few parameters, resulting in a filter whose complexity does not grow with the population size, and allowing optimization of population parameters rather than individual tuning functions. Numerical comparison with particle filtering demonstrates the quality of the approximation. The analytic framework leads to insights which are difficult to obtain from numerical algorithms, and is consistent with biological observations about the distribution of sensory cells' preferred stimuli.
1602.07082
Juan Poyatos
Guillermo Rodrigo and Juan F. Poyatos
Genetic redundancies enhance information transfer in noisy regulatory circuits
6 Figures, comments are welcome
null
10.1371/journal.pcbi.1005156
null
q-bio.MN
http://creativecommons.org/licenses/by/4.0/
Cellular decision making is based on regulatory circuits that associate signal thresholds to specific physiological actions. This transmission of information is subjected to molecular noise, which can decrease its fidelity. Here, we show instead how such intrinsic noise enhances information transfer in the presence of multiple circuit copies. The result is due to the contribution of noise to the generation of autonomous responses by each copy, which are altogether associated with a common decision. Moreover, factors that correlate the responses of the redundant units (extrinsic noise or regulatory cross-talk) contribute to reduce fidelity, while those that further uncouple them (heterogeneity within the copies) can lead to stronger information gain. Overall, our study emphasizes how the interplay of signal thresholding, redundancy, and noise influences the accuracy of cellular decision making. Understanding this interplay provides a basis to explain collective cell signaling mechanisms, and to engineer robust decisions with noisy genetic circuits.
[ { "created": "Tue, 23 Feb 2016 08:29:19 GMT", "version": "v1" } ]
2017-02-08
[ [ "Rodrigo", "Guillermo", "" ], [ "Poyatos", "Juan F.", "" ] ]
Cellular decision making is based on regulatory circuits that associate signal thresholds to specific physiological actions. This transmission of information is subjected to molecular noise, which can decrease its fidelity. Here, we show instead how such intrinsic noise enhances information transfer in the presence of multiple circuit copies. The result is due to the contribution of noise to the generation of autonomous responses by each copy, which are altogether associated with a common decision. Moreover, factors that correlate the responses of the redundant units (extrinsic noise or regulatory cross-talk) contribute to reduce fidelity, while those that further uncouple them (heterogeneity within the copies) can lead to stronger information gain. Overall, our study emphasizes how the interplay of signal thresholding, redundancy, and noise influences the accuracy of cellular decision making. Understanding this interplay provides a basis to explain collective cell signaling mechanisms, and to engineer robust decisions with noisy genetic circuits.
1704.02588
Nadav M. Shnerb
Dor Herman and Nadav M. Shnerb
Front propagation and effect of memory in stochastic desertification models with an absorbing state
null
null
10.1088/1742-5468/aa82c5
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Desertification in dryland ecosystems is considered to be a major environmental threat that may lead to devastating consequences. The concern increases when the system admits two alternative steady states and the transition is abrupt and irreversible (catastrophic shift). However, recent studies show that the inherent stochasticity of the birth-death process, when superimposed on the presence of an absorbing state, may lead to a continuous (second order) transition even if the deterministic dynamics supports a catastrophic transition. Following these works we present here a numerical study of a one-dimensional stochastic desertification model, where the deterministic predictions are confronted with the observed dynamics. Our results suggest that a stochastic spatial system allows for a propagating front only when its active phase invades the inactive (desert) one. In the extinction phase one observes transient front propagation followed by a global collapse. In the presence of a seed bank the vegetation state is shown to be more robust against demographic stochasticity, but the transition in that case still belongs to the directed percolation equivalence class.
[ { "created": "Sun, 9 Apr 2017 11:20:19 GMT", "version": "v1" }, { "created": "Tue, 20 Jun 2017 11:29:09 GMT", "version": "v2" } ]
2017-09-13
[ [ "Herman", "Dor", "" ], [ "Shnerb", "Nadav M.", "" ] ]
Desertification in dryland ecosystems is considered to be a major environmental threat that may lead to devastating consequences. The concern increases when the system admits two alternative steady states and the transition is abrupt and irreversible (catastrophic shift). However, recent studies show that the inherent stochasticity of the birth-death process, when superimposed on the presence of an absorbing state, may lead to a continuous (second order) transition even if the deterministic dynamics supports a catastrophic transition. Following these works we present here a numerical study of a one-dimensional stochastic desertification model, where the deterministic predictions are confronted with the observed dynamics. Our results suggest that a stochastic spatial system allows for a propagating front only when its active phase invades the inactive (desert) one. In the extinction phase one observes transient front propagation followed by a global collapse. In the presence of a seed bank the vegetation state is shown to be more robust against demographic stochasticity, but the transition in that case still belongs to the directed percolation equivalence class.
1304.3691
Arindam RoyChoudhury
Arindam RoyChoudhury
Identifiability of a Coalescent-based Population Tree Model
11 pages, 1 figure
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identifiability of evolutionary tree models has been a recent topic of discussion and some models have been shown to be non-identifiable. A coalescent-based rooted population tree model, originally proposed by Nielsen et al. 1998 [2], has been used by many authors in the last few years and is a simple tool to accurately model the changes in allele frequencies in the tree. However, the identifiability of this model has never been proven. Here we prove this model to be identifiable by showing that the model parameters can be expressed as functions of the probability distributions of subsamples. This is a step toward proving the consistency of the maximum likelihood estimator of the population tree based on this model.
[ { "created": "Fri, 12 Apr 2013 17:30:13 GMT", "version": "v1" } ]
2013-04-15
[ [ "RoyChoudhury", "Arindam", "" ] ]
Identifiability of evolutionary tree models has been a recent topic of discussion and some models have been shown to be non-identifiable. A coalescent-based rooted population tree model, originally proposed by Nielsen et al. 1998 [2], has been used by many authors in the last few years and is a simple tool to accurately model the changes in allele frequencies in the tree. However, the identifiability of this model has never been proven. Here we prove this model to be identifiable by showing that the model parameters can be expressed as functions of the probability distributions of subsamples. This is a step toward proving the consistency of the maximum likelihood estimator of the population tree based on this model.
1711.10376
Shimin Cai Dr
Shi-Min Cai, Wei Chen, Dong-Bai Liu, Ming Tang, Xun Chen
Complex network analysis of brain functional connectivity under a multi-step cognitive task
18 pages 6 figures
Physica A 466, 663 (2017)
10.1016/j.physa.2016.09.058
null
q-bio.NC physics.bio-ph physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Functional brain networks have been widely studied to understand the relationship between brain organization and behavior. In this paper, we aim to explore the functional connectivity of the brain network under a \emph{multi-step} cognitive task involving consecutive behaviors, and further to understand the effect of behaviors on brain organization. The functional brain networks are constructed based on a high spatial and temporal resolution fMRI dataset and analyzed via a complex-network-based approach. We find that at the voxel level the functional brain network shows robust small-worldness and scale-free characteristics, while its assortativity and rich-club organization are slightly restricted by the order of behaviors performed. More interestingly, the functional connectivity of the brain network in activated ROIs strongly correlates with behaviors and exhibits obvious differences depending on the order of behaviors performed. These empirical results suggest that brain organization has the generic properties of small-worldness and scale-free characteristics, and that its diverse functional connectivity emerging from activated ROIs is strongly driven by these behavioral activities via the plasticity of the brain.
[ { "created": "Tue, 28 Nov 2017 16:16:33 GMT", "version": "v1" } ]
2017-12-06
[ [ "Cai", "Shi-Min", "" ], [ "Chen", "Wei", "" ], [ "Liu", "Dong-Bai", "" ], [ "Tang", "Ming", "" ], [ "Chen", "Xun", "" ] ]
Functional brain networks have been widely studied to understand the relationship between brain organization and behavior. In this paper, we aim to explore the functional connectivity of the brain network under a \emph{multi-step} cognitive task involving consecutive behaviors, and further to understand the effect of behaviors on brain organization. The functional brain networks are constructed based on a high spatial and temporal resolution fMRI dataset and analyzed via a complex-network-based approach. We find that at the voxel level the functional brain network shows robust small-worldness and scale-free characteristics, while its assortativity and rich-club organization are slightly restricted by the order of behaviors performed. More interestingly, the functional connectivity of the brain network in activated ROIs strongly correlates with behaviors and exhibits obvious differences depending on the order of behaviors performed. These empirical results suggest that brain organization has the generic properties of small-worldness and scale-free characteristics, and that its diverse functional connectivity emerging from activated ROIs is strongly driven by these behavioral activities via the plasticity of the brain.
1708.04988
Assya Trofimov
Trofimov Assya, Lemieux Sebastien, Perreault Claude
Warp: a method for neural network interpretability applied to gene expression profiles
5 pages, 3 figures, NIPS2016, Machine Learning in Computational Biology workshop
null
null
null
q-bio.GN cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show a proof of principle for warping, a method to interpret the inner workings of neural networks in the context of gene expression analysis. Warping is an efficient way to gain insight into the inner workings of neural nets and make them more interpretable. We demonstrate the ability of warping to recover meaningful information for a given class on a sample-specific, individual basis. We found that warping works well in both linearly and nonlinearly separable datasets. These encouraging results show that warping has the potential to be an answer to neural network interpretability in computational biology.
[ { "created": "Wed, 16 Aug 2017 17:27:40 GMT", "version": "v1" } ]
2017-08-17
[ [ "Assya", "Trofimov", "" ], [ "Sebastien", "Lemieux", "" ], [ "Claude", "Perreault", "" ] ]
We show a proof of principle for warping, a method to interpret the inner workings of neural networks in the context of gene expression analysis. Warping is an efficient way to gain insight into the inner workings of neural nets and make them more interpretable. We demonstrate the ability of warping to recover meaningful information for a given class on a sample-specific, individual basis. We found that warping works well in both linearly and nonlinearly separable datasets. These encouraging results show that warping has the potential to be an answer to neural network interpretability in computational biology.
1705.04499
Francisco Herrer\'ias-Azcu\'e Mr.
Francisco Herrer\'ias-Azcu\'e, Tobias Galla
The effects of heterogeneity on stochastic cycles in epidemics
Main text 16 pages, 9 figures. Supplement 5 pages
null
10.1038/s41598-017-12606-x
null
q-bio.PE cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Models of biological processes are often subject to different sources of noise. Developing an understanding of the combined effects of different types of uncertainty is an open challenge. In this paper, we study a variant of the susceptible-infective-recovered model of epidemic spread, which combines both agent-to-agent heterogeneity and intrinsic noise. We focus on epidemic cycles, driven by the stochasticity of infection and recovery events, and study in detail how heterogeneity in susceptibilities and propensities to pass on the disease affects these quasi-cycles. While the system can only be described by a large hierarchical set of equations in the transient regime, we derive a reduced closed set of equations for population-level quantities in the stationary regime. We analytically obtain the spectra of quasi-cycles in the linear-noise approximation. We find that the characteristic frequency of these cycles is typically determined by population averages of susceptibilities and infectivities, but that their amplitude depends on higher-order moments of the heterogeneity. We also investigate the synchronisation properties and phase lag between different groups of susceptible and infected individuals.
[ { "created": "Fri, 12 May 2017 10:21:03 GMT", "version": "v1" } ]
2018-03-28
[ [ "Herrerías-Azcué", "Francisco", "" ], [ "Galla", "Tobias", "" ] ]
Models of biological processes are often subject to different sources of noise. Developing an understanding of the combined effects of different types of uncertainty is an open challenge. In this paper, we study a variant of the susceptible-infective-recovered model of epidemic spread, which combines both agent-to-agent heterogeneity and intrinsic noise. We focus on epidemic cycles, driven by the stochasticity of infection and recovery events, and study in detail how heterogeneity in susceptibilities and propensities to pass on the disease affects these quasi-cycles. While the system can only be described by a large hierarchical set of equations in the transient regime, we derive a reduced closed set of equations for population-level quantities in the stationary regime. We analytically obtain the spectra of quasi-cycles in the linear-noise approximation. We find that the characteristic frequency of these cycles is typically determined by population averages of susceptibilities and infectivities, but that their amplitude depends on higher-order moments of the heterogeneity. We also investigate the synchronisation properties and phase lag between different groups of susceptible and infected individuals.
2106.14443
Chuan Xu
David Osumi-Sutherland, Chuan Xu, Maria Keays, Peter V. Kharchenko, Aviv Regev, Ed Lein, Sarah A. Teichmann
Cell types and ontologies of the Human Cell Atlas
null
Nat Cell Biol 23, 1129-1135 (2021)
10.1038/s41556-021-00787-7
null
q-bio.CB
http://creativecommons.org/licenses/by-nc-sa/4.0/
Massive single-cell profiling efforts have accelerated our discovery of the cellular composition of the human body, while at the same time raising the need to formalise this new knowledge. Here, we review current cell ontology efforts to harmonise and integrate different sources of annotations of cell types and states. We illustrate with examples how a unified ontology can consolidate and advance our understanding of cell types across scientific communities and biological domains.
[ { "created": "Mon, 28 Jun 2021 07:54:37 GMT", "version": "v1" } ]
2021-11-10
[ [ "Osumi-Sutherland", "David", "" ], [ "Xu", "Chuan", "" ], [ "Keays", "Maria", "" ], [ "Kharchenko", "Peter V.", "" ], [ "Regev", "Aviv", "" ], [ "Lein", "Ed", "" ], [ "Teichmann", "Sarah A.", "" ] ]
Massive single-cell profiling efforts have accelerated our discovery of the cellular composition of the human body, while at the same time raising the need to formalise this new knowledge. Here, we review current cell ontology efforts to harmonise and integrate different sources of annotations of cell types and states. We illustrate with examples how a unified ontology can consolidate and advance our understanding of cell types across scientific communities and biological domains.
1309.2609
Liane Gabora
Liane Gabora
Contextual Focus: A Cognitive Explanation for the Cultural Revolution of the Middle/Upper Paleolithic
6 pages
In R. Alterman & D. Hirsch (Eds.), Proceedings of the 25th Annual Meeting of the Cognitive Science Society (pp. 432-437). Austin, TX: Cognitive Science Society. (2003)
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many elements of culture made their first appearance in the Upper Paleolithic. Previous hypotheses put forth to explain this unprecedented burst of creativity are found wanting. Examination of the psychological basis of creativity leads to the suggestion that it resulted from the onset of contextual focus: the capacity to focus or defocus attention in response to the situation, thereby shifting between analytic and associative modes of thought. New ideas germinate in a defocused state in which one is receptive to the possible relevance of many dimensions of a situation. They are refined in a focused state, conducive to filtering out irrelevant dimensions and condensing relevant ones.
[ { "created": "Tue, 10 Sep 2013 18:50:54 GMT", "version": "v1" }, { "created": "Wed, 22 May 2019 20:08:13 GMT", "version": "v2" }, { "created": "Fri, 5 Jul 2019 18:57:04 GMT", "version": "v3" }, { "created": "Tue, 9 Jul 2019 19:49:02 GMT", "version": "v4" } ]
2019-07-11
[ [ "Gabora", "Liane", "" ] ]
Many elements of culture made their first appearance in the Upper Paleolithic. Previous hypotheses put forth to explain this unprecedented burst of creativity are found wanting. Examination of the psychological basis of creativity leads to the suggestion that it resulted from the onset of contextual focus: the capacity to focus or defocus attention in response to the situation, thereby shifting between analytic and associative modes of thought. New ideas germinate in a defocused state in which one is receptive to the possible relevance of many dimensions of a situation. They are refined in a focused state, conducive to filtering out irrelevant dimensions and condensing relevant ones.
1308.3663
Michael Sadovsky
Michael Zakharov (PetPace LTD, Israel) and Michael Sadovsky (Institute of Computational Modelling SB RAS, Russia)
The role of blood circulatory system in thermal regulation of animals explained by entropy production analysis
23 pages, 3 figures
null
null
null
q-bio.OT physics.bio-ph physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A novel model of thermal regulation of homoeothermic animals has been implemented. The model is based on a non-equilibrium thermodynamic approach which introduces entropy balance and the rate of entropy generation as a formulation of the Second Law. The model explains the linear correlation between an animal's skin and environment temperatures from first principles and demonstrates the role of blood circulation in the thermoregulation of homoeothermic animals.
[ { "created": "Fri, 16 Aug 2013 16:25:54 GMT", "version": "v1" } ]
2013-08-19
[ [ "Zakharov", "Michael", "", "PetPace LTD, Israel" ], [ "Sadovsky", "Michael", "", "Institute\n of Computational Modelling SB RAS, Russia" ] ]
A novel model of thermal regulation of homoeothermic animals has been implemented. The model is based on a non-equilibrium thermodynamic approach which introduces entropy balance and the rate of entropy generation as a formulation of the Second Law. The model explains the linear correlation between an animal's skin and environment temperatures from first principles and demonstrates the role of blood circulation in the thermoregulation of homoeothermic animals.
2406.17800
Meng Cui
Meng Cui, Xubo Liu, Haohe Liu, Jinzheng Zhao, Daoliang Li, Wenwu Wang
Fish Tracking, Counting, and Behaviour Analysis in Digital Aquaculture: A Comprehensive Review
null
null
null
null
q-bio.QM cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Digital aquaculture leverages advanced technologies and data-driven methods, providing substantial benefits over traditional aquaculture practices. Fish tracking, counting, and behaviour analysis are crucial components of digital aquaculture, which are essential for optimizing production efficiency, enhancing fish welfare, and improving resource management. Previous reviews have focused on single modalities, limiting their ability to address the diverse challenges encountered in these tasks comprehensively. This review provides a comprehensive analysis of the current state of aquaculture digital technologies, including vision-based, acoustic-based, and biosensor-based methods. We examine the advantages, limitations, and applications of these methods, highlighting recent advancements and identifying critical research gaps. The scarcity of comprehensive fish datasets and the lack of unified evaluation standards, which make it difficult to compare the performance of different technologies, are identified as major obstacles hindering progress in this field. To overcome current limitations and improve the accuracy, robustness, and efficiency of fish monitoring systems, we explore the potential of emerging technologies such as multimodal data fusion and deep learning. Additionally, we contribute to the field by providing a summary of existing datasets available for fish tracking, counting, and behaviour analysis. Future research directions are outlined, emphasizing the need for comprehensive datasets and evaluation standards to facilitate meaningful comparisons between technologies and promote their practical implementation in real-world aquaculture settings.
[ { "created": "Thu, 20 Jun 2024 11:37:27 GMT", "version": "v1" } ]
2024-06-27
[ [ "Cui", "Meng", "" ], [ "Liu", "Xubo", "" ], [ "Liu", "Haohe", "" ], [ "Zhao", "Jinzheng", "" ], [ "Li", "Daoliang", "" ], [ "Wang", "Wenwu", "" ] ]
Digital aquaculture leverages advanced technologies and data-driven methods, providing substantial benefits over traditional aquaculture practices. Fish tracking, counting, and behaviour analysis are crucial components of digital aquaculture, which are essential for optimizing production efficiency, enhancing fish welfare, and improving resource management. Previous reviews have focused on single modalities, limiting their ability to address the diverse challenges encountered in these tasks comprehensively. This review provides a comprehensive analysis of the current state of aquaculture digital technologies, including vision-based, acoustic-based, and biosensor-based methods. We examine the advantages, limitations, and applications of these methods, highlighting recent advancements and identifying critical research gaps. The scarcity of comprehensive fish datasets and the lack of unified evaluation standards, which make it difficult to compare the performance of different technologies, are identified as major obstacles hindering progress in this field. To overcome current limitations and improve the accuracy, robustness, and efficiency of fish monitoring systems, we explore the potential of emerging technologies such as multimodal data fusion and deep learning. Additionally, we contribute to the field by providing a summary of existing datasets available for fish tracking, counting, and behaviour analysis. Future research directions are outlined, emphasizing the need for comprehensive datasets and evaluation standards to facilitate meaningful comparisons between technologies and promote their practical implementation in real-world aquaculture settings.
2303.07383
Colby Long
Elizabeth A. Allman, Colby Long, John A. Rhodes
Phylogenomic Models from Tree Symmetries
20 pages, 0 figures
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
A model of genomic sequence evolution on a species tree should include not only a sequence substitution process, but also a coalescent process, since different sites may evolve on different gene trees due to incomplete lineage sorting. Chifman and Kubatko initiated the study of such models, leading to the development of the SVDquartets methods of species tree inference. A key observation was that symmetries in an ultrametric species tree led to symmetries in the joint distribution of bases at the taxa. In this work, we explore the implications of such symmetry more fully, defining new models incorporating only the symmetries of this distribution, regardless of the mechanism that might have produced them. The models are thus supermodels of many standard ones with mechanistic parameterizations. We study phylogenetic invariants for the models, and establish identifiability of species tree topologies using them.
[ { "created": "Mon, 13 Mar 2023 18:01:52 GMT", "version": "v1" } ]
2023-03-15
[ [ "Allman", "Elizabeth A.", "" ], [ "Long", "Colby", "" ], [ "Rhodes", "John A.", "" ] ]
A model of genomic sequence evolution on a species tree should include not only a sequence substitution process, but also a coalescent process, since different sites may evolve on different gene trees due to incomplete lineage sorting. Chifman and Kubatko initiated the study of such models, leading to the development of the SVDquartets methods of species tree inference. A key observation was that symmetries in an ultrametric species tree led to symmetries in the joint distribution of bases at the taxa. In this work, we explore the implications of such symmetry more fully, defining new models incorporating only the symmetries of this distribution, regardless of the mechanism that might have produced them. The models are thus supermodels of many standard ones with mechanistic parameterizations. We study phylogenetic invariants for the models, and establish identifiability of species tree topologies using them.
2209.08612
Jonathan Ch\'avez Casillas
Jonathan A. Ch\'avez Casillas
A Stochastic Model for the Early Stages of Highly Contagious Epidemics by using a State-Dependent Point Process
32 pages, 9 figures
null
null
null
q-bio.PE math.PR
http://creativecommons.org/licenses/by/4.0/
The recent COVID-19 pandemic has shown that when the reproduction number is high and there are no proper measures in place, the number of infected people can increase dramatically in a short time, producing a phenomenon that many stochastic SIR-like models cannot describe: overdispersion of the number of infected people (i.e., the variance of the number of infected people during any interval is very high compared to the average). To address this issue, in this paper we explore the possibility of modeling the total number of infections as a state-dependent self-exciting point process. In this way, infections are not independent of one another: any infection increases the likelihood of a new infection, and the numbers of currently infected and recovered individuals also enter into the likelihood of new infections. Since long-term simulation is extremely computationally intensive, exact expressions for the moments of the processes determining the numbers of infected and recovered individuals are computed, and simulation algorithms for these state-dependent processes are provided.
[ { "created": "Sun, 18 Sep 2022 17:27:34 GMT", "version": "v1" } ]
2022-09-20
[ [ "Casillas", "Jonathan A. Chávez", "" ] ]
The recent COVID-19 pandemic has shown that when the reproduction number is high and there are no proper measures in place, the number of infected people can increase dramatically in a short time, producing a phenomenon that many stochastic SIR-like models cannot describe: overdispersion of the number of infected people (i.e., the variance of the number of infected people during any interval is very high compared to the average). To address this issue, in this paper we explore the possibility of modeling the total number of infections as a state-dependent self-exciting point process. In this way, infections are not independent of one another: any infection increases the likelihood of a new infection, and the numbers of currently infected and recovered individuals also enter into the likelihood of new infections. Since long-term simulation is extremely computationally intensive, exact expressions for the moments of the processes determining the numbers of infected and recovered individuals are computed, and simulation algorithms for these state-dependent processes are provided.
1911.00052
Ewandson Luiz Lameu
Ewandson L. Lameu, Fernando S. Borges, Kelly C. Iarosz, Paulo R. Protachevicz, Antonio M. Batista, Chris G. Antonopoulos and Elbert E. N. Macau
Short-term and spike-timing-dependent plasticities facilitate the formation of modular neural networks
18 pages, 11 figures
null
null
null
q-bio.NC nlin.AO physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The brain has the phenomenal ability to reorganize itself by forming new connections among neurons and by pruning others. The so-called neural or brain plasticity facilitates the modification of brain structure and function over different time scales. Plasticity might occur due to external stimuli received from the environment, during recovery from brain injury, or due to modifications within the body and brain itself. In this paper, we study the combined effect of short-term (STP) and spike-timing-dependent plasticities (STDP) on the synaptic strength of excitatory coupled Hodgkin-Huxley neurons and show that plasticity can facilitate the formation of modular neural networks with complex topologies that resemble those of networks with preferential attachment properties. In particular, we use an STDP rule that alters the synaptic coupling intensity based on time intervals between spikes of postsynaptic and presynaptic neurons. Previous works have shown that STDP may induce the appearance of directed connections from high to low-frequency spiking neurons. On the other hand, STP is attributed to the release of neurotransmitters in the synaptic cleft of neurons that alter its synaptic efficiency. Our results suggest that the combined effect of STP and STDP with high recovery time facilitates the formation of connections among neurons with similar spike frequencies only, a kind of preferential attachment. We then pursue this further and show that, when starting with all-to-all neural configurations, depending on the STP recovery time and distribution of neural frequencies, modular neural networks can emerge as a direct result of the combined effect of STP and STDP.
[ { "created": "Thu, 31 Oct 2019 18:45:39 GMT", "version": "v1" } ]
2019-11-04
[ [ "Lameu", "Ewandson L.", "" ], [ "Borges", "Fernando S.", "" ], [ "Iarosz", "Kelly C.", "" ], [ "Protachevicz", "Paulo R.", "" ], [ "Batista", "Antonio M.", "" ], [ "Antonopoulos", "Chris G.", "" ], [ "Macau", "Elbert E. N.", "" ] ]
The brain has the phenomenal ability to reorganize itself by forming new connections among neurons and by pruning others. The so-called neural or brain plasticity facilitates the modification of brain structure and function over different time scales. Plasticity might occur due to external stimuli received from the environment, during recovery from brain injury, or due to modifications within the body and brain itself. In this paper, we study the combined effect of short-term (STP) and spike-timing-dependent plasticities (STDP) on the synaptic strength of excitatory coupled Hodgkin-Huxley neurons and show that plasticity can facilitate the formation of modular neural networks with complex topologies that resemble those of networks with preferential attachment properties. In particular, we use an STDP rule that alters the synaptic coupling intensity based on time intervals between spikes of postsynaptic and presynaptic neurons. Previous works have shown that STDP may induce the appearance of directed connections from high to low-frequency spiking neurons. On the other hand, STP is attributed to the release of neurotransmitters in the synaptic cleft of neurons that alter its synaptic efficiency. Our results suggest that the combined effect of STP and STDP with high recovery time facilitates the formation of connections among neurons with similar spike frequencies only, a kind of preferential attachment. We then pursue this further and show that, when starting with all-to-all neural configurations, depending on the STP recovery time and distribution of neural frequencies, modular neural networks can emerge as a direct result of the combined effect of STP and STDP.
2007.14392
Sai Vinjanampathy
Sanit Gupta, Sahil Shah, Sumit Chaturvedi, Pranav Thakkar, Parvinder Solanki, Soham Dibyachintan, Sandeepan Roy, M. B. Sushma, Adwait Godbole, Noufal Jaseem, Pradumn Kumar, Sucheta Ravikanti, Aritra Das, Giridhara R. Babu, Tarun Bhatnagar, Avijit Maji, Mithun K. Mitra, and Sai Vinjanampathy
An India-specific Compartmental Model for Covid-19: Projections and Intervention Strategies by Incorporating Geographical, Infrastructural and Response Heterogeneity
code and updates available at https://github.com/vinjanampathy/Heterogenous_COVID19_India_Model
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a compartmental meta-population model for the spread of Covid-19 in India. Our model simulates populations at a district or state level using an epidemiological model that is appropriate to Covid-19. Different districts are connected by a transportation matrix developed using available census data. We introduce uncertainties in the testing rates into the model, taking into account the disparate responses of the different states to the epidemic and also factoring in the state of the public healthcare system. Our model allows us to generate qualitative projections of Covid-19 spread in India, and further allows us to investigate the effects of different proposed interventions. By building in heterogeneity at geographical and infrastructural levels and in local responses, our model aims to capture some of the complexity of epidemiological modeling appropriate to a diverse country such as India.
[ { "created": "Tue, 28 Jul 2020 06:49:45 GMT", "version": "v1" } ]
2020-07-30
[ [ "Gupta", "Sanit", "" ], [ "Shah", "Sahil", "" ], [ "Chaturvedi", "Sumit", "" ], [ "Thakkar", "Pranav", "" ], [ "Solanki", "Parvinder", "" ], [ "Dibyachintan", "Soham", "" ], [ "Roy", "Sandeepan", "" ], [ "Sushma", "M. B.", "" ], [ "Godbole", "Adwait", "" ], [ "Jaseem", "Noufal", "" ], [ "Kumar", "Pradumn", "" ], [ "Ravikanti", "Sucheta", "" ], [ "Das", "Aritra", "" ], [ "Babu", "Giridhara R.", "" ], [ "Bhatnagar", "Tarun", "" ], [ "Maji", "Avijit", "" ], [ "Mitra", "Mithun K.", "" ], [ "Vinjanampathy", "Sai", "" ] ]
We present a compartmental meta-population model for the spread of Covid-19 in India. Our model simulates populations at a district or state level using an epidemiological model that is appropriate to Covid-19. Different districts are connected by a transportation matrix developed using available census data. We introduce uncertainties in the testing rates into the model, taking into account the disparate responses of the different states to the epidemic and also factoring in the state of the public healthcare system. Our model allows us to generate qualitative projections of Covid-19 spread in India, and further allows us to investigate the effects of different proposed interventions. By building in heterogeneity at geographical and infrastructural levels and in local responses, our model aims to capture some of the complexity of epidemiological modeling appropriate to a diverse country such as India.
1611.09024
Sang-Yoon Kim
Sang-Yoon Kim and Woochang Lim
Dynamical Responses to External Stimuli for Both Cases of Excitatory and Inhibitory Synchronization in A Complex Neuronal Network
null
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To study how dynamical responses to external stimuli depend on the synaptic-coupling type, we consider two types of excitatory and inhibitory synchronization (i.e., synchronization via synaptic excitation and inhibition) in complex small-world networks of excitatory regular spiking (RS) pyramidal neurons and inhibitory fast spiking (FS) interneurons. For both cases of excitatory and inhibitory synchronization, the effects of synaptic couplings on dynamical responses to external time-periodic stimuli $S(t)$ (applied to a fraction of neurons) are investigated by varying the driving amplitude $A$ of $S(t)$. Stimulated neurons are phase-locked to the external stimuli for both excitatory and inhibitory couplings. On the other hand, the stimulation effect on non-stimulated neurons depends on the type of synaptic coupling. The external stimulus $S(t)$ has a constructive effect on excitatory non-stimulated RS neurons (i.e., it causes external phase lockings in the non-stimulated sub-population), while $S(t)$ has a destructive effect on inhibitory non-stimulated FS interneurons (i.e., it breaks up the original inhibitory synchronization in the non-stimulated sub-population). As a result of these different effects of $S(t)$, the type and degree of the dynamical response (e.g., synchronization enhancement or suppression), characterized by the dynamical response factor $D_f$ (given by the ratio of the synchronization degree in the presence and absence of the stimulus), are found to vary in distinctly different ways depending on the synaptic-coupling type. Furthermore, we also measure the matching degree between the dynamics of the two sub-populations of stimulated and non-stimulated neurons in terms of a "cross-correlation" measure $M_c$. With increasing $A$, based on $M_c$, we discuss the cross-correlations between the two sub-populations, which affect the dynamical responses to $S(t)$.
[ { "created": "Mon, 28 Nov 2016 08:45:26 GMT", "version": "v1" } ]
2016-11-29
[ [ "Kim", "Sang-Yoon", "" ], [ "Lim", "Woochang", "" ] ]
For studying how dynamical responses to external stimuli depend on the synaptic-coupling type, we consider two types of excitatory and inhibitory synchronization (i.e., synchronization via synaptic excitation and inhibition) in complex small-world networks of excitatory regular spiking (RS) pyramidal neurons and inhibitory fast spiking (FS) interneurons. For both cases of excitatory and inhibitory synchronization, effects of synaptic couplings on dynamical responses to external time-periodic stimuli $S(t)$ (applied to a fraction of neurons) are investigated by varying the driving amplitude $A$ of $S(t)$. Stimulated neurons are phase-locked to external stimuli for both cases of excitatory and inhibitory couplings. On the other hand, the stimulation effect on non-stimulated neurons depends on the type of synaptic coupling. The external stimulus $S(t)$ makes a constructive effect on excitatory non-stimulated RS neurons (i.e., it causes external phase lockings in the non-stimulated sub-population), while $S(t)$ makes a destructive effect on inhibitory non-stimulated FS interneurons (i.e., it breaks up original inhibitory synchronization in the non-stimulated sub-population). As results of these different effects of $S(t)$, the type and degree of dynamical response (e.g., synchronization enhancement or suppression), characterized by the dynamical response factor $D_f$ (given by the ratio of synchronization degree in the presence and absence of stimulus), are found to vary in a distinctly different way, depending on the synaptic-coupling type. Furthermore, we also measure the matching degree between the dynamics of the two sub-populations of stimulated and non-stimulated neurons in terms of a "cross-correlation" measure $M_c$. With increasing $A$, based on $M_c$, we discuss the cross-correlations between the two sub-populations, affecting the dynamical responses to $S(t)$.
2309.03344
Bartholomew Jardine
Bartholomew E. Jardine, Lucian P. Smith, Herbert M. Sauro
MakeSBML: A tool for converting between Antimony and SBML
4 pages, 1 figure. Author roles: Bartholomew E Jardine: Writing - Original draft preparation, Software. Lucian P Smith: Software, Writing. Herbert M Sauro: Conceived, Software design, Writing
null
null
null
q-bio.MN q-bio.QM
http://creativecommons.org/licenses/by/4.0/
We describe a web-based tool, MakeSBML (https://sys-bio.github.io/makesbml/), that provides an installation-free application for creating, editing, and searching the Biomodels repository for SBML-based models. MakeSBML is a client-based web application that translates models expressed in human-readable Antimony to the System Biology Markup Language (SBML) and vice-versa. Since MakeSBML is a web-based application it requires no installation on the user's part. Currently, MakeSBML is hosted on a GitHub page where the client-based design makes it trivial to move to other hosts. This model for software deployment also reduces maintenance costs since an active server is not required. The SBML modeling language is often used in systems biology research to describe complex biochemical networks and makes reproducing models much easier. However, SBML is designed to be computer-readable, not human-readable. We therefore employ the human-readable Antimony language to make it easy to create and edit SBML models.
[ { "created": "Wed, 6 Sep 2023 19:55:21 GMT", "version": "v1" } ]
2023-09-08
[ [ "Jardine", "Bartholomew E.", "" ], [ "Smith", "Lucian P.", "" ], [ "Sauro", "Herbert M.", "" ] ]
We describe a web-based tool, MakeSBML (https://sys-bio.github.io/makesbml/), that provides an installation-free application for creating, editing, and searching the Biomodels repository for SBML-based models. MakeSBML is a client-based web application that translates models expressed in human-readable Antimony to the System Biology Markup Language (SBML) and vice-versa. Since MakeSBML is a web-based application it requires no installation on the user's part. Currently, MakeSBML is hosted on a GitHub page where the client-based design makes it trivial to move to other hosts. This model for software deployment also reduces maintenance costs since an active server is not required. The SBML modeling language is often used in systems biology research to describe complex biochemical networks and makes reproducing models much easier. However, SBML is designed to be computer-readable, not human-readable. We therefore employ the human-readable Antimony language to make it easy to create and edit SBML models.
1711.01487
Yao Li
Yao Li, Logan Chariker, Lai-Sang Young
How well do reduced models capture the dynamics in models of interacting neurons?
null
null
null
null
q-bio.NC math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a class of stochastic models of interacting neurons with emergent dynamics similar to those seen in local cortical populations, and compares them to very simple reduced models driven by the same mean excitatory and inhibitory currents. Discrepancies in firing rates between network and reduced models were investigated, and mechanisms leading to these discrepancies were identified. Chief among them is correlations in spiking, or partial synchronization, working in concert with "nonlinearities" in the time evolution of membrane potentials. Additionally, simple random walk models and their first passage times were shown to reproduce well fluctuations in neuronal membrane potentials and interspike times.
[ { "created": "Sat, 4 Nov 2017 20:05:09 GMT", "version": "v1" } ]
2017-11-07
[ [ "Li", "Yao", "" ], [ "Chariker", "Logan", "" ], [ "Young", "Lai-Sang", "" ] ]
This paper introduces a class of stochastic models of interacting neurons with emergent dynamics similar to those seen in local cortical populations, and compares them to very simple reduced models driven by the same mean excitatory and inhibitory currents. Discrepancies in firing rates between network and reduced models were investigated, and mechanisms leading to these discrepancies were identified. Chief among them is correlations in spiking, or partial synchronization, working in concert with "nonlinearities" in the time evolution of membrane potentials. Additionally, simple random walk models and their first passage times were shown to reproduce well fluctuations in neuronal membrane potentials and interspike times.
2107.13792
Thilo Gross
Alexey Ryabov, Bernd Blasius, Helmut Hillebrand, Irina Olenina, and Thilo Gross
Estimation of functional diversity and species traits from ecological monitoring data
37 pages, main text ends on page 13, color figures
null
10.1073/pnas.2118156119
null
q-bio.PE physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The twin crises of climate change and biodiversity loss define a strong need for functional diversity monitoring. While the availability of high-quality ecological monitoring data is increasing, the quantification of functional diversity so far requires the identification of species traits, for which data is harder to obtain. However, the traits that are relevant for the ecological function of a species also shape its performance in the environment and hence should be reflected indirectly in its spatio-temporal distribution. Thus it may be possible to reconstruct these traits from a sufficiently extensive monitoring dataset. Here we use diffusion maps, a deterministic and de-facto parameter-free analysis method, to reconstruct a proxy representation of the species' traits directly from monitoring data and use it to estimate functional diversity. We demonstrate this approach both with simulated data and real-world phytoplankton monitoring data from the Baltic sea. We anticipate that wider application of this approach to existing data could greatly advance the analysis of changes in functional biodiversity.
[ { "created": "Thu, 29 Jul 2021 07:39:49 GMT", "version": "v1" }, { "created": "Tue, 20 Sep 2022 05:57:05 GMT", "version": "v2" } ]
2023-01-11
[ [ "Ryabov", "Alexey", "" ], [ "Blasius", "Bernd", "" ], [ "Hillebrand", "Helmut", "" ], [ "Olenina", "Irina", "" ], [ "Gross", "Thilo", "" ] ]
The twin crises of climate change and biodiversity loss define a strong need for functional diversity monitoring. While the availability of high-quality ecological monitoring data is increasing, the quantification of functional diversity so far requires the identification of species traits, for which data is harder to obtain. However, the traits that are relevant for the ecological function of a species also shape its performance in the environment and hence should be reflected indirectly in its spatio-temporal distribution. Thus it may be possible to reconstruct these traits from a sufficiently extensive monitoring dataset. Here we use diffusion maps, a deterministic and de-facto parameter-free analysis method, to reconstruct a proxy representation of the species' traits directly from monitoring data and use it to estimate functional diversity. We demonstrate this approach both with simulated data and real-world phytoplankton monitoring data from the Baltic sea. We anticipate that wider application of this approach to existing data could greatly advance the analysis of changes in functional biodiversity.
0902.1730
Thomas Risler
Markus Basan, Thomas Risler, Jean-Francois Joanny, Xavier Sastre-Garau, Jacques Prost
Homeostatic competition drives tumor growth and metastasis nucleation
13 pages, 11 figures, to be published in the HFSP Journal
HFSP J. 3 (4), 265-272 (2009)
10.2976/1.3086732
null
q-bio.TO physics.bio-ph physics.med-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a mechanism for tumor growth emphasizing the role of homeostatic regulation and tissue stability. We show that competition between surface and bulk effects leads to the existence of a critical size that must be overcome by metastases to reach macroscopic sizes. This property can qualitatively explain the observed size distributions of metastases, while size-independent growth rates cannot account for clinical and experimental data. In addition, it potentially explains the observed preferential growth of metastases on tissue surfaces and membranes such as the pleural and peritoneal layers, suggests a mechanism underlying the seed and soil hypothesis introduced by Stephen Paget in 1889 and yields realistic values for metastatic inefficiency. We propose a number of key experiments to test these concepts. The homeostatic pressure as introduced in this work could constitute a quantitative, experimentally accessible measure for the metastatic potential of early malignant growths.
[ { "created": "Tue, 10 Feb 2009 19:46:00 GMT", "version": "v1" } ]
2010-08-19
[ [ "Basan", "Markus", "" ], [ "Risler", "Thomas", "" ], [ "Joanny", "Jean-Francois", "" ], [ "Sastre-Garau", "Xavier", "" ], [ "Prost", "Jacques", "" ] ]
We propose a mechanism for tumor growth emphasizing the role of homeostatic regulation and tissue stability. We show that competition between surface and bulk effects leads to the existence of a critical size that must be overcome by metastases to reach macroscopic sizes. This property can qualitatively explain the observed size distributions of metastases, while size-independent growth rates cannot account for clinical and experimental data. In addition, it potentially explains the observed preferential growth of metastases on tissue surfaces and membranes such as the pleural and peritoneal layers, suggests a mechanism underlying the seed and soil hypothesis introduced by Stephen Paget in 1889 and yields realistic values for metastatic inefficiency. We propose a number of key experiments to test these concepts. The homeostatic pressure as introduced in this work could constitute a quantitative, experimentally accessible measure for the metastatic potential of early malignant growths.
2012.03072
Swarnendu Banerjee
Swarnendu Banerjee, Bapi Saha, Max Rietkerk, Mara Baudena, Joydev Chattopadhyay
Chemical contamination mediated regime shifts in planktonic systems
23 pages, 10 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abrupt transitions leading to algal blooms are quite well known in aquatic ecosystems and have important implications for the environment. These ecosystem shifts have been largely attributed to nutrient dynamics and food web interactions. Contamination with heavy metals such as copper can modulate such ecological interactions which in turn may impact ecosystem functioning. Motivated by this, we explored the effect of copper enrichment on such regime shifts in planktonic systems. We integrated copper contamination into a minimal phytoplankton-zooplankton model which is known to demonstrate abrupt transitions between ecosystem states. Our results suggest that both the toxic and deficient concentration of copper in water bodies can lead to regime shift to an algal dominated alternative stable state. Further, interaction with fish density can also lead to collapse of population cycles thus leading to algal domination in the intermediate copper ranges. Environmental stochasticity may result in state transition much prior to the tipping point and there is a significant loss in the bimodality on increasing intensity and redness of noise. Finally, the impending state shifts due to contamination cannot be predicted by the generic early warning indicators unless the transition is close enough. Overall the study provides fresh impetus to explore regime shifts in ecosystems under the influence of anthropogenic changes like chemical contamination.
[ { "created": "Sat, 5 Dec 2020 16:37:07 GMT", "version": "v1" } ]
2020-12-08
[ [ "Banerjee", "Swarnendu", "" ], [ "Saha", "Bapi", "" ], [ "Rietkerk", "Max", "" ], [ "Baudena", "Mara", "" ], [ "Chattopadhyay", "Joydev", "" ] ]
Abrupt transitions leading to algal blooms are quite well known in aquatic ecosystems and have important implications for the environment. These ecosystem shifts have been largely attributed to nutrient dynamics and food web interactions. Contamination with heavy metals such as copper can modulate such ecological interactions which in turn may impact ecosystem functioning. Motivated by this, we explored the effect of copper enrichment on such regime shifts in planktonic systems. We integrated copper contamination into a minimal phytoplankton-zooplankton model which is known to demonstrate abrupt transitions between ecosystem states. Our results suggest that both the toxic and deficient concentration of copper in water bodies can lead to regime shift to an algal dominated alternative stable state. Further, interaction with fish density can also lead to collapse of population cycles thus leading to algal domination in the intermediate copper ranges. Environmental stochasticity may result in state transition much prior to the tipping point and there is a significant loss in the bimodality on increasing intensity and redness of noise. Finally, the impending state shifts due to contamination cannot be predicted by the generic early warning indicators unless the transition is close enough. Overall the study provides fresh impetus to explore regime shifts in ecosystems under the influence of anthropogenic changes like chemical contamination.
2005.02294
Isaac P\'erez Castillo
Rams\'es H. Mena and Jorge X. Velasco-Hernandez and Natalia B. Mantilla-Beniers and Gabriel A. Carranco-Sapi\'ens and Luis Benet and Denis Boyer and Isaac P\'erez Castillo
Using posterior predictive distributions to analyse epidemic models: COVID-19 in Mexico City
6 pages, 9 figures. The draft has been rewritten and the SI has been included as an ancillary file. The calibration of the model has been redone using the updated public data on May 7th
Physical Biology 17 (2020), 065001
10.1088/1478-3975/abb115
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Epidemiological models contain a set of parameters that must be adjusted based on available observations. Once a model has been calibrated, it can be used as a forecasting tool to make predictions and to evaluate contingency plans. It is customary to employ only point estimators for such predictions. However, some models may fit the same data reasonably well for a broad range of parameter values, and this flexibility means that predictions stemming from such models will vary widely, depending on the particular parameter values employed within the range that give a good fit. When data are poor or incomplete, model uncertainty widens further. A way to circumvent this problem is to use Bayesian statistics to incorporate observations and use the full range of parameter estimates contained in the posterior distribution to adjust for uncertainties in model predictions. Specifically, given the epidemiological model and a probability distribution for observations, we use the posterior distribution of model parameters to generate all possible epidemiological curves via the posterior predictive distribution. From the envelope of all curves one can extract the worst-case scenario and study the impact of implementing contingency plans according to this assessment. We apply this approach to the potential evolution of COVID-19 in Mexico City and assess whether contingency plans are being successful and whether the epidemiological curve has flattened.
[ { "created": "Tue, 5 May 2020 15:47:57 GMT", "version": "v1" }, { "created": "Fri, 15 May 2020 12:26:41 GMT", "version": "v2" } ]
2021-03-02
[ [ "Mena", "Ramsés H.", "" ], [ "Velasco-Hernandez", "Jorge X.", "" ], [ "Mantilla-Beniers", "Natalia B.", "" ], [ "Carranco-Sapiéns", "Gabriel A.", "" ], [ "Benet", "Luis", "" ], [ "Boyer", "Denis", "" ], [ "Castillo", "Isaac Pérez", "" ] ]
Epidemiological models contain a set of parameters that must be adjusted based on available observations. Once a model has been calibrated, it can be used as a forecasting tool to make predictions and to evaluate contingency plans. It is customary to employ only point estimators for such predictions. However, some models may fit the same data reasonably well for a broad range of parameter values, and this flexibility means that predictions stemming from such models will vary widely, depending on the particular parameter values employed within the range that give a good fit. When data are poor or incomplete, model uncertainty widens further. A way to circumvent this problem is to use Bayesian statistics to incorporate observations and use the full range of parameter estimates contained in the posterior distribution to adjust for uncertainties in model predictions. Specifically, given the epidemiological model and a probability distribution for observations, we use the posterior distribution of model parameters to generate all possible epidemiological curves via the posterior predictive distribution. From the envelope of all curves one can extract the worst-case scenario and study the impact of implementing contingency plans according to this assessment. We apply this approach to the potential evolution of COVID-19 in Mexico City and assess whether contingency plans are being successful and whether the epidemiological curve has flattened.
1605.09269
Maciej Ciemny
Mateusz Kurcinski, Maciej Pawel Ciemny, Maciej Blaszczyk, Andrzej Kolinski and Sebastian Kmiecik
Flexible protein-peptide docking using CABS-dock with knowledge about the binding site
IWBBIO 2016 International work-conference on Bioinformatics and biomedical engineering. Proceedings Extended abstracts 20-22 April, 2016, Granada (SPAIN)
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite considerable efforts, structural prediction of protein-peptide complexes is still a very challenging task, mainly due to two reasons: high flexibility of the peptides and transient character of their interactions with proteins. Recently we have developed an automated web server CABS-dock (http://biocomp.chem.uw.edu.pl/CABSdock), which conducts flexible protein-peptide docking without any knowledge about the binding site. Our method allows for full flexibility of the peptide, whereas the flexibility of the receptor is restricted to near native conformations considering the main chain, and full flexibility of the side chains. Performance of the CABS-dock server was thoroughly tested on a benchmark of 171 test cases, both bound and unbound. Evaluation of the obtained results showed overall good performance of the method, especially that no information of the binding site was used. From unsuccessful experiments we learned that the accuracy of docking might be significantly improved, if only little information of the binding site was considered. In fact, in real-life applications user typically has access to some data indicating the location and/or structure of the binding site. In the current work, we test and demonstrate the performance of the CABS-dock server with two new features. The first one allows to utilize the knowledge about receptor residue(s) constituting the binding site, and the second one allows to enforce the desired secondary structure on the peptide structure. Based on the given example, we observe significant improvement of the docking accuracy in comparison to the default CABS-dock mode.
[ { "created": "Mon, 30 May 2016 15:17:00 GMT", "version": "v1" } ]
2016-05-31
[ [ "Kurcinski", "Mateusz", "" ], [ "Ciemny", "Maciej Pawel", "" ], [ "Blaszczyk", "Maciej", "" ], [ "Kolinski", "Andrzej", "" ], [ "Kmiecik", "Sebastian", "" ] ]
Despite considerable efforts, structural prediction of protein-peptide complexes is still a very challenging task, mainly due to two reasons: high flexibility of the peptides and transient character of their interactions with proteins. Recently we have developed an automated web server CABS-dock (http://biocomp.chem.uw.edu.pl/CABSdock), which conducts flexible protein-peptide docking without any knowledge about the binding site. Our method allows for full flexibility of the peptide, whereas the flexibility of the receptor is restricted to near native conformations considering the main chain, and full flexibility of the side chains. Performance of the CABS-dock server was thoroughly tested on a benchmark of 171 test cases, both bound and unbound. Evaluation of the obtained results showed overall good performance of the method, especially that no information of the binding site was used. From unsuccessful experiments we learned that the accuracy of docking might be significantly improved, if only little information of the binding site was considered. In fact, in real-life applications user typically has access to some data indicating the location and/or structure of the binding site. In the current work, we test and demonstrate the performance of the CABS-dock server with two new features. The first one allows to utilize the knowledge about receptor residue(s) constituting the binding site, and the second one allows to enforce the desired secondary structure on the peptide structure. Based on the given example, we observe significant improvement of the docking accuracy in comparison to the default CABS-dock mode.
1107.5439
Luca Peliti
Remi Lehe and Oskar Hallatschek and Luca Peliti
The rate of beneficial mutations surfing on the wave of a range expansion
21 pages, 7 figures; to appear in PLoS Computational Biology
PLoS Comp. Biol. 8(3): e1002447 (2012)
10.1371/journal.pcbi.1002447
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many theoretical and experimental studies suggest that range expansions can have severe consequences for the gene pool of the expanding population. Due to strongly enhanced genetic drift at the advancing frontier, neutral and weakly deleterious mutations can reach large frequencies in the newly colonized regions, as if they were surfing the front of the range expansion. These findings raise the question of how frequently beneficial mutations successfully surf at shifting range margins, thereby promoting adaptation towards a range-expansion phenotype. Here, we use individual-based simulations to study the surfing statistics of recurrent beneficial mutations on wave-like range expansions in linear habitats. We show that the rate of surfing depends on two strongly antagonistic factors, the probability of surfing given the spatial location of a novel mutation and the rate of occurrence of mutations at that location. The surfing probability strongly increases towards the tip of the wave. Novel mutations are unlikely to surf unless they enjoy a spatial head start compared to the bulk of the population. The needed head start is shown to be proportional to the inverse fitness of the mutant type, and only weakly dependent on the carrying capacity. The second factor is the mutation occurrence which strongly decreases towards the tip of the wave. Thus, most successful mutations arise at an intermediate position in the front of the wave. We present an analytic theory for the tradeoff between these factors that allows one to predict how frequently substitutions by beneficial mutations occur at invasion fronts. We find that small amounts of genetic drift increase the fixation rate of beneficial mutations at the advancing front, and thus could be important for adaptation during species invasions.
[ { "created": "Wed, 27 Jul 2011 11:03:25 GMT", "version": "v1" }, { "created": "Wed, 3 Aug 2011 14:44:54 GMT", "version": "v2" }, { "created": "Tue, 14 Feb 2012 06:49:04 GMT", "version": "v3" } ]
2012-04-02
[ [ "Lehe", "Remi", "" ], [ "Hallatschek", "Oskar", "" ], [ "Peliti", "Luca", "" ] ]
Many theoretical and experimental studies suggest that range expansions can have severe consequences for the gene pool of the expanding population. Due to strongly enhanced genetic drift at the advancing frontier, neutral and weakly deleterious mutations can reach large frequencies in the newly colonized regions, as if they were surfing the front of the range expansion. These findings raise the question of how frequently beneficial mutations successfully surf at shifting range margins, thereby promoting adaptation towards a range-expansion phenotype. Here, we use individual-based simulations to study the surfing statistics of recurrent beneficial mutations on wave-like range expansions in linear habitats. We show that the rate of surfing depends on two strongly antagonistic factors, the probability of surfing given the spatial location of a novel mutation and the rate of occurrence of mutations at that location. The surfing probability strongly increases towards the tip of the wave. Novel mutations are unlikely to surf unless they enjoy a spatial head start compared to the bulk of the population. The needed head start is shown to be proportional to the inverse fitness of the mutant type, and only weakly dependent on the carrying capacity. The second factor is the mutation occurrence which strongly decreases towards the tip of the wave. Thus, most successful mutations arise at an intermediate position in the front of the wave. We present an analytic theory for the tradeoff between these factors that allows one to predict how frequently substitutions by beneficial mutations occur at invasion fronts. We find that small amounts of genetic drift increase the fixation rate of beneficial mutations at the advancing front, and thus could be important for adaptation during species invasions.
0905.0563
Jonas Cremer
Jonas Cremer, Tobias Reichenbach, and Erwin Frey
The edge of neutral evolution in social dilemmas
17 pages, 4 figures
New J. Phys. 11 (2009) 093029
10.1088/1367-2630/11/9/093029
LMU-ASC 21/09
q-bio.PE cond-mat.stat-mech physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The functioning of animal as well as human societies fundamentally relies on cooperation. Yet, defection is often favorable for the selfish individual, and social dilemmas arise. Selection by individuals' fitness, usually the basic driving force of evolution, quickly eliminates cooperators. However, evolution is also governed by fluctuations that can be of greater importance than fitness differences, and can render evolution effectively neutral. Here, we investigate the effects of selection versus fluctuations in social dilemmas. By studying the mean extinction times of cooperators and defectors, a variable sensitive to fluctuations, we are able to identify and quantify an emerging 'edge of neutral evolution' that delineates regimes of neutral and Darwinian evolution. Our results reveal that cooperation is significantly maintained in the neutral regimes. In contrast, the classical predictions of evolutionary game theory, where defectors beat cooperators, are recovered in the Darwinian regimes. Our studies demonstrate that fluctuations can provide a surprisingly simple way to partly resolve social dilemmas. Our methods are generally applicable to estimate the role of random drift in evolutionary dynamics.
[ { "created": "Tue, 5 May 2009 09:24:44 GMT", "version": "v1" }, { "created": "Sat, 26 Sep 2009 09:32:12 GMT", "version": "v2" } ]
2009-09-26
[ [ "Cremer", "Jonas", "" ], [ "Reichenbach", "Tobias", "" ], [ "Frey", "Erwin", "" ] ]
The functioning of animal as well as human societies fundamentally relies on cooperation. Yet, defection is often favorable for the selfish individual, and social dilemmas arise. Selection by individuals' fitness, usually the basic driving force of evolution, quickly eliminates cooperators. However, evolution is also governed by fluctuations that can be of greater importance than fitness differences, and can render evolution effectively neutral. Here, we investigate the effects of selection versus fluctuations in social dilemmas. By studying the mean extinction times of cooperators and defectors, a variable sensitive to fluctuations, we are able to identify and quantify an emerging 'edge of neutral evolution' that delineates regimes of neutral and Darwinian evolution. Our results reveal that cooperation is significantly maintained in the neutral regimes. In contrast, the classical predictions of evolutionary game theory, where defectors beat cooperators, are recovered in the Darwinian regimes. Our studies demonstrate that fluctuations can provide a surprisingly simple way to partly resolve social dilemmas. Our methods are generally applicable to estimate the role of random drift in evolutionary dynamics.
2003.14091
Meiyun Xia
Meiyun Xia, Pengfei Xu, Yuanbin Yang, Wenyu Jiang, Zehua Wang, Xiaolei Gu, Mingxi Yang, Deyu Li, Shuyu Li, Guijun Dong, Ling Wang, Daifa Wang
Frontoparietal Connectivity Neurofeedback Training for Promotion of Working Memory: An fNIRS Study in Healthy Male Participants
null
null
10.1109/ACCESS.2021.3074220
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neurofeedback cognitive training is a promising tool used to promote cognitive functions effectively and efficiently. In this study, we investigated a novel functional near-infrared spectroscopy (fNIRS)-based frontoparietal functional connectivity (FC) neurofeedback training paradigm related to working memory, involving healthy adults. Compared with conventional cognitive training studies, we chose the frontoparietal network, a key brain region for cognitive function modulation, as neurofeedback, yielding a strong targeting effect. In the experiment, 10 participants (test group) received three cognitive training sessions of 15 min using fNIRS-based frontoparietal FC as neurofeedback, and another 10 participants served as the control group. Frontoparietal FC was significantly increased in the test group (p = 0.03), and the cognitive functions (memory and attention) were significantly promoted compared with the control group (accuracy of 3-back test: p = 0.0005, reaction time of 3-back test: p = 0.0009). After additional validations on long-term training effect and on different patient populations, the proposed method exhibited considerable potential to be developed as a fast, effective, and widespread training approach for cognitive function enhancement.
[ { "created": "Tue, 31 Mar 2020 10:57:31 GMT", "version": "v1" }, { "created": "Wed, 2 Jun 2021 11:00:51 GMT", "version": "v2" } ]
2021-06-03
[ [ "Xia", "Meiyun", "" ], [ "Xu", "Pengfei", "" ], [ "Yang", "Yuanbin", "" ], [ "Jiang", "Wenyu", "" ], [ "Wang", "Zehua", "" ], [ "Gu", "Xiaolei", "" ], [ "Yang", "Mingxi", "" ], [ "Li", "Deyu", "" ], [ "Li", "Shuyu", "" ], [ "Dong", "Guijun", "" ], [ "Wang", "Ling", "" ], [ "Wang", "Daifa", "" ] ]
Neurofeedback cognitive training is a promising tool used to promote cognitive functions effectively and efficiently. In this study, we investigated a novel functional near-infrared spectroscopy (fNIRS)-based frontoparietal functional connectivity (FC) neurofeedback training paradigm related to working memory, involving healthy adults. Compared with conventional cognitive training studies, we chose the frontoparietal network, a key brain region for cognitive function modulation, as neurofeedback, yielding a strong targeting effect. In the experiment, 10 participants (test group) received three cognitive training sessions of 15 min using fNIRS-based frontoparietal FC as neurofeedback, and another 10 participants served as the control group. Frontoparietal FC was significantly increased in the test group (p = 0.03), and the cognitive functions (memory and attention) were significantly promoted compared with the control group (accuracy of 3-back test: p = 0.0005, reaction time of 3-back test: p = 0.0009). After additional validations on long-term training effect and on different patient populations, the proposed method exhibited considerable potential to be developed as a fast, effective, and widespread training approach for cognitive function enhancement.
2004.01154
Leonard Schmiester
Leonard Schmiester, Yannik Sch\"alte, Frank T. Bergmann, Tacio Camba, Erika Dudkin, Janine Egert, Fabian Fr\"ohlich, Lara Fuhrmann, Adrian L. Hauber, Svenja Kemmer, Polina Lakrisenko, Carolin Loos, Simon Merkt, Wolfgang M\"uller, Dilan Pathirana, Elba Raim\'undez, Lukas Refisch, Marcus Rosenblatt, Paul L. Stapor, Philipp St\"adter, Dantong Wang, Franz-Georg Wieland, Julio R. Banga, Jens Timmer, Alejandro F. Villaverde, Sven Sahle, Clemens Kreutz, Jan Hasenauer, Daniel Weindl
PEtab -- interoperable specification of parameter estimation problems in systems biology
null
null
10.1371/journal.pcbi.1008646
null
q-bio.QM
http://creativecommons.org/licenses/by/4.0/
Reproducibility and reusability of the results of data-based modeling studies are essential. Yet, there has been -- so far -- no broadly supported format for the specification of parameter estimation problems in systems biology. Here, we introduce PEtab, a format which facilitates the specification of parameter estimation problems using Systems Biology Markup Language (SBML) models and a set of tab-separated value files describing the observation model and experimental data as well as parameters to be estimated. We already implemented PEtab support into eight well-established model simulation and parameter estimation toolboxes with hundreds of users in total. We provide a Python library for validation and modification of a PEtab problem and currently 20 example parameter estimation problems based on recent studies. Specifications of PEtab, the PEtab Python library, as well as links to examples, and all supporting software tools are available at https://github.com/PEtab-dev/PEtab, a snapshot is available at https://doi.org/10.5281/zenodo.3732958. All original content is available under permissive licenses.
[ { "created": "Thu, 2 Apr 2020 17:25:43 GMT", "version": "v1" }, { "created": "Fri, 3 Apr 2020 13:57:20 GMT", "version": "v2" }, { "created": "Thu, 6 Aug 2020 17:13:18 GMT", "version": "v3" }, { "created": "Fri, 7 Aug 2020 12:53:50 GMT", "version": "v4" } ]
2021-06-09
[ [ "Schmiester", "Leonard", "" ], [ "Schälte", "Yannik", "" ], [ "Bergmann", "Frank T.", "" ], [ "Camba", "Tacio", "" ], [ "Dudkin", "Erika", "" ], [ "Egert", "Janine", "" ], [ "Fröhlich", "Fabian", "" ], [ "Fuhrmann", "Lara", "" ], [ "Hauber", "Adrian L.", "" ], [ "Kemmer", "Svenja", "" ], [ "Lakrisenko", "Polina", "" ], [ "Loos", "Carolin", "" ], [ "Merkt", "Simon", "" ], [ "Müller", "Wolfgang", "" ], [ "Pathirana", "Dilan", "" ], [ "Raimúndez", "Elba", "" ], [ "Refisch", "Lukas", "" ], [ "Rosenblatt", "Marcus", "" ], [ "Stapor", "Paul L.", "" ], [ "Städter", "Philipp", "" ], [ "Wang", "Dantong", "" ], [ "Wieland", "Franz-Georg", "" ], [ "Banga", "Julio R.", "" ], [ "Timmer", "Jens", "" ], [ "Villaverde", "Alejandro F.", "" ], [ "Sahle", "Sven", "" ], [ "Kreutz", "Clemens", "" ], [ "Hasenauer", "Jan", "" ], [ "Weindl", "Daniel", "" ] ]
Reproducibility and reusability of the results of data-based modeling studies are essential. Yet, there has been -- so far -- no broadly supported format for the specification of parameter estimation problems in systems biology. Here, we introduce PEtab, a format which facilitates the specification of parameter estimation problems using Systems Biology Markup Language (SBML) models and a set of tab-separated value files describing the observation model and experimental data as well as parameters to be estimated. We already implemented PEtab support into eight well-established model simulation and parameter estimation toolboxes with hundreds of users in total. We provide a Python library for validation and modification of a PEtab problem and currently 20 example parameter estimation problems based on recent studies. Specifications of PEtab, the PEtab Python library, as well as links to examples, and all supporting software tools are available at https://github.com/PEtab-dev/PEtab, a snapshot is available at https://doi.org/10.5281/zenodo.3732958. All original content is available under permissive licenses.
1901.04991
Divine Wanduku (Dr. )
Divine Wanduku
The stochastic extinction and stability conditions for a class of malaria epidemic models
arXiv admin note: substantial text overlap with arXiv:1808.09842, arXiv:1809.03866, arXiv:1809.03897
Mathematical Biosciences and Engineering, 2019, 16(5): 3771-3806
10.3934/mbe.2019187
null
q-bio.PE
http://creativecommons.org/licenses/by-nc-sa/4.0/
The stochastic extinction and stability in the mean of a family of SEIRS malaria models with a general nonlinear incidence rate is presented. The dynamics is driven by independent white noise processes from the disease transmission and natural death rates. The basic reproduction number $R^{*}_{0}$, the expected survival probability of the plasmodium $E(e^{-(\mu_{v}T_{1}+\mu T_{2})})$, and other threshold values are calculated. A sample Lyapunov exponential analysis for the system is utilized to obtain extinction results. Moreover, the rate of extinction of malaria is estimated, and innovative local Martingale and Lyapunov functional techniques are applied to establish the strong persistence, and asymptotic stability in the mean of the malaria-free steady population. The extinction of malaria depends on $R^{*}_{0}$, and $E(e^{-(\mu_{v}T_{1}+\mu T_{2})})$. Moreover, for either $R^{*}_{0}<1$, or $E(e^{-(\mu_{v}T_{1}+\mu T_{2})})<\frac{1}{R^{*}_{0}}$, whenever $R^{*}_{0}\geq 1$, respectively, extinction of malaria occurs. Furthermore, the robustness of these threshold conditions to the intensity of noise from the disease transmission rate is exhibited. Numerical simulation results are presented.
[ { "created": "Mon, 14 Jan 2019 14:12:44 GMT", "version": "v1" } ]
2020-05-05
[ [ "Wanduku", "Divine", "" ] ]
The stochastic extinction and stability in the mean of a family of SEIRS malaria models with a general nonlinear incidence rate is presented. The dynamics is driven by independent white noise processes from the disease transmission and natural death rates. The basic reproduction number $R^{*}_{0}$, the expected survival probability of the plasmodium $E(e^{-(\mu_{v}T_{1}+\mu T_{2})})$, and other threshold values are calculated. A sample Lyapunov exponential analysis for the system is utilized to obtain extinction results. Moreover, the rate of extinction of malaria is estimated, and innovative local Martingale and Lyapunov functional techniques are applied to establish the strong persistence, and asymptotic stability in the mean of the malaria-free steady population. The extinction of malaria depends on $R^{*}_{0}$, and $E(e^{-(\mu_{v}T_{1}+\mu T_{2})})$. Moreover, for either $R^{*}_{0}<1$, or $E(e^{-(\mu_{v}T_{1}+\mu T_{2})})<\frac{1}{R^{*}_{0}}$, whenever $R^{*}_{0}\geq 1$, respectively, extinction of malaria occurs. Furthermore, the robustness of these threshold conditions to the intensity of noise from the disease transmission rate is exhibited. Numerical simulation results are presented.