column          type            range
id              stringlengths   9 to 13
submitter       stringlengths   4 to 48
authors         stringlengths   4 to 9.62k
title           stringlengths   4 to 343
comments        stringlengths   2 to 480
journal-ref     stringlengths   9 to 309
doi             stringlengths   12 to 138
report-no       stringclasses   277 values
categories      stringlengths   8 to 87
license         stringclasses   9 values
orig_abstract   stringlengths   27 to 3.76k
versions        listlengths     1 to 15
update_date     stringlengths   10 to 10
authors_parsed  listlengths     1 to 147
abstract        stringlengths   24 to 3.75k
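The column summary above acts as a schema for each record that follows. A minimal sketch of reading such records, assuming they are stored one JSON object per line (the field names come from the summary; the example record and its truncation to a few fields are illustrative only):

```python
import json

# Fields listed in the column summary above.
FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "orig_abstract",
    "versions", "update_date", "authors_parsed", "abstract",
]

def parse_record(line: str) -> dict:
    """Parse one JSON-lines record, keeping only the schema fields.

    Missing fields come back as None, matching the null entries
    seen in the records below.
    """
    raw = json.loads(line)
    return {k: raw.get(k) for k in FIELDS}

# Hypothetical example record, truncated to a few fields for brevity.
example = '{"id": "1407.7997", "submitter": "Chris Greenman", "categories": "q-bio.PE"}'
record = parse_record(example)
```

Fields absent from a given line (here, e.g. `doi`) are filled with `None`, so every parsed record exposes the same 15 keys.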
q-bio/0312047
Anders Irb\"ack
Giorgio Favrin, Anders Irb\"ack, Stefan Wallin
Sequence-based study of two related proteins with different folding behaviors
12 pages, 5 figures
Proteins 54 (2004) 8-12
null
LU TP 03-14
q-bio.BM cond-mat.soft
null
ZSPA-1 is an engineered protein that binds to its parent, the three-helix-bundle Z domain of staphylococcal protein A. Uncomplexed ZSPA-1 shows a reduced helix content and a melting behavior that is less cooperative, compared with the wild-type Z domain. Here we show that the difference in folding behavior between these two sequences can be partly understood in terms of an off-lattice model with 5-6 atoms per amino acid and a minimalistic potential, in which folding is driven by backbone hydrogen bonding and effective hydrophobic attraction.
[ { "created": "Tue, 30 Dec 2003 20:39:58 GMT", "version": "v1" } ]
2007-05-23
[ [ "Favrin", "Giorgio", "" ], [ "Irbäck", "Anders", "" ], [ "Wallin", "Stefan", "" ] ]
ZSPA-1 is an engineered protein that binds to its parent, the three-helix-bundle Z domain of staphylococcal protein A. Uncomplexed ZSPA-1 shows a reduced helix content and a melting behavior that is less cooperative, compared with the wild-type Z domain. Here we show that the difference in folding behavior between these two sequences can be partly understood in terms of an off-lattice model with 5-6 atoms per amino acid and a minimalistic potential, in which folding is driven by backbone hydrogen bonding and effective hydrophobic attraction.
1407.7997
Chris Greenman
Donatien Fotso-Chedom, Pablo R. Murcia and Chris D. Greenman
Inferring the Clonal Structure of Viral Populations from Time Series Sequencing
21 pages, 9 figures
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
RNA virus populations undergo processes of mutation and selection, resulting in a mixed population of viral particles. High-throughput sequencing of a viral population consequently contains a mixed signal of the underlying clones. We would like to identify the underlying evolutionary structures. We utilize two sources of information to attempt this: within-segment linkage information and mutation prevalence. We demonstrate that clone haplotypes, their prevalence, and maximum-parsimony reticulate evolutionary structures can be identified, although the solutions may not be unique, even for complete sets of information. This is applied to a chain of influenza infection, where we infer evolutionary structures, including reassortment, and demonstrate some of the difficulties of interpretation that arise in deep sequencing from artifacts such as template switching during PCR amplification.
[ { "created": "Wed, 30 Jul 2014 11:03:08 GMT", "version": "v1" } ]
2014-07-31
[ [ "Fotso-Chedom", "Donatien", "" ], [ "Murcia", "Pablo R.", "" ], [ "Greenman", "Chris D.", "" ] ]
RNA virus populations undergo processes of mutation and selection, resulting in a mixed population of viral particles. High-throughput sequencing of a viral population consequently contains a mixed signal of the underlying clones. We would like to identify the underlying evolutionary structures. We utilize two sources of information to attempt this: within-segment linkage information and mutation prevalence. We demonstrate that clone haplotypes, their prevalence, and maximum-parsimony reticulate evolutionary structures can be identified, although the solutions may not be unique, even for complete sets of information. This is applied to a chain of influenza infection, where we infer evolutionary structures, including reassortment, and demonstrate some of the difficulties of interpretation that arise in deep sequencing from artifacts such as template switching during PCR amplification.
1707.00854
Jin Wang
Zhenlong Jiang, Li Tian, Xiaona Fang, Kun Zhang, Qiong Liu, Qingzhe Dong, Erkang Wang, Jin Wang
The emergence of the two cell fates and their associated switching for a negative auto-regulating gene
19 pages, 4 figures
null
null
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decisions in the cell that lead to its ultimate fate are important for cellular functions such as proliferation, growth, differentiation, development and death. Understanding this decision process is imperative for advancements in the treatment of diseases such as cancer. It is clear that the underlying gene regulatory networks and the surrounding environments of the cells are crucial for function. The self-repressor is a very abundant gene regulatory motif, and is often believed to have only one cell fate. In this study, we elucidate the effects of microenvironments, mimicking epigenetic effects on cell fates, through the introduction of inducers capable of binding to a self-repressing gene product (protein), thus regulating the associated gene. This alters the effective regulatory binding speed of the self-repressor regulatory protein to its destination DNA without changing the gene itself. The steady-state observations and real-time monitoring of the self-repressor expression dynamics reveal the emergence of the two cell fates. The simulations are consistent with the experimental findings. We provide physical and quantitative explanations for the origin of the two phenotypic cell fates. We find that two cell fates, rather than a single fate, and their associated switching dynamics emerge from a change in effective gene regulation strengths. The switching time scale is quantified. Our results reveal a new mechanism for the emergence of multiple cell fates. This provides an origin for the heterogeneity often observed among cell states, while illustrating the influence of microenvironments on cell fates and their decision-making processes without genetic changes.
[ { "created": "Tue, 4 Jul 2017 08:46:32 GMT", "version": "v1" } ]
2017-07-05
[ [ "Jiang", "Zhenlong", "" ], [ "Tian", "Li", "" ], [ "Fang", "Xiaona", "" ], [ "Zhang", "Kun", "" ], [ "Liu", "Qiong", "" ], [ "Dong", "Qingzhe", "" ], [ "Wang", "Erkang", "" ], [ "Wang", "Jin", "" ] ]
Decisions in the cell that lead to its ultimate fate are important for cellular functions such as proliferation, growth, differentiation, development and death. Understanding this decision process is imperative for advancements in the treatment of diseases such as cancer. It is clear that the underlying gene regulatory networks and the surrounding environments of the cells are crucial for function. The self-repressor is a very abundant gene regulatory motif, and is often believed to have only one cell fate. In this study, we elucidate the effects of microenvironments, mimicking epigenetic effects on cell fates, through the introduction of inducers capable of binding to a self-repressing gene product (protein), thus regulating the associated gene. This alters the effective regulatory binding speed of the self-repressor regulatory protein to its destination DNA without changing the gene itself. The steady-state observations and real-time monitoring of the self-repressor expression dynamics reveal the emergence of the two cell fates. The simulations are consistent with the experimental findings. We provide physical and quantitative explanations for the origin of the two phenotypic cell fates. We find that two cell fates, rather than a single fate, and their associated switching dynamics emerge from a change in effective gene regulation strengths. The switching time scale is quantified. Our results reveal a new mechanism for the emergence of multiple cell fates. This provides an origin for the heterogeneity often observed among cell states, while illustrating the influence of microenvironments on cell fates and their decision-making processes without genetic changes.
1907.11075
Andrew Warrington
Andrew Warrington, Arthur Spencer, Frank Wood
The Virtual Patch Clamp: Imputing C. elegans Membrane Potentials from Calcium Imaging
Includes Supplementary Materials
null
null
null
q-bio.NC cs.AI cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a stochastic whole-brain and body simulator of the nematode roundworm Caenorhabditis elegans (C. elegans) and show that it is sufficiently regularizing to allow imputation of latent membrane potentials from partial calcium fluorescence imaging observations. This is the first attempt we know of to "complete the circle," where an anatomically grounded whole-connectome simulator is used to impute a time-varying "brain" state at single-cell fidelity from covariates that are measurable in practice. The sequential Monte Carlo (SMC) method we employ not only enables imputation of said latent states but also presents a strategy for learning simulator parameters via variational optimization of the noisy model evidence approximation provided by SMC. Our imputation and parameter estimation experiments were conducted on distributed systems using novel implementations of the aforementioned techniques applied to synthetic data of dimension and type representative of those currently measured in laboratories.
[ { "created": "Wed, 24 Jul 2019 17:57:39 GMT", "version": "v1" } ]
2019-07-26
[ [ "Warrington", "Andrew", "" ], [ "Spencer", "Arthur", "" ], [ "Wood", "Frank", "" ] ]
We develop a stochastic whole-brain and body simulator of the nematode roundworm Caenorhabditis elegans (C. elegans) and show that it is sufficiently regularizing to allow imputation of latent membrane potentials from partial calcium fluorescence imaging observations. This is the first attempt we know of to "complete the circle," where an anatomically grounded whole-connectome simulator is used to impute a time-varying "brain" state at single-cell fidelity from covariates that are measurable in practice. The sequential Monte Carlo (SMC) method we employ not only enables imputation of said latent states but also presents a strategy for learning simulator parameters via variational optimization of the noisy model evidence approximation provided by SMC. Our imputation and parameter estimation experiments were conducted on distributed systems using novel implementations of the aforementioned techniques applied to synthetic data of dimension and type representative of those currently measured in laboratories.
1406.5746
Artem Novozhilov
Alexander S. Bratus, Artem S. Novozhilov, Vladimir P. Posvyanskii
Asymptotic behavior of spatially distributed replicator systems
20 pages, in Russian, an English translation will be available shortly
null
null
null
q-bio.PE math.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The question of biological stability (permanence) of a replicator reaction-diffusion system is considered. Sufficient conditions for biological stability are found. It is proved that there are situations when a biologically unstable non-distributed replicator system becomes biologically stable in the distributed case. Numerical examples illustrate the analytical findings. This manuscript is a continuation of arXiv:1308.5631.
[ { "created": "Sun, 22 Jun 2014 18:18:59 GMT", "version": "v1" } ]
2014-06-24
[ [ "Bratus", "Alexander S.", "" ], [ "Novozhilov", "Artem S.", "" ], [ "Posvyanskii", "Vladimir P.", "" ] ]
The question of biological stability (permanence) of a replicator reaction-diffusion system is considered. Sufficient conditions for biological stability are found. It is proved that there are situations when a biologically unstable non-distributed replicator system becomes biologically stable in the distributed case. Numerical examples illustrate the analytical findings. This manuscript is a continuation of arXiv:1308.5631.
1710.08443
Biman Bagchi
Saumyak Mukherjee, Sayantan Mondal, Ashish Anilrao Deshmukh, Balasubramanian Gopal and Biman Bagchi
Cavity Waters Govern Insulin Association and Release: Inferences from Experimental Data and Molecular Dynamics Simulations
null
null
null
null
q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While a monomer of the ubiquitous hormone insulin is the biologically active form in the human body, its hexameric assembly acts as an efficient storage unit. However, the role of water molecules in the structure, stability and dynamics of the insulin hexamer is poorly understood. Here we combine experimental data with molecular dynamics simulations to investigate the shape, structure and stability of an insulin hexamer, focusing on the role of water molecules. Both X-ray analysis and computer simulations show that the core of the hexamer cavity is barrel-shaped, holding, on average, sixteen water molecules. These encapsulated and constrained molecules impart structural stability to the hexamer. Apart from the electrostatic interactions with Zn2+ ions, an intricate hydrogen bond network amongst cavity water and neighboring protein residues stabilizes the hexameric association. These water molecules solvate six glutamate residues inside the cavity, decreasing electrostatic repulsions amongst the negatively charged carboxylate groups. They also prevent association between glutamate residues and Zn2+ ions and maintain the integrity of the cavity. Simulations reveal that removal of these waters results in a collapse of the cavity. Subsequent analyses also show that the hydrogen bond network among these water molecules and the protein residues that face the inner side of the cavity is more rigid, with a slower relaxation, as compared to that of the bulk solvent. Dynamics of cavity water reveal certain slow water molecules which form the backbone of the stable hydrogen bond network. The analysis presented here suggests a dominant role of structurally conserved water molecules in maintaining the integrity of the hexameric assembly and potentially modulating the dissociation of this assembly into the functional monomeric form.
[ { "created": "Mon, 23 Oct 2017 18:28:46 GMT", "version": "v1" } ]
2017-10-25
[ [ "Mukherjee", "Saumyak", "" ], [ "Mondal", "Sayantan", "" ], [ "Deshmukh", "Ashish Anilrao", "" ], [ "Gopal", "Balasubramanian", "" ], [ "Bagchi", "Biman", "" ] ]
While a monomer of the ubiquitous hormone insulin is the biologically active form in the human body, its hexameric assembly acts as an efficient storage unit. However, the role of water molecules in the structure, stability and dynamics of the insulin hexamer is poorly understood. Here we combine experimental data with molecular dynamics simulations to investigate the shape, structure and stability of an insulin hexamer, focusing on the role of water molecules. Both X-ray analysis and computer simulations show that the core of the hexamer cavity is barrel-shaped, holding, on average, sixteen water molecules. These encapsulated and constrained molecules impart structural stability to the hexamer. Apart from the electrostatic interactions with Zn2+ ions, an intricate hydrogen bond network amongst cavity water and neighboring protein residues stabilizes the hexameric association. These water molecules solvate six glutamate residues inside the cavity, decreasing electrostatic repulsions amongst the negatively charged carboxylate groups. They also prevent association between glutamate residues and Zn2+ ions and maintain the integrity of the cavity. Simulations reveal that removal of these waters results in a collapse of the cavity. Subsequent analyses also show that the hydrogen bond network among these water molecules and the protein residues that face the inner side of the cavity is more rigid, with a slower relaxation, as compared to that of the bulk solvent. Dynamics of cavity water reveal certain slow water molecules which form the backbone of the stable hydrogen bond network. The analysis presented here suggests a dominant role of structurally conserved water molecules in maintaining the integrity of the hexameric assembly and potentially modulating the dissociation of this assembly into the functional monomeric form.
1512.04568
Antara Sengupta Mrs.
Antara Sengupta, Sk. Sarif Hassan, Pabitra Pal Choudhury
Understanding Functional Protein-Protein Interactions Of ABCB11 And ADA In Human And Mouse
6 pages
null
null
null
q-bio.OT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Proteins are macromolecules that rarely act alone; to carry out their functions, they must interact with other proteins. Numerous factors can regulate the interactions between proteins [4]. In this study we aim to understand the protein-protein interactions (PPIs) of two proteins, ABCB11 and ADA, from a quantitative point of view. A further major aim is to study the factors that regulate these PPIs and thus to distinguish them, with proper quantification, across the two species Homo sapiens and Mus musculus, in order to understand how one protein interacts with different sets of proteins in different species.
[ { "created": "Wed, 2 Dec 2015 09:46:23 GMT", "version": "v1" } ]
2015-12-16
[ [ "Sengupta", "Antara", "" ], [ "Hassan", "Sk. Sarif", "" ], [ "Choudhury", "Pabitra Pal", "" ] ]
Proteins are macromolecules that rarely act alone; to carry out their functions, they must interact with other proteins. Numerous factors can regulate the interactions between proteins [4]. In this study we aim to understand the protein-protein interactions (PPIs) of two proteins, ABCB11 and ADA, from a quantitative point of view. A further major aim is to study the factors that regulate these PPIs and thus to distinguish them, with proper quantification, across the two species Homo sapiens and Mus musculus, in order to understand how one protein interacts with different sets of proteins in different species.
2012.12267
Katerina Kaouri Dr
Zechariah Lau, Ian M. Griffiths, Aaron English and Katerina Kaouri
Predicting the Spatially Varying Infection Risk in Indoor Spaces Using an Efficient Airborne Transmission Model
25 pages, 14 figures. Version v3 substantially revises v1. (v3 same as v2 but with a more compact layout, hence fewer pages.) Compared to v1 the model has been extended to determine the spatiotemporal infection risk and recast as an extension of the Wells-Riley model. Another author has been added. The underlying model and domain linked to the semi-analytic solution are now explained in detail
null
null
null
q-bio.QM physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop a spatially dependent generalisation of the Wells-Riley model and its extensions applied to COVID-19, which determines the infection risk due to airborne transmission of viruses. We assume that the concentration of infectious particles is governed by an advection-diffusion-reaction equation, with the particles advected by airflow, diffused due to turbulence, emitted by infected people and removed due to the room ventilation, inactivation of the virus and gravitational settling. We consider one asymptomatic or presymptomatic infectious person who breathes or talks, with or without a mask, and model a quasi-3D setup that incorporates a recirculating air-conditioning flow. A semi-analytic solution is available and this enables fast simulations. We quantify the effect of ventilation and particle emission rate on the particle concentration, infection risk and the `time to probable infection' (TTPI). Good agreement with CFD models is achieved. Furthermore, we derive power laws that quantify the effect of ventilation, emission rate and infectiousness of the virus on the TTPI. The model can be easily updated to take into account modified parameter values. This work paves the way for establishing `safe occupancy times' at any location and has direct applicability in mitigating the spread of the COVID-19 pandemic.
[ { "created": "Tue, 22 Dec 2020 18:21:17 GMT", "version": "v1" }, { "created": "Fri, 14 May 2021 16:58:13 GMT", "version": "v2" }, { "created": "Tue, 18 May 2021 17:22:55 GMT", "version": "v3" } ]
2021-05-19
[ [ "Lau", "Zechariah", "" ], [ "Griffiths", "Ian M.", "" ], [ "English", "Aaron", "" ], [ "Kaouri", "Katerina", "" ] ]
We develop a spatially dependent generalisation of the Wells-Riley model and its extensions applied to COVID-19, which determines the infection risk due to airborne transmission of viruses. We assume that the concentration of infectious particles is governed by an advection-diffusion-reaction equation, with the particles advected by airflow, diffused due to turbulence, emitted by infected people and removed due to the room ventilation, inactivation of the virus and gravitational settling. We consider one asymptomatic or presymptomatic infectious person who breathes or talks, with or without a mask, and model a quasi-3D setup that incorporates a recirculating air-conditioning flow. A semi-analytic solution is available and this enables fast simulations. We quantify the effect of ventilation and particle emission rate on the particle concentration, infection risk and the `time to probable infection' (TTPI). Good agreement with CFD models is achieved. Furthermore, we derive power laws that quantify the effect of ventilation, emission rate and infectiousness of the virus on the TTPI. The model can be easily updated to take into account modified parameter values. This work paves the way for establishing `safe occupancy times' at any location and has direct applicability in mitigating the spread of the COVID-19 pandemic.
1505.01215
Tsvi Tlusty
Yonatan Savir, Jacob Kagan and Tsvi Tlusty
Binding of transcription factors adapts to resolve information-energy trade-off
null
null
null
null
q-bio.GN physics.bio-ph q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We examine the binding of transcription factors to DNA in terms of an information transfer problem. The input of the noisy channel is the biophysical signal of a factor bound to a DNA site, and the output is a distribution of probable DNA sequences at this site. This task involves an inherent tradeoff between the information gain and the energetics of the binding interaction - high binding energies provide higher information gain but hinder the dynamics of the system as factors are bound too tightly. We show that adaptation of the binding interaction towards increasing information transfer under a general energy constraint implies that the information gain per specific binding energy at each base-pair is maximized. We analyze hundreds of prokaryote and eukaryote transcription factors from various organisms to evaluate the discrimination energies. We find that, in accordance with our theoretical argument, binding energies nearly maximize the information gain per energy. This work suggests the adaptation of information gain as a generic design principle of molecular recognition systems.
[ { "created": "Tue, 5 May 2015 23:03:18 GMT", "version": "v1" }, { "created": "Wed, 5 Aug 2015 21:31:45 GMT", "version": "v2" } ]
2015-08-07
[ [ "Savir", "Yonatan", "" ], [ "Kagan", "Jacob", "" ], [ "Tlusty", "Tsvi", "" ] ]
We examine the binding of transcription factors to DNA in terms of an information transfer problem. The input of the noisy channel is the biophysical signal of a factor bound to a DNA site, and the output is a distribution of probable DNA sequences at this site. This task involves an inherent tradeoff between the information gain and the energetics of the binding interaction - high binding energies provide higher information gain but hinder the dynamics of the system as factors are bound too tightly. We show that adaptation of the binding interaction towards increasing information transfer under a general energy constraint implies that the information gain per specific binding energy at each base-pair is maximized. We analyze hundreds of prokaryote and eukaryote transcription factors from various organisms to evaluate the discrimination energies. We find that, in accordance with our theoretical argument, binding energies nearly maximize the information gain per energy. This work suggests the adaptation of information gain as a generic design principle of molecular recognition systems.
2406.05859
Behdad Khodabandehloo
Behdad Khodabandehloo, Payam Jannatdoust, Babak Nadjar Araabi
From First-order to Higher-order Interactions: Enhanced Representation of Homotopic Functional Connectivity through Control of Intervening Variables
null
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
The brain's complex functionality emerges from network interactions that go beyond dyadic connections, with higher-order interactions significantly contributing to this complexity. One method of capturing higher-order interactions is through traversing the brain network using random walks. The efficacy of these random walks depends on the defined mutual interactions between two brain entities. More precise capture of higher-order interactions enables a better reflection of the brain's intrinsic neurophysiological characteristics. One well-established neurophysiological concept is Homotopic Functional Connectivity (HoFC), which illustrates the synchronized spontaneous activity between corresponding regions in the brain's left and right hemispheres. We employ node2vec, a random walk node embedding approach, alongside resting-state fMRI from the Human Connectome Project (HCP) to obtain higher-order feature vectors. We assess the efficacy of different functional connectivity parameterizations using HoFC. The results indicate that the quality of capturing higher-order interactions largely depends on the statistical dependency measure between brain regions. Higher-order interactions defined by partial correlation better reflect HoFC compared to other statistical associations. In the case of first-order interactions, tangent space embedding more effectively demonstrates HoFC. The findings validate HoFC and underscore the importance of the functional connectivity construction method in capturing intrinsic characteristics of the human brain.
[ { "created": "Sun, 9 Jun 2024 17:04:27 GMT", "version": "v1" } ]
2024-06-11
[ [ "Khodabandehloo", "Behdad", "" ], [ "Jannatdoust", "Payam", "" ], [ "Araabi", "Babak Nadjar", "" ] ]
The brain's complex functionality emerges from network interactions that go beyond dyadic connections, with higher-order interactions significantly contributing to this complexity. One method of capturing higher-order interactions is through traversing the brain network using random walks. The efficacy of these random walks depends on the defined mutual interactions between two brain entities. More precise capture of higher-order interactions enables a better reflection of the brain's intrinsic neurophysiological characteristics. One well-established neurophysiological concept is Homotopic Functional Connectivity (HoFC), which illustrates the synchronized spontaneous activity between corresponding regions in the brain's left and right hemispheres. We employ node2vec, a random walk node embedding approach, alongside resting-state fMRI from the Human Connectome Project (HCP) to obtain higher-order feature vectors. We assess the efficacy of different functional connectivity parameterizations using HoFC. The results indicate that the quality of capturing higher-order interactions largely depends on the statistical dependency measure between brain regions. Higher-order interactions defined by partial correlation better reflect HoFC compared to other statistical associations. In the case of first-order interactions, tangent space embedding more effectively demonstrates HoFC. The findings validate HoFC and underscore the importance of the functional connectivity construction method in capturing intrinsic characteristics of the human brain.
2007.12784
Ivan Sudakov
Ivan Sudakov, Sergey A. Vakulenko, John T. Bruun
Stochastic physics of species extinctions in a large population
null
null
null
null
q-bio.PE physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Species extinction is a core process that affects the diversity of life on Earth. Competition between species in a population is considered by ecological niche-based theories as a key factor leading to different severity of species extinctions. There are population dynamics models that describe a simple and easily understandable mechanism for resource competition. However, these models cannot efficiently characterize and quantify new emergent extinctions in a large population appearing due to environmental forcing. To address this issue we develop a stochastic physics-inspired approach to analyze how environmental forcing influences the severity of species extinctions in such models. This approach is based on the large deviations theory of stochastic processes (the Freidlin-Wentzell theory). We show that there are three possible fundamentally different scenarios of extinctions, which we call catastrophic extinctions, asymmetric ones, and extinctions with exponentially small probabilities. The realization of those scenarios depends on environmental noise properties and the boundaries of niches, which define the domain where species survive. Furthermore, we describe a hysteresis effect in species extinction, showing that fluctuations can lead to dramatic consequences even if an averaged resource supply is sufficient to support population survival. Our stochastic physics-inspired approach generalizes niche theory by accounting for environmental forcing and will be useful for finding, from available data, which environmental perturbations may induce extinctions.
[ { "created": "Fri, 24 Jul 2020 22:00:22 GMT", "version": "v1" } ]
2020-07-28
[ [ "Sudakov", "Ivan", "" ], [ "Vakulenko", "Sergey A.", "" ], [ "Bruun", "John T.", "" ] ]
Species extinction is a core process that affects the diversity of life on Earth. Competition between species in a population is considered by ecological niche-based theories as a key factor leading to different severity of species extinctions. There are population dynamics models that describe a simple and easily understandable mechanism for resource competition. However, these models cannot efficiently characterize and quantify new emergent extinctions in a large population appearing due to environmental forcing. To address this issue we develop a stochastic physics-inspired approach to analyze how environmental forcing influences the severity of species extinctions in such models. This approach is based on the large deviations theory of stochastic processes (the Freidlin-Wentzell theory). We show that there are three possible fundamentally different scenarios of extinctions, which we call catastrophic extinctions, asymmetric ones, and extinctions with exponentially small probabilities. The realization of those scenarios depends on environmental noise properties and the boundaries of niches, which define the domain where species survive. Furthermore, we describe a hysteresis effect in species extinction, showing that fluctuations can lead to dramatic consequences even if an averaged resource supply is sufficient to support population survival. Our stochastic physics-inspired approach generalizes niche theory by accounting for environmental forcing and will be useful for finding, from available data, which environmental perturbations may induce extinctions.
1611.05707
Shuji Ishihara
Shuji Ishihara, Philippe Marcq, Kaoru Sugimura
From cells to tissue: A continuum model of epithelial mechanics
33 pages, 8 figures
Phys. Rev. E 96, 022418 (2017)
10.1103/PhysRevE.96.022418
null
q-bio.TO cond-mat.soft q-bio.CB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A continuum model of epithelial tissue mechanics was formulated using cellular-level mechanical ingredients and cell morphogenetic processes, including cellular shape changes and cellular rearrangements. This model can include finite deformation, and incorporates stress and deformation tensors, which can be compared with experimental data. Using this model, we elucidated dynamical behavior underlying passive relaxation, active contraction-elongation, and tissue shear flow. This study provides an integrated scheme for the understanding of the mechanisms that are involved in orchestrating the morphogenetic processes in individual cells, in order to achieve epithelial tissue morphogenesis.
[ { "created": "Thu, 17 Nov 2016 14:40:20 GMT", "version": "v1" }, { "created": "Sun, 23 Apr 2017 01:05:43 GMT", "version": "v2" } ]
2017-09-06
[ [ "Ishihara", "Shuji", "" ], [ "Marcq", "Philippe", "" ], [ "Sugimura", "Kaoru", "" ] ]
A continuum model of epithelial tissue mechanics was formulated using cellular-level mechanical ingredients and cell morphogenetic processes, including cellular shape changes and cellular rearrangements. This model can include finite deformation, and incorporates stress and deformation tensors, which can be compared with experimental data. Using this model, we elucidated dynamical behavior underlying passive relaxation, active contraction-elongation, and tissue shear flow. This study provides an integrated scheme for the understanding of the mechanisms that are involved in orchestrating the morphogenetic processes in individual cells, in order to achieve epithelial tissue morphogenesis.
q-bio/0612011
Reiko Tanaka
Hidenori Kimura, Hiroyuki Okano, and Reiko J. Tanaka
Stochastic approach to molecular interactions and computational theory of metabolic and genetic regulations
20 pages, 13 figures
null
10.1016/j.jtbi.2007.06.017
null
q-bio.MN q-bio.QM
null
Binding and unbinding of ligands to specific sites of a macromolecule are among the most elementary molecular interactions inside the cell that embody the computational processes of biological regulation. The interaction between transcription factors and the operators of genes, and that between ligands and the binding sites of allosteric enzymes, are typical examples of such molecular interactions. In order to obtain a general mathematical framework for biological regulation, we formulate these interactions as finite Markov processes and establish a computational theory of the regulatory activities of macromolecules based mainly on graphical analysis of their state transition diagrams. The contribution is summarized as follows: (1) A stochastic interpretation of the Michaelis-Menten equation is given. (2) The notion of \textit{probability flow} is introduced in relation to detailed balance. (3) A stochastic analogue of the \textit{Wegscheider condition} is given in relation to loops in the state transition diagram. (4) A simple graphical method of computing the regulatory activity in terms of ligand concentrations is obtained for Wegscheider cases.
[ { "created": "Thu, 7 Dec 2006 06:15:29 GMT", "version": "v1" } ]
2011-11-10
[ [ "Kimura", "Hidenori", "" ], [ "Okano", "Hiroyuki", "" ], [ "Tanaka", "Reiko J.", "" ] ]
Binding and unbinding of ligands to specific sites of a macromolecule are among the most elementary molecular interactions inside the cell that embody the computational processes of biological regulation. The interaction between transcription factors and the operators of genes, and that between ligands and the binding sites of allosteric enzymes, are typical examples of such molecular interactions. In order to obtain a general mathematical framework for biological regulation, we formulate these interactions as finite Markov processes and establish a computational theory of the regulatory activities of macromolecules based mainly on graphical analysis of their state transition diagrams. The contribution is summarized as follows: (1) A stochastic interpretation of the Michaelis-Menten equation is given. (2) The notion of \textit{probability flow} is introduced in relation to detailed balance. (3) A stochastic analogue of the \textit{Wegscheider condition} is given in relation to loops in the state transition diagram. (4) A simple graphical method of computing the regulatory activity in terms of ligand concentrations is obtained for Wegscheider cases.
1307.6594
Lawrence Uricchio
Lawrence H. Uricchio, Ryan D. Hernandez
Robust forward simulations of recurrent hitchhiking
null
Genetics May 2014 197:221-236
10.1534/genetics.113.156935
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolutionary forces shape patterns of genetic diversity within populations and contribute to phenotypic variation. In particular, recurrent positive selection has attracted significant interest in both theoretical and empirical studies. However, most existing theoretical models of recurrent positive selection cannot easily incorporate realistic confounding effects such as interference between selected sites, arbitrary selection schemes, and complicated demographic processes. It is possible to quantify the effects of arbitrarily complex evolutionary models by performing forward population genetic simulations, but forward simulations can be computationally prohibitive for large population sizes ($> 10^5$). A common approach for overcoming these computational limitations is rescaling of the most computationally expensive parameters, especially population size. Here, we show that ad hoc approaches to parameter rescaling under the recurrent hitchhiking model do not always provide sufficiently accurate dynamics, potentially skewing patterns of diversity in simulated DNA sequences. We derive an extension of the recurrent hitchhiking model that is appropriate for strong selection in small population sizes, and use it to develop a method for parameter rescaling that provides the best possible computational performance for a given error tolerance. We perform a detailed theoretical analysis of the robustness of rescaling across the parameter space. Finally, we apply our rescaling algorithms to parameters that were previously inferred for Drosophila, and discuss practical considerations such as interference between selected sites.
[ { "created": "Wed, 24 Jul 2013 21:40:45 GMT", "version": "v1" }, { "created": "Wed, 7 May 2014 21:41:11 GMT", "version": "v2" } ]
2014-05-09
[ [ "Uricchio", "Lawrence H.", "" ], [ "Hernandez", "Ryan D.", "" ] ]
Evolutionary forces shape patterns of genetic diversity within populations and contribute to phenotypic variation. In particular, recurrent positive selection has attracted significant interest in both theoretical and empirical studies. However, most existing theoretical models of recurrent positive selection cannot easily incorporate realistic confounding effects such as interference between selected sites, arbitrary selection schemes, and complicated demographic processes. It is possible to quantify the effects of arbitrarily complex evolutionary models by performing forward population genetic simulations, but forward simulations can be computationally prohibitive for large population sizes ($> 10^5$). A common approach for overcoming these computational limitations is rescaling of the most computationally expensive parameters, especially population size. Here, we show that ad hoc approaches to parameter rescaling under the recurrent hitchhiking model do not always provide sufficiently accurate dynamics, potentially skewing patterns of diversity in simulated DNA sequences. We derive an extension of the recurrent hitchhiking model that is appropriate for strong selection in small population sizes, and use it to develop a method for parameter rescaling that provides the best possible computational performance for a given error tolerance. We perform a detailed theoretical analysis of the robustness of rescaling across the parameter space. Finally, we apply our rescaling algorithms to parameters that were previously inferred for Drosophila, and discuss practical considerations such as interference between selected sites.
1009.2958
Daniel Zuckerman
Daniel M. Zuckerman
Equilibrium Sampling in Biomolecular Simulation
submitted to Annual Review of Biophysics
null
null
null
q-bio.BM physics.bio-ph physics.chem-ph physics.comp-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Equilibrium sampling of biomolecules remains an unmet challenge after more than 30 years of atomistic simulation. Efforts to enhance sampling capability, which are reviewed here, range from the development of new algorithms to parallelization to novel uses of hardware. Special focus is placed on classifying algorithms -- most of which are underpinned by a few key ideas -- in order to understand their fundamental strengths and limitations. Although algorithms have proliferated, progress resulting from novel hardware use appears to be more clear-cut than from algorithms alone, partly due to the lack of widely used sampling measures.
[ { "created": "Wed, 15 Sep 2010 15:46:06 GMT", "version": "v1" } ]
2010-09-16
[ [ "Zuckerman", "Daniel M.", "" ] ]
Equilibrium sampling of biomolecules remains an unmet challenge after more than 30 years of atomistic simulation. Efforts to enhance sampling capability, which are reviewed here, range from the development of new algorithms to parallelization to novel uses of hardware. Special focus is placed on classifying algorithms -- most of which are underpinned by a few key ideas -- in order to understand their fundamental strengths and limitations. Although algorithms have proliferated, progress resulting from novel hardware use appears to be more clear-cut than from algorithms alone, partly due to the lack of widely used sampling measures.
2008.03380
Lana Garmire
Hui Li (1), Qianhui Huang (2), Yu Liu (1), Lana X Garmire (1 and $) ((1) Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA (2) Department of Biostatistics, School of Public Health, University of Michigan, Ann Arbor, MI, USA ($) Corresponding author: lgarmire@med.umich.edu)
Single Cell Transcriptome Research in Human Placenta
34 pages, 1 figure, 2 tables
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The human placenta is a complex and heterogeneous organ interfacing between the mother and the fetus that supports fetal development. Alterations to placental structural components are associated with various pregnancy complications. To reveal the heterogeneity among placental cell types in normal and diseased placentas, as well as to elucidate molecular interactions within a population of placental cells, a new genomics technology called single cell RNA-Seq (or scRNA-seq) has been employed in the last couple of years. Here we review the principles of scRNA-seq technology, and summarize the recent human placenta studies at the scRNA-seq level across gestational ages as well as in pregnancy complications such as preterm birth and preeclampsia. We list the computational analysis platforms and resources available for public use. Lastly, we discuss future areas of interest for placenta single cell studies, as well as the data analytics needed to accomplish them.
[ { "created": "Fri, 7 Aug 2020 21:14:09 GMT", "version": "v1" } ]
2020-08-11
[ [ "Li", "Hui", "", "1 and $" ], [ "Huang", "Qianhui", "", "1 and $" ], [ "Liu", "Yu", "", "1 and $" ], [ "Garmire", "Lana X", "", "1 and $" ] ]
The human placenta is a complex and heterogeneous organ interfacing between the mother and the fetus that supports fetal development. Alterations to placental structural components are associated with various pregnancy complications. To reveal the heterogeneity among placental cell types in normal and diseased placentas, as well as to elucidate molecular interactions within a population of placental cells, a new genomics technology called single cell RNA-Seq (or scRNA-seq) has been employed in the last couple of years. Here we review the principles of scRNA-seq technology, and summarize the recent human placenta studies at the scRNA-seq level across gestational ages as well as in pregnancy complications such as preterm birth and preeclampsia. We list the computational analysis platforms and resources available for public use. Lastly, we discuss future areas of interest for placenta single cell studies, as well as the data analytics needed to accomplish them.
2002.05200
Giulia Guidi
Alberto Zeni, Giulia Guidi, Marquita Ellis, Nan Ding, Marco D. Santambrogio, Steven Hofmeyr, Ayd{\i}n Bulu\c{c}, Leonid Oliker, Katherine Yelick
LOGAN: High-Performance GPU-Based X-Drop Long-Read Alignment
null
34th IEEE International Parallel and Distributed Processing Symposium (IPDPS), 2020
null
null
q-bio.GN cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pairwise sequence alignment is one of the most computationally intensive kernels in genomic data analysis, accounting for more than 90% of the runtime for key bioinformatics applications. This method is particularly expensive for third-generation sequences due to the high computational cost of analyzing sequences of length between 1Kb and 1Mb. Given the quadratic overhead of exact pairwise algorithms for long alignments, the community primarily relies on approximate algorithms that search only for high-quality alignments and stop early when one is not found. In this work, we present the first GPU optimization of the popular X-drop alignment algorithm, which we named LOGAN. Results show that our high-performance multi-GPU implementation achieves up to 181.6 GCUPS and speed-ups up to 6.6x and 30.7x using 1 and 6 NVIDIA Tesla V100, respectively, over the state-of-the-art software running on two IBM Power9 processors using 168 CPU threads, with equivalent accuracy. We also demonstrate a 2.3x LOGAN speed-up versus ksw2, a state-of-the-art vectorized algorithm for sequence alignment implemented in minimap2, a long-read mapping software. To highlight the impact of our work on a real-world application, we couple LOGAN with a many-to-many long-read alignment software called BELLA, and demonstrate that our implementation improves the overall BELLA runtime by up to 10.6x. Finally, we adapt the Roofline model for LOGAN and demonstrate that our implementation is near-optimal on the NVIDIA Tesla V100s.
[ { "created": "Wed, 12 Feb 2020 19:53:12 GMT", "version": "v1" } ]
2020-02-14
[ [ "Zeni", "Alberto", "" ], [ "Guidi", "Giulia", "" ], [ "Ellis", "Marquita", "" ], [ "Ding", "Nan", "" ], [ "Santambrogio", "Marco D.", "" ], [ "Hofmeyr", "Steven", "" ], [ "Buluç", "Aydın", "" ], [ "Oliker", "Leonid", "" ], [ "Yelick", "Katherine", "" ] ]
Pairwise sequence alignment is one of the most computationally intensive kernels in genomic data analysis, accounting for more than 90% of the runtime for key bioinformatics applications. This method is particularly expensive for third-generation sequences due to the high computational cost of analyzing sequences of length between 1Kb and 1Mb. Given the quadratic overhead of exact pairwise algorithms for long alignments, the community primarily relies on approximate algorithms that search only for high-quality alignments and stop early when one is not found. In this work, we present the first GPU optimization of the popular X-drop alignment algorithm, which we named LOGAN. Results show that our high-performance multi-GPU implementation achieves up to 181.6 GCUPS and speed-ups up to 6.6x and 30.7x using 1 and 6 NVIDIA Tesla V100, respectively, over the state-of-the-art software running on two IBM Power9 processors using 168 CPU threads, with equivalent accuracy. We also demonstrate a 2.3x LOGAN speed-up versus ksw2, a state-of-the-art vectorized algorithm for sequence alignment implemented in minimap2, a long-read mapping software. To highlight the impact of our work on a real-world application, we couple LOGAN with a many-to-many long-read alignment software called BELLA, and demonstrate that our implementation improves the overall BELLA runtime by up to 10.6x. Finally, we adapt the Roofline model for LOGAN and demonstrate that our implementation is near-optimal on the NVIDIA Tesla V100s.
1801.05017
Chendi Wang
Chendi Wang, Rafeef Abugharbieh
Hypergraph based Subnetwork Extraction using Fusion of Task and Rest Functional Connectivity
19 pages, 4 figures
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Functional subnetwork extraction is commonly used to explore the brain's modular structure. However, reliable subnetwork extraction from functional magnetic resonance imaging (fMRI) data remains challenging due to the pronounced noise in neuroimaging data. In this paper, we propose a high-order-relation-informed approach based on hypergraphs that combines information from multi-task data and resting state data to improve subnetwork extraction. Our assumption is that task data can benefit the subnetwork extraction process, since the nodes repeatedly activated across diverse tasks might be the canonical network components that comprise pre-existing repertoires of resting state subnetworks. Our proposed approach, built on a strength-information-embedded hypergraph, (1) facilitates multisource integration for subnetwork extraction, (2) utilizes information on relationships and changes between the nodes across different tasks, and (3) enables the study of higher-order relations among brain network nodes. On real data, we demonstrate that fusing task activation, task-induced connectivity, and resting state functional connectivity based on hypergraphs improves subnetwork extraction compared to employing a single source from either rest or task data, in terms of subnetwork modularity and inter-subject reproducibility, along with more biologically meaningful subnetwork assignments.
[ { "created": "Mon, 15 Jan 2018 21:17:37 GMT", "version": "v1" } ]
2018-01-17
[ [ "Wang", "Chendi", "" ], [ "Abugharbieh", "Rafeef", "" ] ]
Functional subnetwork extraction is commonly used to explore the brain's modular structure. However, reliable subnetwork extraction from functional magnetic resonance imaging (fMRI) data remains challenging due to the pronounced noise in neuroimaging data. In this paper, we propose a high-order-relation-informed approach based on hypergraphs that combines information from multi-task data and resting state data to improve subnetwork extraction. Our assumption is that task data can benefit the subnetwork extraction process, since the nodes repeatedly activated across diverse tasks might be the canonical network components that comprise pre-existing repertoires of resting state subnetworks. Our proposed approach, built on a strength-information-embedded hypergraph, (1) facilitates multisource integration for subnetwork extraction, (2) utilizes information on relationships and changes between the nodes across different tasks, and (3) enables the study of higher-order relations among brain network nodes. On real data, we demonstrate that fusing task activation, task-induced connectivity, and resting state functional connectivity based on hypergraphs improves subnetwork extraction compared to employing a single source from either rest or task data, in terms of subnetwork modularity and inter-subject reproducibility, along with more biologically meaningful subnetwork assignments.
1610.07081
Benjamin Allen
Christine Sample and Benjamin Allen
The limits of weak selection and large population size in evolutionary game theory
36 pages, 3 figures
Journal of Mathematical Biology (2017)
10.1007/s00285-017-1119-4
null
q-bio.PE math.PR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Evolutionary game theory is a mathematical approach to studying how social behaviors evolve. In many recent works, evolutionary competition between strategies is modeled as a stochastic process in a finite population. In this context, two limits are both mathematically convenient and biologically relevant: weak selection and large population size. These limits can be combined in different ways, leading to potentially different results. We consider two orderings: the wN limit, in which weak selection is applied before the large population limit, and the Nw limit, in which the order is reversed. Formal mathematical definitions of the Nw and wN limits are provided. Applying these definitions to the Moran process of evolutionary game theory, we obtain asymptotic expressions for fixation probability and conditions for success in these limits. We find that the asymptotic expressions for fixation probability, and the conditions for a strategy to be favored over a neutral mutation, are different in the Nw and wN limits. However, the ordering of limits does not affect the conditions for one strategy to be favored over another.
[ { "created": "Sat, 22 Oct 2016 17:44:33 GMT", "version": "v1" } ]
2017-08-16
[ [ "Sample", "Christine", "" ], [ "Allen", "Benjamin", "" ] ]
Evolutionary game theory is a mathematical approach to studying how social behaviors evolve. In many recent works, evolutionary competition between strategies is modeled as a stochastic process in a finite population. In this context, two limits are both mathematically convenient and biologically relevant: weak selection and large population size. These limits can be combined in different ways, leading to potentially different results. We consider two orderings: the wN limit, in which weak selection is applied before the large population limit, and the Nw limit, in which the order is reversed. Formal mathematical definitions of the Nw and wN limits are provided. Applying these definitions to the Moran process of evolutionary game theory, we obtain asymptotic expressions for fixation probability and conditions for success in these limits. We find that the asymptotic expressions for fixation probability, and the conditions for a strategy to be favored over a neutral mutation, are different in the Nw and wN limits. However, the ordering of limits does not affect the conditions for one strategy to be favored over another.
1211.0309
Dante Chialvo
Enzo Tagliazucchi and Dante R. Chialvo
Brain complexity born out of criticality
In Proceedings of the 12th Granada Seminar "Physics, Computation, and the Mind - Advances and Challenges at Interfaces-". (J. Marro, P. L. Garrido & J. J. Torres, Eds.) American Institute of Physics (2012, in press)
null
10.1063/1.4776495
null
q-bio.NC cond-mat.dis-nn physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this essay we elaborate on recent evidence demonstrating the presence of a second order phase transition in human brain dynamics and discuss its consequences for theoretical approaches to brain function. We review early evidence of criticality in brain dynamics at different spatial and temporal scales, and we stress how it was necessary to unify concepts and analysis techniques across scales to introduce the adequate order and control parameters which define the transition. A discussion on the relation between structural vs. dynamical complexity exposes future steps to understand the dynamics of the connectome (structure) from which emerges the cognitome (function).
[ { "created": "Thu, 1 Nov 2012 21:16:47 GMT", "version": "v1" } ]
2015-06-12
[ [ "Tagliazucchi", "Enzo", "" ], [ "Chialvo", "Dante R.", "" ] ]
In this essay we elaborate on recent evidence demonstrating the presence of a second order phase transition in human brain dynamics and discuss its consequences for theoretical approaches to brain function. We review early evidence of criticality in brain dynamics at different spatial and temporal scales, and we stress how it was necessary to unify concepts and analysis techniques across scales to introduce the adequate order and control parameters which define the transition. A discussion on the relation between structural vs. dynamical complexity exposes future steps to understand the dynamics of the connectome (structure) from which emerges the cognitome (function).
1507.02383
Ralf Metzler
Maximilian Bauer, Emil S. Rasmussen, Michael A. Lomholt, and Ralf Metzler
Real sequence effects on the search dynamics of transcription factors on DNA
26 pages, 7 figures
Scientific Reports 5, 10072 (2015)
null
null
q-bio.QM cond-mat.stat-mech physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent experiments show that transcription factors (TFs) indeed use the facilitated diffusion mechanism to locate their target sequences on DNA in living bacteria cells: TFs alternate between sliding motion along DNA and relocation events through the cytoplasm. From simulations and theoretical analysis we study the TF-sliding motion for a large section of the DNA-sequence of a common E. coli strain, based on the two-state TF-model with a fast-sliding search state and a recognition state enabling target detection. For the probability to detect the target before dissociating from DNA, the TF-search times self-consistently depend heavily on whether or not an auxiliary operator (an accessible sequence similar to the main operator) is present in the genome section. Importantly, within our model the extent to which the interconversion rates between search and recognition states depend on the underlying nucleotide sequence is varied. A moderate dependence maximises the capability to distinguish between the main operator and similar sequences. Moreover, these auxiliary operators serve as starting points for DNA looping with the main operator, yielding a spectrum of target detection times spanning several orders of magnitude. Auxiliary operators are shown to act as funnels facilitating target detection by TFs.
[ { "created": "Thu, 9 Jul 2015 05:52:30 GMT", "version": "v1" } ]
2015-07-10
[ [ "Bauer", "Maximilian", "" ], [ "Rasmussen", "Emil S.", "" ], [ "Lomholt", "Michael A.", "" ], [ "Metzler", "Ralf", "" ] ]
Recent experiments show that transcription factors (TFs) indeed use the facilitated diffusion mechanism to locate their target sequences on DNA in living bacteria cells: TFs alternate between sliding motion along DNA and relocation events through the cytoplasm. From simulations and theoretical analysis we study the TF-sliding motion for a large section of the DNA-sequence of a common E. coli strain, based on the two-state TF-model with a fast-sliding search state and a recognition state enabling target detection. For the probability to detect the target before dissociating from DNA, the TF-search times self-consistently depend heavily on whether or not an auxiliary operator (an accessible sequence similar to the main operator) is present in the genome section. Importantly, within our model the extent to which the interconversion rates between search and recognition states depend on the underlying nucleotide sequence is varied. A moderate dependence maximises the capability to distinguish between the main operator and similar sequences. Moreover, these auxiliary operators serve as starting points for DNA looping with the main operator, yielding a spectrum of target detection times spanning several orders of magnitude. Auxiliary operators are shown to act as funnels facilitating target detection by TFs.
1212.5439
Eric Werner
Eric Werner
A Developmental Network Theory of Gynandromorphs, Sexual Dimorphism and Species Formation
22 pages. Key Words: Gynandromorphs, developmental control networks, cenome, CENEs, epigenomics, origin of species, evolution of species, sexual dimorphism, unisexual hemimorphism, synsexhemimorphism, hemihyperplasia, hemihypertrophy, bilateral symmetry, multicellular development, developmental systems biology, embryonic development, computational modeling, simulation
null
null
null
q-bio.MN q-bio.GN q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Gynandromorphs are creatures where at least two different body sections are a different sex. Bilateral gynandromorphs are half male and half female. Here we develop a theory of gynandromorph ontogeny based on developmental control networks. The theory explains the embryogenesis of all known variations of gynandromorphs found in multicellular organisms. The theory also predicts a large variety of more subtle gynandromorphic morphologies yet to be discovered. The network theory of gynandromorph development has direct relevance to understanding sexual dimorphism (differences in morphology between male and female organisms of the same species) and medical pathologies such as hemihyperplasia (asymmetric development of normally symmetric body parts in a unisexual individual). The network theory of gynandromorphs brings up fundamental open questions about developmental control in ontogeny. This in turn suggests a new theory of the origin and evolution of species that is based on cooperative interactions and conflicts between developmental control networks in the haploid genomes and epigenomes of potential sexual partners for reproduction. This network-based theory of the origin of species is a paradigmatic shift in our understanding of evolutionary processes that goes beyond gene-centered theories.
[ { "created": "Fri, 21 Dec 2012 13:57:43 GMT", "version": "v1" } ]
2012-12-24
[ [ "Werner", "Eric", "" ] ]
Gynandromorphs are creatures where at least two different body sections are a different sex. Bilateral gynandromorphs are half male and half female. Here we develop a theory of gynandromorph ontogeny based on developmental control networks. The theory explains the embryogenesis of all known variations of gynandromorphs found in multicellular organisms. The theory also predicts a large variety of more subtle gynandromorphic morphologies yet to be discovered. The network theory of gynandromorph development has direct relevance to understanding sexual dimorphism (differences in morphology between male and female organisms of the same species) and medical pathologies such as hemihyperplasia (asymmetric development of normally symmetric body parts in a unisexual individual). The network theory of gynandromorphs brings up fundamental open questions about developmental control in ontogeny. This in turn suggests a new theory of the origin and evolution of species that is based on cooperative interactions and conflicts between developmental control networks in the haploid genomes and epigenomes of potential sexual partners for reproduction. This network-based theory of the origin of species is a paradigmatic shift in our understanding of evolutionary processes that goes beyond gene-centered theories.
2401.09097
Konstantinos Zygalakis
Thomas Trigo Trindade, Konstantinos C. Zygalakis
A hybrid tau-leap for simulating chemical kinetics with applications to parameter estimation
25 pages, 8 figures
null
null
null
q-bio.MN cs.NA math.NA stat.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We consider the problem of efficiently simulating stochastic models of chemical kinetics. The Gillespie Stochastic Simulation algorithm (SSA) is often used to simulate these models; however, in many scenarios of interest, the computational cost quickly becomes prohibitive. This is further exacerbated in the Bayesian inference context when estimating parameters of chemical models, as the intractability of the likelihood requires multiple simulations of the underlying system. To deal with these issues of computational complexity, in this paper we propose a novel hybrid $\tau$-leap algorithm for simulating well-mixed chemical systems. In particular, the algorithm uses $\tau$-leap when appropriate (high population densities), and SSA when necessary (low population densities, when discrete effects become non-negligible). In the intermediate regime, a combination of the two methods, which leverages the properties of the underlying Poisson formulation, is employed. As illustrated through a number of numerical experiments, the hybrid $\tau$-leap offers significant computational savings when compared to SSA without, however, sacrificing the overall accuracy. This feature is particularly welcome in the Bayesian inference context, as it allows for parameter estimation of stochastic chemical kinetics at reduced computational cost.
[ { "created": "Wed, 17 Jan 2024 10:00:29 GMT", "version": "v1" }, { "created": "Thu, 18 Jan 2024 15:22:22 GMT", "version": "v2" }, { "created": "Tue, 9 Jul 2024 09:37:00 GMT", "version": "v3" } ]
2024-07-10
[ [ "Trindade", "Thomas Trigo", "" ], [ "Zygalakis", "Konstantinos C.", "" ] ]
We consider the problem of efficiently simulating stochastic models of chemical kinetics. The Gillespie Stochastic Simulation algorithm (SSA) is often used to simulate these models; however, in many scenarios of interest, the computational cost quickly becomes prohibitive. This is further exacerbated in the Bayesian inference context when estimating parameters of chemical models, as the intractability of the likelihood requires multiple simulations of the underlying system. To deal with these issues of computational complexity, in this paper we propose a novel hybrid $\tau$-leap algorithm for simulating well-mixed chemical systems. In particular, the algorithm uses $\tau$-leap when appropriate (high population densities), and SSA when necessary (low population densities, when discrete effects become non-negligible). In the intermediate regime, a combination of the two methods, which leverages the properties of the underlying Poisson formulation, is employed. As illustrated through a number of numerical experiments, the hybrid $\tau$-leap offers significant computational savings when compared to SSA without, however, sacrificing the overall accuracy. This feature is particularly welcome in the Bayesian inference context, as it allows for parameter estimation of stochastic chemical kinetics at reduced computational cost.
1810.10498
Elena Pastorelli
Cristiano Capone and Elena Pastorelli and Bruno Golosio and Pier Stanislao Paolucci
Sleep-like slow oscillations improve visual classification through synaptic homeostasis and memory association in a thalamo-cortical model
11 pages, 5 figures, v5 is the final version published on Scientific Reports journal
Sci Rep 9, 8990 (2019)
10.1038/s41598-019-45525-0
null
q-bio.NC cs.AI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The occurrence of sleep passed through the evolutionary sieve and is widespread in animal species. Sleep is known to be beneficial to cognitive and mnemonic tasks, while chronic sleep deprivation is detrimental. Despite the importance of the phenomenon, a complete understanding of its functions and underlying mechanisms is still lacking. In this paper, we show interesting effects of deep-sleep-like slow oscillation activity on a simplified thalamo-cortical model which is trained to encode, retrieve and classify images of handwritten digits. During slow oscillations, spike-timing-dependent plasticity (STDP) produces a differential homeostatic process. It is characterized by both a specific unsupervised enhancement of connections among groups of neurons associated with instances of the same class (digit) and a simultaneous down-regulation of stronger synapses created by the training. This hierarchical organization of post-sleep internal representations favours higher performance in retrieval and classification tasks. The mechanism is based on the interaction between top-down cortico-thalamic predictions and bottom-up thalamo-cortical projections during deep-sleep-like slow oscillations. Indeed, when learned patterns are replayed during sleep, cortico-thalamo-cortical connections favour the activation of other neurons coding for similar thalamic inputs, promoting their association. Such a mechanism hints at possible applications to artificial learning systems.
[ { "created": "Wed, 24 Oct 2018 17:06:00 GMT", "version": "v1" }, { "created": "Wed, 14 Nov 2018 15:38:12 GMT", "version": "v2" }, { "created": "Mon, 3 Dec 2018 15:24:40 GMT", "version": "v3" }, { "created": "Mon, 21 Jan 2019 15:51:14 GMT", "version": "v4" }, { "created": "Mon, 18 Nov 2019 13:01:01 GMT", "version": "v5" } ]
2019-11-19
[ [ "Capone", "Cristiano", "" ], [ "Pastorelli", "Elena", "" ], [ "Golosio", "Bruno", "" ], [ "Paolucci", "Pier Stanislao", "" ] ]
The occurrence of sleep passed through the evolutionary sieve and is widespread in animal species. Sleep is known to be beneficial to cognitive and mnemonic tasks, while chronic sleep deprivation is detrimental. Despite the importance of the phenomenon, a complete understanding of its functions and underlying mechanisms is still lacking. In this paper, we show interesting effects of deep-sleep-like slow oscillation activity on a simplified thalamo-cortical model which is trained to encode, retrieve and classify images of handwritten digits. During slow oscillations, spike-timing-dependent plasticity (STDP) produces a differential homeostatic process. It is characterized by both a specific unsupervised enhancement of connections among groups of neurons associated with instances of the same class (digit) and a simultaneous down-regulation of stronger synapses created by the training. This hierarchical organization of post-sleep internal representations favours higher performance in retrieval and classification tasks. The mechanism is based on the interaction between top-down cortico-thalamic predictions and bottom-up thalamo-cortical projections during deep-sleep-like slow oscillations. Indeed, when learned patterns are replayed during sleep, cortico-thalamo-cortical connections favour the activation of other neurons coding for similar thalamic inputs, promoting their association. Such a mechanism hints at possible applications to artificial learning systems.
0705.4630
Danielle Rojas-Rousse
Auguste Ndoutoume-Ndong, Danielle Rojas-Rousse (IRBII)
Is Eupelmus orientalis Crawford eliminated by Eupelmus vuilleti Crawford (Hymenoptera: Eupelmidae) from storage systems of cowpea (Vigna unguiculata Walp)?
null
Annales de la Soci\'et\'e Entomologique de France 43, 2 (01/06/2007) 139-144
null
null
q-bio.PE
null
Ni\'eb\'e (cowpea) is a leguminous food plant cultivated in tropical Africa for its protein-rich seeds. The main problem posed by its production is the conservation of harvests. In the fields as in the stores, the seeds are destroyed by pests (bruchids). These bruchids are always associated with several entomophagous species of hymenoptera. Four entomophagous species were listed: an egg parasitoid (U. lariophaga Stephan), and three solitary larval and pupal ectoparasitoids (D. basalis Rondoni, Pteromalidae; E. vuilleti Crawford and E. orientalis Crawford, Eupelmidae). The survey of the populations shows that at the beginning of storage, E. orientalis is the most abundant species (72 %), whereas E. vuilleti and D. basalis represent 12 % and 16 % of the hymenoptera, respectively. During storage, the E. orientalis population decreases gradually and disappears completely less than two months after the beginning of storage. The E. vuilleti population gradually becomes more important than the D. basalis population, which regresses to less than 10 % of the emerging parasitoids. E. vuilleti adopts ovicidal and larvicidal behaviour against D. basalis. This behaviour explains the regression of the D. basalis population inside granaries. If the aggressive behaviour of this eupelmid is a constant, it could also explain the disappearance of E. orientalis. However, if this species were maintained in stores, it would be an effective control agent of bruchids given its parasitic capacities. This study shows that the ovicidal and larvicidal behaviour of E. vuilleti is not expressed against E. orientalis. When the females have access exclusively to hosts already parasitized by E. orientalis, they do not lay eggs. The disappearance of E. orientalis thus cannot be explained by the presence of E. vuilleti.
[ { "created": "Thu, 31 May 2007 15:32:45 GMT", "version": "v1" } ]
2007-06-01
[ [ "Ndoutoume-Ndong", "Auguste", "", "IRBII" ], [ "Rojas-Rousse", "Danielle", "", "IRBII" ] ]
Ni\'eb\'e (cowpea) is a leguminous food plant cultivated in tropical Africa for its protein-rich seeds. The main problem posed by its production is the conservation of harvests. In the fields as in the stores, the seeds are destroyed by pests (bruchids). These bruchids are always associated with several entomophagous species of hymenoptera. Four entomophagous species were listed: an egg parasitoid (U. lariophaga Stephan), and three solitary larval and pupal ectoparasitoids (D. basalis Rondoni, Pteromalidae; E. vuilleti Crawford and E. orientalis Crawford, Eupelmidae). The survey of the populations shows that at the beginning of storage, E. orientalis is the most abundant species (72 %), whereas E. vuilleti and D. basalis represent 12 % and 16 % of the hymenoptera, respectively. During storage, the E. orientalis population decreases gradually and disappears completely less than two months after the beginning of storage. The E. vuilleti population gradually becomes more important than the D. basalis population, which regresses to less than 10 % of the emerging parasitoids. E. vuilleti adopts ovicidal and larvicidal behaviour against D. basalis. This behaviour explains the regression of the D. basalis population inside granaries. If the aggressive behaviour of this eupelmid is a constant, it could also explain the disappearance of E. orientalis. However, if this species were maintained in stores, it would be an effective control agent of bruchids given its parasitic capacities. This study shows that the ovicidal and larvicidal behaviour of E. vuilleti is not expressed against E. orientalis. When the females have access exclusively to hosts already parasitized by E. orientalis, they do not lay eggs. The disappearance of E. orientalis thus cannot be explained by the presence of E. vuilleti.
q-bio/0403015
Marcus Kaiser
Marcus Kaiser and Claus C. Hilgetag
Edge vulnerability in neural and metabolic networks
8 pages, 4 figures, to appear in Biological Cybernetics
Biological Cybernetics 90:311-317 (May 2004)
10.1007/s00422-004-0479-1
null
q-bio.NC q-bio.MN
null
Biological networks, such as cellular metabolic pathways or networks of corticocortical connections in the brain, are intricately organized, yet remarkably robust toward structural damage. Whereas many studies have investigated specific aspects of robustness, such as molecular mechanisms of repair, this article focuses more generally on how local structural features in networks may give rise to their global stability. In many networks the failure of single connections may be more likely than the extinction of entire nodes, yet no analysis of edge importance (edge vulnerability) has been provided so far for biological networks. We tested several measures for identifying vulnerable edges and compared their prediction performance in biological and artificial networks. Among the tested measures, edge frequency in all shortest paths of a network yielded a particularly high correlation with vulnerability, and identified inter-cluster connections in biological but not in random and scale-free benchmark networks. We discuss different local and global network patterns and the edge vulnerability resulting from them.
[ { "created": "Mon, 15 Mar 2004 17:48:22 GMT", "version": "v1" } ]
2007-05-23
[ [ "Kaiser", "Marcus", "" ], [ "Hilgetag", "Claus C.", "" ] ]
Biological networks, such as cellular metabolic pathways or networks of corticocortical connections in the brain, are intricately organized, yet remarkably robust toward structural damage. Whereas many studies have investigated specific aspects of robustness, such as molecular mechanisms of repair, this article focuses more generally on how local structural features in networks may give rise to their global stability. In many networks the failure of single connections may be more likely than the extinction of entire nodes, yet no analysis of edge importance (edge vulnerability) has been provided so far for biological networks. We tested several measures for identifying vulnerable edges and compared their prediction performance in biological and artificial networks. Among the tested measures, edge frequency in all shortest paths of a network yielded a particularly high correlation with vulnerability, and identified inter-cluster connections in biological but not in random and scale-free benchmark networks. We discuss different local and global network patterns and the edge vulnerability resulting from them.
2005.10055
Antonio Juli\`a
Antonio Juli\`a, Irene Bonafonte, Antonio G\'omez, Mar\'ia L\'opez-Lasanta, Mireia L\'opez-Corbeto, Sergio H. Mart\'inez-Mateu, Jordi Llad\'os, Iv\'an Rodr\'iguez-Nunez, Richard M. Myers, Sara Marsal
Blocking of the CD80/86 axis as a therapeutic approach to prevent progression to more severe forms of COVID-19
23 pages, 8 figures, 1 Table, Supplementary data (3 supplementary figures and 3 supplementary Tables)
null
null
null
q-bio.TO q-bio.GN
http://creativecommons.org/licenses/by/4.0/
In its more severe forms, COVID-19 progresses towards an excessive immune response, leading to the systemic overexpression of proinflammatory cytokines like IL6, mostly from the infected lungs. This cytokine storm can cause multiple organ damage and death. Consequently, there is a pressing need to identify therapies to treat and prevent severe symptoms during COVID-19. Based on previous clinical evidence, we hypothesized that inhibiting T cell co-stimulation by blocking CD80/86 could be an effective therapeutic strategy against progression to severe proinflammatory states. To support this hypothesis, we performed an analysis integrating blood transcriptional data we generated from rheumatoid arthritis patients treated with abatacept -- a CD80/86 co-stimulation inhibitor -- with the pathological features associated with COVID-19, particularly in its more severe forms. We have found that many of the biological processes that have been consistently associated with COVID-19 pathology are reversed by CD80/86 co-stimulation inhibition, including the downregulation of IL6 production. Also, analysis of previous transcriptional data from blood of SARS-CoV-infected patients showed that the response to abatacept has a very high level of antagonism to that elicited by COVID-19. Finally, analyzing a recent single cell RNA-seq dataset from bronchoalveolar lavage fluid cells from COVID-19 patients, we found a significant correlation among the main elements of the CD80/86 axis: CD86+/80+ antigen presenting cells, activated CD4+ T cells and IL6 production. Our in-silico study provides additional support to the hypothesis that blocking of the CD80/CD86 signaling axis may be protective of the excessive proinflammatory state associated with COVID-19 in the lungs.
[ { "created": "Wed, 20 May 2020 14:05:06 GMT", "version": "v1" } ]
2020-05-21
[ [ "Julià", "Antonio", "" ], [ "Bonafonte", "Irene", "" ], [ "Gómez", "Antonio", "" ], [ "López-Lasanta", "María", "" ], [ "López-Corbeto", "Mireia", "" ], [ "Martínez-Mateu", "Sergio H.", "" ], [ "Lladós", "Jordi", "" ], [ "Rodríguez-Nunez", "Iván", "" ], [ "Myers", "Richard M.", "" ], [ "Marsal", "Sara", "" ] ]
In its more severe forms, COVID-19 progresses towards an excessive immune response, leading to the systemic overexpression of proinflammatory cytokines like IL6, mostly from the infected lungs. This cytokine storm can cause multiple organ damage and death. Consequently, there is a pressing need to identify therapies to treat and prevent severe symptoms during COVID-19. Based on previous clinical evidence, we hypothesized that inhibiting T cell co-stimulation by blocking CD80/86 could be an effective therapeutic strategy against progression to severe proinflammatory states. To support this hypothesis, we performed an analysis integrating blood transcriptional data we generated from rheumatoid arthritis patients treated with abatacept -- a CD80/86 co-stimulation inhibitor -- with the pathological features associated with COVID-19, particularly in its more severe forms. We have found that many of the biological processes that have been consistently associated with COVID-19 pathology are reversed by CD80/86 co-stimulation inhibition, including the downregulation of IL6 production. Also, analysis of previous transcriptional data from blood of SARS-CoV-infected patients showed that the response to abatacept has a very high level of antagonism to that elicited by COVID-19. Finally, analyzing a recent single cell RNA-seq dataset from bronchoalveolar lavage fluid cells from COVID-19 patients, we found a significant correlation among the main elements of the CD80/86 axis: CD86+/80+ antigen presenting cells, activated CD4+ T cells and IL6 production. Our in-silico study provides additional support to the hypothesis that blocking of the CD80/CD86 signaling axis may be protective of the excessive proinflammatory state associated with COVID-19 in the lungs.
1911.12131
Jerome Coville
Fr\'ed\'eric Fabre (UMR SAVE), J\'er\^ome Coville (BIOSP), Nik J. Cunniffe
Optimising reactive disease management using spatially explicit models at the landscape scale
null
null
null
null
q-bio.PE math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Increasing rates of global trade and travel, as well as changing climatic patterns, have led to more frequent outbreaks of plant disease epidemics worldwide. Mathematical modelling is a key tool in predicting where and how these new threats will spread, as well as in assessing how damaging they might be. Models can also be used to inform disease management, providing a rational methodology for comparing the performance of possible control strategies against one another. For emerging epidemics, in which new pathogens or pathogen strains are actively spreading into new regions, the spatial component of spread becomes particularly important, both to make predictions and to optimise disease control. In this chapter we illustrate how the spatial spread of emerging plant diseases can be modelled at the landscape scale via spatially explicit compartmental models. Our particular focus is on the crucial role of the dispersal kernel, which parameterises the probability of pathogen spread from an infected host to susceptible hosts at any given distance, in determining the outcomes of epidemics. We add disease management to our model by testing the performance of a simple 'one-off' form of reactive disease control, in which sites within a particular distance of locations detected to contain infection are removed in a single round of disease management. We use this simplified model to show how ostensibly arcane decisions made by the modeller, most notably whether or not the underpinning disease model allows for stochasticity (i.e. randomness), can greatly impact disease management recommendations. Our chapter is accompanied by example code in the programming language R available via an online repository, allowing readers to run the models we present for themselves.
[ { "created": "Wed, 27 Nov 2019 13:14:24 GMT", "version": "v1" } ]
2019-11-28
[ [ "Fabre", "Frédéric", "", "UMR SAVE" ], [ "Coville", "Jérôme", "", "BIOSP" ], [ "Cunniffe", "Nik J.", "" ] ]
Increasing rates of global trade and travel, as well as changing climatic patterns, have led to more frequent outbreaks of plant disease epidemics worldwide. Mathematical modelling is a key tool in predicting where and how these new threats will spread, as well as in assessing how damaging they might be. Models can also be used to inform disease management, providing a rational methodology for comparing the performance of possible control strategies against one another. For emerging epidemics, in which new pathogens or pathogen strains are actively spreading into new regions, the spatial component of spread becomes particularly important, both to make predictions and to optimise disease control. In this chapter we illustrate how the spatial spread of emerging plant diseases can be modelled at the landscape scale via spatially explicit compartmental models. Our particular focus is on the crucial role of the dispersal kernel, which parameterises the probability of pathogen spread from an infected host to susceptible hosts at any given distance, in determining the outcomes of epidemics. We add disease management to our model by testing the performance of a simple 'one-off' form of reactive disease control, in which sites within a particular distance of locations detected to contain infection are removed in a single round of disease management. We use this simplified model to show how ostensibly arcane decisions made by the modeller, most notably whether or not the underpinning disease model allows for stochasticity (i.e. randomness), can greatly impact disease management recommendations. Our chapter is accompanied by example code in the programming language R available via an online repository, allowing readers to run the models we present for themselves.
q-bio/0501035
Hiro-Sato Niwa
Hiro-Sato Niwa
Power-law scaling in dimension-to-biomass relationship of fish schools
25 pages, 6 figures, to appear in J. Theor. Biol
J. Theor. Biol. 235 (2005) 419-430
10.1016/j.jtbi.2005.01.022
null
q-bio.PE cond-mat.other cond-mat.stat-mech q-bio.OT
null
Motivated by the finding that there is some biological universality in the relationship between school geometry and school biomass of various pelagic fishes in various conditions, I here establish a scaling law for school dimensions: the school diameter increases as a power-law function of school biomass. The power-law exponent is extracted through the data collapse, and is close to 3/5. This value of the exponent implies that the mean packing density decreases as the school biomass increases, and the packing structure displays a mass-fractal dimension of 5/3. By exploiting an analogy between school geometry and polymer chain statistics, I examine the behavioral algorithm governing the swollen conformation of large-sized schools of pelagics, and I explain the value of the exponent.
[ { "created": "Thu, 27 Jan 2005 01:45:57 GMT", "version": "v1" }, { "created": "Wed, 15 Jun 2005 07:49:09 GMT", "version": "v2" } ]
2007-05-23
[ [ "Niwa", "Hiro-Sato", "" ] ]
Motivated by the finding that there is some biological universality in the relationship between school geometry and school biomass of various pelagic fishes in various conditions, I here establish a scaling law for school dimensions: the school diameter increases as a power-law function of school biomass. The power-law exponent is extracted through the data collapse, and is close to 3/5. This value of the exponent implies that the mean packing density decreases as the school biomass increases, and the packing structure displays a mass-fractal dimension of 5/3. By exploiting an analogy between school geometry and polymer chain statistics, I examine the behavioral algorithm governing the swollen conformation of large-sized schools of pelagics, and I explain the value of the exponent.
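The exponents quoted in the record above are mutually consistent; a short numeric check, assuming the standard definitions $R \sim N^{\nu}$ (diameter vs. biomass) and $N \sim R^{D}$ (mass-fractal dimension), with exact rational arithmetic:

```python
from fractions import Fraction

# Diameter-biomass exponent reported in the abstract: R ~ N**nu
nu = Fraction(3, 5)

# Mass-fractal dimension: inverting R ~ N**nu gives N ~ R**(1/nu), so D = 1/nu
D = 1 / nu
assert D == Fraction(5, 3)

# Mean packing density in three dimensions: rho ~ N / R**3 ~ N**(1 - 3*nu).
# The exponent is negative whenever nu > 1/3, i.e. density decreases with
# biomass, as the abstract states.
density_exponent = 1 - 3 * nu
assert density_exponent == Fraction(-4, 5)
```

For an ideal (Flory) polymer-chain analogy in three dimensions, $\nu = 3/5$ is exactly the swollen-chain exponent, which is the correspondence the abstract exploits.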
1612.08059
Danielle Bassett
Danielle S. Bassett, Ankit N. Khambhati, Scott T. Grafton
Emerging Frontiers of Neuroengineering: A Network Science of Brain Connectivity
17 pages, 6 figures. Manuscript accepted to the journal "Annual Review of Biomedical Engineering"
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuroengineering is faced with unique challenges in repairing or replacing complex neural systems that are composed of many interacting parts. These interactions form intricate patterns over large spatiotemporal scales, and produce emergent behaviors that are difficult to predict from individual elements. Network science provides a particularly appropriate framework in which to study and intervene in such systems, by treating neural elements (cells, volumes) as nodes in a graph and neural interactions (synapses, white matter tracts) as edges in that graph. Here, we review the emerging discipline of network neuroscience, which uses and develops tools from graph theory to better understand and manipulate neural systems, from micro- to macroscales. We present examples of how human brain imaging data is being modeled with network analysis and underscore potential pitfalls. We then highlight current computational and theoretical frontiers, and emphasize their utility in informing diagnosis and monitoring, brain-machine interfaces, and brain stimulation. A flexible and rapidly evolving enterprise, network neuroscience provides a set of powerful approaches and fundamental insights critical to the neuroengineer's toolkit.
[ { "created": "Fri, 23 Dec 2016 18:28:12 GMT", "version": "v1" } ]
2016-12-26
[ [ "Bassett", "Danielle S.", "" ], [ "Khambhati", "Ankit N.", "" ], [ "Grafton", "Scott T.", "" ] ]
Neuroengineering is faced with unique challenges in repairing or replacing complex neural systems that are composed of many interacting parts. These interactions form intricate patterns over large spatiotemporal scales, and produce emergent behaviors that are difficult to predict from individual elements. Network science provides a particularly appropriate framework in which to study and intervene in such systems, by treating neural elements (cells, volumes) as nodes in a graph and neural interactions (synapses, white matter tracts) as edges in that graph. Here, we review the emerging discipline of network neuroscience, which uses and develops tools from graph theory to better understand and manipulate neural systems, from micro- to macroscales. We present examples of how human brain imaging data is being modeled with network analysis and underscore potential pitfalls. We then highlight current computational and theoretical frontiers, and emphasize their utility in informing diagnosis and monitoring, brain-machine interfaces, and brain stimulation. A flexible and rapidly evolving enterprise, network neuroscience provides a set of powerful approaches and fundamental insights critical to the neuroengineer's toolkit.
1301.1565
Boris Adryan
Alicia Schep and Boris Adryan
A comparative analysis of transcription factor expression during metazoan embryonic development
~10 pages, 50 references, 6+3 figures and 5 tables
null
10.1371/journal.pone.0066826
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
During embryonic development, a complex organism is formed from a single starting cell. These processes of growth and differentiation are driven by large transcriptional changes, which follow from the expression and activity of transcription factors (TFs). This study sought to compare TF expression during embryonic development in a diverse group of metazoan animals: representatives of vertebrates (Danio rerio, Xenopus tropicalis), a chordate (Ciona intestinalis) and invertebrate phyla such as insects (Drosophila melanogaster, Anopheles gambiae) and nematodes (Caenorhabditis elegans) were sampled. The different species showed overall very similar TF expression patterns, with TF expression increasing during the initial stages of development. C2H2 zinc finger TFs were over-represented and Homeobox TFs were under-represented in the early stages in all species. We further clustered TFs for each species based on their quantitative temporal expression profiles. This showed very similar TF expression trends in development in vertebrate and insect species. However, analysis of the expression of orthologous pairs between more closely related species showed that expression of most individual TFs is not conserved, following the general model of duplication and diversification. The degree of similarity in TF expression between Xenopus tropicalis and Danio rerio followed the hourglass model, with the greatest similarity occurring during the early tailbud stage in Xenopus tropicalis and the late segmentation stage in Danio rerio. However, for Drosophila melanogaster and Anopheles gambiae there were two periods of high TF transcriptome similarity, one during the Arthropod phylotypic stage at 8-10 hours into Drosophila development and the other later at 16-18 hours into Drosophila development.
[ { "created": "Tue, 8 Jan 2013 15:36:57 GMT", "version": "v1" } ]
2015-06-12
[ [ "Schep", "Alicia", "" ], [ "Adryan", "Boris", "" ] ]
During embryonic development, a complex organism is formed from a single starting cell. These processes of growth and differentiation are driven by large transcriptional changes, which follow from the expression and activity of transcription factors (TFs). This study sought to compare TF expression during embryonic development in a diverse group of metazoan animals: representatives of vertebrates (Danio rerio, Xenopus tropicalis), a chordate (Ciona intestinalis) and invertebrate phyla such as insects (Drosophila melanogaster, Anopheles gambiae) and nematodes (Caenorhabditis elegans) were sampled. The different species showed overall very similar TF expression patterns, with TF expression increasing during the initial stages of development. C2H2 zinc finger TFs were over-represented and Homeobox TFs were under-represented in the early stages in all species. We further clustered TFs for each species based on their quantitative temporal expression profiles. This showed very similar TF expression trends in development in vertebrate and insect species. However, analysis of the expression of orthologous pairs between more closely related species showed that expression of most individual TFs is not conserved, following the general model of duplication and diversification. The degree of similarity in TF expression between Xenopus tropicalis and Danio rerio followed the hourglass model, with the greatest similarity occurring during the early tailbud stage in Xenopus tropicalis and the late segmentation stage in Danio rerio. However, for Drosophila melanogaster and Anopheles gambiae there were two periods of high TF transcriptome similarity, one during the Arthropod phylotypic stage at 8-10 hours into Drosophila development and the other later at 16-18 hours into Drosophila development.
1902.03815
Wilhelm Braun
Wilhelm Braun and Andr\'e Longtin
Interspike interval correlations in networks of inhibitory integrate-and-fire neurons
16 pages, 18 figures, 3 appendices. Accepted for publication in Physical Review E
null
10.1103/PhysRevE.99.032402
null
q-bio.NC math.DS physics.bio-ph physics.data-an
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study temporal correlations of interspike intervals (ISIs), quantified by the network-averaged serial correlation coefficient (SCC), in networks of both current- and conductance-based purely inhibitory integrate-and-fire neurons. Numerical simulations reveal transitions to negative SCCs at intermediate values of bias current drive and network size. As bias drive and network size are increased past these values, the SCC returns to zero. The SCC is maximally negative at an intermediate value of the network oscillation strength. The dependence of the SCC on two canonical schemes for synaptic connectivity is studied, and it is shown that the results occur robustly in both schemes. For conductance-based synapses, the SCC becomes negative at the onset of both a fast and slow coherent network oscillation. Finally, we devise a noise-reduced diffusion approximation for current-based networks that accounts for the observed temporal correlation transitions.
[ { "created": "Mon, 11 Feb 2019 11:06:50 GMT", "version": "v1" } ]
2019-03-27
[ [ "Braun", "Wilhelm", "" ], [ "Longtin", "André", "" ] ]
We study temporal correlations of interspike intervals (ISIs), quantified by the network-averaged serial correlation coefficient (SCC), in networks of both current- and conductance-based purely inhibitory integrate-and-fire neurons. Numerical simulations reveal transitions to negative SCCs at intermediate values of bias current drive and network size. As bias drive and network size are increased past these values, the SCC returns to zero. The SCC is maximally negative at an intermediate value of the network oscillation strength. The dependence of the SCC on two canonical schemes for synaptic connectivity is studied, and it is shown that the results occur robustly in both schemes. For conductance-based synapses, the SCC becomes negative at the onset of both a fast and slow coherent network oscillation. Finally, we devise a noise-reduced diffusion approximation for current-based networks that accounts for the observed temporal correlation transitions.
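The serial correlation coefficient used in the record above is, per spike train, the lag-$k$ Pearson correlation between an interspike interval and the interval $k$ spikes later; a minimal sketch of that statistic (the function name and the alternating-ISI example are ours, for illustration):

```python
import math

def serial_correlation(isis, lag=1):
    """Lag-`lag` serial correlation coefficient (SCC) of an
    interspike-interval sequence: the Pearson correlation between
    ISI_i and ISI_{i+lag}."""
    n = len(isis) - lag
    x, y = isis[:n], isis[lag:lag + n]
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
    sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
    return cov / (sx * sy)

# A strictly alternating short-long ISI train is maximally negatively
# correlated at lag 1: SCC(1) is (numerically) -1.
isis = [1.0, 2.0] * 50
scc1 = serial_correlation(isis, lag=1)
```

Negative SCCs of this kind (a long interval tending to follow a short one) are the signature the paper tracks across network size and bias drive; a network average would simply average this per-neuron statistic.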
2206.10000
Juliane Moraes
Juliane T. Moraes, Eyisto J. Aguilar Trejo, Sabrina Camargo, Silvio C. Ferreira, and Dante R. Chialvo
Self Tuned Criticality: Controlling a neuron near its bifurcation point via temporal correlations
null
null
10.1103/PhysRevE.107.034204
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Previous work showed that the collective activity of large neuronal networks can be tamed to remain near its critical point by a feedback control that maximizes the temporal correlations of the mean-field fluctuations. Since such correlations behave similarly near instabilities across nonlinear dynamical systems, it is expected that the principle should also control low-dimensional dynamical systems exhibiting continuous or discontinuous bifurcations from fixed points to limit cycles. Here we present numerical evidence that the dynamics of a single neuron can be controlled in the vicinity of its bifurcation point. The approach is tested in two models: a 2D generic excitable map and the paradigmatic FitzHugh-Nagumo neuron model. The results show that in both cases, the system can be self-tuned to its bifurcation point by modifying the control parameter according to the first coefficient of the autocorrelation function.
[ { "created": "Mon, 20 Jun 2022 20:53:51 GMT", "version": "v1" }, { "created": "Thu, 2 Mar 2023 16:57:21 GMT", "version": "v2" } ]
2023-03-29
[ [ "Moraes", "Juliane T.", "" ], [ "Trejo", "Eyisto J. Aguilar", "" ], [ "Camargo", "Sabrina", "" ], [ "Ferreira", "Silvio C.", "" ], [ "Chialvo", "Dante R.", "" ] ]
Previous work showed that the collective activity of large neuronal networks can be tamed to remain near its critical point by a feedback control that maximizes the temporal correlations of the mean-field fluctuations. Since such correlations behave similarly near instabilities across nonlinear dynamical systems, it is expected that the principle should also control low-dimensional dynamical systems exhibiting continuous or discontinuous bifurcations from fixed points to limit cycles. Here we present numerical evidence that the dynamics of a single neuron can be controlled in the vicinity of its bifurcation point. The approach is tested in two models: a 2D generic excitable map and the paradigmatic FitzHugh-Nagumo neuron model. The results show that in both cases, the system can be self-tuned to its bifurcation point by modifying the control parameter according to the first coefficient of the autocorrelation function.
1703.05644
Petter Holme
Jain Gu, Sungmin Lee, Jari Saramäki, Petter Holme
Ranking influential spreaders is an ill-defined problem
null
EPL 118 (2017) 68002
10.1209/0295-5075/118/68002
null
q-bio.PE cs.SI physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Finding influential spreaders of information and disease in networks is an important theoretical problem, and one of considerable recent interest. It has been almost exclusively formulated as a node-ranking problem -- methods for identifying influential spreaders rank nodes according to how influential they are. In this work, we show that the ranking approach does not necessarily work: the set of most influential nodes depends on the number of nodes in the set. Therefore, the set of $n$ most important nodes to vaccinate does not need to have any node in common with the set of $n+1$ most important nodes. We propose a method for quantifying the extent and impact of this phenomenon, and show that it is common in both empirical and model networks.
[ { "created": "Thu, 16 Mar 2017 14:35:53 GMT", "version": "v1" } ]
2017-08-14
[ [ "Gu", "Jain", "" ], [ "Lee", "Sungmin", "" ], [ "Saramäki", "Jari", "" ], [ "Holme", "Petter", "" ] ]
Finding influential spreaders of information and disease in networks is an important theoretical problem, and one of considerable recent interest. It has been almost exclusively formulated as a node-ranking problem -- methods for identifying influential spreaders rank nodes according to how influential they are. In this work, we show that the ranking approach does not necessarily work: the set of most influential nodes depends on the number of nodes in the set. Therefore, the set of $n$ most important nodes to vaccinate does not need to have any node in common with the set of $n+1$ most important nodes. We propose a method for quantifying the extent and impact of this phenomenon, and show that it is common in both empirical and model networks.
0811.0347
Przemyslaw Biecek
Przemyslaw Biecek, Katarzyna Bonkowska, Stanislaw Cebrat
Relations between organisms and the environment in the ageing process
10 pages, 8 figures, presented on MARF2
MARF2 2006
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We have modified the sexual Penna model by introducing a fluctuating environment and fluctuations representing physiological functions of individuals. Additionally, we have introduced mother care, corresponding to protection against the deleterious influence of the environment; the learning capacity of individuals, corresponding to their immunity and adaptation to the environment fluctuations; and other risk factors appearing at puberty. Each of the above mentioned elements influences mainly the survival of newborns and young individuals, while genetic defects accumulated in the genomes increase the noise of individuals and are responsible for the higher mortality of older individuals. All these modifications enable precise fitting of the age structure of the simulated populations to the age distributions of human populations.
[ { "created": "Mon, 3 Nov 2008 17:37:33 GMT", "version": "v1" } ]
2008-11-04
[ [ "Biecek", "Przemyslaw", "" ], [ "Bonkowska", "Katarzyna", "" ], [ "Cebrat", "Stanislaw", "" ] ]
We have modified the sexual Penna model by introducing a fluctuating environment and fluctuations representing physiological functions of individuals. Additionally, we have introduced mother care, corresponding to protection against the deleterious influence of the environment; the learning capacity of individuals, corresponding to their immunity and adaptation to the environment fluctuations; and other risk factors appearing at puberty. Each of the above mentioned elements influences mainly the survival of newborns and young individuals, while genetic defects accumulated in the genomes increase the noise of individuals and are responsible for the higher mortality of older individuals. All these modifications enable precise fitting of the age structure of the simulated populations to the age distributions of human populations.
1601.02932
Paul Medvedev
Alexandru I. Tomescu and Paul Medvedev
Safe and complete contig assembly via omnitigs
Full version of the paper in the proceedings of RECOMB 2016
null
null
null
q-bio.QM cs.DM cs.DS q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contig assembly is the first stage that most assemblers solve when reconstructing a genome from a set of reads. Its output consists of contigs -- a set of strings that are promised to appear in any genome that could have generated the reads. From the introduction of contigs 20 years ago, assemblers have tried to obtain longer and longer contigs, but the following question was never solved: given a genome graph $G$ (e.g. a de Bruijn, or a string graph), what are all the strings that can be safely reported from $G$ as contigs? In this paper we finally answer this question, and also give a polynomial time algorithm to find them. Our experiments show that these strings, which we call omnitigs, are 66% to 82% longer on average than the popular unitigs, and 29% of dbSNP locations have more neighbors in omnitigs than in unitigs.
[ { "created": "Tue, 12 Jan 2016 16:02:04 GMT", "version": "v1" }, { "created": "Tue, 16 Aug 2016 20:41:42 GMT", "version": "v2" } ]
2016-08-18
[ [ "Tomescu", "Alexandru I.", "" ], [ "Medvedev", "Paul", "" ] ]
Contig assembly is the first stage that most assemblers solve when reconstructing a genome from a set of reads. Its output consists of contigs -- a set of strings that are promised to appear in any genome that could have generated the reads. From the introduction of contigs 20 years ago, assemblers have tried to obtain longer and longer contigs, but the following question was never solved: given a genome graph $G$ (e.g. a de Bruijn, or a string graph), what are all the strings that can be safely reported from $G$ as contigs? In this paper we finally answer this question, and also give a polynomial time algorithm to find them. Our experiments show that these strings, which we call omnitigs, are 66% to 82% longer on average than the popular unitigs, and 29% of dbSNP locations have more neighbors in omnitigs than in unitigs.
1803.04500
Élie Besserer-Offroy
Élie Besserer-Offroy, Patrick Bérubé, Jérôme Côté, Alexandre Murza, Jean-Michel Longpré, Robert Dumaine, Olivier Lesur, Mannix Auger-Messier, Richard Leduc, Éric Marsault, and Philippe Sarret
The hypotensive effect of activated apelin receptor is correlated with β-arrestin recruitment
This is the accepted (postprint) version of the following article: Besserer-Offroy \'E, et al. (2018), Pharmacol Res. doi: 10.1016/j.phrs.2018.02.032, which has been accepted and published in final form at https://www.sciencedirect.com/science/article/pii/S1043661817313804 V1: Preprint version V2: Accepted (postprint) version
Elie Besserer-Offroy, et al. Pharmacol Res, Epub 9 March 2018
10.1016/j.phrs.2018.02.032
null
q-bio.MN q-bio.BM q-bio.CB q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
The apelinergic system is an important player in the regulation of both vascular tone and cardiovascular function, making this physiological system an attractive target for drug development for hypertension, heart failure and ischemic heart disease. Indeed, apelin exerts a positive inotropic effect in humans whilst reducing peripheral vascular resistance. In this study, we investigated the signaling pathways through which apelin exerts its hypotensive action. We synthesized a series of apelin-13 analogs whereby the C-terminal Phe13 residue was replaced by natural or unnatural amino acids. In HEK293 cells expressing APJ, we evaluated the relative efficacy of these compounds to activate Gαi1 and GαoA G-proteins, recruit β-arrestins 1 and 2 (βarrs), and inhibit cAMP production. Calculating the transduction ratio for each pathway allowed us to identify several analogs with distinct signaling profiles. Furthermore, we found that these analogs delivered i.v. to Sprague-Dawley rats exerted a wide range of hypotensive responses. Indeed, two compounds lost their ability to lower blood pressure, while other analogs reduced blood pressure as significantly as apelin-13. Interestingly, analogs that did not lower blood pressure were less effective at recruiting βarrs. Finally, using Spearman correlations, we established that the hypotensive response was significantly correlated with βarr recruitment but not with G protein-dependent signaling. In conclusion, our results demonstrated that the βarr recruitment potency is involved in the hypotensive efficacy of activated APJ.
[ { "created": "Mon, 12 Mar 2018 20:01:06 GMT", "version": "v1" }, { "created": "Wed, 14 Mar 2018 00:56:34 GMT", "version": "v2" } ]
2018-03-15
[ [ "Besserer-Offroy", "Élie", "" ], [ "Bérubé", "Patrick", "" ], [ "Côté", "Jérôme", "" ], [ "Murza", "Alexandre", "" ], [ "Longpré", "Jean-Michel", "" ], [ "Dumaine", "Robert", "" ], [ "Lesur", "Olivier", "" ], [ "Auger-Messier", "Mannix", "" ], [ "Leduc", "Richard", "" ], [ "Marsault", "Éric", "" ], [ "Sarret", "Philippe", "" ] ]
The apelinergic system is an important player in the regulation of both vascular tone and cardiovascular function, making this physiological system an attractive target for drug development for hypertension, heart failure and ischemic heart disease. Indeed, apelin exerts a positive inotropic effect in humans whilst reducing peripheral vascular resistance. In this study, we investigated the signaling pathways through which apelin exerts its hypotensive action. We synthesized a series of apelin-13 analogs whereby the C-terminal Phe13 residue was replaced by natural or unnatural amino acids. In HEK293 cells expressing APJ, we evaluated the relative efficacy of these compounds to activate Gαi1 and GαoA G-proteins, recruit β-arrestins 1 and 2 (βarrs), and inhibit cAMP production. Calculating the transduction ratio for each pathway allowed us to identify several analogs with distinct signaling profiles. Furthermore, we found that these analogs delivered i.v. to Sprague-Dawley rats exerted a wide range of hypotensive responses. Indeed, two compounds lost their ability to lower blood pressure, while other analogs reduced blood pressure as significantly as apelin-13. Interestingly, analogs that did not lower blood pressure were less effective at recruiting βarrs. Finally, using Spearman correlations, we established that the hypotensive response was significantly correlated with βarr recruitment but not with G protein-dependent signaling. In conclusion, our results demonstrated that the βarr recruitment potency is involved in the hypotensive efficacy of activated APJ.
2011.02349
Andrea Maiorana
Andrea Maiorana, Marco Meneghelli, Mario Resnati
Effectiveness of isolation measures with app support to contain COVID-19 epidemics: a parametric approach
31 pages, 10 figures
J. Math. Biol. 83, 46 (2021)
10.1007/s00285-021-01660-9
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, we analyze the effectiveness of measures aimed at finding and isolating infected individuals to contain epidemics like COVID-19, quantified as the suppression they induce on the effective reproduction number. We develop a mathematical model to compute the relative suppression of the effective reproduction number of an epidemic that such measures produce. This outcome is expressed as a function of a small set of parameters that describe the main features of the epidemic and summarize the effectiveness of the isolation measures. In particular, we focus on the impact when a fraction of the population uses a mobile application for epidemic control. Finally, we apply the model to COVID-19, providing several computations as examples, and a link to a public repository to run custom calculations. These computations display in a quantitative manner the importance of recognizing infected individuals from symptoms and contact-tracing information, and isolating them as early as possible. The computations also assess the impact of each variable on the suppression of the epidemic.
[ { "created": "Wed, 4 Nov 2020 15:29:41 GMT", "version": "v1" }, { "created": "Thu, 21 Oct 2021 14:24:04 GMT", "version": "v2" } ]
2021-10-22
[ [ "Maiorana", "Andrea", "" ], [ "Meneghelli", "Marco", "" ], [ "Resnati", "Mario", "" ] ]
In this study, we analyze the effectiveness of measures aimed at finding and isolating infected individuals to contain epidemics like COVID-19, quantified as the suppression they induce on the effective reproduction number. We develop a mathematical model to compute the relative suppression of the effective reproduction number of an epidemic that such measures produce. This outcome is expressed as a function of a small set of parameters that describe the main features of the epidemic and summarize the effectiveness of the isolation measures. In particular, we focus on the impact when a fraction of the population uses a mobile application for epidemic control. Finally, we apply the model to COVID-19, providing several computations as examples, and a link to a public repository to run custom calculations. These computations display in a quantitative manner the importance of recognizing infected individuals from symptoms and contact-tracing information, and isolating them as early as possible. The computations also assess the impact of each variable on the suppression of the epidemic.
q-bio/0403021
Petter Holme
Petter Holme
Efficient local strategies for vaccination and network attack
null
Europhys. Lett., 68 (6), pp. 908-914 (2004)
10.1209/epl/i2004-10286-2
null
q-bio.PE cond-mat.dis-nn
null
We study how a fraction of a population should be vaccinated to most efficiently stop epidemics. We argue that only local information (about the neighborhood of specific vertices) is usable in practice, and hence we consider only local vaccination strategies. The efficiency of the vaccination strategies is investigated with both static and dynamical measures. Among other things we find that the most efficient strategy for many real-world situations is to iteratively vaccinate the neighbor of the previous vaccinee that has most links out of the neighborhood.
[ { "created": "Tue, 16 Mar 2004 09:27:14 GMT", "version": "v1" } ]
2007-05-23
[ [ "Holme", "Petter", "" ] ]
We study how a fraction of a population should be vaccinated to most efficiently stop epidemics. We argue that only local information (about the neighborhood of specific vertices) is usable in practice, and hence we consider only local vaccination strategies. The efficiency of the vaccination strategies is investigated with both static and dynamical measures. Among other things we find that the most efficient strategy for many real-world situations is to iteratively vaccinate the neighbor of the previous vaccinee that has most links out of the neighborhood.
2407.11248
Grzegorz A Rempala
Jiaxin Jin and Grzegorz A. Rempala
Infinitesimal Homeostasis in Mass-Action Systems
5 figures
null
null
null
q-bio.QM math.DS q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Homeostasis occurs in a biological system when a chosen output variable remains approximately constant despite changes in an input variable. In this work we specifically focus on biological systems which may be represented as chemical reaction networks and consider their infinitesimal homeostasis, where the derivative of the input-output function is zero. The specific challenge of chemical reaction networks is that they often obey various conservation laws complicating the standard input-output analysis. We derive several results that allow one to verify the existence of infinitesimal homeostasis points both in the absence of conservation and under conservation laws where conserved quantities serve as input parameters. In particular, we introduce the notion of infinitesimal concentration robustness, where the output variable remains nearly constant despite fluctuations in the conserved quantities. We provide several examples of chemical networks which illustrate our results both in deterministic and stochastic settings.
[ { "created": "Mon, 15 Jul 2024 21:25:06 GMT", "version": "v1" }, { "created": "Wed, 17 Jul 2024 20:53:01 GMT", "version": "v2" } ]
2024-07-19
[ [ "Jin", "Jiaxin", "" ], [ "Rempala", "Grzegorz A.", "" ] ]
Homeostasis occurs in a biological system when a chosen output variable remains approximately constant despite changes in an input variable. In this work we specifically focus on biological systems which may be represented as chemical reaction networks and consider their infinitesimal homeostasis, where the derivative of the input-output function is zero. The specific challenge of chemical reaction networks is that they often obey various conservation laws complicating the standard input-output analysis. We derive several results that allow one to verify the existence of infinitesimal homeostasis points both in the absence of conservation and under conservation laws where conserved quantities serve as input parameters. In particular, we introduce the notion of infinitesimal concentration robustness, where the output variable remains nearly constant despite fluctuations in the conserved quantities. We provide several examples of chemical networks which illustrate our results both in deterministic and stochastic settings.
1803.06520
Alberto Calderone Dr.
Alberto Calderone, Gianni Cesareni
Analysis of Triplet Motifs in Biological Signed Oriented Graphs Suggests a Relationship Between Fine Topology and Function
null
null
null
null
q-bio.MN cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Background: Networks in different domains are characterized by similar global characteristics while differing in local structures. To further extend this concept, we investigated network regularities on a fine scale in order to examine the functional impact of recurring motifs in signed oriented biological networks. In this work we generalize to signaling networks some considerations made on feedback and feed-forward loops and extend them by adding a close scrutiny of Linear Triplets, which have not yet been investigated in detail. Results: We studied the role of triplets, either open or closed (loops or linear events), by enumerating them in different biological signaling networks and by comparing their significance profiles. We compared different data sources and investigated the fine topology of protein networks representing causal relationships based on transcriptional control, phosphorylation, ubiquitination and binding. Not only were we able to generalize findings that have already been reported but we also highlighted a connection between relative motif abundance and node function. Furthermore, by analyzing for the first time Linear Triplets, we highlighted the relative importance of nodes sitting in specific positions in closed signaling triplets. Finally, we tried to apply machine learning to show that a combination of motif features can be used to derive node function. Availability: The triplets counter used for this work is available as a Cytoscape App and as a standalone command line Java application. http://apps.cytoscape.org/apps/counttriplets Keywords: Graph theory, graph analysis, graph topology, machine learning, cytoscape
[ { "created": "Sat, 17 Mar 2018 15:26:02 GMT", "version": "v1" }, { "created": "Tue, 20 Mar 2018 15:15:18 GMT", "version": "v2" }, { "created": "Sat, 29 Jun 2019 11:36:24 GMT", "version": "v3" }, { "created": "Wed, 10 Jul 2019 20:31:01 GMT", "version": "v4" } ]
2019-07-12
[ [ "Calderone", "Alberto", "" ], [ "Cesareni", "Gianni", "" ] ]
Background: Networks in different domains are characterized by similar global characteristics while differing in local structures. To further extend this concept, we investigated network regularities on a fine scale in order to examine the functional impact of recurring motifs in signed oriented biological networks. In this work we generalize to signaling networks some considerations made on feedback and feed-forward loops and extend them by adding a close scrutiny of Linear Triplets, which have not yet been investigated in detail. Results: We studied the role of triplets, either open or closed (loops or linear events), by enumerating them in different biological signaling networks and by comparing their significance profiles. We compared different data sources and investigated the fine topology of protein networks representing causal relationships based on transcriptional control, phosphorylation, ubiquitination and binding. Not only were we able to generalize findings that have already been reported but we also highlighted a connection between relative motif abundance and node function. Furthermore, by analyzing for the first time Linear Triplets, we highlighted the relative importance of nodes sitting in specific positions in closed signaling triplets. Finally, we tried to apply machine learning to show that a combination of motif features can be used to derive node function. Availability: The triplets counter used for this work is available as a Cytoscape App and as a standalone command line Java application. http://apps.cytoscape.org/apps/counttriplets Keywords: Graph theory, graph analysis, graph topology, machine learning, cytoscape
q-bio/0609006
John Hopfield
J. J. Hopfield
Searching for memories, Sudoku, implicit check-bits, and the iterative use of not-always-correct rapid neural computation
42 pages
null
null
null
q-bio.NC
null
The algorithms that simple feedback neural circuits representing a brain area can rapidly carry out are often adequate to solve only easy problems, and for more difficult problems can return incorrect answers. A new excitatory-inhibitory circuit model of associative memory displays the common human problem of failing to rapidly find a memory when only a small clue is present. The memory model and a related computational network for solving Sudoku puzzles produce answers that contain implicit check-bits in the representation of information across neurons, allowing a rapid evaluation of whether the putative answer is correct or incorrect through a computation related to visual 'pop-out'. This fact may account for our strong psychological feeling of right or wrong when we retrieve a nominal memory from a minimal clue. This information allows more difficult computations or memory retrievals to be done in a serial fashion by using the fast but limited capabilities of a computational module multiple times. The mathematics of the excitatory-inhibitory circuits for associative memory and for Sudoku, both of which are understood in terms of 'energy' or Lyapunov functions, is described in detail.
[ { "created": "Tue, 5 Sep 2006 17:53:26 GMT", "version": "v1" }, { "created": "Tue, 19 Sep 2006 16:09:31 GMT", "version": "v2" } ]
2007-05-23
[ [ "Hopfield", "J. J.", "" ] ]
The algorithms that simple feedback neural circuits representing a brain area can rapidly carry out are often adequate to solve only easy problems, and for more difficult problems can return incorrect answers. A new excitatory-inhibitory circuit model of associative memory displays the common human problem of failing to rapidly find a memory when only a small clue is present. The memory model and a related computational network for solving Sudoku puzzles produce answers that contain implicit check-bits in the representation of information across neurons, allowing a rapid evaluation of whether the putative answer is correct or incorrect through a computation related to visual 'pop-out'. This fact may account for our strong psychological feeling of right or wrong when we retrieve a nominal memory from a minimal clue. This information allows more difficult computations or memory retrievals to be done in a serial fashion by using the fast but limited capabilities of a computational module multiple times. The mathematics of the excitatory-inhibitory circuits for associative memory and for Sudoku, both of which are understood in terms of 'energy' or Lyapunov functions, is described in detail.
q-bio/0410010
Peter Siegel
P.B. Siegel, J. Sperber, W. Kindermann, and A. Urhausen
Nonstationary time series analysis of heart rate variability
null
null
null
null
q-bio.QM
null
An analysis of the RR-interval time series, $t_i$, is presented for the case in which the average time, $\bar{t}$, changes slowly. In particular, $\bar{t}$ and a short-time scale variability parameter, $V$, are simultaneously measured while $\bar{t}$ decreases for subjects in the reclined position. The initial decrease in $\bar{t}$ is usually linear with $V$ yielding parameters that can be related to physiological quantities.
[ { "created": "Thu, 7 Oct 2004 18:52:12 GMT", "version": "v1" } ]
2007-05-23
[ [ "Siegel", "P. B.", "" ], [ "Sperber", "J.", "" ], [ "Kindermann", "W.", "" ], [ "Urhausen", "A.", "" ] ]
An analysis of the RR-interval time series, $t_i$, is presented for the case in which the average time, $\bar{t}$, changes slowly. In particular, $\bar{t}$ and a short-time scale variability parameter, $V$, are simultaneously measured while $\bar{t}$ decreases for subjects in the reclined position. The initial decrease in $\bar{t}$ is usually linear with $V$ yielding parameters that can be related to physiological quantities.
1204.1395
Donald Cooper Ph.D.
Melissa A. Fowler, Andrew L. Varnell and Donald C. Cooper
mGluR5 Knockout mice exhibit normal conditioned place-preference to cocaine
2 pages, 2 figures, Nature Precedings http://precedings.nature.com/documents/6180/version/2
null
10.1038/npre.2011.6180.2
null
q-bio.NC q-bio.GN
http://creativecommons.org/licenses/by-nc-sa/3.0/
Metabotropic glutamate receptor 5 (mGluR5) null mutant (-/-) mice have been reported to totally lack the reinforcing or locomotor stimulating effects of cocaine. We tested mGluR5 -/- and +/+ mice for their locomotor and conditioned place-preference response to cocaine. Unlike the previous finding, here we show that compared to mGluR5 +/+ mice, -/- mice exhibit no difference in the locomotor response to low to moderate doses of cocaine (10 or 20 mg/kg). A high dose of cocaine (40 mg/kg) resulted in a blunted rather than absent locomotor response. We tested mGluR5 -/- and +/+ mice for conditioned place-preference to cocaine and found no group differences at a conditioning dose of 10 mg/kg, suggesting normal conditioned rewarding properties of cocaine. These results differ substantially from Chiamulera et al. (2001) and replicate Olsen et al. (2010), who found normal cocaine place-preference in mGluR5 -/- mice at 5 mg/kg. Our results indicate mGluR5 receptors exert a modulatory rather than necessary role in cocaine-induced locomotor stimulation and exert no effect on the conditioned rewarding effects of cocaine.
[ { "created": "Fri, 6 Apr 2012 02:04:55 GMT", "version": "v1" } ]
2012-04-09
[ [ "Fowler", "Melissa A.", "" ], [ "Varnell", "Andrew L.", "" ], [ "Cooper", "Donald C.", "" ] ]
Metabotropic glutamate receptor 5 (mGluR5) null mutant (-/-) mice have been reported to totally lack the reinforcing or locomotor stimulating effects of cocaine. We tested mGluR5 -/- and +/+ mice for their locomotor and conditioned place-preference response to cocaine. Unlike the previous finding, here we show that compared to mGluR5 +/+ mice, -/- mice exhibit no difference in the locomotor response to low to moderate doses of cocaine (10 or 20 mg/kg). A high dose of cocaine (40 mg/kg) resulted in a blunted rather than absent locomotor response. We tested mGluR5 -/- and +/+ mice for conditioned place-preference to cocaine and found no group differences at a conditioning dose of 10 mg/kg, suggesting normal conditioned rewarding properties of cocaine. These results differ substantially from Chiamulera et al. (2001) and replicate Olsen et al. (2010), who found normal cocaine place-preference in mGluR5 -/- mice at 5 mg/kg. Our results indicate mGluR5 receptors exert a modulatory rather than necessary role in cocaine-induced locomotor stimulation and exert no effect on the conditioned rewarding effects of cocaine.
0805.4315
Vasile Morariu
V. V. Morariu
Microbial genome as a fluctuating system: Distribution and correlation of coding sequence lengths
13 pages, 8 figures
null
null
null
q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The series of coding sequence lengths in microbial genomes were regarded as fluctuating systems and characterized by the methods of statistical physics. The distribution and the correlation properties of 50 genomes including bacteria and several archaea were investigated. The distribution was investigated by rank-size analysis (Zipf's law). We found that coding sequence length series do not obey Zipf's law, contrary to natural languages. The distribution was found to be closer to an exponential distribution. The correlation appeared to be similar to that of natural languages. Segmentation analysis of the series showed them to be short-range memory systems.
[ { "created": "Wed, 28 May 2008 11:40:43 GMT", "version": "v1" } ]
2008-05-29
[ [ "Morariu", "V. V.", "" ] ]
The series of coding sequence lengths in microbial genomes were regarded as fluctuating systems and characterized by the methods of statistical physics. The distribution and the correlation properties of 50 genomes including bacteria and several archaea were investigated. The distribution was investigated by rank-size analysis (Zipf's law). We found that coding sequence length series do not obey Zipf's law, contrary to natural languages. The distribution was found to be closer to an exponential distribution. The correlation appeared to be similar to that of natural languages. Segmentation analysis of the series showed them to be short-range memory systems.
2307.12682
Pan Tan
Pan Tan, Mingchen Li, Yuanxi Yu, Fan Jiang, Lirong Zheng, Banghao Wu, Xinyu Sun, Liqi Kang, Jie Song, Liang Zhang, Yi Xiong, Wanli Ouyang, Zhiqiang Hu, Guisheng Fan, Yufeng Pei, Liang Hong
Pro-PRIME: A general Temperature-Guided Language model to engineer enhanced Stability and Activity in Proteins
arXiv admin note: text overlap with arXiv:2304.03780
null
null
null
q-bio.BM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Designing protein mutants of both high stability and activity is a critical yet challenging task in protein engineering. Here, we introduce Pro-PRIME, a deep learning zero-shot model, which can suggest protein mutants of improved stability and activity without any prior experimental mutagenesis data. By leveraging temperature-guided language modelling, Pro-PRIME demonstrated superior predictive power compared to current state-of-the-art models on the public mutagenesis dataset over 33 proteins. Furthermore, we carried out wet experiments to test Pro-PRIME on five distinct proteins to engineer certain physicochemical properties, including thermal stability, rates of RNA polymerization and DNA cleavage, hydrolase activity, antigen-antibody binding affinity, or even the nonnatural properties, e.g., the ability to polymerize non-natural nucleic acid or resilience to extreme alkaline conditions. Surprisingly, about 40% of AI-designed mutants show better performance than the one before mutation for all five proteins studied and for all properties targeted for engineering. Hence, Pro-PRIME demonstrates the general applicability in protein engineering.
[ { "created": "Mon, 24 Jul 2023 10:41:48 GMT", "version": "v1" }, { "created": "Fri, 11 Aug 2023 02:21:39 GMT", "version": "v2" }, { "created": "Thu, 17 Aug 2023 08:59:45 GMT", "version": "v3" }, { "created": "Fri, 19 Apr 2024 01:14:06 GMT", "version": "v4" }, { "created": "Mon, 13 May 2024 09:53:08 GMT", "version": "v5" } ]
2024-05-14
[ [ "Tan", "Pan", "" ], [ "Li", "Mingchen", "" ], [ "Yu", "Yuanxi", "" ], [ "Jiang", "Fan", "" ], [ "Zheng", "Lirong", "" ], [ "Wu", "Banghao", "" ], [ "Sun", "Xinyu", "" ], [ "Kang", "Liqi", "" ], [ "Song", "Jie", "" ], [ "Zhang", "Liang", "" ], [ "Xiong", "Yi", "" ], [ "Ouyang", "Wanli", "" ], [ "Hu", "Zhiqiang", "" ], [ "Fan", "Guisheng", "" ], [ "Pei", "Yufeng", "" ], [ "Hong", "Liang", "" ] ]
Designing protein mutants with both high stability and activity is a critical yet challenging task in protein engineering. Here, we introduce Pro-PRIME, a deep-learning zero-shot model that can suggest protein mutants of improved stability and activity without any prior experimental mutagenesis data. By leveraging temperature-guided language modelling, Pro-PRIME demonstrated superior predictive power compared to current state-of-the-art models on a public mutagenesis dataset covering 33 proteins. Furthermore, we carried out wet experiments to test Pro-PRIME on five distinct proteins to engineer certain physicochemical properties, including thermal stability, rates of RNA polymerization and DNA cleavage, hydrolase activity, and antigen-antibody binding affinity, as well as non-natural properties, e.g., the ability to polymerize non-natural nucleic acids or resilience to extreme alkaline conditions. Surprisingly, about 40% of the AI-designed mutants performed better than the protein before mutation, for all five proteins studied and for all properties targeted for engineering. Hence, Pro-PRIME demonstrates general applicability in protein engineering.
2205.06349
Travis Wheeler
Jack W. Roddy, George T. Lesica, Travis J. Wheeler
SODA: a TypeScript/JavaScript Library for Visualizing Biological Sequence Annotation
null
null
10.1093/nargab/lqac077
null
q-bio.GN
http://creativecommons.org/licenses/by/4.0/
We present SODA, a lightweight and open-source visualization library for biological sequence annotations that enables straightforward development of flexible, dynamic, and interactive web graphics. SODA is implemented in TypeScript and can be used as a library within TypeScript and JavaScript.
[ { "created": "Thu, 12 May 2022 20:25:21 GMT", "version": "v1" } ]
2022-11-10
[ [ "Roddy", "Jack W.", "" ], [ "Lesica", "George T.", "" ], [ "Wheeler", "Travis J.", "" ] ]
We present SODA, a lightweight and open-source visualization library for biological sequence annotations that enables straightforward development of flexible, dynamic, and interactive web graphics. SODA is implemented in TypeScript and can be used as a library within TypeScript and JavaScript.
0901.0583
Nilou Ataie Dr
Nilou Ataie
The Adaptation of Complexity in the Evolution of Macromolecules
null
null
null
null
q-bio.MN q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Enzymes are on the front lines of evolution. All living organisms rely on highly efficient, specific enzymes for growth, sustenance, and reproduction; and many diseases are a consequence of a mutation on an enzyme that affects its catalytic function. It follows that the function of an enzyme affects the fitness of an organism, but, just as rightfully, the function of an enzyme affects its own fitness. Understanding how the complexity of enzyme structure relates to its essential function will unveil the fundamental mechanisms of evolution and, perhaps, shed light on strategies used by ancient replicators. This paper presents evidence supporting the hypothesis that enzymes, and proteins in general, are the manifestation of the coevolution of two opposing forces. The synthesis of enzyme architecture, stability, function, evolutionary relationships, and evolvability shows that the complexity of macromolecules is a consequence of the functions they provide.
[ { "created": "Tue, 6 Jan 2009 00:59:28 GMT", "version": "v1" } ]
2009-01-07
[ [ "Ataie", "Nilou", "" ] ]
Enzymes are on the front lines of evolution. All living organisms rely on highly efficient, specific enzymes for growth, sustenance, and reproduction; and many diseases are a consequence of a mutation on an enzyme that affects its catalytic function. It follows that the function of an enzyme affects the fitness of an organism, but, just as rightfully, the function of an enzyme affects its own fitness. Understanding how the complexity of enzyme structure relates to its essential function will unveil the fundamental mechanisms of evolution and, perhaps, shed light on strategies used by ancient replicators. This paper presents evidence supporting the hypothesis that enzymes, and proteins in general, are the manifestation of the coevolution of two opposing forces. The synthesis of enzyme architecture, stability, function, evolutionary relationships, and evolvability shows that the complexity of macromolecules is a consequence of the functions they provide.
1610.01391
Alexandre De Brevern
Floriane No\"el, Alain Malpertuy, Alexandre De Brevern
Global analysis of VHHs framework regions with a structural alphabet
null
Biochimie, Elsevier, 2016, 131, pp.11 - 19
10.1016/j.biochi.2016.09.005
null
q-bio.QM q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The VHHs are the antigen-binding regions/domains of camelid heavy-chain antibodies (HCAbs). They have many interesting biotechnological and biomedical properties due to their small size, high solubility and stability, and high affinity and specificity for their antigens. HCAbs and classical IgGs are evolutionarily related and share a common fold. VHHs are composed of regions considered constant, called the frameworks (FRs), connected by Complementarity Determining Regions (CDRs), highly variable regions that provide the interaction with the epitope. To date, no systematic structural analyses had been performed on VHH structures despite a significant number of available structures. This work is the first study to analyse the structural diversity of the FRs of VHHs. Using a structural alphabet that allows approximating the local conformation, we show that each of the four FRs does not have a unique structure but exhibits many structural variant patterns. Moreover, no direct simple link between local conformational change and amino acid composition can be detected. These results indicate that long-range interactions affect the local conformation of FRs and impact the building of structural models.
[ { "created": "Wed, 5 Oct 2016 12:53:57 GMT", "version": "v1" } ]
2016-10-06
[ [ "Noël", "Floriane", "" ], [ "Malpertuy", "Alain", "" ], [ "De Brevern", "Alexandre", "" ] ]
The VHHs are the antigen-binding regions/domains of camelid heavy-chain antibodies (HCAbs). They have many interesting biotechnological and biomedical properties due to their small size, high solubility and stability, and high affinity and specificity for their antigens. HCAbs and classical IgGs are evolutionarily related and share a common fold. VHHs are composed of regions considered constant, called the frameworks (FRs), connected by Complementarity Determining Regions (CDRs), highly variable regions that provide the interaction with the epitope. To date, no systematic structural analyses had been performed on VHH structures despite a significant number of available structures. This work is the first study to analyse the structural diversity of the FRs of VHHs. Using a structural alphabet that allows approximating the local conformation, we show that each of the four FRs does not have a unique structure but exhibits many structural variant patterns. Moreover, no direct simple link between local conformational change and amino acid composition can be detected. These results indicate that long-range interactions affect the local conformation of FRs and impact the building of structural models.
2203.02219
Yves-Henri Sanejouand
Yves-Henri Sanejouand
At least three xenon binding sites in the glycine binding domain of the N-methyl D-aspartate receptor
9 pages, 3 figures
Archives of Biochemistry and Biophysics 2022, vol.724, 109265
10.1016/j.abb.2022.109265
null
q-bio.BM
http://creativecommons.org/licenses/by-sa/4.0/
Xenon can produce general anesthesia. Its main protein target is the N-methyl-D-aspartate receptor, an ionotropic channel playing a pivotal role in the function of the central nervous system. The molecular mechanisms allowing this noble gas to have such a specific effect remain obscure, probably as a consequence of the lack of structural data at the atomic level of detail. Herein, as a result of five independent molecular dynamics simulations, three different binding sites were found for xenon in the glycine binding domain of the N-methyl-D-aspartate receptor. The absolute binding free energy of xenon in these sites ranges between -8 and -14 kJ/mol. However, it depends significantly upon the protein conformer chosen for performing the calculation, suggesting that larger values could probably be obtained if other conformers were considered. These three sites are next to each other, one of them being next to the glycine site. This could explain why the F758W and F758Y mutations can prevent competitive inhibition by xenon without affecting glycine binding.
[ { "created": "Fri, 4 Mar 2022 09:59:03 GMT", "version": "v1" } ]
2022-05-13
[ [ "Sanejouand", "Yves-Henri", "" ] ]
Xenon can produce general anesthesia. Its main protein target is the N-methyl-D-aspartate receptor, an ionotropic channel playing a pivotal role in the function of the central nervous system. The molecular mechanisms allowing this noble gas to have such a specific effect remain obscure, probably as a consequence of the lack of structural data at the atomic level of detail. Herein, as a result of five independent molecular dynamics simulations, three different binding sites were found for xenon in the glycine binding domain of the N-methyl-D-aspartate receptor. The absolute binding free energy of xenon in these sites ranges between -8 and -14 kJ/mol. However, it depends significantly upon the protein conformer chosen for performing the calculation, suggesting that larger values could probably be obtained if other conformers were considered. These three sites are next to each other, one of them being next to the glycine site. This could explain why the F758W and F758Y mutations can prevent competitive inhibition by xenon without affecting glycine binding.
1312.0125
Hamed Seyed-Allaei
Hamed Seyed-allaei
Phase Diagram of Spiking Neural Networks
oscillations are studied in this version
Front. Comput. Neurosci., 04 March 2015
10.3389/fncom.2015.00019
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2\%, and that 20\% of neurons are inhibitory and 80\% are excitatory. These common values are based on experiments, observations, and trial and error. Here, I take a different perspective, inspired by evolution: I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate networks with pulses and then measure their dynamic range, dominant frequency of population activities, total duration of activities, maximum rate of population activity, and the occurrence time of the maximum rate. The results are organized in phase diagrams. These phase diagrams give an insight into the space of parameters: excitatory-to-inhibitory ratio, sparseness of connections, and synaptic weights. A phase diagram can be used to decide the parameters of a model. The phase diagrams show that networks configured according to the common values have a good dynamic range in response to an impulse, that their dynamic range is robust with respect to synaptic weights, and that for some synaptic weights they oscillate at $\alpha$ or $\beta$ frequencies, even in the absence of external stimuli.
[ { "created": "Sat, 30 Nov 2013 16:55:18 GMT", "version": "v1" }, { "created": "Tue, 16 Sep 2014 15:30:47 GMT", "version": "v2" } ]
2015-03-06
[ [ "Seyed-allaei", "Hamed", "" ] ]
In computer simulations of spiking neural networks, it is often assumed that every two neurons of the network are connected with a probability of 2\%, and that 20\% of neurons are inhibitory and 80\% are excitatory. These common values are based on experiments, observations, and trial and error. Here, I take a different perspective, inspired by evolution: I systematically simulate many networks, each with a different set of parameters, and then try to figure out what makes the common values desirable. I stimulate networks with pulses and then measure their dynamic range, dominant frequency of population activities, total duration of activities, maximum rate of population activity, and the occurrence time of the maximum rate. The results are organized in phase diagrams. These phase diagrams give an insight into the space of parameters: excitatory-to-inhibitory ratio, sparseness of connections, and synaptic weights. A phase diagram can be used to decide the parameters of a model. The phase diagrams show that networks configured according to the common values have a good dynamic range in response to an impulse, that their dynamic range is robust with respect to synaptic weights, and that for some synaptic weights they oscillate at $\alpha$ or $\beta$ frequencies, even in the absence of external stimuli.
2105.13121
Shuangjia Zheng
Shuangjia Zheng, Tao Zeng, Chengtao Li, Binghong Chen, Connor W. Coley, Yuedong Yang, Ruibo Wu
BioNavi-NP: Biosynthesis Navigator for Natural Products
14 pages
null
null
null
q-bio.QM cs.CE cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Nature, a synthetic master, creates more than 300,000 natural products (NPs), which are major constituents of FDA-approved drugs owing to the vast chemical space of NPs. To date, fewer than 30,000 validated NP compounds are involved in about 33,000 known enzyme-catalyzed reactions, and even fewer biosynthetic pathways are known with complete cascade-connected enzyme catalysis. Therefore, it is valuable to make computer-aided bio-retrosynthesis predictions. Here, we develop BioNavi-NP, a navigable and user-friendly toolkit capable of predicting the biosynthetic pathways for NPs and NP-like compounds through a novel (AND-OR Tree)-based planning algorithm, an enhanced molecular Transformer neural network, and a training set that combines general organic transformations and biosynthetic steps. Extensive evaluations reveal that BioNavi-NP generalizes well, identifying the reported biosynthetic pathways for 90% of test compounds and recovering the verified building blocks for 73%, significantly outperforming conventional rule-based approaches. Moreover, BioNavi-NP also shows an outstanding capacity for enumerating biologically plausible pathways. In this sense, BioNavi-NP is a leading-edge toolkit for redesigning complex biosynthetic pathways of natural products, with applications to total or semi-synthesis and pathway elucidation or reconstruction.
[ { "created": "Wed, 26 May 2021 12:04:38 GMT", "version": "v1" } ]
2021-05-28
[ [ "Zheng", "Shuangjia", "" ], [ "Zeng", "Tao", "" ], [ "Li", "Chengtao", "" ], [ "Chen", "Binghong", "" ], [ "Coley", "Connor W.", "" ], [ "Yang", "Yuedong", "" ], [ "Wu", "Ruibo", "" ] ]
Nature, a synthetic master, creates more than 300,000 natural products (NPs), which are major constituents of FDA-approved drugs owing to the vast chemical space of NPs. To date, fewer than 30,000 validated NP compounds are involved in about 33,000 known enzyme-catalyzed reactions, and even fewer biosynthetic pathways are known with complete cascade-connected enzyme catalysis. Therefore, it is valuable to make computer-aided bio-retrosynthesis predictions. Here, we develop BioNavi-NP, a navigable and user-friendly toolkit capable of predicting the biosynthetic pathways for NPs and NP-like compounds through a novel (AND-OR Tree)-based planning algorithm, an enhanced molecular Transformer neural network, and a training set that combines general organic transformations and biosynthetic steps. Extensive evaluations reveal that BioNavi-NP generalizes well, identifying the reported biosynthetic pathways for 90% of test compounds and recovering the verified building blocks for 73%, significantly outperforming conventional rule-based approaches. Moreover, BioNavi-NP also shows an outstanding capacity for enumerating biologically plausible pathways. In this sense, BioNavi-NP is a leading-edge toolkit for redesigning complex biosynthetic pathways of natural products, with applications to total or semi-synthesis and pathway elucidation or reconstruction.
1306.5114
Michael Courtney
Joshua M. Courtney, Amy C. Courtney, Michael W. Courtney
Nutrient Loading Increases Red Snapper Production in the Gulf of Mexico
null
Hypotheses in the Life Sciences 3(1):7-14 2013
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A large, annually recurring region of hypoxia in the northern Gulf of Mexico has been attributed to water stratification and nutrient loading of nitrogen and phosphorus delivered by the Mississippi and Atchafalaya rivers. This nutrient loading increased nearly 300% since 1950, primarily due to increased use of agricultural fertilizers. Over this same time period, the red snapper (Lutjanus campechanus) population in the Gulf of Mexico has shifted strongly from being dominated by the eastern Gulf of Mexico to being dominated by the northern and western Gulf of Mexico, with the bulk of the current population in the same regions with significant nutrient loading from the Mississippi and Atchafalaya rivers and in or near areas with development of mid-summer hypoxic zones. The population decline of red snapper in the eastern Gulf is almost certainly attributable to overfishing, but the cause of the population increase in the northern and western Gulf is subject to broad debate, with the impact of artificial reefs (primarily oil platforms, which have increased greatly since the 1960s) being the most contentious point. Nutrient loading has been shown to positively impact secondary production of fish in many marine systems. The present paper offers the hypothesis that increased nutrient loading has contributed significantly to the increased red snapper population in the northern and western Gulf of Mexico. Nutrient loading may be working in synergy with the abundant oil platforms, both increasing primary production and providing structure encouraging red snapper to feed throughout the water column.
[ { "created": "Fri, 21 Jun 2013 12:22:34 GMT", "version": "v1" } ]
2013-06-24
[ [ "Courtney", "Joshua M.", "" ], [ "Courtney", "Amy C.", "" ], [ "Courtney", "Michael W.", "" ] ]
A large, annually recurring region of hypoxia in the northern Gulf of Mexico has been attributed to water stratification and nutrient loading of nitrogen and phosphorus delivered by the Mississippi and Atchafalaya rivers. This nutrient loading increased nearly 300% since 1950, primarily due to increased use of agricultural fertilizers. Over this same time period, the red snapper (Lutjanus campechanus) population in the Gulf of Mexico has shifted strongly from being dominated by the eastern Gulf of Mexico to being dominated by the northern and western Gulf of Mexico, with the bulk of the current population in the same regions with significant nutrient loading from the Mississippi and Atchafalaya rivers and in or near areas with development of mid-summer hypoxic zones. The population decline of red snapper in the eastern Gulf is almost certainly attributable to overfishing, but the cause of the population increase in the northern and western Gulf is subject to broad debate, with the impact of artificial reefs (primarily oil platforms, which have increased greatly since the 1960s) being the most contentious point. Nutrient loading has been shown to positively impact secondary production of fish in many marine systems. The present paper offers the hypothesis that increased nutrient loading has contributed significantly to the increased red snapper population in the northern and western Gulf of Mexico. Nutrient loading may be working in synergy with the abundant oil platforms, both increasing primary production and providing structure encouraging red snapper to feed throughout the water column.
2407.18818
Jose Enrique Romero Gomez Dr
Pedro Royo, Elkin Mu\~noz, Jos\'e-Enrique Romero, Jos\'e-Vicente Manj\'on, Catalina Roig, Carmen Fern\'andez-Delgado, Nuria Mu\~niz, Antonio Requena, Nicol\'as Garrido, Juan Antonio Garc\'ia- Velasco, Antonio Pellicer
Three-dimensional ultrasound-based online system for automated ovarian follicle measurement
21 pages, 4 figures, 2 tables
null
null
null
q-bio.TO eess.IV
http://creativecommons.org/licenses/by-sa/4.0/
Ultrasound follicle tracking is an important part of cycle monitoring. OSIS Ovary (Online System for Image Segmentation for the Ovary) was conceived to aid the management of the workflow in follicle tracking, one of the most iterative procedures in cycle monitoring during ovarian stimulation. In the present study, we compared OSIS Ovary (a three-dimensional ultrasound-based automated system) with the two-dimensional manual standard measurement method, in order to assess the reliability of the main measurements used to track follicle growth during ovarian stimulation cycles: follicle size and count. Based on the mean follicle diameter and follicle count values obtained, the Pearson/intraclass correlation coefficients were 0.976/0.987 and 0.804/0.889 in >=10mm follicles, 0.989/0.994 and 0.809/0.867 in >=13mm follicles, and 0.995/0.997 and 0.791/0.840 in >=16mm follicles. The mean difference (MnD) for the mean diameter and follicle count was, respectively, 0.759/0.161 in >=10mm follicles, 0.486/1.033 in >=13mm follicles, and 0.784/0.486 in >=16mm follicles. The upper and lower limits of agreement (ULA and LLA) were 3.641/2.123 and 5.392/3.070 in >=10mm follicles, 3.496/2.522 and 4.285/2.218 in >=13mm follicles, and 3.723/2.153 and 2.432/1.459 in >=16mm follicles. The limits of agreement range (LoAR) were 5.764/8.462 in >=10mm follicles, 6.048/6.503 in >=13mm follicles, and 5.876/3.891 in >=16mm follicles. P<0.05 was considered for all calculations. Comparing this three-dimensional ultrasound-based automated system with the standard two-dimensional manual method, we found OSIS Ovary to be a reliable tool to track follicle growth during ovarian stimulation cycles.
[ { "created": "Fri, 26 Jul 2024 15:27:53 GMT", "version": "v1" } ]
2024-07-29
[ [ "Royo", "Pedro", "" ], [ "Muñoz", "Elkin", "" ], [ "Romero", "José-Enrique", "" ], [ "Manjón", "José-Vicente", "" ], [ "Roig", "Catalina", "" ], [ "Fernández-Delgado", "Carmen", "" ], [ "Muñiz", "Nuria", "" ], [ "Requena", "Antonio", "" ], [ "Garrido", "Nicolás", "" ], [ "Velasco", "Juan Antonio García-", "" ], [ "Pellicer", "Antonio", "" ] ]
Ultrasound follicle tracking is an important part of cycle monitoring. OSIS Ovary (Online System for Image Segmentation for the Ovary) was conceived to aid the management of the workflow in follicle tracking, one of the most iterative procedures in cycle monitoring during ovarian stimulation. In the present study, we compared OSIS Ovary (a three-dimensional ultrasound-based automated system) with the two-dimensional manual standard measurement method, in order to assess the reliability of the main measurements used to track follicle growth during ovarian stimulation cycles: follicle size and count. Based on the mean follicle diameter and follicle count values obtained, the Pearson/intraclass correlation coefficients were 0.976/0.987 and 0.804/0.889 in >=10mm follicles, 0.989/0.994 and 0.809/0.867 in >=13mm follicles, and 0.995/0.997 and 0.791/0.840 in >=16mm follicles. The mean difference (MnD) for the mean diameter and follicle count was, respectively, 0.759/0.161 in >=10mm follicles, 0.486/1.033 in >=13mm follicles, and 0.784/0.486 in >=16mm follicles. The upper and lower limits of agreement (ULA and LLA) were 3.641/2.123 and 5.392/3.070 in >=10mm follicles, 3.496/2.522 and 4.285/2.218 in >=13mm follicles, and 3.723/2.153 and 2.432/1.459 in >=16mm follicles. The limits of agreement range (LoAR) were 5.764/8.462 in >=10mm follicles, 6.048/6.503 in >=13mm follicles, and 5.876/3.891 in >=16mm follicles. P<0.05 was considered for all calculations. Comparing this three-dimensional ultrasound-based automated system with the standard two-dimensional manual method, we found OSIS Ovary to be a reliable tool to track follicle growth during ovarian stimulation cycles.
1208.1694
Suani Pinho
David R. Souza, T\^ania Tom\'e, Suani T. R. Pinho, Florisneide R. Barreto, and M\'ario J. de Oliveira
Stochastic dynamics of dengue epidemics
8 pages, 3 figures, accepted for publication in Physical Review E
null
null
null
q-bio.PE cond-mat.stat-mech nlin.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We use a stochastic Markovian dynamics approach to describe the spreading of vector-transmitted diseases, like dengue, and the threshold of the disease. The coexistence space is composed of two structures representing the human and mosquito populations. The human population follows a susceptible-infected-recovered (SIR) type dynamics and the mosquito population follows a susceptible-infected-susceptible (SIS) type dynamics. The human infection is caused by infected mosquitoes and vice versa, so that the SIS and SIR dynamics are interconnected. We develop a truncation scheme to solve the evolution equations, from which we get the threshold of the disease and the reproductive ratio. The threshold of the disease is also obtained by performing numerical simulations. We found that for certain values of the infection rates the spreading of the disease is impossible, whatever the death rate of infected mosquitoes.
[ { "created": "Sun, 5 Aug 2012 16:51:37 GMT", "version": "v1" }, { "created": "Fri, 28 Dec 2012 03:35:21 GMT", "version": "v2" } ]
2013-01-01
[ [ "Souza", "David R.", "" ], [ "Tomé", "Tânia", "" ], [ "Pinho", "Suani T. R.", "" ], [ "Barreto", "Florisneide R.", "" ], [ "de Oliveira", "Mário J.", "" ] ]
We use a stochastic Markovian dynamics approach to describe the spreading of vector-transmitted diseases, like dengue, and the threshold of the disease. The coexistence space is composed of two structures representing the human and mosquito populations. The human population follows a susceptible-infected-recovered (SIR) type dynamics and the mosquito population follows a susceptible-infected-susceptible (SIS) type dynamics. The human infection is caused by infected mosquitoes and vice versa, so that the SIS and SIR dynamics are interconnected. We develop a truncation scheme to solve the evolution equations, from which we get the threshold of the disease and the reproductive ratio. The threshold of the disease is also obtained by performing numerical simulations. We found that for certain values of the infection rates the spreading of the disease is impossible, whatever the death rate of infected mosquitoes.
1501.00310
Mansour Taghavi Azar Sharabiani
Mansour Taghavi Azar Sharabiani
Alignment of metabolic trajectories with application to metabonomic toxicology
Final report of the first project of the Master of Research Programme (2005-2006), Imperial College London
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The geometry of metabolic trajectories is characteristic of the biological response (Keun, Ebbels et al. 2004). Yet, due to unavoidable inter-individual variations, the exact trajectories characterising the biological responses differ. We examined whether the differences seen between metabolic trajectories of a specific treatment correspond to the variations seen in the other biological manifestations of the same treatment. Differences in trajectories were measured via alignment procedures which were introduced and implemented in this study. Our study revealed a strong correlation between the scales of the aligned trajectories of metabolic responses and the severity of the hepatocellular lesions induced after administration of hydrazine. Thus the results confirm that aligned trajectories are characteristic of a specific treatment. They can then be used for comparison with other treatment-specific or unknown metabolic trajectories and can have many metabonomic applications, such as preclinical toxicological screening.
[ { "created": "Thu, 1 Jan 2015 20:49:13 GMT", "version": "v1" } ]
2015-01-05
[ [ "Sharabiani", "Mansour Taghavi Azar", "" ] ]
The geometry of metabolic trajectories is characteristic of the biological response (Keun, Ebbels et al. 2004). Yet, due to unavoidable inter-individual variations, the exact trajectories characterising the biological responses differ. We examined whether the differences seen between metabolic trajectories of a specific treatment correspond to the variations seen in the other biological manifestations of the same treatment. Differences in trajectories were measured via alignment procedures which were introduced and implemented in this study. Our study revealed a strong correlation between the scales of the aligned trajectories of metabolic responses and the severity of the hepatocellular lesions induced after administration of hydrazine. Thus the results confirm that aligned trajectories are characteristic of a specific treatment. They can then be used for comparison with other treatment-specific or unknown metabolic trajectories and can have many metabonomic applications, such as preclinical toxicological screening.
2102.00772
Rava A. da Silveira
Rava Azeredo da Silveira and Fred Rieke
The Geometry of Information Coding in Correlated Neural Populations
30 pages; 3 figures; review article
null
10.1146/annurev-neuro-120320-082744
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative and general arguments, we discuss, categorize, and relate the various published results. We emphasize the relevance of the fine structure of noise correlation, and we present a new approach to the issue. Throughout we emphasize a geometrical picture of how noise correlations impact the neural code.
[ { "created": "Mon, 1 Feb 2021 11:14:46 GMT", "version": "v1" } ]
2021-02-02
[ [ "da Silveira", "Rava Azeredo", "" ], [ "Rieke", "Fred", "" ] ]
Neurons in the brain represent information in their collective activity. The fidelity of this neural population code depends on whether and how variability in the response of one neuron is shared with other neurons. Two decades of studies have investigated the influence of these noise correlations on the properties of neural coding. We provide an overview of the theoretical developments on the topic. Using simple, qualitative and general arguments, we discuss, categorize, and relate the various published results. We emphasize the relevance of the fine structure of noise correlation, and we present a new approach to the issue. Throughout we emphasize a geometrical picture of how noise correlations impact the neural code.
2304.09069
Steven Frank
Steven A. Frank
Robustness and complexity
null
null
10.1016/j.cels.2023.11.003
null
q-bio.PE q-bio.MN
http://creativecommons.org/licenses/by/4.0/
When a biological system robustly corrects component-level errors, the direct pressure on component performance declines. Components may become less reliable, maintain more genetic variability, or drift neutrally in design, creating the basis for new forms of organismal complexity. This article links the protection-decay dynamic to other aspects of robust and complex systems. Examples include the hourglass pattern of biological development and Doyle's hourglass architecture for robustly complex systems in engineering. The deeply and densely connected wiring architecture in biology's cellular controls and in machine learning's computational neural networks provides another link. By bringing these seemingly different aspects into a unified framework, we gain a new perspective on robust and complex systems.
[ { "created": "Tue, 18 Apr 2023 15:42:27 GMT", "version": "v1" } ]
2023-12-27
[ [ "Frank", "Steven A.", "" ] ]
When a biological system robustly corrects component-level errors, the direct pressure on component performance declines. Components may become less reliable, maintain more genetic variability, or drift neutrally in design, creating the basis for new forms of organismal complexity. This article links the protection-decay dynamic to other aspects of robust and complex systems. Examples include the hourglass pattern of biological development and Doyle's hourglass architecture for robustly complex systems in engineering. The deeply and densely connected wiring architecture in biology's cellular controls and in machine learning's computational neural networks provides another link. By bringing these seemingly different aspects into a unified framework, we gain a new perspective on robust and complex systems.
2112.15427
Rohan Nuckchady Mr
Rohan Nuckchady
SIS/R model on Bi-Uniform Hypergraph
null
null
null
null
q-bio.PE math.CO math.PR
http://creativecommons.org/licenses/by/4.0/
This report is based on the work in (1). We first review definitions and notation developed there and provide derivations for the exact mathematical description of an SIS epidemic on a hypergraph. We then generalise the work in (1) to a new class of models that encompass SIS and SIR models. The exact differential equations are derived for the expected values of the population of each state. Focusing on bi-uniform hypergraphs, we make suitable approximations to obtain numerical solutions to those equations. These are compared with stochastic simulations of the model for various systems.
[ { "created": "Sun, 26 Dec 2021 16:32:43 GMT", "version": "v1" } ]
2022-01-03
[ [ "Nuckchady", "Rohan", "" ] ]
This report is based on the work in (1). We first review definitions and notation developed there and provide derivations for the exact mathematical description of an SIS epidemic on a hypergraph. We then generalise the work in (1) to a new class of models that encompass SIS and SIR models. The exact differential equations are derived for the expected values of the population of each state. Focusing on bi-uniform hypergraphs, we make suitable approximations to obtain numerical solutions to those equations. These are compared with stochastic simulations of the model for various systems.
2008.03264
Fatima Mroue
Ayman Mourad and Fatima Mroue
Modeling and Simulation of the spread of coronavirus disease (COVID-19) in Lebanon
11 pages, 10 Figures
null
null
null
q-bio.PE physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we develop a probabilistic mathematical model for the spread of coronavirus disease (COVID-19). It takes into account the known special characteristics of this disease such as the existence of infectious undetected cases and the different social and infectiousness conditions of infected people. In particular, it considers the social structure and governmental measures in a country, the fraction of detected cases over the real total infected cases, and the influx of undetected infected people from outside the borders. Although the model is simple and allows a reasonable identification of its parameters, using the data provided by local authorities on this pandemic, it is also complex enough to capture the most important effects. We study the particular case of Lebanon and use its reported data to estimate the model parameters, which can be of interest for estimating the spread of COVID-19 in other countries. We show a good agreement between the reported data and the estimations given by our model. We also simulate several scenarios that help policy makers in deciding how to loosen different measures without risking a severe wave of COVID-19. We are also able to identify the main factors that lead to specific scenarios which helps in a better understanding of the spread of the virus.
[ { "created": "Fri, 7 Aug 2020 16:49:03 GMT", "version": "v1" } ]
2020-08-10
[ [ "Mourad", "Ayman", "" ], [ "Mroue", "Fatima", "" ] ]
In this paper, we develop a probabilistic mathematical model for the spread of coronavirus disease (COVID-19). It takes into account the known special characteristics of this disease such as the existence of infectious undetected cases and the different social and infectiousness conditions of infected people. In particular, it considers the social structure and governmental measures in a country, the fraction of detected cases over the real total infected cases, and the influx of undetected infected people from outside the borders. Although the model is simple and allows a reasonable identification of its parameters, using the data provided by local authorities on this pandemic, it is also complex enough to capture the most important effects. We study the particular case of Lebanon and use its reported data to estimate the model parameters, which can be of interest for estimating the spread of COVID-19 in other countries. We show a good agreement between the reported data and the estimations given by our model. We also simulate several scenarios that help policy makers in deciding how to loosen different measures without risking a severe wave of COVID-19. We are also able to identify the main factors that lead to specific scenarios which helps in a better understanding of the spread of the virus.
2012.03240
George F. R. Ellis
George Ellis and Carole Bloch
Neuroscience and Literacy: An Integrative View
Main text 42 pages, 6 figures. Published version: Transactions of the Royal Society of South Africa (14 May 2021), DOI: 10.1080/0035919X.2021.1912848
null
null
null
q-bio.NC
http://creativecommons.org/licenses/by/4.0/
Significant challenges exist globally regarding literacy teaching and learning. To address these challenges, key features of how the brain works should be taken into account. First, perception is an active process based on the detection of errors in hierarchical predictions of sensory data and action outcomes. Reading is a particular case of this non-linear predictive process. Second, emotions play a key role in underlying cognitive functioning, including oral and written language. Negative emotions undermine motivation to learn. Third, there is not the fundamental difference between listening/speaking and reading/writing often alleged on the basis of evolutionary arguments. Both are socio-cultural practices that are driven through the communication imperative of the social brain. Fourth, both listening and reading are contextually occurring psycho-social practices of understanding, shaped by current knowledge and cultural contexts and practices. Fifth, the natural operation of the brain is not rule-based, as is supposed in the standard view of linguistics: it is prediction, based on statistical pattern recognition. This all calls into question narrow interpretations of the widely quoted "Simple View of Reading", which argues that explicit decoding is the necessary route to comprehension. One of the two neural routes to reading does not involve such explicit decoding processes, and can be activated from the earliest years. An integrated view of brain function reflecting the non-linear contextual nature of the reading process implies that an ongoing focus on personal meaning and understanding from the very beginning provides positive conditions for learning all aspects of reading and writing.
[ { "created": "Sun, 6 Dec 2020 11:28:48 GMT", "version": "v1" }, { "created": "Mon, 1 Mar 2021 13:41:28 GMT", "version": "v2" }, { "created": "Thu, 25 Mar 2021 09:52:56 GMT", "version": "v3" }, { "created": "Sun, 23 May 2021 19:00:12 GMT", "version": "v4" } ]
2021-05-25
[ [ "Ellis", "George", "" ], [ "Bloch", "Carole", "" ] ]
Significant challenges exist globally regarding literacy teaching and learning. To address these challenges, key features of how the brain works should be taken into account. First, perception is an active process based on the detection of errors in hierarchical predictions of sensory data and action outcomes. Reading is a particular case of this non-linear predictive process. Second, emotions play a key role in underlying cognitive functioning, including oral and written language. Negative emotions undermine motivation to learn. Third, there is not the fundamental difference between listening/speaking and reading/writing often alleged on the basis of evolutionary arguments. Both are socio-cultural practices that are driven through the communication imperative of the social brain. Fourth, both listening and reading are contextually occurring psycho-social practices of understanding, shaped by current knowledge and cultural contexts and practices. Fifth, the natural operation of the brain is not rule-based, as is supposed in the standard view of linguistics: it is prediction, based on statistical pattern recognition. This all calls into question narrow interpretations of the widely quoted "Simple View of Reading", which argues that explicit decoding is the necessary route to comprehension. One of the two neural routes to reading does not involve such explicit decoding processes, and can be activated from the earliest years. An integrated view of brain function reflecting the non-linear contextual nature of the reading process implies that an ongoing focus on personal meaning and understanding from the very beginning provides positive conditions for learning all aspects of reading and writing.
1810.08840
Maxwell Bertolero Dr
Maxwell A Bertolero, Azeez Adebimpe, Ankit N. Khambhati, Marcelo G. Mattar, Daniel Romer, Sharon L. Thompson-Schill, Danielle S. Bassett
Learning differentially reorganizes brain activity and connectivity
null
null
null
null
q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human learning is a complex process in which future behavior is altered via the reorganization of brain activity and connectivity. It remains unknown whether activity and connectivity differentially reorganize during learning, and, if so, how that differential reorganization tracks stages of learning across distinct brain areas. Here, we address this gap in knowledge by measuring brain activity and functional connectivity in a longitudinal fMRI experiment in which healthy adult human participants learn the values of novel objects over the course of four days. An increasing similarity in activity or functional connectivity across subjects during learning reflects reorganization toward a common functional architecture. We assessed the presence of reorganization in activity and connectivity both during value learning and during the resting state, allowing us to differentiate common elicited processes from intrinsic processes. We found a complex and dynamic reorganization of brain connectivity and activity--as a function of time, space, and performance--that occurs while subjects learn. Spatially localized brain activity reorganizes across the brain to a common functional architecture early in learning, and this reorganization tracks early learning performance. In contrast, spatially distributed connectivity reorganizes across the brain to a common functional architecture as training progresses, and this reorganization tracks later learning performance. Particularly good performance is associated with a sticky connectivity that persists into the resting state. Broadly, our work uncovers distinct principles of reorganization in activity and connectivity at different phases of value learning, which inform the ongoing study of learning processes more generally.
[ { "created": "Sat, 20 Oct 2018 18:40:07 GMT", "version": "v1" }, { "created": "Sun, 23 Feb 2020 15:55:25 GMT", "version": "v2" }, { "created": "Thu, 10 Sep 2020 11:04:02 GMT", "version": "v3" } ]
2020-09-11
[ [ "Bertolero", "Maxwell A", "" ], [ "Adebimpe", "Azeez", "" ], [ "Khambhati", "Ankit N.", "" ], [ "Mattar", "Marcelo G.", "" ], [ "Romer", "Daniel", "" ], [ "Thompson-Schill", "Sharon L.", "" ], [ "Bassett", "Danielle S.", "" ] ]
Human learning is a complex process in which future behavior is altered via the reorganization of brain activity and connectivity. It remains unknown whether activity and connectivity differentially reorganize during learning, and, if so, how that differential reorganization tracks stages of learning across distinct brain areas. Here, we address this gap in knowledge by measuring brain activity and functional connectivity in a longitudinal fMRI experiment in which healthy adult human participants learn the values of novel objects over the course of four days. An increasing similarity in activity or functional connectivity across subjects during learning reflects reorganization toward a common functional architecture. We assessed the presence of reorganization in activity and connectivity both during value learning and during the resting state, allowing us to differentiate common elicited processes from intrinsic processes. We found a complex and dynamic reorganization of brain connectivity and activity--as a function of time, space, and performance--that occurs while subjects learn. Spatially localized brain activity reorganizes across the brain to a common functional architecture early in learning, and this reorganization tracks early learning performance. In contrast, spatially distributed connectivity reorganizes across the brain to a common functional architecture as training progresses, and this reorganization tracks later learning performance. Particularly good performance is associated with a sticky connectivity that persists into the resting state. Broadly, our work uncovers distinct principles of reorganization in activity and connectivity at different phases of value learning, which inform the ongoing study of learning processes more generally.
1509.05757
Carl Veller
Daniel Cooney, Benjamin Allen, Carl Veller
Assortment and the evolution of cooperation in a Moran process with exponential fitness
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the evolution of cooperation in a finite population interacting according to a simple model of like-with-like assortment. Evolution proceeds as a Moran process, and payoffs from the underlying cooperator-defector game are translated to positive fitnesses by an exponential transformation. These evolutionary dynamics can arise, for example, in a nest-structured population with rare migration. The use of the exponential transformation, rather than the usual linear one, is appropriate when interactions have multiplicative fitness effects, and allows for a tractable characterization of the effect of assortment on the evolution of cooperation. We define two senses in which a greater degree of assortment can favour the evolution of cooperation, the first stronger than the second: (i) greater assortment increases, at all population states, the probability that the number of cooperators increases, relative to the probability that the number of defectors increases; and (ii) greater assortment increases the fixation probability of cooperation, relative to that of defection. We show that, by the stronger definition, greater assortment favours the evolution of cooperation for a subset of cooperative dilemmas: prisoners' dilemmas, snowdrift games, stag-hunt games, and some prisoners' delight games. For other cooperative dilemmas, greater assortment favours cooperation by the weak definition, but not by the strong definition. Our results hold for any strength of selection.
[ { "created": "Fri, 18 Sep 2015 19:42:44 GMT", "version": "v1" }, { "created": "Wed, 13 Apr 2016 02:55:32 GMT", "version": "v2" } ]
2016-04-14
[ [ "Cooney", "Daniel", "" ], [ "Allen", "Benjamin", "" ], [ "Veller", "Carl", "" ] ]
We study the evolution of cooperation in a finite population interacting according to a simple model of like-with-like assortment. Evolution proceeds as a Moran process, and payoffs from the underlying cooperator-defector game are translated to positive fitnesses by an exponential transformation. These evolutionary dynamics can arise, for example, in a nest-structured population with rare migration. The use of the exponential transformation, rather than the usual linear one, is appropriate when interactions have multiplicative fitness effects, and allows for a tractable characterization of the effect of assortment on the evolution of cooperation. We define two senses in which a greater degree of assortment can favour the evolution of cooperation, the first stronger than the second: (i) greater assortment increases, at all population states, the probability that the number of cooperators increases, relative to the probability that the number of defectors increases; and (ii) greater assortment increases the fixation probability of cooperation, relative to that of defection. We show that, by the stronger definition, greater assortment favours the evolution of cooperation for a subset of cooperative dilemmas: prisoners' dilemmas, snowdrift games, stag-hunt games, and some prisoners' delight games. For other cooperative dilemmas, greater assortment favours cooperation by the weak definition, but not by the strong definition. Our results hold for any strength of selection.
1606.04247
Timothy Cummins
Timothy D. Cummins and Gopal P. Sapkota
Characterization of protein complexes using chemical cross-linking coupled electrospray mass spectrometry
This is a useful methodology for studying protein networks using mass spectrometry. Included are a list of GFP non-specific interactions and detailed methods for protein-protein interaction studies
null
null
null
q-bio.MN q-bio.BM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Identification and characterization of large protein complexes is a mainstay of biochemical toolboxes. Utilization of cross-linking chemicals can facilitate the capture and identification of transient or weak interactions. Here we describe a detailed methodology for a cell culture-based proteomic approach. We describe the generation of cells stably expressing green fluorescent protein (GFP)-tagged proteins under the tetracycline-inducible promoter and subsequent proteomic analysis of GFP-interacting proteins. We include a list of proteins that were identified as interactors of GFP.
[ { "created": "Tue, 14 Jun 2016 08:34:04 GMT", "version": "v1" } ]
2016-06-15
[ [ "Cummins", "Timothy D.", "" ], [ "Sapkota", "Gopal P.", "" ] ]
Identification and characterization of large protein complexes is a mainstay of biochemical toolboxes. Utilization of cross-linking chemicals can facilitate the capture and identification of transient or weak interactions. Here we describe a detailed methodology for a cell culture-based proteomic approach. We describe the generation of cells stably expressing green fluorescent protein (GFP)-tagged proteins under the tetracycline-inducible promoter and subsequent proteomic analysis of GFP-interacting proteins. We include a list of proteins that were identified as interactors of GFP.
1410.2494
Peter Csermely
Daniel V. Veres, David M. Gyurko, Benedek Thaler, Kristof Z. Szalay, David Fazekas, Tamas Korcsmaros and Peter Csermely
ComPPI, a cellular compartment-specific database for protein-protein interaction network analysis
9 pages, 4 figures, 50 references + a Supplementary Information containing 7 Figures, 6 Tables and 28 References
Nucleic Acid Research 2015 43, Database Issue, D485-D493 (2015)
10.1093/nar/gku1007
null
q-bio.MN physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Here we present ComPPI, a cellular compartment-specific database of proteins and their interactions enabling an extensive, compartmentalized protein-protein interaction network analysis (http://ComPPI.LinkGroup.hu). ComPPI enables the user to filter biologically unlikely interactions, where the two interacting proteins have no common subcellular localizations, and to predict novel properties, such as compartment-specific biological functions. ComPPI is an integrated database covering four species (S. cerevisiae, C. elegans, D. melanogaster and H. sapiens). The compilation of nine protein-protein interaction and eight subcellular localization data sets had four curation steps, including a manually built, comprehensive hierarchical structure of more than 1600 subcellular localizations. ComPPI provides confidence scores for protein subcellular localizations and protein-protein interactions. ComPPI has user-friendly search options for individual proteins giving their subcellular localization, their interactions and the likelihood of their interactions considering the subcellular localization of their interacting partners. Download options of search results, whole proteomes, organelle-specific interactomes and subcellular localization data are available on its website. Due to its novel features, ComPPI is useful for the analysis of experimental results in biochemistry and molecular biology, as well as for proteome-wide studies in bioinformatics and network science helping cellular biology, medicine and drug design.
[ { "created": "Thu, 9 Oct 2014 14:51:08 GMT", "version": "v1" }, { "created": "Tue, 13 Jan 2015 08:16:30 GMT", "version": "v2" } ]
2015-01-14
[ [ "Veres", "Daniel V.", "" ], [ "Gyurko", "David M.", "" ], [ "Thaler", "Benedek", "" ], [ "Szalay", "Kristof Z.", "" ], [ "Fazekas", "David", "" ], [ "Korcsmaros", "Tamas", "" ], [ "Csermely", "Peter", "" ] ]
Here we present ComPPI, a cellular compartment-specific database of proteins and their interactions enabling an extensive, compartmentalized protein-protein interaction network analysis (http://ComPPI.LinkGroup.hu). ComPPI enables the user to filter biologically unlikely interactions, where the two interacting proteins have no common subcellular localizations, and to predict novel properties, such as compartment-specific biological functions. ComPPI is an integrated database covering four species (S. cerevisiae, C. elegans, D. melanogaster and H. sapiens). The compilation of nine protein-protein interaction and eight subcellular localization data sets had four curation steps, including a manually built, comprehensive hierarchical structure of more than 1600 subcellular localizations. ComPPI provides confidence scores for protein subcellular localizations and protein-protein interactions. ComPPI has user-friendly search options for individual proteins giving their subcellular localization, their interactions and the likelihood of their interactions considering the subcellular localization of their interacting partners. Download options of search results, whole proteomes, organelle-specific interactomes and subcellular localization data are available on its website. Due to its novel features, ComPPI is useful for the analysis of experimental results in biochemistry and molecular biology, as well as for proteome-wide studies in bioinformatics and network science helping cellular biology, medicine and drug design.
1504.01619
Monica Golumbeanu
Monica Golumbeanu, Pejman Mohammadi, and Niko Beerenwinkel
Probabilistic modeling of occurring substitutions in PAR-CLIP data
The manuscript has been accepted for oral presentation at the Fifth RECOMB Satellite Workshop on Massively Parallel Sequencing (RECOMB-SEQ 2015)
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Photoactivatable ribonucleoside-enhanced cross-linking and immunoprecipitation (PAR-CLIP) is an experimental method based on next-generation sequencing for identifying the RNA interaction sites of a given protein. The method deliberately inserts T-to-C substitutions at the RNA-protein interaction sites, which provides a second layer of evidence compared to other CLIP methods. However, the experiment includes several sources of noise which cause both low-frequency errors and spurious high-frequency alterations. Therefore, rigorous statistical analysis is required in order to separate true T-to-C base changes, following cross-linking, from noise. So far, most of the existing PAR-CLIP data analysis methods focus on discarding the low-frequency errors and rely on high-frequency substitutions to report binding sites, not taking into account the possibility of high-frequency false positive substitutions. Here, we introduce BMix, a new probabilistic method which explicitly accounts for the sources of noise in PAR-CLIP data and distinguishes cross-link induced T-to-C substitutions from low and high-frequency erroneous alterations. We demonstrate the superior speed and accuracy of our method compared to existing approaches on both simulated and real, publicly available human datasets. The model is implemented in the Matlab toolbox BMix, freely available at www.cbg.bsse.ethz.ch/software/BMix.
[ { "created": "Tue, 7 Apr 2015 14:35:14 GMT", "version": "v1" } ]
2015-04-08
[ [ "Golumbeanu", "Monica", "" ], [ "Mohammadi", "Pejman", "" ], [ "Beerenwinkel", "Niko", "" ] ]
Photoactivatable ribonucleoside-enhanced cross-linking and immunoprecipitation (PAR-CLIP) is an experimental method based on next-generation sequencing for identifying the RNA interaction sites of a given protein. The method deliberately inserts T-to-C substitutions at the RNA-protein interaction sites, which provides a second layer of evidence compared to other CLIP methods. However, the experiment includes several sources of noise which cause both low-frequency errors and spurious high-frequency alterations. Therefore, rigorous statistical analysis is required in order to separate true T-to-C base changes, following cross-linking, from noise. So far, most of the existing PAR-CLIP data analysis methods focus on discarding the low-frequency errors and rely on high-frequency substitutions to report binding sites, not taking into account the possibility of high-frequency false positive substitutions. Here, we introduce BMix, a new probabilistic method which explicitly accounts for the sources of noise in PAR-CLIP data and distinguishes cross-link induced T-to-C substitutions from low and high-frequency erroneous alterations. We demonstrate the superior speed and accuracy of our method compared to existing approaches on both simulated and real, publicly available human datasets. The model is implemented in the Matlab toolbox BMix, freely available at www.cbg.bsse.ethz.ch/software/BMix.
1605.08575
Promit Moitra
Vidit Agrawal, Promit Moitra, Sudeshna Sinha
Emergence of Persistent Infection due to Heterogeneity
null
Scientific Reports, 41582, Volume 7, 2017
10.1038/srep41582
null
q-bio.PE nlin.CG
http://creativecommons.org/licenses/by-nc-sa/4.0/
We explore the emergence of persistent infection in a patch of population, where the disease progression of the individuals is given by the SIRS model and an individual becomes infected on contact with another infected individual. We investigate the persistence of contagion qualitatively and quantitatively, under varying degrees of heterogeneity in the initial population. We observe that when the initial population is uniform, consisting of individuals at the same stage of disease progression, infection arising from a contagious seed does not persist. However, when the initial population consists of randomly distributed refractory and susceptible individuals, a single source of infection can lead to sustained infection in the population, as heterogeneity facilitates the de-synchronization of the phases in the disease cycle of the individuals. We also show how the average size of the window of persistence of infection depends on the degree of heterogeneity in the initial composition of the population. In particular, we show that the infection eventually dies out when the entire initial population is susceptible, while even a few susceptibles among a heterogeneous refractory population give rise to a large persistent infected set.
[ { "created": "Fri, 27 May 2016 10:51:05 GMT", "version": "v1" } ]
2018-05-08
[ [ "Agrawal", "Vidit", "" ], [ "Moitra", "Promit", "" ], [ "Sinha", "Sudeshna", "" ] ]
We explore the emergence of persistent infection in a patch of population, where the disease progression of the individuals is given by the SIRS model and an individual becomes infected on contact with another infected individual. We investigate the persistence of contagion qualitatively and quantitatively, under varying degrees of heterogeneity in the initial population. We observe that when the initial population is uniform, consisting of individuals at the same stage of disease progression, infection arising from a contagious seed does not persist. However, when the initial population consists of randomly distributed refractory and susceptible individuals, a single source of infection can lead to sustained infection in the population, as heterogeneity facilitates the de-synchronization of the phases in the disease cycle of the individuals. We also show how the average size of the window of persistence of infection depends on the degree of heterogeneity in the initial composition of the population. In particular, we show that the infection eventually dies out when the entire initial population is susceptible, while even a few susceptibles among a heterogeneous refractory population give rise to a large persistent infected set.
2109.12731
Sang-Yoon Kim
Sang-Yoon Kim and Woochang Lim
Disynaptic Effect of Hilar Cells on Pattern Separation in A Spiking Neural Network of Hippocampal Dentate Gyrus
arXiv admin note: text overlap with arXiv:2106.00172, arXiv:2105.06057
null
null
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate the disynaptic effect of the hilar cells on pattern separation in a spiking neural network of the hippocampal dentate gyrus (DG). The principal granule cells (GCs) in the DG perform pattern separation, transforming similar input patterns into less-similar output patterns. The hilus consists of excitatory mossy cells (MCs) and inhibitory HIPP (hilar perforant path-associated) cells. Here, we consider the disynaptic effects of the MCs and the HIPP cells on the GCs, mediated by the inhibitory basket cells (BCs); MC $\rightarrow$ BC $\rightarrow$ GC and HIPP $\rightarrow$ BC $\rightarrow$ GC. By changing the synaptic strength $K^{\rm (BC, X)}$ (X = MC or HIPP) from the default value ${K^{\rm (BC, X)}}^*$, we study the change in the pattern separation degree ${\cal S}_d$. When decreasing $K^{\rm (BC, MC)}$ or independently increasing $K^{\rm (BC, HIPP)}$ from their default values, ${\cal S}_d$ is found to decrease (i.e., pattern separation is reduced). On the other hand, as $K^{\rm (BC, MC)}$ is increased or independently $K^{\rm (BC, HIPP)}$ is decreased from their default values, pattern separation becomes enhanced (i.e., ${\cal S}_d$ increases). In this way, the disynaptic effects of the MCs and the HIPP cells on pattern separation are opposite. Thus, when simultaneously varying both $K^{\rm (BC, MC)}$ and $K^{\rm (BC, HIPP)}$, as a result of the balance between the two competing disynaptic effects of the MCs and the HIPP cells, ${\cal S}_d$ forms a bell-shaped curve with an optimal maximum at their default values. Moreover, the population and individual behaviors of the sparsely synchronized rhythm of the GCs are found to be strongly correlated with the pattern separation degree ${\cal S}_d$. Consequently, the larger the synchronization and random phase-locking degrees of the sparsely synchronized rhythm are, the more pattern separation is enhanced.
[ { "created": "Mon, 27 Sep 2021 00:30:13 GMT", "version": "v1" } ]
2021-09-28
[ [ "Kim", "Sang-Yoon", "" ], [ "Lim", "Woochang", "" ] ]
We investigate the disynaptic effect of the hilar cells on pattern separation in a spiking neural network of the hippocampal dentate gyrus (DG). The principal granule cells (GCs) in the DG perform pattern separation, transforming similar input patterns into less-similar output patterns. The hilus consists of excitatory mossy cells (MCs) and inhibitory HIPP (hilar perforant path-associated) cells. Here, we consider the disynaptic effects of the MCs and the HIPP cells on the GCs, mediated by the inhibitory basket cells (BCs); MC $\rightarrow$ BC $\rightarrow$ GC and HIPP $\rightarrow$ BC $\rightarrow$ GC. By changing the synaptic strength $K^{\rm (BC, X)}$ (X = MC or HIPP) from the default value ${K^{\rm (BC, X)}}^*$, we study the change in the pattern separation degree ${\cal S}_d$. When decreasing $K^{\rm (BC, MC)}$ or independently increasing $K^{\rm (BC, HIPP)}$ from their default values, ${\cal S}_d$ is found to decrease (i.e., pattern separation is reduced). On the other hand, as $K^{\rm (BC, MC)}$ is increased or independently $K^{\rm (BC, HIPP)}$ is decreased from their default values, pattern separation becomes enhanced (i.e., ${\cal S}_d$ increases). In this way, the disynaptic effects of the MCs and the HIPP cells on pattern separation are opposite. Thus, when simultaneously varying both $K^{\rm (BC, MC)}$ and $K^{\rm (BC, HIPP)}$, as a result of the balance between the two competing disynaptic effects of the MCs and the HIPP cells, ${\cal S}_d$ forms a bell-shaped curve with an optimal maximum at their default values. Moreover, the population and individual behaviors of the sparsely synchronized rhythm of the GCs are found to be strongly correlated with the pattern separation degree ${\cal S}_d$. Consequently, the larger the synchronization and random phase-locking degrees of the sparsely synchronized rhythm are, the more enhanced the pattern separation becomes.
1409.7839
Elizabeth Jerison
Elizabeth R. Jerison, Sergey Kryazhimskiy, Michael M. Desai
Pleiotropic consequences of adaptation across gradations of environmental stress in budding yeast
null
null
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adaptation to one environment often results in fitness gains and losses in other conditions. To characterize how these consequences of adaptation depend on the physical similarity between environments, we evolved 180 populations of Saccharomyces cerevisiae at different degrees of stress induced by either salt, temperature, pH, or glucose depletion. We measure how the fitness of clones adapted to each environment depends on the intensity of the corresponding type of stress. We find that clones evolved in a given type and intensity of stress tend to gain fitness in other similar intensities of that stress, and lose progressively more fitness in more physically dissimilar environments. These fitness trade-offs are asymmetric: adaptation to permissive conditions incurred a smaller trade-off in stressful conditions than vice versa. We also find that fitnesses of clones are highly correlated across similar intensities of stress, but these correlations decay towards zero in more dissimilar environments. To interpret these results, we introduce the concept of a joint distribution of fitness effects of new mutations in multiple environments (the JDFE), which describes the probability that a given mutation has particular fitness effects in some set of conditions. We find that our observations are consistent with JDFEs that are highly correlated between physically similar environments, and that become less correlated and asymmetric as the environments become more dissimilar. The JDFE provides a framework for quantifying evolutionary similarity between conditions, and forms a useful basis for theoretical work aimed at predicting the outcomes of evolution in fluctuating environments.
[ { "created": "Sat, 27 Sep 2014 20:52:09 GMT", "version": "v1" } ]
2014-09-30
[ [ "Jerison", "Elizabeth R.", "" ], [ "Kryazhimskiy", "Sergey", "" ], [ "Desai", "Michael M.", "" ] ]
Adaptation to one environment often results in fitness gains and losses in other conditions. To characterize how these consequences of adaptation depend on the physical similarity between environments, we evolved 180 populations of Saccharomyces cerevisiae at different degrees of stress induced by either salt, temperature, pH, or glucose depletion. We measure how the fitness of clones adapted to each environment depends on the intensity of the corresponding type of stress. We find that clones evolved in a given type and intensity of stress tend to gain fitness in other similar intensities of that stress, and lose progressively more fitness in more physically dissimilar environments. These fitness trade-offs are asymmetric: adaptation to permissive conditions incurred a smaller trade-off in stressful conditions than vice versa. We also find that fitnesses of clones are highly correlated across similar intensities of stress, but these correlations decay towards zero in more dissimilar environments. To interpret these results, we introduce the concept of a joint distribution of fitness effects of new mutations in multiple environments (the JDFE), which describes the probability that a given mutation has particular fitness effects in some set of conditions. We find that our observations are consistent with JDFEs that are highly correlated between physically similar environments, and that become less correlated and asymmetric as the environments become more dissimilar. The JDFE provides a framework for quantifying evolutionary similarity between conditions, and forms a useful basis for theoretical work aimed at predicting the outcomes of evolution in fluctuating environments.
1507.07037
Carl Boettiger
Carl Boettiger, Michael Bode, James N. Sanchirico, Jacob LaRiviere, Alan Hastings, Paul R. Armsworth
Optimal management of a stochastically varying population when policy adjustment is costly
null
null
null
null
q-bio.PE
http://creativecommons.org/licenses/by/4.0/
Ecological systems are dynamic and policies to manage them need to respond to that variation. However, policy adjustments will sometimes be costly, which means that fine-tuning a policy to track variability in the environment very tightly will only sometimes be worthwhile. We use a classic fisheries management question -- how to manage a stochastically varying population using annually varying quotas in order to maximize profit -- to examine how costs of policy adjustment change optimal management recommendations. Costs of policy adjustment (here changes in fishing quotas through time) could take different forms. For example, these costs may respond to the size of the change being implemented, or there could be a fixed cost any time a quota change is made. We show how different forms of policy costs have contrasting implications for optimal policies. Though it is frequently assumed that costs to adjusting policies will dampen variation in the policy, we show that certain cost structures can actually increase variation through time. We further show that failing to account for adjustment costs has a consistently worse economic impact than would assuming these costs are present when they are not.
[ { "created": "Fri, 24 Jul 2015 22:35:45 GMT", "version": "v1" } ]
2015-07-28
[ [ "Boettiger", "Carl", "" ], [ "Bode", "Michael", "" ], [ "Sanchirico", "James N.", "" ], [ "LaRiviere", "Jacob", "" ], [ "Hastings", "Alan", "" ], [ "Armsworth", "Paul R.", "" ] ]
Ecological systems are dynamic and policies to manage them need to respond to that variation. However, policy adjustments will sometimes be costly, which means that fine-tuning a policy to track variability in the environment very tightly will only sometimes be worthwhile. We use a classic fisheries management question -- how to manage a stochastically varying population using annually varying quotas in order to maximize profit -- to examine how costs of policy adjustment change optimal management recommendations. Costs of policy adjustment (here changes in fishing quotas through time) could take different forms. For example, these costs may respond to the size of the change being implemented, or there could be a fixed cost any time a quota change is made. We show how different forms of policy costs have contrasting implications for optimal policies. Though it is frequently assumed that costs to adjusting policies will dampen variation in the policy, we show that certain cost structures can actually increase variation through time. We further show that failing to account for adjustment costs has a consistently worse economic impact than would assuming these costs are present when they are not.
q-bio/0412050
Carlos Escudero
Carlos Escudero
Chemotactic Collapse and Mesenchymal Morphogenesis
null
null
10.1103/PhysRevE.72.022903
null
q-bio.CB math.AP nlin.AO
null
We study the effect of chemotactic signaling among mesenchymal cells. We show that the particular physiology of the mesenchymal cells allows one-dimensional collapse, in contrast to the case of bacteria, and that mesenchymal morphogenesis thus represents a more complex type of pattern formation than that found in bacterial colonies. We finally compare our theoretical predictions with recent in vitro experiments.
[ { "created": "Thu, 30 Dec 2004 11:21:44 GMT", "version": "v1" } ]
2009-11-10
[ [ "Escudero", "Carlos", "" ] ]
We study the effect of chemotactic signaling among mesenchymal cells. We show that the particular physiology of the mesenchymal cells allows one-dimensional collapse, in contrast to the case of bacteria, and that mesenchymal morphogenesis thus represents a more complex type of pattern formation than that found in bacterial colonies. We finally compare our theoretical predictions with recent in vitro experiments.
1308.4256
Johannes Reiter
Benjamin M. Zagorsky, Johannes G. Reiter, Krishnendu Chatterjee, Martin A. Nowak
Forgiver triumphs in alternating Prisoner's Dilemma
null
null
10.1371/journal.pone.0080814
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cooperative behavior, where one individual incurs a cost to help another, is a widespread phenomenon. Here we study direct reciprocity in the context of the alternating Prisoner's Dilemma. We consider all strategies that can be implemented by one- and two-state automata. We calculate the payoff matrix of all pairwise encounters in the presence of noise. We explore deterministic selection dynamics with and without mutation. Using different error rates and payoff values, we observe convergence to a small number of distinct equilibria. Two of them are uncooperative strict Nash equilibria representing always-defect (ALLD) and Grim. The third equilibrium is mixed and represents a cooperative alliance of several strategies, dominated by a strategy which we call Forgiver. Forgiver cooperates whenever the opponent has cooperated; it defects once when the opponent has defected, but subsequently Forgiver attempts to re-establish cooperation even if the opponent has defected again. Forgiver is not an evolutionarily stable strategy, but the alliance, which it rules, is asymptotically stable. For a wide range of parameter values the most commonly observed outcome is convergence to the mixed equilibrium, dominated by Forgiver. Our results show that although forgiving might incur a short-term loss it can lead to a long-term gain. Forgiveness facilitates stable cooperation in the presence of exploitation and noise.
[ { "created": "Tue, 20 Aug 2013 08:47:57 GMT", "version": "v1" } ]
2014-03-05
[ [ "Zagorsky", "Benjamin M.", "" ], [ "Reiter", "Johannes G.", "" ], [ "Chatterjee", "Krishnendu", "" ], [ "Nowak", "Martin A.", "" ] ]
Cooperative behavior, where one individual incurs a cost to help another, is a widespread phenomenon. Here we study direct reciprocity in the context of the alternating Prisoner's Dilemma. We consider all strategies that can be implemented by one- and two-state automata. We calculate the payoff matrix of all pairwise encounters in the presence of noise. We explore deterministic selection dynamics with and without mutation. Using different error rates and payoff values, we observe convergence to a small number of distinct equilibria. Two of them are uncooperative strict Nash equilibria representing always-defect (ALLD) and Grim. The third equilibrium is mixed and represents a cooperative alliance of several strategies, dominated by a strategy which we call Forgiver. Forgiver cooperates whenever the opponent has cooperated; it defects once when the opponent has defected, but subsequently Forgiver attempts to re-establish cooperation even if the opponent has defected again. Forgiver is not an evolutionarily stable strategy, but the alliance, which it rules, is asymptotically stable. For a wide range of parameter values the most commonly observed outcome is convergence to the mixed equilibrium, dominated by Forgiver. Our results show that although forgiving might incur a short-term loss it can lead to a long-term gain. Forgiveness facilitates stable cooperation in the presence of exploitation and noise.
2005.10428
Gyubaek Shin
Gyubaek Shin (1) and Jin Wang (1 and 2) ((1) Department of Chemistry, SUNY Stony Brook, NY, USA, (2) Department of Physics and Astronomy, SUNY Stony Brook, NY, USA)
The Role of Energy Cost on Accuracy, Sensitivity, Specificity, Speed and Adaptation of T Cell Foreign and Self Recognition
13 pages, 14 figures
null
10.1039/D0CP02422H
null
q-bio.MN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The critical role of energy consumption in biological systems, including the T cell discrimination process, has been investigated in various ways. The kinetic proofreading (KPR) in T cell recognition involving different levels of energy dissipation influences functional outcomes such as error rates and specificity. In this work, we study quantitatively how the energy cost influences error fractions, sensitivity, specificity, kinetic speed in terms of Mean First Passage Time (MFPT), and adaptation errors. These provide the background to adequately understand T cell dynamics. It is found that energy plays a central role in the system that aims to achieve minimum error fractions and maximum sensitivity and specificity with the fastest speed under our kinetic scheme, for which numerical values of kinetic parameters are specially chosen, but such a condition can be broken with varying data. Starting with the application of the steady state approximation (SSA) to the evaluation of the concentration of each complex produced associated with KPR, which is used to quantify various observables, we present both analytical and numerical results in detail.
[ { "created": "Thu, 21 May 2020 02:27:43 GMT", "version": "v1" }, { "created": "Thu, 12 Nov 2020 00:12:49 GMT", "version": "v2" }, { "created": "Mon, 16 Nov 2020 17:01:05 GMT", "version": "v3" }, { "created": "Fri, 11 Dec 2020 05:55:05 GMT", "version": "v4" }, { "created": "Mon, 26 Apr 2021 12:58:09 GMT", "version": "v5" } ]
2021-04-27
[ [ "Shin", "Gyubaek", "", "1 and 2" ], [ "Wang", "Jin", "", "1 and 2" ] ]
The critical role of energy consumption in biological systems, including the T cell discrimination process, has been investigated in various ways. The kinetic proofreading (KPR) in T cell recognition involving different levels of energy dissipation influences functional outcomes such as error rates and specificity. In this work, we study quantitatively how the energy cost influences error fractions, sensitivity, specificity, kinetic speed in terms of Mean First Passage Time (MFPT), and adaptation errors. These provide the background to adequately understand T cell dynamics. It is found that energy plays a central role in the system that aims to achieve minimum error fractions and maximum sensitivity and specificity with the fastest speed under our kinetic scheme, for which numerical values of kinetic parameters are specially chosen, but such a condition can be broken with varying data. Starting with the application of the steady state approximation (SSA) to the evaluation of the concentration of each complex produced associated with KPR, which is used to quantify various observables, we present both analytical and numerical results in detail.
2002.01563
Kirill Shmilovich
Kirill Shmilovich, Rachael A. Mansbach, Hythem Sidky, Olivia E. Dunne, Sayak Subhra Panda, John D. Tovar, Andrew L. Ferguson
Discovery of Self-Assembling $\pi$-Conjugated Peptides by Active Learning-Directed Coarse-Grained Molecular Simulation
null
null
10.1021/acs.jpcb.0c00708
null
q-bio.BM cond-mat.soft cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electronically-active organic molecules have demonstrated great promise as novel soft materials for energy harvesting and transport. Self-assembled nanoaggregates formed from $\pi$-conjugated oligopeptides composed of an aromatic core flanked by oligopeptide wings offer emergent optoelectronic properties within a water soluble and biocompatible substrate. Nanoaggregate properties can be controlled by tuning core chemistry and peptide composition, but the sequence-structure-function relations remain poorly characterized. In this work, we employ coarse-grained molecular dynamics simulations within an active learning protocol employing deep representational learning and Bayesian optimization to efficiently identify molecules capable of assembling pseudo-1D nanoaggregates with good stacking of the electronically-active $\pi$-cores. We consider the DXXX-OPV3-XXXD oligopeptide family, where D is an Asp residue and OPV3 is an oligophenylene vinylene oligomer (1,4-distyrylbenzene), to identify the top performing XXX tripeptides within all 20$^3$ = 8,000 possible sequences. By direct simulation of only 2.3% of this space, we identify molecules predicted to exhibit superior assembly relative to those reported in prior work. Spectral clustering of the top candidates reveals new design rules governing assembly. This work establishes new understanding of DXXX-OPV3-XXXD assembly, identifies promising new candidates for experimental testing, and presents a computational design platform that can be generically extended to other peptide-based and peptide-like systems.
[ { "created": "Mon, 27 Jan 2020 00:01:21 GMT", "version": "v1" } ]
2022-05-12
[ [ "Shmilovich", "Kirill", "" ], [ "Mansbach", "Rachael A.", "" ], [ "Sidky", "Hythem", "" ], [ "Dunne", "Olivia E.", "" ], [ "Panda", "Sayak Subhra", "" ], [ "Tovar", "John D.", "" ], [ "Ferguson", "Andrew L.", "" ] ]
Electronically-active organic molecules have demonstrated great promise as novel soft materials for energy harvesting and transport. Self-assembled nanoaggregates formed from $\pi$-conjugated oligopeptides composed of an aromatic core flanked by oligopeptide wings offer emergent optoelectronic properties within a water soluble and biocompatible substrate. Nanoaggregate properties can be controlled by tuning core chemistry and peptide composition, but the sequence-structure-function relations remain poorly characterized. In this work, we employ coarse-grained molecular dynamics simulations within an active learning protocol employing deep representational learning and Bayesian optimization to efficiently identify molecules capable of assembling pseudo-1D nanoaggregates with good stacking of the electronically-active $\pi$-cores. We consider the DXXX-OPV3-XXXD oligopeptide family, where D is an Asp residue and OPV3 is an oligophenylene vinylene oligomer (1,4-distyrylbenzene), to identify the top performing XXX tripeptides within all 20$^3$ = 8,000 possible sequences. By direct simulation of only 2.3% of this space, we identify molecules predicted to exhibit superior assembly relative to those reported in prior work. Spectral clustering of the top candidates reveals new design rules governing assembly. This work establishes new understanding of DXXX-OPV3-XXXD assembly, identifies promising new candidates for experimental testing, and presents a computational design platform that can be generically extended to other peptide-based and peptide-like systems.
1305.6697
Tatiana T. Marquez-Lago
Tatiana T. Marquez-Lago and Pablo Padilla
A Selection Criterion for Patterns in Reaction-Diffusion Systems
19 pages, 10 figures
Theoretical Biology and Medical Modelling 2014, 11:7
10.1186/1742-4682-11-7
null
q-bio.QM cond-mat.stat-mech
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Alan Turing's work in Morphogenesis has received wide attention during the past 60 years. The central idea behind his theory is that two chemically interacting diffusible substances are able to generate stable spatial patterns, provided certain conditions are met. Turing's proposal has already been confirmed as a pattern formation mechanism in several chemical and biological systems and, due to their wide applicability, there is a great deal of interest in deciphering how to generate specific patterns under controlled conditions. However, techniques allowing one to predict what kind of spatial structure will emerge from Turing systems, as well as generalized reaction-diffusion systems, remain unknown. Here, we consider a generalized reaction diffusion system on a planar domain and provide an analytic criterion to determine whether spots or stripes will be formed. It is motivated by the existence of an associated energy function that allows bringing in the intuition provided by phase transitions phenomena. This criterion is proved rigorously in some situations, generalizing well known results for the scalar equation where the pattern selection process can be understood in terms of a potential. In more complex settings it is investigated numerically. Our criterion can be applied to efficiently design Biotechnology and Developmental Biology experiments, or simplify the analysis of hypothesized morphogenetic models.
[ { "created": "Wed, 29 May 2013 05:50:07 GMT", "version": "v1" } ]
2014-07-29
[ [ "Marquez-Lago", "Tatiana T.", "" ], [ "Padilla", "Pablo", "" ] ]
Alan Turing's work in Morphogenesis has received wide attention during the past 60 years. The central idea behind his theory is that two chemically interacting diffusible substances are able to generate stable spatial patterns, provided certain conditions are met. Turing's proposal has already been confirmed as a pattern formation mechanism in several chemical and biological systems and, due to their wide applicability, there is a great deal of interest in deciphering how to generate specific patterns under controlled conditions. However, techniques allowing one to predict what kind of spatial structure will emerge from Turing systems, as well as generalized reaction-diffusion systems, remain unknown. Here, we consider a generalized reaction diffusion system on a planar domain and provide an analytic criterion to determine whether spots or stripes will be formed. It is motivated by the existence of an associated energy function that allows bringing in the intuition provided by phase transitions phenomena. This criterion is proved rigorously in some situations, generalizing well known results for the scalar equation where the pattern selection process can be understood in terms of a potential. In more complex settings it is investigated numerically. Our criterion can be applied to efficiently design Biotechnology and Developmental Biology experiments, or simplify the analysis of hypothesized morphogenetic models.
2209.15419
Louis Omenyi Dr
Louis Omenyi, Aloysius Ezaka, Henry O. Adagba, Gerald Ozoigbo, Kafayat Elebute
A sensitivity analysis of a gonorrhoea dynamics and control model
Journal of Mathematical and Computational Science, 2022
null
null
null
q-bio.PE math.DS
http://creativecommons.org/licenses/by/4.0/
We formulate and analyse a robust mathematical model of the dynamics of gonorrhoea incorporating passive immunity and control. Our results show that the disease-free and endemic equilibria of the model are both locally and globally asymptotically stable. A sensitivity analysis of the model shows that the dynamics of the model is variable and dependent on waning rate, control parameters and interaction of the latent and infected classes. In particular, the lower the waning rate, the more the exponential decrease in the passive immunity but the susceptible population increases to the equilibrium and wanes asymptotically due to the presence of the control parameters and restricted interaction of the latent and infected classes.
[ { "created": "Sat, 17 Sep 2022 21:47:35 GMT", "version": "v1" } ]
2022-10-03
[ [ "Omenyi", "Louis", "" ], [ "Ezaka", "Aloysius", "" ], [ "Adagba", "Henry O.", "" ], [ "Ozoigbo", "Gerald", "" ], [ "Elebute", "Kafayat", "" ] ]
We formulate and analyse a robust mathematical model of the dynamics of gonorrhoea incorporating passive immunity and control. Our results show that the disease-free and endemic equilibria of the model are both locally and globally asymptotically stable. A sensitivity analysis of the model shows that the dynamics of the model is variable and dependent on waning rate, control parameters and interaction of the latent and infected classes. In particular, the lower the waning rate, the more the exponential decrease in the passive immunity but the susceptible population increases to the equilibrium and wanes asymptotically due to the presence of the control parameters and restricted interaction of the latent and infected classes.
1206.6768
Richard A Neher
Philipp W. Messer and Richard A. Neher
Estimating the Strength of Selective Sweeps from Deep Population Diversity Data
null
Genetics 191:593-605, 2012
null
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Selective sweeps are typically associated with a local reduction of genetic diversity around the adaptive site. However, selective sweeps can also quickly carry neutral mutations to observable population frequencies if they arise early in a sweep and hitchhike with the adaptive allele. We show that the interplay between mutation and exponential amplification through hitchhiking results in a characteristic frequency spectrum of the resulting novel haplotype variation that depends only on the ratio of the mutation rate and the selection coefficient of the sweep. Based on this result, we develop an estimator for the selection coefficient driving a sweep. Since this estimator utilizes the novel variation arising from mutations during a sweep, it does not rely on preexisting variation and can also be applied to loci that lack recombination. Compared with standard approaches that infer selection coefficients from the size of dips in genetic diversity around the adaptive site, our estimator requires much shorter sequences but sampled at high population depth in order to capture low-frequency variants; given such data, it consistently outperforms standard approaches. We investigate analytically and numerically how the accuracy of our estimator is affected by the decay of the sweep pattern over time as a consequence of random genetic drift and discuss potential effects of recombination, soft sweeps, and demography. As an example for its use, we apply our estimator to deep sequencing data from HIV populations.
[ { "created": "Thu, 28 Jun 2012 17:28:06 GMT", "version": "v1" } ]
2012-06-29
[ [ "Messer", "Philipp W.", "" ], [ "Neher", "Richard A.", "" ] ]
Selective sweeps are typically associated with a local reduction of genetic diversity around the adaptive site. However, selective sweeps can also quickly carry neutral mutations to observable population frequencies if they arise early in a sweep and hitchhike with the adaptive allele. We show that the interplay between mutation and exponential amplification through hitchhiking results in a characteristic frequency spectrum of the resulting novel haplotype variation that depends only on the ratio of the mutation rate and the selection coefficient of the sweep. Based on this result, we develop an estimator for the selection coefficient driving a sweep. Since this estimator utilizes the novel variation arising from mutations during a sweep, it does not rely on preexisting variation and can also be applied to loci that lack recombination. Compared with standard approaches that infer selection coefficients from the size of dips in genetic diversity around the adaptive site, our estimator requires much shorter sequences but sampled at high population depth in order to capture low-frequency variants; given such data, it consistently outperforms standard approaches. We investigate analytically and numerically how the accuracy of our estimator is affected by the decay of the sweep pattern over time as a consequence of random genetic drift and discuss potential effects of recombination, soft sweeps, and demography. As an example for its use, we apply our estimator to deep sequencing data from HIV populations.
1910.00582
Xi Yang
Xi Yang, Yan Gong, Nida Waheed, Keith March, Jiang Bian, William R. Hogan, Yonghui Wu
Identifying Cancer Patients at Risk for Heart Failure Using Machine Learning Methods
6 pages, 1 figure, 3 tables, accepted by AMIA 2019
AMIA Annu Symp Proc (2019) 933-941
null
null
q-bio.QM cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cardiotoxicity related to cancer therapies has become a serious issue, diminishing cancer treatment outcomes and quality of life. Early detection of cancer patients at risk for cardiotoxicity before cardiotoxic treatments and providing preventive measures are potential solutions to improve cancer patients' quality of life. This study focuses on predicting the development of heart failure in cancer patients after cancer diagnoses using historical electronic health record (EHR) data. We examined four machine learning algorithms using 143,199 cancer patients from the University of Florida Health (UF Health) Integrated Data Repository (IDR). We identified a total number of 1,958 qualified cases and matched them to 15,488 controls by gender, age, race, and major cancer type. Two feature encoding strategies were compared to encode variables as machine learning features. The gradient boosting (GB) based model achieved the best AUC score of 0.9077 (with a sensitivity of 0.8520 and a specificity of 0.8138), outperforming other machine learning methods. We also looked into the subgroup of cancer patients with exposure to chemotherapy drugs and observed a lower specificity score (0.7089). The experimental results show that machine learning methods are able to capture clinical factors that are known to be associated with heart failure and that it is feasible to use machine learning methods to identify cancer patients at risk for cancer therapy-related heart failure.
[ { "created": "Tue, 1 Oct 2019 12:13:04 GMT", "version": "v1" } ]
2020-05-21
[ [ "Yang", "Xi", "" ], [ "Gong", "Yan", "" ], [ "Waheed", "Nida", "" ], [ "March", "Keith", "" ], [ "Bian", "Jiang", "" ], [ "Hogan", "William R.", "" ], [ "Wu", "Yonghui", "" ] ]
Cardiotoxicity related to cancer therapies has become a serious issue, diminishing cancer treatment outcomes and quality of life. Early detection of cancer patients at risk for cardiotoxicity before cardiotoxic treatments and providing preventive measures are potential solutions to improve cancer patients' quality of life. This study focuses on predicting the development of heart failure in cancer patients after cancer diagnoses using historical electronic health record (EHR) data. We examined four machine learning algorithms using 143,199 cancer patients from the University of Florida Health (UF Health) Integrated Data Repository (IDR). We identified a total number of 1,958 qualified cases and matched them to 15,488 controls by gender, age, race, and major cancer type. Two feature encoding strategies were compared to encode variables as machine learning features. The gradient boosting (GB) based model achieved the best AUC score of 0.9077 (with a sensitivity of 0.8520 and a specificity of 0.8138), outperforming other machine learning methods. We also looked into the subgroup of cancer patients with exposure to chemotherapy drugs and observed a lower specificity score (0.7089). The experimental results show that machine learning methods are able to capture clinical factors that are known to be associated with heart failure and that it is feasible to use machine learning methods to identify cancer patients at risk for cancer therapy-related heart failure.
1507.07209
Mareike Fischer
Mareike Fischer, Martin Kreidl
Non-hereditary Minimum Deep Coalescence trees
21 pages, 3 figures, 5 tables
null
null
null
q-bio.PE math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the goals of phylogenetic research is to find the species tree describing the evolutionary history of a set of species. But the trees derived from genetic data with the help of tree inference methods are gene trees that need not coincide with the species tree. This can for example happen when so-called deep coalescence events take place. It is also known that species trees can differ from their most likely gene trees. Therefore, as a means to find the species tree, it has been suggested to use subtrees of the gene trees, for example triples, and to puzzle them together in order to find the species tree. In this paper, we will show that this approach may lead to wrong trees regarding the minimum deep coalescence criterion (MDC). In particular, we present an example in which the optimal MDC tree is unique, but none of its triple subtrees fulfills the MDC criterion. In this sense, MDC is a non-hereditary tree reconstruction method.
[ { "created": "Sun, 26 Jul 2015 15:18:58 GMT", "version": "v1" } ]
2015-07-28
[ [ "Fischer", "Mareike", "" ], [ "Kreidl", "Martin", "" ] ]
One of the goals of phylogenetic research is to find the species tree describing the evolutionary history of a set of species. But the trees derived from genetic data with the help of tree inference methods are gene trees that need not coincide with the species tree. This can for example happen when so-called deep coalescence events take place. It is also known that species trees can differ from their most likely gene trees. Therefore, as a means to find the species tree, it has been suggested to use subtrees of the gene trees, for example triples, and to puzzle them together in order to find the species tree. In this paper, we will show that this approach may lead to wrong trees regarding the minimum deep coalescence criterion (MDC). In particular, we present an example in which the optimal MDC tree is unique, but none of its triple subtrees fulfills the MDC criterion. In this sense, MDC is a non-hereditary tree reconstruction method.
1906.02598
Sergey Ovchinnikov
Justas Dauparas, Haobo Wang, Avi Swartz, Peter Koo, Mor Nitzan, Sergey Ovchinnikov
Unified framework for modeling multivariate distributions in biological sequences
2019 ICML Workshop on Computational Biology
null
null
null
q-bio.QM
http://creativecommons.org/licenses/by-nc-sa/4.0/
Revealing the functional sites of biological sequences, such as evolutionarily conserved, structurally interacting or co-evolving protein sites, is a fundamental, and yet challenging task. Different frameworks and models were developed to approach this challenge, including Position-Specific Scoring Matrices, Markov Random Fields, Multivariate Gaussian models and most recently Autoencoders. Each of these methods has certain advantages, and while they have generated a set of insights for better biological predictions, these have been restricted to the corresponding methods and were difficult to translate to the complementary domains. Here we propose a unified framework for the above-mentioned models, that allows for interpretable transformations between the different methods and naturally incorporates the advantages and insight gained individually in the different communities. We show how, by using the unified framework, we are able to achieve state-of-the-art performance for protein structure prediction, while enhancing interpretability of the prediction process.
[ { "created": "Thu, 6 Jun 2019 14:05:22 GMT", "version": "v1" } ]
2019-06-07
[ [ "Dauparas", "Justas", "" ], [ "Wang", "Haobo", "" ], [ "Swartz", "Avi", "" ], [ "Koo", "Peter", "" ], [ "Nitzan", "Mor", "" ], [ "Ovchinnikov", "Sergey", "" ] ]
Revealing the functional sites of biological sequences, such as evolutionarily conserved, structurally interacting or co-evolving protein sites, is a fundamental, and yet challenging task. Different frameworks and models were developed to approach this challenge, including Position-Specific Scoring Matrices, Markov Random Fields, Multivariate Gaussian models and most recently Autoencoders. Each of these methods has certain advantages, and while they have generated a set of insights for better biological predictions, these have been restricted to the corresponding methods and were difficult to translate to the complementary domains. Here we propose a unified framework for the above-mentioned models, that allows for interpretable transformations between the different methods and naturally incorporates the advantages and insight gained individually in the different communities. We show how, by using the unified framework, we are able to achieve state-of-the-art performance for protein structure prediction, while enhancing interpretability of the prediction process.
2004.03059
Brittany H Scheid
Brittany H. Scheid, Arian Ashourvan, Jennifer Stiso, Kathryn A. Davis, Fadi Mikhail, Fabio Pasqualetti, Brian Litt, Danielle S. Bassett
Time-evolving controllability of effective connectivity networks during seizure progression
Main text is 7 pages and contains 4 figures, supplement is 17 pages and contains 9 figures and one table
PNAS 2021 Vol. 118 No. 5
10.1073/pnas.2006436118
null
q-bio.NC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Over one third of the estimated 3 million people with epilepsy in the US are medication resistant. Responsive neurostimulation from chronically implanted electrodes provides a promising treatment option and alternative to resective surgery. However, determining personalized optimal stimulation parameters, including when and where to intervene to guarantee a positive patient outcome, is a major open challenge. Network neuroscience and control theory offer useful tools that may guide improvements in parameter selection for control of anomalous neural activity. Here we use a novel method to characterize dynamic controllability across consecutive effective connectivity (EC) networks based on regularized partial correlations between implanted electrodes during the onset, propagation, and termination phases of thirty-four seizures. We estimate regularized partial correlation adjacency matrices from one-second time windows of intracranial electrocorticography recordings using the Graphical Least Absolute Shrinkage and Selection Operator (GLASSO). Average and modal controllability metrics calculated from each resulting EC network track the time-varying controllability of the brain on an evolving landscape of conditionally dependent network interactions. We show that average controllability increases throughout a seizure and is negatively correlated with modal controllability throughout. Furthermore, our results support the hypothesis that the energy required to drive the brain to a seizure-free state from an ictal state is smallest during seizure onset; yet, we find that applying control energy at electrodes in the seizure onset zone may not always be energetically favorable. Our work suggests that a low-complexity model of time-evolving controllability may offer new insights for developing and improving control strategies targeting seizure suppression.
[ { "created": "Tue, 7 Apr 2020 00:57:25 GMT", "version": "v1" } ]
2021-02-03
[ [ "Scheid", "Brittany H.", "" ], [ "Ashourvan", "Arian", "" ], [ "Stiso", "Jennifer", "" ], [ "Davis", "Kathryn A.", "" ], [ "Mikhail", "Fadi", "" ], [ "Pasqualetti", "Fabio", "" ], [ "Litt", "Brian", "" ], [ "Bassett", "Danielle S.", "" ] ]
Over one third of the estimated 3 million people with epilepsy in the US are medication resistant. Responsive neurostimulation from chronically implanted electrodes provides a promising treatment option and alternative to resective surgery. However, determining personalized optimal stimulation parameters, including when and where to intervene to guarantee a positive patient outcome, is a major open challenge. Network neuroscience and control theory offer useful tools that may guide improvements in parameter selection for control of anomalous neural activity. Here we use a novel method to characterize dynamic controllability across consecutive effective connectivity (EC) networks based on regularized partial correlations between implanted electrodes during the onset, propagation, and termination phases of thirty-four seizures. We estimate regularized partial correlation adjacency matrices from one-second time windows of intracranial electrocorticography recordings using the Graphical Least Absolute Shrinkage and Selection Operator (GLASSO). Average and modal controllability metrics calculated from each resulting EC network track the time-varying controllability of the brain on an evolving landscape of conditionally dependent network interactions. We show that average controllability increases throughout a seizure and is negatively correlated with modal controllability throughout. Furthermore, our results support the hypothesis that the energy required to drive the brain to a seizure-free state from an ictal state is smallest during seizure onset; yet, we find that applying control energy at electrodes in the seizure onset zone may not always be energetically favorable. Our work suggests that a low-complexity model of time-evolving controllability may offer new insights for developing and improving control strategies targeting seizure suppression.
2307.16283
Chandre Dharma-wardana
M. W. C. Dharma-wardana (NRC Canada), Parakrama Waidyanatha, K. A. Renuka, D. Sumith de S. Abeysiriwardena, Buddhi Marambe
A critical examination of crop-yield data for vegetables, maize and Tea for commercialized Sri Lankan biofilm biofertilizers
19 pages, figures and tables
null
null
null
q-bio.OT
http://creativecommons.org/licenses/by/4.0/
With increasing global interest in microbial methods for agriculture, the commercialization of biofertilizers in Sri Lanka is of general interest. The use of a biofilm-biofertilizer (BFBF) commercialized in Sri Lanka is claimed to reduce chemical fertilizer (CF) usage by ~50 per cent while boosting harvest by 20-30 per cent. Many countries have explored the potential of biofilm biofertilizers, but have so far found mixed results. Here we review this BFBF commercialized in Sri Lanka and approved for nation-wide use there. We show in detail that the improved yields claimed for this BFBF fall within the uncertainties (error bars) of the harvest. Theoretical models that produce a seemingly reduced CF scenario with an "increase" in harvests, although this is in fact not so, are presented. While BFBF usage seems to improve soil quality in certain respects, the currently available BFBF promoted in Sri Lanka has negligible impact on crop yields. We also briefly consider the potential negative effects of large-scale adoption of microbial methods.
[ { "created": "Sun, 30 Jul 2023 17:39:00 GMT", "version": "v1" } ]
2023-08-01
[ [ "Dharma-wardana", "M. W. C.", "", "NRC Canada" ], [ "Waidyanatha", "Parakrama", "" ], [ "Renuka", "K. A.", "" ], [ "Abeysiriwardena", "D. Sumith de S.", "" ], [ "Marambe", "Buddhi", "" ] ]
With increasing global interest in microbial methods for agriculture, the commercialization of biofertilizers in Sri Lanka is of general interest. The use of a biofilm-biofertilizer (BFBF) commercialized in Sri Lanka is claimed to reduce chemical fertilizer (CF) usage by ~50 per cent while boosting harvest by 20-30 per cent. Many countries have explored the potential of biofilm biofertilizers, but have so far found mixed results. Here we review this BFBF commercialized in Sri Lanka and approved for nation-wide use there. We show in detail that the improved yields claimed for this BFBF fall within the uncertainties (error bars) of the harvest. Theoretical models that produce a seemingly reduced CF scenario with an "increase" in harvests, although this is in fact not so, are presented. While BFBF usage seems to improve soil quality in certain respects, the currently available BFBF promoted in Sri Lanka has negligible impact on crop yields. We also briefly consider the potential negative effects of large-scale adoption of microbial methods.
1710.07577
Jan Korvink
Jan G. Korvink, Neil MacKinnon
Microscale magnetic resonance detectors: a technology roadmap for in vivo metabolomics
Submission to the Sensor Symposium, Hiroshima, 2017
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the great challenges in biology is to observe, in sufficient detail, the real-time workings of the cell. Many methods exist to do cell measurements invasively. For example, mass spectrometry has tremendous mass sensitivity but destroys the cell. Molecular tagging can reveal exquisite detail using STED microscopy, but is currently neither relevant for a large number of different molecules, nor is it applicable to very small molecules. For marker-free non-invasive measurements, only magnetic resonance has sufficient molecular specificity, but the technique suffers from low sensitivity and resolution. In this presentation we will consider the roadmap for achieving in vivo metabolomic measurements with more sensitivity and resolution. The roadmap will point towards the technological advances that are necessary for magnetic resonance microscopy to answer questions relevant to cell biology.
[ { "created": "Fri, 20 Oct 2017 15:32:32 GMT", "version": "v1" } ]
2017-10-23
[ [ "Korvink", "Jan G.", "" ], [ "MacKinnon", "Neil", "" ] ]
One of the great challenges in biology is to observe, in sufficient detail, the real-time workings of the cell. Many methods exist to do cell measurements invasively. For example, mass spectrometry has tremendous mass sensitivity but destroys the cell. Molecular tagging can reveal exquisite detail using STED microscopy, but is currently neither relevant for a large number of different molecules, nor is it applicable to very small molecules. For marker-free non-invasive measurements, only magnetic resonance has sufficient molecular specificity, but the technique suffers from low sensitivity and resolution. In this presentation we will consider the roadmap for achieving in vivo metabolomic measurements with more sensitivity and resolution. The roadmap will point towards the technological advances that are necessary for magnetic resonance microscopy to answer questions relevant to cell biology.
2007.14542
Robert Cockrell
Dale Larie, Gary An, and Chase Cockrell
Artificial neural networks for disease trajectory prediction in the context of sepsis
null
null
null
null
q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The disease trajectory for clinical sepsis, in terms of temporal cytokine and phenotypic dynamics, can be interpreted as a random dynamical system. The ability to make accurate predictions about patient state from clinical measurements has eluded the biomedical community, primarily due to the paucity of relevant and high-resolution data. We have utilized two distinct neural network architectures, Long Short-Term Memory and Multi-Layer Perceptron, to take a time sequence of five measurements of eleven simulated serum cytokine concentrations as input and to return both the future cytokine trajectories as well as an aggregate metric representing the patient's state of health. The neural networks converged within 50 epochs for cytokine trajectory predictions and health-metric regressions, with the expected amount of error (due to stochasticity in the simulation). The mapping from a specific cytokine profile to a state-of-health is not unique, and increased levels of inflammation result in less accurate predictions. Due to the propagation of machine learning error combined with computational model stochasticity over time, the network should be re-grounded in reality daily as predictions can diverge from the true model trajectory as the system evolves towards a probabilistic basin of attraction. This work serves as a proof-of-concept for the use of artificial neural networks to predict disease progression in sepsis. This work is not intended to replace a trained clinician, rather the goal is to augment intuition with quantifiable statistical information to help them make the best decisions. We note that this relies on a valid computational model of the system in question as there does not exist sufficient data to inform a machine-learning trained, artificially intelligent, controller.
[ { "created": "Wed, 29 Jul 2020 00:54:20 GMT", "version": "v1" } ]
2020-07-30
[ [ "Larie", "Dale", "" ], [ "An", "Gary", "" ], [ "Cockrell", "Chase", "" ] ]
The disease trajectory for clinical sepsis, in terms of temporal cytokine and phenotypic dynamics, can be interpreted as a random dynamical system. The ability to make accurate predictions about patient state from clinical measurements has eluded the biomedical community, primarily due to the paucity of relevant and high-resolution data. We have utilized two distinct neural network architectures, Long Short-Term Memory and Multi-Layer Perceptron, to take a time sequence of five measurements of eleven simulated serum cytokine concentrations as input and to return both the future cytokine trajectories as well as an aggregate metric representing the patient's state of health. The neural networks converged within 50 epochs for cytokine trajectory predictions and health-metric regressions, with the expected amount of error (due to stochasticity in the simulation). The mapping from a specific cytokine profile to a state-of-health is not unique, and increased levels of inflammation result in less accurate predictions. Due to the propagation of machine learning error combined with computational model stochasticity over time, the network should be re-grounded in reality daily as predictions can diverge from the true model trajectory as the system evolves towards a probabilistic basin of attraction. This work serves as a proof-of-concept for the use of artificial neural networks to predict disease progression in sepsis. This work is not intended to replace a trained clinician, rather the goal is to augment intuition with quantifiable statistical information to help them make the best decisions. We note that this relies on a valid computational model of the system in question as there does not exist sufficient data to inform a machine-learning trained, artificially intelligent, controller.
q-bio/0703049
Karen Alim
Karen Alim, Erwin Frey
Shapes of Semiflexible Polymer Rings
4 pages, 3 figures, Version as published in Phys. Rev. Lett
Phys. Rev. Lett. 99, 198102 (2007)
10.1103/PhysRevLett.99.198102
LMU-ASC 17/07
q-bio.BM cond-mat.soft
null
The shape of semiflexible polymer rings is studied over their whole range of flexibility. Investigating the joint distribution of asphericity and nature of asphericity as well as their respective averages we find two distinct shape regimes depending on the flexibility of the polymer. For small perimeter to persistence length the fluctuating rings exhibit only planar, elliptical configurations. At higher flexibilities three dimensional, crumpled structures arise. Analytic calculations for tight polymer rings confirm an elliptical shape in the stiff regime.
[ { "created": "Thu, 22 Mar 2007 19:33:03 GMT", "version": "v1" }, { "created": "Thu, 7 Feb 2008 12:39:01 GMT", "version": "v2" } ]
2008-02-07
[ [ "Alim", "Karen", "" ], [ "Frey", "Erwin", "" ] ]
The shape of semiflexible polymer rings is studied over their whole range of flexibility. Investigating the joint distribution of asphericity and nature of asphericity as well as their respective averages we find two distinct shape regimes depending on the flexibility of the polymer. For small perimeter to persistence length the fluctuating rings exhibit only planar, elliptical configurations. At higher flexibilities three dimensional, crumpled structures arise. Analytic calculations for tight polymer rings confirm an elliptical shape in the stiff regime.
2006.15268
Marcelo Menezes Morato
Saulo B. Bastos and Marcelo M. Morato and Daniel O. Cajueiro and Julio E. Normey-Rico
The COVID-19 (SARS-CoV-2) Uncertainty Tripod in Brazil: Assessments on model-based predictions with large under-reporting
Pre-Print, 26 Pages, 11 Figures, 7 Tables
null
null
null
q-bio.PE math.DS physics.soc-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
The COVID-19 pandemic (SARS-CoV-2 virus) is the defining global health crisis of our time. The absence of mass testing and the significant presence of asymptomatic individuals causes the available data of the COVID-19 pandemic in Brazil to be largely under-reported regarding the number of infected individuals and deaths. We propose an adapted Susceptible-Infected-Recovered (SIR) model which explicitly incorporates the under-reporting and the response of the population to public policies (such as confinement measures, widespread use of masks, etc.) to cast short-term and long-term predictions. Large amounts of uncertainty could provide misleading models and predictions. In this paper, we discuss the role of uncertainty in these predictions, which we illustrate regarding three key aspects. First, assuming that the number of infected individuals is under-reported, we demonstrate an anticipation regarding the peak of infection. Furthermore, while a model with a single class of infected individuals yields forecasts with increased peaks, a model that considers both symptomatic and asymptomatic infected individuals suggests a decrease of the peak of symptomatic infections. Second, considering that the actual number of deaths is larger than what is being registered, we demonstrate the increase of the mortality rates. Third, when considering generally under-reported data, we demonstrate how the transmission and recovery rate model parameters change qualitatively and quantitatively. We also investigate the effect of the "COVID-19 under-reporting tripod", i.e. the under-reporting in terms of infected individuals, of deaths and the true mortality rate. If two of these factors are known, the remainder can be inferred, as long as proportions are kept constant. The proposed approach allows one to determine the margins of uncertainty by assessments on the observed and true mortality rates.
[ { "created": "Sat, 27 Jun 2020 03:11:17 GMT", "version": "v1" } ]
2020-06-30
[ [ "Bastos", "Saulo B.", "" ], [ "Morato", "Marcelo M.", "" ], [ "Normey-Rico", "Daniel O. Cajueiro anda Julio E", "" ] ]
The COVID-19 pandemic (SARS-CoV-2 virus) is the defining global health crisis of our time. The absence of mass testing and the significant presence of asymptomatic individuals causes the available data of the COVID-19 pandemic in Brazil to be largely under-reported regarding the number of infected individuals and deaths. We propose an adapted Susceptible-Infected-Recovered (SIR) model which explicitly incorporates the under-reporting and the response of the population to public policies (such as confinement measures, widespread use of masks, etc.) to cast short-term and long-term predictions. Large amounts of uncertainty could provide misleading models and predictions. In this paper, we discuss the role of uncertainty in these predictions, which we illustrate regarding three key aspects. First, assuming that the number of infected individuals is under-reported, we demonstrate an anticipation regarding the peak of infection. Furthermore, while a model with a single class of infected individuals yields forecasts with increased peaks, a model that considers both symptomatic and asymptomatic infected individuals suggests a decrease of the peak of symptomatic infections. Second, considering that the actual number of deaths is larger than what is being registered, we demonstrate the increase of the mortality rates. Third, when considering generally under-reported data, we demonstrate how the transmission and recovery rate model parameters change qualitatively and quantitatively. We also investigate the effect of the "COVID-19 under-reporting tripod", i.e. the under-reporting in terms of infected individuals, of deaths and the true mortality rate. If two of these factors are known, the remainder can be inferred, as long as proportions are kept constant. The proposed approach allows one to determine the margins of uncertainty by assessments on the observed and true mortality rates.
1701.01531
Evelyn Tang
Evelyn Tang and Danielle S. Bassett
Control of Dynamics in Brain Networks
Intended for a Colloquium in Rev. Mod. Phys
null
10.1103/RevModPhys.90.031003
null
q-bio.QM cond-mat.dis-nn cond-mat.soft physics.bio-ph q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to effectively control brain dynamics holds great promise for the enhancement of cognitive function in humans, and the betterment of their quality of life. Yet, successfully controlling dynamics in neural systems is challenging, in part due to the immense complexity of the brain and the large set of interactions that can drive any single change. While we have gained some understanding of the control of single neurons, the control of large-scale neural systems -- networks of multiply interacting components -- remains poorly understood. Efforts to address this gap include the construction of tools for the control of brain networks, mostly adapted from control and dynamical systems theory. Informed by current opportunities for practical intervention, these theoretical contributions provide models that draw from a wide array of mathematical approaches. We present intriguing recent developments for effective strategies of control in dynamic brain networks, and we also describe potential mechanisms that underlie such processes. We review efforts in the control of general neurophysiological processes with implications for brain development and cognitive function, as well as the control of altered neurophysiological processes in medical contexts such as anesthesia administration, seizure suppression, and deep-brain stimulation for Parkinson's disease. We conclude with a forward-looking discussion regarding how emerging results from network control -- especially approaches that deal with nonlinear dynamics or more realistic trajectories for control transitions -- could be used to directly address pressing questions in neuroscience.
[ { "created": "Fri, 6 Jan 2017 03:14:45 GMT", "version": "v1" }, { "created": "Thu, 19 Jan 2017 15:17:18 GMT", "version": "v2" }, { "created": "Mon, 14 May 2018 22:52:03 GMT", "version": "v3" } ]
2018-08-29
[ [ "Tang", "Evelyn", "" ], [ "Bassett", "Danielle S.", "" ] ]
The ability to effectively control brain dynamics holds great promise for the enhancement of cognitive function in humans, and the betterment of their quality of life. Yet, successfully controlling dynamics in neural systems is challenging, in part due to the immense complexity of the brain and the large set of interactions that can drive any single change. While we have gained some understanding of the control of single neurons, the control of large-scale neural systems -- networks of multiply interacting components -- remains poorly understood. Efforts to address this gap include the construction of tools for the control of brain networks, mostly adapted from control and dynamical systems theory. Informed by current opportunities for practical intervention, these theoretical contributions provide models that draw from a wide array of mathematical approaches. We present intriguing recent developments for effective strategies of control in dynamic brain networks, and we also describe potential mechanisms that underlie such processes. We review efforts in the control of general neurophysiological processes with implications for brain development and cognitive function, as well as the control of altered neurophysiological processes in medical contexts such as anesthesia administration, seizure suppression, and deep-brain stimulation for Parkinson's disease. We conclude with a forward-looking discussion regarding how emerging results from network control -- especially approaches that deal with nonlinear dynamics or more realistic trajectories for control transitions -- could be used to directly address pressing questions in neuroscience.
2301.10814
Wengong Jin
Wengong Jin, Siranush Sarkizova, Xun Chen, Nir Hacohen, Caroline Uhler
Unsupervised Protein-Ligand Binding Energy Prediction via Neural Euler's Rotation Equation
null
null
null
null
q-bio.BM cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Protein-ligand binding prediction is a fundamental problem in AI-driven drug discovery. Prior work focused on supervised learning methods using a large set of binding affinity data for small molecules, but it is hard to apply the same strategy to other drug classes like antibodies as labelled data is limited. In this paper, we explore unsupervised approaches and reformulate binding energy prediction as a generative modeling task. Specifically, we train an energy-based model on a set of unlabelled protein-ligand complexes using SE(3) denoising score matching and interpret its log-likelihood as binding affinity. Our key contribution is a new equivariant rotation prediction network called Neural Euler's Rotation Equations (NERE) for SE(3) score matching. It predicts a rotation by modeling the force and torque between protein and ligand atoms, where the force is defined as the gradient of an energy function with respect to atom coordinates. We evaluate NERE on protein-ligand and antibody-antigen binding affinity prediction benchmarks. Our model outperforms all unsupervised baselines (physics-based and statistical potentials) and matches supervised learning methods in the antibody case.
[ { "created": "Wed, 25 Jan 2023 20:33:51 GMT", "version": "v1" }, { "created": "Tue, 12 Dec 2023 22:17:35 GMT", "version": "v2" } ]
2023-12-14
[ [ "Jin", "Wengong", "" ], [ "Sarkizova", "Siranush", "" ], [ "Chen", "Xun", "" ], [ "Hacohen", "Nir", "" ], [ "Uhler", "Caroline", "" ] ]
Protein-ligand binding prediction is a fundamental problem in AI-driven drug discovery. Prior work focused on supervised learning methods using a large set of binding affinity data for small molecules, but it is hard to apply the same strategy to other drug classes like antibodies as labelled data is limited. In this paper, we explore unsupervised approaches and reformulate binding energy prediction as a generative modeling task. Specifically, we train an energy-based model on a set of unlabelled protein-ligand complexes using SE(3) denoising score matching and interpret its log-likelihood as binding affinity. Our key contribution is a new equivariant rotation prediction network called Neural Euler's Rotation Equations (NERE) for SE(3) score matching. It predicts a rotation by modeling the force and torque between protein and ligand atoms, where the force is defined as the gradient of an energy function with respect to atom coordinates. We evaluate NERE on protein-ligand and antibody-antigen binding affinity prediction benchmarks. Our model outperforms all unsupervised baselines (physics-based and statistical potentials) and matches supervised learning methods in the antibody case.
1403.1803
Torsten Held M. Sc.
Torsten Held, Armita Nourmohammad, Michael L\"assig
Adaptive evolution of molecular phenotypes
Figures are not optimally displayed in Firefox
null
10.1088/1742-5468/2014/09/P09029
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular phenotypes link genomic information with organismic functions, fitness, and evolution. Quantitative traits are complex phenotypes that depend on multiple genomic loci. In this paper, we study the adaptive evolution of a quantitative trait under time-dependent selection, which arises from environmental changes or through fitness interactions with other co-evolving phenotypes. We analyze a model of trait evolution under mutations and genetic drift in a single-peak fitness seascape. The fitness peak performs a constrained random walk in the trait amplitude, which determines the time-dependent trait optimum in a given population. We derive analytical expressions for the distribution of the time-dependent trait divergence between populations and of the trait diversity within populations. Based on this solution, we develop a method to infer adaptive evolution of quantitative traits. Specifically, we show that the ratio of the average trait divergence and the diversity is a universal function of evolutionary time, which predicts the stabilizing strength and the driving rate of the fitness seascape. From an information-theoretic point of view, this function measures the macro-evolutionary entropy in a population ensemble, which determines the predictability of the evolutionary process. Our solution also quantifies two key characteristics of adapting populations: the cumulative fitness flux, which measures the total amount of adaptation, and the adaptive load, which is the fitness cost due to a population's lag behind the fitness peak.
[ { "created": "Fri, 7 Mar 2014 16:53:53 GMT", "version": "v1" } ]
2015-06-19
[ [ "Held", "Torsten", "" ], [ "Nourmohammad", "Armita", "" ], [ "Lässig", "Michael", "" ] ]
Molecular phenotypes link genomic information with organismic functions, fitness, and evolution. Quantitative traits are complex phenotypes that depend on multiple genomic loci. In this paper, we study the adaptive evolution of a quantitative trait under time-dependent selection, which arises from environmental changes or through fitness interactions with other co-evolving phenotypes. We analyze a model of trait evolution under mutations and genetic drift in a single-peak fitness seascape. The fitness peak performs a constrained random walk in the trait amplitude, which determines the time-dependent trait optimum in a given population. We derive analytical expressions for the distribution of the time-dependent trait divergence between populations and of the trait diversity within populations. Based on this solution, we develop a method to infer adaptive evolution of quantitative traits. Specifically, we show that the ratio of the average trait divergence and the diversity is a universal function of evolutionary time, which predicts the stabilizing strength and the driving rate of the fitness seascape. From an information-theoretic point of view, this function measures the macro-evolutionary entropy in a population ensemble, which determines the predictability of the evolutionary process. Our solution also quantifies two key characteristics of adapting populations: the cumulative fitness flux, which measures the total amount of adaptation, and the adaptive load, which is the fitness cost due to a population's lag behind the fitness peak.
1509.01893
Herv\'e Rouault
Herv\'e Rouault and Shaul Druckmann
Spectrum density of large sparse random matrices associated to neural networks
5 pages, 3 figures and one supplementary information (8 pages, 1 figure)
null
null
null
q-bio.NC cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The eigendecomposition of the coupling matrix of large biological networks is central to the study of the dynamics of these networks. For neural networks, this matrix should reflect the topology of the network and conform with Dale's law, which states that a neuron can have only all excitatory or only all inhibitory output connections, i.e., the coefficients of any one column of the coupling matrix must all have the same sign. The eigenspectrum density has been determined before for dense matrices $J_{ij}$, when several populations are considered. However, the expressions were derived under the assumption of dense connectivity, whereas neural circuits have sparse connections. Here, we followed mean-field approaches to derive exact self-consistent expressions for the spectrum density in the limit of sparse matrices, for both symmetric and neural network matrices. Furthermore, we introduced approximations that allow for reliable numerical evaluation of the density. Finally, we studied the phenomenology of the localization properties of the eigenvectors.
[ { "created": "Mon, 7 Sep 2015 03:43:47 GMT", "version": "v1" } ]
2015-09-08
[ [ "Rouault", "Hervé", "" ], [ "Druckmann", "Shaul", "" ] ]
The eigendecomposition of the coupling matrix of large biological networks is central to the study of the dynamics of these networks. For neural networks, this matrix should reflect the topology of the network and conform with Dale's law, which states that a neuron can have only all excitatory or only all inhibitory output connections, i.e., the coefficients of any one column of the coupling matrix must all have the same sign. The eigenspectrum density has been determined before for dense matrices $J_{ij}$, when several populations are considered. However, the expressions were derived under the assumption of dense connectivity, whereas neural circuits have sparse connections. Here, we followed mean-field approaches to derive exact self-consistent expressions for the spectrum density in the limit of sparse matrices, for both symmetric and neural network matrices. Furthermore, we introduced approximations that allow for reliable numerical evaluation of the density. Finally, we studied the phenomenology of the localization properties of the eigenvectors.
2103.16125
Giovanna Jona Lasinio Prof.
Sara Martino, Daniela Silvia Pace, Stefano Moro, Edoardo Casoli, Daniele Ventura, Alessandro Frachea, Margherita Silvestri, Antonella Arcangeli, Giancarlo Giacomini, Giandomenico Ardizzone, Giovanna Jona Lasinio
Integration of presence-only data from several sources. A case study on dolphins' spatial distribution
null
null
null
null
q-bio.QM stat.AP stat.ME
http://creativecommons.org/licenses/by/4.0/
Presence-only data are a typical occurrence in species distribution modeling. They include the presence locations and no information on absences. Their modeling usually does not account for detection biases. In this work, we aim to merge three different sources of information to model the presence of marine mammals. The approach is fully general, and it is applied to two species of dolphins in the Central Tyrrhenian Sea (Italy) as a case study. Data come from the Italian Environmental Protection Agency (ISPRA) and Sapienza University of Rome research campaigns, and from a careful selection of social media (SM) images and videos. We build a log-Gaussian Cox process in which a different detection function describes each data source. For the SM data, we analyze several choices that allow accounting for detection biases. Our findings allow for a correct understanding of the Stenella coeruleoalba and Tursiops truncatus distributions in the study area. The results show that the proposed approach is broadly applicable and easily implemented in the R software using INLA and inlabru. We provide example code with simulated data in the supplementary materials.
[ { "created": "Tue, 30 Mar 2021 07:22:03 GMT", "version": "v1" } ]
2021-03-31
[ [ "Martino", "Sara", "" ], [ "Pace", "Daniela Silvia", "" ], [ "Moro", "Stefano", "" ], [ "Casoli", "Edoardo", "" ], [ "Ventura", "Daniele", "" ], [ "Frachea", "Alessandro", "" ], [ "Silvestri", "Margherita", "" ], [ "Arcangeli", "Antonella", "" ], [ "Giacomini", "Giancarlo", "" ], [ "Ardizzone", "Giandomenico", "" ], [ "Lasinio", "Giovanna Jona", "" ] ]
Presence-only data are a typical occurrence in species distribution modeling. They include the presence locations and no information on absences. Their modeling usually does not account for detection biases. In this work, we aim to merge three different sources of information to model the presence of marine mammals. The approach is fully general, and it is applied to two species of dolphins in the Central Tyrrhenian Sea (Italy) as a case study. Data come from the Italian Environmental Protection Agency (ISPRA) and Sapienza University of Rome research campaigns, and from a careful selection of social media (SM) images and videos. We build a log-Gaussian Cox process in which a different detection function describes each data source. For the SM data, we analyze several choices that allow accounting for detection biases. Our findings allow for a correct understanding of the Stenella coeruleoalba and Tursiops truncatus distributions in the study area. The results show that the proposed approach is broadly applicable and easily implemented in the R software using INLA and inlabru. We provide example code with simulated data in the supplementary materials.
0704.3264
Jose Vilar
Leonor Saiz and Jose M. G. Vilar
Efficiency and versatility of distal multisite transcription regulation
null
null
null
null
q-bio.SC q-bio.MN
null
Transcription regulation typically involves the binding of proteins over long distances on multiple DNA sites that are brought close to each other by the formation of DNA loops. The inherent complexity of the assembly of regulatory complexes on looped DNA challenges the understanding of even the simplest genetic systems, including the prototypical lac operon. Here we implement a scalable quantitative computational approach to analyze systems regulated through multiple DNA sites with looping. Our approach applied to the lac operon accurately predicts the transcription rate over five orders of magnitude for wild type and seven mutants accounting for all the combinations of deletions of the three operators. A quantitative analysis of the model reveals that the presence of three operators provides a mechanism to combine robust repression with sensitive induction, two seemingly mutually exclusive properties that are required for optimal functioning of metabolic switches.
[ { "created": "Tue, 24 Apr 2007 19:46:28 GMT", "version": "v1" } ]
2007-05-23
[ [ "Saiz", "Leonor", "" ], [ "Vilar", "Jose M. G.", "" ] ]
Transcription regulation typically involves the binding of proteins over long distances on multiple DNA sites that are brought close to each other by the formation of DNA loops. The inherent complexity of the assembly of regulatory complexes on looped DNA challenges the understanding of even the simplest genetic systems, including the prototypical lac operon. Here we implement a scalable quantitative computational approach to analyze systems regulated through multiple DNA sites with looping. Our approach applied to the lac operon accurately predicts the transcription rate over five orders of magnitude for wild type and seven mutants accounting for all the combinations of deletions of the three operators. A quantitative analysis of the model reveals that the presence of three operators provides a mechanism to combine robust repression with sensitive induction, two seemingly mutually exclusive properties that are required for optimal functioning of metabolic switches.
1806.04138
Michael D. Multerer
M. D. Peters, L. D. Wittwer, A. Stopka, D. Barac, C. Lang, D. Iber
Simulation of morphogen and tissue dynamics
null
null
null
null
q-bio.TO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Morphogenesis, the process by which an adult organism emerges from a single cell, has fascinated humans for a long time. Modelling this process can provide novel insights into development and the principles that orchestrate the developmental processes. This chapter focusses on the mathematical description and numerical simulation of developmental processes. In particular, we discuss the mathematical representation of morphogen and tissue dynamics on static and growing domains, as well as the corresponding tissue mechanics. In addition, we give an overview of numerical methods that are routinely used to solve the resulting systems of partial differential equations. These include the finite element method and the Lattice Boltzmann method for the discretisation as well as the arbitrary Lagrangian-Eulerian method and the Diffuse-Domain method to numerically treat deforming domains.
[ { "created": "Mon, 11 Jun 2018 15:28:32 GMT", "version": "v1" } ]
2018-06-13
[ [ "Peters", "M. D.", "" ], [ "Wittwer", "L. D.", "" ], [ "Stopka", "A.", "" ], [ "Barac", "D.", "" ], [ "Lang", "C.", "" ], [ "Iber", "D.", "" ] ]
Morphogenesis, the process by which an adult organism emerges from a single cell, has fascinated humans for a long time. Modelling this process can provide novel insights into development and the principles that orchestrate the developmental processes. This chapter focusses on the mathematical description and numerical simulation of developmental processes. In particular, we discuss the mathematical representation of morphogen and tissue dynamics on static and growing domains, as well as the corresponding tissue mechanics. In addition, we give an overview of numerical methods that are routinely used to solve the resulting systems of partial differential equations. These include the finite element method and the Lattice Boltzmann method for the discretisation as well as the arbitrary Lagrangian-Eulerian method and the Diffuse-Domain method to numerically treat deforming domains.
1012.2504
Jiapu Zhang
Jiapu Zhang, Jie Sun, Changzhi Wu
Optimal atomic-resolution structures of prion AGAAAAGA amyloid fibrils
The paper corrects many errors of old versions
J of Theor. Biology 279 (1), 17-28 (2011)
10.1016/j.jtbi.2011.02.012
PMID 21420420
q-bio.BM physics.bio-ph
http://creativecommons.org/licenses/by-nc-sa/3.0/
X-ray crystallography is a powerful tool to determine the protein 3D structure. However, it is time-consuming and expensive, and not all proteins can be successfully crystallized, particularly for membrane proteins. Although nuclear magnetic resonance (NMR) spectroscopy is indeed a very powerful tool in determining the 3D structures of membrane proteins, it is also time-consuming and costly. To the best of the authors' knowledge, there is little structural data available on the AGAAAAGA palindrome in the hydrophobic region (113-120) of prion proteins due to the noncrystalline and insoluble nature of the amyloid fibril, although many experimental studies have shown that this region has amyloid fibril forming properties and plays an important role in prion diseases. In view of this, the present study is devoted to address this problem from computational approaches such as global energy optimization, simulated annealing, and structural bioinformatics. The optimal atomic-resolution structures of prion AGAAAAGA amyloid fibrils reported in this paper have a value to the scientific community in its drive to find treatments for prion diseases.
[ { "created": "Sun, 12 Dec 2010 01:01:50 GMT", "version": "v1" }, { "created": "Sun, 2 Jan 2011 19:01:55 GMT", "version": "v2" }, { "created": "Fri, 21 Jan 2011 06:43:18 GMT", "version": "v3" }, { "created": "Fri, 18 Feb 2011 06:26:14 GMT", "version": "v4" }, { "created": "Tue, 22 Mar 2011 02:22:56 GMT", "version": "v5" }, { "created": "Mon, 11 Apr 2011 08:23:30 GMT", "version": "v6" } ]
2013-12-10
[ [ "Zhang", "Jiapu", "" ], [ "Sun", "Jie", "" ], [ "Wu", "Changzhi", "" ] ]
X-ray crystallography is a powerful tool to determine the protein 3D structure. However, it is time-consuming and expensive, and not all proteins can be successfully crystallized, particularly for membrane proteins. Although nuclear magnetic resonance (NMR) spectroscopy is indeed a very powerful tool in determining the 3D structures of membrane proteins, it is also time-consuming and costly. To the best of the authors' knowledge, there is little structural data available on the AGAAAAGA palindrome in the hydrophobic region (113-120) of prion proteins due to the noncrystalline and insoluble nature of the amyloid fibril, although many experimental studies have shown that this region has amyloid fibril forming properties and plays an important role in prion diseases. In view of this, the present study is devoted to address this problem from computational approaches such as global energy optimization, simulated annealing, and structural bioinformatics. The optimal atomic-resolution structures of prion AGAAAAGA amyloid fibrils reported in this paper have a value to the scientific community in its drive to find treatments for prion diseases.
2006.13158
Hideaki Shimazaki
Hideaki Shimazaki
The principles of adaptation in organisms and machines II: Thermodynamics of the Bayesian brain
29 pages, 10 figures
null
null
null
q-bio.NC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article reviews how organisms learn and recognize the world through the dynamics of neural networks from the perspective of Bayesian inference, and introduces a view on how such dynamics is described by the laws for the entropy of neural activity, a paradigm that we call thermodynamics of the Bayesian brain. The Bayesian brain hypothesis sees the stimulus-evoked activity of neurons as an act of constructing the Bayesian posterior distribution based on the generative model of the external world that an organism possesses. A closer look at the stimulus-evoked activity at early sensory cortices reveals that feedforward connections initially mediate the stimulus-response, which is later modulated by input from recurrent connections. Importantly, not the initial response, but the delayed modulation expresses animals' cognitive states such as awareness and attention regarding the stimulus. Using a simple generative model made of a spiking neural population, we reproduce the stimulus-evoked dynamics with the delayed feedback modulation as the process of Bayesian inference that integrates the stimulus evidence and prior knowledge with a time delay. We then introduce a thermodynamic view on this process based on the laws for the entropy of neural activity. This view elucidates that the process of Bayesian inference works as the recently proposed information-theoretic engine (neural engine, an analogue of a heat engine in thermodynamics), which allows us to quantify the perceptual capacity expressed in the delayed modulation in terms of entropy.
[ { "created": "Tue, 23 Jun 2020 16:57:46 GMT", "version": "v1" } ]
2020-06-24
[ [ "Shimazaki", "Hideaki", "" ] ]
This article reviews how organisms learn and recognize the world through the dynamics of neural networks from the perspective of Bayesian inference, and introduces a view on how such dynamics is described by the laws for the entropy of neural activity, a paradigm that we call thermodynamics of the Bayesian brain. The Bayesian brain hypothesis sees the stimulus-evoked activity of neurons as an act of constructing the Bayesian posterior distribution based on the generative model of the external world that an organism possesses. A closer look at the stimulus-evoked activity at early sensory cortices reveals that feedforward connections initially mediate the stimulus-response, which is later modulated by input from recurrent connections. Importantly, not the initial response, but the delayed modulation expresses animals' cognitive states such as awareness and attention regarding the stimulus. Using a simple generative model made of a spiking neural population, we reproduce the stimulus-evoked dynamics with the delayed feedback modulation as the process of Bayesian inference that integrates the stimulus evidence and prior knowledge with a time delay. We then introduce a thermodynamic view on this process based on the laws for the entropy of neural activity. This view elucidates that the process of Bayesian inference works as the recently proposed information-theoretic engine (neural engine, an analogue of a heat engine in thermodynamics), which allows us to quantify the perceptual capacity expressed in the delayed modulation in terms of entropy.
1607.08318
Zachary Kilpatrick PhD
Adrian E Radillo, Alan Veliz-Cuba, Kresimir Josic, and Zachary P Kilpatrick
Evidence accumulation and change rate inference in dynamic environments
43 pages, 8 figures, in press
Neural Computation (2017)
null
null
q-bio.NC math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a constantly changing world, animals must account for environmental volatility when making decisions. To appropriately discount older, irrelevant information, they need to learn the rate at which the environment changes. We develop an ideal observer model capable of inferring the present state of the environment along with its rate of change. Key to this computation is an update of the posterior probability of all possible changepoint counts. This computation can be challenging, as the number of possibilities grows rapidly with time. However, we show how the computations can be simplified in the continuum limit by a moment closure approximation. The resulting low-dimensional system can be used to infer the environmental state and change rate with accuracy comparable to the ideal observer. The approximate computations can be performed by a neural network model via a rate-correlation based plasticity rule. We thus show how optimal observers accumulate evidence in changing environments, and map this computation to reduced models which perform inference using plausible neural mechanisms.
[ { "created": "Thu, 28 Jul 2016 05:24:55 GMT", "version": "v1" }, { "created": "Wed, 11 Jan 2017 17:46:26 GMT", "version": "v2" } ]
2017-01-12
[ [ "Radillo", "Adrian E", "" ], [ "Veliz-Cuba", "Alan", "" ], [ "Josic", "Kresimir", "" ], [ "Kilpatrick", "Zachary P", "" ] ]
In a constantly changing world, animals must account for environmental volatility when making decisions. To appropriately discount older, irrelevant information, they need to learn the rate at which the environment changes. We develop an ideal observer model capable of inferring the present state of the environment along with its rate of change. Key to this computation is an update of the posterior probability of all possible changepoint counts. This computation can be challenging, as the number of possibilities grows rapidly with time. However, we show how the computations can be simplified in the continuum limit by a moment closure approximation. The resulting low-dimensional system can be used to infer the environmental state and change rate with accuracy comparable to the ideal observer. The approximate computations can be performed by a neural network model via a rate-correlation based plasticity rule. We thus show how optimal observers accumulate evidence in changing environments, and map this computation to reduced models which perform inference using plausible neural mechanisms.
2008.11167
Fernanda Matias
Francisco-Leandro P. Carlos, Maciel-Monteiro Ubirakitan, Marcelo Cairr\~ao Ara\'ujo Rodrigues, Mois\'es Aguilar-Domingo, Eva Herrera-Guti\'errez, Jes\'us G\'omez-Amor, Mauro Copelli, Pedro V. Carelli, Fernanda S. Matias
Anticipated synchronization in human EEG data: unidirectional causality with negative phase-lag
null
null
10.1103/PhysRevE.102.032216
null
q-bio.NC physics.bio-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding the functional connectivity of the brain has become a major goal of neuroscience. In many situations, the relative phase difference, together with coherence patterns, has been employed to infer the direction of information flow. However, it has recently been shown, in local field potential data from monkeys, that there exists a synchronized regime in which unidirectionally coupled areas can present both positive and negative phase differences. During this counterintuitive regime, called anticipated synchronization (AS), the phase difference does not reflect causality. Here we investigate coherence and causality at the alpha frequency band (10 Hz) between pairs of electroencephalogram (EEG) electrodes in humans during a GO/NO-GO task. We show that human EEG signals can exhibit anticipated synchronization, which is characterized by a unidirectional influence from an electrode A to an electrode B while electrode B leads electrode A in time. To the best of our knowledge, this is the first verification of AS in EEG signals and in the human brain. The usual delayed synchronization (DS) regime is also present between many pairs. DS is characterized by a unidirectional influence from an electrode A to an electrode B and a positive phase difference between A and B, which indicates that electrode A leads electrode B in time. Moreover, we show that EEG signals exhibit diversity in phase relations: pairs of electrodes can present in-phase, anti-phase, or out-of-phase synchronization with a similar distribution of positive and negative phase differences.
[ { "created": "Tue, 25 Aug 2020 16:59:54 GMT", "version": "v1" } ]
2020-10-28
[ [ "Carlos", "Francisco-Leandro P.", "" ], [ "Ubirakitan", "Maciel-Monteiro", "" ], [ "Rodrigues", "Marcelo Cairrão Araújo", "" ], [ "Aguilar-Domingo", "Moisés", "" ], [ "Herrera-Gutiérrez", "Eva", "" ], [ "Gómez-Amor", "Jesús", "" ], [ "Copelli", "Mauro", "" ], [ "Carelli", "Pedro V.", "" ], [ "Matias", "Fernanda S.", "" ] ]
Understanding the functional connectivity of the brain has become a major goal of neuroscience. In many situations, the relative phase difference, together with coherence patterns, has been employed to infer the direction of information flow. However, it has recently been shown, in local field potential data from monkeys, that there exists a synchronized regime in which unidirectionally coupled areas can present both positive and negative phase differences. During this counterintuitive regime, called anticipated synchronization (AS), the phase difference does not reflect causality. Here we investigate coherence and causality at the alpha frequency band (10 Hz) between pairs of electroencephalogram (EEG) electrodes in humans during a GO/NO-GO task. We show that human EEG signals can exhibit anticipated synchronization, which is characterized by a unidirectional influence from an electrode A to an electrode B while electrode B leads electrode A in time. To the best of our knowledge, this is the first verification of AS in EEG signals and in the human brain. The usual delayed synchronization (DS) regime is also present between many pairs. DS is characterized by a unidirectional influence from an electrode A to an electrode B and a positive phase difference between A and B, which indicates that electrode A leads electrode B in time. Moreover, we show that EEG signals exhibit diversity in phase relations: pairs of electrodes can present in-phase, anti-phase, or out-of-phase synchronization with a similar distribution of positive and negative phase differences.
2003.14102
Giuseppe Gaeta
Giuseppe Gaeta
Social distancing versus early detection and contacts tracing in epidemic management
16 pages, 8 figures in new version, to appear in Chaos
Chaos, Solitons and Fractals 140 (2020) 110074
10.1016/j.chaos.2020.110074
null
q-bio.PE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Different countries -- and sometimes different regions within the same countries -- have adopted different strategies in trying to contain the ongoing COVID-19 epidemic; these strategies mix, in variable proportions, social confinement, early detection, and contact tracing. In this paper we discuss the different effects of these ingredients on the epidemic dynamics; the discussion is conducted with the help of two simple models, i.e. the classical SIR model and the recently introduced variant (A-SIR, arXiv:2003.08720) which takes into account the presence of a large set of asymptomatic infectives.
[ { "created": "Tue, 31 Mar 2020 11:19:28 GMT", "version": "v1" }, { "created": "Sun, 5 Apr 2020 12:50:01 GMT", "version": "v2" }, { "created": "Thu, 2 Jul 2020 08:14:37 GMT", "version": "v3" } ]
2020-07-17
[ [ "Gaeta", "Giuseppe", "" ] ]
Different countries -- and sometimes different regions within the same countries -- have adopted different strategies in trying to contain the ongoing COVID-19 epidemic; these strategies mix, in variable proportions, social confinement, early detection, and contact tracing. In this paper we discuss the different effects of these ingredients on the epidemic dynamics; the discussion is conducted with the help of two simple models, i.e. the classical SIR model and the recently introduced variant (A-SIR, arXiv:2003.08720) which takes into account the presence of a large set of asymptomatic infectives.
q-bio/0508032
Claudius Gros
Claudius Gros
Self-Sustained Thought Processes in a Dense Associative Network
14 pages, 9 figures
Proceedings of the 28th Annual German Conference on Artificial Intelligence (KI 2005), U. Furbach (Eds.): KI 2005, Springer Lecture Notes in Artificial Intelligence 3698, pp. 366-379, 2005
null
null
q-bio.NC nlin.AO physics.bio-ph
null
Several guiding principles for thought processes are proposed, and a neural-network-type model implementing these principles is presented and studied. We suggest considering thinking within an associative network built up of overlapping memory states. We consider a homogeneous associative network, as biological considerations rule out distinct conjunction units between the information (the memories) stored in the brain. We therefore propose that memory states have a dual functionality: they represent on one side the stored information and serve, on the other side, as the associative links between the different dynamical states of the network, which consist of transient attractors. We implement these principles within a generalized winners-take-all neural network with sparse coding and an additional coupling to local reservoirs. We show that this network is capable of autonomously generating a self-sustained time series of memory states, which we identify with a thought process. Each memory state is associatively connected with its predecessor. This system shows several emergent features: it is able (a) to recognize external patterns in a noisy background, (b) to focus attention autonomously, and (c) to represent hierarchical memory states with an internal structure.
[ { "created": "Mon, 22 Aug 2005 14:38:44 GMT", "version": "v1" }, { "created": "Fri, 2 Mar 2007 10:56:11 GMT", "version": "v2" } ]
2007-05-23
[ [ "Gros", "Claudius", "" ] ]
Several guiding principles for thought processes are proposed, and a neural-network-type model implementing these principles is presented and studied. We suggest considering thinking within an associative network built up of overlapping memory states. We consider a homogeneous associative network, as biological considerations rule out distinct conjunction units between the information (the memories) stored in the brain. We therefore propose that memory states have a dual functionality: they represent on one side the stored information and serve, on the other side, as the associative links between the different dynamical states of the network, which consist of transient attractors. We implement these principles within a generalized winners-take-all neural network with sparse coding and an additional coupling to local reservoirs. We show that this network is capable of autonomously generating a self-sustained time series of memory states, which we identify with a thought process. Each memory state is associatively connected with its predecessor. This system shows several emergent features: it is able (a) to recognize external patterns in a noisy background, (b) to focus attention autonomously, and (c) to represent hierarchical memory states with an internal structure.
1403.3414
Simon Gravel
Simon Gravel and NHLBI GO Exome Sequencing Project
Predicting discovery rates of genomic features
21 pages, 7 figures plus 8-page appendix
null
null
null
q-bio.GN math.ST q-bio.PE stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Successful sequencing experiments require judicious sample selection. However, this selection must often be performed on the basis of limited preliminary data. Predicting the statistical properties of the final sample based on preliminary data can be challenging, because numerous uncertain model assumptions may be involved. Here, we ask whether we can predict "omics" variation across many samples by sequencing only a fraction of them. In the infinite-genome limit, we find that a pilot study sequencing $5\%$ of a population is sufficient to predict the number of genetic variants in the entire population within $6\%$ of the correct value, using an estimator agnostic to demography, selection, or population structure. To reach similar accuracy in a finite genome with millions of polymorphisms, the pilot study would require about $15\%$ of the population. We present computationally efficient jackknife and linear programming methods that exhibit substantially less bias than the state of the art when applied to simulated data and sub-sampled 1000 Genomes Project data. Extrapolating based on the NHLBI Exome Sequencing Project data, we predict that $7.2\%$ of sites in the capture region would be variable in a sample of $50,000$ African-Americans, and $8.8\%$ in a European sample of equal size. Finally, we show how the linear programming method can also predict discovery rates of various genomic features, such as the number of transcription factor binding sites across different cell types.
[ { "created": "Thu, 13 Mar 2014 20:05:45 GMT", "version": "v1" } ]
2014-03-17
[ [ "Gravel", "Simon", "" ], [ "Project", "NHLBI GO Exome Sequencing", "" ] ]
Successful sequencing experiments require judicious sample selection. However, this selection must often be performed on the basis of limited preliminary data. Predicting the statistical properties of the final sample based on preliminary data can be challenging, because numerous uncertain model assumptions may be involved. Here, we ask whether we can predict "omics" variation across many samples by sequencing only a fraction of them. In the infinite-genome limit, we find that a pilot study sequencing $5\%$ of a population is sufficient to predict the number of genetic variants in the entire population within $6\%$ of the correct value, using an estimator agnostic to demography, selection, or population structure. To reach similar accuracy in a finite genome with millions of polymorphisms, the pilot study would require about $15\%$ of the population. We present computationally efficient jackknife and linear programming methods that exhibit substantially less bias than the state of the art when applied to simulated data and sub-sampled 1000 Genomes Project data. Extrapolating based on the NHLBI Exome Sequencing Project data, we predict that $7.2\%$ of sites in the capture region would be variable in a sample of $50,000$ African-Americans, and $8.8\%$ in a European sample of equal size. Finally, we show how the linear programming method can also predict discovery rates of various genomic features, such as the number of transcription factor binding sites across different cell types.