Dataset schema (column name, type, and observed value statistics; ⌀ marks nullable columns):

| column | type | observed values |
|---|---|---|
| id | string | 9-13 chars |
| submitter | string | 4-48 chars |
| authors | string | 4-9.62k chars |
| title | string | 4-343 chars |
| comments | string, ⌀ | 2-480 chars |
| journal-ref | string, ⌀ | 9-309 chars |
| doi | string, ⌀ | 12-138 chars |
| report-no | string | 277 classes |
| categories | string | 8-87 chars |
| license | string | 9 classes |
| orig_abstract | string | 27-3.76k chars |
| versions | list | 1-15 items |
| update_date | string | 10 chars |
| authors_parsed | list | 1-147 items |
| abstract | string | 24-3.75k chars |
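The schema above can be used to sanity-check individual records. Below is a minimal sketch of such a validator; the field names come from the header, but which fields are required is an assumption inferred from the rows shown, and the validator itself is hypothetical rather than part of any dataset tooling.

```python
# Hypothetical record validator for the schema above (assumption: fields not
# marked nullable in the rows shown are required; this is illustrative only).

REQUIRED = ["id", "submitter", "authors", "title", "categories",
            "orig_abstract", "versions", "update_date", "authors_parsed",
            "abstract"]

def validate_record(rec: dict) -> list:
    """Return a list of schema problems found in one metadata record."""
    problems = []
    for field in REQUIRED:
        if rec.get(field) is None:
            problems.append("missing required field: " + field)
    # versions and authors_parsed are lists with at least one item
    # (listlengths 1-15 and 1-147 in the schema).
    for field in ("versions", "authors_parsed"):
        value = rec.get(field)
        if value is not None and (not isinstance(value, list) or not value):
            problems.append(field + " must be a non-empty list")
    return problems

record = {
    "id": "2312.00191",
    "submitter": "Patricia Suriana",
    "authors": "Patricia Suriana, Ron O. Dror",
    "title": "Enhancing Ligand Pose Sampling for Molecular Docking",
    "categories": "q-bio.BM cs.LG",
    "license": "http://creativecommons.org/licenses/by/4.0/",
    "orig_abstract": "Deep learning promises to dramatically improve ...",
    "versions": [{"created": "Thu, 30 Nov 2023 21:02:37 GMT", "version": "v1"}],
    "update_date": "2023-12-04",
    "authors_parsed": [["Suriana", "Patricia", ""], ["Dror", "Ron O.", ""]],
    "abstract": "Deep learning promises to dramatically improve ...",
}
print(validate_record(record))  # []
```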
---
id: 2312.00191
submitter: Patricia Suriana
authors: Patricia Suriana, Ron O. Dror
title: Enhancing Ligand Pose Sampling for Molecular Docking
comments: Published at the Machine Learning for Structural Biology Workshop, NeurIPS 2023
journal-ref: null
doi: null
report-no: null
categories: q-bio.BM cs.LG
license: http://creativecommons.org/licenses/by/4.0/
versions: v1 (Thu, 30 Nov 2023 21:02:37 GMT)
update_date: 2023-12-04
authors_parsed: [["Suriana", "Patricia", ""], ["Dror", "Ron O.", ""]]
abstract: Deep learning promises to dramatically improve scoring functions for molecular docking, leading to substantial advances in binding pose prediction and virtual screening. To train scoring functions, and to perform molecular docking, one must generate a set of candidate ligand binding poses. Unfortunately, the sampling protocols currently used to generate candidate poses frequently fail to produce any poses close to the correct, experimentally determined pose, unless information about the correct pose is provided. This limits the accuracy of learned scoring functions and molecular docking. Here, we describe two improved protocols for pose sampling: GLOW (auGmented sampLing with sOftened vdW potential) and a novel technique named IVES (IteratiVe Ensemble Sampling). Our benchmarking results demonstrate the effectiveness of our methods in improving the likelihood of sampling accurate poses, especially for binding pockets whose shape changes substantially when different ligands bind. This improvement is observed across both experimentally determined and AlphaFold-generated protein structures. Additionally, we present datasets of candidate ligand poses generated using our methods for each of around 5,000 protein-ligand cross-docking pairs, for training and testing scoring functions. To benefit the research community, we provide these cross-docking datasets and an open-source Python implementation of GLOW and IVES at https://github.com/drorlab/GLOW_IVES.
---
id: 0903.4844
submitter: Anirban Banerji
authors: Aniket Magarkar, Anirban Banerji, Shweta Kolhi
title: Quantification of Biological Robustness at the Systemic Level
comments: Biological robustness, negative entropy, game theory. Computational methodology to quantify negative entropy
journal-ref: null
doi: null
report-no: null
categories: q-bio.SC q-bio.QM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: v1 (Fri, 27 Mar 2009 16:43:00 GMT), v2 (Fri, 3 Apr 2009 14:44:13 GMT), v3 (Tue, 20 Dec 2011 09:07:25 GMT)
update_date: 2011-12-21
authors_parsed: [["Magarkar", "Aniket", ""], ["Banerji", "Anirban", ""], ["Kolhi", "Shweta", ""]]
abstract: Biological systems possess negative entropy. In them, one form of order produces another, more organized form of order. We propose a formal scheme to calculate the robustness of an entire biological system by quantifying the negative entropy present in it. Our methodology is based on a computational implementation of a two-person, non-cooperative, finite zero-sum game between the positive (physico-chemical) and negative (biological) entropy present in the system (the TCA cycle, for this work). The biochemical analogue of Nash equilibrium proposed here can measure the robustness of the TCA cycle in exact numeric terms, whereas the mixed-strategy game between these entropies can quantify the progression of stages of biological adaptation. The synchronization profile among macromolecular concentrations (even under environmental perturbations) is found to account for negative entropy and biological robustness. The emergence of this synchronization profile was investigated with dynamically varying metabolite concentrations. The results obtained were verified against those from deterministic simulation methods. Plans to apply this algorithm in cancer studies and anti-viral therapies are proposed alongside. From a theoretical perspective, this work proposes a general, rigorous, and alternative view of immunology.
---
id: 2304.11474
submitter: Eric Jones
authors: Eric W. Jones, Joshua Derrick, Roger M. Nisbet, Will Ludington, David A. Sivak
title: Signal in the noise: temporal variation in exponentially growing populations
comments: 6 page main text with 5 figures, 8 page supplement with 4 figures. Updated with substantially revised manuscript
journal-ref: Sci. Rep. 13, 21340 (2023)
doi: 10.1038/s41598-023-48726-w
report-no: null
categories: q-bio.PE physics.bio-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: v1 (Sat, 22 Apr 2023 20:10:53 GMT), v2 (Wed, 26 Apr 2023 21:01:36 GMT), v3 (Mon, 7 Aug 2023 19:49:50 GMT)
update_date: 2023-12-25
authors_parsed: [["Jones", "Eric W.", ""], ["Derrick", "Joshua", ""], ["Nisbet", "Roger M.", ""], ["Ludington", "Will", ""], ["Sivak", "David A.", ""]]
abstract: In exponential population growth, variability in the timing of individual division events and environmental factors (including stochastic inoculation) compound to produce variable growth trajectories. In several stochastic models of exponential growth, we show power-law relationships that relate variability in the time required to reach a threshold population size to growth rate and inoculum size. Population-growth experiments in E. coli and S. aureus with inoculum sizes ranging between 1 and 100 are consistent with these relationships. We quantify how noise accumulates over time, finding that it encodes, and can be used to deduce, information about the early growth rate of a population.
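The kind of power-law scaling described in this abstract can be illustrated with a toy calculation. In a Yule (pure-birth) process, which is one simple stochastic model of exponential growth and is chosen here purely for illustration (it is not necessarily the paper's model), the time to grow from an inoculum of n0 cells to a threshold N is a sum of independent exponential waiting times, so its variance has a closed form and shrinks roughly like 1/(rate^2 * n0):

```python
# Illustrative toy: variance of the time for a Yule (pure-birth) process to
# grow from n0 to N cells. With per-capita birth rate `rate`, the waiting
# time at population size k is Exp(rate * k), so
#   Var[T] = sum_{k=n0}^{N-1} 1 / (rate * k)^2  ~  1 / (rate^2 * n0).
# This is an assumption-level sketch, not the paper's models or fits.

def time_to_threshold_variance(rate: float, n0: int, N: int) -> float:
    """Variance of the time to reach threshold N starting from n0 cells."""
    return sum(1.0 / (rate * k) ** 2 for k in range(n0, N))

v1 = time_to_threshold_variance(rate=1.0, n0=1, N=10**6)
v100 = time_to_threshold_variance(rate=1.0, n0=100, N=10**6)
# Larger inocula give sharply less timing variability (roughly 1/n0 scaling).
print(v1, v100)
```

With an inoculum of 100 rather than 1, the timing variance drops by roughly two orders of magnitude, consistent with the 1/n0 scaling sketched in the comment.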
---
id: q-bio/0509004
submitter: Junghyo Jo
authors: J. Jo, H. Kang, M. Y. Choi and D. S. Koh
title: How Noise and Coupling Induce Bursting Action Potentials in Pancreatic beta-cells
comments: 40 pages, 10 figures
journal-ref: published in Biophysical Journal 89:1534-1542 (2005)
doi: 10.1529/biophysj.104.053181
report-no: null
categories: q-bio.CB physics.bio-ph
license: null
versions: v1 (Mon, 5 Sep 2005 10:25:13 GMT)
update_date: 2007-05-23
authors_parsed: [["Jo", "J.", ""], ["Kang", "H.", ""], ["Choi", "M. Y.", ""], ["Koh", "D. S.", ""]]
abstract: Unlike isolated beta-cells, which usually produce continuous spikes or fast and irregular bursts, electrically coupled beta-cells are apt to exhibit robust bursting action potentials. We consider the noise induced by thermal fluctuations as well as that by channel-gating stochasticity and examine its effects on the action potential behavior of the beta-cell model. It is observed numerically that such noise in general helps single cells to produce a variety of electrical activities. In addition, we also probe coupling via gap junctions between neighboring cells, with heterogeneity induced by noise, to find that it enhances regular bursts.
---
id: 1405.4024
submitter: Thomas House
authors: Thomas House
title: Algebraic moment closure for population dynamics on discrete structures
comments: 12 pages, 4 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE math.OA
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: v1 (Thu, 15 May 2014 21:59:04 GMT)
update_date: 2014-05-19
authors_parsed: [["House", "Thomas", ""]]
abstract: Moment closure on general discrete structures often requires one of the following: (i) an absence of short closed loops (zero clustering); (ii) existence of a spatial scale; (iii) ad hoc assumptions. Algebraic methods are presented to avoid the use of such assumptions for populations based on clumps, and are applied to both SIR and macroparasite disease dynamics. One approach involves a series of approximations that can be derived systematically, and another is exact and based on Lie algebraic methods.
---
id: 1405.2965
submitter: Andrew Francis
authors: Andrew R. Francis and Mike Steel
title: Tree-like Reticulation Networks - When Do Tree-like Distances Also Support Reticulate Evolution?
comments: 19 pages, 9 figures. Revised version includes clarification in proof of Theorem 2, and a new figure (Fig 9)
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: v1 (Mon, 12 May 2014 21:11:39 GMT), v2 (Tue, 23 Sep 2014 01:32:49 GMT)
update_date: 2014-09-24
authors_parsed: [["Francis", "Andrew R.", ""], ["Steel", "Mike", ""]]
abstract: Hybrid evolution and horizontal gene transfer (HGT) are processes where evolutionary relationships may more accurately be described by a reticulated network than by a tree. In such a network, there will often be several paths between any two extant species, reflecting the possible pathways that genetic material may have been passed down from a common ancestor to these species. These paths will typically have different lengths, but an "average distance" can still be calculated between any two taxa. In this article, we ask whether this average distance is able to distinguish reticulate evolution from pure tree-like evolution. We consider two types of reticulation networks: hybridization networks and HGT networks. For the former, we establish a general result which shows that average distances between extant taxa can appear tree-like, but only under a single hybridization event near the root; in all other cases, the two forms of evolution can be distinguished by average distances. For HGT networks, we demonstrate some analogous but more intricate results.
---
id: 2202.07171
submitter: Hyunjin Shim
authors: Hyunjin Shim
title: Investigating the genomic background of CRISPR-Cas genomes for CRISPR-based antimicrobials
comments: 27 pages, 5 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.GN
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
versions: v1 (Tue, 15 Feb 2022 03:47:26 GMT)
update_date: 2022-02-16
authors_parsed: [["Shim", "Hyunjin", ""]]
abstract: CRISPR-Cas systems provide adaptive immunity that protects prokaryotes against foreign genetic elements. Genetic templates acquired during past infection events enable DNA-interacting enzymes to recognize foreign DNA for destruction. Due to the programmability and specificity of these genetic templates, CRISPR-Cas systems are potential alternative antibiotics that can be engineered to self-target antimicrobial resistance genes on the chromosome or plasmid. However, several fundamental questions remain before these tools can be repurposed against drug-resistant bacteria. For endogenous CRISPR-Cas self-targeting, antimicrobial resistance genes and functional CRISPR-Cas systems have to co-occur in the target cell. Furthermore, these tools have to outplay DNA repair pathways that respond to the nuclease activities of Cas proteins, even for exogenous CRISPR-Cas delivery. Here, we conduct a comprehensive survey of CRISPR-Cas genomes. First, we address the co-occurrence of CRISPR-Cas systems and antimicrobial resistance genes in the CRISPR-Cas genomes. We show that the average number of these genes varies greatly by CRISPR-Cas type, and some CRISPR-Cas types (IE and IIIA) have over 20 genes per genome. Next, we investigate the DNA repair pathways of these CRISPR-Cas genomes, revealing that the diversity and frequency of these pathways differ by CRISPR-Cas type. The interplay between CRISPR-Cas systems and DNA repair pathways is essential for the acquisition of new spacers in CRISPR arrays. We conduct simulation studies to demonstrate that the efficiency of these DNA repair pathways may be inferred from the time-series patterns in the RNA structure of CRISPR repeats. This bioinformatic survey of CRISPR-Cas genomes highlights the need to consider multifaceted interactions between different genes and systems to design effective CRISPR-based antimicrobials.
---
id: 1309.2576
submitter: Reinhard Buerger
authors: Reinhard Bürger
title: A Survey on Migration-Selection Models in Population Genetics
comments: null
journal-ref: Discrete Cont. Dyn. Syst. B 19, 883-959 (2014)
doi: 10.3934/dcdsb.2014.19.883
report-no: null
categories: q-bio.PE math.DS
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: v1 (Tue, 10 Sep 2013 17:09:39 GMT)
update_date: 2019-12-13
authors_parsed: [["Bürger", "Reinhard", ""]]
abstract: This survey focuses on the most important aspects of the mathematical theory of population genetic models of selection and migration between discrete niches. Such models are most appropriate if the dispersal distance is short compared to the scale at which the environment changes, or if the habitat is fragmented. The general goal of such models is to study the influence of population subdivision and gene flow among subpopulations on the amount and pattern of genetic variation maintained. Only deterministic models are treated. Because space is discrete, they are formulated in terms of systems of nonlinear difference or differential equations. A central topic is the exploration of the equilibrium and stability structure under various assumptions on the patterns of selection and migration. Another important, closely related topic concerns conditions (necessary or sufficient) for fully polymorphic (internal) equilibria. First, the theory of one-locus models with two or multiple alleles is laid out. Then mostly very recent developments on multilocus models are presented. Finally, as an application, analysis and results of an explicit two-locus model emerging from speciation theory are highlighted.
---
id: 0808.2922
submitter: Craig Powell
authors: Craig R. Powell, Alan J. McKane
title: Effects of food web construction by evolution or immigration
comments: 12 pages, 7 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: v1 (Thu, 21 Aug 2008 13:06:24 GMT)
update_date: 2008-08-22
authors_parsed: [["Powell", "Craig R.", ""], ["McKane", "Alan J.", ""]]
abstract: We present results contrasting food webs constructed using the same model where the source of species was either evolution or immigration from a previously evolved species pool. The overall structures of the webs are remarkably similar, although we find some important differences which mainly relate to the percentage of basal and top species. Food webs assembled from evolved webs also show distinct plateaux in the number of trophic levels as the resources available to the system increase, in contrast to evolved webs. By equating the resources available to basal species to area, we are able to examine the species-area curve created by each process separately. They are found to correspond to different regimes of the tri-phasic species-area curve.
---
id: 1704.05748
submitter: Nicolai Pedersen
authors: Rasmus S. Andersen, Anders U. Eliasen, Nicolai Pedersen, Michael Riis Andersen, Sofie Therese Hansen, Lars Kai Hansen
title: EEG source imaging assists decoding in a face recognition task
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: v1 (Mon, 17 Apr 2017 10:19:16 GMT)
update_date: 2017-04-20
authors_parsed: [["Andersen", "Rasmus S.", ""], ["Eliasen", "Anders U.", ""], ["Pedersen", "Nicolai", ""], ["Andersen", "Michael Riis", ""], ["Hansen", "Sofie Therese", ""], ["Hansen", "Lars Kai", ""]]
abstract: EEG-based brain state decoding has numerous applications. State-of-the-art decoding is based on processing of the multivariate sensor-space signal; however, evidence is mounting that EEG source reconstruction can assist decoding. EEG source imaging leads to high-dimensional representations, and rather strong a priori information must be invoked. Recent work by Edelman et al. (2016) has demonstrated that introducing a spatially focal source-space representation can improve decoding of motor imagery. In this work, we explore the generality of Edelman et al.'s hypothesis by considering decoding of face recognition. This task concerns the differentiation of brain responses to images of faces and scrambled faces and poses a rather difficult decoding problem at the single-trial level. We implement the pipeline using spatially focused features and show that this approach is challenged: source imaging does not lead to improved decoding. We then design a distributed pipeline in which the classifier has access to brain-wide features, which does lead to a 15% reduction in the error rate using source-space features. Hence, our work presents supporting evidence for the hypothesis that source imaging improves decoding.
---
id: 2312.13465
submitter: Thierry Mora
authors: Maria Francesca Abbate, Thomas Dupic, Emmanuelle Vigne, Melody A. Shahsavarian, Aleksandra M. Walczak, Thierry Mora
title: Computational detection of antigen specific B cell receptors following immunization
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: v1 (Wed, 20 Dec 2023 22:32:02 GMT)
update_date: 2023-12-22
authors_parsed: [["Abbate", "Maria Francesca", ""], ["Dupic", "Thomas", ""], ["Vigne", "Emmanuelle", ""], ["Shahsavarian", "Melody A.", ""], ["Walczak", "Aleksandra M.", ""], ["Mora", "Thierry", ""]]
abstract: B cell receptors (BCRs) play a crucial role in recognizing and fighting foreign antigens. High-throughput sequencing enables in-depth sampling of the BCR repertoire after immunization. However, only a minor fraction of BCRs actively participate in any given infection. To what extent can we accurately identify antigen-specific sequences directly from BCR repertoires? We present a computational method grounded in sequence similarity, aimed at identifying statistically significant responsive BCRs. This method leverages well-known characteristics of affinity maturation and expected diversity. We validate its effectiveness using longitudinally sampled human immune repertoire data following influenza vaccination and SARS-CoV-2 infection. We show that different lineages converge to the same responding CDR3, demonstrating convergent selection within an individual. The outcomes of this method hold promise for application in vaccine development, personalized medicine, and antibody-derived therapeutics.
---
id: 2201.08110
submitter: Gianni De Fabritiis
authors: Raimondas Galvelis, Alejandro Varela-Rial, Stefan Doerr, Roberto Fino, Peter Eastman, Thomas E. Markland, John D. Chodera and Gianni De Fabritiis
title: NNP/MM: Accelerating molecular dynamics simulations with machine learning potentials and molecular mechanics
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.BM cs.LG physics.bio-ph physics.comp-ph
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
versions: v1 (Thu, 20 Jan 2022 10:57:20 GMT), v2 (Mon, 28 Aug 2023 12:04:46 GMT)
update_date: 2023-08-29
authors_parsed: [["Galvelis", "Raimondas", ""], ["Varela-Rial", "Alejandro", ""], ["Doerr", "Stefan", ""], ["Fino", "Roberto", ""], ["Eastman", "Peter", ""], ["Markland", "Thomas E.", ""], ["Chodera", "John D.", ""], ["De Fabritiis", "Gianni", ""]]
abstract: Machine learning potentials have emerged as a means to enhance the accuracy of biomolecular simulations. However, their application is constrained by the significant computational cost arising from the vast number of parameters compared to traditional molecular mechanics. To tackle this issue, we introduce an optimized implementation of the hybrid method (NNP/MM), which combines neural network potentials (NNP) and molecular mechanics (MM). This approach models a portion of the system, such as a small molecule, using NNP while employing MM for the remaining system to boost efficiency. By conducting molecular dynamics (MD) simulations on various protein-ligand complexes and metadynamics (MTD) simulations on a ligand, we showcase the capabilities of our implementation of NNP/MM. It has enabled us to increase the simulation speed fivefold and achieve a combined sampling of one microsecond for each complex, marking the longest simulations ever reported for this class of simulation.
1804.01694
|
Anvita Gupta
|
Anvita Gupta and James Zou
|
Feedback GAN (FBGAN) for DNA: a Novel Feedback-Loop Architecture for
Optimizing Protein Functions
| null | null | null | null |
q-bio.GN cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generative Adversarial Networks (GANs) represent an attractive and novel
approach to generate realistic data, such as genes, proteins, or drugs, in
synthetic biology. Here, we apply GANs to generate synthetic DNA sequences
encoding for proteins of variable length. We propose a novel feedback-loop
architecture, called Feedback GAN (FBGAN), to optimize the synthetic gene
sequences for desired properties using an external function analyzer. The
proposed architecture also has the advantage that the analyzer need not be
differentiable. We apply the feedback-loop mechanism to two examples: 1)
generating synthetic genes coding for antimicrobial peptides, and 2) optimizing
synthetic genes for the secondary structure of their resulting peptides. A
suite of metrics demonstrates that the GAN-generated proteins have desirable
biophysical properties. The FBGAN architecture can also be used to optimize
GAN-generated datapoints for useful properties in domains beyond genomics.
|
[
{
"created": "Thu, 5 Apr 2018 07:17:42 GMT",
"version": "v1"
}
] |
2018-04-06
|
[
[
"Gupta",
"Anvita",
""
],
[
"Zou",
"James",
""
]
] |
Generative Adversarial Networks (GANs) represent an attractive and novel approach to generate realistic data, such as genes, proteins, or drugs, in synthetic biology. Here, we apply GANs to generate synthetic DNA sequences encoding for proteins of variable length. We propose a novel feedback-loop architecture, called Feedback GAN (FBGAN), to optimize the synthetic gene sequences for desired properties using an external function analyzer. The proposed architecture also has the advantage that the analyzer need not be differentiable. We apply the feedback-loop mechanism to two examples: 1) generating synthetic genes coding for antimicrobial peptides, and 2) optimizing synthetic genes for the secondary structure of their resulting peptides. A suite of metrics demonstrates that the GAN-generated proteins have desirable biophysical properties. The FBGAN architecture can also be used to optimize GAN-generated datapoints for useful properties in domains beyond genomics.
|
2001.08264
|
Martin Frasch
|
Martin G. Frasch
|
Heart rate variability code: Does it exist and can we hack it?
| null |
Bioengineering 2023
|
10.3390/bioengineering10070822
| null |
q-bio.TO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Heart rate variability (HRV) has been studied for over 50 years, yet an
integrative concept is missing on what HRV's mathematical properties represent
physiologically. Here I introduce the notion of HRV code as an attempt to
address this challenge systematically. I review the existing evidence from
physiological studies in various species to support this concept and propose
experiments to help validate and expand this notion further.
|
[
{
"created": "Wed, 22 Jan 2020 20:23:50 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Feb 2020 02:05:22 GMT",
"version": "v2"
},
{
"created": "Wed, 6 May 2020 22:25:50 GMT",
"version": "v3"
},
{
"created": "Mon, 10 May 2021 18:17:22 GMT",
"version": "v4"
}
] |
2023-07-18
|
[
[
"Frasch",
"Martin G.",
""
]
] |
Heart rate variability (HRV) has been studied for over 50 years, yet an integrative concept is missing on what HRV's mathematical properties represent physiologically. Here I introduce the notion of HRV code as an attempt to address this challenge systematically. I review the existing evidence from physiological studies in various species to support this concept and propose experiments to help validate and expand this notion further.
|
1102.5528
|
Henry L\"utcke
|
Henry L\"utcke and Fritjof Helmchen
|
Two-photon imaging and analysis of neural network dynamics
|
36 pages, 4 figures, accepted for publication in Reports on Progress
in Physics
| null |
10.1088/0034-4885/74/8/086602
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The glow of a starry night sky, the smell of a freshly brewed cup of coffee
or the sound of ocean waves breaking on the beach are representations of the
physical world that have been created by the dynamic interactions of thousands
of neurons in our brains. How the brain mediates perceptions, creates thoughts,
stores memories and initiates actions remains one of the most profound puzzles
in biology, if not all of science. A key to a mechanistic understanding of how
the nervous system works is the ability to analyze the dynamics of neuronal
networks in the living organism in the context of sensory stimulation and
behaviour. Dynamic brain properties have been fairly well characterized on the
microscopic level of individual neurons and on the macroscopic level of whole
brain areas largely with the help of various electrophysiological techniques.
However, our understanding of the mesoscopic level comprising local populations
of hundreds to thousands of neurons (so called 'microcircuits') remains
comparably poor. In large parts, this has been due to the technical
difficulties involved in recording from large networks of neurons with
single-cell spatial resolution and near-millisecond temporal resolution in the
brain of living animals. In recent years, two-photon microscopy has emerged as
a technique which meets many of these requirements and thus has become the
method of choice for the interrogation of local neural circuits. Here, we
review the state-of-research in the field of two-photon imaging of neuronal
populations, covering the topics of microscope technology, suitable fluorescent
indicator dyes, staining techniques, and in particular analysis techniques for
extracting relevant information from the fluorescence data. We expect that
functional analysis of neural networks using two-photon imaging will help to
decipher fundamental operational principles of neural microcircuits.
|
[
{
"created": "Sun, 27 Feb 2011 17:48:50 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Jul 2011 21:42:27 GMT",
"version": "v2"
}
] |
2011-07-28
|
[
[
"Lütcke",
"Henry",
""
],
[
"Helmchen",
"Fritjof",
""
]
] |
The glow of a starry night sky, the smell of a freshly brewed cup of coffee or the sound of ocean waves breaking on the beach are representations of the physical world that have been created by the dynamic interactions of thousands of neurons in our brains. How the brain mediates perceptions, creates thoughts, stores memories and initiates actions remains one of the most profound puzzles in biology, if not all of science. A key to a mechanistic understanding of how the nervous system works is the ability to analyze the dynamics of neuronal networks in the living organism in the context of sensory stimulation and behaviour. Dynamic brain properties have been fairly well characterized on the microscopic level of individual neurons and on the macroscopic level of whole brain areas largely with the help of various electrophysiological techniques. However, our understanding of the mesoscopic level comprising local populations of hundreds to thousands of neurons (so called 'microcircuits') remains comparably poor. In large parts, this has been due to the technical difficulties involved in recording from large networks of neurons with single-cell spatial resolution and near-millisecond temporal resolution in the brain of living animals. In recent years, two-photon microscopy has emerged as a technique which meets many of these requirements and thus has become the method of choice for the interrogation of local neural circuits. Here, we review the state-of-research in the field of two-photon imaging of neuronal populations, covering the topics of microscope technology, suitable fluorescent indicator dyes, staining techniques, and in particular analysis techniques for extracting relevant information from the fluorescence data. We expect that functional analysis of neural networks using two-photon imaging will help to decipher fundamental operational principles of neural microcircuits.
|
2107.07574
|
Sayantan Nag Chowdhury
|
Sayantan Nag Chowdhury, Srilena Kundu, Jeet Banerjee, Matja\v{z} Perc,
Dibakar Ghosh
|
Eco-evolutionary dynamics of cooperation in the presence of policing
| null |
J. Theor. Biol. 518, 110606 (2021)
|
10.1016/j.jtbi.2021.110606
| null |
q-bio.PE physics.soc-ph
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Ecology and evolution are inherently linked, and studying a mathematical
model that considers both holds promise of insightful discoveries related to
the dynamics of cooperation. In the present article, we use the prisoner's
dilemma (PD) game as a basis for long-term apprehension of the essential social
dilemma related to cooperation among unrelated individuals. We upgrade the
contemporary PD game with an inclusion of evolution-induced act of punishment
as a third competing strategy in addition to the traditional cooperators and
defectors. In a population structure, the abundance of ecologically-viable free
space often regulates the reproductive opportunities of the constituents.
Hence, additionally, we consider the availability of free space as an
ecological footprint, thus arriving at a simple eco-evolutionary model, which
displays fascinating complex dynamics. As possible outcomes, we report the
individual dominance of cooperators and defectors as well as a plethora of
mixed states, where different strategies coexist, thereby maintaining
diversity in a socio-ecological framework. These states can either be steady or
oscillating, whereby oscillations are sustained by cyclic dominance among
different combinations of cooperators, defectors, and punishers. We also
observe a novel route to cyclic dominance where cooperators, punishers, and
defectors enter a coexistence via an inverse Hopf bifurcation that is followed
by an inverse period doubling route.
|
[
{
"created": "Thu, 15 Jul 2021 19:16:10 GMT",
"version": "v1"
}
] |
2021-08-10
|
[
[
"Chowdhury",
"Sayantan Nag",
""
],
[
"Kundu",
"Srilena",
""
],
[
"Banerjee",
"Jeet",
""
],
[
"Perc",
"Matjaž",
""
],
[
"Ghosh",
"Dibakar",
""
]
] |
Ecology and evolution are inherently linked, and studying a mathematical model that considers both holds promise of insightful discoveries related to the dynamics of cooperation. In the present article, we use the prisoner's dilemma (PD) game as a basis for long-term apprehension of the essential social dilemma related to cooperation among unrelated individuals. We upgrade the contemporary PD game with an inclusion of evolution-induced act of punishment as a third competing strategy in addition to the traditional cooperators and defectors. In a population structure, the abundance of ecologically-viable free space often regulates the reproductive opportunities of the constituents. Hence, additionally, we consider the availability of free space as an ecological footprint, thus arriving at a simple eco-evolutionary model, which displays fascinating complex dynamics. As possible outcomes, we report the individual dominance of cooperators and defectors as well as a plethora of mixed states, where different strategies coexist, thereby maintaining diversity in a socio-ecological framework. These states can either be steady or oscillating, whereby oscillations are sustained by cyclic dominance among different combinations of cooperators, defectors, and punishers. We also observe a novel route to cyclic dominance where cooperators, punishers, and defectors enter a coexistence via an inverse Hopf bifurcation that is followed by an inverse period doubling route.
|
2403.07475
|
Zhiheng Lyu
|
Zhiheng Lyu, Jiannan Yang, Zhongzhi Xu, Weilan Wang, Weibin Cheng,
Kwok-Leung Tsui, Gary Tse and Qingpeng Zhang
|
Predicting the Risk of Ischemic Stroke in Patients with Atrial
Fibrillation using Heterogeneous Drug-protein-disease Network-based Deep
Learning
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a deep learning model, ABioSPATH, to predict the one-year risk of
ischemic stroke (IS) in atrial fibrillation (AF) patients. The model integrates
drug-protein-disease pathways and real-world clinical data of AF patients to
generate the IS risk and potential pathways for each patient. The model uses a
multilayer network to identify the mechanism of drug action and disease
comorbidity propagation pathways. The model is tested on the Electronic Health
Record (EHR) data of 7859 AF patients from 43 hospitals in Hong Kong. The model
outperforms all baselines across all metrics and provides valuable
molecular-level insights for clinical use. The model also highlights key
proteins in common pathways and potential IS risks tied to less-studied drugs.
The model only requires routinely collected data, without requiring expensive
biomarkers to be tested.
|
[
{
"created": "Tue, 12 Mar 2024 10:10:55 GMT",
"version": "v1"
}
] |
2024-03-13
|
[
[
"Lyu",
"Zhiheng",
""
],
[
"Yang",
"Jiannan",
""
],
[
"Xu",
"Zhongzhi",
""
],
[
"Wang",
"Weilan",
""
],
[
"Cheng",
"Weibin",
""
],
[
"Tsui",
"Kwok-Leung",
""
],
[
"Tse",
"Gary",
""
],
[
"Zhang",
"Qingpeng",
""
]
] |
We develop a deep learning model, ABioSPATH, to predict the one-year risk of ischemic stroke (IS) in atrial fibrillation (AF) patients. The model integrates drug-protein-disease pathways and real-world clinical data of AF patients to generate the IS risk and potential pathways for each patient. The model uses a multilayer network to identify the mechanism of drug action and disease comorbidity propagation pathways. The model is tested on the Electronic Health Record (EHR) data of 7859 AF patients from 43 hospitals in Hong Kong. The model outperforms all baselines across all metrics and provides valuable molecular-level insights for clinical use. The model also highlights key proteins in common pathways and potential IS risks tied to less-studied drugs. The model only requires routinely collected data, without requiring expensive biomarkers to be tested.
|
2305.14185
|
Fabio Chalub
|
Fabio A.C.C. Chalub, Antonio G\'omez-Corral, Mart\'in
L\'opez-Garc\'ia, and F\'atima Palacios-Rodr\'iguez
|
A Markov chain model to investigate the spread of antibiotic-resistant
bacteria in hospitals
|
30 pages, 9 figures
| null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
Ordinary differential equation (ODE) models used in mathematical epidemiology
assume explicitly or implicitly large populations. For the study of infections
in a hospital this is an extremely restrictive assumption as typically a
hospital ward has a few dozen, or even fewer, patients. This work reframes a
well-known model used in the study of the spread of antibiotic-resistant
bacteria in hospitals, to consider the pathogen transmission dynamics in small
populations. In this vein, this paper proposes a Markov chain model to describe
the spread of a single bacterial species in a hospital ward where patients may
be free of bacteria or may carry bacterial strains that are either sensitive or
resistant to antimicrobial agents. We determine the probability law of the
\emph{exact} reproduction number ${\cal R}_{exact,0}$, which is here defined as
the random number of secondary infections generated by those patients who are
accommodated in a predetermined bed before a patient who is free of bacteria is
accommodated in this bed for the first time. Specifically, we decompose the
exact reproduction number ${\cal R}_{exact,0}$ into two contributions allowing
us to distinguish between infections due to the sensitive and the resistant
bacterial strains. Our methodology is mainly based on structured Markov chains
and the use of related matrix-analytic methods. This guarantees the
compatibility of the new finite-population model with large-population models
present in the literature and takes full advantage, in its mathematical
analysis, of the intrinsic stochasticity.
|
[
{
"created": "Tue, 23 May 2023 16:06:56 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Aug 2023 10:59:21 GMT",
"version": "v2"
}
] |
2023-08-21
|
[
[
"Chalub",
"Fabio A. C. C.",
""
],
[
"Gómez-Corral",
"Antonio",
""
],
[
"López-García",
"Martín",
""
],
[
"Palacios-Rodríguez",
"Fátima",
""
]
] |
Ordinary differential equation (ODE) models used in mathematical epidemiology assume explicitly or implicitly large populations. For the study of infections in a hospital this is an extremely restrictive assumption as typically a hospital ward has a few dozen, or even fewer, patients. This work reframes a well-known model used in the study of the spread of antibiotic-resistant bacteria in hospitals, to consider the pathogen transmission dynamics in small populations. In this vein, this paper proposes a Markov chain model to describe the spread of a single bacterial species in a hospital ward where patients may be free of bacteria or may carry bacterial strains that are either sensitive or resistant to antimicrobial agents. We determine the probability law of the \emph{exact} reproduction number ${\cal R}_{exact,0}$, which is here defined as the random number of secondary infections generated by those patients who are accommodated in a predetermined bed before a patient who is free of bacteria is accommodated in this bed for the first time. Specifically, we decompose the exact reproduction number ${\cal R}_{exact,0}$ into two contributions allowing us to distinguish between infections due to the sensitive and the resistant bacterial strains. Our methodology is mainly based on structured Markov chains and the use of related matrix-analytic methods. This guarantees the compatibility of the new finite-population model with large-population models present in the literature and takes full advantage, in its mathematical analysis, of the intrinsic stochasticity.
|
0911.5315
|
Timothy Saunders
|
Timothy E Saunders and Martin Howard
|
When it Pays to Rush: Interpreting Morphogen Gradients Prior to
Steady-State
|
18 pages, 3 figures
|
Physical Biology, 6, 046020 (2009)
|
10.1088/1478-3975/6/4/046020
| null |
q-bio.MN q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
During development, morphogen gradients precisely determine the position of
gene expression boundaries despite the inevitable presence of fluctuations.
Recent experiments suggest that some morphogen gradients may be interpreted
prior to reaching steady-state. Theoretical work has predicted that such
systems will be more robust to embryo-to-embryo fluctuations. By analysing two
experimentally motivated models of morphogen gradient formation, we investigate
the positional precision of gene expression boundaries determined by
pre-steady-state morphogen gradients in the presence of embryo-to-embryo
fluctuations, internal biochemical noise and variations in the timing of
morphogen measurement. Morphogens that are direct transcription factors are
found to be particularly sensitive to internal noise when interpreted prior to
steady-state, disadvantaging early measurement, even in the presence of large
embryo-to-embryo fluctuations. Morphogens interpreted by cell-surface receptors
can be measured prior to steady-state without significant decrease in
positional precision provided fluctuations in the timing of measurement are
small. Applying our results to experiment, we predict that Bicoid, a
transcription factor morphogen in Drosophila, is unlikely to be interpreted
prior to reaching steady-state. We also predict that Activin in Xenopus and
Nodal in zebrafish, morphogens interpreted by cell-surface receptors, can be
decoded in pre-steady-state.
|
[
{
"created": "Fri, 27 Nov 2009 17:22:13 GMT",
"version": "v1"
}
] |
2009-11-30
|
[
[
"Saunders",
"Timothy E",
""
],
[
"Howard",
"Martin",
""
]
] |
During development, morphogen gradients precisely determine the position of gene expression boundaries despite the inevitable presence of fluctuations. Recent experiments suggest that some morphogen gradients may be interpreted prior to reaching steady-state. Theoretical work has predicted that such systems will be more robust to embryo-to-embryo fluctuations. By analysing two experimentally motivated models of morphogen gradient formation, we investigate the positional precision of gene expression boundaries determined by pre-steady-state morphogen gradients in the presence of embryo-to-embryo fluctuations, internal biochemical noise and variations in the timing of morphogen measurement. Morphogens that are direct transcription factors are found to be particularly sensitive to internal noise when interpreted prior to steady-state, disadvantaging early measurement, even in the presence of large embryo-to-embryo fluctuations. Morphogens interpreted by cell-surface receptors can be measured prior to steady-state without significant decrease in positional precision provided fluctuations in the timing of measurement are small. Applying our results to experiment, we predict that Bicoid, a transcription factor morphogen in Drosophila, is unlikely to be interpreted prior to reaching steady-state. We also predict that Activin in Xenopus and Nodal in zebrafish, morphogens interpreted by cell-surface receptors, can be decoded in pre-steady-state.
|
2206.02438
|
Matej Hoffmann Ph.D.
|
Matej Hoffmann
|
Body schema or the body as its own best model
|
7 pages, 2 figures. arXiv admin note: text overlap with
arXiv:2010.09325
|
in G. \v{S}ejnov\'a and M. Vavre\v{c}ka and J. Hvoreck\'y, ed.,
'Kognice a um\v{e}l\'y \v{z}ivot XX (Cognition and Artificial Life XX)', CTU
in Prague, , pp. 45-51 (2022)
| null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Rodney Brooks (1991) put forth the idea that during an agent's interaction
with its environment, representations of the world often stand in the way.
Instead, using the world as its own best model, i.e. interacting with it
directly without making models, often leads to better and more natural
behavior. The same perspective can be applied to representations of the agent's
body. I analyze different examples from biology -- octopus and humans in
particular -- and compare them with robots and their body models. At one end of
the spectrum, the octopus, a highly intelligent animal, largely relies on the
mechanical properties of its arms and peripheral nervous system. No central
representations or maps of its body were found in its central nervous system.
Primate brains do contain areas dedicated to processing body-related
information and different body maps were found. Yet, these representations are
still largely implicit and distributed and some functionality is also offloaded
to the periphery. Robots, on the other hand, rely almost exclusively on their
body models when planning and executing behaviors. I analyze the pros and cons
of these different approaches and propose what may be the best solution for
robots of the future.
|
[
{
"created": "Mon, 6 Jun 2022 08:57:33 GMT",
"version": "v1"
}
] |
2022-06-07
|
[
[
"Hoffmann",
"Matej",
""
]
] |
Rodney Brooks (1991) put forth the idea that during an agent's interaction with its environment, representations of the world often stand in the way. Instead, using the world as its own best model, i.e. interacting with it directly without making models, often leads to better and more natural behavior. The same perspective can be applied to representations of the agent's body. I analyze different examples from biology -- octopus and humans in particular -- and compare them with robots and their body models. At one end of the spectrum, the octopus, a highly intelligent animal, largely relies on the mechanical properties of its arms and peripheral nervous system. No central representations or maps of its body were found in its central nervous system. Primate brains do contain areas dedicated to processing body-related information and different body maps were found. Yet, these representations are still largely implicit and distributed and some functionality is also offloaded to the periphery. Robots, on the other hand, rely almost exclusively on their body models when planning and executing behaviors. I analyze the pros and cons of these different approaches and propose what may be the best solution for robots of the future.
|
2205.01790
|
Soumyajit Seth
|
Zeric Tabekoueng Njitacke, Sishu Shankar Muni, Soumyajit Seth, Jan
Awrejcewicz, Jacques Kengne
|
Complex dynamics of a heterogeneous network of Hindmarsh-Rose neurons
| null | null |
10.1088/1402-4896/acbdd1
| null |
q-bio.NC nlin.CD
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this contribution, we have considered the collective behavior of two
coupled heterogeneous Hindmarsh-Rose (HR) neurons as well as of a network of
such neurons. The heterogeneous models were made of a memristive 2D HR neuron
and the traditional 3D HR neuron. Investigating a model of two coupled neurons through an
electrical synapse reveals dissipative properties. When control parameters are
varied, the coupled neuron model exhibits rich dynamics, such as the periodic,
quasi-periodic, and chaotic dynamics involving either bursting or spiking
oscillations. For weak electrical coupling strength, non-synchronized motion is
observed. But in the case of higher coupling strength, synchronized cluster
states are observed. Besides, ring-star networks of up to 100 neurons under
three different heterogeneous topologies are investigated, and various
spatiotemporal patterns are explored. It is found that the spatiotemporal
patterns depend on the topology of the heterogeneous network considered. A new
clustered chimera state is revealed qualitatively via the recurrence plot. The
cluster states are indicated in the ring and star configurations of the
heterogeneous network. Single and double-well chimera states have been revealed
in the ring and ring-star structures. Finally, an equivalent electronic circuit
for the two coupled heterogeneous neurons is designed and investigated in the PSIM
simulation environment. A perfect match is observed between the results
obtained from the designed analog circuit and the mathematical model of the two
coupled neurons, which supports the fact that our obtained results are not
related to an artifact.
|
[
{
"created": "Tue, 3 May 2022 21:33:32 GMT",
"version": "v1"
}
] |
2023-02-24
|
[
[
"Njitacke",
"Zeric Tabekoueng",
""
],
[
"Muni",
"Sishu Shankar",
""
],
[
"Seth",
"Soumyajit",
""
],
[
"Awrejcewicz",
"Jan",
""
],
[
"Kengne",
"Jacques",
""
]
] |
In this contribution, we have considered the collective behavior of two coupled heterogeneous Hindmarsh-Rose (HR) neurons as well as of a network of such neurons. The heterogeneous models were made of a memristive 2D HR neuron and the traditional 3D HR neuron. Investigating a model of two coupled neurons through an electrical synapse reveals dissipative properties. When control parameters are varied, the coupled neuron model exhibits rich dynamics, such as the periodic, quasi-periodic, and chaotic dynamics involving either bursting or spiking oscillations. For weak electrical coupling strength, non-synchronized motion is observed. But in the case of higher coupling strength, synchronized cluster states are observed. Besides, ring-star networks of up to 100 neurons under three different heterogeneous topologies are investigated, and various spatiotemporal patterns are explored. It is found that the spatiotemporal patterns depend on the topology of the heterogeneous network considered. A new clustered chimera state is revealed qualitatively via the recurrence plot. The cluster states are indicated in the ring and star configurations of the heterogeneous network. Single and double-well chimera states have been revealed in the ring and ring-star structures. Finally, an equivalent electronic circuit for the two coupled heterogeneous neurons is designed and investigated in the PSIM simulation environment. A perfect match is observed between the results obtained from the designed analog circuit and the mathematical model of the two coupled neurons, which supports the fact that our obtained results are not related to an artifact.
|
0704.3263
|
Dominique Jean-Marie Mornet
|
Karim Hnia, G\'erald Hugon, Ahmed Masmoudi, Jacques Mercier,
Fran\c{c}ois Rivier, Dominique Jean-Marie Mornet
|
Effect of beta-Dystroglycan Processing on Utrophin / DP116 Anchorage in
Normal and MDX Mouse Schwann Cell Membrane
| null |
Neuroscience 141 (18/04/2006) 607-620
|
10.1016/J.neuroscience.2006.04.043
| null |
q-bio.NC
| null |
In the peripheral nervous system, utrophin and the short dystrophin isoform
(Dp116) are co-localized at the outermost layer of the myelin sheath of nerve
fibers, together with the dystroglycan complex. In peripheral nerve, matrix
metalloproteinase (MMP) creates a 30 kDa fragment of beta-dystroglycan, leading
to a disruption of the link between the extracellular matrix and the cell
membrane. Here we asked whether the processing of beta-dystroglycan could
influence the anchorage of Dp116 and/or utrophin in the normal and mdx Schwann
cell membrane. We showed that MMP-9 was more activated in mdx nerve than in
the wild-type one. This activation leads to an accumulation of the 30 kDa
beta-dystroglycan isoform and has an impact on the anchorage of Dp116 and
utrophin isoforms in the mdx Schwann cell membrane. Our results showed that
Dp116 had greater affinity for the full-length form of beta-dystroglycan than
for the 30 kDa form. Moreover, we showed for the first time that the short
isoform of utrophin (Up71) was over-expressed in mdx Schwann cells compared to
wild-type. In addition, this utrophin isoform (Up71) seems to have greater
affinity for the 30 kDa beta-dystroglycan, which could explain greater
stabilization of this 30 kDa form at the membrane compartment. Our results
highlight the potential participation of the short utrophin isoform and the
cleaved form of beta-dystroglycan in mdx Schwann cell membrane architecture.
|
[
{
"created": "Tue, 24 Apr 2007 19:17:45 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Hnia",
"Karim",
""
],
[
"Hugon",
"Gérald",
""
],
[
"Masmoudi",
"Ahmed",
""
],
[
"Mercier",
"Jacques",
""
],
[
"Rivier",
"François",
""
],
[
"Mornet",
"Dominique Jean-Marie",
""
]
] |
In the peripheral nervous system, utrophin and the short dystrophin isoform (Dp116) are co-localized at the outermost layer of the myelin sheath of nerve fibers, together with the dystroglycan complex. In peripheral nerve, matrix metalloproteinase (MMP) creates a 30 kDa fragment of beta-dystroglycan, leading to a disruption of the link between the extracellular matrix and the cell membrane. Here we asked whether the processing of beta-dystroglycan could influence the anchorage of Dp116 and/or utrophin in the normal and mdx Schwann cell membrane. We showed that MMP-9 was more activated in mdx nerve than in the wild-type one. This activation leads to an accumulation of the 30 kDa beta-dystroglycan isoform and has an impact on the anchorage of Dp116 and utrophin isoforms in the mdx Schwann cell membrane. Our results showed that Dp116 had greater affinity for the full-length form of beta-dystroglycan than for the 30 kDa form. Moreover, we showed for the first time that the short isoform of utrophin (Up71) was over-expressed in mdx Schwann cells compared to wild-type. In addition, this utrophin isoform (Up71) seems to have greater affinity for the 30 kDa beta-dystroglycan, which could explain greater stabilization of this 30 kDa form at the membrane compartment. Our results highlight the potential participation of the short utrophin isoform and the cleaved form of beta-dystroglycan in mdx Schwann cell membrane architecture.
|
1711.03332
|
Alberto Sorrentino
|
Sara Sommariva and Alberto Sorrentino and Michele Piana and Vittorio
Pizzella and Laura Marzetti
|
A comparative study of the robustness of frequency--domain connectivity
measures to finite data length
|
32 pages, 13 figures
| null | null | null |
q-bio.QM math.NA q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this work we use numerical simulation to investigate how the temporal
length of the data affects the reliability of the estimates of brain
connectivity from EEG time--series. We assume that the neural sources follow a
stable MultiVariate AutoRegressive model, and consider three connectivity
metrics: Imaginary part of Coherency (IC), generalized Partial Directed
Coherence (gPDC) and frequency--domain Granger Causality (fGC). In order to
assess the statistical significance of the estimated values, we use the
surrogate data test by generating phase--randomized and autoregressive
surrogate data. We first consider the ideal case where we know the source time
courses exactly. Here we show that, as expected, even exact knowledge of the
source time courses is not sufficient to provide reliable estimates of the
connectivity when the number of samples gets small; however, while gPDC and fGC
tend to provide a larger number of false positives, the IC becomes less
sensitive to the presence of connectivity. Then we proceed with more realistic
simulations, where the source time courses are estimated using eLORETA, and the
EEG signal is affected by biological noise of increasing intensity. Using the
ideal case as a reference, we show that the impact of biological noise on IC
estimates is qualitatively different from the impact on gPDC and fGC.
|
[
{
"created": "Thu, 9 Nov 2017 11:25:29 GMT",
"version": "v1"
}
] |
2017-11-10
|
[
[
"Sommariva",
"Sara",
""
],
[
"Sorrentino",
"Alberto",
""
],
[
"Piana",
"Michele",
""
],
[
"Pizzella",
"Vittorio",
""
],
[
"Marzetti",
"Laura",
""
]
] |
In this work we use numerical simulation to investigate how the temporal length of the data affects the reliability of the estimates of brain connectivity from EEG time--series. We assume that the neural sources follow a stable MultiVariate AutoRegressive model, and consider three connectivity metrics: Imaginary part of Coherency (IC), generalized Partial Directed Coherence (gPDC) and frequency--domain Granger Causality (fGC). In order to assess the statistical significance of the estimated values, we use the surrogate data test by generating phase--randomized and autoregressive surrogate data. We first consider the ideal case where we know the source time courses exactly. Here we show that, as expected, even exact knowledge of the source time courses is not sufficient to provide reliable estimates of the connectivity when the number of samples gets small; however, while gPDC and fGC tend to provide a larger number of false positives, the IC becomes less sensitive to the presence of connectivity. Then we proceed with more realistic simulations, where the source time courses are estimated using eLORETA, and the EEG signal is affected by biological noise of increasing intensity. Using the ideal case as a reference, we show that the impact of biological noise on IC estimates is qualitatively different from the impact on gPDC and fGC.
|
q-bio/0407040
|
Jie Liang
|
Changyu Hu, Xiang Li and Jie Liang
|
Developing optimal nonlinear scoring function for protein design
|
25 pages, 6 figures, 7 tables. Accepted by Bioinformatics
| null | null | null |
q-bio.BM
| null |
Motivation. Protein design aims to identify sequences compatible with a given
protein fold but incompatible to any alternative folds. To select the correct
sequences and to guide the search process, a design scoring function is
critically important. Such a scoring function should be able to characterize
the global fitness landscape of many proteins simultaneously.
Results. To find optimal design scoring functions, we introduce two geometric
views and propose a formulation using mixture of nonlinear Gaussian kernel
functions. We aim to solve a simplified protein sequence design problem. Our
goal is to distinguish each native sequence for a major portion of
representative protein structures from a large number of alternative decoy
sequences, each a fragment from a protein of a different fold. Our scoring
function perfectly discriminates a set of 440 native proteins from 14 million
sequence decoys. We show that no linear scoring function can succeed in this
task. In a blind test of unrelated proteins, our scoring function misclassifies
only 13 native proteins out of 194. This compares favorably with about 3-4
times more misclassifications when optimal linear functions reported in the
literature are used. We also discuss how to develop protein folding scoring
functions.
|
[
{
"created": "Thu, 29 Jul 2004 18:14:02 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Hu",
"Changyu",
""
],
[
"Li",
"Xiang",
""
],
[
"Liang",
"Jie",
""
]
] |
Motivation. Protein design aims to identify sequences compatible with a given protein fold but incompatible to any alternative folds. To select the correct sequences and to guide the search process, a design scoring function is critically important. Such a scoring function should be able to characterize the global fitness landscape of many proteins simultaneously. Results. To find optimal design scoring functions, we introduce two geometric views and propose a formulation using mixture of nonlinear Gaussian kernel functions. We aim to solve a simplified protein sequence design problem. Our goal is to distinguish each native sequence for a major portion of representative protein structures from a large number of alternative decoy sequences, each a fragment from a protein of a different fold. Our scoring function perfectly discriminates a set of 440 native proteins from 14 million sequence decoys. We show that no linear scoring function can succeed in this task. In a blind test of unrelated proteins, our scoring function misclassifies only 13 native proteins out of 194. This compares favorably with about 3-4 times more misclassifications when optimal linear functions reported in the literature are used. We also discuss how to develop protein folding scoring functions.
|
2007.04853
|
Victor Popescu
|
Victor-Bogdan Popescu and Krishna Kanhaiya and Iulian N\u{a}stac and
Eugen Czeizler and Ion Petre
|
Identifying efficient controls of complex interaction networks using
genetic algorithms
|
The submission contains 34 pages, 9 figures and 6 tables
| null | null | null |
q-bio.MN cs.NE cs.SY eess.SY q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Control theory has recently seen impactful applications in network science,
especially in connection with network medicine. A key topic of
research is that of finding minimal external interventions that offer control
over the dynamics of a given network, a problem known as network
controllability. We propose in this article a new solution for this problem
based on genetic algorithms. We tailor our solution for applications in
computational drug repurposing, seeking to maximise its use of FDA-approved
drug targets in a given disease-specific protein-protein interaction network.
We show how our algorithm identifies a number of potentially efficient drugs
for breast, ovarian, and pancreatic cancer. We demonstrate our algorithm on
several benchmark networks from cancer medicine, social networks, electronic
circuits, and several random networks with their edges distributed according to
the Erd\H{o}s-R\'{e}nyi, the small-world, and the scale-free properties.
Overall, we show that our new algorithm is more efficient in identifying
relevant drug targets in a disease network, advancing the computational
solutions needed for new therapeutic and drug repurposing approaches.
|
[
{
"created": "Thu, 9 Jul 2020 14:56:54 GMT",
"version": "v1"
}
] |
2020-07-10
|
[
[
"Popescu",
"Victor-Bogdan",
""
],
[
"Kanhaiya",
"Krishna",
""
],
[
"Năstac",
"Iulian",
""
],
[
"Czeizler",
"Eugen",
""
],
[
"Petre",
"Ion",
""
]
] |
Control theory has recently seen impactful applications in network science, especially in connection with network medicine. A key topic of research is that of finding minimal external interventions that offer control over the dynamics of a given network, a problem known as network controllability. We propose in this article a new solution for this problem based on genetic algorithms. We tailor our solution for applications in computational drug repurposing, seeking to maximise its use of FDA-approved drug targets in a given disease-specific protein-protein interaction network. We show how our algorithm identifies a number of potentially efficient drugs for breast, ovarian, and pancreatic cancer. We demonstrate our algorithm on several benchmark networks from cancer medicine, social networks, electronic circuits, and several random networks with their edges distributed according to the Erd\H{o}s-R\'{e}nyi, the small-world, and the scale-free properties. Overall, we show that our new algorithm is more efficient in identifying relevant drug targets in a disease network, advancing the computational solutions needed for new therapeutic and drug repurposing approaches.
|
2012.08973
|
Tara Van Viegen
|
Tara van Viegen (1), Athena Akrami (2), Kate Bonnen (3), Eric DeWitt
(4), Alexandre Hyafil (5), Helena Ledmyr (6), Grace W. Lindsay (7), Patrick
Mineault, John D. Murray (8), Xaq Pitkow (9 and 10 and 11), Aina Puce (12),
Madineh Sedigh-Sarvestani (13), Carsen Stringer (14), Titipat Achakulvisut
(15), Elnaz Alikarami (16), Melvin Selim Atay (17), Eleanor Batty (18),
Jeffrey C. Erlich (19), Byron V. Galbraith (20), Yueqi Guo (21), Ashley L.
Juavinett (22), Matthew R. Krause (23), Songting Li (24), Marius Pachitariu
(14), Elizabeth Straley (9), Davide Valeriani (25 and 26 and 27), Emma
Vaughan (28), Maryam Vaziri-Pashkam (29), Michael L. Waskom (3), Gunnar Blohm
(30), Konrad Kording (15), Paul Schrater (31), Brad Wyble (32), Sean Escola
(33), Megan A. K. Peters (34) ((1) Princeton Neuroscience Institute,
Princeton University, Princeton, NJ, USA, (2) Sainsbury Wellcome Centre,
University College London, London, UK, (3) Center for Neural Science, New
York University, New York City, NY, USA, (4) Champalimaud Research,
Champalimaud Foundation, Lisbon, Portugal, (5) Centre de Recerca
Matem\`atica, Bellaterra, Universitat Aut\`onoma de Barcelona, Barcelona,
Spain, (6) International Neuroinformatics Coordinating Facility, (7) Gatsby
Computational Neuroscience Unit, Sainsbury Wellcome Centre, University
College London, London, UK, (8) Department of Psychiatry, Yale University,
New Haven, CT, USA, (9) Baylor College of Medicine, Dept. of Neuroscience,
Houston, TX, USA, (10) Baylor College of Medicine, Center for Neuroscience
and Artificial Intelligence, Houston, TX, USA, (11) Rice University, Dept of
Electrical and Computer Engineering, Houston, TX, USA, (12) Psychological &
Brain Sciences, Indiana University, Bloomington, IN, USA, (13) Max Planck
Florida Institute for Neuroscience, Jupiter, FL, USA, (14) HHMI Janelia
Research Campus, Ashburn, VA, USA, (15) University of Pennsylvania, Depts of
Neuroscience and Bioengineering, Philadelphia, PA, USA, (16) Faculty of
dentistry, McGill University, Montreal, QB, Canada, (17) Middle East
Technical University, Ankara, Turkey, (18) Department of Neurobiology,
Harvard University, Cambridge, MA, USA, (19) New York University Shanghai,
Shanghai, China, (20) Talla, Inc, Boston, MA, USA (21) Johns Hopkins
University, Department of Biomedical Engineering, Baltimore, MD, USA, (22)
Division of Biological Sciences, UC San Diego, La Jolla, CA, (23) Montreal
Neurological Institute, McGill University, Montreal, QB, Canada, (24) School
of Mathematical Sciences, MOE-LSC and Institute of Natural Sciences, Shanghai
Jiao Tong University, Shanghai, P.R. China, (25) Department of Otolaryngology
- Head & Neck Surgery, Massachusetts Eye and Ear, Boston, MA, USA, (26)
Department of Otolaryngology - Head & Neck Surgery, Harvard Medical School,
Boston, MA, USA, (27) Department of Neurology, Massachusetts General
Hospital, Boston, MA, USA, (28) IBRO - Simons Computational Neuroscience
Imbizo, Cape Town, South-Africa, (29) Laboratory of Brain and Cognition,
National Institute of Mental Health, Bethesda, MD, USA, (30) Centre for
Neuroscience Studies, Queen's University, Kingston, Ontario, Canada, (31)
University of Minnesota, Depts of Psychology and Computer Science & Eng,
Minneapolis, MN, USA, (32) Psychology Department, Pennsylvania State
University, Philadelphia, PA, USA, (33) Columbia University, Dept of
Psychiatry, Center for Theoretical Neuroscience, New York, NY, USA, (34)
Department of Cognitive Sciences, University of California Irvine, Irvina,
CA, USA)
|
Neuromatch Academy: Teaching Computational Neuroscience with global
accessibility
|
10 pages, 3 figures. Equal contribution by the executive committee
members of Neuromatch Academy: Tara van Viegen, Athena Akrami, Kate Bonnen,
Eric DeWitt, Alexandre Hyafil, Helena Ledmyr, Grace W. Lindsay, Patrick
Mineault, John D. Murray, Xaq Pitkow, Aina Puce, Madineh Sedigh-Sarvestani,
Carsen Stringer. and equal contribution by the board of directors of
Neuromatch Academy: Gunnar Blohm, Konrad Kording, Paul Schrater, Brad Wyble,
Sean Escola, Megan A. K. Peters
| null | null | null |
q-bio.OT
|
http://creativecommons.org/licenses/by/4.0/
|
Neuromatch Academy designed and ran a fully online 3-week Computational
Neuroscience summer school for 1757 students with 191 teaching assistants
working in virtual inverted (or flipped) classrooms and on small group
projects. Fourteen languages, active community management, and low cost allowed
for an unprecedented level of inclusivity and universal accessibility.
|
[
{
"created": "Tue, 15 Dec 2020 17:20:17 GMT",
"version": "v1"
}
] |
2020-12-17
|
[
[
"van Viegen",
"Tara",
"",
"9 and 10 and 11"
],
[
"Akrami",
"Athena",
"",
"9 and 10 and 11"
],
[
"Bonnen",
"Kate",
"",
"9 and 10 and 11"
],
[
"DeWitt",
"Eric",
"",
"9 and 10 and 11"
],
[
"Hyafil",
"Alexandre",
"",
"9 and 10 and 11"
],
[
"Ledmyr",
"Helena",
"",
"9 and 10 and 11"
],
[
"Lindsay",
"Grace W.",
"",
"9 and 10 and 11"
],
[
"Mineault",
"Patrick",
"",
"9 and 10 and 11"
],
[
"Murray",
"John D.",
"",
"9 and 10 and 11"
],
[
"Pitkow",
"Xaq",
"",
"9 and 10 and 11"
],
[
"Puce",
"Aina",
"",
"25 and 26 and 27"
],
[
"Sedigh-Sarvestani",
"Madineh",
"",
"25 and 26 and 27"
],
[
"Stringer",
"Carsen",
"",
"25 and 26 and 27"
],
[
"Achakulvisut",
"Titipat",
"",
"25 and 26 and 27"
],
[
"Alikarami",
"Elnaz",
"",
"25 and 26 and 27"
],
[
"Atay",
"Melvin Selim",
"",
"25 and 26 and 27"
],
[
"Batty",
"Eleanor",
"",
"25 and 26 and 27"
],
[
"Erlich",
"Jeffrey C.",
"",
"25 and 26 and 27"
],
[
"Galbraith",
"Byron V.",
"",
"25 and 26 and 27"
],
[
"Guo",
"Yueqi",
"",
"25 and 26 and 27"
],
[
"Juavinett",
"Ashley L.",
"",
"25 and 26 and 27"
],
[
"Krause",
"Matthew R.",
"",
"25 and 26 and 27"
],
[
"Li",
"Songting",
"",
"25 and 26 and 27"
],
[
"Pachitariu",
"Marius",
"",
"25 and 26 and 27"
],
[
"Straley",
"Elizabeth",
"",
"25 and 26 and 27"
],
[
"Valeriani",
"Davide",
"",
"25 and 26 and 27"
],
[
"Vaughan",
"Emma",
""
],
[
"Vaziri-Pashkam",
"Maryam",
""
],
[
"Waskom",
"Michael L.",
""
],
[
"Blohm",
"Gunnar",
""
],
[
"Kording",
"Konrad",
""
],
[
"Schrater",
"Paul",
""
],
[
"Wyble",
"Brad",
""
],
[
"Escola",
"Sean",
""
],
[
"Peters",
"Megan A. K.",
""
]
] |
Neuromatch Academy designed and ran a fully online 3-week Computational Neuroscience summer school for 1757 students with 191 teaching assistants working in virtual inverted (or flipped) classrooms and on small group projects. Fourteen languages, active community management, and low cost allowed for an unprecedented level of inclusivity and universal accessibility.
|
1202.3384
|
Scott McKinley
|
Andrew M. Hein, Scott A. McKinley
|
Sensing and decision-making in random search
| null | null |
10.1073/pnas.1202686109
| null |
q-bio.PE math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While microscopic organisms can use gradient-based search to locate
resources, this strategy can be poorly suited to the sensory signals available
to macroscopic organisms. We propose a framework that models search-decision
making in cases where sensory signals are infrequent, subject to large
fluctuations, and contain little directional information. Our approach
simultaneously models an organism's intrinsic movement behavior (e.g. Levy
walk) while allowing this behavior to be adjusted based on sensory data. We
find that including even a simple model for signal response can dominate other
features of random search and greatly improve search performance. In
particular, we show that a lack of signal is not a lack of information.
Searchers that receive no signal can quickly abandon target-poor regions. Such
phenomena naturally give rise to the area-restricted search behavior exhibited
by many searching organisms.
|
[
{
"created": "Wed, 15 Feb 2012 17:42:44 GMT",
"version": "v1"
}
] |
2015-06-04
|
[
[
"Hein",
"Andrew M.",
""
],
[
"McKinley",
"Scott A.",
""
]
] |
While microscopic organisms can use gradient-based search to locate resources, this strategy can be poorly suited to the sensory signals available to macroscopic organisms. We propose a framework that models search-decision making in cases where sensory signals are infrequent, subject to large fluctuations, and contain little directional information. Our approach simultaneously models an organism's intrinsic movement behavior (e.g. Levy walk) while allowing this behavior to be adjusted based on sensory data. We find that including even a simple model for signal response can dominate other features of random search and greatly improve search performance. In particular, we show that a lack of signal is not a lack of information. Searchers that receive no signal can quickly abandon target-poor regions. Such phenomena naturally give rise to the area-restricted search behavior exhibited by many searching organisms.
|
1710.03655
|
Erick Martins Ratamero
|
Erick Martins Ratamero, Dom Bellini, Christopher G. Dowson, Rudolf A.
Roemer
|
Touching proteins with virtual bare hands: how to visualize protein-drug
complexes and their dynamics in virtual reality
| null |
Journal of Computer-Aided Molecular Design 32(6), 703-709 (2018)
|
10.1007/s10822-018-0123-0
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The ability to precisely visualize the atomic geometry of the interactions
between a drug and its protein target in structural models is critical in
predicting the correct modifications in previously identified inhibitors to
create more effective next-generation drugs. While attempting the above, it is
currently common practice among medicinal chemists to access the information
contained in three-dimensional structures through two-dimensional projections,
which can preclude disclosure of useful features. A more precise visualization
of the three-dimensional configuration of the atomic geometry in the models can
be achieved through the implementation of immersive virtual reality (VR). In
this work, we present a freely available software pipeline for visualising
protein structures through VR. New consumer hardware, such as the HTC Vive and
the Oculus Rift utilized in this study, is available at reasonable prices.
Moreover, we have combined VR visualization with fast algorithms for simulating
intramolecular motions of protein flexibility, in an effort to further improve
structure-led drug design by exposing molecular interactions that might be
hidden in the less informative static models.
|
[
{
"created": "Tue, 10 Oct 2017 15:28:23 GMT",
"version": "v1"
}
] |
2018-08-14
|
[
[
"Ratamero",
"Erick Martins",
""
],
[
"Bellini",
"Dom",
""
],
[
"Dowson",
"Christopher G.",
""
],
[
"Roemer",
"Rudolf A.",
""
]
] |
The ability to precisely visualize the atomic geometry of the interactions between a drug and its protein target in structural models is critical in predicting the correct modifications in previously identified inhibitors to create more effective next-generation drugs. While attempting the above, it is currently common practice among medicinal chemists to access the information contained in three-dimensional structures through two-dimensional projections, which can preclude disclosure of useful features. A more precise visualization of the three-dimensional configuration of the atomic geometry in the models can be achieved through the implementation of immersive virtual reality (VR). In this work, we present a freely available software pipeline for visualising protein structures through VR. New consumer hardware, such as the HTC Vive and the Oculus Rift utilized in this study, is available at reasonable prices. Moreover, we have combined VR visualization with fast algorithms for simulating intramolecular motions of protein flexibility, in an effort to further improve structure-led drug design by exposing molecular interactions that might be hidden in the less informative static models.
|
2109.06031
|
Steffen Lange
|
Steffen Lange, Richard Mogwitz, Denis H\"unniger, Anja Voss-B\"ohme
|
Modeling age-specific incidence of colon cancer via niche competition
|
28 pages, 13 figures
|
PLoS Comput Biol 18(8): e1010403 (2022)
|
10.1371/journal.pcbi.1010403
| null |
q-bio.TO nlin.CG physics.bio-ph physics.data-an physics.med-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Cancer development is a multistep process often starting with a single cell
in which a number of epigenetic and genetic alterations have accumulated thus
transforming it into a tumor cell. The progeny of such a single benign tumor
cell expands in the tissue and can at some point progress to malignant tumor
cells until a detectable tumor is formed. The dynamics from the early phase of
a single cell to a detectable tumor with billions of tumor cells are complex
and still not fully resolved, not even for the well-known prototype of
multistage carcinogenesis, the adenoma-adenocarcinoma sequence of colorectal
cancer. Mathematical models of such carcinogenesis are frequently tested and
calibrated based on reported age-specific incidence rates of cancer, but they
usually require calibration of four or more parameters due to the wide range of
processes these models aim to reflect. We present a cell-based model, which
focuses on the competition between wild-type and tumor cells in colonic crypts,
with which we are able to reproduce epidemiological incidence rates of colon
cancer. Additionally, the fraction of cancerous tumors with precancerous
lesions predicted by the model agrees with clinical estimates. The
correspondence between model and reported data suggests that the fate of tumor
development is majorly determined by the early phase of tumor growth and
progression long before a tumor becomes detectable. Due to the focus on the
early phase of tumor development, the model has only a single fit parameter,
the time scale set by an effective replacement rate of stem cells in the crypt.
We find this effective rate to be considerably smaller than the actual
replacement rate, which implies that the time scale is limited by the processes
succeeding clonal conversion of crypts.
|
[
{
"created": "Sat, 4 Sep 2021 10:03:46 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Jul 2022 08:11:54 GMT",
"version": "v2"
}
] |
2022-09-21
|
[
[
"Lange",
"Steffen",
""
],
[
"Mogwitz",
"Richard",
""
],
[
"Hünniger",
"Denis",
""
],
[
"Voss-Böhme",
"Anja",
""
]
] |
Cancer development is a multistep process often starting with a single cell in which a number of epigenetic and genetic alterations have accumulated thus transforming it into a tumor cell. The progeny of such a single benign tumor cell expands in the tissue and can at some point progress to malignant tumor cells until a detectable tumor is formed. The dynamics from the early phase of a single cell to a detectable tumor with billions of tumor cells are complex and still not fully resolved, not even for the well-known prototype of multistage carcinogenesis, the adenoma-adenocarcinoma sequence of colorectal cancer. Mathematical models of such carcinogenesis are frequently tested and calibrated based on reported age-specific incidence rates of cancer, but they usually require calibration of four or more parameters due to the wide range of processes these models aim to reflect. We present a cell-based model, which focuses on the competition between wild-type and tumor cells in colonic crypts, with which we are able to reproduce epidemiological incidence rates of colon cancer. Additionally, the fraction of cancerous tumors with precancerous lesions predicted by the model agrees with clinical estimates. The correspondence between model and reported data suggests that the fate of tumor development is majorly determined by the early phase of tumor growth and progression long before a tumor becomes detectable. Due to the focus on the early phase of tumor development, the model has only a single fit parameter, the time scale set by an effective replacement rate of stem cells in the crypt. We find this effective rate to be considerably smaller than the actual replacement rate, which implies that the time scale is limited by the processes succeeding clonal conversion of crypts.
|
2109.05833
|
Johannes Wirtz
|
Johannes Wirtz and St\'ephane Guindon
|
Rate of coalescence of pairs of lineages in the spatial
{\lambda}-Fleming-Viot process
|
31 pages, 8 figures
| null | null | null |
q-bio.PE math.PR
|
http://creativecommons.org/licenses/by/4.0/
|
We revisit the spatial ${\lambda}$-Fleming-Viot process introduced in [1].
Particularly, we are interested in the time $T_0$ to the most recent common
ancestor for two lineages. We distinguish between the case where the process
acts on the entire two-dimensional plane, and on a finite rectangle. Utilizing
a differential equation linking $T_0$ with the physical distance between the
lineages, we arrive at simple and reasonably accurate approximation schemes for
both cases. Furthermore, our analysis enables us to address the question of
whether the genealogical process of the model "comes down from infinity", which
has been partly answered before in [2].
|
[
{
"created": "Mon, 13 Sep 2021 10:01:53 GMT",
"version": "v1"
}
] |
2021-09-14
|
[
[
"Wirtz",
"Johannes",
""
],
[
"Guindon",
"Stéphane",
""
]
] |
We revisit the spatial ${\lambda}$-Fleming-Viot process introduced in [1]. Particularly, we are interested in the time $T_0$ to the most recent common ancestor for two lineages. We distinguish between the case where the process acts on the entire two-dimensional plane, and on a finite rectangle. Utilizing a differential equation linking $T_0$ with the physical distance between the lineages, we arrive at simple and reasonably accurate approximation schemes for both cases. Furthermore, our analysis enables us to address the question of whether the genealogical process of the model "comes down from infinity", which has been partly answered before in [2].
|
2008.05591
|
Daniel L. Cox
|
Yuduo Zhi and Daniel L. Cox
|
Neurodegenerative damage reduces firing coherence in a continuous
attractor model of grid cells
|
24 pages, 9 figures
| null |
10.1103/PhysRevE.104.044414
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Grid cells in the dorsolateral band of the medial entorhinal cortex (dMEC)
display strikingly regular periodic firing patterns on a lattice of positions
in 2-D space. This helps animals to encode relative spatial location without
reference to external cues. The dMEC is damaged in the early stages of
Alzheimer's Disease, which affects navigation ability of a disease victim,
reducing the synaptic density of neurons in the network. Within an established
2-dimensional continuous attractor neural network model of grid cell activity,
we introduce damage parameterized by radius and by the strength of the synaptic
output for neurons in the damaged region. The proportionality of the grid field
flow on the dMEC to the velocity of the model organism is maintained, but when
we examine the coherence of the grid cell firing field in the form of the
Fourier transform (Bragg peaks) of the grid lattice, we find that a wide range
of damage radius and strength induces an incoherent structure with only a
single central peak, adjacent to narrow bands of striped (two additional
peaks), which abut an orthorhombic pattern (four additional peaks), that abuts
the undamaged hexagonal region (six additional peaks). Within the damaged
region, grid cells show no Bragg peaks, and outside the damaged region the
central Bragg peak strength is largely unaffected. There is a re-entrant region
of normal grid firing for very large damage area. We anticipate that the
modified grid cell behavior can be observed in non-invasive fMRI imaging of the
dMEC.
|
[
{
"created": "Wed, 12 Aug 2020 22:38:33 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Apr 2021 00:45:00 GMT",
"version": "v2"
}
] |
2021-11-10
|
[
[
"Zhi",
"Yuduo",
""
],
[
"Cox",
"Daniel L.",
""
]
] |
Grid cells in the dorsolateral band of the medial entorhinal cortex (dMEC) display strikingly regular periodic firing patterns on a lattice of positions in 2-D space. This helps animals to encode relative spatial location without reference to external cues. The dMEC is damaged in the early stages of Alzheimer's Disease, which affects navigation ability of a disease victim, reducing the synaptic density of neurons in the network. Within an established 2-dimensional continuous attractor neural network model of grid cell activity, we introduce damage parameterized by radius and by the strength of the synaptic output for neurons in the damaged region. The proportionality of the grid field flow on the dMEC to the velocity of the model organism is maintained, but when we examine the coherence of the grid cell firing field in the form of the Fourier transform (Bragg peaks) of the grid lattice, we find that a wide range of damage radius and strength induces an incoherent structure with only a single central peak, adjacent to narrow bands of striped (two additional peaks), which abut an orthorhombic pattern (four additional peaks), that abuts the undamaged hexagonal region (six additional peaks). Within the damaged region, grid cells show no Bragg peaks, and outside the damaged region the central Bragg peak strength is largely unaffected. There is a re-entrant region of normal grid firing for very large damage area. We anticipate that the modified grid cell behavior can be observed in non-invasive fMRI imaging of the dMEC.
|
2105.15123
|
Onkar Sadekar
|
Onkar Sadekar, Mansi Budamagunta, G. J. Sreejith, Sachin Jain, and M.
S. Santhanam
|
An infectious diseases hazard map for India based on mobility and
transportation networks
|
25 pages, 8 figures, including supplementary material
|
Current Science 121, 1208 (2021)
|
10.18520/cs/v121/i9/1208-1215
| null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a risk measure and construct an infectious diseases hazard map for
India. Given an outbreak location, a hazard index is assigned to each city
using an effective distance that depends on inter-city mobilities instead of
geographical distance. We demonstrate its utility using an SIR model augmented
with air, rail, and road data between the top 446 cities. Simulations show
that the effective distance from the outbreak location reliably predicts the
time of arrival
of infection in other cities. The hazard index predictions compare well with
the observed spread of SARS-CoV-2. The hazard map can be useful in other
outbreaks also.
|
[
{
"created": "Mon, 24 May 2021 06:38:10 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Jun 2021 00:21:40 GMT",
"version": "v2"
},
{
"created": "Wed, 4 Aug 2021 07:50:55 GMT",
"version": "v3"
}
] |
2021-11-18
|
[
[
"Sadekar",
"Onkar",
""
],
[
"Budamagunta",
"Mansi",
""
],
[
"Sreejith",
"G. J.",
""
],
[
"Jain",
"Sachin",
""
],
[
"Santhanam",
"M. S.",
""
]
] |
We propose a risk measure and construct an infectious diseases hazard map for India. Given an outbreak location, a hazard index is assigned to each city using an effective distance that depends on inter-city mobilities instead of geographical distance. We demonstrate its utility using an SIR model augmented with air, rail, and road data between the top 446 cities. Simulations show that the effective distance from the outbreak location reliably predicts the time of arrival of infection in other cities. The hazard index predictions compare well with the observed spread of SARS-CoV-2. The hazard map can be useful in other outbreaks also.
|
1710.06274
|
Christopher Spalding Mr
|
Christopher Spalding, Charles R. Doering, Glenn R. Flierl
|
Resonant Activation of Population Extinctions
|
12 pages, 7 Figures, Accepted for publication in Physical Review E
| null |
10.1103/PhysRevE.96.042411
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the mechanisms governing population extinctions is of key
importance to many problems in ecology and evolution. Stochastic factors are
known to play a central role in extinction, but the interactions between a
population's demographic stochasticity and environmental noise remain poorly
understood. Here, we model environmental forcing as a stochastic fluctuation
between two states, one with a higher death rate than the other. We find that
in general there exists a rate of fluctuations that minimizes the mean time to
extinction, a phenomenon previously dubbed "resonant activation." We develop a
heuristic description of the phenomenon, together with a criterion for the
existence of resonant activation. Specifically, the minimum extinction time
arises as a result of the system approaching a scenario wherein the severity of
rare events is balanced by the time interval between them. We discuss our
findings within the context of more general forms of environmental noise, and
suggest potential applications to evolutionary models.
|
[
{
"created": "Tue, 17 Oct 2017 13:41:25 GMT",
"version": "v1"
}
] |
2017-11-22
|
[
[
"Spalding",
"Christopher",
""
],
[
"Doering",
"Charles R.",
""
],
[
"Flierl",
"Glenn R.",
""
]
] |
Understanding the mechanisms governing population extinctions is of key importance to many problems in ecology and evolution. Stochastic factors are known to play a central role in extinction, but the interactions between a population's demographic stochasticity and environmental noise remain poorly understood. Here, we model environmental forcing as a stochastic fluctuation between two states, one with a higher death rate than the other. We find that in general there exists a rate of fluctuations that minimizes the mean time to extinction, a phenomenon previously dubbed "resonant activation." We develop a heuristic description of the phenomenon, together with a criterion for the existence of resonant activation. Specifically, the minimum extinction time arises as a result of the system approaching a scenario wherein the severity of rare events is balanced by the time interval between them. We discuss our findings within the context of more general forms of environmental noise, and suggest potential applications to evolutionary models.
|
2307.15540
|
Shuhei Kitamura PhD
|
Shuhei Kitamura
|
Quantifying the Influence of Climate on Human Mind and Culture: Evidence
from Visual Art
| null | null | null | null |
q-bio.PE econ.GN q-fin.EC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
While connections between climate and the human mind and culture are widely
acknowledged, they are not thoroughly quantified. Analyzing 100,000 paintings
and data on 2,000 artists from the 13th to 21st centuries, the study reveals
that the lightness of the paintings exhibited an interesting U-shaped pattern
mirroring global temperature trends. There is a significant association between
the two, even after controlling for various factors. Event study analysis using
the artist-level data further reveals that high-temperature shocks resulted in
brighter paintings in later periods for artists who experienced them compared
to the control group. The effect is particularly pronounced in art genres that
rely on artists' imaginations, indicating a notable influence on artists'
minds. These findings underscore the enduring impact of climate on the human
mind and culture throughout history and highlight art as a valuable tool for
understanding people's minds and cultures.
|
[
{
"created": "Fri, 28 Jul 2023 13:11:52 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Apr 2024 03:11:45 GMT",
"version": "v2"
}
] |
2024-04-15
|
[
[
"Kitamura",
"Shuhei",
""
]
] |
While connections between climate and the human mind and culture are widely acknowledged, they are not thoroughly quantified. Analyzing 100,000 paintings and data on 2,000 artists from the 13th to 21st centuries, the study reveals that the lightness of the paintings exhibited an interesting U-shaped pattern mirroring global temperature trends. There is a significant association between the two, even after controlling for various factors. Event study analysis using the artist-level data further reveals that high-temperature shocks resulted in brighter paintings in later periods for artists who experienced them compared to the control group. The effect is particularly pronounced in art genres that rely on artists' imaginations, indicating a notable influence on artists' minds. These findings underscore the enduring impact of climate on the human mind and culture throughout history and highlight art as a valuable tool for understanding people's minds and cultures.
|
0806.0690
|
Vladimir Ivancevic
|
Vladimir Ivancevic, Eugene Aidman, Leong Yen and Darryn Reid
|
Phase Transitions, Chaos and Joint Action in the Life Space Foam
|
20 pages, no figures, elsart
| null | null | null |
q-bio.NC q-bio.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper extends our recently developed Life Space Foam (LSF) model of
motivated cognitive dynamics \cite{IA}. LSF uses adaptive path integrals to
generate Lewinian force--fields on smooth manifolds, in order to characterize
the dynamics of individual goal--directed action. According to explanatory
theories growing in acceptance in cognitive neuroscience, one of the key
properties of this dynamics, capable of linking it to microscopic-level
cortical neurodynamics, is its meta-stability and the resulting phase
transitions. Our extended LSF model incorporates the notion of phase
transitions and complements it with embedded geometrical chaos. To describe
this LSF phase transition, a general path--integral is used, along the
corresponding LSF topology change. As a result, our extended LSF model is able
to rigorously represent co-action by two or more actors in the common
LSF--manifold. The model yields substantial qualitative differences in
geometrical properties between bilateral and multi-lateral co-action due to
intrinsic chaotic coupling between $n$ actors when $n\geq 3$.
Keywords: cognitive dynamics, adaptive path integrals, phase transitions,
chaos, topology change, human joint action, function approximation
|
[
{
"created": "Wed, 4 Jun 2008 05:06:16 GMT",
"version": "v1"
}
] |
2008-06-05
|
[
[
"Ivancevic",
"Vladimir",
""
],
[
"Aidman",
"Eugene",
""
],
[
"Yen",
"Leong",
""
],
[
"Reid",
"Darryn",
""
]
] |
This paper extends our recently developed Life Space Foam (LSF) model of motivated cognitive dynamics \cite{IA}. LSF uses adaptive path integrals to generate Lewinian force--fields on smooth manifolds, in order to characterize the dynamics of individual goal--directed action. According to explanatory theories growing in acceptance in cognitive neuroscience, one of the key properties of this dynamics, capable of linking it to microscopic-level cortical neurodynamics, is its meta-stability and the resulting phase transitions. Our extended LSF model incorporates the notion of phase transitions and complements it with embedded geometrical chaos. To describe this LSF phase transition, a general path--integral is used, along the corresponding LSF topology change. As a result, our extended LSF model is able to rigorously represent co-action by two or more actors in the common LSF--manifold. The model yields substantial qualitative differences in geometrical properties between bilateral and multi-lateral co-action due to intrinsic chaotic coupling between $n$ actors when $n\geq 3$. Keywords: cognitive dynamics, adaptive path integrals, phase transitions, chaos, topology change, human joint action, function approximation
|
2112.08961
|
Holger Maier
|
Dominik Thalmeier, Gregor Miller, Elida Schneltzer, Anja Hurt, Martin
Hrab\v{e} de Angelis, Lore Becker, Christian L. M\"uller, Holger Maier
|
Objective hearing threshold identification from auditory brainstem
response measurements using supervised and self-supervised approaches
|
41 pages, 17 figures
|
BMC Neurosci 23, 81 (2022)
|
10.1186/s12868-022-00758-0
| null |
q-bio.NC cs.LG q-bio.QM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Hearing loss is a major health problem and psychological burden in humans.
Mouse models offer a possibility to elucidate genes involved in the underlying
developmental and pathophysiological mechanisms of hearing impairment. To this
end, large-scale mouse phenotyping programs include auditory phenotyping of
single-gene knockout mouse lines. Using the auditory brainstem response (ABR)
procedure, the German Mouse Clinic and similar facilities worldwide have
produced large, uniform data sets of averaged ABR raw data of mutant and
wildtype mice. In the course of standard ABR analysis, hearing thresholds are
assessed visually by trained staff from a series of signal curves of increasing
sound pressure level. This is time-consuming and prone to be biased by the
reader as well as the graphical display quality and scale. In an attempt to
reduce workload and improve quality and reproducibility, we developed and
compared two methods for automated hearing threshold identification from
averaged ABR raw data: a supervised approach involving two combined neural
networks trained on human-generated labels and a self-supervised approach,
which exploits the signal power spectrum and combines random forest sound level
estimation with a piece-wise curve fitting algorithm for threshold finding. We
show that both models work well, outperform human threshold detection, and are
suitable for fast, reliable, and unbiased hearing threshold detection and
quality control. In a high-throughput mouse phenotyping environment, both
methods perform well as part of an automated end-to-end screening pipeline to
detect candidate genes for hearing involvement. Code for both models as well as
data used for this work are freely available.
|
[
{
"created": "Thu, 16 Dec 2021 15:24:31 GMT",
"version": "v1"
}
] |
2023-01-12
|
[
[
"Thalmeier",
"Dominik",
""
],
[
"Miller",
"Gregor",
""
],
[
"Schneltzer",
"Elida",
""
],
[
"Hurt",
"Anja",
""
],
[
"de Angelis",
"Martin Hrabě",
""
],
[
"Becker",
"Lore",
""
],
[
"Müller",
"Christian L.",
""
],
[
"Maier",
"Holger",
""
]
] |
Hearing loss is a major health problem and psychological burden in humans. Mouse models offer a possibility to elucidate genes involved in the underlying developmental and pathophysiological mechanisms of hearing impairment. To this end, large-scale mouse phenotyping programs include auditory phenotyping of single-gene knockout mouse lines. Using the auditory brainstem response (ABR) procedure, the German Mouse Clinic and similar facilities worldwide have produced large, uniform data sets of averaged ABR raw data of mutant and wildtype mice. In the course of standard ABR analysis, hearing thresholds are assessed visually by trained staff from a series of signal curves of increasing sound pressure level. This is time-consuming and prone to be biased by the reader as well as the graphical display quality and scale. In an attempt to reduce workload and improve quality and reproducibility, we developed and compared two methods for automated hearing threshold identification from averaged ABR raw data: a supervised approach involving two combined neural networks trained on human-generated labels and a self-supervised approach, which exploits the signal power spectrum and combines random forest sound level estimation with a piece-wise curve fitting algorithm for threshold finding. We show that both models work well, outperform human threshold detection, and are suitable for fast, reliable, and unbiased hearing threshold detection and quality control. In a high-throughput mouse phenotyping environment, both methods perform well as part of an automated end-to-end screening pipeline to detect candidate genes for hearing involvement. Code for both models as well as data used for this work are freely available.
|
1703.05406
|
Andrew Sornborger
|
Yuxiu Shao, Andrew T. Sornborger, Louis Tao
|
A Pulse-Gated, Predictive Neural Circuit
|
This invited paper was presented at the 50th Asilomar Conference on
Signals, Systems and Computers
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent evidence suggests that neural information is encoded in packets and
may be flexibly routed from region to region. We have hypothesized that neural
circuits are split into sub-circuits where one sub-circuit controls information
propagation via pulse gating and a second sub-circuit processes graded
information under the control of the first sub-circuit. Using an explicit
pulse-gating mechanism, we have been able to show how information may be
processed by such pulse-controlled circuits and also how, by allowing the
information processing circuit to interact with the gating circuit, decisions
can be made. Here, we demonstrate how Hebbian plasticity may be used to
supplement our pulse-gated information processing framework by implementing a
machine learning algorithm. The resulting neural circuit has a number of
structures that are similar to biological neural systems, including a layered
structure and information propagation driven by oscillatory gating with a
complex frequency spectrum.
|
[
{
"created": "Wed, 15 Mar 2017 22:25:29 GMT",
"version": "v1"
}
] |
2017-03-17
|
[
[
"Shao",
"Yuxiu",
""
],
[
"Sornborger",
"Andrew T.",
""
],
[
"Tao",
"Louis",
""
]
] |
Recent evidence suggests that neural information is encoded in packets and may be flexibly routed from region to region. We have hypothesized that neural circuits are split into sub-circuits where one sub-circuit controls information propagation via pulse gating and a second sub-circuit processes graded information under the control of the first sub-circuit. Using an explicit pulse-gating mechanism, we have been able to show how information may be processed by such pulse-controlled circuits and also how, by allowing the information processing circuit to interact with the gating circuit, decisions can be made. Here, we demonstrate how Hebbian plasticity may be used to supplement our pulse-gated information processing framework by implementing a machine learning algorithm. The resulting neural circuit has a number of structures that are similar to biological neural systems, including a layered structure and information propagation driven by oscillatory gating with a complex frequency spectrum.
|
1907.09841
|
Chun Tung Chou
|
Chun Tung Chou
|
Using biochemical circuits to approximately compute log-likelihood ratio
for detecting persistent signals
| null |
IEEE Access, 2021
|
10.1109/ACCESS.2021.3113377
| null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Given that biochemical circuits can process information by using analog
computation, a question is: What can biochemical circuits compute? This paper
considers the problem of using biochemical circuits to distinguish persistent
signals from transient ones. We define a statistical detection problem over a
reaction pathway consisting of three species: an inducer, a transcription
factor (TF) and a gene promoter, where the inducer can activate the TF and an
active TF can bind to the gene promoter. We model the pathway using the
chemical master equation, so the count of bound promoters over time is a
stochastic signal. We consider the problem of using the continuous-time
stochastic signal of the counts of bound promoters to infer whether the inducer
signal is persistent or not. We use statistical detection theory to derive the
solution to this detection problem, which is to compute the log-likelihood
ratio of observing a persistent signal to a transient one. We then show, using
time-scale separation and other assumptions, that this log-likelihood ratio can
be approximately computed by using the continuous-time signals of the number of
active TF molecules and the number of bound promoters when the input is
persistent. Finally, we show that coherent feedforward gene circuits can be
used to approximately compute this log-likelihood ratio when the inducer signal
is persistent.
|
[
{
"created": "Tue, 23 Jul 2019 12:34:49 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Sep 2021 07:29:41 GMT",
"version": "v10"
},
{
"created": "Fri, 26 Jul 2019 04:06:52 GMT",
"version": "v2"
},
{
"created": "Thu, 19 Dec 2019 01:13:53 GMT",
"version": "v3"
},
{
"created": "Sat, 21 Mar 2020 22:37:24 GMT",
"version": "v4"
},
{
"created": "Fri, 10 Jul 2020 05:44:29 GMT",
"version": "v5"
},
{
"created": "Wed, 22 Jul 2020 11:41:01 GMT",
"version": "v6"
},
{
"created": "Tue, 4 Aug 2020 03:58:15 GMT",
"version": "v7"
},
{
"created": "Fri, 13 Nov 2020 02:21:05 GMT",
"version": "v8"
},
{
"created": "Mon, 31 May 2021 13:32:58 GMT",
"version": "v9"
}
] |
2021-09-20
|
[
[
"Chou",
"Chun Tung",
""
]
] |
Given that biochemical circuits can process information by using analog computation, a question is: What can biochemical circuits compute? This paper considers the problem of using biochemical circuits to distinguish persistent signals from transient ones. We define a statistical detection problem over a reaction pathway consisting of three species: an inducer, a transcription factor (TF) and a gene promoter, where the inducer can activate the TF and an active TF can bind to the gene promoter. We model the pathway using the chemical master equation, so the count of bound promoters over time is a stochastic signal. We consider the problem of using the continuous-time stochastic signal of the counts of bound promoters to infer whether the inducer signal is persistent or not. We use statistical detection theory to derive the solution to this detection problem, which is to compute the log-likelihood ratio of observing a persistent signal to a transient one. We then show, using time-scale separation and other assumptions, that this log-likelihood ratio can be approximately computed by using the continuous-time signals of the number of active TF molecules and the number of bound promoters when the input is persistent. Finally, we show that coherent feedforward gene circuits can be used to approximately compute this log-likelihood ratio when the inducer signal is persistent.
|
2104.05645
|
Jaime Ashander
|
Jaime Ashander, Kailin Kroetz, Rebecca S Epanchin-Niell, Nicholas B.
D. Phelps, Robert G Haight, Laura E. Dee
|
Guiding large-scale management of invasive species using network metrics
|
40 pages, 8 figures, 7 tables
| null |
10.1038/s41893-022-00913-9
| null |
q-bio.QM physics.soc-ph q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
Complex socio-environmental interdependencies drive biological invasions,
causing damages across large spatial scales. For widespread invasions,
targeting of management activities based on optimization approaches may fail
due to computational or data constraints. Here we evaluate an alternative
approach that embraces complexity by representing the invasion as a network and
using network structure to inform management locations. We compare optimal
versus network-guided invasive species management at a landscape scale,
considering siting of boat decontamination stations targeting 1.6 million
boater movements among 9,182 lakes in Minnesota, USA. Studying performance for
58 counties, we find that when full information is known on invasion status and
boater movements, the best-performing network-guided metric achieves a median
and lower quartile performance of 100% of optimal. We also find that
performance remains relatively high using different network metrics or with
less information (median above 80% and lower quartile above 60% of optimal for
most metrics), but is more variable, particularly at the lower quartile.
Additionally, performance is generally stable across counties with varying lake
counts, suggesting viability for large-scale invasion management.
|
[
{
"created": "Mon, 12 Apr 2021 17:10:57 GMT",
"version": "v1"
},
{
"created": "Tue, 31 May 2022 17:02:59 GMT",
"version": "v2"
}
] |
2022-06-07
|
[
[
"Ashander",
"Jaime",
""
],
[
"Kroetz",
"Kailin",
""
],
[
"Epanchin-Niell",
"Rebecca S",
""
],
[
"Phelps",
"Nicholas B. D.",
""
],
[
"Haight",
"Robert G",
""
],
[
"Dee",
"Laura E.",
""
]
] |
Complex socio-environmental interdependencies drive biological invasions, causing damages across large spatial scales. For widespread invasions, targeting of management activities based on optimization approaches may fail due to computational or data constraints. Here we evaluate an alternative approach that embraces complexity by representing the invasion as a network and using network structure to inform management locations. We compare optimal versus network-guided invasive species management at a landscape scale, considering siting of boat decontamination stations targeting 1.6 million boater movements among 9,182 lakes in Minnesota, USA. Studying performance for 58 counties, we find that when full information is known on invasion status and boater movements, the best-performing network-guided metric achieves a median and lower quartile performance of 100% of optimal. We also find that performance remains relatively high using different network metrics or with less information (median above 80% and lower quartile above 60% of optimal for most metrics), but is more variable, particularly at the lower quartile. Additionally, performance is generally stable across counties with varying lake counts, suggesting viability for large-scale invasion management.
|
1905.00734
|
Sheryl Chang
|
Sheryl L. Chang, Mahendra Piraveenan, Mikhail Prokopenko
|
The effects of imitation dynamics on vaccination behaviours in
SIR-network model
|
23 pages, 11 figures
|
International Journal of Environmental Research and Public Health,
16(14), 2477, 2019
|
10.3390/ijerph16142477
| null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a series of SIR-network models, extended with a game-theoretic
treatment of imitation dynamics which result from regular population mobility
across residential and work areas and the ensuing interactions. Each considered
SIR-network model captures a class of vaccination behaviours influenced by
epidemic characteristics, interaction topology, and imitation dynamics. Our
focus is the eventual vaccination coverage, produced under voluntary
vaccination schemes, in response to these varying factors. Using the next
generation matrix method, we analytically derive and compare expressions for
the basic reproduction number $R_0$ for the proposed SIR-network models.
Furthermore, we simulate the epidemic dynamics over time for the considered
models, and show that if individuals are sufficiently responsive towards the
changes in the disease prevalence, then the more expansive travelling patterns
encourage convergence to the endemic, mixed equilibria. Conversely, if
individuals are insensitive to changes in the disease prevalence, we find that
they tend to remain unvaccinated in all the studied models. Our results concur
with earlier studies in showing that residents from highly connected
residential areas are more likely to get vaccinated. We also show that the
existence of the individuals committed to receiving vaccination reduces $R_0$
and delays the disease prevalence, and thus is essential to containing
epidemics.
|
[
{
"created": "Wed, 1 May 2019 06:00:16 GMT",
"version": "v1"
},
{
"created": "Wed, 8 May 2019 04:02:13 GMT",
"version": "v2"
},
{
"created": "Mon, 13 May 2019 02:18:24 GMT",
"version": "v3"
}
] |
2019-11-14
|
[
[
"Chang",
"Sheryl L.",
""
],
[
"Piraveenan",
"Mahendra",
""
],
[
"Prokopenko",
"Mikhail",
""
]
] |
We present a series of SIR-network models, extended with a game-theoretic treatment of imitation dynamics which result from regular population mobility across residential and work areas and the ensuing interactions. Each considered SIR-network model captures a class of vaccination behaviours influenced by epidemic characteristics, interaction topology, and imitation dynamics. Our focus is the eventual vaccination coverage, produced under voluntary vaccination schemes, in response to these varying factors. Using the next generation matrix method, we analytically derive and compare expressions for the basic reproduction number $R_0$ for the proposed SIR-network models. Furthermore, we simulate the epidemic dynamics over time for the considered models, and show that if individuals are sufficiently responsive towards the changes in the disease prevalence, then the more expansive travelling patterns encourage convergence to the endemic, mixed equilibria. Conversely, if individuals are insensitive to changes in the disease prevalence, we find that they tend to remain unvaccinated in all the studied models. Our results concur with earlier studies in showing that residents from highly connected residential areas are more likely to get vaccinated. We also show that the existence of the individuals committed to receiving vaccination reduces $R_0$ and delays the disease prevalence, and thus is essential to containing epidemics.
|
2303.04624
|
Yutaka Nishiyama
|
Yutaka Nishiyama
|
A Dynamic Interference Model for Benham's Top
|
10 pages, 12 figures
| null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
It remains a mystery why colors emerge from the black-and-white pattern of a
Benham's top. This article is an extension of one I published in 1979. Here I
discuss in greater depth the reasons for manifestations of subjective color,
with a focus on my own hypothesis, namely, that two successive stimuli are
shifted in phase as they pass through the human visual system, causing dynamic
interference that results in the emergence of a specific color. I hope this
hypothesis will significantly contribute to the nearly 200-year history of
attempts to elucidate subjective color.
|
[
{
"created": "Wed, 25 Jan 2023 13:24:19 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Mar 2023 02:16:22 GMT",
"version": "v2"
},
{
"created": "Sat, 18 Mar 2023 08:54:59 GMT",
"version": "v3"
}
] |
2023-03-21
|
[
[
"Nishiyama",
"Yutaka",
""
]
] |
It remains a mystery why colors emerge from the black-and-white pattern of a Benham's top. This article is an extension of one I published in 1979. Here I discuss in greater depth the reasons for manifestations of subjective color, with a focus on my own hypothesis, namely, that two successive stimuli are shifted in phase as they pass through the human visual system, causing dynamic interference that results in the emergence of a specific color. I hope this hypothesis will significantly contribute to the nearly 200-year history of attempts to elucidate subjective color.
|
1807.11036
|
Sandro Sozzo
|
Diederik Aerts, Massimiliano Sassoli de Bianchi, Sandro Sozzo, Tomas
Veloz
|
Modeling Human Decision-making: An Overview of the Brussels Quantum
Approach
|
25 pages, Latex, 2 figures, published in Foundations of Science
|
Foundations of Science 26, pp. 27-54 (2021)
|
10.1007/s10699-018-9559-x
| null |
q-bio.NC quant-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present the fundamentals of the quantum theoretical approach we have
developed in the last decade to model cognitive phenomena that resisted
modeling by means of classical logical and probabilistic structures, like
Boolean, Kolmogorovian and, more generally, set theoretical structures. We
firstly sketch the operational-realistic foundations of conceptual entities,
i.e. concepts, conceptual combinations, propositions, decision-making entities,
etc. Then, we briefly illustrate the application of the quantum formalism in
Hilbert space to represent combinations of natural concepts, discussing its
success in modeling a wide range of empirical data on concepts and their
conjunction, disjunction and negation. Next, we naturally extend the quantum
theoretical approach to model some long-standing `fallacies of human
reasoning', namely, the `conjunction fallacy' and the `disjunction effect'.
Finally, we put forward an explanatory hypothesis according to which human
reasoning is a defined superposition of `emergent reasoning' and `logical
reasoning', where the former generally prevails over the latter. The quantum
theoretical approach explains human fallacies as the consequence of genuine
quantum structures in human reasoning, i.e. `contextuality', `emergence',
`entanglement', `interference' and `superposition'. As such, it is an
alternative to the Kahneman-Tversky research programme, which instead aims to
explain human
fallacies in terms of `individual heuristics and biases'.
|
[
{
"created": "Sun, 29 Jul 2018 10:21:59 GMT",
"version": "v1"
}
] |
2023-02-27
|
[
[
"Aerts",
"Diederik",
""
],
[
"de Bianchi",
"Massimiliano Sassoli",
""
],
[
"Sozzo",
"Sandro",
""
],
[
"Veloz",
"Tomas",
""
]
] |
We present the fundamentals of the quantum theoretical approach we have developed in the last decade to model cognitive phenomena that resisted modeling by means of classical logical and probabilistic structures, like Boolean, Kolmogorovian and, more generally, set theoretical structures. We firstly sketch the operational-realistic foundations of conceptual entities, i.e. concepts, conceptual combinations, propositions, decision-making entities, etc. Then, we briefly illustrate the application of the quantum formalism in Hilbert space to represent combinations of natural concepts, discussing its success in modeling a wide range of empirical data on concepts and their conjunction, disjunction and negation. Next, we naturally extend the quantum theoretical approach to model some long-standing `fallacies of human reasoning', namely, the `conjunction fallacy' and the `disjunction effect'. Finally, we put forward an explanatory hypothesis according to which human reasoning is a defined superposition of `emergent reasoning' and `logical reasoning', where the former generally prevails over the latter. The quantum theoretical approach explains human fallacies as the consequence of genuine quantum structures in human reasoning, i.e. `contextuality', `emergence', `entanglement', `interference' and `superposition'. As such, it is an alternative to the Kahneman-Tversky research programme, which instead aims to explain human fallacies in terms of `individual heuristics and biases'.
|
2201.05431
|
Sebastian Lotter
|
Sebastian Lotter, Maximilian Sch\"afer, Robert Schober
|
Molecular Noise In Synaptic Communication
|
16 pages, 13 figures, 2 tables. Published in IEEE Transactions on
NanoBioscience. This article is the extended journal version of the
conference paper arXiv:2109.14986
| null |
10.1109/TNB.2022.3183692
| null |
q-bio.SC cs.ET
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In synaptic molecular communication (MC), the activation of postsynaptic
receptors by neurotransmitters (NTs) is governed by a stochastic
reaction-diffusion process. This randomness of synaptic MC contributes to the
randomness of the electrochemical downstream signal in the postsynaptic cell,
called postsynaptic membrane potential (PSP). Since the randomness of the PSP
is relevant for neural computation and learning, characterizing the statistics
of the PSP is critical. However, the statistical characterization of the
synaptic reaction-diffusion process is difficult because the reversible
bi-molecular reaction of NTs with receptors renders the system nonlinear.
Consequently, there is currently no model available which characterizes the
impact of the statistics of postsynaptic receptor activation on the PSP. In
this work, we propose a novel statistical model for the synaptic
reaction-diffusion process in terms of the chemical master equation (CME). We
further propose a novel numerical method that allows the CME to be computed
efficiently, and we use this method to characterize the statistics of the PSP.
Finally, we present results from stochastic particle-based computer simulations
which validate the proposed models. We show that the biophysical parameters
governing synaptic transmission shape the autocovariance of the receptor
activation and, ultimately, the statistics of the PSP. Our results suggest that
the processing of the synaptic signal by the postsynaptic cell effectively
mitigates synaptic noise while the statistical characteristics of the synaptic
signal are preserved. The results presented in this paper contribute to a
better understanding of the impact of the randomness of synaptic signal
transmission on neuronal information processing.
|
[
{
"created": "Fri, 14 Jan 2022 13:04:11 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Jun 2022 10:02:13 GMT",
"version": "v2"
},
{
"created": "Mon, 16 Jan 2023 13:41:13 GMT",
"version": "v3"
}
] |
2023-01-18
|
[
[
"Lotter",
"Sebastian",
""
],
[
"Schäfer",
"Maximilian",
""
],
[
"Schober",
"Robert",
""
]
] |
In synaptic molecular communication (MC), the activation of postsynaptic receptors by neurotransmitters (NTs) is governed by a stochastic reaction-diffusion process. This randomness of synaptic MC contributes to the randomness of the electrochemical downstream signal in the postsynaptic cell, called postsynaptic membrane potential (PSP). Since the randomness of the PSP is relevant for neural computation and learning, characterizing the statistics of the PSP is critical. However, the statistical characterization of the synaptic reaction-diffusion process is difficult because the reversible bi-molecular reaction of NTs with receptors renders the system nonlinear. Consequently, there is currently no model available which characterizes the impact of the statistics of postsynaptic receptor activation on the PSP. In this work, we propose a novel statistical model for the synaptic reaction-diffusion process in terms of the chemical master equation (CME). We further propose a novel numerical method which allows us to compute the CME efficiently, and we use this method to characterize the statistics of the PSP. Finally, we present results from stochastic particle-based computer simulations which validate the proposed models. We show that the biophysical parameters governing synaptic transmission shape the autocovariance of the receptor activation and, ultimately, the statistics of the PSP. Our results suggest that the processing of the synaptic signal by the postsynaptic cell effectively mitigates synaptic noise while the statistical characteristics of the synaptic signal are preserved. The results presented in this paper contribute to a better understanding of the impact of the randomness of synaptic signal transmission on neuronal information processing.
|
0708.4065
|
Manoj Gopalakrishnan
|
Shivam Ghosh (St.Stephens College, Delhi), Manoj Gopalakrishnan (HRI,
Allahabad), Kimberly Forsten-Williams (Virginia Tech)
|
Self-consistent theory of reversible ligand binding to a spherical cell
|
23 pages with 4 figures
|
Phys. Biol. 4 344-354 (2007)
|
10.1088/1478-3975/4/4/010
| null |
q-bio.SC cond-mat.stat-mech q-bio.QM
| null |
In this article, we study the kinetics of reversible ligand binding to
receptors on a spherical cell surface using a self-consistent stochastic
theory. Binding, dissociation, diffusion and rebinding of ligands are
incorporated into the theory in a systematic manner. We derive explicitly the
time evolution of the ligand-bound receptor fraction p(t) in various regimes.
Contrary to the commonly accepted view, we find that the well-known
Berg-Purcell scaling for the association rate is modified as a function of
time. Specifically, the effective on-rate changes non-monotonically as a
function of time and equals the intrinsic rate at very early as well as late
times, while being approximately equal to the Berg-Purcell value at
intermediate times. The effective dissociation rate, as it appears in the
binding curve or measured in a dissociation experiment, is strongly modified by
rebinding events and assumes the Berg-Purcell value except at very late times,
where the decay is algebraic and not exponential. In equilibrium, the ligand
concentration everywhere in the solution is the same and equals its spatial
mean, thus ensuring that there is no depletion in the vicinity of the cell.
Implications of our results for binding experiments and numerical simulations
of ligand-receptor systems are also discussed.
|
[
{
"created": "Thu, 30 Aug 2007 03:15:42 GMT",
"version": "v1"
}
] |
2009-11-13
|
[
[
"Ghosh",
"Shivam",
"",
"St.Stephens College, Delhi"
],
[
"Gopalakrishnan",
"Manoj",
"",
"HRI,\n Allahabad"
],
[
"Forsten-Williams",
"Kimberly",
"",
"Virginia Tech"
]
] |
In this article, we study the kinetics of reversible ligand binding to receptors on a spherical cell surface using a self-consistent stochastic theory. Binding, dissociation, diffusion and rebinding of ligands are incorporated into the theory in a systematic manner. We derive explicitly the time evolution of the ligand-bound receptor fraction p(t) in various regimes. Contrary to the commonly accepted view, we find that the well-known Berg-Purcell scaling for the association rate is modified as a function of time. Specifically, the effective on-rate changes non-monotonically as a function of time and equals the intrinsic rate at very early as well as late times, while being approximately equal to the Berg-Purcell value at intermediate times. The effective dissociation rate, as it appears in the binding curve or measured in a dissociation experiment, is strongly modified by rebinding events and assumes the Berg-Purcell value except at very late times, where the decay is algebraic and not exponential. In equilibrium, the ligand concentration everywhere in the solution is the same and equals its spatial mean, thus ensuring that there is no depletion in the vicinity of the cell. Implications of our results for binding experiments and numerical simulations of ligand-receptor systems are also discussed.
|
1408.5629
|
Nathan Baker
|
Huan Lei, Xiu Yang, Bin Zheng, Guang Lin, Nathan A. Baker
|
Quantifying the influence of conformational uncertainty in biomolecular
solvation
|
Accepted by Multiscale Modeling & Simulation
|
Multiscale.Model.Simul 13 (2015) 1327-1353
|
10.1137/140981587
| null |
q-bio.BM stat.AP
|
http://creativecommons.org/licenses/by/4.0/
|
Biomolecules exhibit conformational fluctuations near equilibrium states,
inducing uncertainty in various biological properties in a dynamic way. We have
developed a general method to quantify the uncertainty of target properties
induced by conformational fluctuations. Using a generalized polynomial chaos
(gPC) expansion, we construct a surrogate model of the target property with
respect to varying conformational states. We also propose a method to increase
the sparsity of the gPC expansion by defining a set of conformational "active
space" random variables. With the increased sparsity, we employ the compressive
sensing method to accurately construct the surrogate model. We demonstrate the
performance of the surrogate model by evaluating fluctuation-induced
uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor
protein system and show that the new approach offers more accurate statistical
information than standard Monte Carlo approaches. Furthermore, the constructed
surrogate model also enables us to directly evaluate the target property under
various conformational states, yielding a more accurate response surface than
standard sparse grid collocation methods. In particular, the new method
provides higher accuracy in high-dimensional systems, such as biomolecules,
where sparse grid performance is limited by the accuracy of the computed
quantity of interest. Our new framework is generalizable and can be used to
investigate the uncertainty of a wide variety of target properties in
biomolecular systems.
|
[
{
"created": "Sun, 24 Aug 2014 19:40:55 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Aug 2015 13:39:09 GMT",
"version": "v2"
}
] |
2018-10-02
|
[
[
"Lei",
"Huan",
""
],
[
"Yang",
"Xiu",
""
],
[
"Zheng",
"Bin",
""
],
[
"Lin",
"Guang",
""
],
[
"Baker",
"Nathan A.",
""
]
] |
Biomolecules exhibit conformational fluctuations near equilibrium states, inducing uncertainty in various biological properties in a dynamic way. We have developed a general method to quantify the uncertainty of target properties induced by conformational fluctuations. Using a generalized polynomial chaos (gPC) expansion, we construct a surrogate model of the target property with respect to varying conformational states. We also propose a method to increase the sparsity of the gPC expansion by defining a set of conformational "active space" random variables. With the increased sparsity, we employ the compressive sensing method to accurately construct the surrogate model. We demonstrate the performance of the surrogate model by evaluating fluctuation-induced uncertainty in solvent-accessible surface area for the bovine trypsin inhibitor protein system and show that the new approach offers more accurate statistical information than standard Monte Carlo approaches. Furthermore, the constructed surrogate model also enables us to directly evaluate the target property under various conformational states, yielding a more accurate response surface than standard sparse grid collocation methods. In particular, the new method provides higher accuracy in high-dimensional systems, such as biomolecules, where sparse grid performance is limited by the accuracy of the computed quantity of interest. Our new framework is generalizable and can be used to investigate the uncertainty of a wide variety of target properties in biomolecular systems.
|
q-bio/0602014
|
Lucilla de Arcangelis
|
Lucilla de Arcangelis, Carla Perrone-Capano, Hans J. Herrmann
|
Self-Organized Criticality model for Brain Plasticity
|
4 pages, 3 figures
| null |
10.1103/PhysRevLett.96.028107
| null |
q-bio.NC
| null |
Networks of living neurons exhibit an avalanche mode of activity,
experimentally found in organotypic cultures. Here we present a model based on
self-organized criticality and taking into account brain plasticity, which is
able to reproduce the spectrum of electroencephalograms (EEG). The model
consists of an electrical network with threshold firing and activity-dependent
synapse strengths. The system exhibits a power-law distributed avalanche
activity. The analysis of the power spectra of the electrical signal
reproduces very robustly the power law behaviour with the exponent 0.8,
experimentally measured in EEG spectra. The same value of the exponent is found
on small-world lattices and for leaky neurons, indicating that universality
holds for a wide class of brain models.
|
[
{
"created": "Sun, 12 Feb 2006 18:40:20 GMT",
"version": "v1"
}
] |
2009-11-13
|
[
[
"de Arcangelis",
"Lucilla",
""
],
[
"Perrone-Capano",
"Carla",
""
],
[
"Herrmann",
"Hans J.",
""
]
] |
Networks of living neurons exhibit an avalanche mode of activity, experimentally found in organotypic cultures. Here we present a model based on self-organized criticality and taking into account brain plasticity, which is able to reproduce the spectrum of electroencephalograms (EEG). The model consists of an electrical network with threshold firing and activity-dependent synapse strengths. The system exhibits a power-law distributed avalanche activity. The analysis of the power spectra of the electrical signal reproduces very robustly the power law behaviour with the exponent 0.8, experimentally measured in EEG spectra. The same value of the exponent is found on small-world lattices and for leaky neurons, indicating that universality holds for a wide class of brain models.
|
1702.03492
|
Xaq Pitkow
|
Xaq Pitkow and Dora Angelaki
|
How the brain might work: statistics flowing in redundant population
codes
|
11 pages, 3 figures, contribution related to workshop called "How the
brain works" at the University of Copenhagen, 14-16 Sept 2016
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is widely believed that the brain performs approximate probabilistic
inference to estimate causal variables in the world from ambiguous sensory
data. To understand these computations, we need to analyze how information is
represented and transformed by the actions of nonlinear recurrent neural
networks. We propose that these probabilistic computations function by a
message-passing algorithm operating at the level of redundant neural
populations. To explain this framework, we review its underlying concepts,
including graphical models, sufficient statistics, and message-passing, and
then describe how these concepts could be implemented by recurrently connected
probabilistic population codes. The relevant information flow in these networks
will be most interpretable at the population level, particularly for redundant
neural codes. We therefore outline a general approach to identify the essential
features of a neural message-passing algorithm. Finally, we argue that to
reveal the most important aspects of these neural computations, we must study
large-scale activity patterns during moderately complex, naturalistic
behaviors.
|
[
{
"created": "Sun, 12 Feb 2017 05:32:29 GMT",
"version": "v1"
},
{
"created": "Mon, 15 May 2017 04:12:09 GMT",
"version": "v2"
}
] |
2017-05-16
|
[
[
"Pitkow",
"Xaq",
""
],
[
"Angelaki",
"Dora",
""
]
] |
It is widely believed that the brain performs approximate probabilistic inference to estimate causal variables in the world from ambiguous sensory data. To understand these computations, we need to analyze how information is represented and transformed by the actions of nonlinear recurrent neural networks. We propose that these probabilistic computations function by a message-passing algorithm operating at the level of redundant neural populations. To explain this framework, we review its underlying concepts, including graphical models, sufficient statistics, and message-passing, and then describe how these concepts could be implemented by recurrently connected probabilistic population codes. The relevant information flow in these networks will be most interpretable at the population level, particularly for redundant neural codes. We therefore outline a general approach to identify the essential features of a neural message-passing algorithm. Finally, we argue that to reveal the most important aspects of these neural computations, we must study large-scale activity patterns during moderately complex, naturalistic behaviors.
|
1703.10058
|
Srividya Iyer-Biswas
|
Farshid Jafarpour, Michael Vennettilli, and Srividya Iyer-Biswas
|
Biological timekeeping in the presence of stochasticity
| null | null | null | null |
q-bio.MN cond-mat.soft cond-mat.stat-mech q-bio.CB q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Causal ordering of key events in the cell cycle is essential for proper
functioning of an organism. Yet, it remains a mystery how a specific temporal
program of events is maintained despite ineluctable stochasticity in the
biochemical dynamics which dictate timing of cellular events. We propose that
if a change of cell fate is triggered by the {\em time-integral} of the
underlying stochastic biochemical signal, rather than the original signal, then
a dramatic improvement in temporal specificity results. Exact analytical
results for stochastic models of hourglass-timers and pendulum-clocks, two
important paradigms for biological timekeeping, elucidate how temporal
specificity is achieved through time-integration. En route, we introduce a
natural representation for time-integrals of stochastic processes, provide an
analytical prescription for evaluating corresponding first-passage-time
distributions, and uncover a mechanism by which a population of identical cells
can spontaneously bifurcate into subpopulations of early and late responders,
depending on hierarchy of timescales in the dynamics. Moreover, our approach
reveals how time-integration of stochastic signals may be realized
biochemically, through a simple chemical reaction scheme.
|
[
{
"created": "Tue, 28 Mar 2017 15:55:56 GMT",
"version": "v1"
}
] |
2017-03-30
|
[
[
"Jafarpour",
"Farshid",
""
],
[
"Vennettilli",
"Michael",
""
],
[
"Iyer-Biswas",
"Srividya",
""
]
] |
Causal ordering of key events in the cell cycle is essential for proper functioning of an organism. Yet, it remains a mystery how a specific temporal program of events is maintained despite ineluctable stochasticity in the biochemical dynamics which dictate timing of cellular events. We propose that if a change of cell fate is triggered by the {\em time-integral} of the underlying stochastic biochemical signal, rather than the original signal, then a dramatic improvement in temporal specificity results. Exact analytical results for stochastic models of hourglass-timers and pendulum-clocks, two important paradigms for biological timekeeping, elucidate how temporal specificity is achieved through time-integration. En route, we introduce a natural representation for time-integrals of stochastic processes, provide an analytical prescription for evaluating corresponding first-passage-time distributions, and uncover a mechanism by which a population of identical cells can spontaneously bifurcate into subpopulations of early and late responders, depending on hierarchy of timescales in the dynamics. Moreover, our approach reveals how time-integration of stochastic signals may be realized biochemically, through a simple chemical reaction scheme.
|
1511.00182
|
Andrew Elliott
|
Andrew Elliott and Elizabeth Leicht and Alan Whitmore and Gesine
Reinert and Felix Reed-Tsochas
|
A Nonparametric Significance Test for Sampled Networks
|
Bioinformatics 2017
|
Bioinformatics 2017
|
10.1093/bioinformatics/btx419
| null |
q-bio.QM physics.data-an q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Our work is motivated by an interest in constructing a protein-protein
interaction network that captures key features associated with Parkinson's
disease. While there is an abundance of subnetwork construction methods
available, it is often far from obvious which subnetwork is the most suitable
starting point for further investigation. We provide a method to assess whether
a subnetwork constructed from a seed list (a list of nodes known to be
important in the area of interest) differs significantly from a randomly
generated subnetwork. The proposed method uses a Monte Carlo approach. As
different seed lists can give rise to the same subnetwork, we control for
redundancy by constructing a minimal seed list as the starting point for the
significance test. The null model is based on random seed lists of the same
length as a minimum seed list that generates the subnetwork; in this random
seed list, the nodes have (approximately) the same degree distribution as the
nodes in the minimum seed list. We use this null model to select subnetworks
which deviate significantly from random on an appropriate set of statistics and
might capture useful information for a real world protein-protein interaction
network.
|
[
{
"created": "Sat, 31 Oct 2015 21:48:47 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Sep 2016 02:40:57 GMT",
"version": "v2"
},
{
"created": "Mon, 9 Oct 2017 13:05:55 GMT",
"version": "v3"
}
] |
2017-10-11
|
[
[
"Elliott",
"Andrew",
""
],
[
"Leicht",
"Elizabeth",
""
],
[
"Whitmore",
"Alan",
""
],
[
"Reinert",
"Gesine",
""
],
[
"Reed-Tsochas",
"Felix",
""
]
] |
Our work is motivated by an interest in constructing a protein-protein interaction network that captures key features associated with Parkinson's disease. While there is an abundance of subnetwork construction methods available, it is often far from obvious which subnetwork is the most suitable starting point for further investigation. We provide a method to assess whether a subnetwork constructed from a seed list (a list of nodes known to be important in the area of interest) differs significantly from a randomly generated subnetwork. The proposed method uses a Monte Carlo approach. As different seed lists can give rise to the same subnetwork, we control for redundancy by constructing a minimal seed list as the starting point for the significance test. The null model is based on random seed lists of the same length as a minimum seed list that generates the subnetwork; in this random seed list, the nodes have (approximately) the same degree distribution as the nodes in the minimum seed list. We use this null model to select subnetworks which deviate significantly from random on an appropriate set of statistics and might capture useful information for a real world protein-protein interaction network.
|
2202.09388
|
Marc Aur\`ele Gilles
|
Marc Aur\`ele Gilles and Amit Singer
|
A Molecular Prior Distribution for Bayesian Inference Based on Wilson
Statistics
| null | null | null | null |
q-bio.QM cs.CV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background and Objective: Wilson statistics describe well the power spectrum
of proteins at high frequencies. Therefore, they have found several applications
in structural biology, e.g., it is the basis for sharpening steps used in
cryogenic electron microscopy (cryo-EM). A recent paper gave the first rigorous
proof of Wilson statistics based on a formalism of Wilson's original argument.
This new analysis also leads to statistical estimates of the scattering
potential of proteins that reveal a correlation between neighboring Fourier
coefficients. Here we exploit these estimates to craft a novel prior that can
be used for Bayesian inference of molecular structures. Methods: We describe
the properties of the prior and the computation of its hyperparameters. We then
evaluate the prior on two synthetic linear inverse problems, and compare
against a popular prior in cryo-EM reconstruction at a range of SNRs. Results:
We show that the new prior effectively suppresses noise and fills in low-SNR
regions in the spectral domain. Furthermore, it improves the resolution of
estimates on the problems considered for a wide range of SNR and produces
Fourier Shell Correlation curves that are insensitive to masking effects.
Conclusions: We analyze the assumptions in the model, discuss relations to
other regularization strategies, and postulate on potential implications for
structure determination in cryo-EM.
|
[
{
"created": "Fri, 18 Feb 2022 19:14:03 GMT",
"version": "v1"
},
{
"created": "Mon, 2 May 2022 14:48:23 GMT",
"version": "v2"
}
] |
2022-05-03
|
[
[
"Gilles",
"Marc Aurèle",
""
],
[
"Singer",
"Amit",
""
]
] |
Background and Objective: Wilson statistics describe well the power spectrum of proteins at high frequencies. Therefore, they have found several applications in structural biology, e.g., they are the basis for sharpening steps used in cryogenic electron microscopy (cryo-EM). A recent paper gave the first rigorous proof of Wilson statistics based on a formalism of Wilson's original argument. This new analysis also leads to statistical estimates of the scattering potential of proteins that reveal a correlation between neighboring Fourier coefficients. Here we exploit these estimates to craft a novel prior that can be used for Bayesian inference of molecular structures. Methods: We describe the properties of the prior and the computation of its hyperparameters. We then evaluate the prior on two synthetic linear inverse problems, and compare against a popular prior in cryo-EM reconstruction at a range of SNRs. Results: We show that the new prior effectively suppresses noise and fills in low-SNR regions in the spectral domain. Furthermore, it improves the resolution of estimates on the problems considered for a wide range of SNR and produces Fourier Shell Correlation curves that are insensitive to masking effects. Conclusions: We analyze the assumptions in the model, discuss relations to other regularization strategies, and postulate on potential implications for structure determination in cryo-EM.
|
2008.10382
|
Sarah Tebeka
|
Sarah Tebeka, Yann Le Strat, Alix De Premorel Higgons, Alexandra
Benachi, Marc Dommergues, Gilles Kayem, Jacques Lepercq, Dominique Luton,
Laurent Mandelbrot, Yves Ville, Nicolas Ramoz, Sophie Tezenas du Montcel,
IGEDEPP Groups, Jimmy Mullaert, Caroline Dubertret
|
Prevalence and incidence of postpartum depression and environmental
factors: the IGEDEPP cohort
|
34 pages, 6 tables
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background: IGEDEPP (Interaction of Gene and Environment of Depression during
PostPartum) is a prospective multicenter cohort study of 3,310 Caucasian women
who gave birth between 2011 and 2016, with follow-up until one year postpartum.
The aim of the current study is to describe the cohort and estimate the
prevalence and cumulative incidence of early and late postpartum depression
(PPD). Methods: Socio-demographic data, personal and family psychiatric
history, as well as stressful life events during childhood and pregnancy were
evaluated at baseline. Early and late PPD were assessed at 8 weeks and 1 year
postpartum respectively, using DSM-5 criteria. Results: The prevalence of early
PPD was 8.3% (95%CI 7.3-9.3), and late PPD 12.9% (95%CI 11.5-14.2), resulting
in an 8-week cumulative incidence of 8.5% (95%CI 7.4-9.6) and a one-year
cumulative incidence of PPD of 18.1% (95%CI: 17.1-19.2). Nearly half of the
cohort (N=1571, 47.5%) had a history of at least one psychiatric or addictive
disorder, primarily depressive disorder (35%). Almost 300 women in the cohort
(9.0%) reported childhood trauma. During pregnancy, 47.7% of women experienced
a stressful event,
stressful event, 30.2% in the first 8 weeks and 43.9% between 8 weeks and one
year postpartum. Nearly one in five women reported at least one stressful
postpartum event at 8 weeks. Conclusion: Incident depressive episodes affected
nearly one in five women during the first year postpartum. Most women had
stressful perinatal events. Further IGEDEPP studies will aim to disentangle the
impact of childhood and pregnancy-related stressful events on postpartum mental
disorders.
|
[
{
"created": "Fri, 21 Aug 2020 08:28:02 GMT",
"version": "v1"
}
] |
2020-08-25
|
[
[
"Tebeka",
"Sarah",
""
],
[
"Strat",
"Yann Le",
""
],
[
"Higgons",
"Alix De Premorel",
""
],
[
"Benachi",
"Alexandra",
""
],
[
"Dommergues",
"Marc",
""
],
[
"Kayem",
"Gilles",
""
],
[
"Lepercq",
"Jacques",
""
],
[
"Luton",
"Dominique",
""
],
[
"Mandelbrot",
"Laurent",
""
],
[
"Ville",
"Yves",
""
],
[
"Ramoz",
"Nicolas",
""
],
[
"Montcel",
"Sophie Tezenas du",
""
],
[
"Groups",
"IGEDEPP",
""
],
[
"Mullaert",
"Jimmy",
""
],
[
"Dubertret",
"Caroline",
""
]
] |
Background: IGEDEPP (Interaction of Gene and Environment of Depression during PostPartum) is a prospective multicenter cohort study of 3,310 Caucasian women who gave birth between 2011 and 2016, with follow-up until one year postpartum. The aim of the current study is to describe the cohort and estimate the prevalence and cumulative incidence of early and late postpartum depression (PPD). Methods: Socio-demographic data, personal and family psychiatric history, as well as stressful life events during childhood and pregnancy were evaluated at baseline. Early and late PPD were assessed at 8 weeks and 1 year postpartum respectively, using DSM-5 criteria. Results: The prevalence of early PPD was 8.3% (95%CI 7.3-9.3), and late PPD 12.9% (95%CI 11.5-14.2), resulting in an 8-week cumulative incidence of 8.5% (95%CI 7.4-9.6) and a one-year cumulative incidence of PPD of 18.1% (95%CI: 17.1-19.2). Nearly half of the cohort (N=1571, 47.5%) had a history of at least one psychiatric or addictive disorder, primarily depressive disorder (35%). Almost 300 women in the cohort (9.0%) reported childhood trauma. During pregnancy, 47.7% of women experienced a stressful event, 30.2% in the first 8 weeks and 43.9% between 8 weeks and one year postpartum. Nearly one in five women reported at least one stressful postpartum event at 8 weeks. Conclusion: Incident depressive episodes affected nearly one in five women during the first year postpartum. Most women had stressful perinatal events. Further IGEDEPP studies will aim to disentangle the impact of childhood and pregnancy-related stressful events on postpartum mental disorders.
|
1510.05918
|
Shi Huang
|
Shi Huang
|
New thoughts on an old riddle: what determines genetic diversity within
and between species?
|
23 pages, 1 figure
|
Genomics, 108: 3-10 (2016)
|
10.1016/j.ygeno.2016.01.008
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The question of what determines genetic diversity both between and within
species has long remained unsolved by the modern evolutionary theory (MET).
However, it has not deterred researchers from producing interpretations of
genetic diversity by using MET. We here examine the two key experimental
observations of genetic diversity made in the 1960s, one between species and
the other within a population of a species, that directly contributed to the
development of MET. The interpretations of these observations as well as the
assumptions by MET are widely known to be inadequate. We review the recent
progress of an alternative framework, the maximum genetic diversity (MGD)
hypothesis, that uses axioms and natural selection to explain the vast majority
of genetic diversity as being at optimum equilibrium that is largely determined
by organismal complexity. The MGD hypothesis fully absorbs the proven virtues
of MET and considers its assumptions relevant only to a much more limited
scope. This new synthesis has accounted for the much overlooked phenomenon of
progression towards higher complexity, and more importantly, been instrumental
in directing productive research into both evolutionary and biomedical
problems.
|
[
{
"created": "Tue, 20 Oct 2015 16:36:25 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Oct 2015 04:35:48 GMT",
"version": "v2"
}
] |
2019-04-04
|
[
[
"Huang",
"Shi",
""
]
] |
The question of what determines genetic diversity both between and within species has long remained unsolved by the modern evolutionary theory (MET). However, it has not deterred researchers from producing interpretations of genetic diversity by using MET. We here examine the two key experimental observations of genetic diversity made in the 1960s, one between species and the other within a population of a species, that directly contributed to the development of MET. The interpretations of these observations as well as the assumptions by MET are widely known to be inadequate. We review the recent progress of an alternative framework, the maximum genetic diversity (MGD) hypothesis, that uses axioms and natural selection to explain the vast majority of genetic diversity as being at optimum equilibrium that is largely determined by organismal complexity. The MGD hypothesis fully absorbs the proven virtues of MET and considers its assumptions relevant only to a much more limited scope. This new synthesis has accounted for the much overlooked phenomenon of progression towards higher complexity, and more importantly, been instrumental in directing productive research into both evolutionary and biomedical problems.
|
1704.08889
|
Yoo Ah Kim
|
Phuong Dao, Yoo-Ah Kim, Damian Wojtowicz, Sanna Madan, Roded Sharan,
and Teresa M. Przytycka
|
BeWith: A Between-Within Method to Discover Relationships between Cancer
Modules via Integrated Analysis of Mutual Exclusivity, Co-occurrence and
Functional Interactions
| null | null |
10.1371/journal.pcbi.1005695
| null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The analysis of the mutational landscape of cancer, including mutual
exclusivity and co-occurrence of mutations, has been instrumental in studying
the disease. We hypothesized that exploring the interplay between
co-occurrence, mutual exclusivity, and functional interactions between genes
will further improve our understanding of the disease and help to uncover new
relations between cancer driving genes and pathways. To this end, we designed a
general framework, BeWith, for identifying modules with different combinations
of mutation and interaction patterns. We focused on three different settings of
the BeWith schema: (i) BeME-WithFun in which the relations between modules are
enriched with mutual exclusivity while genes within each module are
functionally related; (ii) BeME-WithCo which combines mutual exclusivity
between modules with co-occurrence within modules; and (iii) BeCo-WithMEFun
which ensures co-occurrence between modules while the within module relations
combine mutual exclusivity and functional interactions. We formulated the
BeWith framework using Integer Linear Programming (ILP), enabling us to find
optimally scoring sets of modules. Our results demonstrate the utility of
BeWith in providing novel information about mutational patterns, driver genes,
and pathways. In particular, BeME-WithFun helped identify functionally coherent
modules that might be relevant for cancer progression. In addition to finding
previously well-known drivers, the identified modules pointed to the importance
of the interaction between NCOR and NCOA3 in breast cancer. Additionally, an
application of the BeME-WithCo setting revealed that gene groups differ with
respect to their vulnerability to different mutagenic processes, and helped us
to uncover pairs of genes with potentially synergetic effects, including a
potential synergy between mutations in TP53 and metastasis related DCC gene.
|
[
{
"created": "Fri, 28 Apr 2017 12:08:12 GMT",
"version": "v1"
}
] |
2018-02-07
|
[
[
"Dao",
"Phuong",
""
],
[
"Kim",
"Yoo-Ah",
""
],
[
"Wojtowicz",
"Damian",
""
],
[
"Madan",
"Sanna",
""
],
[
"Sharan",
"Roded",
""
],
[
"Przytycka",
"Teresa M.",
""
]
] |
The analysis of the mutational landscape of cancer, including mutual exclusivity and co-occurrence of mutations, has been instrumental in studying the disease. We hypothesized that exploring the interplay between co-occurrence, mutual exclusivity, and functional interactions between genes will further improve our understanding of the disease and help to uncover new relations between cancer driving genes and pathways. To this end, we designed a general framework, BeWith, for identifying modules with different combinations of mutation and interaction patterns. We focused on three different settings of the BeWith schema: (i) BeME-WithFun in which the relations between modules are enriched with mutual exclusivity while genes within each module are functionally related; (ii) BeME-WithCo which combines mutual exclusivity between modules with co-occurrence within modules; and (iii) BeCo-WithMEFun which ensures co-occurrence between modules while the within module relations combine mutual exclusivity and functional interactions. We formulated the BeWith framework using Integer Linear Programming (ILP), enabling us to find optimally scoring sets of modules. Our results demonstrate the utility of BeWith in providing novel information about mutational patterns, driver genes, and pathways. In particular, BeME-WithFun helped identify functionally coherent modules that might be relevant for cancer progression. In addition to finding previously well-known drivers, the identified modules pointed to the importance of the interaction between NCOR and NCOA3 in breast cancer. Additionally, an application of the BeME-WithCo setting revealed that gene groups differ with respect to their vulnerability to different mutagenic processes, and helped us to uncover pairs of genes with potentially synergetic effects, including a potential synergy between mutations in TP53 and metastasis related DCC gene.
|
2009.02081
|
Selma Souihel
|
Selma Souihel (BIOVISION), Bruno Cessac (BIOVISION)
|
On the potential role of lateral connectivity in retinal anticipation
| null | null | null | null |
q-bio.QM q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We analyse the potential effects of lateral connectivity (amacrine cells and
gap junctions) on motion anticipation in the retina. Our main result is that
lateral connectivity can, under conditions analysed in the paper, trigger a wave
of activity enhancing the anticipation mechanism provided by local gain control
[8, 17]. We illustrate these predictions by two examples studied in the
experimental literature: differential motion sensitive cells [1] and direction
sensitive cells where direction sensitivity is inherited from asymmetry in gap
junctions connectivity [73]. We finally present reconstructions of retinal
responses to 2D visual inputs to assess the ability of our model to anticipate
motion in the case of three different 2D stimuli.
|
[
{
"created": "Fri, 4 Sep 2020 09:18:41 GMT",
"version": "v1"
}
] |
2020-09-07
|
[
[
"Souihel",
"Selma",
"",
"BIOVISION"
],
[
"Cessac",
"Bruno",
"",
"BIOVISION"
]
] |
We analyse the potential effects of lateral connectivity (amacrine cells and gap junctions) on motion anticipation in the retina. Our main result is that lateral connectivity can, under conditions analysed in the paper, trigger a wave of activity enhancing the anticipation mechanism provided by local gain control [8, 17]. We illustrate these predictions by two examples studied in the experimental literature: differential motion sensitive cells [1] and direction sensitive cells where direction sensitivity is inherited from asymmetry in gap junctions connectivity [73]. We finally present reconstructions of retinal responses to 2D visual inputs to assess the ability of our model to anticipate motion in the case of three different 2D stimuli.
|
1404.6134
|
Georgios Tsaousis N
|
Katerina C. Nastou, Georgios N. Tsaousis, Kimon E. Kremizas, Zoi I.
Litou and Stavros J. Hamodrakas
|
The Human Plasma Membrane Peripherome: Visualization and Analysis of
Interactions
|
39 pages, 5 figures, 3 supplement figures, under review in BMRI
|
BioMed Research International, vol. 2014, Article ID 397145, 12
pages, 2014
|
10.1155/2014/397145
| null |
q-bio.BM q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A major part of membrane function is conducted by proteins, both integral and
peripheral. Peripheral membrane proteins temporarily adhere to biological
membranes, either to the lipid bilayer or to integral membrane proteins with
non-covalent interactions. The aim of this study was to construct and analyze
the interactions of the human plasma membrane peripheral proteins (peripherome
hereinafter). For this purpose, we collected a dataset of peripheral proteins
of the human plasma membrane. We also collected a dataset of experimentally
verified interactions for these proteins. The interaction network created from
this dataset has been visualized using Cytoscape. We grouped the proteins based
on their subcellular location and clustered them using the MCL algorithm in
order to detect functional modules. Moreover, functional and graph theory based
analyses have been performed to assess biological features of the network.
Interaction data with drug molecules show that ~10% of peripheral membrane
proteins are targets for approved drugs, suggesting their potential
implications in disease. In conclusion, we reveal novel features and properties
regarding the protein-protein interaction network created by peripheral
proteins of the human plasma membrane.
|
[
{
"created": "Thu, 24 Apr 2014 14:44:45 GMT",
"version": "v1"
}
] |
2014-06-27
|
[
[
"Nastou",
"Katerina C.",
""
],
[
"Tsaousis",
"Georgios N.",
""
],
[
"Kremizas",
"Kimon E.",
""
],
[
"Litou",
"Zoi I.",
""
],
[
"Hamodrakas",
"Stavros J.",
""
]
] |
A major part of membrane function is conducted by proteins, both integral and peripheral. Peripheral membrane proteins temporarily adhere to biological membranes, either to the lipid bilayer or to integral membrane proteins with non-covalent interactions. The aim of this study was to construct and analyze the interactions of the human plasma membrane peripheral proteins (peripherome hereinafter). For this purpose, we collected a dataset of peripheral proteins of the human plasma membrane. We also collected a dataset of experimentally verified interactions for these proteins. The interaction network created from this dataset has been visualized using Cytoscape. We grouped the proteins based on their subcellular location and clustered them using the MCL algorithm in order to detect functional modules. Moreover, functional and graph theory based analyses have been performed to assess biological features of the network. Interaction data with drug molecules show that ~10% of peripheral membrane proteins are targets for approved drugs, suggesting their potential implications in disease. In conclusion, we reveal novel features and properties regarding the protein-protein interaction network created by peripheral proteins of the human plasma membrane.
|
q-bio/0702055
|
Indrani Bose
|
Rajesh Karmakar and Indrani Bose
|
Positive Feedback, Stochasticity and Genetic Competence
|
Accepted for publication in Physical Biology
|
Phys. Biol. 4 (2007) 29--37
|
10.1088/1478-3975/4/1/004
| null |
q-bio.QM cond-mat.stat-mech q-bio.OT
| null |
A single gene, regulating its own expression via a positive feedback loop,
constitutes a common motif in gene regulatory networks and signalling cascades.
Recent experiments on the development of competence in the bacterial population
\emph{B. subtilis} show that the autoregulatory genetic module by itself can
give rise to two types of cellular states. The states correspond to the low and
high expression states of the master regulator ComK. The high expression state
is attained when the ComK protein level exceeds a threshold value leading to a
full activation of the autostimulatory loop. Stochasticity in gene expression
drives the transitions between the two stable states. In this paper, we explain
the appearance of bimodal protein distributions in \emph{B. subtilis} cell
population in the framework of three possible scenarios. In two of the cases,
bistability provides the basis for binary gene expression. In the third case,
the system is monostable in a deterministic description and stochasticity in
gene expression is solely responsible for the appearance of the two expression
states.
|
[
{
"created": "Mon, 26 Feb 2007 09:59:54 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Karmakar",
"Rajesh",
""
],
[
"Bose",
"Indrani",
""
]
] |
A single gene, regulating its own expression via a positive feedback loop, constitutes a common motif in gene regulatory networks and signalling cascades. Recent experiments on the development of competence in the bacterial population \emph{B. subtilis} show that the autoregulatory genetic module by itself can give rise to two types of cellular states. The states correspond to the low and high expression states of the master regulator ComK. The high expression state is attained when the ComK protein level exceeds a threshold value leading to a full activation of the autostimulatory loop. Stochasticity in gene expression drives the transitions between the two stable states. In this paper, we explain the appearance of bimodal protein distributions in \emph{B. subtilis} cell population in the framework of three possible scenarios. In two of the cases, bistability provides the basis for binary gene expression. In the third case, the system is monostable in a deterministic description and stochasticity in gene expression is solely responsible for the appearance of the two expression states.
|
2004.04741
|
George Mohler
|
Andrea L. Bertozzi, Elisa Franco, George Mohler, Martin B. Short,
Daniel Sledge
|
The challenges of modeling and forecasting the spread of COVID-19
| null | null |
10.1073/pnas.2006520117
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present three data driven model-types for COVID-19 with a minimal number
of parameters to provide insights into the spread of the disease that may be
used for developing policy responses. The first is exponential growth, widely
studied in analysis of early-time data. The second is a self-exciting branching
process model which includes a delay in transmission and recovery. It allows
for meaningful fit to early time stochastic data. The third is the well-known
Susceptible-Infected-Resistant (SIR) model and its cousin, SEIR, with an
"Exposed" component. All three models are related quantitatively, and the SIR
model is used to illustrate the potential effects of short-term distancing
measures in the United States.
|
[
{
"created": "Thu, 9 Apr 2020 16:22:23 GMT",
"version": "v1"
}
] |
2022-05-25
|
[
[
"Bertozzi",
"Andrea L.",
""
],
[
"Franco",
"Elisa",
""
],
[
"Mohler",
"George",
""
],
[
"Short",
"Martin B.",
""
],
[
"Sledge",
"Daniel",
""
]
] |
We present three data driven model-types for COVID-19 with a minimal number of parameters to provide insights into the spread of the disease that may be used for developing policy responses. The first is exponential growth, widely studied in analysis of early-time data. The second is a self-exciting branching process model which includes a delay in transmission and recovery. It allows for meaningful fit to early time stochastic data. The third is the well-known Susceptible-Infected-Resistant (SIR) model and its cousin, SEIR, with an "Exposed" component. All three models are related quantitatively, and the SIR model is used to illustrate the potential effects of short-term distancing measures in the United States.
|
1409.6744
|
Antonio Galves
|
C. D. Vargas, M. L. Rangel and A. Galves
|
Predicting upcoming actions by observation: some facts, models and
challenges
| null | null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Predicting another person's upcoming action to build an appropriate response
is a regular occurrence in the domain of motor control. In this review we
discuss conceptual and experimental approaches aiming at the neural basis of
predicting and learning to predict upcoming movements by their observation.
|
[
{
"created": "Tue, 23 Sep 2014 20:22:50 GMT",
"version": "v1"
}
] |
2014-09-25
|
[
[
"Vargas",
"C. D.",
""
],
[
"Rangel",
"M. L.",
""
],
[
"Galves",
"A.",
""
]
] |
Predicting another person's upcoming action to build an appropriate response is a regular occurrence in the domain of motor control. In this review we discuss conceptual and experimental approaches aiming at the neural basis of predicting and learning to predict upcoming movements by their observation.
|
1611.00716
|
Alfred Bennun BA
|
Alfred Bennun
|
The assay of the hydration shell dynamics on the turnover of the active
site of CF1-ATPase
|
13 pages, 3 figures
| null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous kinetic models assumed that the reaction medium reacts at random,
without a turnover associated with thermodynamic exchanges, and with a rigid
active site on the enzyme. The experimental studies show that coupling
factor 1 (CF1) from spinach chloroplasts has latent ATPase activity, which
becomes expressed after heat treatment and incubation with calcium. The
sigmoidal kinetics observed in the competitive effect of glycerol on water
saturating a protein suggest that the role of the hydration shell in the
catalytic mechanism of CF1-ATPase is to modify the number of water molecules
associated with the conformational turnover required for active-site
activity. It is assumed that the water associated with the hydrophilic state
of the enzyme produces a fit-in of the substrate to form (ES), followed by
the catalytic action with product formation (EP). This induces the
dissociation of water and increases the hydrophobic attractions between
R-groups. The latter becomes the form of the enzyme interacting with water
to form the dissociated free enzyme (E) and free product (P).
Glycerol-dependent suppression of the water dynamics on a
two-interacting-sites configuration shows a change in the H-bond
configuration. The thermodynamic modeling requires an energy expenditure of
about 4 kcal/mol per H-bond in turnover. Glycerol determined a turnover of
14 molecules of water released from the active sites to reach the inactive
form of the enzyme. The entropy generated by the turnover of the fit-in and
fit-out of substrate and product from the protein could be dissipated out of
the enzyme-water system itself. Coupling with the surrounding water clusters
allows H-bonds to be recreated. This should involve a decrease in the number
of H-bonds present in the clusters. These changes in the mass-action
capability of the water clusters could eventually become dissipated through
a cooling effect.
|
[
{
"created": "Thu, 7 Jul 2016 21:45:49 GMT",
"version": "v1"
}
] |
2016-11-03
|
[
[
"Bennun",
"Alfred",
""
]
] |
Previous kinetic models assumed that the reaction medium reacts at random, without a turnover associated with thermodynamic exchanges, and with a rigid active site on the enzyme. The experimental studies show that coupling factor 1 (CF1) from spinach chloroplasts has latent ATPase activity, which becomes expressed after heat treatment and incubation with calcium. The sigmoidal kinetics observed in the competitive effect of glycerol on water saturating a protein suggest that the role of the hydration shell in the catalytic mechanism of CF1-ATPase is to modify the number of water molecules associated with the conformational turnover required for active-site activity. It is assumed that the water associated with the hydrophilic state of the enzyme produces a fit-in of the substrate to form (ES), followed by the catalytic action with product formation (EP). This induces the dissociation of water and increases the hydrophobic attractions between R-groups. The latter becomes the form of the enzyme interacting with water to form the dissociated free enzyme (E) and free product (P). Glycerol-dependent suppression of the water dynamics on a two-interacting-sites configuration shows a change in the H-bond configuration. The thermodynamic modeling requires an energy expenditure of about 4 kcal/mol per H-bond in turnover. Glycerol determined a turnover of 14 molecules of water released from the active sites to reach the inactive form of the enzyme. The entropy generated by the turnover of the fit-in and fit-out of substrate and product from the protein could be dissipated out of the enzyme-water system itself. Coupling with the surrounding water clusters allows H-bonds to be recreated. This should involve a decrease in the number of H-bonds present in the clusters. These changes in the mass-action capability of the water clusters could eventually become dissipated through a cooling effect.
|
1205.2919
|
Christos Skiadas H
|
Christos H. Skiadas and Charilaos Skiadas
|
Estimating the Healthy Life Expectancy from the Health State Function of
a Population in Connection to the Life Expectancy at Birth
|
17 pages, 8 figures, 4 tables
| null | null | null |
q-bio.PE nlin.AO q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Following our previous works on the health state of a population and the
related health state function we proceed in developing a method to estimate the
Healthy Life Expectancy in connection to the relative impact of the Mortality
Area in the health state function graph. The resulting tools are applied to the
data sets for 1990, 2000 and 2009 for the Countries of the World Health
Organization (WHO). The application is done in the Excel Chart and it is ready
to be used for other time periods. The results are compared with the estimates
presented in the WHO report for 2000 showing a good relationship between the
estimates of the two methods. However, our proposed method, not based on
collection of data for diseases and other causes affecting a healthy life, is
more flexible. We can estimate the healthy life years from various time periods
when information related to diseases is missing. Keywords: Health state
function, Healthy life expectancy, Deterioration, Loss of healthy years, HALE,
DALE, World Health Organization, WHO
|
[
{
"created": "Mon, 14 May 2012 00:33:54 GMT",
"version": "v1"
}
] |
2012-05-15
|
[
[
"Skiadas",
"Christos H.",
""
],
[
"Skiadas",
"Charilaos",
""
]
] |
Following our previous works on the health state of a population and the related health state function we proceed in developing a method to estimate the Healthy Life Expectancy in connection to the relative impact of the Mortality Area in the health state function graph. The resulting tools are applied to the data sets for 1990, 2000 and 2009 for the Countries of the World Health Organization (WHO). The application is done in the Excel Chart and it is ready to be used for other time periods. The results are compared with the estimates presented in the WHO report for 2000 showing a good relationship between the estimates of the two methods. However, our proposed method, not based on collection of data for diseases and other causes affecting a healthy life, is more flexible. We can estimate the healthy life years from various time periods when information related to diseases is missing. Keywords: Health state function, Healthy life expectancy, Deterioration, Loss of healthy years, HALE, DALE, World Health Organization, WHO
|
1603.04721
|
Claus Metzner
|
Patrick Krauss, Konstantin Tziridis, Achim Schilling, Claus Metzner
and Holger Schulze
|
Stochastic resonance controlled upregulation of internal noise after
hearing loss as a putative correlate of tinnitus-related neuronal
hyperactivity
| null | null | null | null |
q-bio.QM q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Subjective tinnitus (ST) is generally assumed to be a consequence of hearing
loss (HL). In animal studies acoustic trauma can lead to behavioral signs of
ST, in human studies ST patients without increased hearing thresholds were
found to suffer from so called hidden HL. Additionally, ST is correlated with
pathologically increased spontaneous firing rates and neuronal hyperactivity
(NH) along the auditory pathway. Homeostatic plasticity (HP) has been proposed
as a compensation mechanism leading to the development of NH, arguing that
after HL initially decreased mean firing rates of neurons are subsequently
restored by increased spontaneous rates. However, all HP models fundamentally
lack explanatory power since the function of keeping mean firing rate constant
remains elusive as does the benefit this might have in terms of information
processing. Furthermore, the neural circuitry able to perform the
comparison of preferred with actual mean firing rate remains unclear. Here we
propose an entirely new interpretation of ST related development of NH in terms
of information theory. We suggest that stochastic resonance (SR) plays a key
role in short- and long-term plasticity within the auditory system and is the
ultimate cause of NH and ST. SR has been found ubiquitous in neuroscience and
refers to the phenomenon that sub-threshold, unperceivable signals can be
transmitted by adding noise to sensor input. We argue that after HL, SR serves
to lift signals above the increased hearing threshold, hence subsequently
decreasing thresholds again. The increased amount of internal noise is the
correlate of the NH, which finally leads to the development of ST, due to
neuronal plasticity along the auditory pathway. We demonstrate the plausibility
of our hypothesis by using a computational model and provide exemplary
findings of human and animal studies that are consistent with our model.
|
[
{
"created": "Tue, 15 Mar 2016 15:27:19 GMT",
"version": "v1"
}
] |
2016-03-16
|
[
[
"Krauss",
"Patrick",
""
],
[
"Tziridis",
"Konstantin",
""
],
[
"Schilling",
"Achim",
""
],
[
"Metzner",
"Claus",
""
],
[
"Schulze",
"Holger",
""
]
] |
Subjective tinnitus (ST) is generally assumed to be a consequence of hearing loss (HL). In animal studies acoustic trauma can lead to behavioral signs of ST, in human studies ST patients without increased hearing thresholds were found to suffer from so called hidden HL. Additionally, ST is correlated with pathologically increased spontaneous firing rates and neuronal hyperactivity (NH) along the auditory pathway. Homeostatic plasticity (HP) has been proposed as a compensation mechanism leading to the development of NH, arguing that after HL initially decreased mean firing rates of neurons are subsequently restored by increased spontaneous rates. However, all HP models fundamentally lack explanatory power since the function of keeping mean firing rate constant remains elusive as does the benefit this might have in terms of information processing. Furthermore, the neural circuitry able to perform the comparison of preferred with actual mean firing rate remains unclear. Here we propose an entirely new interpretation of ST related development of NH in terms of information theory. We suggest that stochastic resonance (SR) plays a key role in short- and long-term plasticity within the auditory system and is the ultimate cause of NH and ST. SR has been found ubiquitous in neuroscience and refers to the phenomenon that sub-threshold, unperceivable signals can be transmitted by adding noise to sensor input. We argue that after HL, SR serves to lift signals above the increased hearing threshold, hence subsequently decreasing thresholds again. The increased amount of internal noise is the correlate of the NH, which finally leads to the development of ST, due to neuronal plasticity along the auditory pathway. We demonstrate the plausibility of our hypothesis by using a computational model and provide exemplary findings of human and animal studies that are consistent with our model.
|
1205.0802
|
Matjaz Perc
|
Yongkui Liu, Xiaojie Chen, Lin Zhang, Long Wang, Matjaz Perc
|
Win-stay-lose-learn promotes cooperation in the spatial prisoner's
dilemma game
|
14 pages, 6 figures; accepted for publication in PLoS ONE
|
PLoS ONE 7 (2012) e30689
|
10.1371/journal.pone.0030689
| null |
q-bio.PE cond-mat.stat-mech physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Holding on to one's strategy is natural and common if the latter warrants
success and satisfaction. This goes against widespread simulation practices of
evolutionary games, where players frequently consider changing their strategy
even though their payoffs may be marginally different than those of the other
players. Inspired by this observation, we introduce an aspiration-based
win-stay-lose-learn strategy updating rule into the spatial prisoner's dilemma
game. The rule is simple and intuitive, foreseeing strategy changes only by
dissatisfied players, who then attempt to adopt the strategy of one of their
nearest neighbors, while the strategies of satisfied players are not subject to
change. We find that the proposed win-stay-lose-learn rule promotes the
evolution of cooperation, and it does so very robustly and independently of the
initial conditions. In fact, we show that even a minute initial fraction of
cooperators may be sufficient to eventually secure a highly cooperative final
state. In addition to extensive simulation results that support our
conclusions, we also present results obtained by means of the pair
approximation of the studied game. Our findings continue the success story of
related win-stay strategy updating rules, and by doing so reveal new ways of
resolving the prisoner's dilemma.
|
[
{
"created": "Thu, 3 May 2012 19:56:36 GMT",
"version": "v1"
}
] |
2012-05-04
|
[
[
"Liu",
"Yongkui",
""
],
[
"Chen",
"Xiaojie",
""
],
[
"Zhang",
"Lin",
""
],
[
"Wang",
"Long",
""
],
[
"Perc",
"Matjaz",
""
]
] |
Holding on to one's strategy is natural and common if the latter warrants success and satisfaction. This goes against widespread simulation practices of evolutionary games, where players frequently consider changing their strategy even though their payoffs may be marginally different than those of the other players. Inspired by this observation, we introduce an aspiration-based win-stay-lose-learn strategy updating rule into the spatial prisoner's dilemma game. The rule is simple and intuitive, foreseeing strategy changes only by dissatisfied players, who then attempt to adopt the strategy of one of their nearest neighbors, while the strategies of satisfied players are not subject to change. We find that the proposed win-stay-lose-learn rule promotes the evolution of cooperation, and it does so very robustly and independently of the initial conditions. In fact, we show that even a minute initial fraction of cooperators may be sufficient to eventually secure a highly cooperative final state. In addition to extensive simulation results that support our conclusions, we also present results obtained by means of the pair approximation of the studied game. Our findings continue the success story of related win-stay strategy updating rules, and by doing so reveal new ways of resolving the prisoner's dilemma.
|
1610.09297
|
Antonio Di Crescenzo
|
Antonio Di Crescenzo, Serena Spina
|
Analysis of a growth model inspired by Gompertz and Korf laws, and an
analogous birth-death process
|
25 pages, 11 figures
|
Mathematical Biosciences (2016), Vol. 282, p. 121-134
|
10.1016/j.mbs.2016.10.005
| null |
q-bio.PE math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a new deterministic growth model which captures certain features
of both the Gompertz and Korf laws. We investigate its main properties, with
special attention to the correction factor, the relative growth rate, the
inflection point, the maximum specific growth rate, the lag time and the
threshold crossing problem. Some data analytic examples and their performance
are also considered. Furthermore, we study a stochastic counterpart of the
proposed model, that is a linear time-inhomogeneous birth-death process whose
mean behaves as the deterministic one. We obtain the transition probabilities,
the moments and the population ultimate extinction probability for this
process. We finally treat the special case of a simple birth process, which
better mimics the proposed growth model.
|
[
{
"created": "Fri, 28 Oct 2016 16:20:59 GMT",
"version": "v1"
}
] |
2016-10-31
|
[
[
"Di Crescenzo",
"Antonio",
""
],
[
"Spina",
"Serena",
""
]
] |
We propose a new deterministic growth model which captures certain features of both the Gompertz and Korf laws. We investigate its main properties, with special attention to the correction factor, the relative growth rate, the inflection point, the maximum specific growth rate, the lag time and the threshold crossing problem. Some data analytic examples and their performance are also considered. Furthermore, we study a stochastic counterpart of the proposed model, that is a linear time-inhomogeneous birth-death process whose mean behaves as the deterministic one. We obtain the transition probabilities, the moments and the population ultimate extinction probability for this process. We finally treat the special case of a simple birth process, which better mimics the proposed growth model.
|
q-bio/0401032
|
Svyatoslav Gabuda Petrovich
|
R. G. Khlebopros, V. A. Slepkov, V. G. Sukhovolsky, Y. V. Mironov,
V. E. Fedorov, S. P. Gabuda
|
Mathematical model of solid tumor formation
|
7 pages, 5 figures, key words:tumour growth, mathematical model,
local interaction
| null | null | null |
q-bio.CB
| null |
The problem of the onset and growth of solid tumour in homogeneous tissue is
regarded using an approach based on local interaction between the tumoral and
the sane tissue cells. The characteristic sizes and growth rates of spherical
tumours, the points of the beginning and the end of spherical growth, and the
further development of complex structures (elongated outgrowths, dendritic
structures, and metastases) are derived from the assumption that the
reproduction rate of a population of cancer cells is a non-monotone function of
their local concentration. The predicted statistical distribution of the
characteristic tumour sizes, when compared to the clinical data, will provide a
basis for testing the validity of the theory.
|
[
{
"created": "Fri, 23 Jan 2004 14:59:51 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Khlebopros",
"R. G.",
""
],
[
"Slepkov",
"V. A.",
""
],
[
"Sukhovolsky",
"V. G.",
""
],
[
"Mironov",
"Y. V.",
""
],
[
"Fedorov",
"V. E.",
""
],
[
"Gabuda",
"S. P.",
""
]
] |
The problem of the onset and growth of a solid tumour in homogeneous tissue is considered using an approach based on local interaction between the tumoral and the healthy tissue cells. The characteristic sizes and growth rates of spherical tumours, the points of the beginning and the end of spherical growth, and the further development of complex structures (elongated outgrowths, dendritic structures, and metastases) are derived from the assumption that the reproduction rate of a population of cancer cells is a non-monotone function of their local concentration. The predicted statistical distribution of the characteristic tumour sizes, when compared to the clinical data, will provide a basis for testing the validity of the theory.
|
2402.05903
|
Giuseppe Vinci
|
Lauren Miako Beede and Giuseppe Vinci
|
Neuronal functional connectivity graph estimation with the R package
neurofuncon
|
7 pages, 5 figures
| null | null | null |
q-bio.NC stat.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Researchers continue exploring neurons' intricate patterns of activity in the
cerebral visual cortex in response to visual stimuli. The way neurons
communicate and optimize their interactions with each other under different
experimental conditions remains a topic of active investigation. Probabilistic
Graphical Models are invaluable tools in neuroscience research, as they let us
identify the functional connections, or conditional statistical dependencies,
between neurons. Graphical models represent these connections as a graph, where
nodes represent neurons and edges indicate the presence of functional
connections between them. We developed the R package neurofuncon for the
computation and visualization of functional connectivity graphs from
large-scale data based on the Graphical lasso. We illustrate the use of this
package with publicly available two-photon calcium microscopy imaging data from
approximately 10000 neurons in a 1mm cubic section of a mouse visual cortex.
|
[
{
"created": "Thu, 8 Feb 2024 18:42:04 GMT",
"version": "v1"
}
] |
2024-02-09
|
[
[
"Beede",
"Lauren Miako",
""
],
[
"Vinci",
"Giuseppe",
""
]
] |
Researchers continue exploring neurons' intricate patterns of activity in the cerebral visual cortex in response to visual stimuli. The way neurons communicate and optimize their interactions with each other under different experimental conditions remains a topic of active investigation. Probabilistic Graphical Models are invaluable tools in neuroscience research, as they let us identify the functional connections, or conditional statistical dependencies, between neurons. Graphical models represent these connections as a graph, where nodes represent neurons and edges indicate the presence of functional connections between them. We developed the R package neurofuncon for the computation and visualization of functional connectivity graphs from large-scale data based on the Graphical lasso. We illustrate the use of this package with publicly available two-photon calcium microscopy imaging data from approximately 10000 neurons in a 1mm cubic section of a mouse visual cortex.
|
1001.5235
|
Alexander Dobrinevski
|
A. Dobrinevski and E. Frey
|
Extinction in neutrally stable stochastic Lotka-Volterra models
|
14 pages, 7 figures
|
Phys. Rev. E 85, 051903 (2012)
|
10.1103/PhysRevE.85.051903
|
LMU-ASC 03/10
|
q-bio.PE cond-mat.stat-mech q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Populations of competing biological species exhibit a fascinating interplay
between the nonlinear dynamics of evolutionary selection forces and random
fluctuations arising from the stochastic nature of the interactions. The
processes leading to extinction of species, whose understanding is a key
component in the study of evolution and biodiversity, are influenced by both of
these factors.
In this paper, we investigate a class of stochastic population dynamics
models based on generalized Lotka-Volterra systems. In the case of neutral
stability of the underlying deterministic model, the impact of intrinsic noise
on the survival of species is dramatic: it destroys coexistence of interacting
species on a time scale proportional to the population size. We introduce a new
method based on stochastic averaging which allows one to understand this
extinction process quantitatively by reduction to a lower-dimensional effective
dynamics. This is performed analytically for two highly symmetrical models and
can be generalized numerically to more complex situations. The extinction
probability distributions and other quantities of interest we obtain show
excellent agreement with simulations.
|
[
{
"created": "Thu, 28 Jan 2010 18:11:01 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Apr 2012 08:18:15 GMT",
"version": "v2"
}
] |
2012-05-08
|
[
[
"Dobrinevski",
"A.",
""
],
[
"Frey",
"E.",
""
]
] |
Populations of competing biological species exhibit a fascinating interplay between the nonlinear dynamics of evolutionary selection forces and random fluctuations arising from the stochastic nature of the interactions. The processes leading to extinction of species, whose understanding is a key component in the study of evolution and biodiversity, are influenced by both of these factors. In this paper, we investigate a class of stochastic population dynamics models based on generalized Lotka-Volterra systems. In the case of neutral stability of the underlying deterministic model, the impact of intrinsic noise on the survival of species is dramatic: it destroys coexistence of interacting species on a time scale proportional to the population size. We introduce a new method based on stochastic averaging which allows one to understand this extinction process quantitatively by reduction to a lower-dimensional effective dynamics. This is performed analytically for two highly symmetrical models and can be generalized numerically to more complex situations. The extinction probability distributions and other quantities of interest we obtain show excellent agreement with simulations.
|
2403.09723
|
Michele Mastella
|
Michele Mastella and Tesse Tiemens and Elisabetta Chicca
|
Texture Recognition Using a Biologically Plausible Spiking Phase-Locked
Loop Model for Spike Train Frequency Decomposition
|
16 pages, 4 figures, journal
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we present a novel spiking neural network model designed to
perform frequency decomposition of spike trains. Our model emulates neural
microcircuits theorized in the somatosensory cortex, rendering it a
biologically plausible candidate for decoding the spike trains observed in
tactile peripheral nerves. We demonstrate the capacity of simple neurons and
synapses to replicate the phase-locked loop (PLL) and explore the emergent
properties when considering multiple spiking phase-locked loops (sPLLs) with
diverse oscillations. We illustrate how these sPLLs can decode textures using
the spectral features elicited in peripheral nerves. Leveraging our model's
frequency decomposition abilities, we improve state-of-the-art performances on
a Multifrequency Spike Train (MST) dataset. This work offers valuable insights
into neural processing and presents a practical framework for enhancing
artificial neural network capabilities in complex pattern recognition tasks.
|
[
{
"created": "Tue, 12 Mar 2024 13:28:39 GMT",
"version": "v1"
}
] |
2024-03-18
|
[
[
"Mastella",
"Michele",
""
],
[
"Tiemens",
"Tesse",
""
],
[
"Chicca",
"Elisabetta",
""
]
] |
In this paper, we present a novel spiking neural network model designed to perform frequency decomposition of spike trains. Our model emulates neural microcircuits theorized in the somatosensory cortex, rendering it a biologically plausible candidate for decoding the spike trains observed in tactile peripheral nerves. We demonstrate the capacity of simple neurons and synapses to replicate the phase-locked loop (PLL) and explore the emergent properties when considering multiple spiking phase-locked loops (sPLLs) with diverse oscillations. We illustrate how these sPLLs can decode textures using the spectral features elicited in peripheral nerves. Leveraging our model's frequency decomposition abilities, we improve state-of-the-art performances on a Multifrequency Spike Train (MST) dataset. This work offers valuable insights into neural processing and presents a practical framework for enhancing artificial neural network capabilities in complex pattern recognition tasks.
|
q-bio/0403030
|
Max Shpak
|
Max Shpak, Peter Stadler, Gunter Wagner, and Lee Altenberg
|
Simon-Ando Decomposability and Fitness Landscapes
| null | null | null | null |
q-bio.PE
| null |
In this paper, we investigate fitness landscapes (under point mutation and
recombination) from the standpoint of whether the induced evolutionary dynamics
have a "fast-slow" time scale associated with the differences in relaxation
time between local quasi-equilibria and the global equilibrium. This dynamical
behavior has been formally described in the econometrics literature in terms of
the spectral properties of the appropriate operator matrices by Simon and Ando
(1961), and we use the relations they derive to ask which fitness functions and
mutation/recombination operators satisfy these properties. It turns out that
quite a wide range of landscapes satisfy the condition (at least trivially)
under point mutation given a sufficiently low mutation rate, while the property
appears to be difficult to satisfy under genetic recombination. In spite of the
fact that Simon-Ando decomposability can be realized over a fairly wide range of
parameters, it imposes a number of restrictions on which landscape
partitionings are possible. For these reasons, the Simon-Ando formalism doesn't
appear to be applicable to other forms of decomposition and aggregation of
variables that are important in evolutionary systems.
|
[
{
"created": "Mon, 22 Mar 2004 18:52:58 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Shpak",
"Max",
""
],
[
"Stadler",
"Peter",
""
],
[
"Wagner",
"Gunter",
""
],
[
"Altenberg",
"Lee",
""
]
] |
In this paper, we investigate fitness landscapes (under point mutation and recombination) from the standpoint of whether the induced evolutionary dynamics have a "fast-slow" time scale associated with the differences in relaxation time between local quasi-equilibria and the global equilibrium. This dynamical behavior has been formally described in the econometrics literature in terms of the spectral properties of the appropriate operator matrices by Simon and Ando (1961), and we use the relations they derive to ask which fitness functions and mutation/recombination operators satisfy these properties. It turns out that quite a wide range of landscapes satisfy the condition (at least trivially) under point mutation given a sufficiently low mutation rate, while the property appears to be difficult to satisfy under genetic recombination. In spite of the fact that Simon-Ando decomposability can be realized over a fairly wide range of parameters, it imposes a number of restrictions on which landscape partitionings are possible. For these reasons, the Simon-Ando formalism doesn't appear to be applicable to other forms of decomposition and aggregation of variables that are important in evolutionary systems.
|
q-bio/0601045
|
Edoardo Milotti
|
Roberto Chignola, Chiara Dalla Pellegrina, Alessio Del Fabbro, Edoardo
Milotti
|
Thresholds, long delays and stability from generalized allosteric effect
in protein networks
|
5 figures, submitted to Physica A
| null |
10.1016/j.physa.2006.03.044
| null |
q-bio.CB q-bio.SC
| null |
Post-transductional modifications tune the functions of proteins and regulate
the collective dynamics of biochemical networks that determine how cells
respond to environmental signals. For example, protein phosphorylation and
nitrosylation are well-known to play a pivotal role in the intracellular
transduction of activation and death signals. A protein can have multiple sites
where chemical groups can reversibly attach in processes such as
phosphorylation or nitrosylation. A microscopic description of these processes
must take into account the intrinsic probabilistic nature of the underlying
reactions. We apply combinatorial considerations to standard enzyme kinetics
and in this way we extend to the dynamic regime a simplified version of the
traditional models on the allosteric regulation of protein functions. We link a
generic modification chain to a downstream Michaelis-Menten enzymatic reaction
and we demonstrate numerically that this accounts both for thresholds and long
time delays in the conversion of the substrate by the enzyme. The proposed
mechanism is stable and robust and the higher the number of modification sites,
the greater the stability. We show that a high number of modification sites
converts a fast reaction into a slow process, and the slowing down depends on
the number of sites and may span many orders of magnitude; in this way
multisite modification of proteins stands out as a general mechanism that
allows the transfer of information from the very short time scales of enzyme
reactions (milliseconds) to the long time scale of cell response (hours).
|
[
{
"created": "Thu, 26 Jan 2006 18:34:57 GMT",
"version": "v1"
}
] |
2009-11-13
|
[
[
"Chignola",
"Roberto",
""
],
[
"Pellegrina",
"Chiara Dalla",
""
],
[
"Del Fabbro",
"Alessio",
""
],
[
"Milotti",
"Edoardo",
""
]
] |
Post-transductional modifications tune the functions of proteins and regulate the collective dynamics of biochemical networks that determine how cells respond to environmental signals. For example, protein phosphorylation and nitrosylation are well-known to play a pivotal role in the intracellular transduction of activation and death signals. A protein can have multiple sites where chemical groups can reversibly attach in processes such as phosphorylation or nitrosylation. A microscopic description of these processes must take into account the intrinsic probabilistic nature of the underlying reactions. We apply combinatorial considerations to standard enzyme kinetics and in this way we extend to the dynamic regime a simplified version of the traditional models on the allosteric regulation of protein functions. We link a generic modification chain to a downstream Michaelis-Menten enzymatic reaction and we demonstrate numerically that this accounts both for thresholds and long time delays in the conversion of the substrate by the enzyme. The proposed mechanism is stable and robust and the higher the number of modification sites, the greater the stability. We show that a high number of modification sites converts a fast reaction into a slow process, and the slowing down depends on the number of sites and may span many orders of magnitude; in this way multisite modification of proteins stands out as a general mechanism that allows the transfer of information from the very short time scales of enzyme reactions (milliseconds) to the long time scale of cell response (hours).
|
1009.5810
|
Yvinec Romain M.
|
Michael C. Mackey and Marta Tyran-Kami\'nska and Romain Yvinec
|
Molecular Distributions in Gene Regulatory Dynamics
|
14 pages, 12 figures. Conferences: "2010 Annual Meeting of The
Society of Mathematical Biology", Rio de Janeiro (Brazil), 24-29/07/2010.
"First International workshop on Differential and Integral Equations with
Applications in Biology and Medicine", Aegean University, Karlovassi, Samos
island (Greece), 6-10/09/2010
|
Journal of Theoretical Biology 274 (2011), 84-96
|
10.1016/j.jtbi.2011.01.020
| null |
q-bio.MN math.CA math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We show how one may analytically compute the stationary density of the
distribution of molecular constituents in populations of cells in the presence
of noise arising from either bursting transcription or translation, or noise in
degradation rates arising from low numbers of molecules. We have compared our
results with an analysis of the same model systems (either inducible or
repressible operons) in the absence of any stochastic effects, and shown the
correspondence between behaviour in the deterministic system and the stochastic
analogs. We have identified key dimensionless parameters that control the
appearance of one or two steady states in the deterministic case, or unimodal
and bimodal densities in the stochastic systems, and detailed the analytic
requirements for the occurrence of different behaviours. This approach
provides, in some situations, an alternative to computationally intensive
stochastic simulations. Our results indicate that, within the context of the
simple models we have examined, bursting and degradation noise cannot be
distinguished analytically when present alone.
|
[
{
"created": "Wed, 29 Sep 2010 09:07:44 GMT",
"version": "v1"
}
] |
2015-10-15
|
[
[
"Mackey",
"Michael C.",
""
],
[
"Tyran-Kamińska",
"Marta",
""
],
[
"Yvinec",
"Romain",
""
]
] |
We show how one may analytically compute the stationary density of the distribution of molecular constituents in populations of cells in the presence of noise arising from either bursting transcription or translation, or noise in degradation rates arising from low numbers of molecules. We have compared our results with an analysis of the same model systems (either inducible or repressible operons) in the absence of any stochastic effects, and shown the correspondence between behaviour in the deterministic system and the stochastic analogs. We have identified key dimensionless parameters that control the appearance of one or two steady states in the deterministic case, or unimodal and bimodal densities in the stochastic systems, and detailed the analytic requirements for the occurrence of different behaviours. This approach provides, in some situations, an alternative to computationally intensive stochastic simulations. Our results indicate that, within the context of the simple models we have examined, bursting and degradation noise cannot be distinguished analytically when present alone.
|
1407.3590
|
Roman Zubarev A
|
Xueshu Xie, Roman A. Zubarev
|
Effects of low-level deuterium enrichment on bacterial growth
|
Accepted to PLoS One. Press embargo applies
|
Xie X, Zubarev RA (2014) Effects of Low-Level Deuterium Enrichment
on Bacterial Growth. PLoS ONE 9(7): e102071
|
10.1371/journal.pone.0102071
| null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Using very precise (up to 0.05%) measurements of the growth parameters for
the bacterium E. coli grown on minimal media, we aimed to determine the lowest
deuterium concentration at which the adverse effects that are prominent at
higher enrichments start to become noticeable. Such a threshold was found at
0.5% D, a surprisingly high value, while the ultralow deuterium concentrations
(up to 0.25% D) showed signs of the opposite trend. Bacterial adaptation for
400 generations in an isotopically different environment confirmed a preference for
ultralow (up to 0.25% D) enrichment. This effect appears to be similar to those
described in sporadic but multiple earlier reports. Possible explanations
include hormesis and isotopic resonance phenomena, with the latter explanation
being favored.
|
[
{
"created": "Mon, 14 Jul 2014 10:16:36 GMT",
"version": "v1"
}
] |
2015-01-06
|
[
[
"Xie",
"Xueshu",
""
],
[
"Zubarev",
"Roman A.",
""
]
] |
Using very precise (up to 0.05%) measurements of the growth parameters for the bacterium E. coli grown on minimal media, we aimed to determine the lowest deuterium concentration at which the adverse effects that are prominent at higher enrichments start to become noticeable. Such a threshold was found at 0.5% D, a surprisingly high value, while the ultralow deuterium concentrations (up to 0.25% D) showed signs of the opposite trend. Bacterial adaptation for 400 generations in an isotopically different environment confirmed a preference for ultralow (up to 0.25% D) enrichment. This effect appears to be similar to those described in sporadic but multiple earlier reports. Possible explanations include hormesis and isotopic resonance phenomena, with the latter explanation being favored.
|
1110.5865
|
Eric Werner
|
Eric Werner
|
Cancer Networks: A general theoretical and computational framework for
understanding cancer
|
Key words: Cancer networks, cene, cenome, developmental control
networks, stem cells, stem cell networks, cancer stem cells, stochastic stem
cell networks, metastases hierarchy, linear networks, exponential networks,
geometric cancer networks, cell signaling, cancer cell communication
networks, systems biology, computational biology, multiagent systems,
multicellular modeling, cancer modeling
| null | null | null |
q-bio.MN cs.CE cs.MA q-bio.CB q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a general computational theory of cancer and its developmental
dynamics. The theory is based on a theory of the architecture and function of
developmental control networks which guide the formation of multicellular
organisms. Cancer networks are special cases of developmental control networks.
Cancer results from transformations of normal developmental networks. Our
theory generates a natural classification of all possible cancers based on
their network architecture. Each cancer network has a unique topology and
semantics and developmental dynamics that result in distinct clinical tumor
phenotypes. We apply this new theory with a series of proof of concept cases
for all the basic cancer types. These cases have been computationally modeled,
their behavior simulated and mathematically described using a multicellular
systems biology approach. There are fascinating correspondences between the
dynamic developmental phenotype of computationally modeled {\em in silico}
cancers and natural {\em in vivo} cancers. The theory lays the foundation for a
new research paradigm for understanding and investigating cancer. The theory of
cancer networks implies that new diagnostic methods and new treatments to cure
cancer will become possible.
|
[
{
"created": "Wed, 26 Oct 2011 18:07:37 GMT",
"version": "v1"
}
] |
2011-11-16
|
[
[
"Werner",
"Eric",
""
]
] |
We present a general computational theory of cancer and its developmental dynamics. The theory is based on a theory of the architecture and function of developmental control networks which guide the formation of multicellular organisms. Cancer networks are special cases of developmental control networks. Cancer results from transformations of normal developmental networks. Our theory generates a natural classification of all possible cancers based on their network architecture. Each cancer network has a unique topology and semantics and developmental dynamics that result in distinct clinical tumor phenotypes. We apply this new theory with a series of proof of concept cases for all the basic cancer types. These cases have been computationally modeled, their behavior simulated and mathematically described using a multicellular systems biology approach. There are fascinating correspondences between the dynamic developmental phenotype of computationally modeled {\em in silico} cancers and natural {\em in vivo} cancers. The theory lays the foundation for a new research paradigm for understanding and investigating cancer. The theory of cancer networks implies that new diagnostic methods and new treatments to cure cancer will become possible.
|
1405.7856
|
Jose Vilar
|
Jose M. G. Vilar
|
Entropy of leukemia on multidimensional morphological and molecular
landscapes
|
15 pages, 4 figures, and supplementary information
|
Phys. Rev. X 4, 021038 (2014)
|
10.1103/PhysRevX.4.021038
| null |
q-bio.QM cond-mat.stat-mech physics.med-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Leukemia epitomizes the class of highly complex diseases that new
technologies aim to tackle by using large sets of single-cell level
information. Achieving such a goal depends critically not only on experimental
techniques but also on approaches to interpret the data. A most pressing issue
is to identify the salient quantitative features of the disease from the
resulting massive amounts of information. Here, I show that the entropies of
cell-population distributions on specific multidimensional molecular and
morphological landscapes provide a set of measures for the precise
characterization of normal and pathological states, such as those corresponding
to healthy individuals and acute myeloid leukemia (AML) patients. I provide a
systematic procedure to identify the specific landscapes and illustrate how,
applied to cell samples from peripheral blood and bone marrow aspirates, this
characterization accurately diagnoses AML from just flow cytometry data. The
methodology can generally be applied to other types of cell-populations and
establishes a straightforward link between the traditional statistical
thermodynamics methodology and biomedical applications.
|
[
{
"created": "Fri, 30 May 2014 13:26:11 GMT",
"version": "v1"
}
] |
2014-06-02
|
[
[
"Vilar",
"Jose M. G.",
""
]
] |
Leukemia epitomizes the class of highly complex diseases that new technologies aim to tackle by using large sets of single-cell level information. Achieving such a goal depends critically not only on experimental techniques but also on approaches to interpret the data. A most pressing issue is to identify the salient quantitative features of the disease from the resulting massive amounts of information. Here, I show that the entropies of cell-population distributions on specific multidimensional molecular and morphological landscapes provide a set of measures for the precise characterization of normal and pathological states, such as those corresponding to healthy individuals and acute myeloid leukemia (AML) patients. I provide a systematic procedure to identify the specific landscapes and illustrate how, applied to cell samples from peripheral blood and bone marrow aspirates, this characterization accurately diagnoses AML from just flow cytometry data. The methodology can generally be applied to other types of cell-populations and establishes a straightforward link between the traditional statistical thermodynamics methodology and biomedical applications.
|
1507.08245
|
Fran\c{c}ois Blanquart
|
Fran\c{c}ois Blanquart, Thomas Bataillon
|
Epistasis and the structure of fitness landscapes: are experimental
fitness landscapes compatible with Fisher's Geometric model?
|
Added an alternative ABC algorithm based on a set of summary
statistics
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The fitness landscape defines the relationship between genotypes and fitness
in a given environment, and underlies fundamental quantities such as the
distribution of selection coefficient, or the magnitude and type of epistasis.
A better understanding of variation of landscape structure across species and
environments is thus necessary to understand and predict how populations will
adapt. An increasing number of experiments investigate the properties of
fitness landscapes by identifying mutations, constructing genotypes with
combinations of these mutations, and measuring the fitness of these genotypes.
Yet these empirical landscapes represent a very small sample of the vast space
of all possible genotypes, and this sample is often biased by the protocol used
to identify mutations. Here we develop a rigorous statistical framework based
on Approximate Bayesian Computation to address these concerns, and use this
flexible framework to fit a broad class of phenotypic fitness models (including
Fisher's model) to 26 empirical landscapes representing 9 diverse biological
systems. In spite of uncertainty due to the small size of most published
empirical landscapes, the inferred landscapes have similar structure in similar
biological systems. Surprisingly, goodness of fit tests reveal that this class
of phenotypic models, which has been successful so far in interpreting
experimental data, is a plausible model in only 3 out of 9 biological systems.
More precisely, although Fisher's model was able to explain several statistical
properties of the landscapes - including mean and standard deviation of
selection and epistasis coefficients -, it was often unable to explain the full
structure of fitness landscapes.
|
[
{
"created": "Wed, 29 Jul 2015 18:08:11 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Sep 2015 17:08:01 GMT",
"version": "v2"
},
{
"created": "Tue, 17 May 2016 14:44:56 GMT",
"version": "v3"
}
] |
2016-05-18
|
[
[
"Blanquart",
"François",
""
],
[
"Bataillon",
"Thomas",
""
]
] |
The fitness landscape defines the relationship between genotypes and fitness in a given environment, and underlies fundamental quantities such as the distribution of selection coefficient, or the magnitude and type of epistasis. A better understanding of variation of landscape structure across species and environments is thus necessary to understand and predict how populations will adapt. An increasing number of experiments investigate the properties of fitness landscapes by identifying mutations, constructing genotypes with combinations of these mutations, and measuring the fitness of these genotypes. Yet these empirical landscapes represent a very small sample of the vast space of all possible genotypes, and this sample is often biased by the protocol used to identify mutations. Here we develop a rigorous statistical framework based on Approximate Bayesian Computation to address these concerns, and use this flexible framework to fit a broad class of phenotypic fitness models (including Fisher's model) to 26 empirical landscapes representing 9 diverse biological systems. In spite of uncertainty due to the small size of most published empirical landscapes, the inferred landscapes have similar structure in similar biological systems. Surprisingly, goodness of fit tests reveal that this class of phenotypic models, which has been successful so far in interpreting experimental data, is a plausible model in only 3 out of 9 biological systems. More precisely, although Fisher's model was able to explain several statistical properties of the landscapes - including mean and standard deviation of selection and epistasis coefficients -, it was often unable to explain the full structure of fitness landscapes.
|
1506.06936
|
Pascal Grange
|
Pascal Grange
|
Computational neuroanatomy: mapping cell-type densities in the mouse
brain, simulations from the Allen Brain Atlas
|
Prepared for the International Conference on Mathematical Modeling in
Physical Sciences, 5th-8th June 2015, Mykonos Island, Greece
| null |
10.1088/1742-6596/633/1/012070
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Allen Brain Atlas (ABA) of the adult mouse consists of digitized
expression profiles of thousands of genes in the mouse brain, co-registered to
a common three-dimensional template (the Allen Reference Atlas). This
brain-wide, genome-wide data set has triggered a renaissance in neuroanatomy.
Its voxelized version (with cubic voxels of side 200 microns) can be analyzed
on a desktop computer using MATLAB. On the other hand, brain cells exhibit a
great phenotypic diversity (in terms of size, shape and electrophysiological
activity), which has inspired the names of some well-studied cell types, such
as granule cells and medium spiny neurons. However, no exhaustive taxonomy of
brain cells is available. A genetic classification of brain cells is under way,
and some cell types have been characterized by their transcriptome profiles.
However, given a cell type characterized by its transcriptome, it is not clear
where else in the brain similar cells can be found. The ABA can be used to
solve this region-specificity problem in a data-driven way: rewriting the
brain-wide expression profiles of all genes in the atlas as a sum of
cell-type-specific transcriptome profiles is equivalent to solving a quadratic
optimization problem at each voxel in the brain. However, the estimated
brain-wide densities of 64 cell types published recently were based on one
series of co-registered coronal in situ hybridization (ISH) images per gene,
whereas the online ABA contains several image series per gene, including
sagittal ones. In the presented work, we simulate the variability of cell-type
densities in a Monte Carlo way by repeatedly drawing a random image series for
each gene and solving optimization problems. This yields error bars on the
region-specificity of cell types.
|
[
{
"created": "Tue, 23 Jun 2015 10:35:45 GMT",
"version": "v1"
}
] |
2015-10-28
|
[
[
"Grange",
"Pascal",
""
]
] |
The Allen Brain Atlas (ABA) of the adult mouse consists of digitized expression profiles of thousands of genes in the mouse brain, co-registered to a common three-dimensional template (the Allen Reference Atlas). This brain-wide, genome-wide data set has triggered a renaissance in neuroanatomy. Its voxelized version (with cubic voxels of side 200 microns) can be analyzed on a desktop computer using MATLAB. On the other hand, brain cells exhibit a great phenotypic diversity (in terms of size, shape and electrophysiological activity), which has inspired the names of some well-studied cell types, such as granule cells and medium spiny neurons. However, no exhaustive taxonomy of brain cells is available. A genetic classification of brain cells is under way, and some cell types have been characterized by their transcriptome profiles. However, given a cell type characterized by its transcriptome, it is not clear where else in the brain similar cells can be found. The ABA can be used to solve this region-specificity problem in a data-driven way: rewriting the brain-wide expression profiles of all genes in the atlas as a sum of cell-type-specific transcriptome profiles is equivalent to solving a quadratic optimization problem at each voxel in the brain. However, the estimated brain-wide densities of 64 cell types published recently were based on one series of co-registered coronal in situ hybridization (ISH) images per gene, whereas the online ABA contains several image series per gene, including sagittal ones. In the presented work, we simulate the variability of cell-type densities in a Monte Carlo way by repeatedly drawing a random image series for each gene and solving optimization problems. This yields error bars on the region-specificity of cell types.
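The per-voxel decomposition described in this abstract (expression as a nonnegative sum of cell-type profiles, resampled over image series) can be sketched as repeated nonnegative least squares. The function name, array shapes, and sampling scheme below are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import nnls

def celltype_density_mc(profiles, voxel_series, n_draws=100, rng=None):
    """Monte Carlo error bars on per-voxel cell-type densities.

    profiles     : (n_genes, n_types) array of cell-type-specific
                   transcriptome profiles.
    voxel_series : list with one entry per gene; each entry is a 1-D array of
                   expression values from that gene's available image series
                   at this voxel.
    For each draw, pick one random series per gene and solve the nonnegative
    least-squares problem min ||profiles @ x - expr||^2 with x >= 0, i.e. the
    quadratic optimization problem mentioned in the abstract.
    """
    rng = np.random.default_rng(rng)
    densities = np.empty((n_draws, profiles.shape[1]))
    for d in range(n_draws):
        # One random image series per gene for this Monte Carlo draw.
        expr = np.array([rng.choice(series) for series in voxel_series])
        densities[d], _ = nnls(profiles, expr)
    # Mean density and its spread across draws (the "error bars").
    return densities.mean(axis=0), densities.std(axis=0)
```

Running this independently at every voxel would yield the brain-wide density maps with uncertainty described above.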
|
1810.04637
|
Fakrul Islam Tushar
|
Md. Kamrul Hasan, Fakrul Islam Tushar
|
Quantification of Trabeculae Inside the Heart from MRI Using Fractal
Analysis
| null | null | null | null |
q-bio.QM cs.CV physics.med-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Left ventricular non-compaction (LVNC) is a rare cardiomyopathy (CMP) that
should be considered as a possible diagnosis because of its potential
complications, which are heart failure, ventricular arrhythmias, and embolic
events. For analyzing cardiac functionality, extracting information from the
left ventricle (LV) is already a broad field of medical imaging. Different
algorithms and strategies, both semi-automated and automated, have already
been developed to extract useful information from such a critical structure
of the heart. Trabeculae in the heart undergo changes from a spongy to a
solid state; failure of this process results in left ventricular
non-compaction. In this project, we demonstrate fractal dimension (FD)
analysis and manual segmentation of Magnetic Resonance Imaging (MRI) of the
heart to quantify the amount of trabeculae inside the heart. A greater
fractal dimension indicates a more complex trabecular pattern in the heart.
|
[
{
"created": "Sun, 30 Sep 2018 15:05:48 GMT",
"version": "v1"
},
{
"created": "Sun, 14 Oct 2018 17:00:09 GMT",
"version": "v2"
}
] |
2018-10-16
|
[
[
"Hasan",
"Md. Kamrul",
""
],
[
"Tushar",
"Fakrul Islam",
""
]
] |
Left ventricular non-compaction (LVNC) is a rare cardiomyopathy (CMP) that should be considered as a possible diagnosis because of its potential complications, which are heart failure, ventricular arrhythmias, and embolic events. For analyzing cardiac functionality, extracting information from the left ventricle (LV) is already a broad field of medical imaging. Different algorithms and strategies, both semi-automated and automated, have already been developed to extract useful information from such a critical structure of the heart. Trabeculae in the heart undergo changes from a spongy to a solid state; failure of this process results in left ventricular non-compaction. In this project, we demonstrate fractal dimension (FD) analysis and manual segmentation of Magnetic Resonance Imaging (MRI) of the heart to quantify the amount of trabeculae inside the heart. A greater fractal dimension indicates a more complex trabecular pattern in the heart.
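The fractal-dimension measure referred to in this abstract is commonly computed by box counting on a binary segmentation mask. The sketch below is a minimal box-counting estimator; the function name and box sizes are assumptions for illustration, not details from the project:

```python
import numpy as np

def box_counting_dimension(mask, sizes=(2, 4, 8, 16, 32)):
    """Estimate the fractal dimension of a binary 2-D mask by box counting.

    For each box size s, count the s-by-s boxes containing at least one
    foreground pixel, then fit log(count) against log(1/s); the slope of the
    fit is the box-counting (fractal) dimension.
    """
    mask = np.asarray(mask, dtype=bool)
    counts = []
    for s in sizes:
        # Trim so the image tiles exactly into s-by-s boxes.
        h, w = (mask.shape[0] // s) * s, (mask.shape[1] // s) * s
        trimmed = mask[:h, :w]
        # Reduce each s-by-s box to a single "occupied?" flag.
        boxes = trimmed.reshape(h // s, s, w // s, s).any(axis=(1, 3))
        counts.append(boxes.sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope
```

A filled region yields a dimension near 2 and a thin curve near 1, so a more intricate trabecular boundary produces an intermediate, larger FD, matching the interpretation given above.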
|
2308.13870
|
Tahereh Toosi
|
Tahereh Toosi and Elias B. Issa
|
Brain-like representational straightening of natural movies in robust
feedforward neural networks
|
21 pages, 15 figures, published in ICLR 2023
|
International Conference on Learning Representations (ICLR), 2023
| null | null |
q-bio.NC cs.CV cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Representational straightening refers to a decrease in curvature of visual
feature representations of a sequence of frames taken from natural movies.
Prior work established straightening in neural representations of the primate
primary visual cortex (V1) and perceptual straightening in human behavior as a
hallmark of biological vision in contrast to artificial feedforward neural
networks which did not demonstrate this phenomenon as they were not explicitly
optimized to produce temporally predictable movie representations. Here, we
show robustness to noise in the input image can produce representational
straightening in feedforward neural networks. Both adversarial training (AT)
and base classifiers for Random Smoothing (RS) induced remarkably straightened
feature codes. Demonstrating their utility within the domain of natural movies,
these codes could be inverted to generate intervening movie frames by linear
interpolation in the feature space even though they were not trained on these
trajectories. Demonstrating their biological utility, we found that AT and RS
training improved predictions of neural data in primate V1 over baseline models
providing a parsimonious, bio-plausible mechanism -- noise in the sensory input
stages -- for generating representations in early visual cortex. Finally, we
compared the geometric properties of frame representations in these networks to
better understand how they produced representations that mimicked the
straightening phenomenon from biology. Overall, this work elucidating emergent
properties of robust neural networks demonstrates that it is not necessary to
utilize predictive objectives or train directly on natural movie statistics to
achieve models supporting straightened movie representations similar to human
perception that also predict V1 neural responses.
|
[
{
"created": "Sat, 26 Aug 2023 13:04:36 GMT",
"version": "v1"
}
] |
2023-08-29
|
[
[
"Toosi",
"Tahereh",
""
],
[
"Issa",
"Elias B.",
""
]
] |
Representational straightening refers to a decrease in curvature of visual feature representations of a sequence of frames taken from natural movies. Prior work established straightening in neural representations of the primate primary visual cortex (V1) and perceptual straightening in human behavior as a hallmark of biological vision in contrast to artificial feedforward neural networks which did not demonstrate this phenomenon as they were not explicitly optimized to produce temporally predictable movie representations. Here, we show robustness to noise in the input image can produce representational straightening in feedforward neural networks. Both adversarial training (AT) and base classifiers for Random Smoothing (RS) induced remarkably straightened feature codes. Demonstrating their utility within the domain of natural movies, these codes could be inverted to generate intervening movie frames by linear interpolation in the feature space even though they were not trained on these trajectories. Demonstrating their biological utility, we found that AT and RS training improved predictions of neural data in primate V1 over baseline models providing a parsimonious, bio-plausible mechanism -- noise in the sensory input stages -- for generating representations in early visual cortex. Finally, we compared the geometric properties of frame representations in these networks to better understand how they produced representations that mimicked the straightening phenomenon from biology. Overall, this work elucidating emergent properties of robust neural networks demonstrates that it is not necessary to utilize predictive objectives or train directly on natural movie statistics to achieve models supporting straightened movie representations similar to human perception that also predict V1 neural responses.
|
q-bio/0609046
|
David Steinsaltz
|
Steven N. Evans and David Steinsaltz and Kenneth W. Wachter
|
A mutation-selection model for general genotypes with recombination
|
133 pages; 4 figures. Substantially revised. Main convergence result
in chapters 7 and 8 largely rewritten. New discussion of recombination in
chapter 4, with pictures. Results improved: bounded fitness costs replaced by
Lipschitz; more general initial states; some results for mutation intensities
with infinite mass. Added index and glossary of notation. Rewrote some
notation for consistency
| null | null | null |
q-bio.PE math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate a continuous time, probability measure-valued dynamical system
that describes the process of mutation-selection balance in a context where the
population is infinite, there may be infinitely many loci, and there are weak
assumptions on selective costs. Our model arises when we incorporate very
general recombination mechanisms into a previous model of mutation and
selection from Steinsaltz, Evans and Wachter (2005) and take the relative
strength of mutation and selection to be sufficiently small. The resulting
dynamical system is a flow of measures on the space of loci. Each such measure
is the intensity measure of a Poisson random measure on the space of loci: the
points of a realization of the random measure record the set of loci at which
the genotype of a uniformly chosen individual differs from a reference wild
type due to an accumulation of ancestral mutations. Our motivation for working
in such a general setting is to provide a basis for understanding
mutation-driven changes in age-specific demographic schedules that arise from
the complex interaction of many genes, and hence to develop a framework for
understanding the evolution of aging.
We establish the existence and uniqueness of the dynamical system, provide
conditions for the existence and stability of equilibrium states, and prove
that our continuous-time dynamical system is the limit of a sequence of
discrete-time infinite population mutation-selection-recombination models in
the standard asymptotic regime where selection and mutation are weak relative
to recombination and both scale at the same infinitesimal rate in the limit.
|
[
{
"created": "Tue, 26 Sep 2006 20:48:58 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Oct 2009 14:08:41 GMT",
"version": "v2"
},
{
"created": "Mon, 19 Sep 2011 11:17:36 GMT",
"version": "v3"
}
] |
2011-09-20
|
[
[
"Evans",
"Steven N.",
""
],
[
"Steinsaltz",
"David",
""
],
[
"Wachter",
"Kenneth W.",
""
]
] |
We investigate a continuous time, probability measure-valued dynamical system that describes the process of mutation-selection balance in a context where the population is infinite, there may be infinitely many loci, and there are weak assumptions on selective costs. Our model arises when we incorporate very general recombination mechanisms into a previous model of mutation and selection from Steinsaltz, Evans and Wachter (2005) and take the relative strength of mutation and selection to be sufficiently small. The resulting dynamical system is a flow of measures on the space of loci. Each such measure is the intensity measure of a Poisson random measure on the space of loci: the points of a realization of the random measure record the set of loci at which the genotype of a uniformly chosen individual differs from a reference wild type due to an accumulation of ancestral mutations. Our motivation for working in such a general setting is to provide a basis for understanding mutation-driven changes in age-specific demographic schedules that arise from the complex interaction of many genes, and hence to develop a framework for understanding the evolution of aging. We establish the existence and uniqueness of the dynamical system, provide conditions for the existence and stability of equilibrium states, and prove that our continuous-time dynamical system is the limit of a sequence of discrete-time infinite population mutation-selection-recombination models in the standard asymptotic regime where selection and mutation are weak relative to recombination and both scale at the same infinitesimal rate in the limit.
|
2111.09469
|
Steven Massey
|
Steven E Massey
|
SARS-CoV-2's closest relative, RaTG13, was generated from a bat
transcriptome not a fecal swab: implications for the origin of COVID-19
| null | null | null | null |
q-bio.GN
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
RaTG13 is the coronavirus genome phylogenetically closest to SARS-CoV-2;
consequently, understanding its provenance is of key importance to
understanding the origin of the COVID-19 pandemic. The RaTG13 NGS dataset is
attributed to a fecal swab from the intermediate horseshoe bat Rhinolophus
affinis. However, sequence analysis reveals that this is unlikely. Metagenomic
analysis using Metaxa2 shows that only 10.3 % of small subunit (SSU) rRNA
sequences in the dataset are bacterial, inconsistent with a fecal sample, which
are typically dominated by bacterial sequences. In addition, the bacterial taxa
present in the sample are inconsistent with fecal material. Assembly of
mitochondrial SSU rRNA sequences in the dataset produces a contig 98.7 %
identical to R.affinis mitochondrial SSU rRNA, indicating that the sample was
generated from this or a closely related species. 87.5 % of the NGS reads map
to the Rhinolophus ferrumequinum genome, the closest bat genome to R.affinis
available. In the annotated genome assembly, 62.2 % of mapped reads map to
protein coding genes. These results clearly demonstrate that the dataset
represents a Rhinolophus sp. transcriptome, and not a fecal swab sample.
Overall, the data show that the RaTG13 dataset was generated by the Wuhan
Institute of Virology (WIV) from a transcriptome derived from Rhinolophus sp.
tissue or cell line, indicating that RaTG13 was in live culture. This raises
the question of whether the WIV was culturing additional unreported
coronaviruses closely related to SARS-CoV-2 prior to the pandemic. The
implications for the origin of the COVID-19 pandemic are discussed.
|
[
{
"created": "Thu, 18 Nov 2021 01:30:19 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Nov 2021 02:38:47 GMT",
"version": "v2"
}
] |
2021-11-23
|
[
[
"Massey",
"Steven E",
""
]
] |
RaTG13 is the coronavirus genome phylogenetically closest to SARS-CoV-2; consequently, understanding its provenance is of key importance to understanding the origin of the COVID-19 pandemic. The RaTG13 NGS dataset is attributed to a fecal swab from the intermediate horseshoe bat Rhinolophus affinis. However, sequence analysis reveals that this is unlikely. Metagenomic analysis using Metaxa2 shows that only 10.3 % of small subunit (SSU) rRNA sequences in the dataset are bacterial, inconsistent with a fecal sample, which are typically dominated by bacterial sequences. In addition, the bacterial taxa present in the sample are inconsistent with fecal material. Assembly of mitochondrial SSU rRNA sequences in the dataset produces a contig 98.7 % identical to R.affinis mitochondrial SSU rRNA, indicating that the sample was generated from this or a closely related species. 87.5 % of the NGS reads map to the Rhinolophus ferrumequinum genome, the closest bat genome to R.affinis available. In the annotated genome assembly, 62.2 % of mapped reads map to protein coding genes. These results clearly demonstrate that the dataset represents a Rhinolophus sp. transcriptome, and not a fecal swab sample. Overall, the data show that the RaTG13 dataset was generated by the Wuhan Institute of Virology (WIV) from a transcriptome derived from Rhinolophus sp. tissue or cell line, indicating that RaTG13 was in live culture. This raises the question of whether the WIV was culturing additional unreported coronaviruses closely related to SARS-CoV-2 prior to the pandemic. The implications for the origin of the COVID-19 pandemic are discussed.
|
1304.1455
|
Jakob Ruess
|
Jakob Ruess, Andreas Milias-Argeitis and John Lygeros
|
Designing Experiments to Understand the Variability in Biochemical
Reaction Networks
| null |
J R Soc Interface 10:20130588 (2013)
|
10.1098/rsif.2013.0588
| null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Exploiting the information provided by the molecular noise of a biological
process has proven to be valuable in extracting knowledge about the underlying
kinetic parameters and sources of variability from single cell measurements.
However, quantifying this additional information a priori, to decide whether a
single cell experiment might be beneficial, is currently only possible in very
simple systems where either the chemical master equation is computationally
tractable or a Gaussian approximation is appropriate. Here we show how the
information provided by distribution measurements can be approximated from the
first four moments of the underlying process. The derived formulas are
generally valid for any stochastic kinetic model including models that comprise
both intrinsic and extrinsic noise. This allows us to propose an optimal
experimental design framework for heterogeneous cell populations which we
employ to compare the utility of dual reporter and perturbation experiments for
separating extrinsic and intrinsic noise in a simple model of gene expression.
Subsequently, we compare the information content of different experiments which
have been performed in an engineered light-switch gene expression system in
yeast and show that well chosen gene induction patterns may allow one to
identify features of the system which remain hidden in unplanned experiments.
|
[
{
"created": "Thu, 4 Apr 2013 18:12:40 GMT",
"version": "v1"
}
] |
2013-08-30
|
[
[
"Ruess",
"Jakob",
""
],
[
"Milias-Argeitis",
"Andreas",
""
],
[
"Lygeros",
"John",
""
]
] |
Exploiting the information provided by the molecular noise of a biological process has proven to be valuable in extracting knowledge about the underlying kinetic parameters and sources of variability from single cell measurements. However, quantifying this additional information a priori, to decide whether a single cell experiment might be beneficial, is currently only possible in very simple systems where either the chemical master equation is computationally tractable or a Gaussian approximation is appropriate. Here we show how the information provided by distribution measurements can be approximated from the first four moments of the underlying process. The derived formulas are generally valid for any stochastic kinetic model including models that comprise both intrinsic and extrinsic noise. This allows us to propose an optimal experimental design framework for heterogeneous cell populations which we employ to compare the utility of dual reporter and perturbation experiments for separating extrinsic and intrinsic noise in a simple model of gene expression. Subsequently, we compare the information content of different experiments which have been performed in an engineered light-switch gene expression system in yeast and show that well chosen gene induction patterns may allow one to identify features of the system which remain hidden in unplanned experiments.
|
2108.13908
|
Mattia Angeli
|
Mattia Angeli, Georgios Neofotistos, Marios Mattheakis and Efthimios
Kaxiras
|
Modeling the effect of the vaccination campaign on the Covid-19 pandemic
| null | null |
10.1016/j.chaos.2021.111621
| null |
q-bio.PE cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Population-wide vaccination is critical for containing the SARS-CoV-2
(Covid-19) pandemic when combined with restrictive and prevention measures. In
this study, we introduce SAIVR, a mathematical model able to forecast the
Covid-19 epidemic evolution during the vaccination campaign. SAIVR extends the
widely used Susceptible-Infectious-Removed (SIR) model by considering the
Asymptomatic (A) and Vaccinated (V) compartments. The model contains several
parameters and initial conditions that are estimated by employing a
semi-supervised machine learning procedure. After training an unsupervised
neural network to solve the SAIVR differential equations, a supervised
framework then estimates the optimal conditions and parameters that best fit
recent infectious curves of 27 countries. Instructed by these results, we
performed an extensive study on the temporal evolution of the pandemic under
varying values of roll-out daily rates, vaccine efficacy, and a broad range of
societal vaccine hesitancy/denial levels. The concept of herd immunity is
questioned by studying future scenarios which involve different vaccination
efforts and more infectious Covid-19 variants.
|
[
{
"created": "Fri, 27 Aug 2021 19:12:13 GMT",
"version": "v1"
}
] |
2022-01-19
|
[
[
"Angeli",
"Mattia",
""
],
[
"Neofotistos",
"Georgios",
""
],
[
"Mattheakis",
"Marios",
""
],
[
"Kaxiras",
"Efthimios",
""
]
] |
Population-wide vaccination is critical for containing the SARS-CoV-2 (Covid-19) pandemic when combined with restrictive and prevention measures. In this study, we introduce SAIVR, a mathematical model able to forecast the Covid-19 epidemic evolution during the vaccination campaign. SAIVR extends the widely used Susceptible-Infectious-Removed (SIR) model by considering the Asymptomatic (A) and Vaccinated (V) compartments. The model contains several parameters and initial conditions that are estimated by employing a semi-supervised machine learning procedure. After training an unsupervised neural network to solve the SAIVR differential equations, a supervised framework then estimates the optimal conditions and parameters that best fit recent infectious curves of 27 countries. Instructed by these results, we performed an extensive study on the temporal evolution of the pandemic under varying values of roll-out daily rates, vaccine efficacy, and a broad range of societal vaccine hesitancy/denial levels. The concept of herd immunity is questioned by studying future scenarios which involve different vaccination efforts and more infectious Covid-19 variants.
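The abstract describes SAIVR only at the level of its compartments, so the sketch below shows one plausible SAIVR-style structure; every coupling term and parameter name here is an assumption for illustration, not the paper's exact equations:

```python
def saivr_step(state, beta, beta_a, alpha, gamma, gamma_a, v_rate, dt):
    """One explicit Euler step of an illustrative SAIVR-style model.

    state holds population fractions (Susceptible, Asymptomatic, Infectious,
    Vaccinated, Removed). Asymptomatic individuals transmit at a (possibly
    reduced) rate beta_a; a fraction alpha of new infections stays
    asymptomatic; v_rate is the daily vaccination roll-out rate.
    """
    S, A, I, V, R = state
    new_inf = beta * S * I + beta_a * S * A   # force of infection on S
    dS = -new_inf - v_rate * S                # infection + vaccination outflow
    dA = alpha * new_inf - gamma_a * A
    dI = (1.0 - alpha) * new_inf - gamma * I
    dV = v_rate * S
    dR = gamma * I + gamma_a * A
    return tuple(x + dt * dx for x, dx in zip(state, (dS, dA, dI, dV, dR)))
```

Because every outflow appears as an inflow elsewhere, the five fractions sum to a constant, which is a useful sanity check when experimenting with roll-out rates or hesitancy scenarios like those studied above.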
|
2408.07839
|
Murat Okatan
|
Murat Okatan
|
Fast computation of the statistical significance test for
spatio-temporal receptive field estimates obtained using spike-triggered
averaging of binary pseudo-random sequences
|
This paper develops an approximate method that accelerates the exact
method of Okatan, M. A statistical significance test for spatio-temporal
receptive field estimates obtained using spike-triggered averaging of binary
pseudo-random sequences. SIViP 17, 3759-3766 (2023).
https://doi.org/10.1007/s11760-023-02603-1
| null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Background: Spatio-temporal receptive fields (STRF) of visual neurons are
often estimated using spike-triggered averaging of binary pseudo-random
stimulus sequences. The stimuli are visual displays that contain black and
white pixels that flicker randomly at a fixed frame rate without any spatial or
temporal correlation. The spike train of a visual neuron, such as a retinal
ganglion cell, is recorded simultaneously with the stimulus presentation. The
neuron's STRF is estimated by averaging the stimulus frames that coincide with
spikes at fixed latencies. Recently, an exact analytical method for determining
the statistical significance of the estimated value of the STRF pixels has been
developed. Application of the method on spike trains collected from individual
mouse retinal ganglion cells revealed that the time required to compute the
test ranged from a couple of minutes to half a day for different neurons. New
method: Here, this method is accelerated by using the Normal approximation to
the null distribution of STRF pixel estimates. Results: The significance
threshold and computation time obtained under the approximate distribution are
examined systematically as a function of various input spike trains collected
from individual mouse retinal ganglion cells. Comparison with existing methods:
The accuracy and the time saved by the use of the approximate distribution are
examined in comparison with the exact distribution. Conclusions: For the real
data analyzed here, the approximate distribution yields the same significance
thresholds as the exact distribution within a much shorter computation time.
|
[
{
"created": "Wed, 14 Aug 2024 22:24:35 GMT",
"version": "v1"
}
] |
2024-08-16
|
[
[
"Okatan",
"Murat",
""
]
] |
Background: Spatio-temporal receptive fields (STRF) of visual neurons are often estimated using spike-triggered averaging of binary pseudo-random stimulus sequences. The stimuli are visual displays that contain black and white pixels that flicker randomly at a fixed frame rate without any spatial or temporal correlation. The spike train of a visual neuron, such as a retinal ganglion cell, is recorded simultaneously with the stimulus presentation. The neuron's STRF is estimated by averaging the stimulus frames that coincide with spikes at fixed latencies. Recently, an exact analytical method for determining the statistical significance of the estimated value of the STRF pixels has been developed. Application of the method on spike trains collected from individual mouse retinal ganglion cells revealed that the time required to compute the test ranged from a couple of minutes to half a day for different neurons. New method: Here, this method is accelerated by using the Normal approximation to the null distribution of STRF pixel estimates. Results: The significance threshold and computation time obtained under the approximate distribution are examined systematically as a function of various input spike trains collected from individual mouse retinal ganglion cells. Comparison with existing methods: The accuracy and the time saved by the use of the approximate distribution are examined in comparison with the exact distribution. Conclusions: For the real data analyzed here, the approximate distribution yields the same significance thresholds as the exact distribution within a much shorter computation time.
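The Normal approximation mentioned in this abstract leads to a closed-form significance threshold. The sketch below is one illustrative reading of that idea (a z-test with Bonferroni correction), not the paper's exact procedure; the function name and null model are assumptions:

```python
import numpy as np
from scipy.stats import norm

def sta_pixel_threshold(n_spikes, alpha=0.05, n_pixels=1):
    """Two-sided significance threshold for one STA pixel under the Normal
    approximation to the null distribution.

    Null model: each binary stimulus pixel takes the value +1 or -1 with
    probability 1/2, independently of spiking, so the spike-triggered average
    over n_spikes frames has mean 0 and standard deviation 1/sqrt(n_spikes).
    A Bonferroni correction over n_pixels handles multiple comparisons.
    """
    a = alpha / n_pixels                              # corrected level
    return norm.ppf(1.0 - a / 2.0) / np.sqrt(n_spikes)
```

Evaluating a Gaussian quantile is essentially instantaneous, which is why replacing the exact null distribution with this approximation can cut the reported minutes-to-hours computation down so drastically.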
|
2401.17894
|
Shruthi Viswanath
|
Shreyas Arvindekar, Kartik Majila, Shruthi Viswanath
|
Recent methods from statistical inference and machine learning to
improve integrative modeling of macromolecular assemblies
| null | null | null | null |
q-bio.BM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Integrative modeling of macromolecular assemblies allows for structural
characterization of large assemblies that are recalcitrant to direct
experimental observation. A Bayesian inference approach facilitates combining
data from complementary experiments along with physical principles, statistics
of known structures, and prior models, for structure determination. Here, we
review recent methods for integrative modeling based on statistical inference
and machine learning. These methods improve over the current state-of-the-art
by enhancing the data collection, optimizing coarse-grained model
representations, making scoring functions more accurate, sampling more
efficient, and model analysis more rigorous. We also discuss three new
frontiers in integrative modeling: incorporating recent deep learning-based
methods, integrative modeling with in situ data, and metamodeling.
|
[
{
"created": "Wed, 31 Jan 2024 15:00:53 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Feb 2024 05:09:04 GMT",
"version": "v2"
},
{
"created": "Tue, 6 Feb 2024 05:20:50 GMT",
"version": "v3"
},
{
"created": "Sat, 18 May 2024 15:02:26 GMT",
"version": "v4"
}
] |
2024-05-21
|
[
[
"Arvindekar",
"Shreyas",
""
],
[
"Majila",
"Kartik",
""
],
[
"Viswanath",
"Shruthi",
""
]
] |
Integrative modeling of macromolecular assemblies allows for structural characterization of large assemblies that are recalcitrant to direct experimental observation. A Bayesian inference approach facilitates combining data from complementary experiments along with physical principles, statistics of known structures, and prior models, for structure determination. Here, we review recent methods for integrative modeling based on statistical inference and machine learning. These methods improve over the current state-of-the-art by enhancing the data collection, optimizing coarse-grained model representations, making scoring functions more accurate, sampling more efficient, and model analysis more rigorous. We also discuss three new frontiers in integrative modeling: incorporating recent deep learning-based methods, integrative modeling with in situ data, and metamodeling.
|
2006.04905
|
Goncalo Oliveira
|
Goncalo Oliveira
|
Early epidemic spread, percolation and Covid-19
| null | null | null | null |
q-bio.PE math.PR q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Human to human transmissible infectious diseases spread in a population using
human interactions as its transmission vector. The early stages of such an
outbreak can be modeled by a graph whose edges encode these interactions
between individuals, the vertices. This article attempts to account for the
case when each individual engages in different kinds of interactions, which
therefore have different probabilities of transmitting the disease. The
majority of these results can also be stated in the language of percolation
theory.
The main contributions of the article are: (1) Extend to this setting some
results which were previously known in the case when each individual has only
one kind of interaction. (2) Find an explicit formula for the basic
reproduction number $R_0$ which depends only on the probabilities of
transmitting the disease along the different edges and the first two moments of
the degree distributions of the associated graphs. (3) Motivated by the recent
Covid-19 pandemic, we use the framework developed to compute the $R_0$ of a
model disease spreading in populations whose trees and degree distributions are
adjusted to several different countries. In this setting, we shall also compute
the probability that the outbreak will not lead to an epidemic. In all cases we
find such probability to be very low if no interventions are put in place.
|
[
{
"created": "Mon, 8 Jun 2020 19:41:20 GMT",
"version": "v1"
}
] |
2020-06-11
|
[
[
"Oliveira",
"Goncalo",
""
]
] |
Human to human transmissible infectious diseases spread in a population using human interactions as its transmission vector. The early stages of such an outbreak can be modeled by a graph whose edges encode these interactions between individuals, the vertices. This article attempts to account for the case when each individual engages in different kinds of interactions, which therefore have different probabilities of transmitting the disease. The majority of these results can also be stated in the language of percolation theory. The main contributions of the article are: (1) Extend to this setting some results which were previously known in the case when each individual has only one kind of interaction. (2) Find an explicit formula for the basic reproduction number $R_0$ which depends only on the probabilities of transmitting the disease along the different edges and the first two moments of the degree distributions of the associated graphs. (3) Motivated by the recent Covid-19 pandemic, we use the framework developed to compute the $R_0$ of a model disease spreading in populations whose trees and degree distributions are adjusted to several different countries. In this setting, we shall also compute the probability that the outbreak will not lead to an epidemic. In all cases we find such probability to be very low if no interventions are put in place.
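In the special case of a single interaction type, the kind of $R_0$ formula described in contribution (2) reduces to the classic configuration-model result: transmissibility times the mean excess degree. The sketch below implements that single-type baseline (the paper's generalization to several edge types is not reproduced here):

```python
import numpy as np

def r0_single_type(degrees, T):
    """Basic reproduction number for an epidemic on a configuration-model
    network with one kind of interaction and per-edge transmission
    probability T.

    R0 = T * (<k^2> - <k>) / <k>, i.e. T times the mean excess degree, which
    depends only on T and the first two moments of the degree distribution.
    """
    k = np.asarray(degrees, dtype=float)
    return T * (np.mean(k**2) - np.mean(k)) / np.mean(k)
```

For example, on a 4-regular network with T = 0.5 each infected node has 3 remaining neighbours, giving R0 = 1.5; an epidemic is possible exactly when this quantity exceeds 1.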
|
1906.09528
|
Sergey Shuvaev
|
Sergey A. Shuvaev, Ngoc B. Tran, Marcus Stephenson-Jones, Bo Li, and
Alexei A. Koulakov
|
Neural networks with motivation
|
Added the Methods section
| null | null | null |
q-bio.NC cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
How can animals behave effectively in conditions involving different
motivational contexts? Here, we propose how reinforcement learning neural
networks can learn optimal behavior for dynamically changing motivational
salience vectors. First, we show that Q-learning neural networks with
motivation can navigate in an environment with dynamic rewards. Second, we show
that such networks can learn complex behaviors simultaneously directed towards
several goals distributed in an environment. Finally, we show that in a Pavlovian
conditioning task, the responses of the neurons in our model resemble the
firing patterns of neurons in the ventral pallidum (VP), a basal ganglia
structure involved in motivated behaviors. We show that, similarly to real
neurons, recurrent networks with motivation are composed of two
oppositely-tuned classes of neurons, responding to positive and negative
rewards. Our model generates predictions for the VP connectivity. We conclude
that networks with motivation can rapidly adapt their behavior to varying
conditions without changes in synaptic strength when expected reward is
modulated by motivation. Such networks may also provide a mechanism for how
hierarchical reinforcement learning is implemented in the brain.
|
[
{
"created": "Sun, 23 Jun 2019 01:44:32 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Nov 2019 03:27:52 GMT",
"version": "v2"
}
] |
2019-11-20
|
[
[
"Shuvaev",
"Sergey A.",
""
],
[
"Tran",
"Ngoc B.",
""
],
[
"Stephenson-Jones",
"Marcus",
""
],
[
"Li",
"Bo",
""
],
[
"Koulakov",
"Alexei A.",
""
]
] |
How can animals behave effectively in conditions involving different motivational contexts? Here, we propose how reinforcement learning neural networks can learn optimal behavior for dynamically changing motivational salience vectors. First, we show that Q-learning neural networks with motivation can navigate in an environment with dynamic rewards. Second, we show that such networks can learn complex behaviors simultaneously directed towards several goals distributed in an environment. Finally, we show that in a Pavlovian conditioning task, the responses of the neurons in our model resemble the firing patterns of neurons in the ventral pallidum (VP), a basal ganglia structure involved in motivated behaviors. We show that, similarly to real neurons, recurrent networks with motivation are composed of two oppositely-tuned classes of neurons, responding to positive and negative rewards. Our model generates predictions for the VP connectivity. We conclude that networks with motivation can rapidly adapt their behavior to varying conditions without changes in synaptic strength when expected reward is modulated by motivation. Such networks may also provide a mechanism for how hierarchical reinforcement learning is implemented in the brain.
|
2004.10478
|
Nicola Cufaro Petroni
|
Nicola Cufaro Petroni, Salvatore De Martino and Silvio De Siena
|
Logistic and $\theta$-logistic models in population dynamics: General
analysis and exact results
|
25 pages, 8 figures. The second part of the paper (from p. 10,
Section 3.1 on) is substantially improved w.r.t. the old version. As a
consequence Title, Abstract, Introduction, Conclusions and References have
been accordingly updated
| null |
10.1088/1751-8121/abb277
| null |
q-bio.PE math.PR physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the present paper we provide the closed form of the path-like solutions
for the logistic and $\theta$-logistic stochastic differential equations, along
with the exact expressions of both their probability density functions and
their moments. We simulate in addition a few typical sample trajectories, and
we provide a few examples of numerical computation of the said closed formulas
at different noise intensities: this shows in particular that an increasing
randomness - while making the process more unpredictable - asymptotically tends
to suppress, on average, the logistic growth. These main results are preceded by
a discussion of the noiseless, deterministic versions of these models: a
prologue which turns out to be instrumental - on the basis of a few simplified
but functional hypotheses - to frame the logistic and $\theta$-logistic
equations in a unified context, within which also the Gompertz model emerges
from an anomalous scaling.
|
[
{
"created": "Wed, 22 Apr 2020 10:21:50 GMT",
"version": "v1"
},
{
"created": "Tue, 26 May 2020 17:23:24 GMT",
"version": "v2"
}
] |
2020-10-28
|
[
[
"Petroni",
"Nicola Cufaro",
""
],
[
"De Martino",
"Salvatore",
""
],
[
"De Siena",
"Silvio",
""
]
] |
In the present paper we provide the closed form of the path-like solutions for the logistic and $\theta$-logistic stochastic differential equations, along with the exact expressions of both their probability density functions and their moments. We simulate in addition a few typical sample trajectories, and we provide a few examples of numerical computation of the said closed formulas at different noise intensities: this shows in particular that an increasing randomness - while making the process more unpredictable - asymptotically tends to suppress, on average, the logistic growth. These main results are preceded by a discussion of the noiseless, deterministic versions of these models: a prologue which turns out to be instrumental - on the basis of a few simplified but functional hypotheses - to frame the logistic and $\theta$-logistic equations in a unified context, within which also the Gompertz model emerges from an anomalous scaling.
|
2401.10462
|
Xin Cao
|
Xin Cao, Michelle H. Hummel, Yuzhang Wang, Carlos Simmerling,
Evangelos A. Coutsias
|
Exact analytical algorithm for solvent accessible surface area and
derivatives in implicit solvent molecular simulations on GPUs
| null | null | null | null |
q-bio.BM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we present dSASA (differentiable SASA), an exact geometric
method to calculate solvent accessible surface area (SASA) analytically along
with atomic derivatives on GPUs. The atoms in a molecule are first assigned to
tetrahedra in groups of four atoms by Delaunay tetrahedrization adapted for
efficient GPU implementation and the SASA values for atoms and molecules are
calculated based on the tetrahedrization information and inclusion-exclusion
method. The SASA values from the numerical icosahedral-based method can be
reproduced with more than 98% accuracy for both proteins and RNAs. Having been
implemented on GPUs and incorporated into the software Amber, we can apply
dSASA to implicit solvent molecular dynamics simulations with inclusion of this
nonpolar term. The current GPU version of GB/SA simulations has been
accelerated up to nearly 20-fold compared to the CPU version, outperforming
LCPO, a commonly used, fast algorithm for calculating SASA, as the system size
increases. While we focus on the accuracy of the SASA calculations for proteins
and nucleic acids, we also demonstrate stable GB/SA MD mini-protein
simulations.
|
[
{
"created": "Fri, 19 Jan 2024 03:18:04 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Apr 2024 15:52:01 GMT",
"version": "v2"
}
] |
2024-04-26
|
[
[
"Cao",
"Xin",
""
],
[
"Hummel",
"Michelle H.",
""
],
[
"Wang",
"Yuzhang",
""
],
[
"Simmerling",
"Carlos",
""
],
[
"Coutsias",
"Evangelos A.",
""
]
] |
In this paper, we present dSASA (differentiable SASA), an exact geometric method to calculate solvent accessible surface area (SASA) analytically along with atomic derivatives on GPUs. The atoms in a molecule are first assigned to tetrahedra in groups of four atoms by Delaunay tetrahedrization adapted for efficient GPU implementation and the SASA values for atoms and molecules are calculated based on the tetrahedrization information and inclusion-exclusion method. The SASA values from the numerical icosahedral-based method can be reproduced with more than 98% accuracy for both proteins and RNAs. Having been implemented on GPUs and incorporated into the software Amber, we can apply dSASA to implicit solvent molecular dynamics simulations with inclusion of this nonpolar term. The current GPU version of GB/SA simulations has been accelerated up to nearly 20-fold compared to the CPU version, outperforming LCPO, a commonly used, fast algorithm for calculating SASA, as the system size increases. While we focus on the accuracy of the SASA calculations for proteins and nucleic acids, we also demonstrate stable GB/SA MD mini-protein simulations.
|
1612.09379
|
Sanzo Miyazawa
|
Sanzo Miyazawa
|
Selection originating from protein stability/foldability: Relationships
between protein folding free energy, sequence ensemble, and fitness
|
31 pages, 3 Tables, and 15 figures with a supplement. Table 3 (Table
S.3) and Table S.5 are revised by employing thermochemical calorie (1 cal =
4.184 J) rather than steam table calorie (1 cal = 4.1868 J) in the previous
versions including DOI:10.1016/j.jtbi.2017.08.018 (J. Theoretical Biol., 433,
21-38, 2017); the revised values in these tables are printed in blue
|
J. Theoretical Biol., 433, 21-38, 2017
|
10.1016/j.jtbi.2017.08.018
| null |
q-bio.PE q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Assuming that mutation and fixation processes are reversible Markov
processes, we prove that the equilibrium ensemble of sequences obeys a
Boltzmann distribution with $\exp(4N_e m(1 - 1/(2N)))$, where $m$ is Malthusian
fitness and $N_e$ and $N$ are effective and actual population sizes. On the
other hand, the probability distribution of sequences with maximum entropy that
satisfies a given amino acid composition at each site and a given pairwise
amino acid frequency at each site pair is a Boltzmann distribution with
$\exp(-\psi_N)$, where $\psi_N$ is represented as the sum of one body and
pairwise potentials. A protein folding theory indicates that homologous
sequences obey a canonical ensemble characterized by $\exp(-\Delta G_{ND}/k_B
T_s)$ or by $\exp(- G_{N}/k_B T_s)$ if an amino acid composition is kept
constant, where $\Delta G_{ND} \equiv G_N - G_D$, $G_N$ and $G_D$ are the
native and denatured free energies, and $T_s$ is selective temperature. Thus,
$4N_e m (1 - 1/(2N))$, $-\Delta \psi_{ND}$, and $-\Delta G_{ND}/k_B T_s$ must
be equivalent to each other. Based on the analysis of the changes ($\Delta
\psi_N$) of $\psi_N$ due to single nucleotide nonsynonymous substitutions,
$T_s$, and then glass transition temperature $T_g$, and $\Delta G_{ND}$ are
estimated with reasonable values for 14 protein domains. In addition,
approximating the probability density function (PDF) of $\Delta \psi_N$ by a
log-normal distribution, PDFs of $\Delta \psi_N$ and $K_a/K_s$, which is the
ratio of nonsynonymous to synonymous substitution rate per site, in all and in
fixed mutants are estimated. It is confirmed that $T_s$ negatively correlates
with the mean of $K_a/K_s$. Stabilizing mutations are significantly fixed by
positive selection, and balance with destabilizing mutations fixed by random
drift. Supporting the nearly neutral theory, neutral selection is not
significant.
|
[
{
"created": "Fri, 30 Dec 2016 04:11:46 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Mar 2017 02:13:07 GMT",
"version": "v2"
},
{
"created": "Sat, 16 Sep 2017 05:46:51 GMT",
"version": "v3"
},
{
"created": "Wed, 15 Sep 2021 09:37:03 GMT",
"version": "v4"
}
] |
2021-09-16
|
[
[
"Miyazawa",
"Sanzo",
""
]
] |
Assuming that mutation and fixation processes are reversible Markov processes, we prove that the equilibrium ensemble of sequences obeys a Boltzmann distribution with $\exp(4N_e m(1 - 1/(2N)))$, where $m$ is Malthusian fitness and $N_e$ and $N$ are effective and actual population sizes. On the other hand, the probability distribution of sequences with maximum entropy that satisfies a given amino acid composition at each site and a given pairwise amino acid frequency at each site pair is a Boltzmann distribution with $\exp(-\psi_N)$, where $\psi_N$ is represented as the sum of one body and pairwise potentials. A protein folding theory indicates that homologous sequences obey a canonical ensemble characterized by $\exp(-\Delta G_{ND}/k_B T_s)$ or by $\exp(- G_{N}/k_B T_s)$ if an amino acid composition is kept constant, where $\Delta G_{ND} \equiv G_N - G_D$, $G_N$ and $G_D$ are the native and denatured free energies, and $T_s$ is selective temperature. Thus, $4N_e m (1 - 1/(2N))$, $-\Delta \psi_{ND}$, and $-\Delta G_{ND}/k_B T_s$ must be equivalent to each other. Based on the analysis of the changes ($\Delta \psi_N$) of $\psi_N$ due to single nucleotide nonsynonymous substitutions, $T_s$, and then glass transition temperature $T_g$, and $\Delta G_{ND}$ are estimated with reasonable values for 14 protein domains. In addition, approximating the probability density function (PDF) of $\Delta \psi_N$ by a log-normal distribution, PDFs of $\Delta \psi_N$ and $K_a/K_s$, which is the ratio of nonsynonymous to synonymous substitution rate per site, in all and in fixed mutants are estimated. It is confirmed that $T_s$ negatively correlates with the mean of $K_a/K_s$. Stabilizing mutations are significantly fixed by positive selection, and balance with destabilizing mutations fixed by random drift. Supporting the nearly neutral theory, neutral selection is not significant.
|
q-bio/0603027
|
Michael E. Wall
|
Michael E. Wall
|
Ligand Binding, Protein Fluctuations, and Allosteric Free Energy
|
18 pages; 7 figures; Second International Congress of the
Biocomputing and Physics of Complex Systems Research Institute, Zaragoza,
Spain, 8-11 Feb 2006; increase breadth of review of methods for analysis of
allosteric mechanisms; Add AIP in press; fix missing kTs in equations
| null |
10.1063/1.2345620
|
LA-UR-06-2066
|
q-bio.BM q-bio.QM
| null |
Although the importance of protein dynamics in protein function is generally
recognized, the role of protein fluctuations in allosteric effects scarcely has
been considered. To address this gap, the Kullback-Leibler divergence (Dx)
between protein conformational distributions before and after ligand binding
was proposed as a means of quantifying allosteric effects in proteins. Here,
previous applications of Dx to methods for analysis and simulation of proteins
are first reviewed, and their implications for understanding aspects of protein
function and protein evolution are discussed. Next, equations for Dx suggest
that k_{B}TDx should be interpreted as an allosteric free energy -- the free
energy associated with changing the ligand-free protein conformational
distribution to the ligand-bound conformational distribution. This
interpretation leads to a thermodynamic model of allosteric transitions that
unifies existing perspectives on the relation between ligand binding and
changes in protein conformational distributions. The definition of Dx is used
to explore some interesting mathematical relations among commonly recognized
thermodynamic and biophysical quantities, such as the total free energy change
upon ligand binding, and ligand-binding affinities for individual protein
conformations. These results represent the beginnings of a theoretical
framework for considering the full protein conformational distribution in
modeling allosteric transitions. Early applications of the framework have
produced results with implications both for methods for coarse-grained
modeling of proteins, and for understanding the relation between ligand binding
and protein dynamics.
|
[
{
"created": "Wed, 22 Mar 2006 17:07:35 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Apr 2006 17:42:15 GMT",
"version": "v2"
},
{
"created": "Mon, 3 Jul 2006 19:45:41 GMT",
"version": "v3"
}
] |
2009-11-13
|
[
[
"Wall",
"Michael E.",
""
]
] |
Although the importance of protein dynamics in protein function is generally recognized, the role of protein fluctuations in allosteric effects scarcely has been considered. To address this gap, the Kullback-Leibler divergence (Dx) between protein conformational distributions before and after ligand binding was proposed as a means of quantifying allosteric effects in proteins. Here, previous applications of Dx to methods for analysis and simulation of proteins are first reviewed, and their implications for understanding aspects of protein function and protein evolution are discussed. Next, equations for Dx suggest that k_{B}TDx should be interpreted as an allosteric free energy -- the free energy associated with changing the ligand-free protein conformational distribution to the ligand-bound conformational distribution. This interpretation leads to a thermodynamic model of allosteric transitions that unifies existing perspectives on the relation between ligand binding and changes in protein conformational distributions. The definition of Dx is used to explore some interesting mathematical relations among commonly recognized thermodynamic and biophysical quantities, such as the total free energy change upon ligand binding, and ligand-binding affinities for individual protein conformations. These results represent the beginnings of a theoretical framework for considering the full protein conformational distribution in modeling allosteric transitions. Early applications of the framework have produced results with implications both for methods for coarse-grained modeling of proteins, and for understanding the relation between ligand binding and protein dynamics.
|
0901.2581
|
Takashi Nishikawa
|
Takashi Nishikawa, Natali Gulbahce, Adilson E. Motter
|
Spontaneous Reaction Silencing in Metabolic Optimization
|
34 pages, 6 figures
|
PLoS Comput Biol 4(12), e1000236 (2008)
|
10.1371/journal.pcbi.1000236
| null |
q-bio.MN cond-mat.dis-nn q-bio.CB
|
http://creativecommons.org/licenses/by/3.0/
|
Metabolic reactions of single-cell organisms are routinely observed to become
dispensable or even incapable of carrying activity under certain circumstances.
Yet, the mechanisms as well as the range of conditions and phenotypes
associated with this behavior remain very poorly understood. Here we predict
computationally and analytically that any organism evolving to maximize growth
rate, ATP production, or any other linear function of metabolic fluxes tends to
significantly reduce the number of active metabolic reactions compared to
typical non-optimal states. The reduced number appears to be constant across
the microbial species studied and just slightly larger than the minimum number
required for the organism to grow at all. We show that this massive spontaneous
reaction silencing is triggered by the irreversibility of a large fraction of
the metabolic reactions and propagates through the network as a cascade of
inactivity. Our results help explain existing experimental data on
intracellular flux measurements and the usage of latent pathways, shedding new
light on microbial evolution, robustness, and versatility for the execution of
specific biochemical tasks. In particular, the identification of optimal
reaction activity provides rigorous ground for an intriguing knockout-based
method recently proposed for the synthetic recovery of metabolic function.
|
[
{
"created": "Fri, 16 Jan 2009 21:35:00 GMT",
"version": "v1"
}
] |
2009-01-21
|
[
[
"Nishikawa",
"Takashi",
""
],
[
"Gulbahce",
"Natali",
""
],
[
"Motter",
"Adilson E.",
""
]
] |
Metabolic reactions of single-cell organisms are routinely observed to become dispensable or even incapable of carrying activity under certain circumstances. Yet, the mechanisms as well as the range of conditions and phenotypes associated with this behavior remain very poorly understood. Here we predict computationally and analytically that any organism evolving to maximize growth rate, ATP production, or any other linear function of metabolic fluxes tends to significantly reduce the number of active metabolic reactions compared to typical non-optimal states. The reduced number appears to be constant across the microbial species studied and just slightly larger than the minimum number required for the organism to grow at all. We show that this massive spontaneous reaction silencing is triggered by the irreversibility of a large fraction of the metabolic reactions and propagates through the network as a cascade of inactivity. Our results help explain existing experimental data on intracellular flux measurements and the usage of latent pathways, shedding new light on microbial evolution, robustness, and versatility for the execution of specific biochemical tasks. In particular, the identification of optimal reaction activity provides rigorous ground for an intriguing knockout-based method recently proposed for the synthetic recovery of metabolic function.
|
2205.04393
|
Yaocong Duan
|
Y. Duan (1), J. Zhan (1), J. Gross (2), R.A.A. Ince (1), P.G. Schyns
(1) ((1) School of Psychology and Neuroscience, University of Glasgow, United
Kingdom, (2) Institute for Biomagnetism and Biosignalanalysis, University of
M\"unster, Germany)
|
Pre-frontal cortex guides dimension-reducing transformations in the
occipito-ventral pathway for categorization behaviors
| null | null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
To interpret our surroundings, the brain uses a visual categorization
process. Current theories and models suggest that this process comprises a
hierarchy of different computations that transforms complex, high-dimensional
inputs into lower-dimensional representations (i.e. manifolds) in support of
multiple categorization behaviors. Here, we tested this hypothesis by analyzing
these transformations reflected in dynamic MEG source activity while individual
participants actively categorized the same stimuli according to different
tasks: face expression, face gender, pedestrian gender, vehicle type. Results
reveal three transformation stages guided by pre-frontal cortex. At Stage 1
(high-dimensional, 50-120ms), occipital sources represent both task-relevant
and task-irrelevant stimulus features; task-relevant features advance into
higher ventral/dorsal regions whereas task-irrelevant features halt at the
occipital-temporal junction. At Stage 2 (121-150ms), stimulus feature
representations reduce to lower-dimensional manifolds, which then transform
into the task-relevant features underlying categorization behavior over Stage 3
(161-350ms).
|
[
{
"created": "Mon, 9 May 2022 15:56:22 GMT",
"version": "v1"
},
{
"created": "Fri, 13 May 2022 15:59:13 GMT",
"version": "v2"
},
{
"created": "Thu, 20 Jun 2024 16:07:14 GMT",
"version": "v3"
}
] |
2024-06-21
|
[
[
"Duan",
"Y.",
""
],
[
"Zhan",
"J.",
""
],
[
"Gross",
"J.",
""
],
[
"Ince",
"R. A. A.",
""
],
[
"Schyns",
"P. G.",
""
]
] |
To interpret our surroundings, the brain uses a visual categorization process. Current theories and models suggest that this process comprises a hierarchy of different computations that transforms complex, high-dimensional inputs into lower-dimensional representations (i.e. manifolds) in support of multiple categorization behaviors. Here, we tested this hypothesis by analyzing these transformations reflected in dynamic MEG source activity while individual participants actively categorized the same stimuli according to different tasks: face expression, face gender, pedestrian gender, vehicle type. Results reveal three transformation stages guided by pre-frontal cortex. At Stage 1 (high-dimensional, 50-120ms), occipital sources represent both task-relevant and task-irrelevant stimulus features; task-relevant features advance into higher ventral/dorsal regions whereas task-irrelevant features halt at the occipital-temporal junction. At Stage 2 (121-150ms), stimulus feature representations reduce to lower-dimensional manifolds, which then transform into the task-relevant features underlying categorization behavior over Stage 3 (161-350ms).
|
2006.02249
|
Marc Hellmuth
|
David Schaller, Manuela Gei{\ss}, Peter F. Stadler and Marc Hellmuth
|
Complete Characterization of Incorrect Orthology Assignments in Best
Match Graphs
| null | null | null | null |
q-bio.PE cs.DM cs.DS math.CO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Genome-scale orthology assignments are usually based on reciprocal best
matches. In the absence of horizontal gene transfer (HGT), every pair of
orthologs forms a reciprocal best match. Incorrect orthology assignments
therefore are always false positives in the reciprocal best match graph. We
consider duplication/loss scenarios and characterize unambiguous false-positive
(u-fp) orthology assignments, that is, edges in the best match graphs (BMGs)
that cannot correspond to orthologs for any gene tree that explains the BMG.
Moreover, we provide a polynomial-time algorithm to identify all u-fp orthology
assignments in a BMG. Simulations show that at least $75\%$ of all incorrect
orthology assignments can be detected in this manner. All results rely only on
the structure of the BMGs and not on any a priori knowledge about underlying
gene or species trees.
|
[
{
"created": "Wed, 3 Jun 2020 13:06:57 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Nov 2020 12:48:15 GMT",
"version": "v2"
}
] |
2020-11-30
|
[
[
"Schaller",
"David",
""
],
[
"Geiß",
"Manuela",
""
],
[
"Stadler",
"Peter F.",
""
],
[
"Hellmuth",
"Marc",
""
]
] |
Genome-scale orthology assignments are usually based on reciprocal best matches. In the absence of horizontal gene transfer (HGT), every pair of orthologs forms a reciprocal best match. Incorrect orthology assignments therefore are always false positives in the reciprocal best match graph. We consider duplication/loss scenarios and characterize unambiguous false-positive (u-fp) orthology assignments, that is, edges in the best match graphs (BMGs) that cannot correspond to orthologs for any gene tree that explains the BMG. Moreover, we provide a polynomial-time algorithm to identify all u-fp orthology assignments in a BMG. Simulations show that at least $75\%$ of all incorrect orthology assignments can be detected in this manner. All results rely only on the structure of the BMGs and not on any a priori knowledge about underlying gene or species trees.
|
2108.06397
|
Firas Lethaus
|
Firas Lethaus and Robert Kaul
|
Cognitive factor forming an individual constituent in a driver model
inferred from multiplicatory relationships between cognitive sub-factors
|
15 pages, 3 figures, to be published
| null | null | null |
q-bio.NC cs.HC stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to reproduce human behaviour in dynamic traffic situations, a
computational representation of the requisite mental processes used to carry
out the complex driving tasks is required. A single cognitive factor has been
developed and forms a crucial component in our driver model. This cognitive
factor is composed of cognitive sub-factors: distraction, anticipation, stress,
and strain. It has been defined such that these sub-factors have a
multiplicatory relationship with one another. In addition, each of these
sub-factors comprises the visual, auditory, tactile-kinetic, and verbal
information channel individually. This results in a lattice-like network of
relationships, which can be expanded modularly in horizontal and vertical
directions. The conceptual topology and the characteristics of this
multiplicatory, operation-based cognitive factor are presented.
|
[
{
"created": "Mon, 19 Jul 2021 17:29:36 GMT",
"version": "v1"
}
] |
2021-08-17
|
[
[
"Lethaus",
"Firas",
""
],
[
"Kaul",
"Robert",
""
]
] |
In order to reproduce human behaviour in dynamic traffic situations, a computational representation of the requisite mental processes used to carry out the complex driving tasks is required. A single cognitive factor has been developed and forms a crucial component in our driver model. This cognitive factor is composed of cognitive sub-factors: distraction, anticipation, stress, and strain. It has been defined such that these sub-factors have a multiplicatory relationship with one another. In addition, each of these sub-factors comprises the visual, auditory, tactile-kinetic, and verbal information channel individually. This results in a lattice-like network of relationships, which can be expanded modularly in horizontal and vertical directions. The conceptual topology and the characteristics of this multiplicatory, operation-based cognitive factor are presented.
|
2401.01398
|
Farshud Sorourifar
|
Farshud Sorourifar, Thomas Banker, Joel A. Paulson
|
Accelerating Black-Box Molecular Property Optimization by Adaptively
Learning Sparse Subspaces
|
9 pages, 2 figures consisting of 6 and 4 plots, accepted to NeurIPS
2023 Workshop on Adaptive Experimental Design and Active Learning in the Real
World
| null | null | null |
q-bio.BM cs.CE cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Molecular property optimization (MPO) problems are inherently challenging
since they are formulated over discrete, unstructured spaces and the labeling
process involves expensive simulations or experiments, which fundamentally
limits the amount of available data. Bayesian optimization (BO) is a powerful
and popular framework for efficient optimization of noisy, black-box objective
functions (e.g., measured property values), thus is a potentially attractive
framework for MPO. To apply BO to MPO problems, one must select a structured
molecular representation that enables construction of a probabilistic surrogate
model. Many molecular representations have been developed, however, they are
all high-dimensional, which introduces important challenges in the BO process
-- mainly because the curse of dimensionality makes it difficult to define and
perform inference over a suitable class of surrogate models. This challenge has
been recently addressed by learning a lower-dimensional encoding of a SMILES or
graph representation of a molecule in an unsupervised manner and then
performing BO in the encoded space. In this work, we show that such methods
have a tendency to "get stuck," which we hypothesize occurs since the mapping
from the encoded space to property values is not necessarily well-modeled by a
Gaussian process. We argue for an alternative approach that combines numerical
molecular descriptors with a sparse axis-aligned Gaussian process model, which
is capable of rapidly identifying sparse subspaces that are most relevant to
modeling the unknown property function. We demonstrate that our proposed method
substantially outperforms existing MPO methods on a variety of benchmark and
real-world problems. Specifically, we show that our method can routinely find
near-optimal molecules out of a set of more than 100k alternatives within
100 or fewer expensive queries.
|
[
{
"created": "Tue, 2 Jan 2024 18:34:29 GMT",
"version": "v1"
}
] |
2024-01-04
|
[
[
"Sorourifar",
"Farshud",
""
],
[
"Banker",
"Thomas",
""
],
[
"Paulson",
"Joel A.",
""
]
] |
Molecular property optimization (MPO) problems are inherently challenging since they are formulated over discrete, unstructured spaces and the labeling process involves expensive simulations or experiments, which fundamentally limits the amount of available data. Bayesian optimization (BO) is a powerful and popular framework for efficient optimization of noisy, black-box objective functions (e.g., measured property values), thus is a potentially attractive framework for MPO. To apply BO to MPO problems, one must select a structured molecular representation that enables construction of a probabilistic surrogate model. Many molecular representations have been developed, however, they are all high-dimensional, which introduces important challenges in the BO process -- mainly because the curse of dimensionality makes it difficult to define and perform inference over a suitable class of surrogate models. This challenge has been recently addressed by learning a lower-dimensional encoding of a SMILES or graph representation of a molecule in an unsupervised manner and then performing BO in the encoded space. In this work, we show that such methods have a tendency to "get stuck," which we hypothesize occurs since the mapping from the encoded space to property values is not necessarily well-modeled by a Gaussian process. We argue for an alternative approach that combines numerical molecular descriptors with a sparse axis-aligned Gaussian process model, which is capable of rapidly identifying sparse subspaces that are most relevant to modeling the unknown property function. We demonstrate that our proposed method substantially outperforms existing MPO methods on a variety of benchmark and real-world problems. Specifically, we show that our method can routinely find near-optimal molecules out of a set of more than 100k alternatives within 100 or fewer expensive queries.
|
2305.06334
|
Isaac Filella-Merce
|
Isaac Filella-Merce, Alexis Molina, Marek Orzechowski, Luc\'ia D\'iaz,
Yang Ming Zhu, Julia Vilalta Mor, Laura Malo, Ajay S Yekkirala, Soumya Ray,
Victor Guallar
|
Optimizing Drug Design by Merging Generative AI With Active Learning
Frameworks
| null | null | null | null |
q-bio.BM cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Traditional drug discovery programs are being transformed by the advent of
machine learning methods. Among these, Generative AI methods (GM) have gained
attention due to their ability to design new molecules and enhance specific
properties of existing ones. However, current GM methods have limitations, such
as low affinity towards the target, unknown ADME/PK properties, or the lack of
synthetic tractability. To improve the applicability domain of GM methods, we
have developed a workflow based on a variational autoencoder coupled with
active learning steps. The designed GM workflow iteratively learns from
molecular metrics, including drug likeliness, synthesizability, similarity, and
docking scores. In addition, we also included a hierarchical set of criteria
based on advanced molecular modeling simulations during a final selection step.
We tested our GM workflow on two model systems, CDK2 and KRAS. In both cases,
our model generated chemically viable molecules with a high predicted affinity
toward the targets. Particularly, the proportion of high-affinity molecules
inferred by our GM workflow was significantly greater than that in the training
data. Notably, we also uncovered novel scaffolds significantly dissimilar to
those known for each target. These results highlight the potential of our GM
workflow to explore novel chemical space for specific targets, thereby opening
up new possibilities for drug discovery endeavors.
|
[
{
"created": "Thu, 4 May 2023 13:25:14 GMT",
"version": "v1"
}
] |
2023-05-11
|
[
[
"Filella-Merce",
"Isaac",
""
],
[
"Molina",
"Alexis",
""
],
[
"Orzechowski",
"Marek",
""
],
[
"Díaz",
"Lucía",
""
],
[
"Zhu",
"Yang Ming",
""
],
[
"Mor",
"Julia Vilalta",
""
],
[
"Malo",
"Laura",
""
],
[
"Yekkirala",
"Ajay S",
""
],
[
"Ray",
"Soumya",
""
],
[
"Guallar",
"Victor",
""
]
] |
Traditional drug discovery programs are being transformed by the advent of machine learning methods. Among these, Generative AI methods (GM) have gained attention due to their ability to design new molecules and enhance specific properties of existing ones. However, current GM methods have limitations, such as low affinity towards the target, unknown ADME/PK properties, or the lack of synthetic tractability. To improve the applicability domain of GM methods, we have developed a workflow based on a variational autoencoder coupled with active learning steps. The designed GM workflow iteratively learns from molecular metrics, including drug likeliness, synthesizability, similarity, and docking scores. In addition, we also included a hierarchical set of criteria based on advanced molecular modeling simulations during a final selection step. We tested our GM workflow on two model systems, CDK2 and KRAS. In both cases, our model generated chemically viable molecules with a high predicted affinity toward the targets. Particularly, the proportion of high-affinity molecules inferred by our GM workflow was significantly greater than that in the training data. Notably, we also uncovered novel scaffolds significantly dissimilar to those known for each target. These results highlight the potential of our GM workflow to explore novel chemical space for specific targets, thereby opening up new possibilities for drug discovery endeavors.
|
2009.12338
|
David Danko
|
David C Danko and Chris Mason
|
The MetaSUB Microbiome Core Analysis Pipeline Enables Large Scale
Metagenomic Analysis
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Motivation: Accurate data analysis and quality control are critical for
metagenomic studies. Though many tools exist to analyze metagenomic data there
is no consistent framework to integrate and run these tools across projects.
Currently, computational analysis of metagenomes is time consuming, often
misses potentially interesting results, and is difficult to reproduce. Further,
comparison between metagenomic studies is hampered by inconsistencies in tools
and databases.
Results: We present the MetaSUB Core Analysis Pipeline (CAP), a comprehensive
tool to analyze metagenomes and summarize the results of a project. The CAP is
designed in a bottom up fashion to perform QC, preprocessing, analysis and even
to build relevant databases and install necessary tools.
Availability and Implementation: The CAP is available under an MIT License on
GitHub at https://github.com/MetaSUB/CAP2 and on the Python Package Index.
Documentation and examples are available on GitHub.
|
[
{
"created": "Fri, 25 Sep 2020 16:57:24 GMT",
"version": "v1"
}
] |
2020-09-28
|
[
[
"Danko",
"David C",
""
],
[
"Mason",
"Chris",
""
]
] |
Motivation: Accurate data analysis and quality control are critical for metagenomic studies. Though many tools exist to analyze metagenomic data there is no consistent framework to integrate and run these tools across projects. Currently, computational analysis of metagenomes is time consuming, often misses potentially interesting results, and is difficult to reproduce. Further, comparison between metagenomic studies is hampered by inconsistencies in tools and databases. Results: We present the MetaSUB Core Analysis Pipeline (CAP), a comprehensive tool to analyze metagenomes and summarize the results of a project. The CAP is designed in a bottom up fashion to perform QC, preprocessing, analysis and even to build relevant databases and install necessary tools. Availability and Implementation: The CAP is available under an MIT License on GitHub at https://github.com/MetaSUB/CAP2 and on the Python Package Index. Documentation and examples are available on GitHub.
|
2207.02028
|
Mohammad M. Amirian
|
Mohammad M. Amirian, Andrew J. Irwin, Zoe V. Finkel
|
Extending the Monod Model of Microbial Growth with Memory
| null | null | null | null |
q-bio.PE math.DS stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Monod's model describes the growth of microorganisms using a hyperbolic
function of extracellular resource concentration. Under fluctuating or limited
resource concentrations this model performs poorly against experimental data,
motivating the more complex Droop model with a time-varying internal storage
pool. We extend the Monod model to incorporate memory of past conditions,
adding a single parameter motivated by a fractional calculus analysis. We show
how to interpret the memory element in a biological context and describe its
connection to a resource storage pool. Under nitrogen starvation at
non-equilibrium conditions, we validate the model with simulations and
empirical data obtained from lab cultures of diatoms (T. pseudonana and T.
weissflogii) and prasinophytes (Micromonas sp. and O. tauri), globally
influential phytoplankton taxa. Using statistical analysis, we show that our
Monod-memory model estimates the growth rate, cell density, and resource
concentration as well as the Droop model while requiring one less state
variable. Our simple model may improve descriptions of phytoplankton dynamics
in complex earth system models at a lower computational cost than is presently
achievable.
|
[
{
"created": "Tue, 5 Jul 2022 13:18:09 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Dec 2022 18:57:34 GMT",
"version": "v2"
}
] |
2022-12-29
|
[
[
"Amirian",
"Mohammad M.",
""
],
[
"Irwin",
"Andrew J.",
""
],
[
"Finkel",
"Zoe V.",
""
]
] |
Monod's model describes the growth of microorganisms using a hyperbolic function of extracellular resource concentration. Under fluctuating or limited resource concentrations this model performs poorly against experimental data, motivating the more complex Droop model with a time-varying internal storage pool. We extend the Monod model to incorporate memory of past conditions, adding a single parameter motivated by a fractional calculus analysis. We show how to interpret the memory element in a biological context and describe its connection to a resource storage pool. Under nitrogen starvation at non-equilibrium conditions, we validate the model with simulations and empirical data obtained from lab cultures of diatoms (T. pseudonana and T. weissflogii) and prasinophytes (Micromonas sp. and O. tauri), globally influential phytoplankton taxa. Using statistical analysis, we show that our Monod-memory model estimates the growth rate, cell density, and resource concentration as well as the Droop model while requiring one less state variable. Our simple model may improve descriptions of phytoplankton dynamics in complex earth system models at a lower computational cost than is presently achievable.
|
2211.07375
|
Jiashu Lou
|
Jiashu Lou, Leyi Cui, Wenxuan Qiu
|
Reconstruction of gene regulatory network via sparse optimization
| null | null | null | null |
q-bio.MN cs.LG math.OC
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
In this paper, we tested several sparse optimization algorithms based on the
public dataset of the DREAM5 Gene Regulatory Network Inference Challenge. And
we find that introducing 20% of the regulatory network as a priori known data
can provide a basis for parameter selection of inference algorithms, thus
improving prediction efficiency and accuracy. In addition to testing common
sparse optimization methods, we also developed voting algorithms by bagging
them. Experiments on the DREAM5 dataset show that the sparse optimization-based
inference of the moderation relation works well, achieving better results than
the official DREAM5 results on three datasets. However, the performance of
traditional independent algorithms varies greatly in the face of different
datasets, while our voting algorithm achieves the best results on three of the
four datasets.
|
[
{
"created": "Fri, 11 Nov 2022 07:57:59 GMT",
"version": "v1"
}
] |
2022-11-15
|
[
[
"Lou",
"Jiashu",
""
],
[
"Cui",
"Leyi",
""
],
[
"Qiu",
"Wenxuan",
""
]
] |
In this paper, we tested several sparse optimization algorithms based on the public dataset of the DREAM5 Gene Regulatory Network Inference Challenge. And we find that introducing 20% of the regulatory network as a priori known data can provide a basis for parameter selection of inference algorithms, thus improving prediction efficiency and accuracy. In addition to testing common sparse optimization methods, we also developed voting algorithms by bagging them. Experiments on the DREAM5 dataset show that the sparse optimization-based inference of the moderation relation works well, achieving better results than the official DREAM5 results on three datasets. However, the performance of traditional independent algorithms varies greatly in the face of different datasets, while our voting algorithm achieves the best results on three of the four datasets.
|
1802.05646
|
Jonathan Taylor
|
Vytautas Zickus and Jonathan M. Taylor
|
4D blood flow mapping using SPIM-microPIV in the developing zebrafish
heart
|
V. Zickus and J. M. Taylor, "4D blood flow mapping using
SPIM-microPIV in the developing zebrafish heart", Photonics West BiOS 2018:
Biomedical Spectroscopy, Microscopy, and Imaging (10499), 10499-38 (2018).
Copyright 2018 Society of Photo Optical Instrumentation Engineers (SPIE)
|
Proc. SPIE 10499 (2018) 104991E
|
10.1117/12.2289709
| null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Fluid-structure interaction in the developing heart is an active area of
research in developmental biology. However, investigation of heart dynamics is
mostly limited to computational fluid dynamics simulations using heart wall
structure information only, or single plane blood flow information - so there
is a need for 3D + time resolved data to fully understand cardiac function. We
present an imaging platform combining selective plane illumination microscopy
(SPIM) with micro particle image velocimetry ({\textmu}PIV) to enable
3D-resolved flow mapping in a microscopic environment, free from many of the
sources of error and bias present in traditional epifluorescence-based
{\textmu}PIV systems. By using our new system in conjunction with optical heart
beat synchronisation, we demonstrate the ability to obtain non-invasive 3D + time
resolved blood flow measurements in the heart of a living zebrafish embryo.
|
[
{
"created": "Thu, 15 Feb 2018 16:37:47 GMT",
"version": "v1"
}
] |
2018-03-01
|
[
[
"Zickus",
"Vytautas",
""
],
[
"Taylor",
"Jonathan M.",
""
]
] |
Fluid-structure interaction in the developing heart is an active area of research in developmental biology. However, investigation of heart dynamics is mostly limited to computational fluid dynamics simulations using heart wall structure information only, or single plane blood flow information - so there is a need for 3D + time resolved data to fully understand cardiac function. We present an imaging platform combining selective plane illumination microscopy (SPIM) with micro particle image velocimetry ({\textmu}PIV) to enable 3D-resolved flow mapping in a microscopic environment, free from many of the sources of error and bias present in traditional epifluorescence-based {\textmu}PIV systems. By using our new system in conjunction with optical heart beat synchronisation, we demonstrate the ability to obtain non-invasive 3D + time resolved blood flow measurements in the heart of a living zebrafish embryo.
|
2111.13344
|
Matteo Smerlak
|
Camila Br\"autigam, Matteo Smerlak
|
Diffusion approximations in population genetics and the rate of Muller's
ratchet
|
17 pages, 5 figures
|
Journal of Theoretical Biology 550, 111236, 2022
|
10.1016/j.jtbi.2022.111236
| null |
q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
Diffusion theory is a central tool of modern population genetics, yielding
simple expressions for fixation probabilities and other quantities that are not
easily derived from the underlying Wright-Fisher model. Unfortunately, the
textbook derivation of diffusion equations as scaling limits requires
evolutionary parameters (selection coefficients, mutation rates) to scale like
the inverse population size -- a severe restriction that does not always
reflect biological reality. Here we note that the Wright-Fisher model can be
approximated by diffusion equations under more general conditions, including in
regimes where selection and/or mutation are strong compared to genetic drift.
As an illustration, we use a diffusion approximation of the Wright-Fisher model
to improve estimates for the expected time to fixation of a strongly
deleterious allele, i.e. the rate of Muller's ratchet.
|
[
{
"created": "Fri, 26 Nov 2021 07:50:00 GMT",
"version": "v1"
}
] |
2022-12-19
|
[
[
"Bräutigam",
"Camila",
""
],
[
"Smerlak",
"Matteo",
""
]
] |
Diffusion theory is a central tool of modern population genetics, yielding simple expressions for fixation probabilities and other quantities that are not easily derived from the underlying Wright-Fisher model. Unfortunately, the textbook derivation of diffusion equations as scaling limits requires evolutionary parameters (selection coefficients, mutation rates) to scale like the inverse population size -- a severe restriction that does not always reflect biological reality. Here we note that the Wright-Fisher model can be approximated by diffusion equations under more general conditions, including in regimes where selection and/or mutation are strong compared to genetic drift. As an illustration, we use a diffusion approximation of the Wright-Fisher model to improve estimates for the expected time to fixation of a strongly deleterious allele, i.e. the rate of Muller's ratchet.
|