id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1802.03949 | Guido Tiana | Matteo Negri, Marco Gherardi, Guido Tiana, Marco Cosentino Lagomarsino | Spontaneous domain formation in disordered copolymers as a mechanism for
chromosome structuring | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by the problem of domain formation in chromosomes, we studied a
copolymer model where only a subset of the monomers feel attractive
interactions. These monomers are displaced randomly from a regularly-spaced
pattern, thus introducing some quenched disorder in the system. Previous work
has shown that in the case of regularly-spaced interacting monomers this chain
can fold into structures characterized by multiple distinct domains of
consecutive segments. In each domain, attractive interactions are balanced by
the entropy cost of forming loops. We show by advanced replica-exchange
simulations that adding disorder in the position of the interacting monomers
further stabilizes these domains. The model suggests that the partitioning of
the chain into well-defined domains of consecutive monomers is a spontaneous
property of heteropolymers. In the case of chromosomes, evolution could have
acted on the spacing of interacting monomers to modulate in a simple way the
underlying domains for functional reasons.
| [
{
"created": "Mon, 12 Feb 2018 09:53:33 GMT",
"version": "v1"
}
] | 2018-02-13 | [
[
"Negri",
"Matteo",
""
],
[
"Gherardi",
"Marco",
""
],
[
"Tiana",
"Guido",
""
],
[
"Lagomarsino",
"Marco Cosentino",
""
]
] | Motivated by the problem of domain formation in chromosomes, we studied a copolymer model where only a subset of the monomers feel attractive interactions. These monomers are displaced randomly from a regularly-spaced pattern, thus introducing some quenched disorder in the system. Previous work has shown that in the case of regularly-spaced interacting monomers this chain can fold into structures characterized by multiple distinct domains of consecutive segments. In each domain, attractive interactions are balanced by the entropy cost of forming loops. We show by advanced replica-exchange simulations that adding disorder in the position of the interacting monomers further stabilizes these domains. The model suggests that the partitioning of the chain into well-defined domains of consecutive monomers is a spontaneous property of heteropolymers. In the case of chromosomes, evolution could have acted on the spacing of interacting monomers to modulate in a simple way the underlying domains for functional reasons. |
1606.09545 | Kimberly Schlesinger | Elizabeth N. Davison, Benjamin O. Turner, Kimberly J. Schlesinger,
Michael B. Miller, Scott T. Grafton, Danielle S. Bassett, Jean M. Carlson | Individual Differences in Dynamic Functional Brain Connectivity Across
the Human Lifespan | null | null | 10.1371/journal.pcbi.1005178 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Individual differences in brain functional networks may be related to complex
personal identifiers, including health, age, and ability. Understanding and
quantifying these differences is a necessary first step towards developing
predictive methods derived from network topology. Here, we present a method to
quantify individual differences in brain functional dynamics by applying
hypergraph analysis, a method from dynamic network theory. Using a summary
metric derived from the hypergraph formalism---hypergraph cardinality---we
investigate individual variations in two separate and complementary data sets.
The first data set ("multi-task") consists of 77 individuals engaging in four
consecutive cognitive tasks. We observed that hypergraph cardinality exhibits
variation across individuals while remaining consistent within individuals
between tasks; moreover, one of the memory tasks evinced a marginally
significant correspondence between hypergraph cardinality and age. This finding
motivated a similar analysis of the second data set ("age-memory"), in which 95
individuals of varying ages performed a memory task with a similar structure to
the multi-task memory task. With the increased age range in the age-memory data
set, the correlation between hypergraph cardinality and age
becomes significant. We discuss these results in the context of the well-known
finding linking age with network structure, and suggest that age-related
changes in brain function can be better understood by taking an integrative
approach that incorporates information about the dynamics of functional
interactions.
| [
{
"created": "Thu, 30 Jun 2016 15:51:14 GMT",
"version": "v1"
}
] | 2017-02-08 | [
[
"Davison",
"Elizabeth N.",
""
],
[
"Turner",
"Benjamin O.",
""
],
[
"Schlesinger",
"Kimberly J.",
""
],
[
"Miller",
"Michael B.",
""
],
[
"Grafton",
"Scott T.",
""
],
[
"Bassett",
"Danielle S.",
""
],
[
"Carlson",
      ... | Individual differences in brain functional networks may be related to complex personal identifiers, including health, age, and ability. Understanding and quantifying these differences is a necessary first step towards developing predictive methods derived from network topology. Here, we present a method to quantify individual differences in brain functional dynamics by applying hypergraph analysis, a method from dynamic network theory. Using a summary metric derived from the hypergraph formalism---hypergraph cardinality---we investigate individual variations in two separate and complementary data sets. The first data set ("multi-task") consists of 77 individuals engaging in four consecutive cognitive tasks. We observed that hypergraph cardinality exhibits variation across individuals while remaining consistent within individuals between tasks; moreover, one of the memory tasks evinced a marginally significant correspondence between hypergraph cardinality and age. This finding motivated a similar analysis of the second data set ("age-memory"), in which 95 individuals of varying ages performed a memory task with a similar structure to the multi-task memory task. With the increased age range in the age-memory data set, the correlation between hypergraph cardinality and age becomes significant. We discuss these results in the context of the well-known finding linking age with network structure, and suggest that age-related changes in brain function can be better understood by taking an integrative approach that incorporates information about the dynamics of functional interactions. |
2206.06225 | Sophia Tushak | Sophia K. Tushak (1), Bronislaw D. Gepner (1), Jason L. Forman (1),
Jason J. Hallman (2), Bengt Pipkorn (3), Jason R. Kerrigan (1) ((1)
University of Virginia, (2) Toyota Motor Engineering & Manufacturing North
America, Inc., (3) Autoliv Research) | Human lumbar spine injury risk in dynamic combined compression and
flexion loading | 24 pages total, 4 figures, 1 table. Updates to clarify variable units
in the injury risk function development and implementation | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Anticipating changes to vehicle interiors with future automated driving
systems, the automobile industry recently has focused attention on crash
response in relaxed postures with increased seatback recline. Prior research
found that this posture may result in greater risk of lumbar spine injury in
the event of a frontal crash. This study developed a lumbar spine injury risk
function that estimated injury risk as a function of simultaneously applied
compression force and flexion moment. Force and moment failure data from 40
compression-flexion tests were utilized in a Weibull survival model, including
appropriate data censoring. A mechanics-based injury metric was formulated,
where lumbar spine compression force and flexion moment were normalized to
specimen geometry. Subject age was incorporated as a covariate to further
improve model fit. A weighting factor was included to adjust the influence of
force and moment, and parameter optimization yielded a value of 0.11. Thus, the
normalized compression force component had a greater effect on injury risk than
the normalized flexion moment component. Additionally, as force was nominally
increased, less moment was required to produce injury for a given age and
specimen geometry. The resulting injury risk function can be utilized to
improve occupant safety in the field.
| [
{
"created": "Mon, 13 Jun 2022 14:56:29 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Jun 2022 15:49:12 GMT",
"version": "v2"
}
] | 2022-06-22 | [
[
"Tushak",
"Sophia K.",
""
],
[
"Gepner",
"Bronislaw D.",
""
],
[
"Forman",
"Jason L.",
""
],
[
"Hallman",
"Jason J.",
""
],
[
"Pipkorn",
"Bengt",
""
],
[
"Kerrigan",
"Jason R.",
""
]
] | Anticipating changes to vehicle interiors with future automated driving systems, the automobile industry recently has focused attention on crash response in relaxed postures with increased seatback recline. Prior research found that this posture may result in greater risk of lumbar spine injury in the event of a frontal crash. This study developed a lumbar spine injury risk function that estimated injury risk as a function of simultaneously applied compression force and flexion moment. Force and moment failure data from 40 compression-flexion tests were utilized in a Weibull survival model, including appropriate data censoring. A mechanics-based injury metric was formulated, where lumbar spine compression force and flexion moment were normalized to specimen geometry. Subject age was incorporated as a covariate to further improve model fit. A weighting factor was included to adjust the influence of force and moment, and parameter optimization yielded a value of 0.11. Thus, the normalized compression force component had a greater effect on injury risk than the normalized flexion moment component. Additionally, as force was nominally increased, less moment was required to produce injury for a given age and specimen geometry. The resulting injury risk function can be utilized to improve occupant safety in the field. |
1812.04162 | Allan Haldane | Allan Haldane and Ronald M. Levy | Influence of Multiple Sequence Alignment Depth on Potts Statistical
Models of Protein Covariation | null | Phys. Rev. E 99, 032405 (2019) | 10.1103/PhysRevE.99.032405 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Potts statistical models have become a popular and promising way to analyze
mutational covariation in protein Multiple Sequence Alignments (MSAs) in order
to understand protein structure, function and fitness. But the statistical
limitations of these models, which can have millions of parameters and are fit
to MSAs of only thousands or hundreds of effective sequences using a procedure
known as inverse Ising inference, are incompletely understood. In this work we
predict how model quality degrades as a function of the number of sequences
$N$, sequence length $L$, amino-acid alphabet size $q$, and the degree of
conservation of the MSA, in different applications of the Potts models: In
"fitness" predictions of individual protein sequences, in predictions of the
effects of single-point mutations, in "double mutant cycle" predictions of
epistasis, and in 3-d contact prediction in protein structure. We show how as
MSA depth $N$ decreases an "overfitting" effect occurs such that sequences in
the training MSA have overestimated fitness, and we predict the magnitude of
this effect and discuss how regularization can help correct for it, using a
regularization procedure motivated by statistical analysis of the effects of
finite sampling. We find that as $N$ decreases the quality of point-mutation
effect predictions degrades least, fitness and epistasis predictions degrade
more rapidly, and contact predictions are most affected. However, overfitting
becomes negligible for MSA depths of more than a few thousand effective
sequences, as often used in practice, and regularization becomes less
necessary. We discuss the implications of these results for users of Potts
covariation analysis.
| [
{
"created": "Tue, 11 Dec 2018 00:36:21 GMT",
"version": "v1"
},
{
"created": "Sat, 19 Jan 2019 01:08:26 GMT",
"version": "v2"
}
] | 2019-03-13 | [
[
"Haldane",
"Allan",
""
],
[
"Levy",
"Ronald M.",
""
]
] | Potts statistical models have become a popular and promising way to analyze mutational covariation in protein Multiple Sequence Alignments (MSAs) in order to understand protein structure, function and fitness. But the statistical limitations of these models, which can have millions of parameters and are fit to MSAs of only thousands or hundreds of effective sequences using a procedure known as inverse Ising inference, are incompletely understood. In this work we predict how model quality degrades as a function of the number of sequences $N$, sequence length $L$, amino-acid alphabet size $q$, and the degree of conservation of the MSA, in different applications of the Potts models: In "fitness" predictions of individual protein sequences, in predictions of the effects of single-point mutations, in "double mutant cycle" predictions of epistasis, and in 3-d contact prediction in protein structure. We show how as MSA depth $N$ decreases an "overfitting" effect occurs such that sequences in the training MSA have overestimated fitness, and we predict the magnitude of this effect and discuss how regularization can help correct for it, using a regularization procedure motivated by statistical analysis of the effects of finite sampling. We find that as $N$ decreases the quality of point-mutation effect predictions degrades least, fitness and epistasis predictions degrade more rapidly, and contact predictions are most affected. However, overfitting becomes negligible for MSA depths of more than a few thousand effective sequences, as often used in practice, and regularization becomes less necessary. We discuss the implications of these results for users of Potts covariation analysis. |
2303.16903 | Rasmus Netterstr{\o}m | Rasmus Netterstr{\o}m, Nikolay Kutuzov, Sune Darkner, Maurits
J{\o}rring Pallesen, Martin Johannes Lauritzen, Kenny Erleben, Francois Lauze | Deep Learning-Assisted Localisation of Nanoparticles in synthetically
generated two-photon microscopy images | null | null | null | null | q-bio.QM cs.AI cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Tracking single molecules is instrumental for quantifying the transport of
molecules and nanoparticles in biological samples, e.g., in brain drug delivery
studies. Existing intensity-based localisation methods are not developed for
imaging with a scanning microscope, typically used for in vivo imaging. Low
signal-to-noise ratios, movement of molecules out-of-focus, and high motion
blur on images recorded with scanning two-photon microscopy (2PM) in vivo pose
a challenge to the accurate localisation of molecules. Using data-driven models
is challenging due to low data volumes, typical for in vivo experiments. We
developed a 2PM image simulator to supplement scarce training data. The
simulator mimics realistic motion blur, background fluorescence, and shot noise
observed in in vivo imaging. Training a data-driven model with simulated data
improves localisation quality in simulated images and shows why intensity-based
methods fail.
| [
{
"created": "Fri, 17 Mar 2023 14:48:50 GMT",
"version": "v1"
}
] | 2023-03-31 | [
[
"Netterstrøm",
"Rasmus",
""
],
[
"Kutuzov",
"Nikolay",
""
],
[
"Darkner",
"Sune",
""
],
[
"Pallesen",
"Maurits Jørring",
""
],
[
"Lauritzen",
"Martin Johannes",
""
],
[
"Erleben",
"Kenny",
""
],
[
"Lauze",
    "Fra... | Tracking single molecules is instrumental for quantifying the transport of molecules and nanoparticles in biological samples, e.g., in brain drug delivery studies. Existing intensity-based localisation methods are not developed for imaging with a scanning microscope, typically used for in vivo imaging. Low signal-to-noise ratios, movement of molecules out-of-focus, and high motion blur on images recorded with scanning two-photon microscopy (2PM) in vivo pose a challenge to the accurate localisation of molecules. Using data-driven models is challenging due to low data volumes, typical for in vivo experiments. We developed a 2PM image simulator to supplement scarce training data. The simulator mimics realistic motion blur, background fluorescence, and shot noise observed in in vivo imaging. Training a data-driven model with simulated data improves localisation quality in simulated images and shows why intensity-based methods fail. |
2204.07054 | Yanqiao Zhu | Hejie Cui and Wei Dai and Yanqiao Zhu and Xuan Kan and Antonio Aodong
Chen Gu and Joshua Lukemire, Liang Zhan, Lifang He, Ying Guo, Carl Yang | BrainGB: A Benchmark for Brain Network Analysis with Graph Neural
Networks | IEEE Transactions on Medical Imaging | null | 10.1109/TMI.2022.3218745 | null | q-bio.NC cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mapping the connectome of the human brain using structural or functional
connectivity has become one of the most pervasive paradigms for neuroimaging
analysis. Recently, Graph Neural Networks (GNNs) motivated from geometric deep
learning have attracted broad interest due to their established power for
modeling complex networked data. Despite their superior performance in many
fields, there has not yet been a systematic study of how to design effective
GNNs for brain network analysis. To bridge this gap, we present BrainGB, a
benchmark for brain network analysis with GNNs. BrainGB standardizes the
process by (1) summarizing brain network construction pipelines for both
functional and structural neuroimaging modalities and (2) modularizing the
implementation of GNN designs. We conduct extensive experiments on datasets
across cohorts and modalities and recommend a set of general recipes for
effective GNN designs on brain networks. To support open and reproducible
research on GNN-based brain network analysis, we host the BrainGB website at
https://braingb.us with models, tutorials, examples, as well as an out-of-box
Python package. We hope that this work will provide useful empirical evidence
and offer insights for future research in this novel and promising direction.
| [
{
"created": "Thu, 17 Mar 2022 08:31:13 GMT",
"version": "v1"
},
{
"created": "Sat, 15 Oct 2022 20:32:05 GMT",
"version": "v2"
},
{
"created": "Tue, 29 Nov 2022 04:12:29 GMT",
"version": "v3"
}
] | 2022-11-30 | [
[
"Cui",
"Hejie",
""
],
[
"Dai",
"Wei",
""
],
[
"Zhu",
"Yanqiao",
""
],
[
"Kan",
"Xuan",
""
],
[
"Gu",
"Antonio Aodong Chen",
""
],
[
"Lukemire",
"Joshua",
""
],
[
"Zhan",
"Liang",
""
],
[
"He",
"... | Mapping the connectome of the human brain using structural or functional connectivity has become one of the most pervasive paradigms for neuroimaging analysis. Recently, Graph Neural Networks (GNNs) motivated from geometric deep learning have attracted broad interest due to their established power for modeling complex networked data. Despite their superior performance in many fields, there has not yet been a systematic study of how to design effective GNNs for brain network analysis. To bridge this gap, we present BrainGB, a benchmark for brain network analysis with GNNs. BrainGB standardizes the process by (1) summarizing brain network construction pipelines for both functional and structural neuroimaging modalities and (2) modularizing the implementation of GNN designs. We conduct extensive experiments on datasets across cohorts and modalities and recommend a set of general recipes for effective GNN designs on brain networks. To support open and reproducible research on GNN-based brain network analysis, we host the BrainGB website at https://braingb.us with models, tutorials, examples, as well as an out-of-box Python package. We hope that this work will provide useful empirical evidence and offer insights for future research in this novel and promising direction. |
2209.09372 | Milo M. Lin | Milo M. Lin | Thermodynamic force thresholds biomolecular behavior | null | null | null | null | q-bio.SC cond-mat.stat-mech physics.bio-ph q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | In living systems, collective molecular behavior is driven by thermodynamic
forces in the form of chemical gradients. Leveraging recent advances in the
field of nonequilibrium physics, I show that increasing the thermodynamic force
alone can induce qualitatively new behavior. To demonstrate this principle,
general equations governing kinetic proofreading and microtubule assembly are
derived. These equations show that new capabilities, including catalytic
regulation of steady-state behavior and exponential enhancement of molecular
discrimination, are only possible if the system is driven sufficiently far from
equilibrium, and can emerge sharply at a threshold force. Regardless of design
parameters, these results reveal that the thermodynamic force sets fundamental
performance limits on tuning sensitivity, error, and waste. Experimental data
show that these biomolecular processes operate at the limits allowed by theory.
| [
{
"created": "Mon, 19 Sep 2022 22:49:51 GMT",
"version": "v1"
}
] | 2022-09-21 | [
[
"Lin",
"Milo M.",
""
]
] | In living systems, collective molecular behavior is driven by thermodynamic forces in the form of chemical gradients. Leveraging recent advances in the field of nonequilibrium physics, I show that increasing the thermodynamic force alone can induce qualitatively new behavior. To demonstrate this principle, general equations governing kinetic proofreading and microtubule assembly are derived. These equations show that new capabilities, including catalytic regulation of steady-state behavior and exponential enhancement of molecular discrimination, are only possible if the system is driven sufficiently far from equilibrium, and can emerge sharply at a threshold force. Regardless of design parameters, these results reveal that the thermodynamic force sets fundamental performance limits on tuning sensitivity, error, and waste. Experimental data show that these biomolecular processes operate at the limits allowed by theory. |
2103.09554 | Mar\'ia Vallet-Regi | JL Paris, C Mannaris, M Cabanas, R Carlisle, M Manzano, M Vallet-Regi,
CC Coussios | Ultrasound-mediated Cavitation-enhanced extravasation of mesoporous
silica nanoparticles for controlled-release drug delivery | 27 pages, 6 figures | Chem. Eng. J., 340, 2-8 (2018) | 10.1016/j.cej.2017.12.051 | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Mesoporous silica nanoparticles have been reported as suitable drug carriers,
but their successful delivery to target tissues following systemic
administration remains a challenge. In the present work, ultrasound-induced
inertial cavitation was evaluated as a mechanism to promote their extravasation
in a flow-through tissue-mimicking agarose phantom. Two different ultrasound
frequencies, 0.5 or 1.6 MHz, with pressures in the range 0.5-4 MPa were used to
drive cavitation activity which was detected in real time. The optimal
ultrasound conditions identified were employed to deliver dye-loaded
nanoparticles as a model for drug-loaded nanocarriers, with the level of
extravasation evaluated by fluorescence microscopy. The same nanoparticles were
then co-injected with submicrometric polymeric cavitation nuclei as a means to
promote cavitation activity and decrease the in-situ acoustic pressure
required to attain extravasation. The overall cavitation energy and penetration
of the combination was compared to mesoporous silica nanoparticles alone. The
results of the present work suggest that combining mesoporous silica
nanocarriers and submicrometric cavitation nuclei may help enhance the
extravasation of the nanocarrier, thus enabling subsequent sustained drug
release to happen from those particles already embedded in the tumour tissue.
| [
{
"created": "Wed, 17 Mar 2021 10:35:10 GMT",
"version": "v1"
}
] | 2021-03-18 | [
[
"Paris",
"JL",
""
],
[
    "Mannaris",
"C",
""
],
[
"Cabanas",
"M",
""
],
[
    "Carlisle",
"R",
""
],
[
"Manzano",
"M",
""
],
[
"Vallet-Regi",
"M",
""
],
[
"Coussios",
"CC",
""
]
] | Mesoporous silica nanoparticles have been reported as suitable drug carriers, but their successful delivery to target tissues following systemic administration remains a challenge. In the present work, ultrasound-induced inertial cavitation was evaluated as a mechanism to promote their extravasation in a flow-through tissue-mimicking agarose phantom. Two different ultrasound frequencies, 0.5 or 1.6 MHz, with pressures in the range 0.5-4 MPa were used to drive cavitation activity which was detected in real time. The optimal ultrasound conditions identified were employed to deliver dye-loaded nanoparticles as a model for drug-loaded nanocarriers, with the level of extravasation evaluated by fluorescence microscopy. The same nanoparticles were then co-injected with submicrometric polymeric cavitation nuclei as a means to promote cavitation activity and decrease the in-situ acoustic pressure required to attain extravasation. The overall cavitation energy and penetration of the combination was compared to mesoporous silica nanoparticles alone. The results of the present work suggest that combining mesoporous silica nanocarriers and submicrometric cavitation nuclei may help enhance the extravasation of the nanocarrier, thus enabling subsequent sustained drug release to happen from those particles already embedded in the tumour tissue. |
2407.06195 | Jared Deighton | Jared Deighton, Wyatt Mackey, Ioannis Schizas, David L. Boothe Jr.,
Vasileios Maroulas | Higher-Order Spatial Information for Self-Supervised Place Cell Learning | null | null | null | null | q-bio.NC cs.IT cs.LG cs.NE math.IT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Mammals navigate novel environments and exhibit resilience to sparse
environmental sensory cues via place and grid cells, which encode position in
space. While the efficiency of grid cell coding has been extensively studied,
the computational role of place cells is less well understood. This gap arises
partially because spatial information measures have, until now, been limited to
single place cells. We derive and implement a higher-order spatial information
measure, allowing for the study of the emergence of multiple place cells in a
self-supervised manner. We show that emergent place cells have many desirable
features, including high-accuracy spatial decoding. This is the first work in
which higher-order spatial information measures that depend solely on place
cells' firing rates have been derived and which focuses on the emergence of
multiple place cells via self-supervised learning. By quantifying the spatial
information of multiple place cells, we enhance our understanding of place cell
formation and capabilities in recurrent neural networks, thereby improving the
potential navigation capabilities of artificial systems in novel environments
without objective location information.
| [
{
"created": "Mon, 10 Jun 2024 15:03:54 GMT",
"version": "v1"
}
] | 2024-07-10 | [
[
"Deighton",
"Jared",
""
],
[
"Mackey",
"Wyatt",
""
],
[
"Schizas",
"Ioannis",
""
],
[
"Boothe",
"David L.",
"Jr."
],
[
"Maroulas",
"Vasileios",
""
]
] | Mammals navigate novel environments and exhibit resilience to sparse environmental sensory cues via place and grid cells, which encode position in space. While the efficiency of grid cell coding has been extensively studied, the computational role of place cells is less well understood. This gap arises partially because spatial information measures have, until now, been limited to single place cells. We derive and implement a higher-order spatial information measure, allowing for the study of the emergence of multiple place cells in a self-supervised manner. We show that emergent place cells have many desirable features, including high-accuracy spatial decoding. This is the first work in which higher-order spatial information measures that depend solely on place cells' firing rates have been derived and which focuses on the emergence of multiple place cells via self-supervised learning. By quantifying the spatial information of multiple place cells, we enhance our understanding of place cell formation and capabilities in recurrent neural networks, thereby improving the potential navigation capabilities of artificial systems in novel environments without objective location information. |
1002.2251 | Steve Yaeli | Steve Yaeli, Ron Meir | MSE-based analysis of optimal tuning functions predicts phenomena
observed in sensory neurons | Submitted to Frontiers in Computational Neuroscience | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biological systems display impressive capabilities in effectively responding
to environmental signals in real time. There is increasing evidence that
organisms may indeed be employing near optimal Bayesian calculations in their
decision-making. An intriguing question relates to the properties of optimal
encoding methods, namely determining the properties of neural populations in
sensory layers that optimize performance, subject to physiological constraints.
Within an ecological theory of neural encoding/decoding, we show that optimal
Bayesian performance requires neural adaptation which reflects environmental
changes. Specifically, we predict that neuronal tuning functions possess an
optimal width, which increases with prior uncertainty and environmental noise,
and decreases with the decoding time window. Furthermore, even for static
stimuli, we demonstrate that dynamic sensory tuning functions, acting at
relatively short time scales, lead to improved performance. Interestingly, the
narrowing of tuning functions as a function of time was recently observed in
several biological systems. Such results set the stage for a functional theory
which may explain the high reliability of sensory systems, and the utility of
neuronal adaptation occurring at multiple time scales.
| [
{
"created": "Thu, 11 Feb 2010 15:39:20 GMT",
"version": "v1"
}
] | 2010-02-12 | [
[
"Yaeli",
"Steve",
""
],
[
"Meir",
"Ron",
""
]
] | Biological systems display impressive capabilities in effectively responding to environmental signals in real time. There is increasing evidence that organisms may indeed be employing near optimal Bayesian calculations in their decision-making. An intriguing question relates to the properties of optimal encoding methods, namely determining the properties of neural populations in sensory layers that optimize performance, subject to physiological constraints. Within an ecological theory of neural encoding/decoding, we show that optimal Bayesian performance requires neural adaptation which reflects environmental changes. Specifically, we predict that neuronal tuning functions possess an optimal width, which increases with prior uncertainty and environmental noise, and decreases with the decoding time window. Furthermore, even for static stimuli, we demonstrate that dynamic sensory tuning functions, acting at relatively short time scales, lead to improved performance. Interestingly, the narrowing of tuning functions as a function of time was recently observed in several biological systems. Such results set the stage for a functional theory which may explain the high reliability of sensory systems, and the utility of neuronal adaptation occurring at multiple time scales. |
1712.01417 | Gopal P. Sarma | Gopal P. Sarma and Victor Faundez | Integrative biological simulation praxis: Considerations from physics,
philosophy, and data/model curation practices | 10 pages | Cellular Logistics, Volume 7, No. 4 e1392400 (2017) | 10.1080/21592799.2017.1392400 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Integrative biological simulations have a varied and controversial history in
the biological sciences. From computational models of organelles, cells, and
simple organisms, to physiological models of tissues, organ systems, and
ecosystems, a diverse array of biological systems have been the target of
large-scale computational modeling efforts. Nonetheless, these research agendas
have yet to prove decisively their value among the broader community of
theoretical and experimental biologists. In this commentary, we examine a range
of philosophical and practical issues relevant to understanding the potential
of integrative simulations. We discuss the role of theory and modeling in
different areas of physics and suggest that certain sub-disciplines of physics
provide useful cultural analogies for imagining the future role of simulations
in biological research. We examine philosophical issues related to modeling
which consistently arise in discussions about integrative simulations and
suggest a pragmatic viewpoint that balances a belief in philosophy with the
recognition of the relative infancy of our state of philosophical
understanding. Finally, we discuss community workflow and publication practices
to allow research to be readily discoverable and amenable to incorporation into
simulations. We argue that there are aligned incentives in widespread adoption
of practices which will both advance the needs of integrative simulation
efforts as well as other contemporary trends in the biological sciences,
ranging from open science and data sharing to improving reproducibility.
| [
{
"created": "Mon, 4 Dec 2017 23:43:54 GMT",
"version": "v1"
}
] | 2017-12-06 | [
[
"Sarma",
"Gopal P.",
""
],
[
"Faundez",
"Victor",
""
]
] | Integrative biological simulations have a varied and controversial history in the biological sciences. From computational models of organelles, cells, and simple organisms, to physiological models of tissues, organ systems, and ecosystems, a diverse array of biological systems have been the target of large-scale computational modeling efforts. Nonetheless, these research agendas have yet to prove decisively their value among the broader community of theoretical and experimental biologists. In this commentary, we examine a range of philosophical and practical issues relevant to understanding the potential of integrative simulations. We discuss the role of theory and modeling in different areas of physics and suggest that certain sub-disciplines of physics provide useful cultural analogies for imagining the future role of simulations in biological research. We examine philosophical issues related to modeling which consistently arise in discussions about integrative simulations and suggest a pragmatic viewpoint that balances a belief in philosophy with the recognition of the relative infancy of our state of philosophical understanding. Finally, we discuss community workflow and publication practices to allow research to be readily discoverable and amenable to incorporation into simulations. We argue that there are aligned incentives in widespread adoption of practices which will both advance the needs of integrative simulation efforts as well as other contemporary trends in the biological sciences, ranging from open science and data sharing to improving reproducibility. |
2112.04172 | Adrian Liston | Carlos P. Roca, Oliver T. Burton, Julika Neumann, Samar Tareen, Carly
E. Whyte, St\'ephanie Humblet-Baron and Adrian Liston | A Cross Entropy test allows quantitative statistical comparison of t-SNE
and UMAP representations | 26 pages, 5 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The advent of high dimensional single cell data in the biomedical sciences
has necessitated the development of dimensionality-reduction tools. t-SNE and
UMAP are the two most frequently used approaches, allowing clear visualisation
of highly complex single cell datasets. Despite the ubiquity of these
approaches and the clear need for quantitative comparison of single cell
datasets, t-SNE and UMAP have largely remained data visualisation tools due to
the lack of robust statistical approaches available. Here, we have derived a
statistical test for evaluating the difference between dimensionality-reduced
datasets, using the Kolmogorov-Smirnov test on the distributions of cross
entropy of single cells within each dataset. As the approach uses the
interrelationship of single cells for comparison, the resulting statistic is
robust and capable of distinguishing between true biological variation and
rotational symmetry generation during dimensionality reduction. Further, the
test provides a valid distance between single cell datasets, allowing the
organisation of multiple samples into a dendrogram for quantitative comparison
of complex datasets. These results demonstrate the largely untapped potential
of dimensionality-reduction tools for biomedical data analysis beyond
visualisation.
| [
{
"created": "Wed, 8 Dec 2021 08:45:22 GMT",
"version": "v1"
}
] | 2021-12-09 | [
[
"Roca1",
"Carlos P.",
""
],
[
"Burton",
"Oliver T.",
""
],
[
"Neumann",
"Julika",
""
],
[
"Tareen",
"Samar",
""
],
[
"Whyte",
"Carly E.",
""
],
[
"Humblet-Baron",
"Stéphanie",
""
],
[
"Liston",
"Adrian",
""... | The advent of high dimensional single cell data in the biomedical sciences has necessitated the development of dimensionality-reduction tools. t-SNE and UMAP are the two most frequently used approaches, allowing clear visualisation of highly complex single cell datasets. Despite the ubiquity of these approaches and the clear need for quantitative comparison of single cell datasets, t-SNE and UMAP have largely remained data visualisation tools due to the lack of robust statistical approaches available. Here, we have derived a statistical test for evaluating the difference between dimensionality-reduced datasets, using the Kolmogorov-Smirnov test on the distributions of cross entropy of single cells within each dataset. As the approach uses the interrelationship of single cells for comparison, the resulting statistic is robust and capable of distinguishing between true biological variation and rotational symmetry generation during dimensionality reduction. Further, the test provides a valid distance between single cell datasets, allowing the organisation of multiple samples into a dendrogram for quantitative comparison of complex datasets. These results demonstrate the largely untapped potential of dimensionality-reduction tools for biomedical data analysis beyond visualisation. |
1201.6643 | Rosemary Redfield | M. L. Reaves, S. Sinha, J. D. Rabinowitz, L. Kruglyak, R. J. Redfield | Absence of arsenate in DNA from arsenate-grown GFAJ-1 cells | Originally submitted to Science January 30 2012. This is the revised
version, resubmitted on April 13 2012. It has not been officially accepted | null | 10.1126/science.1219861 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A strain of Halomonas bacteria, GFAJ-1, has been reported to be able to use
arsenate as a nutrient when phosphate is limiting, and to specifically
incorporate arsenic into its DNA in place of phosphorus. However, we have found
that arsenate does not contribute to growth of GFAJ-1 when phosphate is
limiting and that DNA purified from cells grown with limiting phosphate and
abundant arsenate does not exhibit the spontaneous hydrolysis expected of
arsenate ester bonds. Furthermore, mass spectrometry showed that this DNA
contains only trace amounts of free arsenate and no detectable covalently bound
arsenate.
| [
{
"created": "Tue, 31 Jan 2012 18:37:30 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Apr 2012 20:35:30 GMT",
"version": "v2"
}
] | 2015-06-04 | [
[
"Reaves",
"M. L.",
""
],
[
"Sinha",
"S.",
""
],
[
"Rabinowitz",
"J. D.",
""
],
[
"Kruglyak",
"L.",
""
],
[
"Redfield",
"R. J.",
""
]
] | A strain of Halomonas bacteria, GFAJ-1, has been reported to be able to use arsenate as a nutrient when phosphate is limiting, and to specifically incorporate arsenic into its DNA in place of phosphorus. However, we have found that arsenate does not contribute to growth of GFAJ-1 when phosphate is limiting and that DNA purified from cells grown with limiting phosphate and abundant arsenate does not exhibit the spontaneous hydrolysis expected of arsenate ester bonds. Furthermore, mass spectrometry showed that this DNA contains only trace amounts of free arsenate and no detectable covalently bound arsenate. |
1603.06230 | Josh Merel | Josh Merel, Ben Shababo, Alex Naka, Hillel Adesnik, Liam Paninski | Bayesian methods for event analysis of intracellular currents | null | null | 10.1016/j.jneumeth.2016.05.015 | null | q-bio.QM q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Investigation of neural circuit functioning often requires statistical
interpretation of events in subthreshold electrophysiological recordings. This
problem is non-trivial because recordings may have moderate levels of
structured noise and events may have distinct kinetics. In addition, novel
experimental designs that combine optical and electrophysiological methods will
depend upon statistical tools that combine multimodal data. We present a
Bayesian approach for inferring the timing, strength, and kinetics of
postsynaptic currents (PSCs) from voltage-clamp recordings on a per event
basis. The simple generative model for a single voltage-clamp recording
flexibly extends to include network-level structure to enable experiments
designed to probe synaptic connectivity. We validate the approach on simulated
and real data. We also demonstrate that extensions of the basic PSC detection
algorithm can handle recordings contaminated with optically evoked currents,
and we simulate a scenario in which calcium imaging observations, available for
a subset of neurons, can be fused with electrophysiological data to achieve
higher temporal resolution. We apply this approach to simulated and real ground
truth data to demonstrate its higher sensitivity in detecting small
signal-to-noise events and its increased robustness to noise compared to
standard methods for detecting PSCs. The new Bayesian event analysis approach
for electrophysiological recordings should allow for better estimation of
physiological parameters under more variable conditions and help support new
experimental designs for circuit mapping.
| [
{
"created": "Sun, 20 Mar 2016 15:44:40 GMT",
"version": "v1"
},
{
"created": "Wed, 18 May 2016 16:35:52 GMT",
"version": "v2"
}
] | 2016-05-19 | [
[
"Merel",
"Josh",
""
],
[
"Shababo",
"Ben",
""
],
[
"Naka",
"Alex",
""
],
[
"Adesnik",
"Hillel",
""
],
[
"Paninski",
"Liam",
""
]
] | Investigation of neural circuit functioning often requires statistical interpretation of events in subthreshold electrophysiological recordings. This problem is non-trivial because recordings may have moderate levels of structured noise and events may have distinct kinetics. In addition, novel experimental designs that combine optical and electrophysiological methods will depend upon statistical tools that combine multimodal data. We present a Bayesian approach for inferring the timing, strength, and kinetics of postsynaptic currents (PSCs) from voltage-clamp recordings on a per event basis. The simple generative model for a single voltage-clamp recording flexibly extends to include network-level structure to enable experiments designed to probe synaptic connectivity. We validate the approach on simulated and real data. We also demonstrate that extensions of the basic PSC detection algorithm can handle recordings contaminated with optically evoked currents, and we simulate a scenario in which calcium imaging observations, available for a subset of neurons, can be fused with electrophysiological data to achieve higher temporal resolution. We apply this approach to simulated and real ground truth data to demonstrate its higher sensitivity in detecting small signal-to-noise events and its increased robustness to noise compared to standard methods for detecting PSCs. The new Bayesian event analysis approach for electrophysiological recordings should allow for better estimation of physiological parameters under more variable conditions and help support new experimental designs for circuit mapping. |
1809.04334 | Guo-Wei Wei | David Bramer and Guo-Wei Wei | Blind prediction of protein B-factor and flexibility | 5 figures, 23 pages | Journal of Chemical Physics, 2018 | 10.1063/1.5048469 | null | q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Debye-Waller factor, a measure of X-ray attenuation, can be experimentally
observed in protein X-ray crystallography. Previous theoretical models have
made strong inroads in the analysis of B-factors by linearly fitting protein
B-factors from experimental data. However, the blind prediction of B-factors
for unknown proteins is an unsolved problem. This work integrates machine
learning and advanced graph theory, namely, multiscale weighted colored graphs
(MWCGs), to blindly predict B-factors of unknown proteins. MWCGs are local
features that measure the intrinsic flexibility due to a protein structure.
Global features that connect the B-factors of different proteins, e.g., the
resolution of X-ray crystallography, are introduced to enable the cross-protein
B-factor predictions. Several machine learning approaches, including ensemble
methods and deep learning, are considered in the present work. The proposed
method is validated with hundreds of thousands of experimental B-factors.
Extensive numerical results indicate that the blind B-factor predictions
obtained from the present method are more accurate than the least squares
fittings using traditional methods.
| [
{
"created": "Wed, 12 Sep 2018 09:51:42 GMT",
"version": "v1"
}
] | 2018-10-17 | [
[
"Bramer",
"David",
""
],
[
"Wei",
"Guo-Wei",
""
]
] | Debye-Waller factor, a measure of X-ray attenuation, can be experimentally observed in protein X-ray crystallography. Previous theoretical models have made strong inroads in the analysis of B-factors by linearly fitting protein B-factors from experimental data. However, the blind prediction of B-factors for unknown proteins is an unsolved problem. This work integrates machine learning and advanced graph theory, namely, multiscale weighted colored graphs (MWCGs), to blindly predict B-factors of unknown proteins. MWCGs are local features that measure the intrinsic flexibility due to a protein structure. Global features that connect the B-factors of different proteins, e.g., the resolution of X-ray crystallography, are introduced to enable the cross-protein B-factor predictions. Several machine learning approaches, including ensemble methods and deep learning, are considered in the present work. The proposed method is validated with hundreds of thousands of experimental B-factors. Extensive numerical results indicate that the blind B-factor predictions obtained from the present method are more accurate than the least squares fittings using traditional methods. |
1707.07461 | Anton Zadorin | Anton S. Zadorin, Yannick Rondelez | Natural selection in compartmentalized environment with reshuffling | 50 pages, 7 figures | J. Math. Biol. (2019) 79: 1401 | 10.1007/s00285-019-01399-4 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The emerging field of high-throughput compartmentalized in vitro evolution is
a promising new approach to protein engineering. In these experiments,
libraries of mutant genotypes are randomly distributed and expressed in
microscopic compartments - droplets of an emulsion. The selection of desirable
variants is performed according to the phenotype of each compartment. The
random partitioning leads to a fraction of compartments receiving more than one
genotype, making the whole process a lab implementation of group selection.
From a practical point of view (where efficient selection is typically sought),
it is important to know the impact of the increase in the mean occupancy of
compartments on the selection efficiency. We carried out a theoretical
investigation of this problem in the context of selection dynamics for an
infinite non-mutating subdivided population that randomly colonizes an infinite
number of patches (compartments) at each reproduction cycle. We derive here an
update equation for any distribution of phenotypes and any value of the mean
occupancy. Using this result, we demonstrate that, for the linear additive
fitness, the best genotype is still selected regardless of the mean occupancy.
Furthermore, the selection process is remarkably resilient to the presence of
multiple genotypes per compartment, and slows down approximately in inverse
proportion to the mean occupancy at high values. We extend our results to
more general expressions that cover nonadditive and non-linear fitnesses, as
well as non-Poissonian distributions among compartments. Our conclusions may also
apply to natural genetic compartmentalized replicators, such as viruses or
early trans-acting RNA replicators.
| [
{
"created": "Mon, 24 Jul 2017 10:10:35 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Jul 2019 17:14:18 GMT",
"version": "v2"
}
] | 2019-11-06 | [
[
"Zadorin",
"Anton S.",
""
],
[
"Rondelez",
"Yannick",
""
]
] | The emerging field of high-throughput compartmentalized in vitro evolution is a promising new approach to protein engineering. In these experiments, libraries of mutant genotypes are randomly distributed and expressed in microscopic compartments - droplets of an emulsion. The selection of desirable variants is performed according to the phenotype of each compartment. The random partitioning leads to a fraction of compartments receiving more than one genotype, making the whole process a lab implementation of group selection. From a practical point of view (where efficient selection is typically sought), it is important to know the impact of the increase in the mean occupancy of compartments on the selection efficiency. We carried out a theoretical investigation of this problem in the context of selection dynamics for an infinite non-mutating subdivided population that randomly colonizes an infinite number of patches (compartments) at each reproduction cycle. We derive here an update equation for any distribution of phenotypes and any value of the mean occupancy. Using this result, we demonstrate that, for the linear additive fitness, the best genotype is still selected regardless of the mean occupancy. Furthermore, the selection process is remarkably resilient to the presence of multiple genotypes per compartment, and slows down approximately in inverse proportion to the mean occupancy at high values. We extend our results to more general expressions that cover nonadditive and non-linear fitnesses, as well as non-Poissonian distributions among compartments. Our conclusions may also apply to natural genetic compartmentalized replicators, such as viruses or early trans-acting RNA replicators.
1710.10229 | Hyunjin Shim | Hyunjin Shim | Feature learning of virus genome evolution with the nucleotide skip-gram
neural network | 16 pages, 4 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies reveal even the smallest genomes such as viruses evolve
through complex and stochastic processes, and the assumption of independent
alleles is not valid in most applications. Advances in sequencing technologies
produce multiple time-point whole-genome data, which enable potential
interactions between these alleles to be investigated empirically. To
investigate these interactions, we represent alleles as distributed vectors
that encode for relationships with other alleles in the course of evolution,
and apply artificial neural networks to time-sampled whole-genome datasets for
feature learning. We build this platform using methods and algorithms derived
from Natural Language Processing (NLP), and we denote it as the nucleotide
skip-gram neural network. We learn distributed vectors of alleles using the
changes in allele frequency of echovirus 11 in the presence or absence of the
disinfectant from the experimental evolution data. Results from the training
using a new open-source software TensorFlow show that the learned distributed
vectors can be clustered using Principal Component Analysis and Hierarchical
Clustering to reveal a list of non-synonymous mutations that arise on the
structural protein VP1 in connection to the candidate mutation for disinfectant
adaptation. Furthermore, this method can account for recombination rates by
setting the extent of interactions as a biological hyper-parameter, and the
results show the most realistic scenario of mid-range interactions across the
genome is most consistent with the previous studies.
| [
{
"created": "Fri, 27 Oct 2017 16:27:40 GMT",
"version": "v1"
}
] | 2017-10-30 | [
[
"Shim",
"Hyunjin",
""
]
] | Recent studies reveal even the smallest genomes such as viruses evolve through complex and stochastic processes, and the assumption of independent alleles is not valid in most applications. Advances in sequencing technologies produce multiple time-point whole-genome data, which enable potential interactions between these alleles to be investigated empirically. To investigate these interactions, we represent alleles as distributed vectors that encode for relationships with other alleles in the course of evolution, and apply artificial neural networks to time-sampled whole-genome datasets for feature learning. We build this platform using methods and algorithms derived from Natural Language Processing (NLP), and we denote it as the nucleotide skip-gram neural network. We learn distributed vectors of alleles using the changes in allele frequency of echovirus 11 in the presence or absence of the disinfectant from the experimental evolution data. Results from the training using a new open-source software TensorFlow show that the learned distributed vectors can be clustered using Principal Component Analysis and Hierarchical Clustering to reveal a list of non-synonymous mutations that arise on the structural protein VP1 in connection to the candidate mutation for disinfectant adaptation. Furthermore, this method can account for recombination rates by setting the extent of interactions as a biological hyper-parameter, and the results show the most realistic scenario of mid-range interactions across the genome is most consistent with the previous studies. |
1310.5237 | Minyoung Wyman | Minyoung Wyman and Mark Wyman | Sex-specific recombination rates and allele frequencies affect the
invasion of sexually antagonistic variation on autosomes | 27 pages, 2 figures | Journal of Evolutionary Biology 26 (2013) 2428 | 10.1111/jeb.12236 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The introduction and persistence of novel sexually antagonistic alleles can
depend upon factors that differ between males and females. Understanding the
conditions for invasion in a two-locus model can elucidate these processes. For
instance, selection can act differently upon the sexes, or sex-linkage can
facilitate the invasion of genetic variation with opposing fitness effects
between the sexes. Two factors that deserve further attention are recombination
rates and allele frequencies -- both of which can vary substantially between
the sexes. We find that sex-specific recombination rates in a two-locus diploid
model can affect the invasion outcome of sexually antagonistic alleles and that
the sex-averaged recombination rate is not necessarily sufficient to predict
invasion. We confirm that the range of permissible recombination rates is
smaller in the sex benefitting from invasion and larger in the sex harmed by
invasion. However, within the invasion space, male recombination rate can be
greater than, equal to, or less than female recombination rate in order for a
male-benefit, female-detriment allele to invade (and similarly for a
female-benefit, male-detriment allele). We further show that a novel, sexually
antagonistic allele that is also associated with a lowered recombination rate
can invade more easily when present in the double heterozygote genotype.
Finally, we find that sexual dimorphism in resident allele frequencies can
impact the invasion of new sexually antagonistic alleles at a second locus. Our
results suggest that accounting for sex-specific recombination rates and allele
frequencies can determine the difference between invasion and non-invasion of
novel sexually antagonistic alleles in a two-locus model.
| [
{
"created": "Sat, 19 Oct 2013 15:09:52 GMT",
"version": "v1"
}
] | 2013-10-22 | [
[
"Wyman",
"Minyoung",
""
],
[
"Wyman",
"Mark",
""
]
] | The introduction and persistence of novel sexually antagonistic alleles can depend upon factors that differ between males and females. Understanding the conditions for invasion in a two-locus model can elucidate these processes. For instance, selection can act differently upon the sexes, or sex-linkage can facilitate the invasion of genetic variation with opposing fitness effects between the sexes. Two factors that deserve further attention are recombination rates and allele frequencies -- both of which can vary substantially between the sexes. We find that sex-specific recombination rates in a two-locus diploid model can affect the invasion outcome of sexually antagonistic alleles and that the sex-averaged recombination rate is not necessarily sufficient to predict invasion. We confirm that the range of permissible recombination rates is smaller in the sex benefitting from invasion and larger in the sex harmed by invasion. However, within the invasion space, male recombination rate can be greater than, equal to, or less than female recombination rate in order for a male-benefit, female-detriment allele to invade (and similarly for a female-benefit, male-detriment allele). We further show that a novel, sexually antagonistic allele that is also associated with a lowered recombination rate can invade more easily when present in the double heterozygote genotype. Finally, we find that sexual dimorphism in resident allele frequencies can impact the invasion of new sexually antagonistic alleles at a second locus. Our results suggest that accounting for sex-specific recombination rates and allele frequencies can determine the difference between invasion and non-invasion of novel sexually antagonistic alleles in a two-locus model. |
2301.11846 | Yanni Jiang | Xiaofu He, Diana Rodriguez Moreno, Zhenghua Hou, Keely
Cheslack-Postava, Yanni Jiang, Tong Li, Ronit Kishon, Larry Amsel, George
Musa, Zhishun Wang, Christina W. Hoven | Connectivity based Real-Time fMRI Neurofeedback Training in Youth with a
History of Major Depressive Disorder | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Real-time functional magnetic resonance imaging neurofeedback
(rtfMRI-nf) has proven to be a powerful technique to help subjects to gauge and
enhance emotional control. Traditionally, rtfMRI-nf has focused on emotional
regulation through self-regulation of amygdala. Recently, rtfMRI studies have
observed that regulation of a target brain region is accompanied by
connectivity changes beyond the target region. Therefore, the aim of the present
study is to investigate the use of connectivity between amygdala and prefrontal
regions as the target of neurofeedback training in healthy individuals and
subjects with a life-time history of major depressive disorder (MDD) performing
an emotion regulation task. Method: Ten remitted MDD subjects and twelve
healthy controls (HC) performed an emotion regulation task in 4 runs of
rtfMRI-nf training followed by one transfer run without neurofeedback conducted
in a single session. The functional connectivity between amygdala and
prefrontal cortex was presented as a feedback bar concurrent with the emotion
regulation task. Participants' emotional state was measured by the Positive and
Negative Affect Schedule (PANAS) prior to and following the rtfMRI-nf.
Psychological assessments were used to determine subjects' history of
depression. Results: Participants with a history of MDD showed a trend of
decreasing functional connectivity across the four rtfMRI-nf runs, and there
was a marginally significant interaction between the MDD history and number of
training runs. The HC group showed a significant increase of frontal cortex
activation between the second and third neurofeedback runs. Comparing PANAS
scores before and after connectivity-based rtfMRI-nf, we observed a significant
decrease in negative PANAS score in the whole group overall, and a significant
decrease in positive PANAS score in the MDD group alone.
| [
{
"created": "Wed, 25 Jan 2023 01:32:25 GMT",
"version": "v1"
}
] | 2023-02-01 | [
[
"He",
"Xiaofu",
""
],
[
"Moreno",
"Diana Rodriguez",
""
],
[
"Hou",
"Zhenghua",
""
],
[
"Cheslack-Postava",
"Keely",
""
],
[
"Jiang",
"Yanni",
""
],
[
"Li",
"Tong",
""
],
[
"Kishon",
"Ronit",
""
],
[
... | Background: Real-time functional magnetic resonance imaging neurofeedback (rtfMRI-nf) has proven to be a powerful technique to help subjects to gauge and enhance emotional control. Traditionally, rtfMRI-nf has focused on emotional regulation through self-regulation of amygdala. Recently, rtfMRI studies have observed that regulation of a target brain region is accompanied by connectivity changes beyond the target region. Therefore, the aim of the present study is to investigate the use of connectivity between amygdala and prefrontal regions as the target of neurofeedback training in healthy individuals and subjects with a life-time history of major depressive disorder (MDD) performing an emotion regulation task. Method: Ten remitted MDD subjects and twelve healthy controls (HC) performed an emotion regulation task in 4 runs of rtfMRI-nf training followed by one transfer run without neurofeedback conducted in a single session. The functional connectivity between amygdala and prefrontal cortex was presented as a feedback bar concurrent with the emotion regulation task. Participants' emotional state was measured by the Positive and Negative Affect Schedule (PANAS) prior to and following the rtfMRI-nf. Psychological assessments were used to determine subjects' history of depression. Results: Participants with a history of MDD showed a trend of decreasing functional connectivity across the four rtfMRI-nf runs, and there was a marginally significant interaction between the MDD history and number of training runs. The HC group showed a significant increase of frontal cortex activation between the second and third neurofeedback runs. Comparing PANAS scores before and after connectivity-based rtfMRI-nf, we observed a significant decrease in negative PANAS score in the whole group overall, and a significant decrease in positive PANAS score in the MDD group alone.
1612.04074 | Aristides Moustakas | Aristides Moustakas | Spatio-temporal data mining in ecological and veterinary epidemiology | null | null | null | null | q-bio.PE stat.AP stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the spread of any disease is a highly complex and
interdisciplinary exercise as biological, social, geographic, economic, and
medical factors may shape the way a disease moves through a population and
options for its eventual control or eradication. Disease spread poses a serious
threat in animal and plant health and has implications for ecosystem
functioning and species extinctions as well as implications in society through
food security and potential disease spread in humans. Space-time epidemiology
is based on the concept that various characteristics of the pathogenic agents
and the environment interact in order to alter the probability of disease
occurrence and form temporal or spatial patterns. Epidemiology aims to identify
these patterns and factors, to assess the relevant uncertainty sources, and to
describe disease in the population. Thus disease spread at the population level
differs from the approach traditionally taken by veterinary practitioners that
are principally concerned with the health status of the individual. Patterns of
disease occurrence provide insights into which factors may be affecting the
health of the population, through investigating which individuals are
affected, where these individuals are located, and when they became infected.
With the rapid development of smart sensors, social networks, as well as
digital maps and remotely-sensed imagery, spatio-temporal data are more
ubiquitous and richer
than ever before. The availability of such large datasets (Big data) poses
great challenges in data analysis. In addition, increased availability of
computing power facilitates the use of computationally-intensive methods for
the analysis of such data. Thus new methods as well as case studies are needed
to understand veterinary and ecological epidemiology. A special issue aimed to
address this topic.
| [
{
"created": "Tue, 13 Dec 2016 09:42:39 GMT",
"version": "v1"
}
] | 2016-12-14 | [
[
"Moustakas",
"Aristides",
""
]
] ] | Understanding the spread of any disease is a highly complex and interdisciplinary exercise as biological, social, geographic, economic, and medical factors may shape the way a disease moves through a population and options for its eventual control or eradication. Disease spread poses a serious threat in animal and plant health and has implications for ecosystem functioning and species extinctions as well as implications in society through food security and potential disease spread in humans. Space-time epidemiology is based on the concept that various characteristics of the pathogenic agents and the environment interact in order to alter the probability of disease occurrence and form temporal or spatial patterns. Epidemiology aims to identify these patterns and factors, to assess the relevant uncertainty sources, and to describe disease in the population. Thus disease spread at the population level differs from the approach traditionally taken by veterinary practitioners that are principally concerned with the health status of the individual. Patterns of disease occurrence provide insights into which factors may be affecting the health of the population, through investigating which individuals are affected, where these individuals are located, and when they became infected. With the rapid development of smart sensors, social networks, as well as digital maps and remotely-sensed imagery, spatio-temporal data are more ubiquitous and richer than ever before. The availability of such large datasets (Big data) poses great challenges in data analysis. In addition, increased availability of computing power facilitates the use of computationally-intensive methods for the analysis of such data. Thus new methods as well as case studies are needed to understand veterinary and ecological epidemiology. A special issue aimed to address this topic. |
2402.10146 | Justen Geddes | Justen R Geddes and Amanda Randles | Optimizing Temporal Waveform Analysis: A Novel Pipeline for Efficient
Characterization of Left Coronary Artery Velocity Profiles | 4 pages, 4 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Continuously measured arterial blood velocity can provide insight into
physiological parameters and potential disease states. The efficient and
effective description of the temporal profiles of arterial velocity is crucial
for both clinical practice and research. We propose a pipeline to identify the
minimum number of points of interest to adequately describe a velocity profile
of the left coronary artery. This pipeline employs a novel operation that
"stretches" a baseline waveform to quantify the utility of a point in fitting
other waveforms. Our study introduces a comprehensive pipeline specifically
designed to identify the minimal yet crucial number of points needed to
accurately represent the velocity profile of the left coronary artery.
Additionally, the only location-dependent portion of this pipeline is the first
step, choosing all of the possible points of interest. Hence, this work is
broadly applicable to other waveforms. This versatility paves the way for a
novel non-frequency domain method that can enhance the analysis of
physiological waveforms. Such advancements have potential implications in both
research and clinical treatment of various diseases, underscoring the broader
applicability and impact.
| [
{
"created": "Thu, 15 Feb 2024 17:50:18 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Geddes",
"Justen R",
""
],
[
"Randles",
"Amanda",
""
]
] | Continuously measured arterial blood velocity can provide insight into physiological parameters and potential disease states. The efficient and effective description of the temporal profiles of arterial velocity is crucial for both clinical practice and research. We propose a pipeline to identify the minimum number of points of interest to adequately describe a velocity profile of the left coronary artery. This pipeline employs a novel operation that "stretches" a baseline waveform to quantify the utility of a point in fitting other waveforms. Our study introduces a comprehensive pipeline specifically designed to identify the minimal yet crucial number of points needed to accurately represent the velocity profile of the left coronary artery. Additionally, the only location-dependent portion of this pipeline is the first step, choosing all of the possible points of interest. Hence, this work is broadly applicable to other waveforms. This versatility paves the way for a novel non-frequency domain method that can enhance the analysis of physiological waveforms. Such advancements have potential implications in both research and clinical treatment of various diseases, underscoring the broader applicability and impact. |
2205.06405 | Serik Sagitov | Serik Sagitov and Anders St{\aa}hlberg | Counting unique molecular identifiers in sequencing using a multitype
branching process with immigration | null | null | null | null | q-bio.QM math.PR | http://creativecommons.org/licenses/by/4.0/ | Detection of extremely rare variant alleles, such as tumour DNA, within a
complex mixture of DNA molecules is experimentally challenging due to
sequencing errors. Barcoding of target DNA molecules in library construction
for next-generation sequencing provides a way to identify and bioinformatically
remove polymerase induced errors. During the barcoding procedure involving $t$
consecutive PCR cycles, the DNA molecules become barcoded by unique molecular
identifiers (UMI). Different library construction protocols utilise different
values of $t$. The effect of a larger $t$ and imperfect PCR amplifications is
poorly described.
This paper proposes a branching process with growing immigration as a model
describing the random outcome of $t$ cycles of PCR barcoding. Our model
discriminates between five different amplification rates $r_1$, $r_2$, $r_3$,
$r_4$, $r$ for different types of molecules associated with the PCR barcoding
procedure. We study this model by focussing on $C_t$, the number of clusters of
molecules sharing the same UMI, as well as $C_t(m)$, the number of UMI clusters
of size $m$. Our main finding is a remarkable asymptotic pattern valid for
moderately large $t$. It turns out that $E(C_t(m))/E(C_t)\approx 2^{-m}$ for
$m=1,2,\ldots$, regardless of the underlying parameters $(r_1,r_2,r_3,r_4,r)$.
The knowledge of the quantities $C_t$ and $C_t(m)$ as functions of the
experimental parameters $t$ and $(r_1,r_2,r_3,r_4,r)$ will help the users to
draw more adequate conclusions from the outcomes of different sequencing
protocols.
| [
{
"created": "Fri, 13 May 2022 00:33:00 GMT",
"version": "v1"
},
{
"created": "Sun, 5 Jun 2022 10:33:30 GMT",
"version": "v2"
}
] | 2022-06-07 | [
[
"Sagitov",
"Serik",
""
],
[
"Ståhlberg",
"Anders",
""
]
] | Detection of extremely rare variant alleles, such as tumour DNA, within a complex mixture of DNA molecules is experimentally challenging due to sequencing errors. Barcoding of target DNA molecules in library construction for next-generation sequencing provides a way to identify and bioinformatically remove polymerase induced errors. During the barcoding procedure involving $t$ consecutive PCR cycles, the DNA molecules become barcoded by unique molecular identifiers (UMI). Different library construction protocols utilise different values of $t$. The effect of a larger $t$ and imperfect PCR amplifications is poorly described. This paper proposes a branching process with growing immigration as a model describing the random outcome of $t$ cycles of PCR barcoding. Our model discriminates between five different amplification rates $r_1$, $r_2$, $r_3$, $r_4$, $r$ for different types of molecules associated with the PCR barcoding procedure. We study this model by focussing on $C_t$, the number of clusters of molecules sharing the same UMI, as well as $C_t(m)$, the number of UMI clusters of size $m$. Our main finding is a remarkable asymptotic pattern valid for moderately large $t$. It turns out that $E(C_t(m))/E(C_t)\approx 2^{-m}$ for $m=1,2,\ldots$, regardless of the underlying parameters $(r_1,r_2,r_3,r_4,r)$. The knowledge of the quantities $C_t$ and $C_t(m)$ as functions of the experimental parameters $t$ and $(r_1,r_2,r_3,r_4,r)$ will help the users to draw more adequate conclusions from the outcomes of different sequencing protocols. |
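The $E(C_t(m))/E(C_t)\approx 2^{-m}$ cluster-size pattern claimed in this abstract can be probed with a deliberately simplified simulation — a single amplification rate `r` for all molecule types instead of the paper's five rates $(r_1,r_2,r_3,r_4,r)$, and a crude growing-immigration scheme; all parameter values are illustrative assumptions, not the authors' model:

```python
import numpy as np

def simulate_umi_clusters(t=20, r=0.5, n_templates=50, seed=1):
    """Simplified single-rate sketch of t PCR barcoding cycles.
    Each cycle: barcoded molecules are copied within their UMI cluster
    with prob r, each template founds a new singleton UMI cluster with
    prob r, and templates themselves amplify (growing immigration).
    Returns the array of final cluster sizes."""
    rng = np.random.default_rng(seed)
    templates = n_templates
    sizes = np.zeros(0, dtype=np.int64)        # one entry per UMI cluster
    for _ in range(t):
        sizes = sizes + rng.binomial(sizes, r)             # within-cluster copies
        founders = rng.binomial(templates, r)              # new UMI clusters
        sizes = np.concatenate([sizes, np.ones(founders, dtype=np.int64)])
        templates += rng.binomial(templates, r)            # immigration grows
    return sizes
```

With `r = 0.5`, roughly half of all clusters end as singletons, echoing the $2^{-m}$ pattern for $m=1$; the geometric fall-off for larger $m$ is only approximate in this caricature.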
1506.08624 | Marisa Eisenberg | Michael A. L. Hayashi, Marisa C. Eisenberg | Effects of adaptive protective behavior on the dynamics of sexually
transmitted infections | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sexually transmitted infections (STIs) continue to present a complex and
costly challenge to public health programs. The preferences and social dynamics
of a population can have a large impact on the course of an outbreak as well as
the effectiveness of interventions intended to influence individual behavior.
In addition, individuals may alter their sexual behavior in response to the
presence of STIs, creating a feedback loop between transmission and behavior.
We investigate the consequences of modeling the interaction between STI
transmission and prophylactic use with a model that links a
Susceptible-Infectious-Susceptible (SIS) system to evolutionary game dynamics
that determine the effective contact rate. The combined model framework allows
us to address protective behavior by both infected and susceptible individuals.
Feedback between behavioral adaptation and prevalence creates a wide range of
dynamic behaviors in the combined model, including damped and sustained
oscillations as well as bistability, depending on the behavioral parameters and
disease growth rate. We found that disease extinction is possible for multiple
regions where R0 > 1, due to behavior adaptation driving the epidemic downward,
although conversely endemic prevalence for arbitrarily low R0 is also possible
if contact rates are sufficiently high. We also tested how model
misspecification might affect disease forecasting and estimation of the model
parameters and R0. We found that alternative models that neglect the behavioral
feedback or only consider behavior adaptation by susceptible individuals can
potentially yield misleading parameter estimates or omit significant features
of the disease trajectory.
| [
{
"created": "Mon, 29 Jun 2015 13:46:34 GMT",
"version": "v1"
}
] | 2015-06-30 | [
[
"Hayashi",
"Michael A. L.",
""
],
[
"Eisenberg",
"Marisa C.",
""
]
] | Sexually transmitted infections (STIs) continue to present a complex and costly challenge to public health programs. The preferences and social dynamics of a population can have a large impact on the course of an outbreak as well as the effectiveness of interventions intended to influence individual behavior. In addition, individuals may alter their sexual behavior in response to the presence of STIs, creating a feedback loop between transmission and behavior. We investigate the consequences of modeling the interaction between STI transmission and prophylactic use with a model that links a Susceptible-Infectious-Susceptible (SIS) system to evolutionary game dynamics that determine the effective contact rate. The combined model framework allows us to address protective behavior by both infected and susceptible individuals. Feedback between behavioral adaptation and prevalence creates a wide range of dynamic behaviors in the combined model, including damped and sustained oscillations as well as bistability, depending on the behavioral parameters and disease growth rate. We found that disease extinction is possible for multiple regions where R0 > 1, due to behavior adaptation driving the epidemic downward, although conversely endemic prevalence for arbitrarily low R0 is also possible if contact rates are sufficiently high. We also tested how model misspecification might affect disease forecasting and estimation of the model parameters and R0. We found that alternative models that neglect the behavioral feedback or only consider behavior adaptation by susceptible individuals can potentially yield misleading parameter estimates or omit significant features of the disease trajectory. |
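A minimal sketch of the coupled system described above — an SIS model whose transmission rate is scaled down by a protection fraction `p` that evolves under replicator dynamics. The payoff form and every parameter value here are assumptions for illustration, not the paper's calibrated model:

```python
import numpy as np

def sis_with_adaptive_protection(beta=2.0, gamma=1.0, eps=0.8,
                                 cost=0.1, sigma=5.0,
                                 I0=0.01, p0=0.05, dt=0.001, T=200.0):
    """Euler integration of an SIS model (prevalence I) coupled to
    replicator dynamics for the fraction p of individuals using
    protection. Protection scales transmission by (1 - eps*p); its
    payoff advantage grows with prevalence and is offset by a fixed
    cost. Returns the (I, p) trajectory."""
    n = int(T / dt)
    I, p = I0, p0
    traj = np.empty((n, 2))
    for k in range(n):
        beta_eff = beta * (1.0 - eps * p)
        dI = beta_eff * I * (1.0 - I) - gamma * I
        # payoff advantage of protecting: averted infection risk minus cost
        advantage = eps * beta * I - cost
        dp = sigma * p * (1.0 - p) * advantage
        I = min(max(I + dt * dI, 0.0), 1.0)
        p = min(max(p + dt * dp, 0.0), 1.0)
        traj[k] = (I, p)
    return traj
```

With these parameters the trajectory spirals (damped oscillations) into an interior equilibrium where the payoff advantage vanishes, i.e. `I* = cost / (eps * beta)`, well below the behavior-free endemic level `1 - gamma/beta`.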
2011.09958 | Thierry Mora | Daniele Conti and Thierry Mora | Non-equilibrium dynamics of adaptation in sensory systems | null | null | null | null | q-bio.NC nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adaptation is used by biological sensory systems to respond to a wide range
of environmental signals, by adapting their response properties to the
statistics of the stimulus in order to maximize information transmission. We
derive rules of optimal adaptation to changes in the mean and variance of a
continuous stimulus in terms of Bayesian filters, and map them onto stochastic
equations that couple the state of the environment to an internal variable
controlling the response function. We calculate numerical and exact results for
the speed and accuracy of adaptation, and its impact on information
transmission. We find that, in the regime of efficient adaptation, the speed of
adaptation scales sublinearly with the rate of change of the environment.
Finally, we exploit the mathematical equivalence between adaptation and
stochastic thermodynamics to quantitatively relate adaptation to the
irreversibility of the adaptation time course, defined by the rate of entropy
production. Our results suggest a means to empirically quantify adaptation in a
model-free and non-parametric way.
| [
{
"created": "Thu, 19 Nov 2020 16:57:40 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Apr 2021 18:41:46 GMT",
"version": "v2"
}
] | 2021-04-14 | [
[
"Conti",
"Daniele",
""
],
[
"Mora",
"Thierry",
""
]
] ] | Adaptation is used by biological sensory systems to respond to a wide range of environmental signals, by adapting their response properties to the statistics of the stimulus in order to maximize information transmission. We derive rules of optimal adaptation to changes in the mean and variance of a continuous stimulus in terms of Bayesian filters, and map them onto stochastic equations that couple the state of the environment to an internal variable controlling the response function. We calculate numerical and exact results for the speed and accuracy of adaptation, and its impact on information transmission. We find that, in the regime of efficient adaptation, the speed of adaptation scales sublinearly with the rate of change of the environment. Finally, we exploit the mathematical equivalence between adaptation and stochastic thermodynamics to quantitatively relate adaptation to the irreversibility of the adaptation time course, defined by the rate of entropy production. Our results suggest a means to empirically quantify adaptation in a model-free and non-parametric way. |
2111.12025 | Smarajit Triambak | S. Triambak, D.P. Mahapatra, N. Mallick, and R. Sahoo | A new logistic growth model applied to COVID-19 fatality data | Final version published as a journal article in Epidemics | Epidemics, Volume 37, 100515 (2021) | 10.1016/j.epidem.2021.100515 | null | q-bio.PE | http://creativecommons.org/publicdomain/zero/1.0/ | Background: Recent work showed that the temporal growth of the novel
coronavirus disease (COVID-19) follows a sub-exponential power-law scaling
whenever effective control interventions are in place. Taking this into
consideration, we present a new phenomenological logistic model that is
well-suited for such power-law epidemic growth.
Methods: We empirically develop the logistic growth model using simple
scaling arguments, known boundary conditions and a comparison with available
data from four countries, Belgium, China, Denmark and Germany, where (arguably)
effective containment measures were put in place during the first wave of the
pandemic. A non-linear least-squares minimization algorithm is used to map the
parameter space and make optimal predictions.
Results: Unlike other logistic growth models, our presented model is shown to
consistently make accurate predictions of peak heights, peak locations and
cumulative saturation values for incomplete epidemic growth curves. We further
show that the power-law growth model also works reasonably well when
containment and lockdown strategies are not as stringent as they were during
the first wave of infections in 2020. On the basis of this agreement, the model
was used to forecast COVID-19 fatalities for the third wave in South Africa,
which is currently in progress.
Conclusions: We anticipate that our presented model will be useful for a
similar forecasting of COVID-19 induced infections/deaths in other regions as
well as other cases of infectious disease outbreaks, particularly when
power-law scaling is observed.
| [
{
"created": "Sat, 20 Nov 2021 08:39:38 GMT",
"version": "v1"
}
] | 2021-11-24 | [
[
"Triambak",
"S.",
""
],
[
"Mahapatra",
"D. P.",
""
],
[
"Mallick",
"N.",
""
],
[
"Sahoo",
"R.",
""
]
] | Background: Recent work showed that the temporal growth of the novel coronavirus disease (COVID-19) follows a sub-exponential power-law scaling whenever effective control interventions are in place. Taking this into consideration, we present a new phenomenological logistic model that is well-suited for such power-law epidemic growth. Methods: We empirically develop the logistic growth model using simple scaling arguments, known boundary conditions and a comparison with available data from four countries, Belgium, China, Denmark and Germany, where (arguably) effective containment measures were put in place during the first wave of the pandemic. A non-linear least-squares minimization algorithm is used to map the parameter space and make optimal predictions. Results: Unlike other logistic growth models, our presented model is shown to consistently make accurate predictions of peak heights, peak locations and cumulative saturation values for incomplete epidemic growth curves. We further show that the power-law growth model also works reasonably well when containment and lock down strategies are not as stringent as they were during the first wave of infections in 2020. On the basis of this agreement, the model was used to forecast COVID-19 fatalities for the third wave in South Africa, which is currently in progress. Conclusions: We anticipate that our presented model will be useful for a similar forecasting of COVID-19 induced infections/deaths in other regions as well as other cases of infectious disease outbreaks, particularly when power-law scaling is observed. |
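As a generic stand-in for the phenomenological model described above (the paper's exact functional form is not reproduced here), a generalized logistic equation $dC/dt = r\,C^p(1 - C/K)$ exhibits the sub-exponential power-law growth $C \sim t^{1/(1-p)}$ in its early phase for $p < 1$:

```python
import numpy as np

def generalized_logistic(r=1.0, p=0.5, K=1e6, C0=1.0, dt=0.01, T=200.0):
    """Euler integration of dC/dt = r * C**p * (1 - C/K).
    For p < 1 the early phase grows as a power law t**(1/(1-p)) rather
    than exponentially. All parameter values here are illustrative."""
    n = int(T / dt)
    C = np.empty(n + 1)
    C[0] = C0
    for k in range(n):
        C[k + 1] = C[k] + dt * r * C[k]**p * (1.0 - C[k] / K)
    return C
```

With `p = 0.5` and `C0` negligible relative to `K`, doubling the elapsed time roughly quadruples the cumulative count — the signature of $t^2$ power-law growth.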
1701.05004 | Ardavan Salehi Nobandegani | Ardavan Salehi Nobandegani, Thomas R. Shultz | Converting Cascade-Correlation Neural Nets into Probabilistic Generative
Models | null | Proceedings of the 39th Annual Conference of the Cognitive Science
Society (2017) (pp. 1029-1034). Austin, TX: Cognitive Science Society | null | null | q-bio.NC cs.AI cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Humans are not only adept in recognizing what class an input instance belongs
to (i.e., classification task), but perhaps more remarkably, they can imagine
(i.e., generate) plausible instances of a desired class with ease, when
prompted. Inspired by this, we propose a framework which allows transforming
Cascade-Correlation Neural Networks (CCNNs) into probabilistic generative
models, thereby enabling CCNNs to generate samples from a category of interest.
CCNNs are a well-known class of deterministic, discriminative NNs, which
autonomously construct their topology, and have been successful in giving
accounts for a variety of psychological phenomena. Our proposed framework is
based on a Markov Chain Monte Carlo (MCMC) method, called the
Metropolis-adjusted Langevin algorithm, which capitalizes on the gradient
information of the target distribution to direct its explorations towards
regions of high probability, thereby achieving good mixing properties. Through
extensive simulations, we demonstrate the efficacy of our proposed framework.
| [
{
"created": "Wed, 18 Jan 2017 10:51:58 GMT",
"version": "v1"
}
] | 2021-07-02 | [
[
"Nobandegani",
"Ardavan Salehi",
""
],
[
"Shultz",
"Thomas R.",
""
]
] | Humans are not only adept in recognizing what class an input instance belongs to (i.e., classification task), but perhaps more remarkably, they can imagine (i.e., generate) plausible instances of a desired class with ease, when prompted. Inspired by this, we propose a framework which allows transforming Cascade-Correlation Neural Networks (CCNNs) into probabilistic generative models, thereby enabling CCNNs to generate samples from a category of interest. CCNNs are a well-known class of deterministic, discriminative NNs, which autonomously construct their topology, and have been successful in giving accounts for a variety of psychological phenomena. Our proposed framework is based on a Markov Chain Monte Carlo (MCMC) method, called the Metropolis-adjusted Langevin algorithm, which capitalizes on the gradient information of the target distribution to direct its explorations towards regions of high probability, thereby achieving good mixing properties. Through extensive simulations, we demonstrate the efficacy of our proposed framework. |
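The Metropolis-adjusted Langevin algorithm mentioned above is easy to state in one dimension: propose a step along the gradient of the log-density plus noise, then accept or reject with a Metropolis correction. The sketch below targets a standard normal rather than a CCNN-induced density, and the step size `eps` is an arbitrary choice:

```python
import numpy as np

def mala(grad_log_pi, log_pi, x0=0.0, eps=0.5, n_steps=20000, seed=0):
    """Metropolis-adjusted Langevin algorithm for a 1-D target density.
    Proposals follow the Langevin drift eps**2/2 * grad(log pi); the
    Metropolis correction keeps the chain exactly targeting pi."""
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    for k in range(n_steps):
        mean_fwd = x + 0.5 * eps**2 * grad_log_pi(x)
        y = mean_fwd + eps * rng.standard_normal()
        mean_rev = y + 0.5 * eps**2 * grad_log_pi(y)
        # log acceptance ratio: target ratio times proposal-density ratio
        log_alpha = (log_pi(y) - log_pi(x)
                     - (x - mean_rev)**2 / (2 * eps**2)
                     + (y - mean_fwd)**2 / (2 * eps**2))
        if np.log(rng.random()) < log_alpha:
            x = y
        samples[k] = x
    return samples

# standard normal target: log pi(x) = -x**2/2 up to a constant
draws = mala(grad_log_pi=lambda x: -x, log_pi=lambda x: -0.5 * x**2)
```

Because the drift pushes proposals toward high-probability regions, MALA typically mixes faster than a random-walk Metropolis chain with the same step size.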
1910.09051 | Mike Steel Prof. | Oliver Weller-Davies, Mike Steel and Jotun Hein | Combinatorial results for network-based models of metabolic origins | 29 pages, 10 figures | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key step in the origin of life is the emergence of a primitive metabolism.
This requires the formation of a subset of chemical reactions that is both
self-sustaining and collectively autocatalytic. A generic theory to study such
processes (called 'RAF theory') has provided a precise and computationally
effective way to address these questions, both on simulated data and in
laboratory studies. One of the classic applications of this theory (arising
from Stuart Kauffman's pioneering work in the 1980s) involves networks of
polymers under cleavage and ligation reactions; in the first part of this
paper, we provide the first exact description of the number of such reactions
under various model assumptions. Conclusions from earlier studies relied on
either approximations or asymptotic counting, and we show that the exact counts
lead to similar (though not always identical) asymptotic results. In the second
part of the paper, we solve some questions posed in more recent papers
concerning the computational complexity of some key questions in RAF theory. In
particular, although there is a fast algorithm to determine whether or not a
catalytic reaction network contains a subset that is both self-sustaining and
autocatalytic (and, if so, find one), determining whether or not sets exist
that satisfy certain additional constraints turns out to be NP-complete.
| [
{
"created": "Sun, 20 Oct 2019 19:35:40 GMT",
"version": "v1"
}
] | 2019-10-22 | [
[
"Weller-Davies",
"Oliver",
""
],
[
"Steel",
"Mike",
""
],
[
"Hein",
"Jotun",
""
]
] ] | A key step in the origin of life is the emergence of a primitive metabolism. This requires the formation of a subset of chemical reactions that is both self-sustaining and collectively autocatalytic. A generic theory to study such processes (called 'RAF theory') has provided a precise and computationally effective way to address these questions, both on simulated data and in laboratory studies. One of the classic applications of this theory (arising from Stuart Kauffman's pioneering work in the 1980s) involves networks of polymers under cleavage and ligation reactions; in the first part of this paper, we provide the first exact description of the number of such reactions under various model assumptions. Conclusions from earlier studies relied on either approximations or asymptotic counting, and we show that the exact counts lead to similar (though not always identical) asymptotic results. In the second part of the paper, we solve some questions posed in more recent papers concerning the computational complexity of some key questions in RAF theory. In particular, although there is a fast algorithm to determine whether or not a catalytic reaction network contains a subset that is both self-sustaining and autocatalytic (and, if so, find one), determining whether or not sets exist that satisfy certain additional constraints turns out to be NP-complete. |
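The "fast algorithm" referred to above is, in the Hordijk–Steel formulation, a simple fixed-point reduction: repeatedly discard reactions whose reactants or catalysts fall outside the closure of the food set. A sketch on a toy reaction system (the encoding and the example network are assumptions for illustration):

```python
def max_raf(food, reactions):
    """Fixed-point computation of the maximal RAF subset.
    `reactions` maps a name to (reactants, products, catalysts), each a
    set of molecule labels. Returns the (possibly empty) maximal RAF."""
    active = set(reactions)
    while True:
        # closure of the food set under the currently active reactions
        # (catalysis is ignored when computing the closure itself)
        avail = set(food)
        changed = True
        while changed:
            changed = False
            for name in active:
                reac, prod, _ = reactions[name]
                if reac <= avail and not prod <= avail:
                    avail |= prod
                    changed = True
        # keep only reactions that are both food-generated and catalysed
        keep = {name for name in active
                if reactions[name][0] <= avail
                and reactions[name][2] & avail}
        if keep == active:
            return active
        active = keep
```

For example, a reaction `a + b -> c` catalysed by its own product `c` forms a RAF on food set `{a, b}`, because `c` lies in the closure of the food set under that reaction.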
1609.03926 | Yue Wu | Li Dongmei, Yue Wu, Xu Yajing | The Dynamic Behavior (Effects) of Impulsive Toxicant Input on a
Single-Species Population in a Small Polluted Environment | 14 pages, 4 figures | null | null | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study a single-species population model with pulse toxicant
input to a small polluted environment. The intrinsic growth rate of the
population is affected by the environment and by the toxin in organisms. The
toxin in organisms is influenced by the toxin in the environment and by the
food chain. A new mathematical model
is derived. By the Pulse Compare Theorem, we find the surviving threshold of
the population and obtain the sufficient conditions of persistence and
extinction of the population.
| [
{
"created": "Mon, 12 Sep 2016 02:47:35 GMT",
"version": "v1"
}
] | 2016-09-14 | [
[
"Dongmei",
"Li",
""
],
[
"Wu",
"Yue",
""
],
[
"Yajing",
"Xu",
""
]
] ] | In this paper, we study a single-species population model with pulse toxicant input to a small polluted environment. The intrinsic growth rate of the population is affected by the environment and by the toxin in organisms. The toxin in organisms is influenced by the toxin in the environment and by the food chain. A new mathematical model is derived. By the Pulse Compare Theorem, we find the surviving threshold of the population and obtain the sufficient conditions of persistence and extinction of the population. |
1310.4058 | Changbong Hyeon | Yoonji Lee, Sun Choi, Changbong Hyeon | Mapping the intramolecular signal transduction of G-protein coupled
receptors | null | Proteins: Struct. Funct. Bioinfo. 2013 | null | null | q-bio.BM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | G-protein coupled receptors (GPCRs), a major gatekeeper of extracellular
signals on plasma membrane, are unarguably one of the most important
therapeutic targets. Given the recent discoveries of allosteric modulations, an
allosteric wiring diagram of intramolecular signal transductions would be of
great use to glean the mechanism of receptor regulation. Here, by evaluating
betweenness centrality ($C_B$) of each residue, we calculate maps of
information flow in GPCRs and identify key residues for signal transductions
and their pathways. Compared with preexisting approaches, the allosteric
hotspots that our $C_B$-based analysis detects for A$_{2A}$ adenosine receptor
(A$_{2A}$AR) and bovine rhodopsin are better correlated with biochemical data.
In particular, our analysis outperforms other methods in locating the rotameric
microswitches, which are generally deemed critical for mediating orthosteric
signaling in class A GPCRs. For A$_{2A}$AR, the inter-residue cross-correlation
map, calculated using equilibrium structural ensemble from molecular dynamics
simulations, reveals that strong signals of long-range transmembrane
communications exist only in the agonist-bound state. A seemingly subtle
variation in structure, found in different GPCR subtypes or imparted by agonist
bindings or a point mutation at an allosteric site, can lead to a drastic
difference in the map of signaling pathways and protein activity. The signaling
map of GPCRs provides valuable insights into allosteric modulations as well as
reliable identifications of orthosteric signaling pathways.
| [
{
"created": "Tue, 15 Oct 2013 14:00:12 GMT",
"version": "v1"
}
] | 2013-10-16 | [
[
"Lee",
"Yoonji",
""
],
[
"Choi",
"Sun",
""
],
[
"Hyeon",
"Changbong",
""
]
] | G-protein coupled receptors (GPCRs), a major gatekeeper of extracellular signals on plasma membrane, are unarguably one of the most important therapeutic targets. Given the recent discoveries of allosteric modulations, an allosteric wiring diagram of intramolecular signal transductions would be of great use to glean the mechanism of receptor regulation. Here, by evaluating betweenness centrality ($C_B$) of each residue, we calculate maps of information flow in GPCRs and identify key residues for signal transductions and their pathways. Compared with preexisting approaches, the allosteric hotspots that our $C_B$-based analysis detects for A$_{2A}$ adenosine receptor (A$_{2A}$AR) and bovine rhodopsin are better correlated with biochemical data. In particular, our analysis outperforms other methods in locating the rotameric microswitches, which are generally deemed critical for mediating orthosteric signaling in class A GPCRs. For A$_{2A}$AR, the inter-residue cross-correlation map, calculated using equilibrium structural ensemble from molecular dynamics simulations, reveals that strong signals of long-range transmembrane communications exist only in the agonist-bound state. A seemingly subtle variation in structure, found in different GPCR subtypes or imparted by agonist bindings or a point mutation at an allosteric site, can lead to a drastic difference in the map of signaling pathways and protein activity. The signaling map of GPCRs provides valuable insights into allosteric modulations as well as reliable identifications of orthosteric signaling pathways. |
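The betweenness centrality $C_B$ at the heart of this abstract can be computed on an unweighted residue-contact graph with Brandes' algorithm. This is a generic sketch — a real application would build the adjacency map from residue contacts in the structure, and the paper's networks may additionally be weighted:

```python
from collections import deque

def betweenness_centrality(adj):
    """Brandes' algorithm for betweenness centrality C_B on an
    unweighted, undirected graph given as {node: [neighbors]}.
    Scores are halved so each unordered pair is counted once."""
    cb = {v: 0.0 for v in adj}
    for s in adj:
        stack, preds = [], {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1    # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        queue = deque([s])
        while queue:                                  # BFS from source s
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                  # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                cb[w] += delta[w]
    return {v: c / 2.0 for v, c in cb.items()}
```

On a five-node path graph the middle node carries the most shortest paths and hence the highest $C_B$, which is the intuition behind using it to flag allosteric hotspots.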
q-bio/0412026 | Martin Lind\'en | Martin Linden, Tomi Tuohimaa, Ann-Beth Jonsson and Mats Wallin | Force generation in small ensembles of Brownian motors | RevTex 9 pages, 4 figures. Revised version, new subsections in Sec.
III, removed typos | Phys. Rev. E 74 021908 (2006) | 10.1103/PhysRevE.74.021908 | null | q-bio.SC cond-mat.stat-mech physics.bio-ph | null | The motility of certain gram-negative bacteria is mediated by retraction of
type IV pili surface filaments, which are essential for infectivity. The
retraction is powered by a strong molecular motor protein, PilT, producing very
high forces that can exceed 150 pN. The molecular details of the motor
mechanism are still largely unknown, while other features have been identified,
such as the ring-shaped protein structure of the PilT motor. The surprisingly
high forces generated by the PilT system motivate a model investigation of the
generation of large forces in molecular motors. We propose a simple model,
involving a small ensemble of motor subunits interacting through the
deformations on a circular backbone with finite stiffness. The model describes
the motor subunits in terms of diffusing particles in an asymmetric,
time-dependent binding potential (flashing ratchet potential), roughly
corresponding to the ATP hydrolysis cycle. We compute force-velocity relations
in a subset of the parameter space and explore how the maximum force (stall
force) is determined by stiffness, binding strength, ensemble size, and degree
of asymmetry. We identify two qualitatively different regimes of operation
depending on the relation between ensemble size and asymmetry. In the
transition between these two regimes, the stall force depends nonlinearly on
the number of motor subunits. Compared to its constituents without
interactions, we find higher efficiency and qualitatively different
force-velocity relations. The model captures several of the qualitative
features obtained in experiments on pilus retraction forces, such as roughly
constant velocity at low applied forces and insensitivity in the stall force to
changes in the ATP concentration.
| [
{
"created": "Wed, 15 Dec 2004 13:06:04 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Jun 2005 09:49:18 GMT",
"version": "v2"
},
{
"created": "Wed, 9 Aug 2006 07:50:29 GMT",
"version": "v3"
}
] | 2007-05-23 | [
[
"Linden",
"Martin",
""
],
[
"Tuohimaa",
"Tomi",
""
],
[
"Jonsson",
"Ann-Beth",
""
],
[
"Wallin",
"Mats",
""
]
] | The motility of certain gram-negative bacteria is mediated by retraction of type IV pili surface filaments, which are essential for infectivity. The retraction is powered by a strong molecular motor protein, PilT, producing very high forces that can exceed 150 pN. The molecular details of the motor mechanism are still largely unknown, while other features have been identified, such as the ring-shaped protein structure of the PilT motor. The surprisingly high forces generated by the PilT system motivate a model investigation of the generation of large forces in molecular motors. We propose a simple model, involving a small ensemble of motor subunits interacting through the deformations on a circular backbone with finite stiffness. The model describes the motor subunits in terms of diffusing particles in an asymmetric, time-dependent binding potential (flashing ratchet potential), roughly corresponding to the ATP hydrolysis cycle. We compute force-velocity relations in a subset of the parameter space and explore how the maximum force (stall force) is determined by stiffness, binding strength, ensemble size, and degree of asymmetry. We identify two qualitatively different regimes of operation depending on the relation between ensemble size and asymmetry. In the transition between these two regimes, the stall force depends nonlinearly on the number of motor subunits. Compared to its constituents without interactions, we find higher efficiency and qualitatively different force-velocity relations. The model captures several of the qualitative features obtained in experiments on pilus retraction forces, such as roughly constant velocity at low applied forces and insensitivity in the stall force to changes in the ATP concentration. |
1108.3590 | Eugene Koonin | Alexander E. Lobkovsky, Yuri I. Wolf, Eugene V. Koonin | Predictability of evolutionary trajectories in fitness landscapes | 14 pages, 7 figures | null | 10.1371/journal.pcbi.1002302 | null | q-bio.PE q-bio.BM q-bio.MN | http://creativecommons.org/licenses/publicdomain/ | Experimental studies on enzyme evolution show that only a small fraction of
all possible mutation trajectories are accessible to evolution. However, these
experiments deal with individual enzymes and explore a tiny part of the fitness
landscape. We report an exhaustive analysis of fitness landscapes constructed
with an off-lattice model of protein folding where fitness is equated with
robustness to misfolding. This model mimics the essential features of the
interactions between amino acids, is consistent with the key paradigms of
protein folding and reproduces the universal distribution of evolutionary rates
among orthologous proteins. We introduce mean path divergence as a quantitative
measure of the degree to which the starting and ending points determine the
path of evolution in fitness landscapes. Global measures of landscape roughness
are good predictors of path divergence in all studied landscapes: the mean path
divergence is greater in smooth landscapes than in rough ones. The
model-derived and experimental landscapes are significantly smoother than
random landscapes and resemble additive landscapes perturbed with moderate
amounts of noise; thus, these landscapes are substantially robust to mutation.
The model landscapes show a deficit of suboptimal peaks even compared with
noisy additive landscapes with similar overall roughness. We suggest that
smoothness and the substantial deficit of peaks in the fitness landscapes of
protein evolution are fundamental consequences of the physics of protein
folding.
| [
{
"created": "Wed, 17 Aug 2011 22:23:03 GMT",
"version": "v1"
}
] | 2015-05-30 | [
[
"Lobkovsky",
"Alexander E.",
""
],
[
"Wolf",
"Yuri I.",
""
],
[
"Koonin",
"Eugene V.",
""
]
] | Experimental studies on enzyme evolution show that only a small fraction of all possible mutation trajectories are accessible to evolution. However, these experiments deal with individual enzymes and explore a tiny part of the fitness landscape. We report an exhaustive analysis of fitness landscapes constructed with an off-lattice model of protein folding where fitness is equated with robustness to misfolding. This model mimics the essential features of the interactions between amino acids, is consistent with the key paradigms of protein folding and reproduces the universal distribution of evolutionary rates among orthologous proteins. We introduce mean path divergence as a quantitative measure of the degree to which the starting and ending points determine the path of evolution in fitness landscapes. Global measures of landscape roughness are good predictors of path divergence in all studied landscapes: the mean path divergence is greater in smooth landscapes than in rough ones. The model-derived and experimental landscapes are significantly smoother than random landscapes and resemble additive landscapes perturbed with moderate amounts of noise; thus, these landscapes are substantially robust to mutation. The model landscapes show a deficit of suboptimal peaks even compared with noisy additive landscapes with similar overall roughness. We suggest that smoothness and the substantial deficit of peaks in the fitness landscapes of protein evolution are fundamental consequences of the physics of protein folding. |
1505.04600 | Zedong Bi | Zedong Bi, Changsong Zhou, Hai-Jun Zhou | Spike Pattern Structure Influences Efficacy Variability under STDP and
Synaptic Homeostasis | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In neural systems, synaptic plasticity is usually driven by spike trains. Due
to the inherent noises of neurons, synapses and networks, spike trains
typically exhibit externally uncontrollable variability such as spatial
heterogeneity and temporal stochasticity, resulting in variability of synapses,
which we call efficacy variability. Spike patterns with the same population
rate but inducing different efficacy variability may result in neuronal
networks with sharply different structures and functions. However, how the
variability of spike trains influences the efficacy variability remains
unclear. Here, we systematically study this influence when spike patterns
possess four aspects of statistical features, i.e. synchronous firing,
auto-temporal structure, heterogeneity of rates and heterogeneity of
cross-correlations, under spike-timing dependent plasticity (STDP) after
dynamically bounding the mean strength of plastic synapses into or out of a
neuron (synaptic homeostasis). We then show the functional importance of
efficacy variability on the encoding and maintenance of connection patterns and
on the early development of primary visual systems driven by retinal waves. We
anticipate our work brings a fresh perspective to the understanding of the
interaction between synaptic plasticity and dynamical spike patterns in
functional processes of neural systems.
| [
{
"created": "Mon, 18 May 2015 11:36:18 GMT",
"version": "v1"
},
{
"created": "Fri, 29 May 2015 02:50:46 GMT",
"version": "v2"
},
{
"created": "Wed, 17 Jun 2015 14:51:59 GMT",
"version": "v3"
}
] | 2015-06-18 | [
[
"Bi",
"Zedong",
""
],
[
"Zhou",
"Changsong",
""
],
[
"Zhou",
"Hai-Jun",
""
]
] | In neural systems, synaptic plasticity is usually driven by spike trains. Due to the inherent noises of neurons, synapses and networks, spike trains typically exhibit externally uncontrollable variability such as spatial heterogeneity and temporal stochasticity, resulting in variability of synapses, which we call efficacy variability. Spike patterns with the same population rate but inducing different efficacy variability may result in neuronal networks with sharply different structures and functions. However, how the variability of spike trains influences the efficacy variability remains unclear. Here, we systematically study this influence when spike patterns possess four aspects of statistical features, i.e. synchronous firing, auto-temporal structure, heterogeneity of rates and heterogeneity of cross-correlations, under spike-timing dependent plasticity (STDP) after dynamically bounding the mean strength of plastic synapses into or out of a neuron (synaptic homeostasis). We then show the functional importance of efficacy variability on the encoding and maintenance of connection patterns and on the early development of primary visual systems driven by retinal waves. We anticipate our work brings a fresh perspective to the understanding of the interaction between synaptic plasticity and dynamical spike patterns in functional processes of neural systems. |
2105.10404 | Michael Rosenblum | Michael Rosenblum and Arkady Pikovsky and Andrea A. K\"uhn and
Johannes L. Busch | Real-time estimation of phase and amplitude with application to neural
data | 11 pages, 5 figures | null | null | null | q-bio.NC eess.SP | http://creativecommons.org/licenses/by/4.0/ | Computation of the instantaneous phase and amplitude via the Hilbert
Transform is a powerful tool of data analysis. This approach finds many
applications in various science and engineering branches but is not proper for
causal estimation because it requires knowledge of the signal's past and
future. However, several problems require real-time estimation of phase and
amplitude; an illustrative example is phase-locked or amplitude-dependent
stimulation in neuroscience. In this paper, we discuss and compare three causal
algorithms that do not rely on the Hilbert Transform but exploit well-known
physical phenomena, the synchronization and the resonance. After testing the
algorithms on a synthetic data set, we illustrate their performance computing
phase and amplitude for the accelerometer tremor measurements and a
Parkinsonian patient's beta-band brain activity.
| [
{
"created": "Thu, 20 May 2021 12:21:33 GMT",
"version": "v1"
}
] | 2021-05-24 | [
[
"Rosenblum",
"Michael",
""
],
[
"Pikovsky",
"Arkady",
""
],
[
"Kühn",
"Andrea A.",
""
],
[
"Busch",
"Johannes L.",
""
]
] | Computation of the instantaneous phase and amplitude via the Hilbert Transform is a powerful tool of data analysis. This approach finds many applications in various science and engineering branches but is not proper for causal estimation because it requires knowledge of the signal's past and future. However, several problems require real-time estimation of phase and amplitude; an illustrative example is phase-locked or amplitude-dependent stimulation in neuroscience. In this paper, we discuss and compare three causal algorithms that do not rely on the Hilbert Transform but exploit well-known physical phenomena, the synchronization and the resonance. After testing the algorithms on a synthetic data set, we illustrate their performance computing phase and amplitude for the accelerometer tremor measurements and a Parkinsonian patient's beta-band brain activity. |
1501.01167 | Helene Loevenbruck | Claudia Kubicek, Judit Gervain, Anne Hillairet De Boisferon, Olivier
Pascalis (LPNC), H\'el\`ene L{\oe}venbruck (LPNC), Gudrun Schwarzer | The influence of infant-directed speech on 12-month-olds' intersensory
perception of fluent speech | null | Infant Behavior and Development, Elsevier, 2014, 37 (4),
pp.644-651 | 10.1016/j.infbeh.2014.08.010 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The present study examined whether infant-directed (ID) speech facilitates
intersensory matching of audio--visual fluent speech in 12-month-old infants.
German-learning infants' audio--visual matching ability of German and French
fluent speech was assessed by using a variant of the intermodal matching
procedure, with auditory and visual speech information presented sequentially.
In Experiment 1, the sentences were spoken in an adult-directed (AD) manner.
Results showed that 12-month-old infants did not exhibit a matching performance
for the native, nor for the non-native language. However, Experiment 2 revealed
that when ID speech stimuli were used, infants did perceive the relation
between auditory and visual speech attributes, but only in response to their
native language. Thus, the findings suggest that ID speech might have an
influence on the intersensory perception of fluent speech and shed further
light on multisensory perceptual narrowing.
| [
{
"created": "Tue, 6 Jan 2015 12:51:25 GMT",
"version": "v1"
}
] | 2016-08-08 | [
[
"Kubicek",
"Claudia",
"",
"LPNC"
],
[
"Gervain",
"Judit",
"",
"LPNC"
],
[
"De Boisferon",
"Anne Hillairet",
"",
"LPNC"
],
[
"Pascalis",
"Olivier",
"",
"LPNC"
],
[
"Lœvenbruck",
"Hélène",
"",
"LPNC"
],
[
"Schwar... | The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio--visual fluent speech in 12-month-old infants. German-learning infants' audio--visual matching ability of German and French fluent speech was assessed by using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, the sentences were spoken in an adult-directed (AD) manner. Results showed that 12-month-old infants did not exhibit a matching performance for the native, nor for the non-native language. However, Experiment 2 revealed that when ID speech stimuli were used, infants did perceive the relation between auditory and visual speech attributes, but only in response to their native language. Thus, the findings suggest that ID speech might have an influence on the intersensory perception of fluent speech and shed further light on multisensory perceptual narrowing. |
1307.8014 | Joseph Pickrell | Joseph K. Pickrell, Nick Patterson, Po-Ru Loh, Mark Lipson, Bonnie
Berger, Mark Stoneking, Brigitte Pakendorf, David Reich | Ancient west Eurasian ancestry in southern and eastern Africa | Added additional simulations, some additional discussion | Proc Natl Acad Sci U S A. 2014 Feb 18;111(7):2632-7 | 10.1073/pnas.1313787111 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The history of southern Africa involved interactions between indigenous
hunter-gatherers and a range of populations that moved into the region. Here we
use genome-wide genetic data to show that there are at least two admixture
events in the history of Khoisan populations (southern African hunter-gatherers
and pastoralists who speak non-Bantu languages with click consonants). One
involved populations related to Niger-Congo-speaking African populations, and
the other introduced ancestry most closely related to west Eurasian (European
or Middle Eastern) populations. We date this latter admixture event to
approximately 900-1,800 years ago, and show that it had the largest demographic
impact in Khoisan populations that speak Khoe-Kwadi languages. A similar signal
of west Eurasian ancestry is present throughout eastern Africa. In particular,
we also find evidence for two admixture events in the history of Kenyan,
Tanzanian, and Ethiopian populations, the earlier of which involved populations
related to west Eurasians and which we date to approximately 2,700 - 3,300
years ago. We reconstruct the allele frequencies of the putative west Eurasian
population in eastern Africa, and show that this population is a good proxy for
the west Eurasian ancestry in southern Africa. The most parsimonious
explanation for these findings is that west Eurasian ancestry entered southern
Africa indirectly through eastern Africa.
| [
{
"created": "Tue, 30 Jul 2013 15:25:48 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Oct 2013 20:03:52 GMT",
"version": "v2"
}
] | 2014-04-24 | [
[
"Pickrell",
"Joseph K.",
""
],
[
"Patterson",
"Nick",
""
],
[
"Loh",
"Po-Ru",
""
],
[
"Lipson",
"Mark",
""
],
[
"Berger",
"Bonnie",
""
],
[
"Stoneking",
"Mark",
""
],
[
"Pakendorf",
"Brigitte",
""
],
[
... | The history of southern Africa involved interactions between indigenous hunter-gatherers and a range of populations that moved into the region. Here we use genome-wide genetic data to show that there are at least two admixture events in the history of Khoisan populations (southern African hunter-gatherers and pastoralists who speak non-Bantu languages with click consonants). One involved populations related to Niger-Congo-speaking African populations, and the other introduced ancestry most closely related to west Eurasian (European or Middle Eastern) populations. We date this latter admixture event to approximately 900-1,800 years ago, and show that it had the largest demographic impact in Khoisan populations that speak Khoe-Kwadi languages. A similar signal of west Eurasian ancestry is present throughout eastern Africa. In particular, we also find evidence for two admixture events in the history of Kenyan, Tanzanian, and Ethiopian populations, the earlier of which involved populations related to west Eurasians and which we date to approximately 2,700 - 3,300 years ago. We reconstruct the allele frequencies of the putative west Eurasian population in eastern Africa, and show that this population is a good proxy for the west Eurasian ancestry in southern Africa. The most parsimonious explanation for these findings is that west Eurasian ancestry entered southern Africa indirectly through eastern Africa. |
2102.05813 | Bryce Morsky | Bryce Morsky and Dervis Can Vural | Suppressing evolution through environmental switching | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Ecology and evolution under changing environments are important in many
subfields of biology with implications for medicine. Here, we explore an
example: the consequences of fluctuating environments on the emergence of
antibiotic resistance, which is an immense and growing problem. Typically, high
doses of antibiotics are employed to eliminate the infection quickly and
minimize the time under which resistance may emerge. However, this strategy may
not be optimal. Since competition can reduce fitness and resistance typically
has a reproductive cost, resistant mutants' fitness can depend on their
environment. Here we show conditions under which environmental varying fitness
can be exploited to prevent the emergence of resistance. We develop a
stochastic Lotka-Volterra model of a microbial system with competing
phenotypes: a wild strain susceptible to the antibiotic, and a mutant strain
that is resistant. We investigate the impact of various pulsed applications of
antibiotics on population suppression. Leveraging competition, we show how a
strategy of environmental switching can suppress the infection while avoiding
resistant mutants. We discuss limitations of the procedure depending on the
microbe and pharmacodynamics and methods to ameliorate them.
| [
{
"created": "Thu, 11 Feb 2021 02:32:46 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Apr 2021 16:44:29 GMT",
"version": "v2"
}
] | 2021-04-13 | [
[
"Morsky",
"Bryce",
""
],
[
"Vural",
"Dervis Can",
""
]
] | Ecology and evolution under changing environments are important in many subfields of biology with implications for medicine. Here, we explore an example: the consequences of fluctuating environments on the emergence of antibiotic resistance, which is an immense and growing problem. Typically, high doses of antibiotics are employed to eliminate the infection quickly and minimize the time under which resistance may emerge. However, this strategy may not be optimal. Since competition can reduce fitness and resistance typically has a reproductive cost, resistant mutants' fitness can depend on their environment. Here we show conditions under which environmental varying fitness can be exploited to prevent the emergence of resistance. We develop a stochastic Lotka-Volterra model of a microbial system with competing phenotypes: a wild strain susceptible to the antibiotic, and a mutant strain that is resistant. We investigate the impact of various pulsed applications of antibiotics on population suppression. Leveraging competition, we show how a strategy of environmental switching can suppress the infection while avoiding resistant mutants. We discuss limitations of the procedure depending on the microbe and pharmacodynamics and methods to ameliorate them. |
2008.05218 | Zeynep Gokce Islier | Zeynep G\"ok\c{c}e \.I\c{s}lier and Wolfgang H\"ormann and Refik
G\"ull\"u | Assessing Intervention Strategies for Non Homogeneous Populations Using
a Closed Form Formula for R0 | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A general stochastic model for susceptible -> infective -> recovered (SIR)
epidemics in non homogeneous populations is considered. The heterogeneity is a
very important aspect here since it allows more realistic but also more complex
models. The basic reproduction number $R_0$, an indication of the probability
of an outbreak for homogeneous populations does not indicate the probability of
an outbreak for non homogeneous models anymore, because it changes with the
initially infected case. Therefore, we use "individual $R_0$" that is the
expected number of secondary cases for a given initially infected individual.
Thus, the effectiveness of intervention strategies can be assessed by their
capability to reduce individual $R_0$ values. Also an intelligent vaccination
plan for fully heterogeneous populations is proposed. It is based on the
recursive calculation of individual R0 values.
| [
{
"created": "Wed, 12 Aug 2020 10:29:25 GMT",
"version": "v1"
}
] | 2020-08-13 | [
[
"İşlier",
"Zeynep Gökçe",
""
],
[
"Hörmann",
"Wolfgang",
""
],
[
"Güllü",
"Refik",
""
]
] | A general stochastic model for susceptible -> infective -> recovered (SIR) epidemics in non homogeneous populations is considered. The heterogeneity is a very important aspect here since it allows more realistic but also more complex models. The basic reproduction number $R_0$, an indication of the probability of an outbreak for homogeneous populations does not indicate the probability of an outbreak for non homogeneous models anymore, because it changes with the initially infected case. Therefore, we use "individual $R_0$" that is the expected number of secondary cases for a given initially infected individual. Thus, the effectiveness of intervention strategies can be assessed by their capability to reduce individual $R_0$ values. Also an intelligent vaccination plan for fully heterogeneous populations is proposed. It is based on the recursive calculation of individual R0 values. |
2111.06511 | Matthew Kvalheim | Simon Wilshin, Matthew D. Kvalheim, and Shai Revzen | Phase Response Curves and the Role of Coordinates | 21 pages, 9 figures, comments welcome | null | null | null | q-bio.QM math.CA math.DS math.OC q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The "Phase Response Curve" (PRC) is a common tool used to analyze phase
resetting in the natural sciences. We make the observation that the PRC with
respect to a coordinate $y\in\mathbb{R}$ actually depends on the full choice of
coordinates $(x,y)$, $x\in\mathbb{R}^d$. We give a coordinate-free definition
of the PRC making this observation obvious. We show how by controlling $y$,
using delay coordinates of $y$, and postulating the dynamics of $x$ as a
function of $x$ and $y$, we can sometimes reconstruct the PRC with respect to
the $(x,y)$ coordinates. This suggests a means for obtaining the PRC of, e.g.,
a neuron via a voltage clamp.
| [
{
"created": "Fri, 12 Nov 2021 00:46:13 GMT",
"version": "v1"
}
] | 2021-11-15 | [
[
"Wilshin",
"Simon",
""
],
[
"Kvalheim",
"Matthew D.",
""
],
[
"Revzen",
"Shai",
""
]
] | The "Phase Response Curve" (PRC) is a common tool used to analyze phase resetting in the natural sciences. We make the observation that the PRC with respect to a coordinate $y\in\mathbb{R}$ actually depends on the full choice of coordinates $(x,y)$, $x\in\mathbb{R}^d$. We give a coordinate-free definition of the PRC making this observation obvious. We show how by controlling $y$, using delay coordinates of $y$, and postulating the dynamics of $x$ as a function of $x$ and $y$, we can sometimes reconstruct the PRC with respect to the $(x,y)$ coordinates. This suggests a means for obtaining the PRC of, e.g., a neuron via a voltage clamp. |
1207.2627 | J\'er\^ome Bastien | Thomas Creveaux, J\'er\^ome Bastien, Cl\'ement Villars, and Pierre
Legreneur | Model of joint displacement using sigmoid function. Experimental
approach for planar pointing task and squat jump | 27 pages, 17 figures | null | null | null | q-bio.QM physics.bio-ph | http://creativecommons.org/licenses/by/3.0/ | Using an experimental optimization approach, this study investigated whether
two human movements, pointing tasks and squat-jumps, could be modelled with a
reduced set of kinematic parameters. Three sigmoid models were proposed to
model the evolution of joint angles. The models parameters were optimized to
fit the 2D position of the joints obtained from 304 pointing tasks and 120
squat-jumps. The models were accurate for both movements. This study provides a
new framework to model planar movements with a small number of meaningful
kinematic parameters, allowing a continuous description of both kinematics and
kinetics. Further researches should investigate the implication of the control
parameters in relation to motor control and validate this approach for three
dimensional movements.
| [
{
"created": "Wed, 11 Jul 2012 13:08:35 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Sep 2012 09:36:09 GMT",
"version": "v2"
},
{
"created": "Mon, 13 May 2013 09:39:17 GMT",
"version": "v3"
}
] | 2013-05-16 | [
[
"Creveaux",
"Thomas",
""
],
[
"Bastien",
"Jérôme",
""
],
[
"Villars",
"Clément",
""
],
[
"Legreneur",
"Pierre",
""
]
] | Using an experimental optimization approach, this study investigated whether two human movements, pointing tasks and squat-jumps, could be modelled with a reduced set of kinematic parameters. Three sigmoid models were proposed to model the evolution of joint angles. The models parameters were optimized to fit the 2D position of the joints obtained from 304 pointing tasks and 120 squat-jumps. The models were accurate for both movements. This study provides a new framework to model planar movements with a small number of meaningful kinematic parameters, allowing a continuous description of both kinematics and kinetics. Further researches should investigate the implication of the control parameters in relation to motor control and validate this approach for three dimensional movements. |
2106.01867 | Oscar Garc\'ia | Oscar Garc\'ia | Plasticity as a link between spatially explicit, distance-independent,
and whole-stand forest growth models | 9 pages, 8 figures. Submitted manuscript. Version 2 with minor
changes: expanded caption in Fig. 2, new Fig. 6, corrected Wikipedia
reference, fix typos | Forest Science 68(1), 1-7. 2022 | 10.1093/forsci/fxab043 | null | q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Models at various levels of resolution are commonly used, both for forest
management and in ecological research. They all have comparative advantages and
disadvantages, making desirable a better understanding of the relationships
between the various approaches. It is found that accounting for crown and root
plasticity creates more realistic links between spatial and non-spatial models
than simply ignoring spatial structure. The article reviews also the connection
between distance-independent models and size distributions, and how
distributions evolve over time and relate to whole-stand descriptions. In
addition, some ways in which stand-level knowledge feeds back into detailed
individual-tree formulations are demonstrated. The presentation intends to be
accessible to non-specialists.
Study implications: Introducing plasticity improves the representation of
physio-ecological processes in spatial modelling. Plasticity explains in part
the practical success of distance-independent models. The nature of size
distributions and their relationship to individual-tree and whole-stand models
are discussed. I point out limitations of various approaches and questions for
future research.
| [
{
"created": "Sun, 30 May 2021 23:59:03 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Jun 2021 16:19:32 GMT",
"version": "v2"
}
] | 2022-02-03 | [
[
"García",
"Oscar",
""
]
] | Models at various levels of resolution are commonly used, both for forest management and in ecological research. They all have comparative advantages and disadvantages, making desirable a better understanding of the relationships between the various approaches. It is found that accounting for crown and root plasticity creates more realistic links between spatial and non-spatial models than simply ignoring spatial structure. The article reviews also the connection between distance-independent models and size distributions, and how distributions evolve over time and relate to whole-stand descriptions. In addition, some ways in which stand-level knowledge feeds back into detailed individual-tree formulations are demonstrated. The presentation intends to be accessible to non-specialists. Study implications: Introducing plasticity improves the representation of physio-ecological processes in spatial modelling. Plasticity explains in part the practical success of distance-independent models. The nature of size distributions and their relationship to individual-tree and whole-stand models are discussed. I point out limitations of various approaches and questions for future research. |
2101.01748 | Giuseppe Tronci | Amy Contreras, Michael J. Raxworthy, Simon Wood, Giuseppe Tronci | Hydrolytic Degradability, Cell Tolerance and On-Demand Antibacterial
Effect of Electrospun Photodynamically Active Fibres | null | null | 10.3390/pharmaceutics12080711 | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Photodynamically active fibres (PAFs) are a novel class of stimulus-sensitive
systems capable of triggering antibiotic-free antibacterial effect on-demand
when exposed to light. Despite their relevance in infection control, however,
the broad clinical applicability of PAFs has not yet been fully realised due to
the limited control in fibrous microstructure, cell tolerance and antibacterial
activity in the physiologic environment. We addressed this challenge by
creating semicrystalline electrospun fibres with varying content of
poly[(L-lactide)-co-(glycolide)] (PLGA), poly(epsilon-caprolactone) (PCL) and
methylene blue (MB), whereby the effect of polymer morphology, fibre
composition and photosensitiser (PS) uptake on wet state fibre behaviour and
functions was studied. The presence of crystalline domains and PS-polymer
secondary interactions proved key to accomplishing long-lasting fibrous
microstructure, controlled mass loss and controlled MB release profiles (37
degC, pH 7.4, 8 weeks). PAFs with equivalent PLGA:PCL weight ratio successfully
promoted attachment and proliferation of L929 cells over a 7-day culture with
and without light activation, while triggering up to 2.5 and 4 log reduction in
E. coli and S. mutans viability, respectively. These results support the
therapeutic applicability of PAFs for frequently encountered bacterial
infections, opening up new opportunities in photodynamic fibrous systems with
integrated wound healing and infection control capabilities.
| [
{
"created": "Tue, 5 Jan 2021 19:25:32 GMT",
"version": "v1"
}
] | 2021-01-07 | [
[
"Contreras",
"Amy",
""
],
[
"Raxworthy",
"Michael J.",
""
],
[
"Wood",
"Simon",
""
],
[
"Tronci",
"Giuseppe",
""
]
] | Photodynamically active fibres (PAFs) are a novel class of stimulus-sensitive systems capable of triggering antibiotic-free antibacterial effect on-demand when exposed to light. Despite their relevance in infection control, however, the broad clinical applicability of PAFs has not yet been fully realised due to the limited control in fibrous microstructure, cell tolerance and antibacterial activity in the physiologic environment. We addressed this challenge by creating semicrystalline electrospun fibres with varying content of poly[(L-lactide)-co-(glycolide)] (PLGA), poly(epsilon-caprolactone) (PCL) and methylene blue (MB), whereby the effect of polymer morphology, fibre composition and photosensitiser (PS) uptake on wet state fibre behaviour and functions was studied. The presence of crystalline domains and PS-polymer secondary interactions proved key to accomplishing long-lasting fibrous microstructure, controlled mass loss and controlled MB release profiles (37 degC, pH 7.4, 8 weeks). PAFs with equivalent PLGA:PCL weight ratio successfully promoted attachment and proliferation of L929 cells over a 7-day culture with and without light activation, while triggering up to 2.5 and 4 log reduction in E. coli and S. mutans viability, respectively. These results support the therapeutic applicability of PAFs for frequently encountered bacterial infections, opening up new opportunities in photodynamic fibrous systems with integrated wound healing and infection control capabilities. |
1711.04084 | Siu Hung Joshua Chan | Siu Hung Joshua Chan, Lin Wang, Satyakam Dash and Costas D. Maranas | Accelerating flux balance calculations in genome-scale metabolic models
by localizing the application of loopless constraints | Manuscript + SI Methods | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Genome-scale metabolic network models and constraint-based
modeling techniques have become important tools for analyzing cellular
metabolism. Thermodynamically infeasible cy-cles (TICs) causing unbounded
metabolic flux ranges are often encountered. TICs satisfy the mass balance and
directionality constraints but violate the second law of thermodynamics.
Current prac-tices involve implementing additional constraints to ensure not
only optimal but also loopless flux distributions. However, the mixed integer
linear programming problems required to solve become computationally
intractable for genome-scale metabolic models. Results: We aimed to identify
the fewest needed constraints sufficient for optimality under the loopless
requirement. We found that loopless constraints are required only for the
reactions that share elementary flux modes representing TICs with reactions
that are part of the objective function. We put forth the concept of localized
loopless constraints (LLCs) to enforce this minimal required set of loopless
constraints. By combining with a novel procedure for minimal null-space
calculation, the computational time for loopless flux variability analysis is
reduced by a factor of 10-150 compared to the original loopless constraints and
by 4-20 times compared to the currently fastest method Fast-SNP with the
percent improvement increasing with model size. Importantly, LLCs offer a
scalable strategy for loopless flux calculations for
multi-compartment/multi-organism models of very large sizes (e.g. >10^4
reactions) not feasible before. Matlab functions are available at
https://github.com/maranasgroup/lll-FVA.
| [
{
"created": "Sat, 11 Nov 2017 06:03:08 GMT",
"version": "v1"
}
] | 2017-11-15 | [
[
"Chan",
"Siu Hung Joshua",
""
],
[
"Wang",
"Lin",
""
],
[
"Dash",
"Satyakam",
""
],
[
"Maranas",
"Costas D.",
""
]
] | Background: Genome-scale metabolic network models and constraint-based modeling techniques have become important tools for analyzing cellular metabolism. Thermodynamically infeasible cycles (TICs) causing unbounded metabolic flux ranges are often encountered. TICs satisfy the mass balance and directionality constraints but violate the second law of thermodynamics. Current practices involve implementing additional constraints to ensure not only optimal but also loopless flux distributions. However, the mixed integer linear programming problems that must be solved become computationally intractable for genome-scale metabolic models. Results: We aimed to identify the fewest needed constraints sufficient for optimality under the loopless requirement. We found that loopless constraints are required only for the reactions that share elementary flux modes representing TICs with reactions that are part of the objective function. We put forth the concept of localized loopless constraints (LLCs) to enforce this minimal required set of loopless constraints. By combining with a novel procedure for minimal null-space calculation, the computational time for loopless flux variability analysis is reduced by a factor of 10-150 compared to the original loopless constraints and by 4-20 times compared to the currently fastest method Fast-SNP with the percent improvement increasing with model size. Importantly, LLCs offer a scalable strategy for loopless flux calculations for multi-compartment/multi-organism models of very large sizes (e.g. >10^4 reactions) not feasible before. Matlab functions are available at https://github.com/maranasgroup/lll-FVA. |
2002.05942 | Ng Shyh-Chang | Ng Shyh-Chang, Liaofu Luo | DNA Torsion-based Model of Cell Fate Phase Transitions | 11 pages | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | All stem cell fate transitions, including the metabolic reprogramming of stem
cells and the somatic reprogramming of fibroblasts into pluripotent stem cells,
can be understood from a unified theoretical model of cell fates. Each cell
fate transition can be regarded as a phase transition in DNA supercoiling.
However, there has been a dearth of quantitative biophysical models to explain
and predict the behaviors of these phase transitions. The generalized Ising
model is proposed to define such phase transitions. The model predicts that,
apart from temperature-induced phase transitions, there exist DNA torsion
frequency-induced phase transitions. Major transitions in epigenetic states,
from stem cell activation to differentiation and reprogramming, can be
explained by such torsion frequency-induced phase transitions, with important
implications for regenerative medicine and medical diagnostics in the future.
| [
{
"created": "Fri, 14 Feb 2020 10:06:20 GMT",
"version": "v1"
}
] | 2020-02-17 | [
[
"Shyh-Chang",
"Ng",
""
],
[
"Luo",
"Liaofu",
""
]
] | All stem cell fate transitions, including the metabolic reprogramming of stem cells and the somatic reprogramming of fibroblasts into pluripotent stem cells, can be understood from a unified theoretical model of cell fates. Each cell fate transition can be regarded as a phase transition in DNA supercoiling. However, there has been a dearth of quantitative biophysical models to explain and predict the behaviors of these phase transitions. The generalized Ising model is proposed to define such phase transitions. The model predicts that, apart from temperature-induced phase transitions, there exist DNA torsion frequency-induced phase transitions. Major transitions in epigenetic states, from stem cell activation to differentiation and reprogramming, can be explained by such torsion frequency-induced phase transitions, with important implications for regenerative medicine and medical diagnostics in the future. |
2301.07808 | Robert Noble | Maciej Bak, Blair Colyer, Veselin Manojlovi\'c, Robert Noble | Warlock: an automated computational workflow for simulating spatially
structured tumour evolution | 5 pages, 2 figures | null | null | null | q-bio.QM q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | A primary goal of modern cancer research is to characterize tumour growth and
evolution, to improve clinical forecasting and individualized treatment.
Agent-based models support this endeavour but existing models either
oversimplify spatial structure or are mathematically intractable. Here we
present warlock, an open-source automated computational workflow for fast,
efficient simulation of intratumour population genetics in any of a diverse set
of spatial structures. Warlock encapsulates a deme-based oncology model
(demon), designed to bridge the divide between agent-based simulations and
analytical population genetics models, such as the spatial Moran process. Model
output can be readily compared to multi-region and single-cell sequencing data
for model selection or biological parameter inference. An interface for High
Performance Computing permits hundreds of simulations to be run in parallel. We
discuss prior applications of this workflow to investigating human cancer
evolution.
| [
{
"created": "Wed, 18 Jan 2023 22:38:08 GMT",
"version": "v1"
}
] | 2023-01-20 | [
[
"Bak",
"Maciej",
""
],
[
"Colyer",
"Blair",
""
],
[
"Manojlović",
"Veselin",
""
],
[
"Noble",
"Robert",
""
]
] | A primary goal of modern cancer research is to characterize tumour growth and evolution, to improve clinical forecasting and individualized treatment. Agent-based models support this endeavour but existing models either oversimplify spatial structure or are mathematically intractable. Here we present warlock, an open-source automated computational workflow for fast, efficient simulation of intratumour population genetics in any of a diverse set of spatial structures. Warlock encapsulates a deme-based oncology model (demon), designed to bridge the divide between agent-based simulations and analytical population genetics models, such as the spatial Moran process. Model output can be readily compared to multi-region and single-cell sequencing data for model selection or biological parameter inference. An interface for High Performance Computing permits hundreds of simulations to be run in parallel. We discuss prior applications of this workflow to investigating human cancer evolution. |
1301.0736 | Denis Menshykau | Amarendra Badugu, Conradin Kraemer, Philipp Germann, Denis Menshykau
and Dagmar Iber | Digit patterning during limb development as a result of the BMP-receptor
interaction | null | SCIENTIFIC REPORTS 2012 2 : 991 | 10.1038/srep00991 | null | q-bio.TO q-bio.MN | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Turing models have been proposed to explain the emergence of digits during
limb development. However, so far the molecular components that would give rise
to Turing patterns are elusive. We have recently shown that a particular type
of receptor-ligand interaction can give rise to Schnakenberg-type Turing
patterns, which reproduce patterning during lung and kidney branching
morphogenesis. Recent knock-out experiments have identified Smad4 as a key
protein in digit patterning. We show here that the BMP-receptor interaction
meets the conditions for a Schnakenberg-type Turing pattern, and that the
resulting model reproduces available wildtype and mutant data on the expression
patterns of BMP, its receptor, and Fgfs in the apical ectodermal ridge (AER)
when solved on a realistic 2D domain that we extracted from limb bud images of
E11.5 mouse embryos. We propose that receptor-ligand-based mechanisms serve as
a molecular basis for the emergence of Turing patterns in many developing
tissues.
| [
{
"created": "Fri, 4 Jan 2013 14:36:28 GMT",
"version": "v1"
}
] | 2013-01-07 | [
[
"Badugu",
"Amarendra",
""
],
[
"Kraemer",
"Conradin",
""
],
[
"Germann",
"Philipp",
""
],
[
"Menshykau",
"Denis",
""
],
[
"Iber",
"Dagmar",
""
]
] | Turing models have been proposed to explain the emergence of digits during limb development. However, so far the molecular components that would give rise to Turing patterns are elusive. We have recently shown that a particular type of receptor-ligand interaction can give rise to Schnakenberg-type Turing patterns, which reproduce patterning during lung and kidney branching morphogenesis. Recent knock-out experiments have identified Smad4 as a key protein in digit patterning. We show here that the BMP-receptor interaction meets the conditions for a Schnakenberg-type Turing pattern, and that the resulting model reproduces available wildtype and mutant data on the expression patterns of BMP, its receptor, and Fgfs in the apical ectodermal ridge (AER) when solved on a realistic 2D domain that we extracted from limb bud images of E11.5 mouse embryos. We propose that receptor-ligand-based mechanisms serve as a molecular basis for the emergence of Turing patterns in many developing tissues. |
1507.05249 | Leonardo L. Gollo | Leonardo L. Gollo, Mauro Copelli, James A. Roberts | Diversity improves performance in excitable networks | 17 pages, 7 figures | PeerJ 4:e1912 (2016) | 10.7717/peerj.1912 | null | q-bio.NC cond-mat.dis-nn cond-mat.stat-mech nlin.CG physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As few real systems comprise indistinguishable units, diversity is a hallmark
of nature. Diversity among interacting units shapes properties of collective
behavior such as synchronization and information transmission. However, the
benefits of diversity on information processing at the edge of a phase
transition, ordinarily assumed to emerge from identical elements, remain
largely unexplored. Analyzing a general model of excitable systems with
heterogeneous excitability, we find that diversity can greatly enhance optimal
performance (by two orders of magnitude) when distinguishing incoming inputs.
Heterogeneous systems possess a subset of specialized elements whose capability
greatly exceeds that of the nonspecialized elements. Nonetheless, the behavior
of the whole network can outperform all subgroups. We also find that diversity
can yield multiple percolation, with performance optimized at tricriticality.
Our results are robust in specific and more realistic neuronal systems
comprising a combination of excitatory and inhibitory units, and indicate that
diversity-induced amplification can be harnessed by neuronal systems for
evaluating stimulus intensities.
| [
{
"created": "Sun, 19 Jul 2015 06:38:14 GMT",
"version": "v1"
}
] | 2016-05-06 | [
[
"Gollo",
"Leonardo L.",
""
],
[
"Copelli",
"Mauro",
""
],
[
"Roberts",
"James A.",
""
]
] | As few real systems comprise indistinguishable units, diversity is a hallmark of nature. Diversity among interacting units shapes properties of collective behavior such as synchronization and information transmission. However, the benefits of diversity on information processing at the edge of a phase transition, ordinarily assumed to emerge from identical elements, remain largely unexplored. Analyzing a general model of excitable systems with heterogeneous excitability, we find that diversity can greatly enhance optimal performance (by two orders of magnitude) when distinguishing incoming inputs. Heterogeneous systems possess a subset of specialized elements whose capability greatly exceeds that of the nonspecialized elements. Nonetheless, the behavior of the whole network can outperform all subgroups. We also find that diversity can yield multiple percolation, with performance optimized at tricriticality. Our results are robust in specific and more realistic neuronal systems comprising a combination of excitatory and inhibitory units, and indicate that diversity-induced amplification can be harnessed by neuronal systems for evaluating stimulus intensities. |
1409.2031 | Jan P. Radomski Dr. | Ana Carolina Arcanjo, Giovanni Mazzocco, Silviene Fabiana de Oliveira,
Dariusz Plewczynski, and Jan P. Radomski | Role of the host genetic variability in the influenza-A virus
susceptibility; a review | 17 pages, 3 figures, 4 tables | null | null | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aftermath of influenza infection is determined by a complex set of
host-pathogen interactions, where genomic variability on both viral and host
sides influences the final outcome. Although there exists large body of
literature describing influenza virus variability, only a very small fraction
covers the issue of host variance. The goal of this review is to explore the
variability of host genes responsible for host-pathogen interactions, paying
particular attention to genes responsible for the presence of sialylated
glycans in the host endothelial membrane, mucus, genes used by viral immune
escape mechanisms, and genes particularly expressed after vaccination, since
they are more likely to have a direct influence on the infection outcome.
| [
{
"created": "Sat, 6 Sep 2014 16:51:33 GMT",
"version": "v1"
}
] | 2014-09-09 | [
[
"Arcanjo",
"Ana Carolina",
""
],
[
"Mazzocco",
"Giovanni",
""
],
[
"de Oliveira",
"Silviene Fabiana",
""
],
[
"Plewczynski",
"Dariusz",
""
],
[
"Radomski",
"Jan P.",
""
]
] | The aftermath of influenza infection is determined by a complex set of host-pathogen interactions, where genomic variability on both viral and host sides influences the final outcome. Although there exists a large body of literature describing influenza virus variability, only a very small fraction covers the issue of host variance. The goal of this review is to explore the variability of host genes responsible for host-pathogen interactions, paying particular attention to genes responsible for the presence of sialylated glycans in the host endothelial membrane, mucus, genes used by viral immune escape mechanisms, and genes particularly expressed after vaccination, since they are more likely to have a direct influence on the infection outcome. |
0711.0916 | Pedro Ojeda MSC | Pedro Ojeda, Aurora Londono, Nan-Yow Chen, Martin Garcia | Monte Carlo simulations of proteins in cages: influence of confinement
on the stability of intermediate states | 27 pages, 9 figures | null | null | null | q-bio.BM q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a theoretical study of the folding of small proteins inside
confining potentials. Proteins are described in the framework of an effective
potential model which contains the Ramachandran angles as degrees of freedom
and does not need any {\it a priori} information about the native state.
Hydrogen bonds, dipole-dipole- and hydrophobic interactions are taken
explicitly into account. An interesting feature displayed by this potential is
the presence of some intermediates between the unfolded and native states. We
consider different types of confining potentials in order to study the
structural properties of proteins folding inside cages with repulsive or
attractive walls. Using the Wang-Landau algorithm we determine the density of
states (DOS) and analyze in detail the thermodynamical properties of the
confined proteins for different sizes of the cages. We show that confinement
dramatically reduces the phase space available to the protein and that the
presence of intermediate states can be controlled by varying the properties of
the confining potential. Cages with strongly attractive walls lead to the
disappearance of the intermediate states and to a two-state folding into a less
stable configuration. However, cages with slightly attractive walls make the
native structure more stable than in the case of pure repulsive potentials, and
the folding process occurs through intermediate configurations. In order to
test the metastable states we analyze the free energy landscapes as a function
of the configurational energy and of the end-to-end distance as an order
parameter.
| [
{
"created": "Tue, 6 Nov 2007 16:12:03 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Aug 2008 08:38:30 GMT",
"version": "v2"
}
] | 2008-08-04 | [
[
"Ojeda",
"Pedro",
""
],
[
"Londono",
"Aurora",
""
],
[
"Chen",
"Nan-Yow",
""
],
[
"Garcia",
"Martin",
""
]
] | We present a theoretical study of the folding of small proteins inside confining potentials. Proteins are described in the framework of an effective potential model which contains the Ramachandran angles as degrees of freedom and does not need any {\it a priori} information about the native state. Hydrogen bonds, dipole-dipole- and hydrophobic interactions are taken explicitly into account. An interesting feature displayed by this potential is the presence of some intermediates between the unfolded and native states. We consider different types of confining potentials in order to study the structural properties of proteins folding inside cages with repulsive or attractive walls. Using the Wang-Landau algorithm we determine the density of states (DOS) and analyze in detail the thermodynamical properties of the confined proteins for different sizes of the cages. We show that confinement dramatically reduces the phase space available to the protein and that the presence of intermediate states can be controlled by varying the properties of the confining potential. Cages with strongly attractive walls lead to the disappearance of the intermediate states and to a two-state folding into a less stable configuration. However, cages with slightly attractive walls make the native structure more stable than in the case of pure repulsive potentials, and the folding process occurs through intermediate configurations. In order to test the metastable states we analyze the free energy landscapes as a function of the configurational energy and of the end-to-end distance as an order parameter. |
2109.01488 | Omar El Deeb Dr. | Omar El Deeb and Maya Jalloul | Efficacy versus abundancy: Comparing vaccination schemes | 17 pages, 8 figures | PLoS ONE 17-5 (2022):e0267840 | 10.1371/journal.pone.0267840 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | We introduce a novel compartmental model accounting for the effects of
vaccine efficacy, deployment rates and timing of initiation of deployment. We
simulate different scenarios and initial conditions, and we find that higher
abundancy and rate of deployment of low efficacy vaccines lowers the cumulative
number of deaths in comparison to slower deployment of high efficacy vaccines.
We also forecast that, at the same daily deployment rate, the earlier
introduction of vaccination schemes with lower efficacy would also lower the
number of deaths with respect to a delayed introduction of high efficacy
vaccines, which can however, still achieve lower numbers of infections and
better herd immunity.
| [
{
"created": "Tue, 31 Aug 2021 16:47:55 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Jun 2022 05:23:18 GMT",
"version": "v2"
}
] | 2022-06-28 | [
[
"Deeb",
"Omar El",
""
],
[
"Jalloul",
"Maya",
""
]
] | We introduce a novel compartmental model accounting for the effects of vaccine efficacy, deployment rates and timing of initiation of deployment. We simulate different scenarios and initial conditions, and we find that higher abundancy and rate of deployment of low efficacy vaccines lowers the cumulative number of deaths in comparison to slower deployment of high efficacy vaccines. We also forecast that, at the same daily deployment rate, the earlier introduction of vaccination schemes with lower efficacy would also lower the number of deaths with respect to a delayed introduction of high efficacy vaccines, which can however, still achieve lower numbers of infections and better herd immunity. |
2005.04142 | Attilio Vittorio Vargiu | Andrea Basciu, Panagiotis I. Koukos, Giuliano Malloci, Alexandre M. J.
J. Bonvin and Attilio V. Vargiu | Coupling enhanced sampling of the apo-receptor with template-based
ligand conformers selection: Performance in pose prediction in the D3R-GC4 | null | Journal of Computer-Aided Molecular Design 34, 149-162 (2020) | 10.1007/s10822-019-00244-6 | null | q-bio.BM cond-mat.soft | http://creativecommons.org/licenses/by/4.0/ | We report the performance of our newly introduced Ensemble Docking with
Enhanced sampling of pocket Shape (EDES) protocol coupled to a template-based
algorithm to generate near-native ligand conformations in the 2019 iteration of
the Grand Challenge organized by the D3R consortium. Using either AutoDock4.2
or HADDOCK2.2 docking programs (each software in two variants of the protocol)
our method generated native-like poses among the top 5 submitted for evaluation
for most of the 20 targets with similar performances. The protein selected for
GC4 was the human beta-site amyloid precursor protein cleaving enzyme 1
(BACE-1), a transmembrane aspartic-acid protease. We identified at least one
pose whose heavy-atoms RMSD was less than 2.5 {\AA} from the native
conformation for 16 (80%) and 17 (85%) of the twenty targets using AutoDock and
HADDOCK, respectively. Dissecting the possible sources of errors revealed that:
i) our EDES protocol (with minor modifications) was able to sample
sub-{\aa}ngstrom conformations for all 20 protein targets, reproducing the
correct conformation of the binding site within ~1 {\AA} RMSD; ii) as already
shown by some of us in GC3, even in the presence of near-native protein
structures, a proper selection of ligand conformers is crucial for the success
of ensemble-docking calculations. Importantly, our approach performed best
among the protocols exploiting only structural information of the apo protein
to generate conformations of the receptor for ensemble-docking calculations.
| [
{
"created": "Mon, 4 May 2020 19:29:39 GMT",
"version": "v1"
}
] | 2020-06-14 | [
[
"Basciu",
"Andrea",
""
],
[
"Koukos",
"Panagiotis I.",
""
],
[
"Malloci",
"Giuliano",
""
],
[
"Bonvin",
"Alexandre M. J. J.",
""
],
[
"Vargiu",
"Attilio V.",
""
]
] | We report the performance of our newly introduced Ensemble Docking with Enhanced sampling of pocket Shape (EDES) protocol coupled to a template-based algorithm to generate near-native ligand conformations in the 2019 iteration of the Grand Challenge organized by the D3R consortium. Using either AutoDock4.2 or HADDOCK2.2 docking programs (each software in two variants of the protocol) our method generated native-like poses among the top 5 submitted for evaluation for most of the 20 targets with similar performances. The protein selected for GC4 was the human beta-site amyloid precursor protein cleaving enzyme 1 (BACE-1), a transmembrane aspartic-acid protease. We identified at least one pose whose heavy-atoms RMSD was less than 2.5 {\AA} from the native conformation for 16 (80%) and 17 (85%) of the twenty targets using AutoDock and HADDOCK, respectively. Dissecting the possible sources of errors revealed that: i) our EDES protocol (with minor modifications) was able to sample sub-{\aa}ngstrom conformations for all 20 protein targets, reproducing the correct conformation of the binding site within ~1 {\AA} RMSD; ii) as already shown by some of us in GC3, even in the presence of near-native protein structures, a proper selection of ligand conformers is crucial for the success of ensemble-docking calculations. Importantly, our approach performed best among the protocols exploiting only structural information of the apo protein to generate conformations of the receptor for ensemble-docking calculations. |
2112.12278 | Xiaoyue Zhu | Xiaoyue Zhu, Jeffrey C. Erlich | A rodent paradigm for studying perceptual decisions under asymmetric
reward | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Many real-life decisions involve both perceptual processes and weighing the
consequences of different actions. However, the neural mechanisms underlying
perceptual decisions have typically been examined separately from those
underlying economic decisions. Here, we trained rats to make choices informed
by both perceptual and value cues on a trial-by-trial basis. As in typical
perceptual tasks, subjects were rewarded for correctly categorizing a tone
relative to a learned threshold. To add an economic component, a light
indicated, on each trial, whether correct responses to one side gave higher
rewards than correct responses to the other side. As such, on trials with some
perceptual uncertainty, it could be worthwhile to choose the unlikely option,
if it had higher expected value. We found that, despite subjects sensitivity to
the frequency of the cue and the reward sizes, their behavior was not optimal:
subjects tended to shift their choices in a stimulus-independent way following
light flashes. Moreover, subjects tended to under-shift, which could be
interpreted as being over-confident in their perceptual beliefs or as being
risk-averse.
| [
{
"created": "Wed, 22 Dec 2021 23:50:52 GMT",
"version": "v1"
}
] | 2021-12-24 | [
[
"Zhu",
"Xiaoyue",
""
],
[
"Erlich",
"Jeffrey C.",
""
]
] | Many real-life decisions involve both perceptual processes and weighing the consequences of different actions. However, the neural mechanisms underlying perceptual decisions have typically been examined separately from those underlying economic decisions. Here, we trained rats to make choices informed by both perceptual and value cues on a trial-by-trial basis. As in typical perceptual tasks, subjects were rewarded for correctly categorizing a tone relative to a learned threshold. To add an economic component, a light indicated, on each trial, whether correct responses to one side gave higher rewards than correct responses to the other side. As such, on trials with some perceptual uncertainty, it could be worthwhile to choose the unlikely option, if it had higher expected value. We found that, despite subjects' sensitivity to the frequency of the cue and the reward sizes, their behavior was not optimal: subjects tended to shift their choices in a stimulus-independent way following light flashes. Moreover, subjects tended to under-shift, which could be interpreted as being over-confident in their perceptual beliefs or as being risk-averse. |
2009.14293 | Sharon Chiang | Sharon Chiang, Ankit N. Khambhati, Emily T. Wang, Marina Vannucci,
Edward F. Chang, Vikram R. Rao | Evidence of state-dependence in the effectiveness of responsive
neurostimulation for seizure modulation | null | Brain Stimulation (2021); 14(2):366-375 | 10.1016/j.brs.2021.01.023 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | An implanted device for brain-responsive neurostimulation (RNS System) is
approved as an effective treatment to reduce seizures in adults with
medically-refractory focal epilepsy. Clinical trials of the RNS System
demonstrate population-level reduction in average seizure frequency, but
therapeutic response is highly variable. Recent evidence links seizures to
cyclical fluctuations in underlying risk. We tested the hypothesis that
effectiveness of responsive neurostimulation varies based on current state
within cyclical risk fluctuations. We analyzed retrospective data from 25
adults with medically-refractory focal epilepsy implanted with the RNS System.
Chronic electrocorticography was used to record electrographic seizures, and
hidden Markov models decoded seizures into fluctuations in underlying risk.
State-dependent associations of RNS System stimulation parameters with changes
in risk were estimated. Higher charge density was associated with improved
outcomes, both for remaining in a low seizure risk state and for transitioning
from a high to a low seizure risk state. The effect of stimulation frequency
depended on initial seizure risk state: when starting in a low risk state,
higher stimulation frequencies were associated with remaining in a low risk
state, but when starting in a high risk state, lower stimulation frequencies
were associated with transition to a low risk state. Findings were consistent
across bipolar and monopolar stimulation configurations. The impact of RNS on
seizure frequency exhibits state-dependence, such that stimulation parameters
which are effective in one seizure risk state may not be effective in another.
These findings represent conceptual advances in understanding the therapeutic
mechanism of RNS, and directly inform current practices of RNS tuning and the
development of next-generation neurostimulation systems.
| [
{
"created": "Tue, 29 Sep 2020 20:20:16 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Feb 2021 04:44:52 GMT",
"version": "v2"
}
] | 2021-02-22 | [
[
"Chiang",
"Sharon",
""
],
[
"Khambhati",
"Ankit N.",
""
],
[
"Wang",
"Emily T.",
""
],
[
"Vannucci",
"Marina",
""
],
[
"Chang",
"Edward F.",
""
],
[
"Rao",
"Vikram R.",
""
]
] | An implanted device for brain-responsive neurostimulation (RNS System) is approved as an effective treatment to reduce seizures in adults with medically-refractory focal epilepsy. Clinical trials of the RNS System demonstrate population-level reduction in average seizure frequency, but therapeutic response is highly variable. Recent evidence links seizures to cyclical fluctuations in underlying risk. We tested the hypothesis that effectiveness of responsive neurostimulation varies based on current state within cyclical risk fluctuations. We analyzed retrospective data from 25 adults with medically-refractory focal epilepsy implanted with the RNS System. Chronic electrocorticography was used to record electrographic seizures, and hidden Markov models decoded seizures into fluctuations in underlying risk. State-dependent associations of RNS System stimulation parameters with changes in risk were estimated. Higher charge density was associated with improved outcomes, both for remaining in a low seizure risk state and for transitioning from a high to a low seizure risk state. The effect of stimulation frequency depended on initial seizure risk state: when starting in a low risk state, higher stimulation frequencies were associated with remaining in a low risk state, but when starting in a high risk state, lower stimulation frequencies were associated with transition to a low risk state. Findings were consistent across bipolar and monopolar stimulation configurations. The impact of RNS on seizure frequency exhibits state-dependence, such that stimulation parameters which are effective in one seizure risk state may not be effective in another. These findings represent conceptual advances in understanding the therapeutic mechanism of RNS, and directly inform current practices of RNS tuning and the development of next-generation neurostimulation systems. |
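The record above states that "hidden Markov models decoded seizures into fluctuations in underlying risk." That decoding step can be sketched with a minimal two-state Viterbi decoder; all numbers below (transition, emission, and initial probabilities, and the binned seizure observations) are illustrative assumptions, not values from the study.

```python
import math

def viterbi(obs, A, B, pi):
    """Most likely hidden-state path for a discrete-emission HMM.

    obs : observation indices, length T
    A   : transition probabilities, A[i][j] = P(next=j | current=i)
    B   : emission probabilities,   B[i][k] = P(obs=k | state=i)
    pi  : initial-state probabilities
    """
    S = len(pi)
    # delta[j] = best log-probability of any state path ending in state j
    delta = [math.log(pi[j]) + math.log(B[j][obs[0]]) for j in range(S)]
    back = []  # back[t][j] = best predecessor of state j at step t+1
    for o in obs[1:]:
        prev = [max(range(S), key=lambda i: delta[i] + math.log(A[i][j]))
                for j in range(S)]
        delta = [delta[prev[j]] + math.log(A[prev[j]][j]) + math.log(B[j][o])
                 for j in range(S)]
        back.append(prev)
    # Backtrack from the best final state.
    path = [max(range(S), key=lambda j: delta[j])]
    for prev in reversed(back):
        path.append(prev[path[-1]])
    return path[::-1]

# Illustrative two-state "seizure risk" HMM: state 0 = low risk, 1 = high risk.
# Observations: recording epochs binned as 0 (no seizure) or 1 (seizure).
A = [[0.9, 0.1], [0.2, 0.8]]      # both risk states are sticky
B = [[0.95, 0.05], [0.2, 0.8]]    # seizures are much likelier in high risk
pi = [0.5, 0.5]
states = viterbi([0, 0, 1, 1, 1, 0, 0], A, B, pi)  # -> [0, 0, 1, 1, 1, 0, 0]
```

With these parameters the run of seizure epochs is decoded as a visit to the high-risk state, matching the intuition that risk fluctuates more slowly than individual events.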
1201.5204 | Baruch Meerson | Michael Khasin, Baruch Meerson, Evgeniy Khain and Leonard M. Sander | Minimizing the population extinction risk by migration | 7 pages, 3 figures, accepted for publication in PRL, appendix
contains supplementary material | Phys. Rev. Lett. 109, 138104 (2012) | 10.1103/PhysRevLett.109.138104 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many populations in nature are fragmented: they consist of local populations
occupying separate patches. A local population is prone to extinction due to
the shot noise of birth and death processes. A migrating population from
another patch can dramatically delay the extinction. What is the optimal
migration rate that minimizes the extinction risk of the whole population? Here
we answer this question for a connected network of model habitat patches with
different carrying capacities.
| [
{
"created": "Wed, 25 Jan 2012 08:21:29 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Apr 2012 01:49:05 GMT",
"version": "v2"
},
{
"created": "Tue, 21 Aug 2012 06:16:31 GMT",
"version": "v3"
},
{
"created": "Wed, 22 Aug 2012 06:46:08 GMT",
"version": "v4"
}
] | 2015-06-03 | [
[
"Khasin",
"Michael",
""
],
[
"Meerson",
"Baruch",
""
],
[
"Khain",
"Evgeniy",
""
],
[
"Sander",
"Leonard M.",
""
]
] | Many populations in nature are fragmented: they consist of local populations occupying separate patches. A local population is prone to extinction due to the shot noise of birth and death processes. A migrating population from another patch can dramatically delay the extinction. What is the optimal migration rate that minimizes the extinction risk of the whole population? Here we answer this question for a connected network of model habitat patches with different carrying capacities. |
1804.03190 | Hosein M. Golshan | Hosein M. Golshan, Adam O. Hebb, Joshua Nedrud, Mohammad H. Mahoor | Studying the Effects of Deep Brain Stimulation and Medication on the
Dynamics of STN-LFP Signals for Human Behavior Analysis | 40th IEEE International Conference on Engineering in Medicine and
Biology (IEEE EMBC), Honolulu, Hawaii, July 17-21, 2018 | null | null | null | q-bio.NC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the results of our recent work on studying the effects of
deep brain stimulation (DBS) and medication on the dynamics of brain local
field potential (LFP) signals used for behavior analysis of patients with
Parkinson's disease (PD). DBS is a technique used to alleviate the severe
symptoms of PD when pharmacotherapy is not very effective. Behavior recognition
from the LFP signals recorded from the subthalamic nucleus (STN) has
application in developing closed-loop DBS systems, where the stimulation pulse
is adaptively generated according to the subject's ongoing behavior. Most of the
existing studies on behavior recognition that use STN-LFPs are based on the DBS
being off. This paper investigates how the performance and accuracy of automated
behavior recognition from the LFP signals are affected under different
paradigms of stimulation on/off. We first study the notion of beta power
suppression in LFP signals under different scenarios (stimulation on/off and
medication on/off). Afterward, we explore the accuracy of support vector
machines in predicting human actions (button press and reach) using the
spectrogram of STN-LFP signals. Our experiments on the recorded LFP signals of
three subjects confirm that the beta power is suppressed significantly when the
patients take medication (p-value<0.002) or stimulation (p-value<0.0003). The
results also show that we can classify different behaviors with a reasonable
accuracy of 85% even when high-amplitude stimulation is applied.
| [
{
"created": "Mon, 9 Apr 2018 19:10:31 GMT",
"version": "v1"
}
] | 2018-04-11 | [
[
"Golshan",
"Hosein M.",
""
],
[
"Hebb",
"Adam O.",
""
],
[
"Nedrud",
"Joshua",
""
],
[
"Mahoor",
"Mohammad H.",
""
]
] | This paper presents the results of our recent work on studying the effects of deep brain stimulation (DBS) and medication on the dynamics of brain local field potential (LFP) signals used for behavior analysis of patients with Parkinson's disease (PD). DBS is a technique used to alleviate the severe symptoms of PD when pharmacotherapy is not very effective. Behavior recognition from the LFP signals recorded from the subthalamic nucleus (STN) has application in developing closed-loop DBS systems, where the stimulation pulse is adaptively generated according to the subject's ongoing behavior. Most of the existing studies on behavior recognition that use STN-LFPs are based on the DBS being off. This paper investigates how the performance and accuracy of automated behavior recognition from the LFP signals are affected under different paradigms of stimulation on/off. We first study the notion of beta power suppression in LFP signals under different scenarios (stimulation on/off and medication on/off). Afterward, we explore the accuracy of support vector machines in predicting human actions (button press and reach) using the spectrogram of STN-LFP signals. Our experiments on the recorded LFP signals of three subjects confirm that the beta power is suppressed significantly when the patients take medication (p-value<0.002) or stimulation (p-value<0.0003). The results also show that we can classify different behaviors with a reasonable accuracy of 85% even when high-amplitude stimulation is applied. |
q-bio/0702025 | Chunguang Li | Chunguang Li, Luonan Chen, Kazuyuki Aihara | Stochastic Synchronization of Genetic Oscillator Networks | 14 pages, 4 figures | BMC Systems Biology, Vol.1, article no.6, 2007 | 10.1186/1752-0509-1-6 | null | q-bio.MN cond-mat.dis-nn nlin.CD | null | The study of synchronization among genetic oscillators is essential for the
understanding of the rhythmic phenomena of living organisms at both molecular
and cellular levels. Genetic networks are intrinsically noisy due to natural
random intra- and inter-cellular fluctuations. Therefore, it is important to
study the effects of noise perturbation on the synchronous dynamics of genetic
oscillators. From the synthetic biology viewpoint, it is also important to
implement biological systems that minimize the negative influence of the
perturbations. In this paper, based on a systems biology approach, we provide a
general theoretical result on the synchronization of genetic oscillators with
stochastic perturbations. By exploiting the specific properties of many genetic
oscillator models, we provide an easily verified sufficient condition for the
stochastic synchronization of coupled genetic oscillators, based on the Lur'e
system approach in control theory. A design principle for minimizing the
influence of noise is also presented. To demonstrate the effectiveness of our
theoretical results, a population of coupled repressilators is adopted as a
numerical example. In summary, we present an efficient theoretical method for
analyzing the synchronization of genetic oscillator networks, which is helpful
for understanding and testing the synchronization phenomena in biological
organisms. Moreover, the results are applicable to general oscillator
networks.
| [
{
"created": "Sun, 11 Feb 2007 12:22:44 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Li",
"Chunguang",
""
],
[
"Chen",
"Luonan",
""
],
[
"Aihara",
"Kazuyuki",
""
]
] | The study of synchronization among genetic oscillators is essential for the understanding of the rhythmic phenomena of living organisms at both molecular and cellular levels. Genetic networks are intrinsically noisy due to natural random intra- and inter-cellular fluctuations. Therefore, it is important to study the effects of noise perturbation on the synchronous dynamics of genetic oscillators. From the synthetic biology viewpoint, it is also important to implement biological systems that minimize the negative influence of the perturbations. In this paper, based on a systems biology approach, we provide a general theoretical result on the synchronization of genetic oscillators with stochastic perturbations. By exploiting the specific properties of many genetic oscillator models, we provide an easily verified sufficient condition for the stochastic synchronization of coupled genetic oscillators, based on the Lur'e system approach in control theory. A design principle for minimizing the influence of noise is also presented. To demonstrate the effectiveness of our theoretical results, a population of coupled repressilators is adopted as a numerical example. In summary, we present an efficient theoretical method for analyzing the synchronization of genetic oscillator networks, which is helpful for understanding and testing the synchronization phenomena in biological organisms. Moreover, the results are applicable to general oscillator networks. |
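The record above concerns synchronization of noisy coupled oscillators. As a stand-in illustration (a Kuramoto-type phase model, not the repressilator model or the Lur'e-system analysis of the paper), the sketch below integrates noisy mean-field-coupled phase oscillators with Euler-Maruyama and shows that coupling raises the order parameter despite noise; all parameters are illustrative.

```python
import math
import random

def order_parameter(theta):
    """Kuramoto order parameter r in [0, 1]; r near 1 means phase-synchronized."""
    n = len(theta)
    return math.hypot(sum(math.cos(t) for t in theta) / n,
                      sum(math.sin(t) for t in theta) / n)

def simulate(coupling, n=10, steps=4000, dt=0.01, noise=0.2, seed=1):
    """Euler-Maruyama integration of n noisy phase oscillators:
    dtheta_i = [omega + (K/n) * sum_j sin(theta_j - theta_i)] dt + noise dW_i."""
    rng = random.Random(seed)
    theta = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n)]
    omega = 1.0  # common natural frequency
    for _ in range(steps):
        ms = sum(math.sin(t) for t in theta) / n
        mc = sum(math.cos(t) for t in theta) / n
        # (1/n) * sum_j sin(theta_j - theta_i) = ms*cos(theta_i) - mc*sin(theta_i)
        theta = [t + dt * (omega + coupling * (ms * math.cos(t) - mc * math.sin(t)))
                 + noise * math.sqrt(dt) * rng.gauss(0.0, 1.0)
                 for t in theta]
    return order_parameter(theta)

r_uncoupled = simulate(coupling=0.0)
r_coupled = simulate(coupling=2.0)
```

With coupling strength well above the noise level, the coupled population ends far more phase-coherent than the uncoupled one.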
1605.09312 | Jeffrey Herron | Jeffrey Herron, Anca Velisar, Mahsa Malekmohammadi, Helen
Bronte-Stewart, Howard Jay Chizeck | A Metric for Evaluating and Comparing Closed-Loop Deep Brain Stimulation
Algorithms | 9 pages, 3 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: Closed-loop deep brain stimulation (DBS) may improve current
clinical DBS treatment for neurological movement disorders, but control
algorithms may perform differently across patients. New metrics are needed for
comparing and evaluating closed-loop algorithm performance that address the
specific needs of closed-loop neuromodulation controllers. Approach: A metric
is proposed for system performance that includes normalized terms that can be
used to compare algorithm performance for a patient. This metric was evaluated
using two closed-loop control algorithms that were tested in patients with
Parkinson's Disease (PD) who experience rest tremor. Main Results: The metric's
resulting balance between tremor treatment and power savings varied on a per
patient and algorithm basis. This was expected given how each trial resulted in
a variable reduction in stimulation power at the cost of additional tremor for
the patient when compared to open-loop stimulation. Significance: The proposed
metric will aid in clinical evaluation of new algorithms and provide a
benchmark for future system designers. This will be important given the growing
potential applications of dynamically adjusted neural stimulation.
ClinicalTrials.gov Identifier: NCT02384421.
| [
{
"created": "Thu, 19 May 2016 18:15:24 GMT",
"version": "v1"
}
] | 2016-05-31 | [
[
"Herron",
"Jeffrey",
""
],
[
"Velisar",
"Anca",
""
],
[
"Malekmohammadi",
"Mahsa",
""
],
[
"Bronte-Stewart",
"Helen",
""
],
[
"Chizeck",
"Howard Jay",
""
]
] | Objective: Closed-loop deep brain stimulation (DBS) may improve current clinical DBS treatment for neurological movement disorders, but control algorithms may perform differently across patients. New metrics are needed for comparing and evaluating closed-loop algorithm performance that address the specific needs of closed-loop neuromodulation controllers. Approach: A metric is proposed for system performance that includes normalized terms that can be used to compare algorithm performance for a patient. This metric was evaluated using two closed-loop control algorithms that were tested in patients with Parkinson's Disease (PD) who experience rest tremor. Main Results: The metric's resulting balance between tremor treatment and power savings varied on a per patient and algorithm basis. This was expected given how each trial resulted in a variable reduction in stimulation power at the cost of additional tremor for the patient when compared to open-loop stimulation. Significance: The proposed metric will aid in clinical evaluation of new algorithms and provide a benchmark for future system designers. This will be important given the growing potential applications of dynamically adjusted neural stimulation. ClinicalTrials.gov Identifier: NCT02384421. |
1604.00828 | Georg Meisl | Georg Meisl, Xiaoting Yang, Christopher M. Dobson, Sara Linse and
Tuomas P. J. Knowles | A general reaction network unifies the aggregation behaviour of the
A$\beta$42 peptide and its variants | null | null | 10.1039/C7SC00215G | null | q-bio.MN q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The amyloid $\beta$ peptide (A$\beta$42), whose aggregation is associated
with Alzheimer's disease, is an amphipathic peptide with a high propensity to
self-assemble. A$\beta$42 has a net negative charge at physiological pH and
modulations of intermolecular electrostatic interactions can significantly
alter its aggregation behaviour. Variations in sequence and solution conditions
lead to varied macroscopic behaviour, often resulting in a number of different
mechanistic explanations for the aggregation of these closely related systems.
Here we alter the electrostatic interactions governing the fibril aggregation
kinetics by varying the ionic strength over an order of magnitude, which allows
us to sample the space of different reaction mechanisms, and develop a minimal
reaction network that explains the experimental kinetics under all the
different conditions. We find that an increase in the ionic strength leads to
an increased rate of surface catalysed nucleation over fragmentation and
eventually to a saturation of this nucleation process. More generally, this
reaction network connects previously separate systems, such as mutants of
A$\beta$42 and the wild type, on a continuous mechanistic landscape, thereby
providing a unified picture of the aggregation mechanism of A$\beta$42 and the
means of directly comparing the effects of intrinsic modifications of the
peptide to those of simple electrostatic shielding.
| [
{
"created": "Mon, 4 Apr 2016 12:13:33 GMT",
"version": "v1"
}
] | 2020-08-25 | [
[
"Meisl",
"Georg",
""
],
[
"Yang",
"Xiaoting",
""
],
[
"Dobson",
"Christopher M.",
""
],
[
"Linse",
"Sara",
""
],
[
"Knowles",
"Tuomas P. J.",
""
]
] | The amyloid $\beta$ peptide (A$\beta$42), whose aggregation is associated with Alzheimer's disease, is an amphipathic peptide with a high propensity to self-assemble. A$\beta$42 has a net negative charge at physiological pH and modulations of intermolecular electrostatic interactions can significantly alter its aggregation behaviour. Variations in sequence and solution conditions lead to varied macroscopic behaviour, often resulting in a number of different mechanistic explanations for the aggregation of these closely related systems. Here we alter the electrostatic interactions governing the fibril aggregation kinetics by varying the ionic strength over an order of magnitude, which allows us to sample the space of different reaction mechanisms, and develop a minimal reaction network that explains the experimental kinetics under all the different conditions. We find that an increase in the ionic strength leads to an increased rate of surface catalysed nucleation over fragmentation and eventually to a saturation of this nucleation process. More generally, this reaction network connects previously separate systems, such as mutants of A$\beta$42 and the wild type, on a continuous mechanistic landscape, thereby providing a unified picture of the aggregation mechanism of A$\beta$42 and the means of directly comparing the effects of intrinsic modifications of the peptide to those of simple electrostatic shielding. |
1807.10269 | Shennan Weiss | Zachary J. Waldman, Liliana Camarillo-Rodriguez, Inna Chervenova,
Brent Berry, Shoichi Shimamoto, Bahareh Elahian, Michal Kucewicz, Chaitanya
Ganne, Xiao-Song He, Leon A. Davis, Joel Stein, Sandhitsu Das, Richard
Gorniak, Ashwini D. Sharan, Robert Gross, Cory S. Inman, Bradley C. Lega,
Kareem Zaghloul, Barbara C. Jobst, Katheryn A. Davis, Paul Wanda, Mehraneh
Khadjevand, Joseph Tracy, Daniel S. Rizzuto, Gregory Worrell, Michael
Sperling, Shennan A. Weiss | Ripple oscillations in the left temporal neocortex are associated with
impaired verbal episodic memory encoding | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: We sought to determine if ripple oscillations (80-120Hz),
detected in intracranial EEG (iEEG) recordings of epilepsy patients, correlate
with an enhancement or disruption of verbal episodic memory encoding. Methods:
We defined ripple and spike events in depth iEEG recordings during list
learning in 107 patients with focal epilepsy. We used logistic regression
models (LRMs) to investigate the relationship between the occurrence of ripple
and spike events during word presentation and the odds of successful word
recall following a distractor epoch, and included the seizure onset zone (SOZ)
as a covariate in the LRMs. Results: We detected events during 58,312 word
presentation trials from 7,630 unique electrode sites. The probability of
ripple on spike (RonS) events was increased in the seizure onset zone (SOZ,
p<0.04). In the left temporal neocortex RonS events during word presentation
corresponded with a decrease in the odds ratio (OR) of successful recall;
however, this effect only met significance in the SOZ (OR of word recall 0.71,
95% CI: 0.59-0.85, n=158 events, adaptive Hochberg p<0.01). Ripple on
oscillation events (RonO) that occurred in the left temporal neocortex non-SOZ
also correlated with decreased odds of successful recall (OR 0.52, 95% CI:
0.34-0.80, n=140, adaptive Hochberg, p<0.01). Spikes and RonS that occurred
during word presentation in the left middle temporal gyrus correlated with the
most significant decrease in the odds of
successful recall, irrespective of the location of the SOZ (adaptive Hochberg,
p<0.01). Conclusion: Ripples and spikes generated in left temporal neocortex
are associated with impaired verbal episodic memory encoding.
| [
{
"created": "Thu, 26 Jul 2018 17:57:14 GMT",
"version": "v1"
}
] | 2018-07-27 | [
[
"Waldman",
"Zachary J.",
""
],
[
"Camarillo-Rodriguez",
"Liliana",
""
],
[
"Chervenova",
"Inna",
""
],
[
"Berry",
"Brent",
""
],
[
"Shimamoto",
"Shoichi",
""
],
[
"Elahian",
"Bahareh",
""
],
[
"Kucewicz",
"Mich... | Background: We sought to determine if ripple oscillations (80-120Hz), detected in intracranial EEG (iEEG) recordings of epilepsy patients, correlate with an enhancement or disruption of verbal episodic memory encoding. Methods: We defined ripple and spike events in depth iEEG recordings during list learning in 107 patients with focal epilepsy. We used logistic regression models (LRMs) to investigate the relationship between the occurrence of ripple and spike events during word presentation and the odds of successful word recall following a distractor epoch, and included the seizure onset zone (SOZ) as a covariate in the LRMs. Results: We detected events during 58,312 word presentation trials from 7,630 unique electrode sites. The probability of ripple on spike (RonS) events was increased in the seizure onset zone (SOZ, p<0.04). In the left temporal neocortex RonS events during word presentation corresponded with a decrease in the odds ratio (OR) of successful recall; however, this effect only met significance in the SOZ (OR of word recall 0.71, 95% CI: 0.59-0.85, n=158 events, adaptive Hochberg p<0.01). Ripple on oscillation events (RonO) that occurred in the left temporal neocortex non-SOZ also correlated with decreased odds of successful recall (OR 0.52, 95% CI: 0.34-0.80, n=140, adaptive Hochberg, p<0.01). Spikes and RonS that occurred during word presentation in the left middle temporal gyrus correlated with the most significant decrease in the odds of successful recall, irrespective of the location of the SOZ (adaptive Hochberg, p<0.01). Conclusion: Ripples and spikes generated in left temporal neocortex are associated with impaired verbal episodic memory encoding. |
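The logistic regression models (LRMs) in the record above report odds ratios with 95% confidence intervals; the conversion from a fitted coefficient to an OR and CI is simple exponentiation. The coefficient and standard error below are hypothetical values chosen only so the output lands near the reported OR of 0.71 (95% CI 0.59-0.85); they are not taken from the study.

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% confidence interval from a logistic-regression
    coefficient `beta` with standard error `se`."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Hypothetical coefficient for "RonS event during word presentation":
or_est, ci_lo, ci_hi = odds_ratio_ci(beta=-0.34, se=0.09)
```

An OR below 1 with a CI excluding 1 corresponds to a significant decrease in the odds of successful recall.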
2304.05068 | Anatole Jimenez | Anatole Jimenez (PhysMed Paris), Bruno Osmanski, Denis Vivien (PhIND),
Mickael Tanter (PhysMed Paris), Thomas Gaberel (PhIND), Thomas Deffieux
(PhysMed Paris) | Toward Whole-Brain Minimally-Invasive Vascular Imaging | null | null | null | null | q-bio.NC eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Imaging the brain vasculature can be critical for cerebral perfusion
monitoring in the context of neurocritical care. Although ultrasensitive
Doppler (UD) can provide good sensitivity to cerebral blood volume (CBV) in a
large field of view, it remains difficult to perform through the skull. In this
work, we investigate how a minimally invasive burr hole, performed for
intracranial pressure (ICP) monitoring, could be used to map the entire brain
vascular tree. We explored the use of a small motorized phased array probe with
a non-implantable preclinical prototype in pigs. The scan duration (18 min) and
coverage (62 $\pm$ 12 % of the brain) obtained allowed detection of global CBV
variations (relative in-brain Doppler decrease = -3 [-4, +16]% \& increase =
+1 [-3, +15]%, n = 6 \& 5) and stroke detection (relative in-core Doppler for
stroke = -25%, n = 1). This technology could one day be miniaturized to be implanted
for brain perfusion monitoring in neurocritical care.
| [
{
"created": "Tue, 11 Apr 2023 09:03:20 GMT",
"version": "v1"
}
] | 2023-04-12 | [
[
"Jimenez",
"Anatole",
"",
"PhysMed Paris"
],
[
"Osmanski",
"Bruno",
"",
"PhIND"
],
[
"Vivien",
"Denis",
"",
"PhIND"
],
[
"Tanter",
"Mickael",
"",
"PhysMed Paris"
],
[
"Gaberel",
"Thomas",
"",
"PhIND"
],
[
"Deff... | Imaging the brain vasculature can be critical for cerebral perfusion monitoring in the context of neurocritical care. Although ultrasensitive Doppler (UD) can provide good sensitivity to cerebral blood volume (CBV) in a large field of view, it remains difficult to perform through the skull. In this work, we investigate how a minimally invasive burr hole, performed for intracranial pressure (ICP) monitoring, could be used to map the entire brain vascular tree. We explored the use of a small motorized phased array probe with a non-implantable preclinical prototype in pigs. The scan duration (18 min) and coverage (62 $\pm$ 12 % of the brain) obtained allowed detection of global CBV variations (relative in-brain Doppler decrease = -3 [-4, +16]% \& increase = +1 [-3, +15]%, n = 6 \& 5) and stroke detection (relative in-core Doppler for stroke = -25%, n = 1). This technology could one day be miniaturized to be implanted for brain perfusion monitoring in neurocritical care. |
1710.09879 | Elias Kouskoumvekakis | Elias S. Manolakos, Elias Kouskoumvekakis | StochSoCs: High performance biocomputing simulations for large scale
Systems Biology | The 2017 International Conference on High Performance Computing &
Simulation (HPCS 2017), 8 pages | null | 10.1109/HPCS.2017.156 | null | q-bio.MN cs.CE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The stochastic simulation of large-scale biochemical reaction networks is of
great importance for systems biology since it enables the study of inherently
stochastic biological mechanisms at the whole cell scale. Stochastic Simulation
Algorithms (SSA) allow us to simulate the dynamic behavior of complex kinetic
models, but their high computational cost makes them very slow for many
realistic size problems. We present a pilot service, named WebStoch, developed
in the context of our StochSoCs research project, allowing life scientists with
no high-performance computing expertise to perform over the internet stochastic
simulations of large-scale biological network models described in the SBML
standard format. Biomodels submitted to the service are parsed automatically
and then placed for parallel execution on distributed worker nodes. The workers
are implemented using multi-core and many-core processors, or FPGA accelerators
that can handle the simulation of thousands of stochastic repetitions of
complex biomodels, with possibly thousands of reactions and interacting
species. Using benchmark LCSE biomodels, whose workload can be scaled on
demand, we demonstrate linear speedup and more than two orders of magnitude
higher throughput than existing serial simulators.
| [
{
"created": "Thu, 26 Oct 2017 19:16:29 GMT",
"version": "v1"
}
] | 2017-10-30 | [
[
"Manolakos",
"Elias S.",
""
],
[
"Kouskoumvekakis",
"Elias",
""
]
] | The stochastic simulation of large-scale biochemical reaction networks is of great importance for systems biology since it enables the study of inherently stochastic biological mechanisms at the whole cell scale. Stochastic Simulation Algorithms (SSA) allow us to simulate the dynamic behavior of complex kinetic models, but their high computational cost makes them very slow for many realistic size problems. We present a pilot service, named WebStoch, developed in the context of our StochSoCs research project, allowing life scientists with no high-performance computing expertise to perform over the internet stochastic simulations of large-scale biological network models described in the SBML standard format. Biomodels submitted to the service are parsed automatically and then placed for parallel execution on distributed worker nodes. The workers are implemented using multi-core and many-core processors, or FPGA accelerators that can handle the simulation of thousands of stochastic repetitions of complex biomodels, with possibly thousands of reactions and interacting species. Using benchmark LCSE biomodels, whose workload can be scaled on demand, we demonstrate linear speedup and more than two orders of magnitude higher throughput than existing serial simulators. |
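The Stochastic Simulation Algorithms (SSA) that the service in the record above parallelizes can be sketched with Gillespie's direct method. The two-reaction birth-death model below is an illustrative stand-in, not one of the benchmark LCSE biomodels mentioned in the record.

```python
import random

def gillespie_direct(x0, rates, stoich, t_end, seed=0):
    """Gillespie's direct-method SSA for a well-mixed reaction system.

    x0     : initial copy numbers, one per species
    rates  : propensity functions, each mapping the state to a rate
    stoich : state-change vectors, one per reaction
    """
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    history = [(t, tuple(x))]
    while t < t_end:
        a = [r(x) for r in rates]
        a0 = sum(a)
        if a0 == 0.0:             # absorbing state: no reaction can fire
            break
        t += rng.expovariate(a0)  # exponential waiting time to next event
        u, pick, acc = rng.random() * a0, 0, a[0]
        while acc < u:            # choose reaction j with probability a_j / a0
            pick += 1
            acc += a[pick]
        x = [xi + d for xi, d in zip(x, stoich[pick])]
        history.append((t, tuple(x)))
    return history

# Illustrative birth-death model: 0 -> X at rate k; X -> 0 at rate g per copy.
k, g = 10.0, 0.1
traj = gillespie_direct(
    x0=[0],
    rates=[lambda x: k, lambda x: g * x[0]],
    stoich=[(1,), (-1,)],
    t_end=50.0,
)
```

Independent repetitions of exactly this loop (with different seeds) are the embarrassingly parallel workload that multi-core, many-core, and FPGA workers can distribute.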
q-bio/0411002 | Jonathan Coe | J.B.Coe, Y.Mao and M.E.Cates | Solvable senescence model with positive mutations | 3 figures | Physical Review E 70, 021907 (2004) | 10.1103/PhysRevE.70.021907 | null | q-bio.PE | null | We build upon our previous analytical results for the Penna model of
senescence to include positive mutations. We investigate whether a small but
non-zero positive mutation rate gives qualitatively different results to the
traditional Penna model in which no positive mutations are considered. We find
that the high-lifespan tail of the distribution is radically changed in
structure, but that there is not much effect on the bulk of the population. The
mortality plateau that we found previously for a stochastic generalization of
the Penna model is stable to a small positive mutation rate.
| [
{
"created": "Mon, 1 Nov 2004 12:22:51 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Coe",
"J. B.",
""
],
[
"Mao",
"Y.",
""
],
[
"Cates",
"M. E.",
""
]
] | We build upon our previous analytical results for the Penna model of senescence to include positive mutations. We investigate whether a small but non-zero positive mutation rate gives qualitatively different results to the traditional Penna model in which no positive mutations are considered. We find that the high-lifespan tail of the distribution is radically changed in structure, but that there is not much effect on the bulk of the population. The mortality plateau that we found previously for a stochastic generalization of the Penna model is stable to a small positive mutation rate. |
1505.07047 | Rebekah Rogers | Rebekah L. Rogers | Chromosomal rearrangements as barriers to genetic homogenization between
archaic and modern humans | null | null | 10.1093/molbev/msv204 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chromosomal rearrangements, which shuffle DNA throughout the genome, are an
important source of divergence across taxa. Using a paired-end read approach
with Illumina sequence data for archaic humans, I identify changes in genome
structure that occurred recently in human evolution. Hundreds of rearrangements
indicate genomic trafficking between the sex chromosomes and autosomes, raising
the possibility of sex-specific changes. Additionally, genes adjacent to genome
structure changes in Neanderthals are associated with testis-specific
expression, consistent with evolutionary theory that new genes commonly form
with expression in the testes. I identify one case of new-gene creation through
transposition from the Y chromosome to chromosome 10 that combines the 5' end
of the testis-specific gene Fank1 with previously untranscribed sequence. This
new transcript experienced copy number expansion in archaic genomes, indicating
rapid genomic change. Among rearrangements identified in Neanderthals, 13% are
transposition of selfish genetic elements, while 32% appear to be ectopic
exchange between repeats. In Denisovan, the pattern is similar but numbers are
significantly higher with 18% of rearrangements reflecting transposition and
40% ectopic exchange between distantly related repeats. There is an excess of
divergent rearrangements relative to polymorphism in Denisovan, which might
result from non-uniform rates of mutation, possibly reflecting a burst of TE
activity in the lineage that led to Denisovan. Finally, loci containing genome
structure changes show diminished rates of introgression from Neanderthals into
modern humans, consistent with the hypothesis that rearrangements serve as
barriers to gene flow during hybridization. Together, these results suggest
that this previously unidentified source of genomic variation has important
biological consequences in human evolution.
| [
{
"created": "Tue, 26 May 2015 17:06:31 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Aug 2015 22:00:03 GMT",
"version": "v2"
},
{
"created": "Mon, 14 Sep 2015 17:45:08 GMT",
"version": "v3"
}
] | 2015-10-07 | [
[
"Rogers",
"Rebekah L.",
""
]
] | Chromosomal rearrangements, which shuffle DNA throughout the genome, are an important source of divergence across taxa. Using a paired-end read approach with Illumina sequence data for archaic humans, I identify changes in genome structure that occurred recently in human evolution. Hundreds of rearrangements indicate genomic trafficking between the sex chromosomes and autosomes, raising the possibility of sex-specific changes. Additionally, genes adjacent to genome structure changes in Neanderthals are associated with testis-specific expression, consistent with evolutionary theory that new genes commonly form with expression in the testes. I identify one case of new-gene creation through transposition from the Y chromosome to chromosome 10 that combines the 5' end of the testis-specific gene Fank1 with previously untranscribed sequence. This new transcript experienced copy number expansion in archaic genomes, indicating rapid genomic change. Among rearrangements identified in Neanderthals, 13% are transposition of selfish genetic elements, while 32% appear to be ectopic exchange between repeats. In Denisovan, the pattern is similar but numbers are significantly higher with 18% of rearrangements reflecting transposition and 40% ectopic exchange between distantly related repeats. There is an excess of divergent rearrangements relative to polymorphism in Denisovan, which might result from non-uniform rates of mutation, possibly reflecting a burst of TE activity in the lineage that led to Denisovan. Finally, loci containing genome structure changes show diminished rates of introgression from Neanderthals into modern humans, consistent with the hypothesis that rearrangements serve as barriers to gene flow during hybridization. Together, these results suggest that this previously unidentified source of genomic variation has important biological consequences in human evolution. |
1701.01311 | Demian Wassermann | Nahuel Lascano (ATHENA), Guillermo Gallardo (ATHENA), Rachid Deriche
(ATHENA), Dorian Mazauric (ABS), Demian Wassermann (ATHENA) | Extracting the Groupwise Core Structural Connectivity Network: Bridging
Statistical and Graph-Theoretical Approaches | null | null | null | null | q-bio.NC cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Finding the common structural brain connectivity network for a given
population is an open problem, crucial for current neuroscience. Recent
evidence suggests there's a tightly connected network shared between humans.
Obtaining this network will, among many advantages, allow us to focus
cognitive and clinical analyses on common connections, thus increasing their
statistical power. In turn, knowledge about the common network will facilitate
novel analyses to understand the structure-function relationship in the brain.
In this work, we present a new algorithm for computing the core structural
connectivity network of a subject sample combining graph theory and statistics.
Our algorithm works in accordance with novel evidence on brain topology. We
analyze the problem theoretically and prove its complexity. Using 309 subjects,
we show its advantages when used as a feature selection for connectivity
analysis on populations, outperforming the current approaches.
| [
{
"created": "Thu, 5 Jan 2017 13:44:59 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Jan 2017 15:41:49 GMT",
"version": "v2"
},
{
"created": "Mon, 9 Jan 2017 14:21:38 GMT",
"version": "v3"
}
] | 2017-01-10 | [
[
"Lascano",
"Nahuel",
"",
"ATHENA"
],
[
"Gallardo",
"Guillermo",
"",
"ATHENA"
],
[
"Deriche",
"Rachid",
"",
"ATHENA"
],
[
"Mazauric",
"Dorian",
"",
"ABS"
],
[
"Wassermann",
"Demian",
"",
"ATHENA"
]
] | Finding the common structural brain connectivity network for a given population is an open problem, crucial for current neuroscience. Recent evidence suggests there's a tightly connected network shared between humans. Obtaining this network will, among many advantages, allow us to focus cognitive and clinical analyses on common connections, thus increasing their statistical power. In turn, knowledge about the common network will facilitate novel analyses to understand the structure-function relationship in the brain. In this work, we present a new algorithm for computing the core structural connectivity network of a subject sample combining graph theory and statistics. Our algorithm works in accordance with novel evidence on brain topology. We analyze the problem theoretically and prove its complexity. Using 309 subjects, we show its advantages when used as a feature selection for connectivity analysis on populations, outperforming the current approaches. |
2005.03659 | Qi Li | Qi Li, Khalique Newaz, and Tijana Milenkovi\'c | Improved supervised prediction of aging-related genes via weighted
dynamic network analysis | null | null | null | null | q-bio.MN cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study focuses on the task of supervised prediction of aging-related
genes from -omics data. Unlike gene expression methods for this task that
capture aging-specific information but ignore interactions between genes (i.e.,
their protein products), or protein-protein interaction (PPI) network methods
for this task that account for PPIs but the PPIs are context-unspecific, we
recently integrated the two data types into an aging-specific PPI subnetwork,
which yielded more accurate aging-related gene predictions. However, a dynamic
aging-specific subnetwork did not improve prediction performance compared to a
static aging-specific subnetwork, despite the aging process being dynamic. This
could be because the dynamic subnetwork was inferred using a naive Induced
subgraph approach. Instead, we recently inferred a dynamic aging-specific
subnetwork using a methodologically more advanced notion of network propagation
(NP), which improved upon Induced dynamic aging-specific subnetwork in a
different task, that of unsupervised analyses of the aging process. Here, we
evaluate whether our existing NP-based dynamic subnetwork will improve upon the
dynamic as well as static subnetwork constructed by the Induced approach in the
considered task of supervised prediction of aging-related genes. The existing
NP-based subnetwork is unweighted, i.e., it gives equal importance to each of
the aging-specific PPIs. Because accounting for aging-specific edge weights
might be important, we additionally propose a weighted NP-based dynamic
aging-specific subnetwork. We demonstrate that a predictive machine learning
model trained and tested on the weighted subnetwork yields higher accuracy when
predicting aging-related genes than predictive models run on the existing
unweighted dynamic or static subnetworks, regardless of whether the existing
subnetworks were inferred using NP or the Induced approach.
| [
{
"created": "Thu, 7 May 2020 01:50:25 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Aug 2020 16:48:44 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Apr 2021 16:03:22 GMT",
"version": "v3"
}
] | 2021-04-14 | [
[
"Li",
"Qi",
""
],
[
"Newaz",
"Khalique",
""
],
[
"Milenković",
"Tijana",
""
]
] | This study focuses on the task of supervised prediction of aging-related genes from -omics data. Unlike gene expression methods for this task that capture aging-specific information but ignore interactions between genes (i.e., their protein products), or protein-protein interaction (PPI) network methods for this task that account for PPIs but the PPIs are context-unspecific, we recently integrated the two data types into an aging-specific PPI subnetwork, which yielded more accurate aging-related gene predictions. However, a dynamic aging-specific subnetwork did not improve prediction performance compared to a static aging-specific subnetwork, despite the aging process being dynamic. This could be because the dynamic subnetwork was inferred using a naive Induced subgraph approach. Instead, we recently inferred a dynamic aging-specific subnetwork using a methodologically more advanced notion of network propagation (NP), which improved upon Induced dynamic aging-specific subnetwork in a different task, that of unsupervised analyses of the aging process. Here, we evaluate whether our existing NP-based dynamic subnetwork will improve upon the dynamic as well as static subnetwork constructed by the Induced approach in the considered task of supervised prediction of aging-related genes. The existing NP-based subnetwork is unweighted, i.e., it gives equal importance to each of the aging-specific PPIs. Because accounting for aging-specific edge weights might be important, we additionally propose a weighted NP-based dynamic aging-specific subnetwork. We demonstrate that a predictive machine learning model trained and tested on the weighted subnetwork yields higher accuracy when predicting aging-related genes than predictive models run on the existing unweighted dynamic or static subnetworks, regardless of whether the existing subnetworks were inferred using NP or the Induced approach. |
2204.08086 | Nishant Sinha | Nishant Sinha, John S. Duncan, Beate Diehl, Fahmida A. Chowdhury, Jane
de Tisi, Anna Miserocchi, Andrew W. McEvoy, Kathryn A. Davis, Sjoerd B. Vos,
Gavin P. Winston, Yujiang Wang, Peter N. Taylor | Intracranial EEG structure-function coupling predicts surgical outcomes
in focal epilepsy | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Alterations to structural and functional brain networks have been reported
across many neurological conditions. However, the relationship between
structure and function -- their coupling -- is relatively unexplored,
particularly in the context of an intervention. Epilepsy surgery alters the
brain structure and networks to control the functional abnormality of seizures.
Given that surgery is a structural modification aiming to alter the function,
we hypothesized that stronger structure-function coupling preoperatively is
associated with a greater chance of post-operative seizure control. We
constructed structural and functional brain networks in 39 subjects with
medication-resistant focal epilepsy using data from intracranial EEG
(pre-surgery), structural MRI (pre-and post-surgery), and diffusion MRI
(pre-surgery). We investigated pre-operative structure-function coupling at two
spatial scales a) at the global iEEG network level and b) at the resolution of
individual iEEG electrode contacts using virtual surgeries. At global network
level, seizure-free individuals had stronger structure-function coupling
pre-operatively than those that were not seizure-free regardless of the choice
of interictal segment or frequency band. At the resolution of individual iEEG
contacts, the virtual surgery approach provided complementary information to
localize epileptogenic tissues. In predicting seizure outcomes,
structure-function coupling measures were more important than clinical
attributes, and together they predicted seizure outcomes with an accuracy of
85% and sensitivity of 87%. The underlying assumption that the structural
changes induced by surgery translate to the functional level to control
seizures is valid when the structure-functional coupling is strong. Mapping the
regions that contribute to structure-functional coupling using virtual
surgeries may help aid surgical planning.
| [
{
"created": "Sun, 17 Apr 2022 20:38:14 GMT",
"version": "v1"
}
] | 2022-04-19 | [
[
"Sinha",
"Nishant",
""
],
[
"Duncan",
"John S.",
""
],
[
"Diehl",
"Beate",
""
],
[
"Chowdhury",
"Fahmida A.",
""
],
[
"de Tisi",
"Jane",
""
],
[
"Miserocchi",
"Anna",
""
],
[
"McEvoy",
"Andrew W.",
""
],
... | Alterations to structural and functional brain networks have been reported across many neurological conditions. However, the relationship between structure and function -- their coupling -- is relatively unexplored, particularly in the context of an intervention. Epilepsy surgery alters the brain structure and networks to control the functional abnormality of seizures. Given that surgery is a structural modification aiming to alter the function, we hypothesized that stronger structure-function coupling preoperatively is associated with a greater chance of post-operative seizure control. We constructed structural and functional brain networks in 39 subjects with medication-resistant focal epilepsy using data from intracranial EEG (pre-surgery), structural MRI (pre-and post-surgery), and diffusion MRI (pre-surgery). We investigated pre-operative structure-function coupling at two spatial scales a) at the global iEEG network level and b) at the resolution of individual iEEG electrode contacts using virtual surgeries. At global network level, seizure-free individuals had stronger structure-function coupling pre-operatively than those that were not seizure-free regardless of the choice of interictal segment or frequency band. At the resolution of individual iEEG contacts, the virtual surgery approach provided complementary information to localize epileptogenic tissues. In predicting seizure outcomes, structure-function coupling measures were more important than clinical attributes, and together they predicted seizure outcomes with an accuracy of 85% and sensitivity of 87%. The underlying assumption that the structural changes induced by surgery translate to the functional level to control seizures is valid when the structure-functional coupling is strong. Mapping the regions that contribute to structure-functional coupling using virtual surgeries may help aid surgical planning. |
1510.00642 | Hannah Bos | Hannah Bos, Markus Diesmann, Moritz Helias | Identifying anatomical origins of coexisting oscillations in the
cortical microcircuit | null | null | 10.1371/journal.pcbi.1005132 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Oscillations are omnipresent in neural population signals, like multi-unit
recordings, EEG/MEG, and the local field potential. They have been linked to
the population firing rate of neurons, with individual neurons firing in a
close-to-irregular fashion at low rates. Using a combination of mean-field and
linear response theory we predict the spectra generated in a layered
microcircuit model of V1, composed of leaky integrate-and-fire neurons and
based on connectivity compiled from anatomical and electrophysiological
studies. The model exhibits low- and high-gamma oscillations visible in all
populations. Since locally generated frequencies are imposed onto other
populations, the origin of the oscillations cannot be deduced from the spectra.
We develop a universally applicable systematic approach that identifies the
anatomical circuits underlying the generation of oscillations in a given
network. Based on a theoretical reduction of the dynamics, we derive a
sensitivity measure resulting in a frequency-dependent connectivity map that
reveals connections crucial for the peak amplitude and frequency of the
observed oscillations and identifies the minimal circuit generating a given
frequency. The low-gamma peak turns out to be generated in a sub-circuit
located in layer 2/3 and 4, while the high-gamma peak emerges from the
inter-neurons in layer 4. Connections within and onto layer 5 are found to
regulate slow rate fluctuations. We further demonstrate how small perturbations
of the crucial connections have significant impact on the population spectra,
while the impairment of other connections leaves the dynamics on the population
level unaltered. The study uncovers connections where mechanisms controlling
the spectra of the cortical microcircuit are most effective.
| [
{
"created": "Fri, 2 Oct 2015 17:01:34 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Nov 2015 11:45:37 GMT",
"version": "v2"
},
{
"created": "Mon, 25 Jan 2016 15:54:19 GMT",
"version": "v3"
},
{
"created": "Thu, 19 May 2016 08:41:27 GMT",
"version": "v4"
}
] | 2017-02-08 | [
[
"Bos",
"Hannah",
""
],
[
"Diesmann",
"Markus",
""
],
[
"Helias",
"Moritz",
""
]
] | Oscillations are omnipresent in neural population signals, like multi-unit recordings, EEG/MEG, and the local field potential. They have been linked to the population firing rate of neurons, with individual neurons firing in a close-to-irregular fashion at low rates. Using a combination of mean-field and linear response theory we predict the spectra generated in a layered microcircuit model of V1, composed of leaky integrate-and-fire neurons and based on connectivity compiled from anatomical and electrophysiological studies. The model exhibits low- and high-gamma oscillations visible in all populations. Since locally generated frequencies are imposed onto other populations, the origin of the oscillations cannot be deduced from the spectra. We develop a universally applicable systematic approach that identifies the anatomical circuits underlying the generation of oscillations in a given network. Based on a theoretical reduction of the dynamics, we derive a sensitivity measure resulting in a frequency-dependent connectivity map that reveals connections crucial for the peak amplitude and frequency of the observed oscillations and identifies the minimal circuit generating a given frequency. The low-gamma peak turns out to be generated in a sub-circuit located in layer 2/3 and 4, while the high-gamma peak emerges from the inter-neurons in layer 4. Connections within and onto layer 5 are found to regulate slow rate fluctuations. We further demonstrate how small perturbations of the crucial connections have significant impact on the population spectra, while the impairment of other connections leaves the dynamics on the population level unaltered. The study uncovers connections where mechanisms controlling the spectra of the cortical microcircuit are most effective. |
2407.05055 | Yihang Zhou | Yihang Zhou | Metagenomic analysis revealed significant changes in cattle rectum
microbiome and antimicrobial resistome under fescue toxicosis | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Fescue toxicity causes reduced growth and reproductive issues in cattle
grazing endophyte-infected tall fescue. To characterize the gut microbiota and
its response to fescue toxicosis, we collected fecal samples before and after a
30-day toxic fescue seed supplementation from eight Angus Simmental pregnant
cows and heifers. We sequenced the 16 metagenomes using the whole-genome
shotgun approach and generated 157 Gbp of metagenomic sequences. Through de
novo assembly and annotation, we obtained a 13.1 Gbp reference contig assembly
and identified 22 million microbial genes for cattle rectum microbiota. We
discovered a significant reduction of microbial diversity after toxic seed
treatment (P<0.01), suggesting dysbiosis of the microbiome. Six bacterial
families and 31 species are significantly increased in the fecal microbiota
(P-adj<0.05), including members of the top abundant rumen core taxa. This
global elevation of rumen microbes in the rectum microbiota suggests a
potential impairment of rumen microbiota under fescue toxicosis. Among these,
Ruminococcaceae bacterium P7, an important species accounting for ~2% of rumen
microbiota, was the most impacted with a 16-fold increase from 0.17% to 2.8% in
feces (P<0.01). We hypothesized that rumen Ruminococcaceae bacterium P7
re-adapted to the large intestine environment under toxic fescue stress,
causing this dramatic increase in abundance. Functional enrichment analysis
revealed that the overrepresented pathways shifted from energy metabolism to
antimicrobial resistance and DNA replication. In conclusion, we discovered
dramatic microbiota alterations in composition, abundance, and functional
capacities under fescue toxicosis, and our results suggest Ruminococcaceae
bacterium P7 as a potential biomarker for fescue toxicosis management.
| [
{
"created": "Sat, 6 Jul 2024 11:58:18 GMT",
"version": "v1"
}
] | 2024-07-09 | [
[
"Zhou",
"Yihang",
""
]
] | Fescue toxicity causes reduced growth and reproductive issues in cattle grazing endophyte-infected tall fescue. To characterize the gut microbiota and its response to fescue toxicosis, we collected fecal samples before and after a 30-day toxic fescue seed supplementation from eight Angus Simmental pregnant cows and heifers. We sequenced the 16 metagenomes using the whole-genome shotgun approach and generated 157 Gbp of metagenomic sequences. Through de novo assembly and annotation, we obtained a 13.1 Gbp reference contig assembly and identified 22 million microbial genes for cattle rectum microbiota. We discovered a significant reduction of microbial diversity after toxic seed treatment (P<0.01), suggesting dysbiosis of the microbiome. Six bacterial families and 31 species are significantly increased in the fecal microbiota (P-adj<0.05), including members of the top abundant rumen core taxa. This global elevation of rumen microbes in the rectum microbiota suggests a potential impairment of rumen microbiota under fescue toxicosis. Among these, Ruminococcaceae bacterium P7, an important species accounting for ~2% of rumen microbiota, was the most impacted with a 16-fold increase from 0.17% to 2.8% in feces (P<0.01). We hypothesized that rumen Ruminococcaceae bacterium P7 re-adapted to the large intestine environment under toxic fescue stress, causing this dramatic increase in abundance. Functional enrichment analysis revealed that the overrepresented pathways shifted from energy metabolism to antimicrobial resistance and DNA replication. In conclusion, we discovered dramatic microbiota alterations in composition, abundance, and functional capacities under fescue toxicosis, and our results suggest Ruminococcaceae bacterium P7 as a potential biomarker for fescue toxicosis management. |
2105.02217 | Fernando Adri\'an Fern\'andez Tojo | Iv\'an Area, F.J. Fern\'andez, Juan J. Nieto, F. Adri\'an F. Tojo | A Digital Twin of a Compartmental Epidemiological Model based on a
Stieltjes Differential Equation | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a digital twin of the classical compartmental SIR (Susceptible,
Infected, Recovered) epidemic model and study the interrelation between the
digital twin and the system. In doing so, we use Stieltjes derivatives to feed
the data from the real system to the virtual model which, in return, improves
it in real time. As a byproduct of the model, we present a precise mathematical
definition of solution to the problem. We also analyze the existence and
uniqueness of solutions, introduce the concept of Main Digital Twin and present
some numerical simulations with real data of the COVID-19 epidemic, showing the
accuracy of the proposed ideas.
| [
{
"created": "Thu, 29 Apr 2021 10:20:15 GMT",
"version": "v1"
}
] | 2021-05-06 | [
[
"Area",
"Iván",
""
],
[
"Fernández",
"F. J.",
""
],
[
"Nieto",
"Juan J.",
""
],
[
"Tojo",
"F. Adrián F.",
""
]
] | We introduce a digital twin of the classical compartmental SIR (Susceptible, Infected, Recovered) epidemic model and study the interrelation between the digital twin and the system. In doing so, we use Stieltjes derivatives to feed the data from the real system to the virtual model which, in return, improves it in real time. As a byproduct of the model, we present a precise mathematical definition of solution to the problem. We also analyze the existence and uniqueness of solutions, introduce the concept of Main Digital Twin and present some numerical simulations with real data of the COVID-19 epidemic, showing the accuracy of the proposed ideas. |
2401.12601 | Francois Le Jeune | Fran\c{c}ois Le Jeune (UCBM, Hybrid, IRISA), Marco D'Alonzo (UCBM),
Valeria Piombino (UCBM), Alessia Noccaro (UCBM, Neurorobotics Lab), Domenico
Formica (UCBM, Neurorobotics Lab), Giovanni Di Pino (UCBM) | Experiencing an elongated limb in virtual reality modifies the tactile
distance perception of the corresponding real limb | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In measurement, a reference frame is needed to compare the measured object to
something already known. This raises the neuroscientific question of which
reference frame is used by humans when exploring the environment. Previous
studies suggested that, in touch, the body employed as measuring tool also
serves as reference frame. Indeed, an artificial modification of the perceived
dimensions of the body changes the tactile perception of external object
dimensions. However, it is unknown if such a change in tactile perception would
occur when the body schema is modified through the illusion of owning a limb
altered in size. Therefore, employing a virtual hand illusion paradigm with an
elongated forearm of different lengths, we systematically tested the subjective
perception of distance between two points (tactile distance perception task,
TDP task) on the corresponding real forearm following the illusion. Thus, TDP
task is used as a proxy to gauge changes in the body schema. Embodiment of the
virtual arm was found significantly greater after the synchronous visuo-tactile
stimulation condition compared to the asynchronous one, and the forearm
elongation significantly increased the TDP. However, we did not find any link
between the visuo-tactile induced ownership over the elongated arm and TDP
variation, suggesting that vision plays the main role in the modification of
the body schema. Additionally, significant effect of elongation found on TDP
but not on proprioception suggests that these are affected differently by body
schema modifications. These findings confirm the body schema malleability and
its role as reference frame in touch.
| [
{
"created": "Tue, 23 Jan 2024 09:59:00 GMT",
"version": "v1"
}
] | 2024-01-24 | [
[
"Jeune",
"François Le",
"",
"UCBM, Hybrid, IRISA"
],
[
"D'Alonzo",
"Marco",
"",
"UCBM"
],
[
"Piombino",
"Valeria",
"",
"UCBM"
],
[
"Noccaro",
"Alessia",
"",
"UCBM, Neurorobotics Lab"
],
[
"Formica",
"Domenico",
"",
"UC... | In measurement, a reference frame is needed to compare the measured object to something already known. This raises the neuroscientific question of which reference frame is used by humans when exploring the environment. Previous studies suggested that, in touch, the body employed as measuring tool also serves as reference frame. Indeed, an artificial modification of the perceived dimensions of the body changes the tactile perception of external object dimensions. However, it is unknown if such a change in tactile perception would occur when the body schema is modified through the illusion of owning a limb altered in size. Therefore, employing a virtual hand illusion paradigm with an elongated forearm of different lengths, we systematically tested the subjective perception of distance between two points (tactile distance perception task, TDP task) on the corresponding real forearm following the illusion. Thus, TDP task is used as a proxy to gauge changes in the body schema. Embodiment of the virtual arm was found significantly greater after the synchronous visuo-tactile stimulation condition compared to the asynchronous one, and the forearm elongation significantly increased the TDP. However, we did not find any link between the visuo-tactile induced ownership over the elongated arm and TDP variation, suggesting that vision plays the main role in the modification of the body schema. Additionally, significant effect of elongation found on TDP but not on proprioception suggests that these are affected differently by body schema modifications. These findings confirm the body schema malleability and its role as reference frame in touch. |
2112.08519 | Valeria Garbin | Angelo Pommella, Irina Harun, Klaus Hellgardt, Valeria Garbin | Enhancing microalgal cell wall permeability by microbubble streaming
flow | null | null | null | null | q-bio.QM cond-mat.soft physics.app-ph physics.flu-dyn | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We demonstrate that the permeability of the cell wall of microalga
$\textit{C. reinhardtii}$ can be enhanced in controlled microstreaming flow
conditions using single bubbles adherent to a wall and driven by ultrasound
near their resonance frequency. We find that microstreaming flow is effective
at acoustic pressures as low as tens of kPa, at least one order of magnitude
lower than those used in bulk ultrasonication. We quantify the increase in
number of fluorescent cells for different acoustic pressure amplitudes and
ultrasound exposure times. We interpret the increase in dye uptake after
microstreaming flow as an indication of enhanced permeability of the cell wall.
We perform microscopic visualizations to verify the occurrence of non-spherical
shape oscillations of the bubbles, and to identify the optimal conditions for
dye uptake. In our experimental conditions, with acoustic pressures in the
range 20-40 kPa and frequencies in the range 25-70 kHz, we obtained spherical
oscillations of bare microbubbles and non-spherical shape oscillations in
microbubbles with algae attached to their surface. Only the latter were able to
generate a non-linear microstreaming flow with sufficiently large viscous
stresses to induce algae permeabilization. The results demonstrate that
controlled microbubble streaming flow can enhance cell wall permeability in
energy-efficient conditions.
| [
{
"created": "Sat, 11 Dec 2021 09:14:01 GMT",
"version": "v1"
}
] | 2021-12-20 | [
[
"Pommella",
"Angelo",
""
],
[
"Harun",
"Irina",
""
],
[
"Hellgardt",
"Klaus",
""
],
[
"Garbin",
"Valeria",
""
]
] | We demonstrate that the permeability of the cell wall of microalga $\textit{C. reinhardtii}$ can be enhanced in controlled microstreaming flow conditions using single bubbles adherent to a wall and driven by ultrasound near their resonance frequency. We find that microstreaming flow is effective at acoustic pressures as low as tens of kPa, at least one order of magnitude lower than those used in bulk ultrasonication. We quantify the increase in number of fluorescent cells for different acoustic pressure amplitudes and ultrasound exposure times. We interpret the increase in dye uptake after microstreaming flow as an indication of enhanced permeability of the cell wall. We perform microscopic visualizations to verify the occurrence of non-spherical shape oscillations of the bubbles, and to identify the optimal conditions for dye uptake. In our experimental conditions, with acoustic pressures in the range 20-40 kPa and frequencies in the range 25-70 kHz, we obtained spherical oscillations of bare microbubbles and non-spherical shape oscillations in microbubbles with algae attached to their surface. Only the latter were able to generate a non-linear microstreaming flow with sufficiently large viscous stresses to induce algae permeabilization. The results demonstrate that controlled microbubble streaming flow can enhance cell wall permeability in energy-efficient conditions. |
2002.00931 | Christoph Zechner | Lorenzo Duso and Christoph Zechner | Stochastic reaction networks in dynamic compartment populations | 9 pages, 3 figures, appendix included | null | 10.1073/pnas.2003734117 | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compartmentalization of biochemical processes underlies all biological
systems, from the organelle to the tissue scale. Theoretical models to study
the interplay between noisy reaction dynamics and compartmentalization are
sparse, and typically very challenging to analyze computationally. Recent
studies have made progress towards addressing this problem in the context of
concrete biological systems but general approaches remain lacking. In this work
we propose a mathematical framework based on counting processes that allows us
to study compartment populations with arbitrary interactions and internal
biochemistry. We provide an efficient description of the population dynamics in
terms of differential equations which capture moments of the population and
their variability. We demonstrate the relevance of our approach using several
case studies inspired by biological systems at different scales.
| [
{
"created": "Mon, 3 Feb 2020 18:21:52 GMT",
"version": "v1"
}
] | 2022-06-08 | [
[
"Duso",
"Lorenzo",
""
],
[
"Zechner",
"Christoph",
""
]
] | Compartmentalization of biochemical processes underlies all biological systems, from the organelle to the tissue scale. Theoretical models to study the interplay between noisy reaction dynamics and compartmentalization are sparse, and typically very challenging to analyze computationally. Recent studies have made progress towards addressing this problem in the context of concrete biological systems but general approaches remain lacking. In this work we propose a mathematical framework based on counting processes that allows us to study compartment populations with arbitrary interactions and internal biochemistry. We provide an efficient description of the population dynamics in terms of differential equations which capture moments of the population and their variability. We demonstrate the relevance of our approach using several case studies inspired by biological systems at different scales. |
1107.2903 | Youval Dar | Youval Dar, Benjamin Bairrington, Daniel Cox, Rajiv Singh | Model for competing pathways in protein-aggregation: role of membrane
bound proteins | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivated by the biologically important and complex phenomena of A\beta\
peptide aggregation in Alzheimer's disease, we introduce a model and simulation
methodology for studying protein aggregation that includes extra-cellular
aggregation, aggregation on the cell-surface assisted by a membrane bound
protein, and in addition, supply, clearance, production and sequestration of
peptides and proteins. The model is used to produce equilibrium and
kinetic-aggregation phase diagrams for aggregation onset and of reduced stable
A\beta\ monomer concentrations due to aggregation. The methodology we
implemented permits modeling of a phenomenon involving orders of magnitude
differences in time scales and concentrations which can be retained in the
simulation. We demonstrate how to identify ranges of parameter values that give
monomer concentration depletion upon aggregation similar to that observed in
Alzheimer's disease. We show how very different behavior can be obtained as
reaction parameters and protein concentrations vary, and discuss the difficulty
reconciling results of experiments from two vastly different concentration
regimes. The latter is an important general issue in relating in-vitro and
mice-based experiments to humans.
| [
{
"created": "Thu, 14 Jul 2011 19:33:46 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Oct 2012 05:57:47 GMT",
"version": "v2"
},
{
"created": "Sun, 28 Apr 2013 07:49:27 GMT",
"version": "v3"
},
{
"created": "Thu, 2 May 2013 21:49:26 GMT",
"version": "v4"
},
{
"cr... | 2018-03-02 | [
[
"Dar",
"Youval",
""
],
[
"Bairrington",
"Benjamin",
""
],
[
"Cox",
"Daniel",
""
],
[
"Singh",
"Rajiv",
""
]
] | Motivated by the biologically important and complex phenomena of A\beta\ peptide aggregation in Alzheimer's disease, we introduce a model and simulation methodology for studying protein aggregation that includes extra-cellular aggregation, aggregation on the cell-surface assisted by a membrane bound protein, and in addition, supply, clearance, production and sequestration of peptides and proteins. The model is used to produce equilibrium and kinetic-aggregation phase diagrams for aggregation onset and of reduced stable A\beta\ monomer concentrations due to aggregation. The methodology we implemented permits modeling of a phenomenon involving orders of magnitude differences in time scales and concentrations which can be retained in the simulation. We demonstrate how to identify ranges of parameter values that give monomer concentration depletion upon aggregation similar to that observed in Alzheimer's disease. We show how very different behavior can be obtained as reaction parameters and protein concentrations vary, and discuss the difficulty reconciling results of experiments from two vastly different concentration regimes. The latter is an important general issue in relating in-vitro and mice-based experiments to humans.
1701.02128 | KaYin Leung | Ka Yin Leung, Mirjam Kretzschmar, Odo Diekmann | Mean field at distance one | revised version | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To be able to understand how infectious diseases spread on networks, it is
important to understand the network structure itself in the absence of
infection. In this text we consider dynamic network models that are inspired by
the (static) configuration network. The networks are described by
population-level averages such as the fraction of the population with $k$
partners, $k=0,1,2,\ldots$ This means that the bookkeeping contains information
about individuals and their partners, but no information about partners of
partners. Can we average over the population to obtain information about
partners of partners? The answer is `it depends', and this is where the mean
field at distance one assumption comes into play. In this text we explain that,
yes, we may average over the population (in the right way) in the static
network. Moreover, we provide evidence in support of a positive answer for the
network model that is dynamic due to partnership changes. If, however, we
additionally allow for demographic changes, dependencies between partners
arise. In earlier work we used the slogan `mean field at distance one' as a
justification of simply ignoring the dependencies. Here we discuss the
subtleties that come with the mean field at distance one assumption, especially
when demography is involved. Particular attention is given to the accuracy of
the approximation in the setting with demography. Next, the mean field at
distance one assumption is discussed in the context of an infection
superimposed on the network. We end with the conjecture that an extension of
the bookkeeping leads to an exact description of the network structure.
| [
{
"created": "Mon, 9 Jan 2017 10:45:31 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Mar 2017 13:20:44 GMT",
"version": "v2"
}
] | 2017-03-08 | [
[
"Leung",
"Ka Yin",
""
],
[
"Kretzschmar",
"Mirjam",
""
],
[
"Diekmann",
"Odo",
""
]
] | To be able to understand how infectious diseases spread on networks, it is important to understand the network structure itself in the absence of infection. In this text we consider dynamic network models that are inspired by the (static) configuration network. The networks are described by population-level averages such as the fraction of the population with $k$ partners, $k=0,1,2,\ldots$ This means that the bookkeeping contains information about individuals and their partners, but no information about partners of partners. Can we average over the population to obtain information about partners of partners? The answer is `it depends', and this is where the mean field at distance one assumption comes into play. In this text we explain that, yes, we may average over the population (in the right way) in the static network. Moreover, we provide evidence in support of a positive answer for the network model that is dynamic due to partnership changes. If, however, we additionally allow for demographic changes, dependencies between partners arise. In earlier work we used the slogan `mean field at distance one' as a justification of simply ignoring the dependencies. Here we discuss the subtleties that come with the mean field at distance one assumption, especially when demography is involved. Particular attention is given to the accuracy of the approximation in the setting with demography. Next, the mean field at distance one assumption is discussed in the context of an infection superimposed on the network. We end with the conjecture that an extension of the bookkeeping leads to an exact description of the network structure. |
0806.1292 | Jose L. Oliver | Pedro Bernaola-Galv\'an, Pedro Carpena and Jos\'e L. Oliver | A standalone version of IsoFinder for the computational prediction of
isochores in genome sequences | 7 pages, 3 figures | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Isochores are long genome segments relatively homogeneous in G+C. A heuristic
algorithm based on entropic segmentation has been developed by our group, and a
web server implementing all the required components is available. However, a
researcher may want to perform batch processing of many sequences
simultaneously on their local machine, instead of analyzing them on a one-by-one
basis through the web. To this end, standalone versions are required. We report
here the implementation of two standalone programs, able to predict isochores
at the sequence level: 1) a command-line version (IsoFinder) for Windows and
Linux systems; and 2) a user-friendly version (IsoFinderWin) running under
Windows.
| [
{
"created": "Sat, 7 Jun 2008 17:12:19 GMT",
"version": "v1"
}
] | 2009-09-29 | [
[
"Bernaola-Galván",
"Pedro",
""
],
[
"Carpena",
"Pedro",
""
],
[
"Oliver",
"José L.",
""
]
] | Isochores are long genome segments relatively homogeneous in G+C. A heuristic algorithm based on entropic segmentation has been developed by our group, and a web server implementing all the required components is available. However, a researcher may want to perform batch processing of many sequences simultaneously on their local machine, instead of analyzing them on a one-by-one basis through the web. To this end, standalone versions are required. We report here the implementation of two standalone programs, able to predict isochores at the sequence level: 1) a command-line version (IsoFinder) for Windows and Linux systems; and 2) a user-friendly version (IsoFinderWin) running under Windows.
1506.04438 | Mike Steel Prof. | Katharina T. Huber and Vincent Moulton and Mike Steel and Taoyang Wu | Folding and unfolding phylogenetic trees and networks | 17 pages, 5 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic networks are rooted, labelled directed acyclic graphs which are
commonly used to represent reticulate evolution. There is a close relationship
between phylogenetic networks and multi-labelled trees (MUL-trees). Indeed, any
phylogenetic network $N$ can be 'unfolded' to obtain a MUL-tree $U(N)$ and,
conversely, a MUL-tree $T$ can in certain circumstances be 'folded' to obtain a
phylogenetic network $F(T)$ that exhibits $T$. In this paper, we study
properties of the operations $U$ and $F$ in more detail. In particular, we
introduce the class of stable networks, phylogenetic networks $N$ for which
$F(U(N))$ is isomorphic to $N$, characterise such networks, and show that
they are related to the well-known class of tree-sibling networks. We also
explore how the concept of displaying a tree in a network $N$ can be related to
displaying the tree in the MUL-tree $U(N)$. To do this, we develop a
phylogenetic analogue of graph fibrations. This allows us to view $U(N)$ as the
analogue of the universal cover of a digraph, and to establish a close
connection between displaying trees in $U(N)$ and reconciling phylogenetic
trees with networks.
| [
{
"created": "Sun, 14 Jun 2015 21:10:25 GMT",
"version": "v1"
}
] | 2015-06-16 | [
[
"Huber",
"Katharina T.",
""
],
[
"Moulton",
"Vincent",
""
],
[
"Steel",
"Mike",
""
],
[
"Wu",
"Taoyang",
""
]
] | Phylogenetic networks are rooted, labelled directed acyclic graphs which are commonly used to represent reticulate evolution. There is a close relationship between phylogenetic networks and multi-labelled trees (MUL-trees). Indeed, any phylogenetic network $N$ can be 'unfolded' to obtain a MUL-tree $U(N)$ and, conversely, a MUL-tree $T$ can in certain circumstances be 'folded' to obtain a phylogenetic network $F(T)$ that exhibits $T$. In this paper, we study properties of the operations $U$ and $F$ in more detail. In particular, we introduce the class of stable networks, phylogenetic networks $N$ for which $F(U(N))$ is isomorphic to $N$, characterise such networks, and show that they are related to the well-known class of tree-sibling networks. We also explore how the concept of displaying a tree in a network $N$ can be related to displaying the tree in the MUL-tree $U(N)$. To do this, we develop a phylogenetic analogue of graph fibrations. This allows us to view $U(N)$ as the analogue of the universal cover of a digraph, and to establish a close connection between displaying trees in $U(N)$ and reconciling phylogenetic trees with networks.
1912.05071 | Nova Smedley | Nova F. Smedley, Suzie El-Saden, and William Hsu | Discovering and interpreting transcriptomic drivers of imaging traits
using neural networks | null | null | 10.1093/bioinformatics/btaa126 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation. Cancer heterogeneity is observed at multiple biological levels.
To improve our understanding of these differences and their relevance in
medicine, approaches to link organ- and tissue-level information from
diagnostic images and cellular-level information from genomics are needed.
However, these "radiogenomic" studies often use linear, shallow models, depend
on feature selection, or consider one gene at a time to map images to genes.
Moreover, no study has systematically attempted to understand the molecular
basis of imaging traits based on the interpretation of what the neural network
has learned. These current studies are thus limited in their ability to
understand the transcriptomic drivers of imaging traits, which could provide
additional context for determining clinical traits, such as prognosis.
Results. We present an approach based on neural networks that takes
high-dimensional gene expressions as input and performs nonlinear mapping to an
imaging trait. To interpret the models, we propose gene masking and gene
saliency to extract learned relationships from radiogenomic neural networks. In
glioblastoma patients, our models outperform comparable classifiers (>0.10 AUC)
and our interpretation methods were validated using a similar model to identify
known relationships between genes and molecular subtypes. We found that imaging
traits had specific transcription patterns, e.g., edema and genes related to
cellular invasion, and 15 radiogenomic associations were predictive of
survival. We demonstrate that neural networks can model transcriptomic
heterogeneity to reflect differences in imaging and can be used to derive
radiogenomic associations with clinical value.
| [
{
"created": "Wed, 11 Dec 2019 01:11:39 GMT",
"version": "v1"
}
] | 2020-05-19 | [
[
"Smedley",
"Nova F.",
""
],
[
"El-Saden",
"Suzie",
""
],
[
"Hsu",
"William",
""
]
] | Motivation. Cancer heterogeneity is observed at multiple biological levels. To improve our understanding of these differences and their relevance in medicine, approaches to link organ- and tissue-level information from diagnostic images and cellular-level information from genomics are needed. However, these "radiogenomic" studies often use linear, shallow models, depend on feature selection, or consider one gene at a time to map images to genes. Moreover, no study has systematically attempted to understand the molecular basis of imaging traits based on the interpretation of what the neural network has learned. These current studies are thus limited in their ability to understand the transcriptomic drivers of imaging traits, which could provide additional context for determining clinical traits, such as prognosis. Results. We present an approach based on neural networks that takes high-dimensional gene expressions as input and performs nonlinear mapping to an imaging trait. To interpret the models, we propose gene masking and gene saliency to extract learned relationships from radiogenomic neural networks. In glioblastoma patients, our models outperform comparable classifiers (>0.10 AUC) and our interpretation methods were validated using a similar model to identify known relationships between genes and molecular subtypes. We found that imaging traits had specific transcription patterns, e.g., edema and genes related to cellular invasion, and 15 radiogenomic associations were predictive of survival. We demonstrate that neural networks can model transcriptomic heterogeneity to reflect differences in imaging and can be used to derive radiogenomic associations with clinical value. |
1606.03620 | Ben Shirt-Ediss | Benjamin John Shirt-Ediss | Modelling Early Transitions Toward Autonomous Protocells | 205 Pages, 27 Figures, PhD Thesis Defended Feb 2016 | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-sa/4.0/ | This thesis broadly concerns the origins of life problem, pursuing a joint
approach that combines general philosophical/conceptual reflection on the
problem along with more detailed and formal scientific modelling work oriented
in the conceptual perspective developed. The central subject matter addressed
is the emergence and maintenance of compartmentalised chemistries as precursors
of more complex systems with a proper cellular organization. Whereas an
evolutionary conception of life dominates prebiotic chemistry research and
overflows into the protocells field, this thesis defends that the 'autonomous
systems perspective' of living phenomena is a suitable - arguably the most
suitable - conceptual framework to serve as a backdrop for protocell research.
The autonomy approach allows a careful and thorough reformulation of the
origins of cellular life problem as the problem of how integrated autopoietic
chemical organisation, present in all full-fledged cells, originated and
developed from more simple far-from-equilibrium chemical aggregate systems.
| [
{
"created": "Sat, 11 Jun 2016 19:40:50 GMT",
"version": "v1"
}
] | 2016-06-14 | [
[
"Shirt-Ediss",
"Benjamin John",
""
]
] | This thesis broadly concerns the origins of life problem, pursuing a joint approach that combines general philosophical/conceptual reflection on the problem along with more detailed and formal scientific modelling work oriented in the conceptual perspective developed. The central subject matter addressed is the emergence and maintenance of compartmentalised chemistries as precursors of more complex systems with a proper cellular organization. Whereas an evolutionary conception of life dominates prebiotic chemistry research and overflows into the protocells field, this thesis defends that the 'autonomous systems perspective' of living phenomena is a suitable - arguably the most suitable - conceptual framework to serve as a backdrop for protocell research. The autonomy approach allows a careful and thorough reformulation of the origins of cellular life problem as the problem of how integrated autopoietic chemical organisation, present in all full-fledged cells, originated and developed from more simple far-from-equilibrium chemical aggregate systems. |
1404.4112 | Caterina La Porta AM | Z. Budrikis, G. Costantini, C.A. La Porta, S. Zapperi | Protein accumulation in the endoplasmic reticulum as a non-equilibrium
phase transition | null | null | 10.1038/ncomms4620 | null | q-bio.NC q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several neurological disorders are associated with the aggregation of
aberrant proteins, often localized in intracellular organelles such as the
endoplasmic reticulum. Here we study protein aggregation kinetics by mean-field
reactions and three-dimensional Monte Carlo simulations of diffusion-limited
aggregation of linear polymers in a confined space, representing the
endoplasmic reticulum. By tuning the rates of protein production and
degradation, we show that the system undergoes a non-equilibrium phase
transition from a physiological phase with little or no polymer accumulation to
a pathological phase characterized by persistent polymerization. A combination
of external factors accumulating during the lifetime of a patient can thus
slightly modify the phase transition control parameters, tipping the balance
from a long symptomless lag phase to an accelerated pathological development.
The model can be successfully used to interpret experimental data on
amyloid-\beta clearance from the central nervous system.
| [
{
"created": "Tue, 15 Apr 2014 23:59:01 GMT",
"version": "v1"
}
] | 2014-04-17 | [
[
"Budrikis",
"Z.",
""
],
[
"Costantini",
"G.",
""
],
[
"La Porta",
"C. A.",
""
],
[
"Zapperi",
"S.",
""
]
] | Several neurological disorders are associated with the aggregation of aberrant proteins, often localized in intracellular organelles such as the endoplasmic reticulum. Here we study protein aggregation kinetics by mean-field reactions and three-dimensional Monte Carlo simulations of diffusion-limited aggregation of linear polymers in a confined space, representing the endoplasmic reticulum. By tuning the rates of protein production and degradation, we show that the system undergoes a non-equilibrium phase transition from a physiological phase with little or no polymer accumulation to a pathological phase characterized by persistent polymerization. A combination of external factors accumulating during the lifetime of a patient can thus slightly modify the phase transition control parameters, tipping the balance from a long symptomless lag phase to an accelerated pathological development. The model can be successfully used to interpret experimental data on amyloid-\beta clearance from the central nervous system.
1702.06192 | Giacomo Dimarco | Mathieu Leroy-Ler\^etre, Giacomo Dimarco, Martine Cazales, Marie-Laure
Boizeau, Bernard Ducommun, Val\`erie Lobjois, Pierre Degond | Are tumor cell lineages solely shaped by mechanical forces? | null | null | null | null | q-bio.CB nlin.CG nlin.PS q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates cells proliferation dynamics in small tumor cell
aggregates using an individual based model (IBM). The simulation model is
designed to study the morphology of the cell population and of the cell
lineages as well as the impact of the orientation of the division plane on this
morphology. Our IBM model is based on the hypothesis that cells are
incompressible objects that grow in size and divide once a threshold size is
reached, and that newly born cells adhere to the existing cell cluster. We
performed comparisons between the simulation model and experimental data by
using several statistical indicators. The results suggest that the emergence of
particular morphologies can be explained by simple mechanical interactions.
| [
{
"created": "Mon, 20 Feb 2017 22:04:25 GMT",
"version": "v1"
}
] | 2017-02-22 | [
[
"Leroy-Lerêtre",
"Mathieu",
""
],
[
"Dimarco",
"Giacomo",
""
],
[
"Cazales",
"Martine",
""
],
[
"Boizeau",
"Marie-Laure",
""
],
[
"Ducommun",
"Bernard",
""
],
[
"Lobjois",
"Valèrie",
""
],
[
"Degond",
"Pierre",... | This paper investigates cell proliferation dynamics in small tumor cell aggregates using an individual based model (IBM). The simulation model is designed to study the morphology of the cell population and of the cell lineages as well as the impact of the orientation of the division plane on this morphology. Our IBM model is based on the hypothesis that cells are incompressible objects that grow in size and divide once a threshold size is reached, and that newly born cells adhere to the existing cell cluster. We performed comparisons between the simulation model and experimental data by using several statistical indicators. The results suggest that the emergence of particular morphologies can be explained by simple mechanical interactions.
1201.1094 | Alan D. Rendall | Alan D. Rendall | Mathematics of the NFAT signalling pathway | 21 pages | null | null | AEI-2012-001 | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is a mathematical study of some aspects of the signalling pathway
leading to the activation of the transcription factor NFAT (nuclear factor of
activated T cells). Activation takes place by dephosphorylation at multiple
sites. This has been modelled by Salazar and H\"ofer using a large system of
ordinary differential equations depending on many parameters. With the help of
chemical reaction network theory we show that for any choice of the parameters
this system has a unique stationary solution for each value of the conserved
quantity given by the total amount of NFAT and that all solutions converge to
this stationary solution at late times. The dephosphorylation is carried out by
calcineurin, which in turn is activated by a rise in calcium concentration. We
study the way in which the dynamics of the calcium concentration influences
NFAT activation, an issue also considered by Salazar and H\"ofer with the help
of a model arising from work of Somogyi and Stucki. Criteria are obtained for
convergence to equilibrium of solutions of the model for the calcium
concentration.
| [
{
"created": "Thu, 5 Jan 2012 09:41:52 GMT",
"version": "v1"
}
] | 2012-01-06 | [
[
"Rendall",
"Alan D.",
""
]
] | This paper is a mathematical study of some aspects of the signalling pathway leading to the activation of the transcription factor NFAT (nuclear factor of activated T cells). Activation takes place by dephosphorylation at multiple sites. This has been modelled by Salazar and H\"ofer using a large system of ordinary differential equations depending on many parameters. With the help of chemical reaction network theory we show that for any choice of the parameters this system has a unique stationary solution for each value of the conserved quantity given by the total amount of NFAT and that all solutions converge to this stationary solution at late times. The dephosphorylation is carried out by calcineurin, which in turn is activated by a rise in calcium concentration. We study the way in which the dynamics of the calcium concentration influences NFAT activation, an issue also considered by Salazar and H\"ofer with the help of a model arising from work of Somogyi and Stucki. Criteria are obtained for convergence to equilibrium of solutions of the model for the calcium concentration. |
2212.01144 | Tianyu Qiu | Tianyu Qiu, Amir Jahangiri, Xiao Han, Dmitry Lesovoy, Tatiana Agback,
Peter Agback, Adnane Achour, Xiaobo Qu, and Vladislav Orekhov | Resolution enhancement of NMR by decoupling with low-rank Hankel model | 8 pages, 4 figures | null | null | null | q-bio.BM eess.SP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Nuclear magnetic resonance (NMR) spectroscopy has become a formidable tool
for biochemistry and medicine. Although J-coupling carries essential structural
information, it may also limit the spectral resolution. Homonuclear decoupling
remains a challenging problem. In this work, we introduce a new approach that
uses a specific coupling value as prior knowledge, and the Hankel property of
the exponential NMR signal to achieve broadband heteronuclear decoupling using
the low-rank method. Our results on synthetic and realistic HMQC spectra
demonstrate that the proposed method not only effectively enhances resolution
by decoupling, but also maintains sensitivity and suppresses spectral
artefacts. The approach can be combined with non-uniform sampling, which
means that the resolution can be further improved without any extra acquisition
time.
| [
{
"created": "Fri, 2 Dec 2022 12:45:44 GMT",
"version": "v1"
}
] | 2022-12-05 | [
[
"Qiu",
"Tianyu",
""
],
[
"Jahangiri",
"Amir",
""
],
[
"Han",
"Xiao",
""
],
[
"Lesovoy",
"Dmitry",
""
],
[
"Agback",
"Tatiana",
""
],
[
"Agback",
"Peter",
""
],
[
"Achour",
"Adnane",
""
],
[
"Qu",
... | Nuclear magnetic resonance (NMR) spectroscopy has become a formidable tool for biochemistry and medicine. Although J-coupling carries essential structural information, it may also limit the spectral resolution. Homonuclear decoupling remains a challenging problem. In this work, we introduce a new approach that uses a specific coupling value as prior knowledge, and the Hankel property of the exponential NMR signal to achieve broadband heteronuclear decoupling using the low-rank method. Our results on synthetic and realistic HMQC spectra demonstrate that the proposed method not only effectively enhances resolution by decoupling, but also maintains sensitivity and suppresses spectral artefacts. The approach can be combined with non-uniform sampling, which means that the resolution can be further improved without any extra acquisition time.
1112.2868 | Gilles Guillot | Gilles Guillot and Alexandra Carpentier-Skandalis | On the informativeness of dominant and co-dominant genetic markers for
Bayesian supervised clustering | null | The Open Statistics & Probability Journal, 2011, 3, 7-12 | 10.2174/1876527001103010007 | null | q-bio.PE math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the accuracy of a Bayesian supervised method used to cluster
individuals into genetically homogeneous groups on the basis of dominant or
codominant molecular markers. We provide a formula relating an error criterion
to the number of loci used and the number of clusters. This formula is exact and
holds for an arbitrary number of clusters and markers. Our work suggests that
dominant markers studies can achieve an accuracy similar to that of codominant
markers studies if the number of markers used in the former is about 1.7 times
larger than in the latter.
| [
{
"created": "Tue, 13 Dec 2011 12:17:33 GMT",
"version": "v1"
}
] | 2011-12-14 | [
[
"Guillot",
"Gilles",
""
],
[
"Carpentier-Skandalis",
"Alexandra",
""
]
] | We study the accuracy of a Bayesian supervised method used to cluster individuals into genetically homogeneous groups on the basis of dominant or codominant molecular markers. We provide a formula relating an error criterion to the number of loci used and the number of clusters. This formula is exact and holds for an arbitrary number of clusters and markers. Our work suggests that dominant markers studies can achieve an accuracy similar to that of codominant markers studies if the number of markers used in the former is about 1.7 times larger than in the latter.
0904.2678 | Alexey Mazur K | Alexey K. Mazur | Comments on "Remeasuring the Double Helix" | 4 pages, 1 figure | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mathew-Fenn et al. (Science (2008) 322, 446-9) measured end-to-end distances
of short DNA and concluded that stretching fluctuations in several consecutive
turns of the double helix should be strongly correlated. I argue that this
conclusion is based on incorrect assumptions, notably, on a simplistic
treatment of the excluded volume effect of reporter labels. Contrary to the
author's claim, their conclusion is not supported by other data.
| [
{
"created": "Fri, 17 Apr 2009 11:48:56 GMT",
"version": "v1"
}
] | 2009-04-20 | [
[
"Mazur",
"Alexey K.",
""
]
] | Mathew-Fenn et al. (Science (2008) 322, 446-9) measured end-to-end distances of short DNA and concluded that stretching fluctuations in several consecutive turns of the double helix should be strongly correlated. I argue that this conclusion is based on incorrect assumptions, notably, on a simplistic treatment of the excluded volume effect of reporter labels. Contrary to the author's claim, their conclusion is not supported by other data. |
2204.07478 | Sebastian Lotter | Sebastian Lotter and Michael T. Barros and Robert Schober and
Maximilian Sch\"afer | Signal Reception With Generic Three-State Receptors in Synaptic MC | 6 pages, 6 figures. Presented at 2022 IEEE Global Communications
Conference (GLOBECOM), Rio de Janeiro, Brasil | null | 10.1109/GLOBECOM48099.2022.10001154 | null | q-bio.NC cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synaptic communication is studied by communication engineers for two main
reasons. One is to enable novel neuroengineering applications that require
interfacing with neurons. The other reason is to draw inspiration for the
design of synthetic molecular communication systems. Both of these goals
require understanding of how the chemical synaptic signal is sensed and
transduced at the synaptic receiver (Rx). While signal reception in synaptic
molecular communication (SMC) depends heavily on the kinetics of the receptors
employed by the synaptic Rxs, existing channel models for SMC either
oversimplify the receptor kinetics or employ complex, high-dimensional kinetic
schemes limited to specific types of receptors. Both approaches do not
facilitate a comparative analysis of different types of natural synapses. In
this paper, we propose a novel deterministic channel model for SMC which
employs a generic three-state receptor model that captures the characteristics
of the most important receptor types in SMC. The model is based on a transfer
function expansion of Fick's diffusion equation and accounts for release,
diffusion, and degradation of neurotransmitters as well as their reversible
binding to finitely many generic postsynaptic receptors. The proposed SMC model
is the first that allows studying the impact of the characteristic dynamics of
the main postsynaptic receptor types on synaptic signal transmission. Numerical
results indicate that the proposed model indeed exhibits a wide range of
biologically plausible dynamics when specialized to specific natural receptor
types.
| [
{
"created": "Thu, 14 Apr 2022 10:56:19 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Jan 2023 13:38:47 GMT",
"version": "v2"
}
] | 2023-01-18 | [
[
"Lotter",
"Sebastian",
""
],
[
"Barros",
"Michael T.",
""
],
[
"Schober",
"Robert",
""
],
[
"Schäfer",
"Maximilian",
""
]
] | Synaptic communication is studied by communication engineers for two main reasons. One is to enable novel neuroengineering applications that require interfacing with neurons. The other reason is to draw inspiration for the design of synthetic molecular communication systems. Both of these goals require understanding of how the chemical synaptic signal is sensed and transduced at the synaptic receiver (Rx). While signal reception in synaptic molecular communication (SMC) depends heavily on the kinetics of the receptors employed by the synaptic Rxs, existing channel models for SMC either oversimplify the receptor kinetics or employ complex, high-dimensional kinetic schemes limited to specific types of receptors. Both approaches do not facilitate a comparative analysis of different types of natural synapses. In this paper, we propose a novel deterministic channel model for SMC which employs a generic three-state receptor model that captures the characteristics of the most important receptor types in SMC. The model is based on a transfer function expansion of Fick's diffusion equation and accounts for release, diffusion, and degradation of neurotransmitters as well as their reversible binding to finitely many generic postsynaptic receptors. The proposed SMC model is the first that allows studying the impact of the characteristic dynamics of the main postsynaptic receptor types on synaptic signal transmission. Numerical results indicate that the proposed model indeed exhibits a wide range of biologically plausible dynamics when specialized to specific natural receptor types. |
2103.10462 | Tamal Batabyal | Tamal Batabyal, Aijaz Ahmad Naik, Daniel Weller, Jaideep Kapur | Cellcounter: a deep learning framework for high-fidelity spatial
localization of neurons | Submitted to a journal | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Many neuroscientific applications require robust and accurate localization of
neurons. It is still an unsolved problem because of the enormous variation in
intensity, texture, spatial overlap, morphology and background artifacts. In
addition, curation of a large dataset containing complete manual annotation of
neurons from high-resolution images to train a classifier requires significant
time and effort. We present Cellcounter, a deep learning-based model trained on
images containing incompletely-annotated neurons with highly-varied morphology
and control images containing artifacts and background structures. Leveraging
the striking self-learning ability, Cellcounter gradually labels neurons,
obviating the need for time-intensive complete annotation. Cellcounter shows
its efficacy over the state of the arts in the accurate localization of neurons
while significantly reducing false-positive detection in several protocols.
| [
{
"created": "Thu, 18 Mar 2021 18:19:17 GMT",
"version": "v1"
}
] | 2021-03-22 | [
[
"Batabyal",
"Tamal",
""
],
[
"Naik",
"Aijaz Ahmad",
""
],
[
"Weller",
"Daniel",
""
],
[
"Kapur",
"Jaideep",
""
]
] | Many neuroscientific applications require robust and accurate localization of neurons. It is still an unsolved problem because of the enormous variation in intensity, texture, spatial overlap, morphology and background artifacts. In addition, curation of a large dataset containing complete manual annotation of neurons from high-resolution images to train a classifier requires significant time and effort. We present Cellcounter, a deep learning-based model trained on images containing incompletely-annotated neurons with highly-varied morphology and control images containing artifacts and background structures. Leveraging the striking self-learning ability, Cellcounter gradually labels neurons, obviating the need for time-intensive complete annotation. Cellcounter shows its efficacy over the state of the arts in the accurate localization of neurons while significantly reducing false-positive detection in several protocols. |
2305.04934 | Markus Buehler | Markus J. Buehler | Generative Pretrained Autoregressive Transformer Graph Neural Network
applied to the Analysis and Discovery of Novel Proteins | null | null | null | null | q-bio.BM cond-mat.dis-nn cond-mat.soft cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We report a flexible language-model based deep learning strategy, applied
here to solve complex forward and inverse problems in protein modeling, based
on an attention neural network that integrates transformer and graph
convolutional architectures in a causal multi-headed graph mechanism, to
realize a generative pretrained model. The model is applied to predict
secondary structure content (per-residue level and overall content), protein
solubility, and sequencing tasks. Further trained on inverse tasks, the model
is rendered capable of designing proteins with these properties as target
features. The model is formulated as a general framework, completely
prompt-based, and can be adapted for a variety of downstream tasks. We find
that adding additional tasks yields emergent synergies that the model exploits
in improving overall performance, beyond what would be possible by training a
model on each dataset alone. Case studies are presented to validate the method,
yielding protein designs specifically focused on structural proteins, but also
exploring the applicability in the design of soluble, antimicrobial
biomaterials. While our model is trained to ultimately perform 8 distinct
tasks, with available datasets it can be extended to solve additional problems.
In a broader sense, this work illustrates a form of multiscale modeling that
relates a set of ultimate building blocks (here, byte-level utf8 characters
that define the nature of the physical system at hand) to complex output. This
materiomic scheme captures complex emergent relationships between universal
building block and resulting properties via a synergizing learning capacity to
express a set of potentialities embedded in the knowledge used in training, via
the interplay of universality and diversity.
| [
{
"created": "Sun, 7 May 2023 12:30:24 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jul 2023 12:41:39 GMT",
"version": "v2"
}
] | 2023-10-20 | [
[
"Buehler",
"Markus J.",
""
]
] | We report a flexible language-model based deep learning strategy, applied here to solve complex forward and inverse problems in protein modeling, based on an attention neural network that integrates transformer and graph convolutional architectures in a causal multi-headed graph mechanism, to realize a generative pretrained model. The model is applied to predict secondary structure content (per-residue level and overall content), protein solubility, and sequencing tasks. Further trained on inverse tasks, the model is rendered capable of designing proteins with these properties as target features. The model is formulated as a general framework, completely prompt-based, and can be adapted for a variety of downstream tasks. We find that adding additional tasks yields emergent synergies that the model exploits in improving overall performance, beyond what would be possible by training a model on each dataset alone. Case studies are presented to validate the method, yielding protein designs specifically focused on structural proteins, but also exploring the applicability in the design of soluble, antimicrobial biomaterials. While our model is trained to ultimately perform 8 distinct tasks, with available datasets it can be extended to solve additional problems. In a broader sense, this work illustrates a form of multiscale modeling that relates a set of ultimate building blocks (here, byte-level utf8 characters that define the nature of the physical system at hand) to complex output. This materiomic scheme captures complex emergent relationships between universal building block and resulting properties via a synergizing learning capacity to express a set of potentialities embedded in the knowledge used in training, via the interplay of universality and diversity. |
2202.07093 | Aditi Deshpande | Aditi Deshpande, Nitya Kari, Jordan Elliott McKenzie, Bin Jiang,
Patrik Michel, Nima Toosizadeh, Pouya Tahsili Fahadan, Chelsea Kidwell, Max
Wintermark, Kaveh Laksari | Cerebrovascular morphology in aging and disease -- imaging biomarkers
for ischemic stroke and Alzheimers disease | 15 pages, 5 figures, 3 tables | null | null | null | q-bio.QM eess.IV | http://creativecommons.org/licenses/by/4.0/ | Background and Purpose: Altered brain vasculature is a key phenomenon in
several neurologic disorders. This paper presents a quantitative assessment of
vascular morphology in healthy and diseased adults including changes during
aging and the anatomical variations in the Circle of Willis (CoW). Methods: We
used our automatic method to segment and extract novel geometric features of
the cerebral vasculature from MRA scans of 175 healthy subjects, 45 AIS, and 50
AD patients after spatial registration. This is followed by quantification and
statistical analysis of vascular alterations in acute ischemic stroke (AIS) and
Alzheimer's disease (AD), the biggest cerebrovascular and neurodegenerative
disorders. Results: We determined that the CoW is fully formed in only 35
percent of healthy adults and found significantly increased tortuosity and
fractality, with increasing age and with disease -- both AIS and AD. We also
found significantly decreased vessel length, volume and number of branches in
AIS patients. Lastly, we found that AD cerebral vessels exhibited significantly
smaller diameter and more complex branching patterns, compared to age-matched
healthy adults. These changes were significantly heightened with progression of
AD from early onset to moderate-severe dementia. Conclusion: Altered vessel
geometry in AIS patients shows that there is pathological morphology coupled
with stroke. In AD due to pathological alterations in the endothelium or
amyloid depositions leading to neuronal damage and hypoperfusion, vessel
geometry is significantly altered even in mild or early dementia. The specific
geometric features and quantitative comparisons demonstrate potential for using
vascular morphology as a non-invasive imaging biomarker for neurologic
disorders.
| [
{
"created": "Mon, 14 Feb 2022 23:31:43 GMT",
"version": "v1"
}
] | 2022-02-16 | [
[
"Deshpande",
"Aditi",
""
],
[
"Kari",
"Nitya",
""
],
[
"McKenzie",
"Jordan Elliott",
""
],
[
"Jiang",
"Bin",
""
],
[
"Michel",
"Patrik",
""
],
[
"Toosizadeh",
"Nima",
""
],
[
"Fahadan",
"Pouya Tahsili",
""
... | Background and Purpose: Altered brain vasculature is a key phenomenon in several neurologic disorders. This paper presents a quantitative assessment of vascular morphology in healthy and diseased adults including changes during aging and the anatomical variations in the Circle of Willis (CoW). Methods: We used our automatic method to segment and extract novel geometric features of the cerebral vasculature from MRA scans of 175 healthy subjects, 45 AIS, and 50 AD patients after spatial registration. This is followed by quantification and statistical analysis of vascular alterations in acute ischemic stroke (AIS) and Alzheimer's disease (AD), the biggest cerebrovascular and neurodegenerative disorders. Results: We determined that the CoW is fully formed in only 35 percent of healthy adults and found significantly increased tortuosity and fractality, with increasing age and with disease -- both AIS and AD. We also found significantly decreased vessel length, volume and number of branches in AIS patients. Lastly, we found that AD cerebral vessels exhibited significantly smaller diameter and more complex branching patterns, compared to age-matched healthy adults. These changes were significantly heightened with progression of AD from early onset to moderate-severe dementia. Conclusion: Altered vessel geometry in AIS patients shows that there is pathological morphology coupled with stroke. In AD due to pathological alterations in the endothelium or amyloid depositions leading to neuronal damage and hypoperfusion, vessel geometry is significantly altered even in mild or early dementia. The specific geometric features and quantitative comparisons demonstrate potential for using vascular morphology as a non-invasive imaging biomarker for neurologic disorders. |
2101.03023 | Xiang-Sheng Wang | Jun Liu and Xiang-Sheng Wang | Optimal allocation of face masks during the COVID-19 pandemic: a case
study of the first epidemic wave in the United States | null | null | null | null | q-bio.PE math.OC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we propose a two-group SIR epidemic model to simulate the
outcome of stay-at-home policy and wearing face masks during the first COVID-19
epidemic wave in the United States. Based on our proposed model, we further use
the optimal control approach (with the objective of minimizing total deaths) to
find the optimal dynamical distribution of face masks between the healthcare
workers and the general public. It is not surprising that all the face masks
should be solely reserved for the healthcare workers if the supply is short.
However, when the supply is indeed sufficient, our numerical study indicates
that the general public should share a large portion of face masks at the
beginning of an epidemic wave to dramatically reduce the death toll. This
interesting result partially contradicts with the guideline advised by the US
Surgeon General and the Centers for Disease Control and Prevention (CDC) in
March 2020. The optimality of this sounding CDC guideline highly depends on the
supply level of face masks that changes frequently, and hence it should be
adjusted according to the supply of face masks.
| [
{
"created": "Tue, 29 Dec 2020 15:30:35 GMT",
"version": "v1"
}
] | 2021-01-11 | [
[
"Liu",
"Jun",
""
],
[
"Wang",
"Xiang-Sheng",
""
]
] | In this paper, we propose a two-group SIR epidemic model to simulate the outcome of stay-at-home policy and wearing face masks during the first COVID-19 epidemic wave in the United States. Based on our proposed model, we further use the optimal control approach (with the objective of minimizing total deaths) to find the optimal dynamical distribution of face masks between the healthcare workers and the general public. It is not surprising that all the face masks should be solely reserved for the healthcare workers if the supply is short. However, when the supply is indeed sufficient, our numerical study indicates that the general public should share a large portion of face masks at the beginning of an epidemic wave to dramatically reduce the death toll. This interesting result partially contradicts with the guideline advised by the US Surgeon General and the Centers for Disease Control and Prevention (CDC) in March 2020. The optimality of this sounding CDC guideline highly depends on the supply level of face masks that changes frequently, and hence it should be adjusted according to the supply of face masks. |
2112.05527 | Kresten Lindorff-Larsen | F. Emil Thomasen and Kresten Lindorff-Larsen | Conformational ensembles of intrinsically disordered proteins and
flexible multidomain proteins | 25 pages, 2 figures | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Intrinsically disordered proteins (IDPs) and multidomain proteins with
flexible linkers show a high level of structural heterogeneity and are best
described by ensembles consisting of multiple conformations with associated
thermodynamic weights. Determining conformational ensembles usually involves
integration of biophysical experiments and computational models. In this
review, we discuss current approaches to determining conformational ensembles
of IDPs and multidomain proteins, including the choice of biophysical
experiments, computational models used to sample protein conformations, models
to calculate experimental observables from protein structure, and methods to
refine ensembles against experimental data. We also provide examples of recent
applications of integrative conformational ensemble determination to study IDPs
and multidomain proteins and suggest future directions for research in the
field.
| [
{
"created": "Fri, 10 Dec 2021 13:32:44 GMT",
"version": "v1"
}
] | 2021-12-13 | [
[
"Thomasen",
"F. Emil",
""
],
[
"Lindorff-Larsen",
"Kresten",
""
]
] | Intrinsically disordered proteins (IDPs) and multidomain proteins with flexible linkers show a high level of structural heterogeneity and are best described by ensembles consisting of multiple conformations with associated thermodynamic weights. Determining conformational ensembles usually involves integration of biophysical experiments and computational models. In this review, we discuss current approaches to determining conformational ensembles of IDPs and multidomain proteins, including the choice of biophysical experiments, computational models used to sample protein conformations, models to calculate experimental observables from protein structure, and methods to refine ensembles against experimental data. We also provide examples of recent applications of integrative conformational ensemble determination to study IDPs and multidomain proteins and suggest future directions for research in the field. |
2303.02509 | Ansgar Gruber | Ansgar Gruber (1), Cedar McKay (2), Gabrielle Rocap (2), Miroslav
Oborn\'ik (1 and 3) ((1) Biology Centre, Institute of Parasitology, Czech
Academy of Sciences, (2) School of Oceanography, University of Washington,
(3) University of South Bohemia) | Comparison of different versions of SignalP and TargetP for diatom
plastid protein predictions with ASAFind | 8 pages, 1 figure in two panels, two sets of raw data (available on
request) | Matters (ISSN: 2297-8240), e202005000001 (2020) | null | null | q-bio.SC | http://creativecommons.org/licenses/by/4.0/ | Plastid targeted proteins of diatoms and related algae can be predicted with
high sensitivity and specificity using the ASAFind method published in 2015.
ASAFind predictions rely on SignalP predictions of endoplasmic reticulum (ER)
targeting signal peptides. Recently (in 2019), a new version of SignalP was
released, SignalP 5.0. We tested the ability of SignalP 5.0 to recognize signal
peptides of nucleus-encoded, plastid-targeted diatom pre-proteins, and to
identify the signal peptide cleavage site. The results were compared to manual
predictions of the characteristic cleavage site motif, and to previous versions
of SignalP. SignalP 5.0 is less sensitive than the previous versions of SignalP
in this specific task, and also in the detection of signal peptides of
non-plastid proteins in diatoms. However, in combination with ASAFind, the
resulting prediction performance for plastid proteins is high. In addition, we
tested the multi-location prediction tool TargetP for its suitability to
provide signal peptide information to ASAFind. The newest version, TargetP 2.0,
had the highest prediction performances for diatom signal peptides and
mitochondrial transit peptides compared to other versions of SignalP and
TargetP, thus it provides a good basis for ASAFind predictions.
| [
{
"created": "Sat, 4 Mar 2023 21:37:55 GMT",
"version": "v1"
}
] | 2023-03-07 | [
[
"Gruber",
"Ansgar",
"",
"1 and 3"
],
[
"McKay",
"Cedar",
"",
"1 and 3"
],
[
"Rocap",
"Gabrielle",
"",
"1 and 3"
],
[
"Oborník",
"Miroslav",
"",
"1 and 3"
]
] | Plastid targeted proteins of diatoms and related algae can be predicted with high sensitivity and specificity using the ASAFind method published in 2015. ASAFind predictions rely on SignalP predictions of endoplasmic reticulum (ER) targeting signal peptides. Recently (in 2019), a new version of SignalP was released, SignalP 5.0. We tested the ability of SignalP 5.0 to recognize signal peptides of nucleus-encoded, plastid-targeted diatom pre-proteins, and to identify the signal peptide cleavage site. The results were compared to manual predictions of the characteristic cleavage site motif, and to previous versions of SignalP. SignalP 5.0 is less sensitive than the previous versions of SignalP in this specific task, and also in the detection of signal peptides of non-plastid proteins in diatoms. However, in combination with ASAFind, the resulting prediction performance for plastid proteins is high. In addition, we tested the multi-location prediction tool TargetP for its suitability to provide signal peptide information to ASAFind. The newest version, TargetP 2.0, had the highest prediction performances for diatom signal peptides and mitochondrial transit peptides compared to other versions of SignalP and TargetP, thus it provides a good basis for ASAFind predictions. |
0902.4552 | Leonardo L. Gollo | Leonardo L. Gollo, Osame Kinouchi, Mauro Copelli | Active dendrites enhance neuronal dynamic range | 20 pages, 6 figures | PLoS Comput Biol 5(6): e1000402 (2009) | 10.1371/journal.pcbi.1000402 | null | q-bio.NC nlin.CG physics.bio-ph q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the first experimental evidences of active conductances in dendrites,
most neurons have been shown to exhibit dendritic excitability through the
expression of a variety of voltage-gated ion channels. However, despite
experimental and theoretical efforts undertaken in the last decades, the role
of this excitability for some kind of dendritic computation has remained
elusive. Here we show that, owing to very general properties of excitable
media, the average output of a model of active dendritic trees is a highly
non-linear function of their afferent rate, attaining extremely large dynamic
ranges (above 50 dB). Moreover, the model yields double-sigmoid response
functions as experimentally observed in retinal ganglion cells. We claim that
enhancement of dynamic range is the primary functional role of active dendritic
conductances. We predict that neurons with larger dendritic trees should have
larger dynamic range and that blocking of active conductances should lead to a
decrease of dynamic range.
| [
{
"created": "Thu, 26 Feb 2009 11:09:56 GMT",
"version": "v1"
},
{
"created": "Sat, 8 Aug 2009 13:37:56 GMT",
"version": "v2"
}
] | 2012-01-18 | [
[
"Gollo",
"Leonardo L.",
""
],
[
"Kinouchi",
"Osame",
""
],
[
"Copelli",
"Mauro",
""
]
] | Since the first experimental evidences of active conductances in dendrites, most neurons have been shown to exhibit dendritic excitability through the expression of a variety of voltage-gated ion channels. However, despite experimental and theoretical efforts undertaken in the last decades, the role of this excitability for some kind of dendritic computation has remained elusive. Here we show that, owing to very general properties of excitable media, the average output of a model of active dendritic trees is a highly non-linear function of their afferent rate, attaining extremely large dynamic ranges (above 50 dB). Moreover, the model yields double-sigmoid response functions as experimentally observed in retinal ganglion cells. We claim that enhancement of dynamic range is the primary functional role of active dendritic conductances. We predict that neurons with larger dendritic trees should have larger dynamic range and that blocking of active conductances should lead to a decrease of dynamic range. |
2203.04641 | Alex Roxin | Federico Devalle and Alex Roxin | Fluctuation-driven plasticity allows for flexible rewiring of neuronal
assemblies | 30 pages, 7 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Synaptic connections in neuronal circuits are modulated by pre- and
post-synaptic spiking activity. Heuristic models of this process of synaptic
plasticity can provide excellent fits to results from in-vitro experiments in
which pre- and post-synaptic spiking is varied in a controlled fashion.
However, the plasticity rules inferred from fitting such data are inevitably
unstable, in that given constant pre- and post-synaptic activity the synapse
will either fully potentiate or depress. This instability can be held in check
by adding additional mechanisms, such as homeostasis. Here we consider an
alternative scenario in which the plasticity rule itself is stable. When this
is the case, net potentiation or depression only occur when pre- and
post-synaptic activity vary in time, e.g. when driven by time-varying inputs.
We study how the features of such inputs shape the recurrent synaptic
connections in models of neuronal circuits. In the case of oscillatory inputs,
the resulting structure is strongly affected by the phase relationship between
drive to different neurons. In large networks, distributed phases tend to lead
to hierarchical clustering. Our results may be of relevance for understanding
the effect of sensory-driven inputs, which are by nature time-varying, on
synaptic plasticity, and hence on learning and memory.
| [
{
"created": "Wed, 9 Mar 2022 11:05:33 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Jul 2022 08:06:43 GMT",
"version": "v2"
}
] | 2022-07-14 | [
[
"Devalle",
"Federico",
""
],
[
"Roxin",
"Alex",
""
]
] | Synaptic connections in neuronal circuits are modulated by pre- and post-synaptic spiking activity. Heuristic models of this process of synaptic plasticity can provide excellent fits to results from in-vitro experiments in which pre- and post-synaptic spiking is varied in a controlled fashion. However, the plasticity rules inferred from fitting such data are inevitably unstable, in that given constant pre- and post-synaptic activity the synapse will either fully potentiate or depress. This instability can be held in check by adding additional mechanisms, such as homeostasis. Here we consider an alternative scenario in which the plasticity rule itself is stable. When this is the case, net potentiation or depression only occur when pre- and post-synaptic activity vary in time, e.g. when driven by time-varying inputs. We study how the features of such inputs shape the recurrent synaptic connections in models of neuronal circuits. In the case of oscillatory inputs, the resulting structure is strongly affected by the phase relationship between drive to different neurons. In large networks, distributed phases tend to lead to hierarchical clustering. Our results may be of relevance for understanding the effect of sensory-driven inputs, which are by nature time-varying, on synaptic plasticity, and hence on learning and memory. |
2008.11716 | Ghader Rezazadeh | Kamran Soltani (1) and Ghader Rezazadeh (1, 2) ((1) Mechanical
Engineering Department, Urmia, Iran, (2) South Ural State University,
Chelyabinsk, Russian Federation) | A New Dynamic Model to Predict the Effects of Governmental Decisions on
the Progress of the CoViD-19 Epidemic | The model parameters are calculated based on a list square method in
the interval where there exists real published statistical data. But for
predicting the effects of reopening plans which are considered by given
presented models in paper since there is no existing statistical data the
results are given based on assumed parameters just to show the efficiency of
the new proposed epidemic model | null | null | null | q-bio.QM physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have established a novel mathematical model that considers various aspects
of the spreading of the virus, including, the transmission based on being in
the latent period, environment to human transmission, governmental decisions,
and control measures. To accomplish this, a compartmental model with eight
batches (sub-population groups) has been proposed and the simulation of the set
of differential equations has been conducted to show the effects of the various
involved parameters. Also, to achieve more accurate results and closer to
reality, the coefficients of a system of differential equations containing
transmission rates, death rates, recovery rates and etc. have been proposed by
some new step-functions viewpoint. Results: First of all, the efficiency of the
proposed model has been shown for Iran and Italy, which completely denoted the
flexibility of our model for predicting the epidemic progress and its moment
behavior. The model has shown that the reopening plans and governmental
measures directly affect the number of active cases of the disease. Also, it
has specified that even releasing a small portion of the population (about 2-3
percent) can lead to a severe increase in active patients and consequently
multiple waves in the disease progress. The effects of the healthcare
capacities of the country have been obtained (quantitatively), which clearly
specify the importance of this context. Control strategies including strict
implementation of mitigation (reducing the transmission rates) and
re-quarantine of some portion of population have been investigated and their
efficiency has been shown.
| [
{
"created": "Sat, 15 Aug 2020 10:59:38 GMT",
"version": "v1"
}
] | 2020-08-31 | [
[
"Soltani",
"Kamran",
""
],
[
"Rezazadeh",
"Ghader",
""
]
] | We have established a novel mathematical model that considers various aspects of the spreading of the virus, including, the transmission based on being in the latent period, environment to human transmission, governmental decisions, and control measures. To accomplish this, a compartmental model with eight batches (sub-population groups) has been proposed and the simulation of the set of differential equations has been conducted to show the effects of the various involved parameters. Also, to achieve more accurate results and closer to reality, the coefficients of a system of differential equations containing transmission rates, death rates, recovery rates and etc. have been proposed by some new step-functions viewpoint. Results: First of all, the efficiency of the proposed model has been shown for Iran and Italy, which completely denoted the flexibility of our model for predicting the epidemic progress and its moment behavior. The model has shown that the reopening plans and governmental measures directly affect the number of active cases of the disease. Also, it has specified that even releasing a small portion of the population (about 2-3 percent) can lead to a severe increase in active patients and consequently multiple waves in the disease progress. The effects of the healthcare capacities of the country have been obtained (quantitatively), which clearly specify the importance of this context. Control strategies including strict implementation of mitigation (reducing the transmission rates) and re-quarantine of some portion of population have been investigated and their efficiency has been shown. |
0903.3498 | Bruno. Cessac | Bruno Cessac, H\'el\`ene Paugam-Moisy, Thierry Vi\'eville | Indisputable facts when implementing spiking neuron networks | 29 pages, 11 figures, submitted | J. Physiol., Paris, 104, (1-2), 5-18, (2010) | null | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article, our wish is to demystify some aspects of coding with
spike-timing, through a simple review of well-understood technical facts
regarding spike coding. The goal is to help better understanding to which
extend computing and modelling with spiking neuron networks can be biologically
plausible and computationally efficient. We intentionally restrict ourselves to
a deterministic dynamics, in this review, and we consider that the dynamics of
the network is defined by a non-stochastic mapping. This allows us to stay in a
rather simple framework and to propose a review with concrete numerical values,
results and formula on (i) general time constraints, (ii) links between
continuous signals and spike trains, (iii) spiking networks parameter
adjustments. When implementing spiking neuron networks, for computational or
biological simulation purposes, it is important to take into account the
indisputable facts here reviewed. This precaution could prevent from
implementing mechanisms meaningless with regards to obvious time constraints,
or from introducing spikes artificially, when continuous calculations would be
sufficient and simpler. It is also pointed out that implementing a spiking
neuron network is finally a simple task, unless complex neural codes are
considered.
| [
{
"created": "Fri, 20 Mar 2009 12:14:35 GMT",
"version": "v1"
}
] | 2010-03-02 | [
[
"Cessac",
"Bruno",
""
],
[
"Paugam-Moisy",
"Hélène",
""
],
[
"Viéville",
"Thierry",
""
]
] | In this article, our wish is to demystify some aspects of coding with spike-timing, through a simple review of well-understood technical facts regarding spike coding. The goal is to help better understanding to which extend computing and modelling with spiking neuron networks can be biologically plausible and computationally efficient. We intentionally restrict ourselves to a deterministic dynamics, in this review, and we consider that the dynamics of the network is defined by a non-stochastic mapping. This allows us to stay in a rather simple framework and to propose a review with concrete numerical values, results and formula on (i) general time constraints, (ii) links between continuous signals and spike trains, (iii) spiking networks parameter adjustments. When implementing spiking neuron networks, for computational or biological simulation purposes, it is important to take into account the indisputable facts here reviewed. This precaution could prevent from implementing mechanisms meaningless with regards to obvious time constraints, or from introducing spikes artificially, when continuous calculations would be sufficient and simpler. It is also pointed out that implementing a spiking neuron network is finally a simple task, unless complex neural codes are considered. |
2207.03901 | Cyprien Tamekue | Cyprien Tamekue, Dario Prandi, Yacine Chitour | Reproducing sensory induced hallucinations via neural fields | null | null | null | null | q-bio.NC cs.CV math.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding sensory-induced cortical patterns in the primary visual cortex
V1 is an important challenge both for physiological motivations and for
improving our understanding of human perception and visual organisation. In
this work, we focus on pattern formation in the visual cortex when the cortical
activity is driven by a geometric visual hallucination-like stimulus. In
particular, we present a theoretical framework for sensory-induced
hallucinations which allows one to reproduce novel psychophysical results such
as the MacKay effect (Nature, 1957) and the Billock and Tsou experiences (PNAS,
2007).
| [
{
"created": "Fri, 8 Jul 2022 13:41:02 GMT",
"version": "v1"
}
] | 2022-07-11 | [
[
"Tamekue",
"Cyprien",
""
],
[
"Prandi",
"Dario",
""
],
[
"Chitour",
"Yacine",
""
]
] | Understanding sensory-induced cortical patterns in the primary visual cortex V1 is an important challenge both for physiological motivations and for improving our understanding of human perception and visual organisation. In this work, we focus on pattern formation in the visual cortex when the cortical activity is driven by a geometric visual hallucination-like stimulus. In particular, we present a theoretical framework for sensory-induced hallucinations which allows one to reproduce novel psychophysical results such as the MacKay effect (Nature, 1957) and the Billock and Tsou experiences (PNAS, 2007). |
q-bio/0310029 | Patrick Warren | P. B. Warren and P. R. ten Wolde | Statistical analysis of the spatial distribution of operons in the
transcriptional regulation network of Escherichia coli | 9 pages, 6 figs (4 colour) | null | null | null | q-bio.MN q-bio.GN | null | We have performed a statistical analysis of the spatial distribution of
operons in the transcriptional regulation network of Escherichia coli. The
analysis reveals that operons that regulate each other and operons that are
coregulated tend to lie next to each other on the genome. Moreover, these pairs
of operons tend to be transcribed in diverging directions. This spatial
arrangement of operons allows the upstream regulatory regions to interfere with
each other. This affords additional regulatory control, as illustrated by a
mean-field analysis of a feed-forward loop. Our results suggest that regulatory
control can provide a selection pressure that drives operons together in the
course of evolution.
| [
{
"created": "Thu, 23 Oct 2003 12:33:09 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Warren",
"P. B.",
""
],
[
"Wolde",
"P. R. ten",
""
]
] | We have performed a statistical analysis of the spatial distribution of operons in the transcriptional regulation network of Escherichia coli. The analysis reveals that operons that regulate each other and operons that are coregulated tend to lie next to each other on the genome. Moreover, these pairs of operons tend to be transcribed in diverging directions. This spatial arrangement of operons allows the upstream regulatory regions to interfere with each other. This affords additional regulatory control, as illustrated by a mean-field analysis of a feed-forward loop. Our results suggest that regulatory control can provide a selection pressure that drives operons together in the course of evolution. |
1404.3958 | Tomasz Rutkowski | Chisaki Nakaizumi, Toshie Matsui, Koichi Mori, Shoji Makino, and
Tomasz M. Rutkowski | Head-related Impulse Response-based Spatial Auditory Brain-computer
Interface | 5 pages, 1 figure, submitted to 6th International Brain-Computer
Interface Conference 2014, Graz, Austria | null | 10.3217/978-3-85125-378-8-20 | null | q-bio.NC cs.HC | http://creativecommons.org/licenses/by-nc-sa/3.0/ | This study provides a comprehensive test of the head-related impulse response
(HRIR) to an auditory spatial speller brain-computer interface (BCI) paradigm,
including a comparison with a conventional virtual headphone-based spatial
auditory modality. Five BCI-naive users participated in an experiment based on
five Japanese vowels. The auditory evoked potentials obtained produced
encouragingly good and stable P300-responses in online BCI experiments. Our
case study indicates that the auditory HRIR spatial sound paradigm reproduced
with headphones could be a viable alternative to established multi-loudspeaker
surround sound BCI-speller applications.
| [
{
"created": "Tue, 15 Apr 2014 15:40:14 GMT",
"version": "v1"
}
] | 2014-11-27 | [
[
"Nakaizumi",
"Chisaki",
""
],
[
"Matsui",
"Toshie",
""
],
[
"Mori",
"Koichi",
""
],
[
"Makino",
"Shoji",
""
],
[
"Rutkowski",
"Tomasz M.",
""
]
] | This study provides a comprehensive test of the head-related impulse response (HRIR) to an auditory spatial speller brain-computer interface (BCI) paradigm, including a comparison with a conventional virtual headphone-based spatial auditory modality. Five BCI-naive users participated in an experiment based on five Japanese vowels. The auditory evoked potentials obtained produced encouragingly good and stable P300-responses in online BCI experiments. Our case study indicates that the auditory HRIR spatial sound paradigm reproduced with headphones could be a viable alternative to established multi-loudspeaker surround sound BCI-speller applications. |
1212.5025 | Gesa B\"ohme | Gesa A. B\"ohme and Thilo Gross | Persistence of complex food webs in metacommunities | null | null | null | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metacommunity theory is considered a promising approach for explaining
species diversity and food web complexity. Recently Pillai et al. proposed a
simple modeling framework for the dynamics of food webs at the metacommunity
level. Here, we employ this framework to compute general conditions for the
persistence of complex food webs in metacommunities. The persistence conditions
found depend on the connectivity of the resource patches and the structure of
the assembled food web, thus linking the underlying spatial patch-network and
the species interaction network. We find that the persistence of omnivores is
more likely when it is feeding on (a) prey on low trophic levels, and (b) prey
on similar trophic levels.
| [
{
"created": "Thu, 20 Dec 2012 13:14:53 GMT",
"version": "v1"
}
] | 2012-12-21 | [
[
"Böhme",
"Gesa A.",
""
],
[
"Gross",
"Thilo",
""
]
] | Metacommunity theory is considered a promising approach for explaining species diversity and food web complexity. Recently Pillai et al. proposed a simple modeling framework for the dynamics of food webs at the metacommunity level. Here, we employ this framework to compute general conditions for the persistence of complex food webs in metacommunities. The persistence conditions found depend on the connectivity of the resource patches and the structure of the assembled food web, thus linking the underlying spatial patch-network and the species interaction network. We find that the persistence of omnivores is more likely when it is feeding on (a) prey on low trophic levels, and (b) prey on similar trophic levels. |
0711.2525 | Enrico Carlon | S. Weckx, E. Carlon, L. De Vuyst, P. Van Hummelen | Thermodynamic behavior of short oligonucleotides in microarray
hybridizations can be described using Gibbs free energy in a nearest-neighbor
model | 32 pages on a single pdf file | J. Phys. Chem. B 111, 13583 (2007) | 10.1021/jp075197x | null | q-bio.BM q-bio.QM | null | While designing oligonucleotide-based microarrays, cross-hybridization
between surface-bound oligos and non-intended labeled targets is probably the
most difficult parameter to predict. Although literature describes
rules-of-thumb concerning oligo length, overall similarity, and continuous
stretches, the final behavior is difficult to predict. The aim of this study
was to investigate the effect of well-defined mismatches on hybridization
specificity using CodeLink Activated Slides, and to study quantitatively the
relation between hybridization intensity and Gibbs free energy (Delta G),
taking the mismatches into account. Our data clearly showed a correlation
between the hybridization intensity and Delta G of the oligos over three orders
of magnitude for the hybridization intensity, which could be described by the
Langmuir model. As Delta G was calculated according to the nearest-neighbor
model, using values related to DNA hybridizations in solution, this study
clearly shows that target-probe hybridizations on microarrays with a
three-dimensional coating are in quantitative agreement with the corresponding
reaction in solution. These results can be interesting for some practical
applications. The correlation between intensity and Delta G can be used in
quality control of microarray hybridizations by designing probes and
corresponding RNA spikes with a range of Delta G values. Furthermore, this
correlation might be of use to fine-tune oligonucleotide design algorithms in a
way to improve the prediction of the influence of mismatching targets on
microarray hybridizations.
| [
{
"created": "Thu, 15 Nov 2007 21:45:31 GMT",
"version": "v1"
}
] | 2007-12-09 | [
[
"Weckx",
"S.",
""
],
[
"Carlon",
"E.",
""
],
[
"De Vuyst",
"L.",
""
],
[
"Van Hummelen",
"P.",
""
]
] | While designing oligonucleotide-based microarrays, cross-hybridization between surface-bound oligos and non-intended labeled targets is probably the most difficult parameter to predict. Although literature describes rules-of-thumb concerning oligo length, overall similarity, and continuous stretches, the final behavior is difficult to predict. The aim of this study was to investigate the effect of well-defined mismatches on hybridization specificity using CodeLink Activated Slides, and to study quantitatively the relation between hybridization intensity and Gibbs free energy (Delta G), taking the mismatches into account. Our data clearly showed a correlation between the hybridization intensity and Delta G of the oligos over three orders of magnitude for the hybridization intensity, which could be described by the Langmuir model. As Delta G was calculated according to the nearest-neighbor model, using values related to DNA hybridizations in solution, this study clearly shows that target-probe hybridizations on microarrays with a three-dimensional coating are in quantitative agreement with the corresponding reaction in solution. These results can be interesting for some practical applications. The correlation between intensity and Delta G can be used in quality control of microarray hybridizations by designing probes and corresponding RNA spikes with a range of Delta G values. Furthermore, this correlation might be of use to fine-tune oligonucleotide design algorithms in a way to improve the prediction of the influence of mismatching targets on microarray hybridizations. |
2004.09911 | Peter Huybers | Parker Liautaud, Peter Huybers, and Mauricio Santillana | Fever and mobility data indicate social distancing has reduced incidence
of communicable disease in the United States | 11 pages, 3 figures | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In March of 2020, many U.S. state governments encouraged or mandated
restrictions on social interactions to slow the spread of COVID-19, the disease
caused by the novel coronavirus SARS-CoV-2 that has spread to nearly 180
countries. Estimating the effectiveness of these social-distancing strategies
is challenging because surveillance of COVID-19 has been limited, with tests
generally being prioritized for high-risk or hospitalized cases according to
temporally and regionally varying criteria. Here we show that reductions in
mobility across U.S. counties with at least 100 confirmed cases of COVID-19 led
to reductions in fever incidences, as captured by smart thermometers, after a
mean lag of 6.5 days ($90\%$ within 3--10 days) that is consistent with the
incubation period of COVID-19. Furthermore, counties with larger decreases in
mobility subsequently achieved greater reductions in fevers ($p<0.01$), with
the notable exception of New York City and its immediate vicinity. These
results indicate that social distancing has reduced the transmission of
influenza like illnesses, including COVID 19, and support social distancing as
an effective strategy for slowing the spread of COVID-19.
| [
{
"created": "Tue, 21 Apr 2020 11:29:00 GMT",
"version": "v1"
}
] | 2020-04-22 | [
[
"Liautaud",
"Parker",
""
],
[
"Huybers",
"Peter",
""
],
[
"Santillana",
"Mauricio",
""
]
] | In March of 2020, many U.S. state governments encouraged or mandated restrictions on social interactions to slow the spread of COVID-19, the disease caused by the novel coronavirus SARS-CoV-2 that has spread to nearly 180 countries. Estimating the effectiveness of these social-distancing strategies is challenging because surveillance of COVID-19 has been limited, with tests generally being prioritized for high-risk or hospitalized cases according to temporally and regionally varying criteria. Here we show that reductions in mobility across U.S. counties with at least 100 confirmed cases of COVID-19 led to reductions in fever incidences, as captured by smart thermometers, after a mean lag of 6.5 days ($90\%$ within 3--10 days) that is consistent with the incubation period of COVID-19. Furthermore, counties with larger decreases in mobility subsequently achieved greater reductions in fevers ($p<0.01$), with the notable exception of New York City and its immediate vicinity. These results indicate that social distancing has reduced the transmission of influenza like illnesses, including COVID 19, and support social distancing as an effective strategy for slowing the spread of COVID-19. |