id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1303.4001 | George F. R. Ellis | George F. R. Ellis | Multi-level selection in biology | Much expanded version of previous, basic argument unchanged. 14
pages, 1 diagram, 1 table | null | null | null | q-bio.PE nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Okasha (2006) proposed distinguishing aspects of selection: those based in
particle level traits (MLS1), and those based in group level traits (MLS2). It
is proposed here that MLS1 can usefully be further split into two aspects, one
(MLS1E) representing selection of particles based in their individual
interaction with environmental properties, and one (MLS1G) representing Multi
Level Selection of particles based in their relation to group properties.
Similarly MLS2 can be split into two parts based in this distinction. This
splitting enables a characterisation of how emergent group properties can
affect particle selection, and thus affect group traits that are important for
survival. This proposal is illustrated by considering a key aspect of animal
and human life, namely the formation of social groups, which greatly enhances
survival prospects. The biological mechanism that underlies such group
formation is the existence of innate primary emotional systems studied by
Panksepp (1998), effective through the ascending systems in the brain that
diffuse neurotransmitters such as dopamine and norepinephrine through the
cortex. Evolutionary emergence of such brain mechanisms, and hence the
emergence of social groups, can only result from multi-level selection
characterised by the combination of MLS2 and MLS1G. The distinctions proposed
here should be useful in other contexts.
| [
{
"created": "Sat, 16 Mar 2013 17:40:20 GMT",
"version": "v1"
},
{
"created": "Thu, 16 May 2013 19:34:39 GMT",
"version": "v2"
}
] | 2013-05-17 | [
[
"Ellis",
"George F. R.",
""
]
] | Okasha (2006) proposed distinguishing aspects of selection: those based in particle level traits (MLS1), and those based in group level traits (MLS2). It is proposed here that MLS1 can usefully be further split into two aspects, one (MLS1E) representing selection of particles based in their individual interaction with environmental properties, and one (MLS1G) representing Multi Level Selection of particles based in their relation to group properties. Similarly MLS2 can be split into two parts based in this distinction. This splitting enables a characterisation of how emergent group properties can affect particle selection, and thus affect group traits that are important for survival. This proposal is illustrated by considering a key aspect of animal and human life, namely the formation of social groups, which greatly enhances survival prospects. The biological mechanism that underlies such group formation is the existence of innate primary emotional systems studied by Panksepp (1998), effective through the ascending systems in the brain that diffuse neurotransmitters such as dopamine and norepinephrine through the cortex. Evolutionary emergence of such brain mechanisms, and hence the emergence of social groups, can only result from multi-level selection characterised by the combination of MLS2 and MLS1G. The distinctions proposed here should be useful in other contexts. |
1404.0240 | Marcus Kaiser | Sol Lim, Cheol E. Han, Peter J. Uhlhaas and Marcus Kaiser | Preferential Detachment During Human Brain Development: Age- and
Sex-Specific Structural Connectivity in Diffusion Tensor Imaging (DTI) Data | Cerebral Cortex Advance Access, December 2013 | null | 10.1093/cercor/bht333 | null | q-bio.NC physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human brain maturation is characterized by the prolonged development of
structural and functional properties of large-scale networks that extends into
adulthood. However, it is not clearly understood which features change and
which remain stable over time. Here, we examined structural connectivity based
on diffusion tensor imaging (DTI) in 121 participants between 4 and 40 years of
age. DTI data were analyzed for small-world parameters, modularity, and the
number of fiber tracts at the level of streamlines. First, our findings showed
that the number of fiber tracts, small-world topology, and modular organization
remained largely stable despite a substantial overall decrease in the number of
streamlines with age. Second, this decrease mainly affected fiber tracts that
had a large number of streamlines, were short, within modules and within
hemispheres; such connections were affected significantly more often than would
be expected given their number of occurrences in the network. Third, streamline
loss occurred earlier in females than in males. In summary, our findings
suggest that core properties of structural brain connectivity, such as the
small-world and modular organization, remain stable during brain maturation by
focusing streamline loss to specific types of fiber tracts.
| [
{
"created": "Tue, 1 Apr 2014 13:56:19 GMT",
"version": "v1"
}
] | 2014-04-02 | [
[
"Lim",
"Sol",
""
],
[
"Han",
"Cheol E.",
""
],
[
"Uhlhaas",
"Peter J.",
""
],
[
"Kaiser",
"Marcus",
""
]
] | Human brain maturation is characterized by the prolonged development of structural and functional properties of large-scale networks that extends into adulthood. However, it is not clearly understood which features change and which remain stable over time. Here, we examined structural connectivity based on diffusion tensor imaging (DTI) in 121 participants between 4 and 40 years of age. DTI data were analyzed for small-world parameters, modularity, and the number of fiber tracts at the level of streamlines. First, our findings showed that the number of fiber tracts, small-world topology, and modular organization remained largely stable despite a substantial overall decrease in the number of streamlines with age. Second, this decrease mainly affected fiber tracts that had a large number of streamlines, were short, within modules and within hemispheres; such connections were affected significantly more often than would be expected given their number of occurrences in the network. Third, streamline loss occurred earlier in females than in males. In summary, our findings suggest that core properties of structural brain connectivity, such as the small-world and modular organization, remain stable during brain maturation by focusing streamline loss to specific types of fiber tracts. |
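The "small-world parameters" this record refers to are typically the clustering coefficient and the characteristic path length of the connectivity network. A minimal stdlib-only sketch of those two metrics on a toy ring-lattice graph (an illustration of the metrics only, not the paper's DTI pipeline):

```python
from collections import deque

def clustering_coefficient(adj):
    # mean fraction of each node's neighbour pairs that are themselves linked
    total = 0.0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for u in nbrs for w in nbrs if u < w and w in adj[u])
        total += 2.0 * links / (k * (k - 1))
    return total / len(adj)

def characteristic_path_length(adj):
    # mean shortest-path length over all ordered node pairs (BFS per node)
    n, total = len(adj), 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        total += sum(dist.values())
    return total / (n * (n - 1))

# Toy ring lattice: 12 nodes, each linked to its 2 nearest neighbours per side
n = 12
adj = {v: {(v + d) % n for d in (-2, -1, 1, 2)} for v in range(n)}
C = clustering_coefficient(adj)
L = characteristic_path_length(adj)
```

A small-world network combines high C (like this lattice) with the short L of a random graph; tracking how both change (or stay stable) with age is the comparison the abstract describes.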
2301.12437 | Thomas Michelitsch | Michael Bestehorn and Thomas M. Michelitsch | Oscillating behavior of a compartmental model with retarded noisy
dynamic infection rate | 21 pages, 9 figures | null | 10.1142/S0218127423500566 | null | q-bio.PE nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our study is based on an epidemiological compartmental model, the SIRS model.
In the SIRS model, each individual is in one of the states susceptible (S),
infected (I) or recovered (R), depending on its state of health. In compartment
R, an individual is assumed to stay immune within a finite time interval only
and then transfers back to the S compartment. We extend the model and allow for
a feedback control of the infection rate by mitigation measures which are
related to the number of infections. A finite response time of the feedback
mechanism is assumed, which turns the low-dimensional SIRS model into an
infinite-dimensional set of integro-differential (delay-differential)
equations. It turns out that the retarded feedback renders the originally
stable endemic equilibrium of SIRS (stable focus) into an unstable focus if the
delay exceeds a certain critical value. Nonlinear solutions show persistent
regular oscillations of the number of infected and susceptible individuals. In
the last part we include noise effects from the environment and allow for a
fluctuating infection rate. This results in multiplicative noise terms and our
model turns into a set of stochastic nonlinear integro-differential equations.
Numerical solutions reveal an irregular behavior of repeated disease outbreaks
in the form of infection waves with a variety of frequencies and amplitudes.
| [
{
"created": "Sun, 29 Jan 2023 12:34:47 GMT",
"version": "v1"
}
] | 2023-05-10 | [
[
"Bestehorn",
"Michael",
""
],
[
"Michelitsch",
"Thomas M.",
""
]
] | Our study is based on an epidemiological compartmental model, the SIRS model. In the SIRS model, each individual is in one of the states susceptible (S), infected (I) or recovered (R), depending on its state of health. In compartment R, an individual is assumed to stay immune within a finite time interval only and then transfers back to the S compartment. We extend the model and allow for a feedback control of the infection rate by mitigation measures which are related to the number of infections. A finite response time of the feedback mechanism is assumed, which turns the low-dimensional SIRS model into an infinite-dimensional set of integro-differential (delay-differential) equations. It turns out that the retarded feedback renders the originally stable endemic equilibrium of SIRS (stable focus) into an unstable focus if the delay exceeds a certain critical value. Nonlinear solutions show persistent regular oscillations of the number of infected and susceptible individuals. In the last part we include noise effects from the environment and allow for a fluctuating infection rate. This results in multiplicative noise terms and our model turns into a set of stochastic nonlinear integro-differential equations. Numerical solutions reveal an irregular behavior of repeated disease outbreaks in the form of infection waves with a variety of frequencies and amplitudes. |
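The delayed-feedback SIRS mechanism this record describes can be sketched with a simple Euler integration, where the infection rate responds to the infected fraction a delay tau in the past. All parameter values below are illustrative assumptions, not the paper's:

```python
# SIRS with retarded feedback on the infection rate (illustrative sketch):
#   S' = -beta(t) S I + xi R,  I' = beta(t) S I - gamma I,  R' = gamma I - xi R
#   beta(t) = beta0 / (1 + k * I(t - tau))   (mitigation feedback, assumed form)
def simulate_sirs(beta0=0.4, gamma=0.1, xi=0.02, k=50.0, tau=30.0,
                  dt=0.1, t_max=600.0):
    s, i, r = 0.99, 0.01, 0.0
    delay = int(tau / dt)
    i_hist = [i]                      # stored history supplies the delayed term
    for _ in range(int(t_max / dt)):
        i_past = i_hist[-delay - 1] if len(i_hist) > delay else i_hist[0]
        beta = beta0 / (1.0 + k * i_past)
        ds = -beta * s * i + xi * r
        di = beta * s * i - gamma * i
        # R absorbs the remainder, so S + I + R is conserved exactly
        s, i, r = s + dt * ds, i + dt * di, r - dt * (ds + di)
        i_hist.append(i)
    return s, i, r

s, i, r = simulate_sirs()
```

The stored history is what makes the system effectively infinite-dimensional: the state at time t is the whole function I on [t - tau, t], not just the current (S, I, R) triple.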
1607.01841 | Bo Sun | Garrett D. Potter and Tommy A. Byrd and Andrew Mugler and Bo Sun | Dynamic sampling and information encoding in biochemical networks | null | null | 10.1016/j.bpj.2016.12.045 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cells use biochemical networks to translate environmental information into
intracellular responses. These responses can be highly dynamic, but how the
information is encoded in these dynamics remains poorly understood. Here we
investigate the dynamic encoding of information in the ATP-induced calcium
responses of fibroblast cells, using a vectorial, or multi-time-point, measure
from information theory. We find that the amount of extracted information
depends on physiological constraints such as the sampling rate and memory
capacity of the downstream network, and is affected differentially by intrinsic
vs. extrinsic noise. By comparing to a minimal physical model, we find,
surprisingly, that the information is often insensitive to the detailed
structure of the underlying dynamics, and instead the decoding mechanism acts
as a simple low-pass filter. These results demonstrate the mechanisms and
limitations of dynamic information storage in cells.
| [
{
"created": "Wed, 6 Jul 2016 23:51:10 GMT",
"version": "v1"
}
] | 2017-04-05 | [
[
"Potter",
"Garrett D.",
""
],
[
"Byrd",
"Tommy A.",
""
],
[
"Mugler",
"Andrew",
""
],
[
"Sun",
"Bo",
""
]
] | Cells use biochemical networks to translate environmental information into intracellular responses. These responses can be highly dynamic, but how the information is encoded in these dynamics remains poorly understood. Here we investigate the dynamic encoding of information in the ATP-induced calcium responses of fibroblast cells, using a vectorial, or multi-time-point, measure from information theory. We find that the amount of extracted information depends on physiological constraints such as the sampling rate and memory capacity of the downstream network, and is affected differentially by intrinsic vs. extrinsic noise. By comparing to a minimal physical model, we find, surprisingly, that the information is often insensitive to the detailed structure of the underlying dynamics, and instead the decoding mechanism acts as a simple low-pass filter. These results demonstrate the mechanisms and limitations of dynamic information storage in cells. |
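The "vectorial, or multi-time-point, measure" in this record is mutual information computed between a stimulus and a vector of response values sampled at several times. A toy illustration with fabricated discrete data (not the paper's calcium measurements), showing that the two-time-point vector can carry more information than either single time point:

```python
import math
from collections import Counter

def mutual_info_bits(pairs):
    # plug-in estimate of I(S;R) in bits from (stimulus, response) samples
    n = len(pairs)
    ps = Counter(s for s, _ in pairs)
    pr = Counter(r for _, r in pairs)
    pj = Counter(pairs)
    return sum((c / n) * math.log2(c * n / (ps[s] * pr[r]))
               for (s, r), c in pj.items())

# Fabricated responses: stimulus 'low' sometimes peaks at the first time
# point, 'high' sometimes at the second, and either may give no response,
# so one time point is ambiguous while the two-point vector is less so.
samples = ([('low', (1, 0))] * 50 + [('low', (0, 0))] * 50 +
           [('high', (0, 1))] * 50 + [('high', (0, 0))] * 50)

mi_vector = mutual_info_bits(samples)                          # both points
mi_single = mutual_info_bits([(s, r[0]) for s, r in samples])  # first point
```

The gap between `mi_vector` and `mi_single` is the extra information available to a downstream network with enough memory to integrate multiple time points, which is the constraint the abstract highlights.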
2109.01339 | Eiji Yamamoto | Ikki Yasuda, Katsuhiro Endo, Eiji Yamamoto, Yoshinori Hirano, and
Kenji Yasuoka | Ligand-induced protein dynamics differences correlate with
protein-ligand binding affinities: An unsupervised deep learning approach | null | Commun. Biol. 5, 481 (2022) | 10.1038/s42003-022-03416-7 | null | q-bio.BM physics.bio-ph physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prediction of protein-ligand binding affinity is a major goal in drug
discovery. Generally, the free energy gap is calculated between two states (e.g.,
ligand binding and unbinding). The energy gap implicitly includes the effects
of changes in protein dynamics induced by the binding ligand. However, the
relationship between protein dynamics and binding affinity remains unclear.
Here, we propose a novel method that represents protein behavioral change upon
ligand binding with a simple feature that can be used to predict protein-ligand
affinity. From unbiased molecular simulation data, an unsupervised deep
learning method measures the differences in protein dynamics at a
ligand-binding site depending on the bound ligands. A dimension-reduction
method extracts a dynamic feature that is strongly correlated to the binding
affinities. Moreover, the residues that play important roles in protein-ligand
interactions are specified based on their contribution to the differences.
These results indicate the potential for dynamics-based drug discovery.
| [
{
"created": "Fri, 3 Sep 2021 06:54:59 GMT",
"version": "v1"
}
] | 2022-05-20 | [
[
"Yasuda",
"Ikki",
""
],
[
"Endo",
"Katsuhiro",
""
],
[
"Yamamoto",
"Eiji",
""
],
[
"Hirano",
"Yoshinori",
""
],
[
"Yasuoka",
"Kenji",
""
]
] | Prediction of protein-ligand binding affinity is a major goal in drug discovery. Generally, free energy gap is calculated between two states (e.g., ligand binding and unbinding). The energy gap implicitly includes the effects of changes in protein dynamics induced by the binding ligand. However, the relationship between protein dynamics and binding affinity remains unclear. Here, we propose a novel method that represents protein behavioral change upon ligand binding with a simple feature that can be used to predict protein-ligand affinity. From unbiased molecular simulation data, an unsupervised deep learning method measures the differences in protein dynamics at a ligand-binding site depending on the bound ligands. A dimension-reduction method extracts a dynamic feature that is strongly correlated to the binding affinities. Moreover, the residues that play important roles in protein-ligand interactions are specified based on their contribution to the differences. These results indicate the potential for dynamics-based drug discovery. |
1806.04863 | Behrooz Azarkhalili | Farzad Abdolhosseini, Behrooz Azarkhalili, Abbas Maazallahi, Aryan
Kamal, Seyed Abolfazl Motahari, Ali Sharifi-Zarchi, and Hamidreza Chitsaz | Cell Identity Codes: Understanding Cell Identity from Gene Expression
Profiles using Deep Neural Networks | null | null | null | null | q-bio.GN cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding cell identity is an important task in many biomedical areas.
Expression patterns of specific marker genes have been used to characterize
some limited cell types, but exclusive markers are not available for many cell
types. A second approach is to use machine learning to discriminate cell types
based on the whole gene expression profiles (GEPs). The accuracies of simple
classification algorithms such as linear discriminators or support vector
machines are limited due to the complexity of biological systems. We used deep
neural networks to analyze 1040 GEPs from 16 different human tissues and cell
types. After comparing different architectures, we identified a specific
structure of deep autoencoders that can encode a GEP into a vector of 30
numeric values, which we call the cell identity code (CIC). The original GEP
can be reproduced from the CIC with an accuracy comparable to technical
replicates of the same experiment. Although we use an unsupervised approach to
train the autoencoder, we show different values of the CIC are connected to
different biological aspects of the cell, such as different pathways or
biological processes. This network can use CIC to reproduce the GEP of the cell
types it has never seen during training. It can also resist some noise in
the measurement of the GEP. Furthermore, we introduce the classifier autoencoder,
an architecture that can accurately identify cell type based on the GEP or the
CIC.
| [
{
"created": "Wed, 13 Jun 2018 06:42:44 GMT",
"version": "v1"
}
] | 2018-06-14 | [
[
"Abdolhosseini",
"Farzad",
""
],
[
"Azarkhalili",
"Behrooz",
""
],
[
"Maazallahi",
"Abbas",
""
],
[
"Kamal",
"Aryan",
""
],
[
"Motahari",
"Seyed Abolfazl",
""
],
[
"Sharifi-Zarchi",
"Ali",
""
],
[
"Chitsaz",
"H... | Understanding cell identity is an important task in many biomedical areas. Expression patterns of specific marker genes have been used to characterize some limited cell types, but exclusive markers are not available for many cell types. A second approach is to use machine learning to discriminate cell types based on the whole gene expression profiles (GEPs). The accuracies of simple classification algorithms such as linear discriminators or support vector machines are limited due to the complexity of biological systems. We used deep neural networks to analyze 1040 GEPs from 16 different human tissues and cell types. After comparing different architectures, we identified a specific structure of deep autoencoders that can encode a GEP into a vector of 30 numeric values, which we call the cell identity code (CIC). The original GEP can be reproduced from the CIC with an accuracy comparable to technical replicates of the same experiment. Although we use an unsupervised approach to train the autoencoder, we show different values of the CIC are connected to different biological aspects of the cell, such as different pathways or biological processes. This network can use CIC to reproduce the GEP of the cell types it has never seen during the training. It also can resist some noise in the measurement of the GEP. Furthermore, we introduce classifier autoencoder, an architecture that can accurately identify cell type based on the GEP or the CIC. |
1605.06488 | Christopher Miles | Christopher E. Miles and James P. Keener | Bidirectionality From Cargo Thermal Fluctuations in Motor-Mediated
Transport | updated with a 1D analysis more appropriate for the correlated noise
structure between the two motor populations | J. Theor. Biol. 424 (2017) 37-48 | 10.1016/j.jtbi.2017.04.032 | null | q-bio.SC cond-mat.soft math.DS physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular motor proteins serve as an essential component of intracellular
transport by generating forces to haul cargoes along cytoskeletal filaments.
Two species of motors that are directed oppositely (e.g. kinesin, dynein) can
be attached to the same cargo, which is known to produce bidirectional net
motion. Although previous work focuses on the motor number as the driving noise
source for switching, we propose an alternative mechanism: cargo diffusion. A
mean-field mathematical model of mechanical interactions of two populations of
molecular motors with cargo thermal fluctuations (diffusion) is presented to
study this phenomenon. The delayed response of a motor to fluctuations in the
cargo velocity is quantified, allowing for the reduction of the full model to a
single "characteristic distance", a proxy for the net force on the cargo. The
system is then found to be metastable, with switching exclusively due to cargo
diffusion between distinct directional transport states. The time to switch
between these states is then investigated using a mean first passage time
analysis. The switching time is found to be non-monotonic in the drag of the
cargo, providing an experimental test of the theory.
| [
{
"created": "Fri, 20 May 2016 19:49:20 GMT",
"version": "v1"
},
{
"created": "Sat, 22 Oct 2016 21:14:10 GMT",
"version": "v2"
},
{
"created": "Tue, 27 Dec 2016 20:27:28 GMT",
"version": "v3"
}
] | 2017-05-10 | [
[
"Miles",
"Christopher E.",
""
],
[
"Keener",
"James P.",
""
]
] | Molecular motor proteins serve as an essential component of intracellular transport by generating forces to haul cargoes along cytoskeletal filaments. Two species of motors that are directed oppositely (e.g. kinesin, dynein) can be attached to the same cargo, which is known to produce bidirectional net motion. Although previous work focuses on the motor number as the driving noise source for switching, we propose an alternative mechanism: cargo diffusion. A mean-field mathematical model of mechanical interactions of two populations of molecular motors with cargo thermal fluctuations (diffusion) is presented to study this phenomenon. The delayed response of a motor to fluctuations in the cargo velocity is quantified, allowing for the reduction of the full model to a single "characteristic distance", a proxy for the net force on the cargo. The system is then found to be metastable, with switching exclusively due to cargo diffusion between distinct directional transport states. The time to switch between these states is then investigated using a mean first passage time analysis. The switching time is found to be non-monotonic in the drag of the cargo, providing an experimental test of the theory. |
2003.03260 | \'Elie Besserer-Offroy Ph.D. | Rebecca L. Brouillette, \'Elie Besserer-Offroy, Christine E. Mona,
Magali Chartier, Sandrine Lavenus, Marc Sousbie, Karine Belleville,
Jean-Michel Longpr\'e, \'Eric Marsault, Michel Grandbois, Philippe Sarret | Cell-penetrating pepducins targeting the neurotensin receptor type 1
relieve pain | This is the accepted version of the following article: Brouillette
RL, et al. (2020), Pharmacological Research. doi:10.1016/j.phrs.2020.104750 ,
which has been accepted and published in final form at
https://doi.org/10.1016/j.phrs.2020.104750 | Pharmacological Research. 104750 (2020) | 10.1016/j.phrs.2020.104750 | null | q-bio.BM q-bio.SC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Pepducins are cell-penetrating, membrane-tethered lipopeptides designed to
target the intracellular region of a G protein-coupled receptor (GPCR) in order
to allosterically modulate the receptor's signaling output. In this
proof-of-concept study, we explored the pain-relief potential of a pepducin
series derived from the first intracellular loop of neurotensin receptor type 1
(NTS1), a class A GPCR that mediates many of the effects of the neurotensin
(NT) tridecapeptide, including hypothermia, hypotension and analgesia. We used
BRET-based biosensors to determine the pepducins' ability to engage G protein
signaling pathways associated with NTS1 activation. We observed partial Gq and
G13 activation at a 10 {\mu}M concentration, indicating that these pepducins
may act as allosteric agonists of NTS1. Additionally, we used surface plasmon
resonance (SPR) as a label-free assay to monitor pepducin-induced responses in
CHO-K1 cells stably expressing hNTS1. This whole-cell integrated assay enabled
us to subdivide our pepducin series into three profile response groups. In
order to determine the pepducins' antinociceptive potential, we then screened
the series in an acute pain model (tail-flick test) by measuring tail
withdrawal latencies to a thermal nociceptive stimulus, following intrathecal
pepducin administration (275 nmol/kg). We further evaluated promising pepducins
in a tonic pain model (formalin test), as well as in neuropathic (Chronic
Constriction Injury) and inflammatory (Complete Freund's Adjuvant) chronic pain
models. We report one pepducin, PP-001, that consistently reduced rat
nociceptive behaviors, even in chronic pain paradigms. Altogether, these results
suggest that NTS1-derived pepducins may represent a promising strategy for
pain relief.
| [
{
"created": "Fri, 6 Mar 2020 14:57:41 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Mar 2020 15:06:02 GMT",
"version": "v2"
}
] | 2020-06-14 | [
[
"Brouillette",
"Rebecca L.",
""
],
[
"Besserer-Offroy",
"Élie",
""
],
[
"Mona",
"Christine E.",
""
],
[
"Chartier",
"Magali",
""
],
[
"Lavenus",
"Sandrine",
""
],
[
"Sousbie",
"Marc",
""
],
[
"Belleville",
"Kar... | Pepducins are cell-penetrating, membrane-tethered lipopeptides designed to target the intracellular region of a G protein-coupled receptor (GPCR) in order to allosterically modulate the receptor's signaling output. In this proof-of-concept study, we explored the pain-relief potential of a pepducin series derived from the first intracellular loop of neurotensin receptor type 1 (NTS1), a class A GPCR that mediates many of the effects of the neurotensin (NT) tridecapeptide, including hypothermia, hypotension and analgesia. We used BRET-based biosensors to determine the pepducins' ability to engage G protein signaling pathways associated with NTS1 activation. We observed partial Gq and G13 activation at a 10 {\mu}M concentration, indicating that these pepducins may act as allosteric agonists of NTS1. Additionally, we used surface plasmon resonance (SPR) as a label-free assay to monitor pepducin-induced responses in CHO-K1 cells stably expressing hNTS1. This whole-cell integrated assay enabled us to subdivide our pepducin series into three profile response groups. In order to determine the pepducins' antinociceptive potential, we then screened the series in an acute pain model (tail-flick test) by measuring tail withdrawal latencies to a thermal nociceptive stimulus, following intrathecal pepducin administration (275 nmol/kg). We further evaluated promising pepducins in a tonic pain model (formalin test), as well as in neuropathic (Chronic Constriction Injury) and inflammatory (Complete Freund's Adjuvant) chronic pain models. We report one pepducin, PP-001, that consistently reduced rat nociceptive behaviors, even in chronic pain paradigm. Altogether, these results suggest that NTS1-derived pepducins may represent a promising strategy in pain-relief. |
1904.05002 | Weishan Lee | Cheng Sok Kin, Ian Man Ut, Lo Hang, U Ieng Hou, Ng Ka Weng, Un Soi Ha,
Lei Ka Hin, Cheng Kun Heng, Tam Seak Tim, Chan Iong Kuai, Lee Wei Shan | Predicting Earth's Carrying Capacity of Human Population as the Predator
and the Natural Resources as the Prey in the Modified Lotka-Volterra
Equations with Time-dependent Parameters | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We modified the Lotka-Volterra Equations with the assumption that two of the
original four constant parameters in the traditional equations are
time-dependent. First, we assumed that the human population (borrowed from the
T-Function) plays the role of the prey, while all lethal factors that
jeopardize the existence of the human race act as the predator. Although we
could still calculate the time-dependent lethal function, the idea of treating
the lethal factors as the predator was too general to interpret their meaning.
Hence, in the second part of the modified Lotka-Volterra Equations, we
exchanged the roles of prey and predator. This time, we treated the natural
resources as the prey and the human population (still borrowed from the
T-Function) as the predator. After carefully choosing
appropriate parameters to match the maximum carrying capacity with the
saturated number of the human population predicted by the T-Function, we
successfully calculated the natural resources as a function of time. Contrary
to our intuition, the carrying capacity is constant over time rather than a
time-varying function, with the constant value of 10.2 billion people.
| [
{
"created": "Wed, 10 Apr 2019 04:53:11 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Nov 2019 21:02:42 GMT",
"version": "v2"
}
] | 2019-11-12 | [
[
"Kin",
"Cheng Sok",
""
],
[
"Ut",
"Ian Man",
""
],
[
"Hang",
"Lo",
""
],
[
"Hou",
"U Ieng",
""
],
[
"Weng",
"Ng Ka",
""
],
[
"Ha",
"Un Soi",
""
],
[
"Hin",
"Lei Ka",
""
],
[
"Heng",
"Cheng Kun",... | We modified the Lotka-Volterra Equations with the assumption that two of the original four constant parameters in the traditional equations are time-dependent. In the first place, we assumed that the human population (borrowed from the T-Function) plays the role as the prey while all lethal factors that jeopardize the existence of the human race as the predator. Although we could still calculate the time-dependent lethal function, the idea of treating the lethal factors as the prey was too general to recognize the meaning of them. Hence, in the second part of the modified Lotka-Volterra Equations, we exchanged the roles between the prey and the predator. This time, we treated the prey as the natural resources while the predator as the human population (still borrowed from the T-Function). After carefully choosing appropriate parameters to match the maximum carrying capacity with the saturated number of the human population predicted by the T-Function, we successfully calculated the natural resources as a function of time. Contrary to our intuition, the carrying capacity is constant over time rather than a time-varying function, with the constant value of 10.2 billion people. |
1810.12026 | Sergei Grudinin | Guillaume Pag\`es (NANO-D), Sergei Grudinin (NANO-D) | DeepSymmetry : Using 3D convolutional networks for identification of
tandem repeats and internal symmetries in protein structures | null | null | null | null | q-bio.QM cs.LG physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Thanks to the recent advances in structural biology, nowadays
three-dimensional structures of various proteins are solved on a routine basis.
A large portion of these contain structural repetitions or internal symmetries.
To understand the evolution mechanisms of these proteins and how structural
repetitions affect the protein function, we need to be able to detect such
proteins very robustly. As deep learning is particularly suited to deal with
spatially organized data, we applied it to the detection of proteins with
structural repetitions. Results: We present DeepSymmetry, a versatile method
based on three-dimensional (3D) convolutional networks that detects structural
repetitions in proteins and their density maps. Our method is designed to
identify tandem repeat proteins, proteins with internal symmetries, symmetries
in the raw density maps, their symmetry order, and also the corresponding
symmetry axes. Detection of symmetry axes is based on learning six-dimensional
Veronese mappings of 3D vectors, and the median angular error of axis
determination is less than one degree. We demonstrate the capabilities of our
method on benchmarks with tandem repeated proteins and also with symmetrical
assemblies. For example, we have discovered over 10,000 putative tandem repeat
proteins that are not currently present in the RepeatsDB database.
Availability: The method is available at
https://team.inria.fr/nano-d/software/deepsymmetry. It consists of a C++
executable that transforms molecular structures into volumetric density maps,
and a Python code based on the TensorFlow framework for applying the
DeepSymmetry model to these maps.
| [
{
"created": "Mon, 29 Oct 2018 09:38:51 GMT",
"version": "v1"
}
] | 2018-10-30 | [
[
"Pagès",
"Guillaume",
"",
"NANO-D"
],
[
"Grudinin",
"Sergei",
"",
"NANO-D"
]
] | Motivation: Thanks to the recent advances in structural biology, nowadays three-dimensional structures of various proteins are solved on a routine basis. A large portion of these contain structural repetitions or internal symmetries. To understand the evolution mechanisms of these proteins and how structural repetitions affect the protein function, we need to be able to detect such proteins very robustly. As deep learning is particularly suited to deal with spatially organized data, we applied it to the detection of proteins with structural repetitions. Results: We present DeepSymmetry, a versatile method based on three-dimensional (3D) convolutional networks that detects structural repetitions in proteins and their density maps. Our method is designed to identify tandem repeat proteins, proteins with internal symmetries, symmetries in the raw density maps, their symmetry order, and also the corresponding symmetry axes. Detection of symmetry axes is based on learning six-dimensional Veronese mappings of 3D vectors, and the median angular error of axis determination is less than one degree. We demonstrate the capabilities of our method on benchmarks with tandem repeated proteins and also with symmetrical assemblies. For example, we have discovered over 10,000 putative tandem repeat proteins that are not currently present in the RepeatsDB database. Availability: The method is available at https://team.inria.fr/nano-d/software/deepsymmetry. It consists of a C++ executable that transforms molecular structures into volumetric density maps, and a Python code based on the TensorFlow framework for applying the DeepSymmetry model to these maps. |
1509.02045 | Philippe Robert S. | Renaud Dessalles and Vincent Fromion and Philippe Robert | A Stochastic Analysis of Autoregulation of Gene Expression | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper analyzes, in the context of a prokaryotic cell, the stochastic
variability of the number of proteins when there is a control of gene
expression by an autoregulation scheme. The goal of this work is to estimate
the efficiency of the regulation to limit the fluctuations of the number of
copies of a given protein. The autoregulation considered in this paper relies
mainly on a negative feedback: the proteins are repressors of their own gene
expression. The efficiency of a production process without feedback control is
compared to a production process with an autoregulation of the gene expression
assuming that both of them produce the same average number of proteins. The
main characteristic used for the comparison is the standard deviation of the
number of proteins at equilibrium. With a Markovian representation and a simple
model of repression, we prove that, under a scaling regime, the repression
mechanism follows a Hill repression scheme with a hyperbolic control. An
explicit asymptotic expression of the variance of the number of proteins under
this regulation mechanism is obtained. Simulations are used to study other
aspects of autoregulation such as the rate of convergence to equilibrium of the
production process and the case where the control of the production process of
proteins is achieved via the inhibition of mRNAs.
| [
{
"created": "Mon, 7 Sep 2015 13:55:07 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Jul 2016 16:18:21 GMT",
"version": "v2"
}
] | 2016-07-15 | [
[
"Dessalles",
"Renaud",
""
],
[
"Fromion",
"Vincent",
""
],
[
"Robert",
"Philippe",
""
]
] | This paper analyzes, in the context of a prokaryotic cell, the stochastic variability of the number of proteins when there is a control of gene expression by an autoregulation scheme. The goal of this work is to estimate the efficiency of the regulation to limit the fluctuations of the number of copies of a given protein. The autoregulation considered in this paper relies mainly on a negative feedback: the proteins are repressors of their own gene expression. The efficiency of a production process without feedback control is compared to a production process with an autoregulation of the gene expression assuming that both of them produce the same average number of proteins. The main characteristic used for the comparison is the standard deviation of the number of proteins at equilibrium. With a Markovian representation and a simple model of repression, we prove that, under a scaling regime, the repression mechanism follows a Hill repression scheme with a hyperbolic control. An explicit asymptotic expression of the variance of the number of proteins under this regulation mechanism is obtained. Simulations are used to study other aspects of autoregulation such as the rate of convergence to equilibrium of the production process and the case where the control of the production process of proteins is achieved via the inhibition of mRNAs.
1404.0498 | Peter beim Graben | Peter beim Graben | Contextual emergence of intentionality | 27 pages; 4 figures (Fig 1. Copyright by American Physical Society);
submitted to Journal of Consciousness Studies | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By means of an intriguing physical example, magnetic surface swimmers, which
can be described in terms of Dennett's intentional stance, I reconstruct a
hierarchy of necessary and sufficient conditions for the applicability of the
intentional strategy. It turns out that the different levels of the intentional
hierarchy are contextually emergent from their respective subjacent levels by
imposing stability constraints upon them. At the lowest level of the hierarchy,
phenomenal physical laws emerge for the coarse-grained description of open,
nonlinear, and dissipative nonequilibrium systems in critical states. One level
higher, dynamic patterns, such as, e.g., magnetic surface swimmers, are
contextually emergent as they are invariant under certain symmetry operations.
Again one level up, these patterns behave apparently rationally by selecting
optimal pathways for the dissipation of energy that is delivered by external
gradients. This is in accordance with the restated Second Law of thermodynamics
as a stability criterion. At the highest level, true believers are intentional
systems that are stable under exchanging their observation conditions.
| [
{
"created": "Wed, 2 Apr 2014 09:24:28 GMT",
"version": "v1"
}
] | 2014-04-03 | [
[
"Graben",
"Peter beim",
""
]
] | By means of an intriguing physical example, magnetic surface swimmers, which can be described in terms of Dennett's intentional stance, I reconstruct a hierarchy of necessary and sufficient conditions for the applicability of the intentional strategy. It turns out that the different levels of the intentional hierarchy are contextually emergent from their respective subjacent levels by imposing stability constraints upon them. At the lowest level of the hierarchy, phenomenal physical laws emerge for the coarse-grained description of open, nonlinear, and dissipative nonequilibrium systems in critical states. One level higher, dynamic patterns, such as, e.g., magnetic surface swimmers, are contextually emergent as they are invariant under certain symmetry operations. Again one level up, these patterns behave apparently rationally by selecting optimal pathways for the dissipation of energy that is delivered by external gradients. This is in accordance with the restated Second Law of thermodynamics as a stability criterion. At the highest level, true believers are intentional systems that are stable under exchanging their observation conditions.
2303.07876 | Stephen Turner | V.P. Nagraj and Stephen D. Turner | pracpac: Practical R Packaging with Docker | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | R packages are the fundamental units of reproducible code in R, providing a
mechanism for distributing user-developed code, documentation, and data. Docker
is a virtualization technology that allows applications and their dependencies
to be distributed and run reproducibly across platforms. The pracpac package
provides an interface to create Docker images that contain custom R packages.
The pracpac package leverages the renv package management tool to ensure
reproducibility by building dependency packages inside the container image
mirroring those installed on the developer's system. The pracpac package can be
used to containerize any R package to deploy with other domain-specific non-R
tools, Shiny applications, or entire data analysis pipelines. The pracpac
package is available on CRAN (https://cran.r-project.org/package=pracpac), and
source code is available under the MIT license on GitHub
(https://github.com/signaturescience/pracpac).
| [
{
"created": "Mon, 13 Mar 2023 15:13:36 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Mar 2023 19:04:57 GMT",
"version": "v2"
}
] | 2023-03-23 | [
[
"Nagraj",
"V. P.",
""
],
[
"Turner",
"Stephen D.",
""
]
] | R packages are the fundamental units of reproducible code in R, providing a mechanism for distributing user-developed code, documentation, and data. Docker is a virtualization technology that allows applications and their dependencies to be distributed and run reproducibly across platforms. The pracpac package provides an interface to create Docker images that contain custom R packages. The pracpac package leverages the renv package management tool to ensure reproducibility by building dependency packages inside the container image mirroring those installed on the developer's system. The pracpac package can be used to containerize any R package to deploy with other domain-specific non-R tools, Shiny applications, or entire data analysis pipelines. The pracpac package is available on CRAN (https://cran.r-project.org/package=pracpac), and source code is available under the MIT license on GitHub (https://github.com/signaturescience/pracpac). |
2107.01501 | Yin-Wei Kuo | Yin-wei Kuo, Jonathon Howard | In vitro reconstitution of microtubule dynamics and severing imaged by
label-free interference reflection microscopy | 31 pages, 3 figures; to be published in Methods in Molecular Biology | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The dynamic architecture of the microtubule cytoskeleton is crucial for cell
division, motility and morphogenesis. The dynamic properties of microtubules -
growth, shrinkage, nucleation and severing - are regulated by an arsenal of
microtubule-associated proteins (MAPs). The activities of many of these MAPs
have been reconstituted in vitro using microscope assays. As an alternative to
fluorescence microscopy, interference-reflection microscopy (IRM) has been
introduced as an easy-to-use, wide-field imaging technique that allows
label-free visualization of microtubules with high contrast and speed. IRM
circumvents several problems associated with fluorescence microscopy including
the high concentrations of tubulin required for fluorescent labeling, the
potential perturbation of function caused by the fluorophores, and the risks of
photodamage. IRM can be implemented on a standard epifluorescence microscope at
low cost and can be combined with fluorescence techniques like
total-internal-reflection-fluorescence (TIRF) microscopy. Here we describe the
experimental procedure to image microtubule dynamics and severing using IRM,
providing practical tips and guidelines to resolve possible experimental
hurdles.
| [
{
"created": "Sat, 3 Jul 2021 21:37:22 GMT",
"version": "v1"
}
] | 2021-07-06 | [
[
"Kuo",
"Yin-wei",
""
],
[
"Howard",
"Jonathon",
""
]
] | The dynamic architecture of the microtubule cytoskeleton is crucial for cell division, motility and morphogenesis. The dynamic properties of microtubules - growth, shrinkage, nucleation and severing - are regulated by an arsenal of microtubule-associated proteins (MAPs). The activities of many of these MAPs have been reconstituted in vitro using microscope assays. As an alternative to fluorescence microscopy, interference-reflection microscopy (IRM) has been introduced as an easy-to-use, wide-field imaging technique that allows label-free visualization of microtubules with high contrast and speed. IRM circumvents several problems associated with fluorescence microscopy including the high concentrations of tubulin required for fluorescent labeling, the potential perturbation of function caused by the fluorophores, and the risks of photodamage. IRM can be implemented on a standard epifluorescence microscope at low cost and can be combined with fluorescence techniques like total-internal-reflection-fluorescence (TIRF) microscopy. Here we describe the experimental procedure to image microtubule dynamics and severing using IRM, providing practical tips and guidelines to resolve possible experimental hurdles. |
1512.00956 | Peter Gawthrop | Peter J. Gawthrop, Ivo Siekmann, Tatiana Kameneva, Susmita Saha,
Michael R. Ibbotson and Edmund J. Crampin | Bond Graph Modelling of Chemoelectrical Energy Transduction | null | IET Syst. Biol. 2017, 11 (5), 127-138 | 10.1049/iet-syb.2017.0006 | null | q-bio.QM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Energy-based bond graph modelling of biomolecular systems is extended to
include chemoelectrical transduction, thus enabling integrated
thermodynamically-compliant modelling of chemoelectrical systems in general and
excitable membranes in particular.
Our general approach is illustrated by recreating a well-known model of an
excitable membrane. This model is used to investigate the energy consumed
during a membrane action potential thus contributing to the current debate on
the trade-off between the speed of an action potential event and energy
consumption. The influx of Na+ is often taken as a proxy for energy
consumption; in contrast, this paper presents an energy based model of action
potentials. As the energy based approach avoids the assumptions underlying the
proxy approach it can be directly used to compute energy consumption in both
healthy and diseased neurons.
These results are illustrated by comparing the energy consumption of healthy
and degenerative retinal ganglion cells using both simulated and in vitro data.
| [
{
"created": "Thu, 3 Dec 2015 05:31:32 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Jun 2016 04:22:03 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Apr 2017 13:52:55 GMT",
"version": "v3"
}
] | 2018-08-14 | [
[
"Gawthrop",
"Peter J.",
""
],
[
"Siekmann",
"Ivo",
""
],
[
"Kameneva",
"Tatiana",
""
],
[
"Saha",
"Susmita",
""
],
[
"Ibbotson",
"Michael R.",
""
],
[
"Crampin",
"Edmund J.",
""
]
] | Energy-based bond graph modelling of biomolecular systems is extended to include chemoelectrical transduction, thus enabling integrated thermodynamically-compliant modelling of chemoelectrical systems in general and excitable membranes in particular. Our general approach is illustrated by recreating a well-known model of an excitable membrane. This model is used to investigate the energy consumed during a membrane action potential thus contributing to the current debate on the trade-off between the speed of an action potential event and energy consumption. The influx of Na+ is often taken as a proxy for energy consumption; in contrast, this paper presents an energy based model of action potentials. As the energy based approach avoids the assumptions underlying the proxy approach it can be directly used to compute energy consumption in both healthy and diseased neurons. These results are illustrated by comparing the energy consumption of healthy and degenerative retinal ganglion cells using both simulated and in vitro data.
0910.3516 | Sergio Conte | Sergio Conte, Alessandro Giuliani | Identification of possible differences in coding and non-coding
fragments of DNA sequences by using the method of the Recurrence
Quantification Analysis | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Starting with the results of Li et al. in 1992, there has been considerable
interest in finding long-range correlations in DNA sequences, since they raise
questions about the role of introns and intron-containing genes. In the present
paper we studied two sequences. We applied the method of Recurrence
Quantification Analysis (RQA), introduced by Zbilut and Webber in 1994. The
significant result is that both Lmax and Laminarity exhibit much larger values
in non-coding than in coding sequences. We therefore suggest that the higher
long-range correlations of introns with respect to exons claimed by many
authors may be explained by these higher values of Lmax and Laminarity.
| [
{
"created": "Mon, 19 Oct 2009 10:44:47 GMT",
"version": "v1"
}
] | 2009-10-20 | [
[
"Conte",
"Sergio",
""
],
[
"Giuliani",
"Alessandro",
""
]
] | Starting with the results of Li et al. in 1992, there has been considerable interest in finding long-range correlations in DNA sequences, since they raise questions about the role of introns and intron-containing genes. In the present paper we studied two sequences. We applied the method of Recurrence Quantification Analysis (RQA), introduced by Zbilut and Webber in 1994. The significant result is that both Lmax and Laminarity exhibit much larger values in non-coding than in coding sequences. We therefore suggest that the higher long-range correlations of introns with respect to exons claimed by many authors may be explained by these higher values of Lmax and Laminarity.
2305.03297 | Hongmei Hu | Hongmei Hu, Stephan Ewert, Birger Kollmeier, Deborah Vickers | Assessing Rate limits Using Behavioral and Neural Responses of
Interaural-Time-Difference Cues in Fine-Structure and Envelope | null | null | null | null | q-bio.NC physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | The objective was to determine the effect of pulse rate on the sensitivity to
use interaural-time-difference (ITD) cues and to explore the mechanisms behind
rate-dependent degradation in ITD perception in bilateral cochlear implant (CI)
listeners using CI simulations and electroencephalogram (EEG) measures. To
eliminate the impact of CI stimulation artifacts and to develop protocols for
the ongoing bilateral CI studies, upper-frequency limits for both behavior and
EEG responses were obtained from normal hearing (NH) listeners using
sinusoidal-amplitude-modulated (SAM) tones and filtered clicks with changes in
either fine structure ITD or envelope ITD. Multiple EEG responses were
recorded, including the subcortical auditory steady-state responses (ASSRs) and
cortical auditory evoked potentials (CAEPs) elicited by stimuli onset, offset,
and changes. Results indicated that acoustic change complex (ACC) responses
elicited by envelope ITD changes were significantly smaller or absent compared
to those elicited by fine structure ITD changes. The ACC morphologies evoked by
fine structure ITD changes were similar to onset and offset CAEPs, although
smaller than onset CAEPs, with the longest peak latencies for ACC responses and
shortest for offset CAEPs. The study found that high-frequency stimuli clearly
elicited subcortical ASSRs, but smaller than those evoked by lower carrier
frequency SAM tones. The 40-Hz ASSRs decreased with increasing carrier
frequencies. Filtered clicks elicited larger ASSRs compared to high-frequency
SAM tones, with the order being 40-Hz-ASSR>160-Hz-ASSR>80-Hz-ASSR>320-Hz-ASSR
for both stimulus types. Wavelet analysis revealed a clear interaction between
detectable transient CAEPs and 40-Hz-ASSRs in the time-frequency domain for SAM
tones with a low carrier frequency.
| [
{
"created": "Fri, 5 May 2023 05:52:39 GMT",
"version": "v1"
}
] | 2023-05-08 | [
[
"Hu",
"Hongmei",
""
],
[
"Ewert",
"Stephan",
""
],
[
"Kollmeier",
"Birger",
""
],
[
"Vickers",
"Deborah",
""
]
] | The objective was to determine the effect of pulse rate on the sensitivity to use interaural-time-difference (ITD) cues and to explore the mechanisms behind rate-dependent degradation in ITD perception in bilateral cochlear implant (CI) listeners using CI simulations and electroencephalogram (EEG) measures. To eliminate the impact of CI stimulation artifacts and to develop protocols for the ongoing bilateral CI studies, upper-frequency limits for both behavior and EEG responses were obtained from normal hearing (NH) listeners using sinusoidal-amplitude-modulated (SAM) tones and filtered clicks with changes in either fine structure ITD or envelope ITD. Multiple EEG responses were recorded, including the subcortical auditory steady-state responses (ASSRs) and cortical auditory evoked potentials (CAEPs) elicited by stimuli onset, offset, and changes. Results indicated that acoustic change complex (ACC) responses elicited by envelope ITD changes were significantly smaller or absent compared to those elicited by fine structure ITD changes. The ACC morphologies evoked by fine structure ITD changes were similar to onset and offset CAEPs, although smaller than onset CAEPs, with the longest peak latencies for ACC responses and shortest for offset CAEPs. The study found that high-frequency stimuli clearly elicited subcortical ASSRs, but smaller than those evoked by lower carrier frequency SAM tones. The 40-Hz ASSRs decreased with increasing carrier frequencies. Filtered clicks elicited larger ASSRs compared to high-frequency SAM tones, with the order being 40-Hz-ASSR>160-Hz-ASSR>80-Hz-ASSR>320-Hz-ASSR for both stimulus types. Wavelet analysis revealed a clear interaction between detectable transient CAEPs and 40-Hz-ASSRs in the time-frequency domain for SAM tones with a low carrier frequency. |
2101.05359 | Ujwani Nukala | Ujwani Nukala (1), Marisabel Rodriguez Messan (1), Osman N. Yogurtcu
(1), Xiaofei Wang (2), Hong Yang ((1) Office of Biostatistics and
Epidemiology, Center for Biologics Evaluation and Research, US FDA, Silver
Spring, MD, USA (2) Office of Tissues and Advanced Therapies, Center for
Biologics Evaluation and Research, US FDA, Silver Spring, MD, USA) | A Systematic Review of the Efforts and Hindrances of Modeling and
Simulation of CAR T-cell Therapy | 33 pages, 4 Figures, 1 Table | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Chimeric Antigen Receptor (CAR) T-cell therapy is an immunotherapy that has
recently become highly instrumental in the fight against life-threatening
diseases. A variety of modeling and computational simulation efforts have
addressed different aspects of CAR T therapy, including T-cell activation, T-
and malignant cell population dynamics, therapeutic cost-effectiveness
strategies, and patient survival analyses. In this article, we present a
systematic review of those efforts, including mathematical, statistical, and
stochastic models employing a wide range of algorithms, from differential
equations to machine learning. To the best of our knowledge, this is the first
review of all such models studying CAR T therapy. In this review, we provide a
detailed summary of the strengths, limitations, methodology, data used, and
data lacking in current published models. This information may help in
designing and building better models for enhanced prediction and assessment of
the benefit-risk balance associated with novel CAR T therapies, as well as with
the data collection essential for building such models.
| [
{
"created": "Wed, 13 Jan 2021 21:43:35 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Mar 2021 20:27:11 GMT",
"version": "v2"
}
] | 2021-03-04 | [
[
"Nukala",
"Ujwani",
""
],
[
"Messan",
"Marisabel Rodriguez",
""
],
[
"Yogurtcu",
"Osman N.",
""
],
[
"Wang",
"Xiaofei",
""
],
[
"Yang",
"Hong",
""
]
] | Chimeric Antigen Receptor (CAR) T-cell therapy is an immunotherapy that has recently become highly instrumental in the fight against life-threatening diseases. A variety of modeling and computational simulation efforts have addressed different aspects of CAR T therapy, including T-cell activation, T- and malignant cell population dynamics, therapeutic cost-effectiveness strategies, and patient survival analyses. In this article, we present a systematic review of those efforts, including mathematical, statistical, and stochastic models employing a wide range of algorithms, from differential equations to machine learning. To the best of our knowledge, this is the first review of all such models studying CAR T therapy. In this review, we provide a detailed summary of the strengths, limitations, methodology, data used, and data lacking in current published models. This information may help in designing and building better models for enhanced prediction and assessment of the benefit-risk balance associated with novel CAR T therapies, as well as with the data collection essential for building such models. |
2305.07421 | Max Taylor-Davies | Max Taylor-Davies, Stephanie Droop, Christopher G. Lucas | Selective imitation on the basis of reward function similarity | 7 pages, 3 figures, to appear in CogSci 2023 | null | null | null | q-bio.NC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Imitation is a key component of human social behavior, and is widely used by
both children and adults as a way to navigate uncertain or unfamiliar
situations. But in an environment populated by multiple heterogeneous agents
pursuing different goals or objectives, indiscriminate imitation is unlikely to
be an effective strategy -- the imitator must instead determine who is most
useful to copy. There are likely many factors that play into these judgements,
depending on context and availability of information. Here we investigate the
hypothesis that these decisions involve inferences about other agents' reward
functions. We suggest that people preferentially imitate the behavior of others
they deem to have similar reward functions to their own. We further argue that
these inferences can be made on the basis of very sparse or indirect data, by
leveraging an inductive bias toward positing the existence of different
\textit{groups} or \textit{types} of people with similar reward functions,
allowing learners to select imitation targets without direct evidence of
alignment.
| [
{
"created": "Fri, 12 May 2023 12:40:08 GMT",
"version": "v1"
}
] | 2023-05-15 | [
[
"Taylor-Davies",
"Max",
""
],
[
"Droop",
"Stephanie",
""
],
[
"Lucas",
"Christopher G.",
""
]
] | Imitation is a key component of human social behavior, and is widely used by both children and adults as a way to navigate uncertain or unfamiliar situations. But in an environment populated by multiple heterogeneous agents pursuing different goals or objectives, indiscriminate imitation is unlikely to be an effective strategy -- the imitator must instead determine who is most useful to copy. There are likely many factors that play into these judgements, depending on context and availability of information. Here we investigate the hypothesis that these decisions involve inferences about other agents' reward functions. We suggest that people preferentially imitate the behavior of others they deem to have similar reward functions to their own. We further argue that these inferences can be made on the basis of very sparse or indirect data, by leveraging an inductive bias toward positing the existence of different \textit{groups} or \textit{types} of people with similar reward functions, allowing learners to select imitation targets without direct evidence of alignment. |
1612.04471 | Justin Chapman | Justin J. Chapman, James A. Roberts, Vinh T. Nguyen, Michael
Breakspear | Quantification of free-living activity patterns using accelerometry in
adults with mental illness | 24 pages; 4,486 words. PDF document with figures embedded: Five (5)
tables are referred to in the text, two of which are supplementary; Seven (7)
figures are referred to in the text, one of which is supplementary | null | null | null | q-bio.QM q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Physical activity is disrupted in many psychiatric disorders. Advances in
everyday technologies (e.g. accelerometers in smart phones) open exciting
possibilities for non-intrusive acquisition of activity data. Successful
exploitation of this opportunity requires the validation of analytical methods
that can capture the full movement spectrum. The study aim was to demonstrate
an analytical approach to characterise accelerometer-derived activity patterns.
Here, we use statistical methods to characterise accelerometer-derived activity
patterns from a heterogeneous sample of 99 community-based adults with mental
illnesses. Diagnoses were screened using the Mini International
Neuropsychiatric Interview, and participants wore accelerometers for one week.
We studied the relative ability of simple (exponential), complex
(heavy-tailed), and composite models to explain patterns of activity and
inactivity. Activity during wakefulness was a composite of brief random
(exponential) movements and complex (heavy-tailed) processes, whereas movement
during sleep lacked the heavy-tailed component. In contrast, inactivity
followed a heavy-tailed process, lacking the random component. Activity
patterns differed in nature between those with a diagnosis of bipolar disorder
and a primary psychotic disorder. These results show the potential of complex
models to quantify the rich nature of human movement captured by accelerometry
during wake and sleep, and the interaction with diagnosis and health.
| [
{
"created": "Wed, 14 Dec 2016 03:29:29 GMT",
"version": "v1"
}
] | 2016-12-19 | [
[
"Chapman",
"Justin J.",
""
],
[
"Roberts",
"James A.",
""
],
[
"Nguyen",
"Vinh T.",
""
],
[
"Breakspear",
"Michael",
""
]
] | Physical activity is disrupted in many psychiatric disorders. Advances in everyday technologies (e.g. accelerometers in smart phones) open exciting possibilities for non-intrusive acquisition of activity data. Successful exploitation of this opportunity requires the validation of analytical methods that can capture the full movement spectrum. The study aim was to demonstrate an analytical approach to characterise accelerometer-derived activity patterns. Here, we use statistical methods to characterise accelerometer-derived activity patterns from a heterogeneous sample of 99 community-based adults with mental illnesses. Diagnoses were screened using the Mini International Neuropsychiatric Interview, and participants wore accelerometers for one week. We studied the relative ability of simple (exponential), complex (heavy-tailed), and composite models to explain patterns of activity and inactivity. Activity during wakefulness was a composite of brief random (exponential) movements and complex (heavy-tailed) processes, whereas movement during sleep lacked the heavy-tailed component. In contrast, inactivity followed a heavy-tailed process, lacking the random component. Activity patterns differed in nature between those with a diagnosis of bipolar disorder and a primary psychotic disorder. These results show the potential of complex models to quantify the rich nature of human movement captured by accelerometry during wake and sleep, and the interaction with diagnosis and health.
2007.13522 | Javad Khodaei-Mehr | Javad Khodaei-Mehr, Samaneh Tangestanizadeh, Mojtaba Sharifi, Ramin
Vatankhah, Mohammad Eghtesad | Hepatitis C Virus Epidemic Control Using a Nonlinear Adaptive Strategy | accepted for publish in the "Modelling and Control of Drug Delivery
Systems" book | null | null | null | q-bio.PE nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hepatitis C is a viral infection that appears as a result of the Hepatitis C
Virus (HCV), and it has been recognized as the main reason for liver diseases.
HCV incidence is growing as an important issue in the epidemiology of
infectious diseases. In the present study, a mathematical model is employed for
simulating the dynamics of HCV outbreak in a population. The total population
is divided into five compartments, including unaware and aware susceptible,
acutely and chronically infected, and treated classes. Then, a Lyapunov-based
nonlinear adaptive method is proposed for the first time to control the HCV
epidemic considering modelling uncertainties. A positive definite Lyapunov
candidate function is suggested, and adaptation and control laws are attained
based on that. The main goal of the proposed control strategy is to decrease
the population of unaware susceptible and chronically infected compartments by
pursuing appropriate treatment scenarios. As a consequence of this decrease in
the mentioned compartments, the population of aware susceptible individuals
increases and the population of acutely infected and treated humans decreases.
The Lyapunov stability theorem and Barbalat's lemma are employed in order to
prove the tracking convergence to desired population reduction scenarios. Based
on the acquired numerical results, the proposed nonlinear adaptive controller
can achieve the above-mentioned objective by adjusting the inputs (rates of
informing the susceptible people and treatment of chronically infected ones)
and estimating uncertain parameter values based on the designed control and
adaptation laws, respectively. Moreover, the proposed strategy is designed to
be robust in the presence of different levels of parametric uncertainties.
| [
{
"created": "Wed, 22 Jul 2020 14:48:18 GMT",
"version": "v1"
}
] | 2020-07-28 | [
[
"Khodaei-Mehr",
"Javad",
""
],
[
"Tangestanizadeh",
"Samaneh",
""
],
[
"Sharifi",
"Mojtaba",
""
],
[
"Vatankhah",
"Ramin",
""
],
[
"Eghtesad",
"Mohammad",
""
]
] | Hepatitis C is a viral infection that appears as a result of the Hepatitis C Virus (HCV), and it has been recognized as the main reason for liver diseases. HCV incidence is growing as an important issue in the epidemiology of infectious diseases. In the present study, a mathematical model is employed for simulating the dynamics of HCV outbreak in a population. The total population is divided into five compartments, including unaware and aware susceptible, acutely and chronically infected, and treated classes. Then, a Lyapunov-based nonlinear adaptive method is proposed for the first time to control the HCV epidemic considering modelling uncertainties. A positive definite Lyapunov candidate function is suggested, and adaptation and control laws are attained based on that. The main goal of the proposed control strategy is to decrease the population of unaware susceptible and chronically infected compartments by pursuing appropriate treatment scenarios. As a consequence of this decrease in the mentioned compartments, the population of aware susceptible individuals increases and the population of acutely infected and treated humans decreases. The Lyapunov stability theorem and Barbalat's lemma are employed in order to prove the tracking convergence to desired population reduction scenarios. Based on the acquired numerical results, the proposed nonlinear adaptive controller can achieve the above-mentioned objective by adjusting the inputs (rates of informing the susceptible people and treatment of chronically infected ones) and estimating uncertain parameter values based on the designed control and adaptation laws, respectively. Moreover, the proposed strategy is designed to be robust in the presence of different levels of parametric uncertainties. |
2004.03224 | Yassine Souilmi | Raymond Tobler, Angad Johar, Christian Huber, Yassine Souilmi | PolyLinkR: A linkage-sensitive gene set enrichment R package | null | null | null | null | q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce PolyLinkR, an R package for gene set enrichment analysis that
implements a novel null-model that accounts for linkage disequilibrium between
genes belonging to the same gene set - a potential cause of false positives
that is often not controlled for in similar tools. Our benchmarks show that
PolyLinkR has improved performance compared to two similar tools, achieving
comparable power to detect enriched gene sets while producing less than one
falsely detected gene set on average, even at high genetic clustering levels
and nominal false discovery rates of 20%.
| [
{
"created": "Tue, 7 Apr 2020 09:34:12 GMT",
"version": "v1"
}
] | 2020-04-08 | [
[
"Tobler",
"Raymond",
""
],
[
"Johar",
"Angad",
""
],
[
"Huber",
"Christian",
""
],
[
"Souilmi",
"Yassine",
""
]
] | We introduce PolyLinkR, an R package for gene set enrichment analysis that implements a novel null-model that accounts for linkage disequilibrium between genes belonging to the same gene set - a potential cause of false positives that is often not controlled for in similar tools. Our benchmarks show that PolyLinkR has improved performance compared to two similar tools, achieving comparable power to detect enriched gene sets while producing less than one falsely detected gene set on average, even at high genetic clustering levels and nominal false discovery rates of 20%. |
2402.18597 | Alexandre Defossez | Alexandre Defossez (INRAE, UMR TETIS), Samuel Alleaume (INRAE, UMR
TETIS), Marc Montadert (OFB), Dino Ienco (INRAE, UMR TETIS), Sandra Luque
(INRAE, UMR TETIS) | Cartographie de l'habitat de reproduction du t\'etras-lyre (Lyrurus
tetrix) dans les Alpes fran\c{c}aises | in French language | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Black Grouse (Lyrurus tetrix) is an emblematic alpine species with high
conservation importance. The population size of this mountain bird tends to
decline at reference sites and varies with changes in local landscape
characteristics. According to experts, habitat changes are at the centre of the
identified pressures impacting part or all of its life cycle. Hence, one
approach to monitoring population dynamics is through modelling the favourable
breeding habitats (nesting sites) of the Black Grouse. Coupling this modelling
with multi-source remote sensing data (medium and very high spatial
resolution) allowed the implementation of a spatial distribution model
of the species. Indeed, the extraction of variables from remote sensing helped
to describe the area studied at appropriate spatial and temporal scales:
horizontal and vertical structure (heterogeneity), functioning (vegetation
indices), phenology (seasonal or inter-annual dynamics) and biodiversity. An
annual time series of radiometric indices (NDVI, NDWI, BI {\ldots}) from
Sentinel-2 has made it possible to generate Dynamic Habitat Indices (DHIs) to
derive phenological indications on the nature and dynamics of natural habitats.
In addition, very high resolution images (SPOT6) provided access to the fine
structure of natural habitats, i.e. the vertical and horizontal organisation by
states identified as elementary (mineral, herbaceous, low and high woody).
Indeed, one of the essential limiting factors for brood rearing is the presence
of a well-developed herbaceous or ericaceous stratum in the northern Alps and
larch forests in the southern region. A deep learning model was used to
classify elementary strata. Finally, the Biomod2 R platform, using an ensemble
approach, was applied to model the favourable habitat of Black Grouse
reproduction. Of all the models, Random Forest and Extreme Boosted Gradient are
the best performing, with TSS and ROC scores close to 1. For the SDM, we
selected only Random Forest models (ensemble modelling) because of their low
susceptibility to overfitting and coherent predictions (after comparing model
predictions). In this ensemble model, the most important explanatory variables
are altitude, the proportion of heathland, and the DHI (NDVI Max and NDWI Max).
Results from the habitat model can be used as an operational tool for
monitoring forest landscape shifts and changes, and for delimiting potential
areas for protecting the species' habitat, which constitutes a valuable
decision-making tool for the conservation management of mountain open forest.
| [
{
"created": "Tue, 27 Feb 2024 07:52:38 GMT",
"version": "v1"
}
] | 2024-03-01 | [
[
"Defossez",
"Alexandre",
"",
"INRAE, UMR TETIS"
],
[
"Alleaume",
"Samuel",
"",
"INRAE, UMR\n TETIS"
],
[
"Montadert",
"Marc",
"",
"OFB"
],
[
"Ienco",
"Dino",
"",
"INRAE, UMR TETIS"
],
[
"Luque",
"Sandra",
"",
"INRAE, ... | The Black Grouse (Lyrurus tetrix) is an emblematic alpine species with high conservation importance. The population size of this mountain bird tends to decline at reference sites and varies with changes in local landscape characteristics. According to experts, habitat changes are at the centre of the identified pressures impacting part or all of its life cycle. Hence, one approach to monitoring population dynamics is through modelling the favourable breeding habitats (nesting sites) of the Black Grouse. Coupling this modelling with multi-source remote sensing data (medium and very high spatial resolution) allowed the implementation of a spatial distribution model of the species. Indeed, the extraction of variables from remote sensing helped to describe the area studied at appropriate spatial and temporal scales: horizontal and vertical structure (heterogeneity), functioning (vegetation indices), phenology (seasonal or inter-annual dynamics) and biodiversity. An annual time series of radiometric indices (NDVI, NDWI, BI {\ldots}) from Sentinel-2 has made it possible to generate Dynamic Habitat Indices (DHIs) to derive phenological indications on the nature and dynamics of natural habitats. In addition, very high resolution images (SPOT6) provided access to the fine structure of natural habitats, i.e. the vertical and horizontal organisation by states identified as elementary (mineral, herbaceous, low and high woody). Indeed, one of the essential limiting factors for brood rearing is the presence of a well-developed herbaceous or ericaceous stratum in the northern Alps and larch forests in the southern region. A deep learning model was used to classify elementary strata. Finally, the Biomod2 R platform, using an ensemble approach, was applied to model the favourable habitat of Black Grouse reproduction. Of all the models, Random Forest and Extreme Boosted Gradient are the best performing, with TSS and ROC scores close to 1. 
For the SDM, we selected only Random Forest models (ensemble modelling) because of their low susceptibility to overfitting and coherent predictions (after comparing model predictions). In this ensemble model, the most important explanatory variables are altitude, the proportion of heathland, and the DHI (NDVI Max and NDWI Max). Results from the habitat model can be used as an operational tool for monitoring forest landscape shifts and changes, and for delimiting potential areas for protecting the species' habitat, which constitutes a valuable decision-making tool for the conservation management of mountain open forest. |
1810.01412 | Yashika Jayathunga | Wolfgang Bock, Yashika Jayathunga | Compartmental Spatial Multi-Patch Deterministic and Stochastic Models
for Dengue | 2 Figures, 6 pages | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dengue is a vector-borne viral disease increasing dramatically over the past
years due to increased human mobility. The movement of host individuals
between and within patches is captured via a residence-time matrix. A
system of ordinary differential equations (ODEs) modeling the spatial spread of
disease among the multiple patches is used to create a system of stochastic
differential equations (SDEs). Numerical solutions of the system of SDEs are
compared with the deterministic solutions obtained via ODEs.
| [
{
"created": "Tue, 2 Oct 2018 10:49:21 GMT",
"version": "v1"
}
] | 2018-10-04 | [
[
"Bock",
"Wolfgang",
""
],
[
"Jayathunga",
"Yashika",
""
]
] | Dengue is a vector-borne viral disease increasing dramatically over the past years due to increased human mobility. The movement of host individuals between and within patches is captured via a residence-time matrix. A system of ordinary differential equations (ODEs) modeling the spatial spread of disease among the multiple patches is used to create a system of stochastic differential equations (SDEs). Numerical solutions of the system of SDEs are compared with the deterministic solutions obtained via ODEs. |
1309.2428 | Thierry Rabilloud | Thierry Rabilloud (LCBM), Sarah Triboulet (LCBM) | Two-dimensional SDS-PAGE fractionation of biological samples for
biomarker discovery | null | Methods in Molecular Biology -Clifton then Totowa- 1002 (2013)
151-65 | 10.1007/978-1-62703-360-2_13 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two-dimensional electrophoresis is still a very valuable tool in proteomics,
due to its reproducibility and its ability to analyze complete proteins.
However, due to its sensitivity to dynamic range issues, its most suitable use
in the frame of biomarker discovery is not on very complex fluids such as
plasma, but rather on more proximal, simpler fluids such as CSF, urine, or
secretome samples. Here, we describe the complete workflow for the analysis of
such dilute samples by two-dimensional electrophoresis, starting from sample
concentration, then the two-dimensional electrophoresis step per se, ending
with the protein detection by fluorescence.
| [
{
"created": "Tue, 10 Sep 2013 09:27:52 GMT",
"version": "v1"
}
] | 2013-09-11 | [
[
"Rabilloud",
"Thierry",
"",
"LCBM"
],
[
"Triboulet",
"Sarah",
"",
"LCBM"
]
] | Two-dimensional electrophoresis is still a very valuable tool in proteomics, due to its reproducibility and its ability to analyze complete proteins. However, due to its sensitivity to dynamic range issues, its most suitable use in the frame of biomarker discovery is not on very complex fluids such as plasma, but rather on more proximal, simpler fluids such as CSF, urine, or secretome samples. Here, we describe the complete workflow for the analysis of such dilute samples by two-dimensional electrophoresis, starting from sample concentration, then the two-dimensional electrophoresis step per se, ending with the protein detection by fluorescence. |
1612.06554 | Swati Patel | Swati Patel and Sebastian J Schreiber | Robust permanence for ecological equations with internal and external
feedbacks | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Species experience both internal feedbacks with endogenous factors such as
trait evolution and external feedbacks with exogenous factors such as weather.
These feedbacks can play an important role in determining whether populations
persist or communities of species coexist. To provide a general mathematical
framework for studying these effects, we develop a theorem for coexistence for
ecological models accounting for internal and external feedbacks. Specifically,
we use average Lyapunov functions and Morse decompositions to develop
sufficient and necessary conditions for robust permanence, a form of
coexistence robust to large perturbations of the population densities and small
structural perturbations of the models. We illustrate how our results can be
applied to verify permanence in non-autonomous models, structured population
models, including those with frequency-dependent feedbacks, and models of
eco-evolutionary dynamics. In these applications, we discuss how our results
relate to previous results for models with particular types of feedbacks.
| [
{
"created": "Tue, 20 Dec 2016 09:05:51 GMT",
"version": "v1"
}
] | 2016-12-21 | [
[
"Patel",
"Swati",
""
],
[
"Schreiber",
"Sebastian J",
""
]
] | Species experience both internal feedbacks with endogenous factors such as trait evolution and external feedbacks with exogenous factors such as weather. These feedbacks can play an important role in determining whether populations persist or communities of species coexist. To provide a general mathematical framework for studying these effects, we develop a theorem for coexistence for ecological models accounting for internal and external feedbacks. Specifically, we use average Lyapunov functions and Morse decompositions to develop sufficient and necessary conditions for robust permanence, a form of coexistence robust to large perturbations of the population densities and small structural perturbations of the models. We illustrate how our results can be applied to verify permanence in non-autonomous models, structured population models, including those with frequency-dependent feedbacks, and models of eco-evolutionary dynamics. In these applications, we discuss how our results relate to previous results for models with particular types of feedbacks. |
1505.01316 | Andrew Lover | Andrew A. Lover | Short Report: Study variability in recent human challenge experiments
with Plasmodium falciparum sporozoites (PfSPZ Challenge) | 8 pages with 1 figure, 3 tables, 2 appendices; submitted manuscript | null | 10.4269/ajtmh.15-0327. | null | q-bio.QM q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been renewed interest in the use of sporozoite-based approaches for
malaria vaccination and controlled human infections, and several sets of human
challenge studies have recently been completed. A study undertaken in Tanzania and
published in 2014 found dose-dependence between 10,000 and 25,000 sporozoite
doses, as well as divergent times-to-parasitemia relative to earlier studies in
European volunteers. However, this analysis shows that these conclusions are
based upon suboptimal analytical methods; with more optimal analysis, there is
no evidence for dose-dependence within this dose range; and more importantly,
no evidence for differences in event times between Dutch and Tanzanian study
sites. While these findings do not impact the reported safety and tolerability
of PfSPZ, they highlight critical issues that should be comprehensively
considered in future challenge studies.
| [
{
"created": "Wed, 6 May 2015 10:47:45 GMT",
"version": "v1"
}
] | 2015-10-06 | [
[
"Lover",
"Andrew A.",
""
]
] | There has been renewed interest in the use of sporozoite-based approaches for malaria vaccination and controlled human infections, and several sets of human challenge studies have recently been completed. A study undertaken in Tanzania and published in 2014 found dose-dependence between 10,000 and 25,000 sporozoite doses, as well as divergent times-to-parasitemia relative to earlier studies in European volunteers. However, this analysis shows that these conclusions are based upon suboptimal analytical methods; with more optimal analysis, there is no evidence for dose-dependence within this dose range; and more importantly, no evidence for differences in event times between Dutch and Tanzanian study sites. While these findings do not impact the reported safety and tolerability of PfSPZ, they highlight critical issues that should be comprehensively considered in future challenge studies. |
2212.02505 | Jinlu Liu | Jinlu Liu and Sara Wade and Natalia Bochkina | Shared Differential Clustering across Single-cell RNA Sequencing
Datasets with the Hierarchical Dirichlet Process | null | null | null | null | q-bio.GN stat.ME | http://creativecommons.org/licenses/by/4.0/ | Single-cell RNA sequencing (scRNA-seq) is a powerful technology that allows
researchers to understand gene expression patterns at the single-cell level.
However, analysing scRNA-seq data is challenging due to issues and biases in
data collection. In this work, we construct an integrated Bayesian model that
simultaneously addresses normalization, imputation and batch effects and also
nonparametrically clusters cells into groups across multiple datasets. A Gibbs
sampler based on a finite-dimensional approximation of the HDP is developed for
posterior inference.
| [
{
"created": "Mon, 5 Dec 2022 11:36:21 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Jan 2023 10:00:55 GMT",
"version": "v2"
},
{
"created": "Wed, 13 Dec 2023 14:57:43 GMT",
"version": "v3"
}
] | 2023-12-14 | [
[
"Liu",
"Jinlu",
""
],
[
"Wade",
"Sara",
""
],
[
"Bochkina",
"Natalia",
""
]
] | Single-cell RNA sequencing (scRNA-seq) is a powerful technology that allows researchers to understand gene expression patterns at the single-cell level. However, analysing scRNA-seq data is challenging due to issues and biases in data collection. In this work, we construct an integrated Bayesian model that simultaneously addresses normalization, imputation and batch effects and also nonparametrically clusters cells into groups across multiple datasets. A Gibbs sampler based on a finite-dimensional approximation of the HDP is developed for posterior inference. |
2402.09599 | Rachel Karchin | Jiaying Lai, Yunzhou Liu, Robert B. Scharpf, Rachel Karchin | Evaluation of simulation methods for tumor subclonal reconstruction | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Most neoplastic tumors originate from a single cell, and their evolution can
be genetically traced through lineages characterized by common alterations such
as small somatic mutations (SSMs), copy number alterations (CNAs), structural
variants (SVs), and aneuploidies. Due to the complexity of these alterations in
most tumors and the errors introduced by sequencing protocols and calling
algorithms, tumor subclonal reconstruction algorithms are necessary to
recapitulate the DNA sequence composition and tumor evolution in silico. With a
growing number of these algorithms available, there is a pressing need for
consistent and comprehensive benchmarking, which relies on realistic tumor
sequencing generated by simulation tools. Here, we examine the current
simulation methods, identifying their strengths and weaknesses, and provide
recommendations for their improvement. Our review also explores potential new
directions for research in this area. This work aims to serve as a resource for
understanding and enhancing tumor genomic simulations, contributing to the
advancement of the field.
| [
{
"created": "Wed, 14 Feb 2024 22:13:11 GMT",
"version": "v1"
}
] | 2024-02-16 | [
[
"Lai",
"Jiaying",
""
],
[
"Liu",
"Yunzhou",
""
],
[
"Scharpf",
"Robert B.",
""
],
[
"Karchin",
"Rachel",
""
]
] | Most neoplastic tumors originate from a single cell, and their evolution can be genetically traced through lineages characterized by common alterations such as small somatic mutations (SSMs), copy number alterations (CNAs), structural variants (SVs), and aneuploidies. Due to the complexity of these alterations in most tumors and the errors introduced by sequencing protocols and calling algorithms, tumor subclonal reconstruction algorithms are necessary to recapitulate the DNA sequence composition and tumor evolution in silico. With a growing number of these algorithms available, there is a pressing need for consistent and comprehensive benchmarking, which relies on realistic tumor sequencing generated by simulation tools. Here, we examine the current simulation methods, identifying their strengths and weaknesses, and provide recommendations for their improvement. Our review also explores potential new directions for research in this area. This work aims to serve as a resource for understanding and enhancing tumor genomic simulations, contributing to the advancement of the field. |
2406.16995 | Hui Liu | Xing Fang, Chenpeng Yu, Shiye Tian, Hui Liu | A large language model for predicting T cell receptor-antigen binding
specificity | null | null | null | null | q-bio.QM cs.AI | http://creativecommons.org/licenses/by/4.0/ | The human immune response depends on the binding of T-cell receptors (TCRs)
to antigens (pTCR), which elicits the T cells to eliminate viruses, tumor
cells, and other pathogens. The ability of the human immune system to respond
to unknown viruses and bacteria stems from TCR diversity. However, this vast
diversity poses challenges for TCR-antigen binding prediction methods. In
this study, we propose a Masked Language Model (MLM), referred to as tcrLM, to
overcome limitations in model generalization. Specifically, we randomly mask
sequence segments and train tcrLM to infer the masked segments, thereby
extracting expressive features from TCR sequences. Meanwhile, we introduced virtual
adversarial training techniques to enhance the model's robustness. We built the
largest TCR CDR3 sequence dataset to date (comprising 2,277,773,840 residues),
and pre-trained tcrLM on this dataset. Our extensive experimental results
demonstrate that tcrLM achieved AUC values of 0.937 and 0.933 on independent
test sets and external validation sets, respectively, which remarkably
outperformed four previously published prediction methods. On a large-scale
COVID-19 pTCR binding test set, our method outperforms the current
state-of-the-art method by at least 8%, highlighting the generalizability of
our method. Furthermore, we validated that our approach effectively predicts
immunotherapy response and clinical outcomes in clinical cohorts. These
findings clearly indicate that tcrLM exhibits significant potential in
predicting antigenic immunogenicity.
| [
{
"created": "Mon, 24 Jun 2024 08:36:40 GMT",
"version": "v1"
}
] | 2024-06-26 | [
[
"Fang",
"Xing",
""
],
[
"Yu",
"Chenpeng",
""
],
[
"Tian",
"Shiye",
""
],
[
"Liu",
"Hui",
""
]
] | The human immune response depends on the binding of T-cell receptors (TCRs) to antigens (pTCR), which elicits the T cells to eliminate viruses, tumor cells, and other pathogens. The ability of the human immune system to respond to unknown viruses and bacteria stems from TCR diversity. However, this vast diversity poses challenges for TCR-antigen binding prediction methods. In this study, we propose a Masked Language Model (MLM), referred to as tcrLM, to overcome limitations in model generalization. Specifically, we randomly mask sequence segments and train tcrLM to infer the masked segments, thereby extracting expressive features from TCR sequences. Meanwhile, we introduced virtual adversarial training techniques to enhance the model's robustness. We built the largest TCR CDR3 sequence dataset to date (comprising 2,277,773,840 residues), and pre-trained tcrLM on this dataset. Our extensive experimental results demonstrate that tcrLM achieved AUC values of 0.937 and 0.933 on independent test sets and external validation sets, respectively, which remarkably outperformed four previously published prediction methods. On a large-scale COVID-19 pTCR binding test set, our method outperforms the current state-of-the-art method by at least 8%, highlighting the generalizability of our method. Furthermore, we validated that our approach effectively predicts immunotherapy response and clinical outcomes in clinical cohorts. These findings clearly indicate that tcrLM exhibits significant potential in predicting antigenic immunogenicity. |
2407.00028 | Xinyu Shen | Xinyu Shen, Qimin Zhang, Huili Zheng, Weiwei Qi | Harnessing XGBoost for Robust Biomarker Selection of
Obsessive-Compulsive Disorder (OCD) from Adolescent Brain Cognitive
Development (ABCD) data | null | null | null | null | q-bio.NC cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study evaluates the performance of various supervised machine learning
models in analyzing highly correlated neural signaling data from the Adolescent
Brain Cognitive Development (ABCD) Study, with a focus on predicting
obsessive-compulsive disorder scales. We simulated a dataset to mimic the
correlation structures commonly found in imaging data and evaluated logistic
regression, elastic networks, random forests, and XGBoost on their ability to
handle multicollinearity and accurately identify predictive features. Our study
aims to guide the selection of appropriate machine learning methods for
processing neuroimaging data, highlighting models that best capture underlying
signals in high feature correlations and prioritize clinically relevant
features associated with Obsessive-Compulsive Disorder (OCD).
| [
{
"created": "Tue, 14 May 2024 23:43:34 GMT",
"version": "v1"
}
] | 2024-07-02 | [
[
"Shen",
"Xinyu",
""
],
[
"Zhang",
"Qimin",
""
],
[
"Zheng",
"Huili",
""
],
[
"Qi",
"Weiwei",
""
]
] | This study evaluates the performance of various supervised machine learning models in analyzing highly correlated neural signaling data from the Adolescent Brain Cognitive Development (ABCD) Study, with a focus on predicting obsessive-compulsive disorder scales. We simulated a dataset to mimic the correlation structures commonly found in imaging data and evaluated logistic regression, elastic networks, random forests, and XGBoost on their ability to handle multicollinearity and accurately identify predictive features. Our study aims to guide the selection of appropriate machine learning methods for processing neuroimaging data, highlighting models that best capture underlying signals in high feature correlations and prioritize clinically relevant features associated with Obsessive-Compulsive Disorder (OCD). |
0805.2796 | Sungho Hong | Sungho Hong (University of Washington and Okinawa Institute of Science
and Technology), Brian N. Lundstrom (University of Washington), Adrienne
Fairhall (University of Washington) | Intrinsic gain modulation and adaptive neural coding | 24 pages, 4 figures, 1 supporting information | null | 10.1371/journal.pcbi.1000119 | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many cases, the computation of a neural system can be reduced to a
receptive field, or a set of linear filters, and a thresholding function, or
gain curve, which determines the firing probability; this is known as a
linear/nonlinear model. In some forms of sensory adaptation, these linear
filters and gain curve adjust very rapidly to changes in the variance of a
randomly varying driving input. An apparently similar but previously unrelated
issue is the observation of gain control by background noise in cortical
neurons: the slope of the firing rate vs current (f-I) curve changes with the
variance of background random input. Here, we show a direct correspondence
between these two observations by relating variance-dependent changes in the
gain of f-I curves to characteristics of the changing empirical
linear/nonlinear model obtained by sampling. In the case that the underlying
system is fixed, we derive relationships relating the change of the gain with
respect to both mean and variance with the receptive fields derived from
reverse correlation on a white noise stimulus. Using two conductance-based
model neurons that display distinct gain modulation properties through a simple
change in parameters, we show that coding properties of both these models
quantitatively satisfy the predicted relationships. Our results describe how
both variance-dependent gain modulation and adaptive neural computation result
from intrinsic nonlinearity.
| [
{
"created": "Mon, 19 May 2008 07:10:56 GMT",
"version": "v1"
}
] | 2014-11-12 | [
[
"Hong",
"Sungho",
"",
"University of Washington and Okinawa Institute of Science\n and Technology"
],
[
"Lundstrom",
"Brian N.",
"",
"University of Washington"
],
[
"Fairhall",
"Adrienne",
"",
"University of Washington"
]
] | In many cases, the computation of a neural system can be reduced to a receptive field, or a set of linear filters, and a thresholding function, or gain curve, which determines the firing probability; this is known as a linear/nonlinear model. In some forms of sensory adaptation, these linear filters and gain curve adjust very rapidly to changes in the variance of a randomly varying driving input. An apparently similar but previously unrelated issue is the observation of gain control by background noise in cortical neurons: the slope of the firing rate vs current (f-I) curve changes with the variance of background random input. Here, we show a direct correspondence between these two observations by relating variance-dependent changes in the gain of f-I curves to characteristics of the changing empirical linear/nonlinear model obtained by sampling. In the case that the underlying system is fixed, we derive relationships relating the change of the gain with respect to both mean and variance with the receptive fields derived from reverse correlation on a white noise stimulus. Using two conductance-based model neurons that display distinct gain modulation properties through a simple change in parameters, we show that coding properties of both these models quantitatively satisfy the predicted relationships. Our results describe how both variance-dependent gain modulation and adaptive neural computation result from intrinsic nonlinearity. |
1408.0915 | Neville Boon Ph.D. | Neville J. Boon and Rebecca B. Hoyle | Detachment, Futile Cycling and Nucleotide Pocket Collapse in Myosin-V
Stepping | 11 pages, 5 figures, 6 tables | null | 10.1103/PhysRevE.91.022717 | null | q-bio.BM physics.bio-ph q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Myosin-V is a highly processive dimeric protein that walks with 36nm steps
along actin tracks, powered by coordinated ATP hydrolysis reactions in the two
myosin heads. No previous theoretical models of the myosin-V walk reproduce all
the observed trends of velocity and run-length with [ADP], [ATP] and external
forcing. In particular, a result that has eluded all theoretical studies based
upon rigorous physical chemistry is that run length decreases with both
increasing [ADP] and [ATP]. We systematically analyse which mechanisms in
existing models reproduce which experimental trends and use this information to
guide the development of models that can reproduce them all. We formulate
models as reaction networks between distinct mechanochemical states with
energetically determined transition rates. For each network architecture, we
compare predictions for velocity and run length to a subset of experimentally
measured values, and fit unknown parameters using a bespoke MCSA optimization
routine. Finally we determine which experimental trends are replicated by the
best-fit model for each architecture. Only two models capture them all: one
involving [ADP]-dependent mechanical detachment, and another including
[ADP]-dependent futile cycling and nucleotide pocket collapse. Comparing
model-predicted and experimentally observed kinetic transition rates favors the
latter.
| [
{
"created": "Tue, 5 Aug 2014 10:38:16 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Feb 2015 12:47:03 GMT",
"version": "v2"
}
] | 2015-06-22 | [
[
"Boon",
"Neville J.",
""
],
[
"Hoyle",
"Rebecca B.",
""
]
] | Myosin-V is a highly processive dimeric protein that walks with 36nm steps along actin tracks, powered by coordinated ATP hydrolysis reactions in the two myosin heads. No previous theoretical models of the myosin-V walk reproduce all the observed trends of velocity and run-length with [ADP], [ATP] and external forcing. In particular, a result that has eluded all theoretical studies based upon rigorous physical chemistry is that run length decreases with both increasing [ADP] and [ATP]. We systematically analyse which mechanisms in existing models reproduce which experimental trends and use this information to guide the development of models that can reproduce them all. We formulate models as reaction networks between distinct mechanochemical states with energetically determined transition rates. For each network architecture, we compare predictions for velocity and run length to a subset of experimentally measured values, and fit unknown parameters using a bespoke MCSA optimization routine. Finally we determine which experimental trends are replicated by the best-fit model for each architecture. Only two models capture them all: one involving [ADP]-dependent mechanical detachment, and another including [ADP]-dependent futile cycling and nucleotide pocket collapse. Comparing model-predicted and experimentally observed kinetic transition rates favors the latter. |
1804.01775 | Francesc Rossell\'o | Tom\'as M. Coronado, Gabriel Riera, Francesc Rossell\'o | The Fair Proportion is a Shapley Value on phylogenetic networks too | 12 pages | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Fair Proportion of a species in a phylogenetic tree is a very simple
measure that has been used to assess its value relative to the overall
phylogenetic diversity represented by the tree. It has recently been proved by
Fuchs and Jin to be equal to the Shapley Value of the coalitional game that
sends each subset of species to its rooted Phylogenetic Diversity in the tree.
We prove in this paper that this result extends to the natural translations of
the Fair Proportion and the rooted Phylogenetic Diversity to rooted
phylogenetic networks. We also generalize to rooted phylogenetic networks the
expression for the Shapley Value of the unrooted Phylogenetic Diversity game on
a phylogenetic tree established by Haake, Kashiwada and Su.
| [
{
"created": "Thu, 5 Apr 2018 10:55:33 GMT",
"version": "v1"
}
] | 2018-04-06 | [
[
"Coronado",
"Tomás M.",
""
],
[
"Riera",
"Gabriel",
""
],
[
"Rosselló",
"Francesc",
""
]
] | The Fair Proportion of a species in a phylogenetic tree is a very simple measure that has been used to assess its value relative to the overall phylogenetic diversity represented by the tree. It has recently been proved by Fuchs and Jin to be equal to the Shapley Value of the coalitional game that sends each subset of species to its rooted Phylogenetic Diversity in the tree. We prove in this paper that this result extends to the natural translations of the Fair Proportion and the rooted Phylogenetic Diversity to rooted phylogenetic networks. We also generalize to rooted phylogenetic networks the expression for the Shapley Value of the unrooted Phylogenetic Diversity game on a phylogenetic tree established by Haake, Kashiwada and Su.
1309.7407 | Liane Gabora | Liane Gabora and Kirsty Kitto | Concept Combination and the Origins of Complex Cognition | 24 pages. arXiv admin note: substantial text overlap with
arXiv:1308.5032 | In E. Swan (Ed.), Origins of mind: Biosemiotics Series (pp.
361-382). Berlin: Springer. (2013) | null | null | q-bio.NC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | At the core of our uniquely human cognitive abilities is the capacity to see
things from different perspectives, or to place them in a new context. We
propose that this was made possible by two cognitive transitions. First, the
large brain of Homo erectus facilitated the onset of recursive recall: the
ability to string thoughts together into a stream of potentially abstract or
imaginative thought. This hypothesis is supported by a set of computational
models where an artificial society of agents evolved to generate more diverse
and valuable cultural outputs under conditions of recursive recall. We propose
that the capacity to see things in context arose much later, following the
appearance of anatomically modern humans. This second transition was brought on
by the onset of contextual focus: the capacity to shift between a minimally
contextual analytic mode of thought, and a highly contextual associative mode
of thought, conducive to combining concepts in new ways and 'breaking out of a
rut'. When contextual focus is implemented in an art-generating computer
program, the resulting artworks are seen as more creative and appealing. We
summarize how both transitions can be modeled using a theory of concepts which
highlights the manner in which different contexts can lead to modern humans
attributing very different meanings to the interpretation of one concept.
| [
{
"created": "Sat, 28 Sep 2013 02:17:10 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jul 2019 19:55:57 GMT",
"version": "v2"
},
{
"created": "Tue, 9 Jul 2019 20:02:45 GMT",
"version": "v3"
}
] | 2019-07-11 | [
[
"Gabora",
"Liane",
""
],
[
"Kitto",
"Kirsty",
""
]
] | At the core of our uniquely human cognitive abilities is the capacity to see things from different perspectives, or to place them in a new context. We propose that this was made possible by two cognitive transitions. First, the large brain of Homo erectus facilitated the onset of recursive recall: the ability to string thoughts together into a stream of potentially abstract or imaginative thought. This hypothesis is supported by a set of computational models where an artificial society of agents evolved to generate more diverse and valuable cultural outputs under conditions of recursive recall. We propose that the capacity to see things in context arose much later, following the appearance of anatomically modern humans. This second transition was brought on by the onset of contextual focus: the capacity to shift between a minimally contextual analytic mode of thought, and a highly contextual associative mode of thought, conducive to combining concepts in new ways and 'breaking out of a rut'. When contextual focus is implemented in an art-generating computer program, the resulting artworks are seen as more creative and appealing. We summarize how both transitions can be modeled using a theory of concepts which highlights the manner in which different contexts can lead to modern humans attributing very different meanings to the interpretation of one concept. |
1203.0448 | Monica De Angelis | Monica De Angelis | On a Model of Superconductivity and Biology | null | Advances and Applications in Mathematical Sciences, vol 7, Issue 1,
2010, pages 41-50 | null | null | q-bio.NC cond-mat.supr-con math-ph math.MP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The paper deals with a semilinear integrodifferential equation that
characterizes several dissipative models of Viscoelasticity, Biology and
Superconductivity. The initial-boundary problem with Neumann conditions is
analyzed. When the source term F is a linear function, the explicit
solution is obtained. When F is nonlinear, some results on existence,
uniqueness and a priori estimates are deduced. As an example of a physical
model, the reaction-diffusion system of FitzHugh-Nagumo is considered.
| [
{
"created": "Fri, 2 Mar 2012 13:01:39 GMT",
"version": "v1"
}
] | 2012-03-05 | [
[
"De Angelis",
"Monica",
""
]
] | The paper deals with a semilinear integrodifferential equation that characterizes several dissipative models of Viscoelasticity, Biology and Superconductivity. The initial-boundary problem with Neumann conditions is analyzed. When the source term F is a linear function, the explicit solution is obtained. When F is nonlinear, some results on existence, uniqueness and a priori estimates are deduced. As an example of a physical model, the reaction-diffusion system of FitzHugh-Nagumo is considered.
2211.02826 | Arka Sanyal Mr | Anushikha Ghosh, Arka Sanyal, Sameer Sharma | Identification and Molecular Dynamic Simulation of Flavonoids from
Mediterranean species of Oregano against the Zika NS2B-NS3 Protease | 24 Pages, 12 Figures | World Journal of Pharmaceutical Research, Volume 11, Issue 15,
Page 1236-1259, Year 2022 | 10.20959/wjpr202215-26115 | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Zika virus infection is an emerging infectious disease causing severe
complications such as microcephaly in infants and Guillain-Barré syndrome in
adults. There is no licensed vaccination or approved medicine to treat ZIKV
infection. Therefore, extensive research is being carried out to find compounds
that can be used effectively as therapeutic molecules to treat ZIKV infection.
Oregano, a commonly found herb in the Mediterranean region, has been used
predominantly for culinary purposes. The fact that the members of the Origanum
species are a storehouse of various bioactive compounds gives us a solid reason
to study compounds extracted from it for therapeutic purposes. In this study,
20 flavonoids found in various Origanum species belonging to the
Mediterranean region were retrieved from the PubChem database, and
pharmacological analysis using SwissADME and molecular docking using
AutoDock Vina 4.0 were carried out
against the NS2B NS3 protease since it serves as an effective drug target owing
to its role in viral replication and immune evasion within the host. The best
hit compounds were subjected to MD simulation for 100 ns using Desmond
(Schrödinger) to analyze each molecule's stability. We observed Cirsiliol as the
best hit compound against the NS2B NS3 complex with a binding affinity of -8.5
kcal per mol. It also showed good stability during MD simulation. We recommend
the use of Cirsiliol for in vitro and in vivo studies for further investigation
concerning the Zika virus.
| [
{
"created": "Sat, 5 Nov 2022 06:44:02 GMT",
"version": "v1"
}
] | 2022-11-08 | [
[
"Ghosh",
"Anushikha",
""
],
[
"Sanyal",
"Arka",
""
],
[
"Sharma",
"Sameer",
""
]
] | Zika virus infection is an emerging infectious disease causing severe complications such as microcephaly in infants and Guillain-Barré syndrome in adults. There is no licensed vaccination or approved medicine to treat ZIKV infection. Therefore, extensive research is being carried out to find compounds that can be used effectively as therapeutic molecules to treat ZIKV infection. Oregano, a commonly found herb in the Mediterranean region, has been used predominantly for culinary purposes. The fact that the members of the Origanum species are a storehouse of various bioactive compounds gives us a solid reason to study compounds extracted from it for therapeutic purposes. In this study, 20 flavonoids found in various Origanum species belonging to the Mediterranean region were retrieved from the PubChem database, and pharmacological analysis using SwissADME and molecular docking using AutoDock Vina 4.0 were carried out against the NS2B NS3 protease since it serves as an effective drug target owing to its role in viral replication and immune evasion within the host. The best hit compounds were subjected to MD simulation for 100 ns using Desmond (Schrödinger) to analyze each molecule's stability. We observed Cirsiliol as the best hit compound against the NS2B NS3 complex with a binding affinity of -8.5 kcal per mol. It also showed good stability during MD simulation. We recommend the use of Cirsiliol for in vitro and in vivo studies for further investigation concerning the Zika virus.
1208.0482 | Simon Powers | Simon T. Powers and Alexandra S. Penn and Richard A. Watson | The concurrent evolution of cooperation and the population structures
that support it | Post-print of accepted manuscript, 6 figures | Evolution 65(6), pp. 1527-1543, June 2011 | 10.1111/j.1558-5646.2011.01250.x | null | q-bio.PE cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The evolution of cooperation often depends upon population structure, yet
nearly all models of cooperation implicitly assume that this structure remains
static. This is a simplifying assumption, because most organisms possess
genetic traits that affect their population structure to some degree. These
traits, such as a group size preference, affect the relatedness of interacting
individuals and hence the opportunity for kin or group selection. We argue that
models that do not explicitly consider their evolution cannot provide a
satisfactory account of the origin of cooperation, because they cannot explain
how the prerequisite population structures arise. Here, we consider the
concurrent evolution of genetic traits that affect population structure, with
those that affect social behavior. We show that not only does population
structure drive social evolution, as in previous models, but that the
opportunity for cooperation can in turn drive the creation of population
structures that support it. This occurs through the generation of linkage
disequilibrium between socio-behavioral and population-structuring traits, such
that direct kin selection on social behavior creates indirect selection
pressure on population structure. We illustrate our argument with a model of
the concurrent evolution of group size preference and social behavior.
| [
{
"created": "Thu, 2 Aug 2012 13:50:57 GMT",
"version": "v1"
}
] | 2012-08-03 | [
[
"Powers",
"Simon T.",
""
],
[
"Penn",
"Alexandra S.",
""
],
[
"Watson",
"Richard A.",
""
]
] | The evolution of cooperation often depends upon population structure, yet nearly all models of cooperation implicitly assume that this structure remains static. This is a simplifying assumption, because most organisms possess genetic traits that affect their population structure to some degree. These traits, such as a group size preference, affect the relatedness of interacting individuals and hence the opportunity for kin or group selection. We argue that models that do not explicitly consider their evolution cannot provide a satisfactory account of the origin of cooperation, because they cannot explain how the prerequisite population structures arise. Here, we consider the concurrent evolution of genetic traits that affect population structure, with those that affect social behavior. We show that not only does population structure drive social evolution, as in previous models, but that the opportunity for cooperation can in turn drive the creation of population structures that support it. This occurs through the generation of linkage disequilibrium between socio-behavioral and population-structuring traits, such that direct kin selection on social behavior creates indirect selection pressure on population structure. We illustrate our argument with a model of the concurrent evolution of group size preference and social behavior. |
2008.13238 | Mariam Aboian | Marina Kazarian, Sandra Abi Fadel, Amit Mahajan, Mariam Aboian | Utilization of 3D segmentation for measurement of pediatric brain tumor
outcomes after treatment: review of available free tools, step-by-step
instructions, and applications to clinical practice | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Volumetric measurements are known to provide more information when it comes
to segmenting tumors, in comparison to one- and two-dimensional measurements,
and thus can lead to better informed therapy. In this work, we review the free
and easily accessible computer platforms available for conducting these 3D
measurements, such as Horos and 3D Slicer, and compare the segmentations to
commercial Visage software. We compare the time for 3D segmentation of tumors
and demonstrate how to use a novel plugin that we developed in 3D Slicer for
the efficient and accurate segmentation of the cystic component of a tumor.
| [
{
"created": "Sun, 30 Aug 2020 18:43:50 GMT",
"version": "v1"
}
] | 2020-09-01 | [
[
"Kazarian",
"Marina",
""
],
[
"Fadel",
"Sandra Abi",
""
],
[
"Mahajan",
"Amit",
""
],
[
"Aboian",
"Mariam",
""
]
] | Volumetric measurements are known to provide more information when it comes to segmenting tumors, in comparison to one- and two-dimensional measurements, and thus can lead to better informed therapy. In this work, we review the free and easily accessible computer platforms available for conducting these 3D measurements, such as Horos and 3D Slicer, and compare the segmentations to commercial Visage software. We compare the time for 3D segmentation of tumors and demonstrate how to use a novel plugin that we developed in 3D Slicer for the efficient and accurate segmentation of the cystic component of a tumor.
1210.4469 | Tomas Perez-Acle Dr. | F. Nu\~nez, C. Ravello, H. Urbina and T. Perez-Acle | A Rule-based Model of a Hypothetical Zombie Outbreak: Insights on the
role of emotional factors during behavioral adaptation of an artificial
population | 4 figures | null | null | null | q-bio.PE cs.MA cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Models of infectious diseases have been developed since the first half of the
twentieth century. Most models have not considered the role that emotional
factors of the individual may play in the population's behavioral adaptation
during the spread of a pandemic disease. Considering that local interactions
among individuals generate patterns that, at a large scale, govern the action
of masses, we have studied the behavioral adaptation of a population induced by
the spread of an infectious disease. Therefore, we have developed a rule-based
model of a hypothetical zombie outbreak, written in Kappa language, and
simulated using Gillespie's stochastic approach. Our study addresses the
specificity and heterogeneity of the system at the individual level, a highly
desirable characteristic, mostly overlooked in classic epidemic models.
Together with the basic elements of a typical epidemiological model, our model
includes an individual representation of the disease progression and the
traveling of agents among cities being affected. It also introduces an
approximation to measure the effect of panic in the population as a function of
the individual situational awareness. In addition, the effect of two possible
countermeasures to overcome the zombie threat is considered: the availability
of medical treatment and the deployment of special armed forces. However, due
to the special characteristics of this hypothetical infectious disease, even
using exaggerated numbers of countermeasures, only a small percentage of the
population can be saved at the end of the simulations. As expected from a
rule-based model approach, the global dynamics of our model were primarily
governed by the mechanistic description of local interactions occurring at the
individual level. Overall, people's situational awareness proved essential
in modulating the inner dynamics of the system.
| [
{
"created": "Tue, 16 Oct 2012 16:05:49 GMT",
"version": "v1"
}
] | 2012-10-17 | [
[
"Nuñez",
"F.",
""
],
[
"Ravello",
"C.",
""
],
[
"Urbina",
"H.",
""
],
[
"Perez-Acle",
"T.",
""
]
] | Models of infectious diseases have been developed since the first half of the twentieth century. Most models have not considered the role that emotional factors of the individual may play in the population's behavioral adaptation during the spread of a pandemic disease. Considering that local interactions among individuals generate patterns that, at a large scale, govern the action of masses, we have studied the behavioral adaptation of a population induced by the spread of an infectious disease. Therefore, we have developed a rule-based model of a hypothetical zombie outbreak, written in Kappa language, and simulated using Gillespie's stochastic approach. Our study addresses the specificity and heterogeneity of the system at the individual level, a highly desirable characteristic, mostly overlooked in classic epidemic models. Together with the basic elements of a typical epidemiological model, our model includes an individual representation of the disease progression and the traveling of agents among cities being affected. It also introduces an approximation to measure the effect of panic in the population as a function of the individual situational awareness. In addition, the effect of two possible countermeasures to overcome the zombie threat is considered: the availability of medical treatment and the deployment of special armed forces. However, due to the special characteristics of this hypothetical infectious disease, even using exaggerated numbers of countermeasures, only a small percentage of the population can be saved at the end of the simulations. As expected from a rule-based model approach, the global dynamics of our model were primarily governed by the mechanistic description of local interactions occurring at the individual level. Overall, people's situational awareness proved essential in modulating the inner dynamics of the system.
2112.07760 | Fabrizio De Vico Fallani | Giulia Bassignana, Giordano Lacidogna, Paolo Bartolomeo, Olivier
Colliot, Fabrizio De Vico Fallani | The impact of aging on human brain network target controllability | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Understanding how few distributed areas can steer large-scale brain activity
is a fundamental question that has practical implications, which range from
inducing specific patterns of behavior to counteracting disease. Recent
endeavors based on network controllability provided fresh insights into the
potential ability of single regions to influence whole brain dynamics through
the underlying structural connectome. However, controlling the entire brain
activity is often unfeasible and might not always be necessary. The question
whether single areas can control specific target subsystems remains crucial,
albeit still poorly explored. Furthermore, the structure of the brain network
exhibits progressive changes across the lifespan, but little is known about the
possible consequences for the controllability properties. To address these
questions, we adopted a novel target controllability approach that quantifies
the centrality of brain nodes in controlling specific target anatomo-functional
systems. We then studied such target control centrality in human connectomes
obtained from healthy individuals aged from 5 to 85. Main results showed that
the sensorimotor system has a high influencing capacity, but it is difficult
for other areas to influence it. Furthermore, we reported that target control
centrality varies with age and that temporal-parietal regions, whose cortical
thinning is crucial in dementia-related diseases, exhibit lower values in older
people. By simulating targeted attacks, such as those occurring in focal
stroke, we showed that the ipsilesional hemisphere is the most affected one
regardless of the damaged area. Notably, such degradation in target control
centrality was more evident in younger people, thus supporting
early-vulnerability hypotheses after stroke.
| [
{
"created": "Tue, 14 Dec 2021 22:02:49 GMT",
"version": "v1"
},
{
"created": "Sun, 9 Oct 2022 20:27:07 GMT",
"version": "v2"
}
] | 2022-10-11 | [
[
"Bassignana",
"Giulia",
""
],
[
"Lacidogna",
"Giordano",
""
],
[
"Bartolomeo",
"Paolo",
""
],
[
"Colliot",
"Olivier",
""
],
[
"Fallani",
"Fabrizio De Vico",
""
]
] | Understanding how few distributed areas can steer large-scale brain activity is a fundamental question that has practical implications, which range from inducing specific patterns of behavior to counteracting disease. Recent endeavors based on network controllability provided fresh insights into the potential ability of single regions to influence whole brain dynamics through the underlying structural connectome. However, controlling the entire brain activity is often unfeasible and might not always be necessary. The question whether single areas can control specific target subsystems remains crucial, albeit still poorly explored. Furthermore, the structure of the brain network exhibits progressive changes across the lifespan, but little is known about the possible consequences for the controllability properties. To address these questions, we adopted a novel target controllability approach that quantifies the centrality of brain nodes in controlling specific target anatomo-functional systems. We then studied such target control centrality in human connectomes obtained from healthy individuals aged from 5 to 85. Main results showed that the sensorimotor system has a high influencing capacity, but it is difficult for other areas to influence it. Furthermore, we reported that target control centrality varies with age and that temporal-parietal regions, whose cortical thinning is crucial in dementia-related diseases, exhibit lower values in older people. By simulating targeted attacks, such as those occurring in focal stroke, we showed that the ipsilesional hemisphere is the most affected one regardless of the damaged area. Notably, such degradation in target control centrality was more evident in younger people, thus supporting early-vulnerability hypotheses after stroke.
1908.06197 | Joaquin Goni | Diana O. Svaldi, Joaqu\'in Go\~ni, Kausar Abbas, Enrico Amico, David
G. Clark, Charanya Muralidharan, Mario Dzemidzic, John D. West, Shannon L.
Risacher, Andrew J. Saykin, Liana G. Apostolova (for the Alzheimer's Disease
Neuroimaging Initiative) | Optimizing Differential Identifiability Improves Connectome Predictive
Modeling of Cognitive Deficits in Alzheimer's Disease | 26 pages; 7 Figures, 2 Tables, 8 Supplementary Figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Functional connectivity, as estimated using resting state fMRI, has shown
potential in bridging the gap between pathophysiology and cognition. However,
clinical use of functional connectivity biomarkers is impeded by unreliable
estimates of individual functional connectomes and lack of generalizability of
models predicting cognitive outcomes from connectivity. To address these
issues, we combine the frameworks of connectome predictive modeling and
differential identifiability. Using the combined framework, we show that
enhancing the individual fingerprint of resting state functional connectomes
leads to robust identification of functional networks associated with cognitive
outcomes and also improves prediction of cognitive outcomes from functional
connectomes. Using a comprehensive spectrum of cognitive outcomes associated
with Alzheimer's disease, we identify and characterize functional networks
associated with specific cognitive deficits exhibited in Alzheimer's disease.
This combined framework is an important step in making individual level
predictions of cognition from resting state functional connectomes and in
understanding the relationship between cognition and connectivity.
| [
{
"created": "Fri, 16 Aug 2019 22:35:57 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Aug 2019 15:10:36 GMT",
"version": "v2"
},
{
"created": "Fri, 13 Dec 2019 04:05:37 GMT",
"version": "v3"
}
] | 2019-12-16 | [
[
"Svaldi",
"Diana O.",
"",
"for the Alzheimer's Disease\n Neuroimaging Initiative"
],
[
"Goñi",
"Joaquín",
"",
"for the Alzheimer's Disease\n Neuroimaging Initiative"
],
[
"Abbas",
"Kausar",
"",
"for the Alzheimer's Disease\n Neuroimaging Initiative"
],
... | Functional connectivity, as estimated using resting state fMRI, has shown potential in bridging the gap between pathophysiology and cognition. However, clinical use of functional connectivity biomarkers is impeded by unreliable estimates of individual functional connectomes and lack of generalizability of models predicting cognitive outcomes from connectivity. To address these issues, we combine the frameworks of connectome predictive modeling and differential identifiability. Using the combined framework, we show that enhancing the individual fingerprint of resting state functional connectomes leads to robust identification of functional networks associated with cognitive outcomes and also improves prediction of cognitive outcomes from functional connectomes. Using a comprehensive spectrum of cognitive outcomes associated with Alzheimer's disease, we identify and characterize functional networks associated with specific cognitive deficits exhibited in Alzheimer's disease. This combined framework is an important step in making individual level predictions of cognition from resting state functional connectomes and in understanding the relationship between cognition and connectivity.
2109.11156 | Kory Johnson | Kory D. Johnson, Annemarie Grass, Daniel Toneian, Mathias Beiglb\"ock,
Jitka Polechov\'a | Robust models of SARS-CoV-2 heterogeneity and control | null | null | null | null | q-bio.PE q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In light of the continuing emergence of new SARS-CoV-2 variants and vaccines,
we create a simulation framework for exploring possible infection trajectories
under various scenarios. The situations of primary interest involve the
interaction between three components: vaccination campaigns, non-pharmaceutical
interventions (NPIs), and the emergence of new SARS-CoV-2 variants.
Additionally, immunity waning and vaccine boosters are modeled to account for
their growing importance. New infections are generated according to a
hierarchical model in which people have a random, individual infectiousness.
The model thus includes super-spreading observed in the COVID-19 pandemic. Our
simulation functions as a dynamic compartment model in which an individual's
history of infection, vaccination, and possible reinfection all play a role in
their resistance to further infections. We present a risk measure for each
SARS-CoV-2 variant, $\rho^V$, that accounts for the amount of resistance
within a population and show how this risk changes as the vaccination rate
increases. Furthermore, by considering different population compositions in
terms of previous infection and type of vaccination, we can learn about
variants which pose differential risk to different countries. Different control
strategies are implemented which aim to both suppress COVID-19 outbreaks when
they occur and relax restrictions when possible. We demonstrate that a
controller that responds to the effective reproduction number in addition to
case numbers is more efficient and effective in controlling new waves than
monitoring case numbers alone. This is of interest as the majority of the
public discussion and well-known statistics deal primarily with case numbers.
| [
{
"created": "Thu, 23 Sep 2021 06:05:01 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jan 2022 10:40:47 GMT",
"version": "v2"
}
] | 2022-01-10 | [
[
"Johnson",
"Kory D.",
""
],
[
"Grass",
"Annemarie",
""
],
[
"Toneian",
"Daniel",
""
],
[
"Beiglböck",
"Mathias",
""
],
[
"Polechová",
"Jitka",
""
]
] | In light of the continuing emergence of new SARS-CoV-2 variants and vaccines, we create a simulation framework for exploring possible infection trajectories under various scenarios. The situations of primary interest involve the interaction between three components: vaccination campaigns, non-pharmaceutical interventions (NPIs), and the emergence of new SARS-CoV-2 variants. Additionally, immunity waning and vaccine boosters are modeled to account for their growing importance. New infections are generated according to a hierarchical model in which people have a random, individual infectiousness. The model thus includes super-spreading observed in the COVID-19 pandemic. Our simulation functions as a dynamic compartment model in which an individual's history of infection, vaccination, and possible reinfection all play a role in their resistance to further infections. We present a risk measure for each SARS-CoV-2 variant, $\rho^\V$, that accounts for the amount of resistance within a population and show how this risk changes as the vaccination rate increases. Furthermore, by considering different population compositions in terms of previous infection and type of vaccination, we can learn about variants which pose differential risk to different countries. Different control strategies are implemented which aim to both suppress COVID-19 outbreaks when they occur as well as relax restrictions when possible. We demonstrate that a controller that responds to the effective reproduction number in addition to case numbers is more efficient and effective in controlling new waves than monitoring case numbers alone. This is of interest as the majority of the public discussion and well-known statistics deal primarily with case numbers. |
2203.13193 | Sitabhra Sinha | Sitabhra Sinha | Modeling-informed policy, policy evaluated by modeling: Evolution of
mathematical epidemiology in the context of society and economy | 26 pages, 7 figures, to appear in "COVID-19 and Global Grand
Challenges on Health, Innovation and Economy" | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The COronaVIrus Disease 2019 (COVID-19) pandemic that has had the world in
its grip since the beginning of 2020 has resulted in an unprecedented level of
public interest and media attention on the field of mathematical epidemiology.
Ever since the disease came to worldwide attention, numerous models with
varying levels of sophistication have been proposed; many of these have tried
to predict the course of the disease over different time-scales. Other models
have examined the efficacy of various policy measures that have been adopted
(including the unparalleled use of "lockdowns") to contain and combat the
disease. This multiplicity of models may have led to bewilderment in many
quarters about the true capabilities and utility of mathematical modeling. Here
we provide a brief guide to epidemiological modeling, focusing on how it has
emerged as a tool for informed public-health policy-making and has, in turn,
influenced the design of interventions aimed at preventing disease outbreaks
from turning into raging epidemics. We show that the diversity of models is
somewhat illusory, as the bulk of them are rooted in the compartmental modeling
framework that we describe here. While their basic structure may appear to be a
highly idealized description of the processes at work, we show that features
that provide more realism, such as the community organization of populations or
strategic decision-making by individuals, can be incorporated into such models.
We conclude with the argument that the true value of models lies in their
ability to test in silico the consequences of different policy choices in the
course of an epidemic, a much superior alternative to trial-and-error
approaches that are highly costly in terms of both lives and socio-economic
disruption.
| [
{
"created": "Wed, 23 Mar 2022 13:23:43 GMT",
"version": "v1"
}
] | 2022-03-25 | [
[
"Sinha",
"Sitabhra",
""
]
] | The COronaVIrus Disease 2019 (COVID-19) pandemic that has had the world in its grip from the beginning of 2020, has resulted in an unprecedented level of public interest and media attention on the field of mathematical epidemiology. Ever since the disease came to worldwide attention, numerous models with varying levels of sophistication have been proposed; many of these have tried to predict the course of the disease over different time-scales. Other models have examined the efficacy of various policy measures that have been adopted (including the unparalleled use of "lockdowns") to contain and combat the disease. This multiplicity of models may have led to bewilderment in many quarters about the true capabilities and utility of mathematical modeling. Here we provide a brief guide to epidemiological modeling, focusing on how it has emerged as a tool for informed public-health policy-making and has in turn, influenced the design of interventions aimed at preventing disease outbreaks from turning into raging epidemics. We show that the diversity of models is somewhat illusory, as the bulk of them are rooted in the compartmental modeling framework that we describe here. While their basic structure may appear to be a highly idealized description of the processes at work, we show that features that provide more realism, such as the community organization of populations or strategic decision-making by individuals, can be incorporated into such models. We conclude with the argument that the true value of models lies in their ability to test in silico the consequences of different policy choices in the course of an epidemic, a much superior alternative to trial-and-error approaches that are highly costly in terms of both lives and socio-economic disruption. |
1803.04498 | \'Elie Besserer-Offroy | \'Elie Besserer-Offroy, Rebecca L Brouillette, Sandrine Lavenus,
Ulrike Froehlich, Andrea Brumwell, Alexandre Murza, Jean-Michel Longpr\'e,
\'Eric Marsault, Michel Grandbois, Philippe Sarret, and Richard Leduc | The signaling signature of the neurotensin type 1 receptor with
endogenous ligands | This is the accepted (postprint) version of the following article:
Besserer-Offroy \'E, et al. (2017). Eur J Pharmacol. doi:
10.1016/j.ejphar.2017.03.046, which has been accepted and published in its
final form at
http://www.sciencedirect.com/science/article/pii/S0014299917302157 V1:
Preprint version V2: Accepted version (postprint) | Besserer-Offroy \'E, et al. (2017). Eur J Pharmacol. 805 (2017)
1-13 | 10.1016/j.ejphar.2017.03.046 | null | q-bio.CB q-bio.MN q-bio.SC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The human neurotensin 1 receptor (hNTS1) is a G protein-coupled receptor
involved in many physiological functions, including analgesia, hypothermia, and
hypotension. To gain a better understanding of which signaling pathways or
combination of pathways are linked to NTS1 activation and function, we
investigated the ability of activated hNTS1, which was stably expressed by
CHO-K1 cells, to directly engage G proteins, activate second messenger cascades
and recruit {\beta}-arrestins. Using BRET-based biosensors, we found that
neurotensin (NT), NT(8-13) and neuromedin N (NN) activated the G{\alpha}q-,
G{\alpha}i1-, G{\alpha}oA-, and G{\alpha}13-protein signaling pathways as well
as the recruitment of {\beta}-arrestins 1 and 2. Using pharmacological
inhibitors, we further demonstrated that all three ligands stimulated the
production of inositol phosphate and modulation of cAMP accumulation along with
ERK1/2 activation. Interestingly, despite the functional coupling to
G{\alpha}i1 and G{\alpha}oA, NT was found to produce higher levels of cAMP in
the presence of pertussis toxin, supporting that hNTS1 activation leads to cAMP
accumulation in a G{\alpha}s-dependent manner. Additionally, we demonstrated
that the full activation of ERK1/2 required signaling through both a
PTX-sensitive Gi/o-c-Src signaling pathway and a PLC{\beta}-DAG-PKC-Raf-1-dependent
pathway downstream of Gq. Finally, the whole-cell integrated signatures
monitored by the cell-based surface plasmon resonance and changes in the
electrical impedance of a confluent cell monolayer led to identical phenotypic
responses between the three ligands. The characterization of the hNTS1-mediated
cellular signaling network will be helpful to accelerate the validation of
potential NTS1 biased ligands with an improved therapeutic/adverse effect
profile.
| [
{
"created": "Mon, 12 Mar 2018 19:59:18 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Mar 2018 00:45:57 GMT",
"version": "v2"
}
] | 2018-03-15 | [
[
"Besserer-Offroy",
"Élie",
""
],
[
"Brouillette",
"Rebecca L",
""
],
[
"Lavenus",
"Sandrine",
""
],
[
"Froehlich",
"Ulrike",
""
],
[
"Brumwell",
"Andrea",
""
],
[
"Murza",
"Alexandre",
""
],
[
"Longpré",
"Jean-... | The human neurotensin 1 receptor (hNTS1) is a G protein-coupled receptor involved in many physiological functions, including analgesia, hypothermia, and hypotension. To gain a better understanding of which signaling pathways or combination of pathways are linked to NTS1 activation and function, we investigated the ability of activated hNTS1, which was stably expressed by CHO-K1 cells, to directly engage G proteins, activate second messenger cascades and recruit \b{eta}-arrestins. Using BRET-based biosensors, we found that neurotensin (NT), NT(8-13) and neuromedin N (NN) activated the G{\alpha}q-, G{\alpha}i1-, G{\alpha}oA-, and G{\alpha}13-protein signaling pathways as well as the recruitment of \b{eta}-arrestins 1 and 2. Using pharmacological inhibitors, we further demonstrated that all three ligands stimulated the production of inositol phosphate and modulation of cAMP accumulation along with ERK1/2 activation. Interestingly, despite the functional coupling to G{\alpha}i1 and G{\alpha}oA, NT was found to produce higher levels of cAMP in the presence of pertussis toxin, supporting that hNTS1 activation leads to cAMP accumulation in a G{\alpha}s-dependent manner. Additionally, we demonstrated that the full activation of ERK1/2 required signaling through both a PTX-sensitive Gi/o-c-Src signaling pathway and PLCb-DAG-PKC-Raf-1- dependent pathway downstream of Gq. Finally, the whole-cell integrated signatures monitored by the cell-based surface plasmon resonance and changes in the electrical impedance of a confluent cell monolayer led to identical phenotypic responses between the three ligands. The characterization of the hNTS1-mediated cellular signaling network will be helpful to accelerate the validation of potential NTS1 biased ligands with an improved therapeutic/adverse effect profile. |
1404.5212 | Norshuhaila Mohamed Sunar N.M.Sunar | N.M. Sunar, E.I. Stentiford, D.I. Stewart, L.A. Flecther | Survival of Salmonella spp. in Composting using Vial and Direct
Inoculums Technique | ORBIT 2010 International Conference of Organic Resources in the
Carbon Economy. Crete, Greece | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The survival of Salmonella spp. as a pathogen indicator during composting was
studied. An inoculum technique was used to introduce known amounts of
Salmonella spp. into the composting process. In the direct technique, the
Salmonella spp. inoculum solution was added directly into the compost material;
this was compared with a vial technique, in which the Salmonella spp. solution
was placed into a vial and inserted into the middle of the compost material
before starting the composting process. The conventional method used for the
enumeration of Salmonella spp. is serial dilution followed by standard membrane
filtration, as recommended in the compost quality standard PAS 100 and the
British Standard (BS EN ISO 6579:2002). This study was designed to investigate
the roles of temperature and contact with the compost material, both of which
may be involved in pathogen inactivation, specifically of Salmonella spp. An
average composting temperature of about 55-60{\deg}C was maintained for at
least 3 days, as this has been reported to kill the vast majority of enteric
pathogens (Deportes et al., 1995). The Salmonella spp. counts and temperatures
of both samples were used as indicators of the survival of Salmonella spp.
under direct and non-direct inoculation. This study provides die-off rates for
Salmonella spp. during composting and reveals the difference between direct
contact (Sample A) and non-contact (Sample B) of Salmonella spp. with the
compost material. The results from the laboratory-scale composting study showed
that after 8 days (which included temperatures of at least 66{\deg}C), the
numbers of Salmonella spp. in Sample A were below the limits of the UK compost
standard (PAS 100) (BSI, 2005), which requires compost to be free of Salmonella
spp. Meanwhile, Sample B still contained high numbers of Salmonella spp. even
after composting for 20 days.
| [
{
"created": "Mon, 21 Apr 2014 14:34:39 GMT",
"version": "v1"
}
] | 2014-04-22 | [
[
"Sunar",
"N. M.",
""
],
[
"Stentiford",
"E. I.",
""
],
[
"Stewart",
"D. I.",
""
],
[
"Flecther",
"L. A.",
""
]
] | The survival of Salmonella spp. as a pathogen indicator during composting was studied. An inoculum technique was used to introduce known amounts of Salmonella spp. into the composting process. In the direct technique, the Salmonella spp. inoculum solution was added directly into the compost material; this was compared with a vial technique, in which the Salmonella spp. solution was placed into a vial and inserted into the middle of the compost material before starting the composting process. The conventional method used for the enumeration of Salmonella spp. is serial dilution followed by standard membrane filtration, as recommended in the compost quality standard PAS 100 and the British Standard (BS EN ISO 6579:2002). This study was designed to investigate the roles of temperature and contact with the compost material, both of which may be involved in pathogen inactivation, specifically of Salmonella spp. An average composting temperature of about 55-60{\deg}C was maintained for at least 3 days, as this has been reported to kill the vast majority of enteric pathogens (Deportes et al., 1995). The Salmonella spp. counts and temperatures of both samples were used as indicators of the survival of Salmonella spp. under direct and non-direct inoculation. This study provides die-off rates for Salmonella spp. during composting and reveals the difference between direct contact (Sample A) and non-contact (Sample B) of Salmonella spp. with the compost material. The results from the laboratory-scale composting study showed that after 8 days (which included temperatures of at least 66{\deg}C), the numbers of Salmonella spp. in Sample A were below the limits of the UK compost standard (PAS 100) (BSI, 2005), which requires compost to be free of Salmonella spp. Meanwhile, Sample B still contained high numbers of Salmonella spp. even after composting for 20 days. |
q-bio/0702052 | Yuri A. Dabaghian | Yu. Dabaghian, A. G. Cohn and L. Frank | Topological coding in hippocampus | 53 pages, 12 figures | null | null | null | q-bio.OT q-bio.NC q-bio.QM | null | The proposed analysis of the currently available experimental results
concerning the neural cell activity in the brain area known as the hippocampus
suggests a particular mechanism of spatial information and memory processing.
Below it is argued that the spatial information available through the analysis
of the hippocampal cell activity is predominantly of topological nature. It is
pointed out that a direct topological analysis can produce a topological
invariant based classification of the cell activity patterns and a complete
topological description of the animal's current environment. It also provides a
full first-order logical system for local topological reasoning about spatial
structure and the animal's navigational strategies.
| [
{
"created": "Sun, 25 Feb 2007 18:55:07 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Dabaghian",
"Yu.",
""
],
[
"Cohn",
"A. G.",
""
],
[
"Frank",
"L.",
""
]
] | The proposed analysis of the currently available experimental results concerning the neural cell activity in the brain area known as hippocampus suggests a particular mechanism of spatial information and memory processing. Below it is argued that the spatial information available through the analysis of the hippocampal cell activity is predominantly of topological nature. It is pointed out that a direct topological analysis can produce a topological invariant based classification of the cell activity patterns and a complete topological description of animal's current environment. It also provides a full first order logical system for local topological reasoning about spatial structure and animal's navigational strategies. |
1502.05120 | Richard C Gerkin PhD | Richard C. Gerkin, Jason B. Castro | Humans can discriminate trillions of olfactory stimuli, or more, or
fewer | 11 pages, 10 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recent Science paper (Bushdid et al, 2014)[1] proposed that humans can
discriminate between at least a trillion olfactory stimuli. Here we show that
this claim is the result of a fragile estimation framework capable of producing
nearly any result from the reported data, including values tens of orders of
magnitude larger or smaller than the one originally reported in [1]. We
conclude that there is no evidence for the original claim.
| [
{
"created": "Wed, 18 Feb 2015 05:23:38 GMT",
"version": "v1"
}
] | 2015-02-19 | [
[
"Gerkin",
"Richard C.",
""
],
[
"Castro",
"Jason B.",
""
]
] | A recent Science paper (Bushdid et al, 2014)[1] proposed that humans can discriminate between at least a trillion olfactory stimuli. Here we show that this claim is the result of a fragile estimation framework capable of producing nearly any result from the reported data, including values tens of orders of magnitude larger or smaller than the one originally reported in [1]. We conclude that there is no evidence for the original claim. |
2010.13849 | Juan Daniel Sebastia-Saez | Daniel Sebastia-Saez, Faiza Benaouda, Charlie Lim, Guoping Lian,
Stuart Jones, Tao Chen, Liang Cui | Numerical analysis of the strain distribution in skin domes formed upon
the application of hypobaric pressure | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Suction cups are widely used in applications such as the measurement of
mechanical properties of skin in vivo, drug delivery devices, and acupuncture
treatment. Understanding the mechanical response of skin under hypobaric
pressure is of great importance for users of suction cups. The aims
of this work are to assess the capability of linear elasticity (Young's
modulus) or hyperelasticity in predicting hypobaric pressure induced 3D
stretching of the skin. Using experiments and computational Finite Element
Method modelling, this work demonstrated that although it was possible to
predict the suction dome apex height using both linear elasticity and
hyperelasticity for the typical range of hypobaric pressure in medical
applications (up to -10 psi), linear elasticity theory showed limitations when
predicting the strain distribution across the suction dome. The reason is that
the stretch ratio reaches values exceeding the initial linear elastic stage of
the stress-strain characteristic curve for skin. As a result, the linear
elasticity theory overpredicts the stretch along the rim of domes where there
is stress concentration. In addition, the modelling showed that the skin was
compressed consistently along the thickness direction, leading to reduced
thickness. Using hyperelasticity modelling to predict the 3D strain
distribution paves the way to accurately design safe commercial products that
interface with skin.
| [
{
"created": "Mon, 26 Oct 2020 19:03:21 GMT",
"version": "v1"
}
] | 2020-10-28 | [
[
"Sebastia-Saez",
"Daniel",
""
],
[
"Benaouda",
"Faiza",
""
],
[
"Lim",
"Charlie",
""
],
[
"Lian",
"Guoping",
""
],
[
"Jones",
"Stuart",
""
],
[
"Chen",
"Tao",
""
],
[
"Cui",
"Liang",
""
]
] | Suction cups are widely used in applications such as in measurement of mechanical properties of skin in vivo, in drug delivery devices or in acupuncture treatment. Understanding the mechanical response of skin under hypobaric pressure are of great importance for users of suction cups. The aims of this work are to assess the capability of linear elasticity (Young's modulus) or hyperelasticity in predicting hypobaric pressure induced 3D stretching of the skin. Using experiments and computational Finite Element Method modelling, this work demonstrated that although it was possible to predict the suction dome apex height using both linear elasticity and hyperelasticity for the typical range of hypobaric pressure in medical applications (up to -10 psi), linear elasticity theory showed limitations when predicting the strain distribution across the suction dome. The reason is that the stretch ratio reaches values exceeding the initial linear elastic stage of the stress-strain characteristic curve for skin. As a result, the linear elasticity theory overpredicts the stretch along the rim of domes where there is stress concentration. In addition, the modelling showed that the skin was compressed consistently along the thickness direction, leading to reduced thickness. Using hyperelasticity modelling to predict the 3D strain distribution paves the way to accurately design safe commercial products that interface with skin. |
1703.02453 | Raul Isea | Raul Isea | Quantitative Prediction of Linear B-Cell Epitopes | 3 pages, 2 tables | Biomedical Statistics and Informatics. Vol. 2, No.1, 2017, pp.1-3 | 10.11648/j.bsi.20170201.11 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | In scientific literature, there are many programs that predict linear B-cell
epitopes from a protein sequence. Each program generates multiple B-cell
epitopes that can be individually studied. This paper defines a function called
<C> that combines results from five different prediction programs concerning
the linear B-cell epitopes (i.e., BebiPred, EPMLR, BCPred, ABCPred and Emini
Prediction) for selecting the best B-cell epitopes. We obtained 17 potential
linear B-cell consensus epitopes from Glycoprotein E of serotype IV of the
dengue virus for exploring new possibilities in vaccine development. The direct
implication of the results obtained is to open the way to experimentally
validate more epitopes to increase the efficiency of the available treatments
against dengue and to explore the methodology in other diseases.
| [
{
"created": "Tue, 7 Mar 2017 16:18:21 GMT",
"version": "v1"
}
] | 2017-03-08 | [
[
"Isea",
"Raul",
""
]
] | In scientific literature, there are many programs that predict linear B-cell epitopes from a protein sequence. Each program generates multiple B-cell epitopes that can be individually studied. This paper defines a function called <C> that combines results from five different prediction programs concerning the linear B-cell epitopes (ie., BebiPred, EPMLR, BCPred, ABCPred and Emini Prediction) for selecting the best B-cell epitopes. We obtained 17 potential linear B cells consensus epitopes from Glycoprotein E from serotype IV of the dengue virus for exploring new possibilities in vaccine development. The direct implication of the results obtained is to open the way to experimentally validate more epitopes to increase the efficiency of the available treatments against dengue and to explore the methodology in other diseases. |
2211.11166 | Joshua Pickard | Joshua Pickard, Can Chen, Rahmy Salman, Cooper Stansbury, Sion Kim,
Amit Surana, Anthony Bloch, and Indika Rajapakse | Hypergraph Analysis Toolbox for Chromosome Conformation | null | null | 10.1371/journal.pcbi.1011190 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Recent advances in biological technologies, such as multi-way chromosome
conformation capture (3C), require development of methods for analysis of
multi-way interactions. Hypergraphs are mathematically tractable objects that
can be utilized to precisely represent and analyze multi-way interactions. Here
we present the Hypergraph Analysis Toolbox (HAT), a software package for
visualization and analysis of multi-way interactions in complex systems.
| [
{
"created": "Mon, 21 Nov 2022 03:44:48 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Dec 2022 02:41:07 GMT",
"version": "v2"
}
] | 2023-07-19 | [
[
"Pickard",
"Joshua",
""
],
[
"Chen",
"Can",
""
],
[
"Salman",
"Rahmy",
""
],
[
"Stansbury",
"Cooper",
""
],
[
"Kim",
"Sion",
""
],
[
"Surana",
"Amit",
""
],
[
"Bloch",
"Anthony",
""
],
[
"Rajapakse"... | Recent advances in biological technologies, such as multi-way chromosome conformation capture (3C), require development of methods for analysis of multi-way interactions. Hypergraphs are mathematically tractable objects that can be utilized to precisely represent and analyze multi-way interactions. Here we present the Hypergraph Analysis Toolbox (HAT), a software package for visualization and analysis of multi-way interactions in complex systems. |
1812.07047 | Daniel S Calovi | Daniel S. Calovi, Paul Bardunias, Nicole Carey, J. Scott Turner,
Radhika Nagpal, Justin Werfel | Surface curvature guides early construction activity in mound-building
termites | null | null | 10.1098/rstb.2018.0374 | null | q-bio.QM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Termite colonies construct towering, complex mounds, in a classic example of
distributed agents coordinating their activity via interaction with a shared
environment. The traditional explanation for how this coordination occurs
focuses on the idea of a "cement pheromone", a chemical signal left with
deposited soil that triggers further deposition. Recent research has called
this idea into question, pointing to a more complicated behavioral response to
cues perceived with multiple senses. In this work, we explored the role of
topological cues in affecting early construction activity in Macrotermes. We
created artificial surfaces with a known range of curvatures, coated them with
nest soil, placed groups of major workers on them, and evaluated soil
displacement as a function of location at the end of one hour. Each point on
the surface has a given curvature, inclination, and absolute height; to
disambiguate these factors, we conducted experiments with the surface in
different orientations. Soil displacement activity is consistently correlated
with surface curvature, and not with inclination nor height. Early exploration
activity is also correlated with curvature, to a lesser degree. Topographical
cues provide a long-term physical memory of building activity in a manner that
ephemeral pheromone labeling cannot. Elucidating the roles of these and other
cues for group coordination may help provide organizing principles for swarm
robotics and other artificial systems.
| [
{
"created": "Mon, 17 Dec 2018 20:51:06 GMT",
"version": "v1"
}
] | 2019-05-15 | [
[
"Calovi",
"Daniel S.",
""
],
[
"Bardunias",
"Paul",
""
],
[
"Carey",
"Nicole",
""
],
[
"Turner",
"J. Scott",
""
],
[
"Nagpal",
"Radhika",
""
],
[
"Werfel",
"Justin",
""
]
] | Termite colonies construct towering, complex mounds, in a classic example of distributed agents coordinating their activity via interaction with a shared environment. The traditional explanation for how this coordination occurs focuses on the idea of a "cement pheromone", a chemical signal left with deposited soil that triggers further deposition. Recent research has called this idea into question, pointing to a more complicated behavioral response to cues perceived with multiple senses. In this work, we explored the role of topological cues in affecting early construction activity in Macrotermes. We created artificial surfaces with a known range of curvatures, coated them with nest soil, placed groups of major workers on them, and evaluated soil displacement as a function of location at the end of one hour. Each point on the surface has a given curvature, inclination, and absolute height; to disambiguate these factors, we conducted experiments with the surface in different orientations. Soil displacement activity is consistently correlated with surface curvature, and not with inclination nor height. Early exploration activity is also correlated with curvature, to a lesser degree. Topographical cues provide a long-term physical memory of building activity in a manner that ephemeral pheromone labeling cannot. Elucidating the roles of these and other cues for group coordination may help provide organizing principles for swarm robotics and other artificial systems. |
1407.3069 | Manon Costa | Manon Costa, C\'eline Hauzy, Nicolas Loeuille, Sylvie M\'el\'eard | Stochastic eco-evolutionary model of a prey-predator community | 47 pages, 15 figures | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We are interested in the impact of natural selection in a prey-predator
community. We introduce an individual-based model of the community that takes
into account both prey and predator phenotypes. Our aim is to understand the
phenotypic coevolution of prey and predators. The community evolves as a
multi-type birth and death process with mutations. We first consider the
infinite particle approximation of the process without mutation. In this limit,
the process can be approximated by a system of differential equations. We prove
the existence of a unique globally asymptotically stable equilibrium under
specific conditions on the interaction among prey individuals. When mutations
are rare, the community evolves on the mutational scale according to a
Markovian jump process. This process describes the successive equilibria of the
prey-predator community and extends the Polymorphic Evolutionary Sequence to a
coevolutionary framework. We then assume that mutations have a small impact on
phenotypes and consider the evolution of monomorphic prey and predator
populations. The limit of small mutation steps leads to a system of two
differential equations which is a version of the canonical equation of adaptive
dynamics for the prey-predator coevolution. We illustrate these different
limits with an example of prey-predator community that takes into account
different prey defense mechanisms. We observe through simulations how these
various prey strategies impact the community.
| [
{
"created": "Fri, 11 Jul 2014 08:57:29 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Feb 2015 07:58:50 GMT",
"version": "v2"
}
] | 2015-02-19 | [
[
"Costa",
"Manon",
""
],
[
"Hauzy",
"Céline",
""
],
[
"Loeuille",
"Nicolas",
""
],
[
"Méléard",
"Sylvie",
""
]
] | We are interested in the impact of natural selection in a prey-predator community. We introduce an individual-based model of the community that takes into account both prey and predator phenotypes. Our aim is to understand the phenotypic coevolution of prey and predators. The community evolves as a multi-type birth and death process with mutations. We first consider the infinite particle approximation of the process without mutation. In this limit, the process can be approximated by a system of differential equations. We prove the existence of a unique globally asymptotically stable equilibrium under specific conditions on the interaction among prey individuals. When mutations are rare, the community evolves on the mutational scale according to a Markovian jump process. This process describes the successive equilibria of the prey-predator community and extends the Polymorphic Evolutionary Sequence to a coevolutionary framework. We then assume that mutations have a small impact on phenotypes and consider the evolution of monomorphic prey and predator populations. The limit of small mutation steps leads to a system of two differential equations which is a version of the canonical equation of adaptive dynamics for the prey-predator coevolution. We illustrate these different limits with an example of prey-predator community that takes into account different prey defense mechanisms. We observe through simulations how these various prey strategies impact the community. |
q-bio/0508030 | Thomas Keef Mr | T.Keef, C.Micheletti and R. Twarock | Master equation approach to the assembly of viral capsids | null | null | null | null | q-bio.BM | null | The distribution of inequivalent geometries occurring during self-assembly of
the major capsid protein in thermodynamic equilibrium is determined based on a
master equation approach. These results are implemented to characterize the
assembly of SV40 virus and to obtain information on the putative pathways
controlling the progressive build-up of the SV40 capsid. The experimental
testability of the predictions is assessed and an analysis of the geometries of
the assembly intermediates on the dominant pathways is used to identify targets
for antiviral drug design.
| [
{
"created": "Mon, 22 Aug 2005 14:04:23 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Keef",
"T.",
""
],
[
"Micheletti",
"C.",
""
],
[
"Twarock",
"R.",
""
]
] | The distribution of inequivalent geometries occurring during self-assembly of the major capsid protein in thermodynamic equilibrium is determined based on a master equation approach. These results are implemented to characterize the assembly of SV40 virus and to obtain information on the putative pathways controlling the progressive build-up of the SV40 capsid. The experimental testability of the predictions is assessed and an analysis of the geometries of the assembly intermediates on the dominant pathways is used to identify targets for antiviral drug design. |
2101.07619 | Mostafa Akhavan Safar | Mostafa Akhavan Safar, Babak Teimourpour, Abbas Nozari-Dalini | Cancer driver gene detection in transcriptional regulatory networks
using the structure analysis of weighted regulatory interactions | null | 2022.Current Bioinformatics, 17(4), 327-343 | 10.2174/1574893617666220127094224 | null | q-bio.MN q-bio.BM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Identification of genes that initiate cell anomalies and cause cancer in
humans is among the important fields in oncology research. The mutation
and development of anomalies in these genes are then transferred to other genes
in the cell and therefore disrupt the normal functionality of the cell. These
genes are known as cancer driver genes (CDGs). Various methods have been
proposed for predicting CDGs, most of which are based on genomic data and on
computational methods. Therefore, some researchers have developed novel
bioinformatics approaches. In this study, we propose an algorithm, which is
able to calculate the effectiveness and strength of each gene and rank them by
using the gene regulatory networks and the stochastic analysis of regulatory
linking structures between genes. To do so, we first constructed the
regulatory network using gene expression data and the list of regulatory
interactions. Then, using biological and topological features of the network,
we weighted the regulatory interactions. After that, the obtained regulatory
interaction weights were used in the interaction structure analysis process.
Interaction analysis was achieved using two separate Markov chains on the
bipartite graph obtained from the main graph of the gene network. To do so, the
stochastic approach for link-structure analysis has been implemented. The
proposed algorithm categorizes higher-ranked genes as driver genes. The
efficiency of the proposed algorithm, regarding the F-measure value and number
of identified driver genes, was compared with 23 other computational and
network-based methods.
| [
{
"created": "Tue, 19 Jan 2021 13:50:54 GMT",
"version": "v1"
}
] | 2023-03-03 | [
[
"Safar",
"Mostafa Akhavan",
""
],
[
"Teimourpour",
"Babak",
""
],
[
"Nozari-Dalini",
"Abbas",
""
]
] | Identification of genes that initiate cell anomalies and cause cancer in humans is among the important fields in oncology research. The mutation and development of anomalies in these genes are then transferred to other genes in the cell and therefore disrupt the normal functionality of the cell. These genes are known as cancer driver genes (CDGs). Various methods have been proposed for predicting CDGs, most of which are based on genomic data and on computational methods. Therefore, some researchers have developed novel bioinformatics approaches. In this study, we propose an algorithm, which is able to calculate the effectiveness and strength of each gene and rank them by using the gene regulatory networks and the stochastic analysis of regulatory linking structures between genes. To do so, we first constructed the regulatory network using gene expression data and the list of regulatory interactions. Then, using biological and topological features of the network, we weighted the regulatory interactions. After that, the obtained regulatory interaction weights were used in the interaction structure analysis process. Interaction analysis was achieved using two separate Markov chains on the bipartite graph obtained from the main graph of the gene network. To do so, the stochastic approach for link-structure analysis has been implemented. The proposed algorithm categorizes higher-ranked genes as driver genes. The efficiency of the proposed algorithm, regarding the F-measure value and number of identified driver genes, was compared with 23 other computational and network-based methods. |
2303.15326 | Qihui Yang | Qihui Yang, Joan Salda\~na, Caterina Scoglio | Generalized epidemic model incorporating non-Markovian infection
processes and waning immunity | null | null | 10.1103/PhysRevE.108.014405 | null | q-bio.PE stat.AP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The Markovian approach, which assumes exponentially distributed
interinfection times, is dominant in epidemic modeling. However, this
assumption is unrealistic as an individual's infectiousness depends on its
viral load and varies over time. In this paper, we present a
Susceptible-Infected-Recovered-Vaccinated-Susceptible epidemic model
incorporating non-Markovian infection processes. The model can be easily
adapted to accurately capture the generation time distributions of emerging
infectious diseases, which is essential for accurate epidemic prediction. We
observe noticeable variations in the transient behavior under different
infectiousness profiles and the same basic reproduction number R0. The
theoretical analyses show that only R0 and the mean immunity period of the
vaccinated individuals have an impact on the critical vaccination rate needed
to achieve herd immunity. A vaccination level at the critical vaccination rate
can ensure a very low incidence among the population in case of future
epidemics, regardless of the infectiousness profiles.
| [
{
"created": "Mon, 27 Mar 2023 15:24:07 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Jun 2023 21:22:17 GMT",
"version": "v2"
},
{
"created": "Fri, 21 Jul 2023 21:07:25 GMT",
"version": "v3"
}
] | 2023-08-02 | [
[
"Yang",
"Qihui",
""
],
[
"Saldaña",
"Joan",
""
],
[
"Scoglio",
"Caterina",
""
]
] | The Markovian approach, which assumes exponentially distributed interinfection times, is dominant in epidemic modeling. However, this assumption is unrealistic as an individual's infectiousness depends on its viral load and varies over time. In this paper, we present a Susceptible-Infected-Recovered-Vaccinated-Susceptible epidemic model incorporating non-Markovian infection processes. The model can be easily adapted to accurately capture the generation time distributions of emerging infectious diseases, which is essential for accurate epidemic prediction. We observe noticeable variations in the transient behavior under different infectiousness profiles and the same basic reproduction number R0. The theoretical analyses show that only R0 and the mean immunity period of the vaccinated individuals have an impact on the critical vaccination rate needed to achieve herd immunity. A vaccination level at the critical vaccination rate can ensure a very low incidence among the population in case of future epidemics, regardless of the infectiousness profiles. |
1507.06262 | Ziv Williams | Ziv Williams | Lamarckian inheritance following sensorimotor training and its neural
basis in Drosophila | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Jean-Baptiste Lamarck was among the first to suggest that certain acquired traits
may be heritable from parents to offspring. In this study, I examine whether
and what aspects of sensorimotor conditioning by parents prior to conception
may influence the behavior of subsequent generations in Drosophila. Using
genetic and anatomic techniques, I find that both first- and second-generation
offspring of parents who underwent prolonged olfactory training over multiple
days displayed a distinct response bias to the same specific trained odors. The
offspring displayed an enhanced anemotactic approach response to the trained
odors, however, and did not differentiate between odors based on whether
parental training was aversive or appetitive. Consequently, disruption of both
olfactory-receptor and dorsal-paired-medial neuron input into the mushroom
bodies abolished this change in offspring response, but disrupting synaptic
output from a/b neurons of the mushroom body themselves had little effect on
behavior even though they remained necessary for enacting newly trained
conditioned responses. These observations identify a unique transgenerational
dissociation between parentally-trained conditioned and unconditioned sensory
stimuli, and provide a putative neural basis for how sensorimotor experiences
in insects may bias the behavior of subsequent generations.
| [
{
"created": "Wed, 22 Jul 2015 17:34:12 GMT",
"version": "v1"
}
] | 2015-07-23 | [
[
"Williams",
"Ziv",
""
]
] | Jean-Baptiste Lamarck was among the first to suggest that certain acquired traits may be heritable from parents to offspring. In this study, I examine whether and what aspects of sensorimotor conditioning by parents prior to conception may influence the behavior of subsequent generations in Drosophila. Using genetic and anatomic techniques, I find that both first- and second-generation offspring of parents who underwent prolonged olfactory training over multiple days displayed a distinct response bias to the same specific trained odors. The offspring displayed an enhanced anemotactic approach response to the trained odors, however, and did not differentiate between odors based on whether parental training was aversive or appetitive. Consequently, disruption of both olfactory-receptor and dorsal-paired-medial neuron input into the mushroom bodies abolished this change in offspring response, but disrupting synaptic output from a/b neurons of the mushroom body themselves had little effect on behavior even though they remained necessary for enacting newly trained conditioned responses. These observations identify a unique transgenerational dissociation between parentally-trained conditioned and unconditioned sensory stimuli, and provide a putative neural basis for how sensorimotor experiences in insects may bias the behavior of subsequent generations. |
2107.00474 | Jonas Berx | Jonas Berx, Joseph O Indekeu | Epidemic processes with vaccination and immunity loss studied with the
BLUES function method | 23 pages, 8 figures. v2: Accepted version | Physica A 590, 126724 (2022) | 10.1016/j.physa.2021.126724 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The Beyond-Linear-Use-of-Equation-Superposition (BLUES) function method is
extended to coupled nonlinear ordinary differential equations and applied to
the epidemiological SIRS model with vaccination. Accurate analytic
approximations are obtained for the time evolution of the susceptible and
infected population fractions. The results are compared with those obtained
with alternative methods, notably Adomian decomposition, variational iteration
and homotopy perturbation. In contrast with these methods, the BLUES iteration
converges rapidly, globally, and captures the exact asymptotic behavior for
long times. The time of the infection peak is calculated using the BLUES
approximants and the results are compared with numerical solutions, which
indicate that the method is able to generate useful analytic expressions that
coincide with the (numerically) exact ones already for a small number of
iterations.
| [
{
"created": "Wed, 30 Jun 2021 06:06:53 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Nov 2021 14:52:23 GMT",
"version": "v2"
}
] | 2021-12-30 | [
[
"Berx",
"Jonas",
""
],
[
"Indekeu",
"Joseph O",
""
]
] | The Beyond-Linear-Use-of-Equation-Superposition (BLUES) function method is extended to coupled nonlinear ordinary differential equations and applied to the epidemiological SIRS model with vaccination. Accurate analytic approximations are obtained for the time evolution of the susceptible and infected population fractions. The results are compared with those obtained with alternative methods, notably Adomian decomposition, variational iteration and homotopy perturbation. In contrast with these methods, the BLUES iteration converges rapidly, globally, and captures the exact asymptotic behavior for long times. The time of the infection peak is calculated using the BLUES approximants and the results are compared with numerical solutions, which indicate that the method is able to generate useful analytic expressions that coincide with the (numerically) exact ones already for a small number of iterations. |
2307.08435 | Nadav M. Shnerb | David Kessler and Nadav M. Shnerb | Extinction time distributions of populations and genotypes | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | In the long run, the eventual extinction of any biological population is an
inevitable outcome. While extensive research has focused on the average time it
takes for a population to go extinct under various circumstances, there has
been limited exploration of the distributions of extinction times and the
likelihood of significant fluctuations. Recently, Hathcock and Strogatz [PRL
128, 218301 (2022)] identified Gumbel statistics as a universal asymptotic
distribution for extinction-prone dynamics in a stable environment. In this
study, we aim to provide a comprehensive survey of this problem by examining a
range of plausible scenarios, including extinction-prone, marginal (neutral),
and stable dynamics. We consider the influence of demographic stochasticity,
which arises from the inherent randomness of the birth-death process, as well
as cases where stochasticity originates from the more pronounced effect of
random environmental variations. Our work proposes several generic criteria
that can be used for the classification of experimental and empirical systems,
thereby enhancing our ability to discern the mechanisms governing extinction
dynamics. By employing these criteria, we can improve our understanding of the
underlying mechanisms driving extinction processes.
| [
{
"created": "Mon, 17 Jul 2023 12:28:47 GMT",
"version": "v1"
}
] | 2023-07-18 | [
[
"Kessler",
"David",
""
],
[
"Shnerb",
"Nadav M.",
""
]
] | In the long run, the eventual extinction of any biological population is an inevitable outcome. While extensive research has focused on the average time it takes for a population to go extinct under various circumstances, there has been limited exploration of the distributions of extinction times and the likelihood of significant fluctuations. Recently, Hathcock and Strogatz [PRL 128, 218301 (2022)] identified Gumbel statistics as a universal asymptotic distribution for extinction-prone dynamics in a stable environment. In this study, we aim to provide a comprehensive survey of this problem by examining a range of plausible scenarios, including extinction-prone, marginal (neutral), and stable dynamics. We consider the influence of demographic stochasticity, which arises from the inherent randomness of the birth-death process, as well as cases where stochasticity originates from the more pronounced effect of random environmental variations. Our work proposes several generic criteria that can be used for the classification of experimental and empirical systems, thereby enhancing our ability to discern the mechanisms governing extinction dynamics. By employing these criteria, we can improve our understanding of the underlying mechanisms driving extinction processes. |
2305.02193 | Kestutis Pyragas Prof. | Viktoras Pyragas and Kestutis Pyragas | Effect of Cauchy noise on a network of quadratic integrate-and-fire
neurons with non-Cauchy heterogeneities | 9 pages, 5 figures | null | 10.1016/j.physleta.2023.128972 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We analyze the dynamics of large networks of pulse-coupled quadratic
integrate-and-fire neurons driven by Cauchy noise and non-Cauchy heterogeneous
inputs. Two types of heterogeneities defined by families of $q$-Gaussian and
flat distributions are considered. Both families are parametrized by an integer
$n$, so that as $n$ increases, the first family tends to a normal distribution,
and the second tends to a uniform distribution. For both families, exact
systems of mean-field equations are derived and their bifurcation analysis is
carried out. We show that noise and heterogeneity can have qualitatively
different effects on the collective dynamics of neurons.
| [
{
"created": "Mon, 17 Apr 2023 06:37:22 GMT",
"version": "v1"
}
] | 2023-07-19 | [
[
"Pyragas",
"Viktoras",
""
],
[
"Pyragas",
"Kestutis",
""
]
] | We analyze the dynamics of large networks of pulse-coupled quadratic integrate-and-fire neurons driven by Cauchy noise and non-Cauchy heterogeneous inputs. Two types of heterogeneities defined by families of $q$-Gaussian and flat distributions are considered. Both families are parametrized by an integer $n$, so that as $n$ increases, the first family tends to a normal distribution, and the second tends to a uniform distribution. For both families, exact systems of mean-field equations are derived and their bifurcation analysis is carried out. We show that noise and heterogeneity can have qualitatively different effects on the collective dynamics of neurons. |
1802.05653 | Marisa Eisenberg | Andrew F. Brouwer, Marisa C. Eisenberg, Nancy G. Love, Joseph N. S.
Eisenberg | Persistence-infectivity trade-offs in environmentally transmitted
pathogens change population-level disease dynamics | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human pathogens transmitted through environmental pathways are subject to
stress and pressures outside of the host. These pressures may cause pathogen
pathovars to diverge in their environmental persistence and their infectivity
on an evolutionary time-scale. On a shorter time-scale, a single-genotype
pathogen population may display wide variation in persistence times and exhibit
biphasic decay. Using an infectious disease transmission modeling framework, we
demonstrate in both cases that fitness-preserving trade-offs have implications
for the dynamics of associated epidemics: less infectious, more persistent
pathogens cause epidemics to progress more slowly than more infectious, less
persistent (labile) pathogens, even when the overall risk is the same. Using
identifiability analysis, we show that the usual disease surveillance data does
not sufficiently inform these underlying pathogen population dynamics, even
with basic environmental monitoring. These results suggest directions for
future microbial research and environmental monitoring. In particular,
determining the relative infectivity of persistent pathogen subpopulations and
the rates of phenotypic conversion will help ascertain how much disease risk is
associated with the long tails of biphasic decay. Alternatively, risk can be
indirectly ascertained by developing methods to separately monitor labile and
persistent subpopulations. A better understanding of persistence--infectivity
trade-offs and associated dynamics can improve risk assessment and disease
control strategies.
| [
{
"created": "Thu, 15 Feb 2018 16:48:11 GMT",
"version": "v1"
}
] | 2018-02-16 | [
[
"Brouwer",
"Andrew F.",
""
],
[
"Eisenberg",
"Marisa C.",
""
],
[
"Love",
"Nancy G.",
""
],
[
"Eisenberg",
"Joseph N. S.",
""
]
] | Human pathogens transmitted through environmental pathways are subject to stress and pressures outside of the host. These pressures may cause pathogen pathovars to diverge in their environmental persistence and their infectivity on an evolutionary time-scale. On a shorter time-scale, a single-genotype pathogen population may display wide variation in persistence times and exhibit biphasic decay. Using an infectious disease transmission modeling framework, we demonstrate in both cases that fitness-preserving trade-offs have implications for the dynamics of associated epidemics: less infectious, more persistent pathogens cause epidemics to progress more slowly than more infectious, less persistent (labile) pathogens, even when the overall risk is the same. Using identifiability analysis, we show that the usual disease surveillance data does not sufficiently inform these underlying pathogen population dynamics, even with basic environmental monitoring. These results suggest directions for future microbial research and environmental monitoring. In particular, determining the relative infectivity of persistent pathogen subpopulations and the rates of phenotypic conversion will help ascertain how much disease risk is associated with the long tails of biphasic decay. Alternatively, risk can be indirectly ascertained by developing methods to separately monitor labile and persistent subpopulations. A better understanding of persistence--infectivity trade-offs and associated dynamics can improve risk assessment and disease control strategies. |
2003.03289 | Kalel Luiz Rossi | Kalel Luiz Rossi, Roberto Cesar Budzisnki, Joao Antonio Paludo
Silveira, Bruno Rafael Reichert Boaretto, Thiago Lima Prado, Sergio Roberto
Lopes, Ulrike Feudel | Effects of neuronal variability on phase synchronization of neural
networks | 11 pages, 7 figures, to be submitted to Neural Networks journal | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An important idea in neural information processing is the
communication-through-coherence hypothesis, according to which communication
between two brain regions is effective only if they are phase-locked. Also of
importance is neuronal variability, a phenomenon in which a single neuron's
inter-firing times may be highly variable. In this work, we aim to connect
these two ideas by studying the effects of that variability on the capability
of neurons to reach phase synchronization. We simulate a network of
modified-Hodgkin-Huxley-bursting neurons possessing a small-world topology.
First, variability is shown to be correlated with the average degree of phase
synchronization of the network. Next, restricting to spatial variability -
which measures the deviation of firing times between all neurons in the network
- we show that it is positively correlated to a behavior we call promiscuity,
which is the tendency of neurons to have their relative phases change with
time. This relation is observed in all cases we tested, regardless of the
degree of synchronization or the strength of the inter-neuronal coupling: high
variability implies high promiscuity (low duration of phase-locking), even if
the network as a whole is synchronized and the coupling is strong. We argue
that spatial variability actually generates promiscuity. Therefore, we conclude
that variability has a strong influence on both the degree and the manner in
which neurons phase synchronize, which is another reason for its relevance in
neural communication.
| [
{
"created": "Thu, 27 Feb 2020 12:59:00 GMT",
"version": "v1"
}
] | 2020-03-09 | [
[
"Rossi",
"Kalel Luiz",
""
],
[
"Budzisnki",
"Roberto Cesar",
""
],
[
"Silveira",
"Joao Antonio Paludo",
""
],
[
"Boaretto",
"Bruno Rafael Reichert",
""
],
[
"Prado",
"Thiago Lima",
""
],
[
"Lopes",
"Sergio Roberto",
""
]... | An important idea in neural information processing is the communication-through-coherence hypothesis, according to which communication between two brain regions is effective only if they are phase-locked. Also of importance is neuronal variability, a phenomenon in which a single neuron's inter-firing times may be highly variable. In this work, we aim to connect these two ideas by studying the effects of that variability on the capability of neurons to reach phase synchronization. We simulate a network of modified-Hodgkin-Huxley-bursting neurons possessing a small-world topology. First, variability is shown to be correlated with the average degree of phase synchronization of the network. Next, restricting to spatial variability - which measures the deviation of firing times between all neurons in the network - we show that it is positively correlated to a behavior we call promiscuity, which is the tendency of neurons to have their relative phases change with time. This relation is observed in all cases we tested, regardless of the degree of synchronization or the strength of the inter-neuronal coupling: high variability implies high promiscuity (low duration of phase-locking), even if the network as a whole is synchronized and the coupling is strong. We argue that spatial variability actually generates promiscuity. Therefore, we conclude that variability has a strong influence on both the degree and the manner in which neurons phase synchronize, which is another reason for its relevance in neural communication. |
2208.11223 | Lu Yang | Lu Yang, Sheng Wang, and Russ B. Altman | POPDx: An Automated Framework for Patient Phenotyping across 392,246
Individuals in the UK Biobank Study | 45 pages, 6 main figures, 2 main tables. Journal of the American
Medical Informatics Association, 2022 | Journal of the American Medical Informatics Association: JAMIA,
pp.ocac226-ocac226. 2022 Dec 5 | 10.1093/jamia/ocac226 | null | q-bio.QM cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective For the UK Biobank, standardized phenotype codes are associated with
patients who have been hospitalized but are missing for many patients who have
been treated exclusively in an outpatient setting. We describe a method for
phenotype recognition that imputes phenotype codes for all UK Biobank
participants. Materials and Methods POPDx (Population-based Objective
Phenotyping by Deep Extrapolation) is a bilinear machine learning framework for
simultaneously estimating the probabilities of 1,538 phenotype codes. We
extracted phenotypic and health-related information of 392,246 individuals from
the UK Biobank for POPDx development and evaluation. A total of 12,803 ICD-10
diagnosis codes of the patients were converted to 1,538 Phecodes as gold
standard labels. The POPDx framework was evaluated and compared to other
available methods on automated multi-phenotype recognition. Results POPDx can
predict phenotypes that are rare or even unobserved in training. We demonstrate
substantial improvement of automated multi-phenotype recognition across 22
disease categories, and its application in identifying key epidemiological
features associated with each phenotype. Conclusions POPDx helps provide
well-defined cohorts for downstream studies. It is a general purpose method
that can be applied to other biobanks with diverse but incomplete data.
| [
{
"created": "Tue, 23 Aug 2022 22:43:39 GMT",
"version": "v1"
},
{
"created": "Fri, 18 Nov 2022 01:48:54 GMT",
"version": "v2"
}
] | 2022-12-13 | [
[
"Yang",
"Lu",
""
],
[
"Wang",
"Sheng",
""
],
[
"Altman",
"Russ B.",
""
]
] | Objective For the UK Biobank, standardized phenotype codes are associated with patients who have been hospitalized but are missing for many patients who have been treated exclusively in an outpatient setting. We describe a method for phenotype recognition that imputes phenotype codes for all UK Biobank participants. Materials and Methods POPDx (Population-based Objective Phenotyping by Deep Extrapolation) is a bilinear machine learning framework for simultaneously estimating the probabilities of 1,538 phenotype codes. We extracted phenotypic and health-related information of 392,246 individuals from the UK Biobank for POPDx development and evaluation. A total of 12,803 ICD-10 diagnosis codes of the patients were converted to 1,538 Phecodes as gold standard labels. The POPDx framework was evaluated and compared to other available methods on automated multi-phenotype recognition. Results POPDx can predict phenotypes that are rare or even unobserved in training. We demonstrate substantial improvement of automated multi-phenotype recognition across 22 disease categories, and its application in identifying key epidemiological features associated with each phenotype. Conclusions POPDx helps provide well-defined cohorts for downstream studies. It is a general purpose method that can be applied to other biobanks with diverse but incomplete data. |
1901.05071 | Michel Kana PhD | Michel Kana | Mathematical models of cardiovascular control by the autonomic nervous
system | PhD thesis submitted in June, 2010, successfully defended on Feb 3,
2011 | 228 pages, 148 figures, 192 citations | Supervisor: Prof. Ing. Jiri
Holcik, CSc. | Reviewers: Prof. Richard Reilly, Ph.D.; Doc. Ing. Milan
Tysler, CSc. | Department of Biomedical Informatics, Czech Technical
University in Prague, Czech Republic | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This PhD thesis develops an integrated mathematical model for autonomic
nervous system control on cardiovascular activity. The model extensively covers
cardiovascular neural pathways including a wide range of afferent sensory
neurons, central processing by autonomic premotor neurons, efferent outputs via
preganglionic and postganglionic autonomic neurons and dynamics of
neurotransmitters at cardiovascular effector organs. We performed over 500
cardiovascular experiments using clinical autonomic tests on 72 subjects
ranging from 11 to 82 years old and collected typical cardiovascular signals
such as electrocardiogram, arterial pulse, arterial blood pressure, respiration
pattern, galvanic skin response and skin temperature. After statistical
evaluation in the time and frequency domains, the data were especially used to
resolve a constrained optimization task. Results bring evidence supporting
the hypothesis that Mayer waves result from a rhythmic sympathetic discharge of
pacemaker-like sympathetic premotor neurons. Simulation also shows that
vagally-mediated tachycardia, observed during vagal maneuvers on some subjects
could be related to the secretion of vasoactive neurotransmitters by the vagal
nerve. We additionally identified model parameters for estimating the resting
sympathetic and parasympathetic tone which are believed to be linked to some
pathological states. Results show higher vagal tone in young subjects with a
decreasing trend with aging, which agrees with the data from heart rate
variability studies. Tonic sympathetic activity was found to possibly emerge
from pacemaker premotor neurons, but also from activation of chemoreceptors to
a lesser extent. The thesis opens perspectives for future work including
validating the markers of autonomic tone provided by our model against data
from experiments with pharmacological blockers and invasive neural activity
recordings.
| [
{
"created": "Sat, 29 Dec 2018 07:19:18 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Jan 2019 12:59:33 GMT",
"version": "v2"
}
] | 2019-01-18 | [
[
"Kana",
"Michel",
""
]
] | This PhD thesis develops an integrated mathematical model for autonomic nervous system control on cardiovascular activity. The model extensively covers cardiovascular neural pathways including a wide range of afferent sensory neurons, central processing by autonomic premotor neurons, efferent outputs via preganglionic and postganglionic autonomic neurons and dynamics of neurotransmitters at cardiovascular effector organs. We performed over 500 cardiovascular experiments using clinical autonomic tests on 72 subjects ranging from 11 to 82 years old and collected typical cardiovascular signals such as electrocardiogram, arterial pulse, arterial blood pressure, respiration pattern, galvanic skin response and skin temperature. After statistical evaluation in the time and frequency domains, the data were especially used to resolve a constrained optimization task. Results bring evidence supporting the hypothesis that Mayer waves result from a rhythmic sympathetic discharge of pacemaker-like sympathetic premotor neurons. Simulation also shows that vagally-mediated tachycardia, observed during vagal maneuvers on some subjects could be related to the secretion of vasoactive neurotransmitters by the vagal nerve. We additionally identified model parameters for estimating the resting sympathetic and parasympathetic tone which are believed to be linked to some pathological states. Results show higher vagal tone in young subjects with a decreasing trend with aging, which agrees with the data from heart rate variability studies. Tonic sympathetic activity was found to possibly emerge from pacemaker premotor neurons, but also from activation of chemoreceptors to a lesser extent. The thesis opens perspectives for future work including validating the markers of autonomic tone provided by our model against data from experiments with pharmacological blockers and invasive neural activity recordings. |
2105.03254 | Benoit Goussen | Tjalling Jager, Marie Trijau, Neil Sherborne, Benoit Goussen, Roman
Ashauer | Considerations for using reproduction data in
toxicokinetic-toxicodynamic modelling | 13 pages | Integr Environ Assess Manag (2021) 18(2):479-487 | 10.1002/ieam.4476 | null | q-bio.QM cs.OH q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Toxicokinetic-toxicodynamic (TKTD) modelling is essential to make sense of
the time dependence of toxic effects, and to interpret and predict consequences
of time-varying exposure. These advantages have been recognised in the
regulatory arena, especially for environmental risk assessment (ERA) of
pesticides, where time-varying exposure is the norm. We critically evaluate the
link between the modelled variables in TKTD models and the observations from
laboratory ecotoxicity tests. For the endpoint reproduction, this link is far
from trivial. The relevant TKTD models for sub-lethal effects are based on
Dynamic-Energy Budget (DEB) theory, which specifies a continuous investment
flux into reproduction. In contrast, experimental tests score egg or offspring
release by the mother. The link between model and data is particularly
troublesome when a species reproduces in discrete clutches, and even more so
when eggs are incubated in the mother's brood pouch (and release of neonates is
scored in the test). This situation is quite common among aquatic invertebrates
(e.g., cladocerans, amphipods, mysids), including many popular test species. We
discuss these and other issues with reproduction data, reflect on their
potential impact on DEB-TKTD analysis, and provide preliminary recommendations
to correct them. Both modellers and users of model results need to be aware of
these complications, as ignoring them could easily lead to unnecessary failure
of DEB-TKTD models during calibration, or when validating them against
independent data for other exposure scenarios.
| [
{
"created": "Tue, 4 May 2021 10:13:04 GMT",
"version": "v1"
}
] | 2022-03-22 | [
[
"Jager",
"Tjalling",
""
],
[
"Trijau",
"Marie",
""
],
[
"Sherborne",
"Neil",
""
],
[
"Goussen",
"Benoit",
""
],
[
"Ashauer",
"Roman",
""
]
] | Toxicokinetic-toxicodynamic (TKTD) modelling is essential to make sense of the time dependence of toxic effects, and to interpret and predict consequences of time-varying exposure. These advantages have been recognised in the regulatory arena, especially for environmental risk assessment (ERA) of pesticides, where time-varying exposure is the norm. We critically evaluate the link between the modelled variables in TKTD models and the observations from laboratory ecotoxicity tests. For the endpoint reproduction, this link is far from trivial. The relevant TKTD models for sub-lethal effects are based on Dynamic-Energy Budget (DEB) theory, which specifies a continuous investment flux into reproduction. In contrast, experimental tests score egg or offspring release by the mother. The link between model and data is particularly troublesome when a species reproduces in discrete clutches, and even more so when eggs are incubated in the mother's brood pouch (and release of neonates is scored in the test). This situation is quite common among aquatic invertebrates (e.g., cladocerans, amphipods, mysids), including many popular test species. We discuss these and other issues with reproduction data, reflect on their potential impact on DEB-TKTD analysis, and provide preliminary recommendations to correct them. Both modellers and users of model results need to be aware of these complications, as ignoring them could easily lead to unnecessary failure of DEB-TKTD models during calibration, or when validating them against independent data for other exposure scenarios. |
1902.06614 | Christopher Overton | Christopher E. Overton, Mark Broom, Christoforos Hadjichrysanthou and
Kieran J. Sharkey | Methods for approximating stochastic evolutionary dynamics on graphs | null | J. Theor. Biol. 468, 45-59 (2019) | 10.1016/j.jtbi.2019.02.009 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Population structure can have a significant effect on evolution. For some
systems with sufficient symmetry, analytic results can be derived within the
mathematical framework of evolutionary graph theory which relate to the outcome
of the evolutionary process. However, for more complicated heterogeneous
structures, computationally intensive methods are required such as
individual-based stochastic simulations. By adapting methods from statistical
physics, including moment closure techniques, we first show how to derive
existing homogenised pair approximation models and the exact neutral drift
model. We then develop node-level approximations to stochastic evolutionary
processes on arbitrarily complex structured populations represented by finite
graphs, which can capture the different dynamics for individual nodes in the
population. Using these approximations, we evaluate the fixation probability of
invading mutants for given initial conditions, where the dynamics follow
standard evolutionary processes such as the invasion process. Comparisons with
the output of stochastic simulations reveal the effectiveness of our
approximations in describing the stochastic processes and in predicting the
probability of fixation of mutants on a wide range of graphs. Construction of
these models facilitates a systematic analysis and is valuable for a greater
understanding of the influence of population structure on evolutionary
processes.
| [
{
"created": "Mon, 18 Feb 2019 15:35:08 GMT",
"version": "v1"
}
] | 2019-03-11 | [
[
"Overton",
"Christopher E.",
""
],
[
"Broom",
"Mark",
""
],
[
"Hadjichrysanthou",
"Christoforos",
""
],
[
"Sharkey",
"Kieran J.",
""
]
] | Population structure can have a significant effect on evolution. For some systems with sufficient symmetry, analytic results can be derived within the mathematical framework of evolutionary graph theory which relate to the outcome of the evolutionary process. However, for more complicated heterogeneous structures, computationally intensive methods are required such as individual-based stochastic simulations. By adapting methods from statistical physics, including moment closure techniques, we first show how to derive existing homogenised pair approximation models and the exact neutral drift model. We then develop node-level approximations to stochastic evolutionary processes on arbitrarily complex structured populations represented by finite graphs, which can capture the different dynamics for individual nodes in the population. Using these approximations, we evaluate the fixation probability of invading mutants for given initial conditions, where the dynamics follow standard evolutionary processes such as the invasion process. Comparisons with the output of stochastic simulations reveal the effectiveness of our approximations in describing the stochastic processes and in predicting the probability of fixation of mutants on a wide range of graphs. Construction of these models facilitates a systematic analysis and is valuable for a greater understanding of the influence of population structure on evolutionary processes. |
q-bio/0506039 | Heiko Rieger | K. Bartha and H. Rieger | Vascular network remodeling via vessel cooption, regression and growth
in tumors | 30 pages, 11 figures (higher resolution at
http://www.uni-saarland.de/fak7/rieger/HOMEPAGE/BJ0.pdf) | J. Theor. Biol. 241, 903 (2006) | 10.1016/j.jtbi.2006.01.022 | null | q-bio.TO physics.bio-ph q-bio.CB | null | The transformation of the regular vasculature in normal tissue into a highly
inhomogeneous tumor specific capillary network is described by a theoretical
model incorporating tumor growth, vessel cooption, neo-vascularization, vessel
collapse and cell death. Compartmentalization of the tumor into several regions
differing in vessel density, diameter and in necrosis is observed for a wide
range of parameters in agreement with the vessel morphology found in human
melanoma. In accord with data for human melanoma, the model predicts that
microvascular density (MVD), regarded as an important diagnostic tool in cancer
treatment, does not necessarily determine the tempo of tumor progression.
Instead it is suggested that the MVD of the original tissue as well as the
metabolic demand of the individual tumor cell plays the major role in the
initial stages of tumor growth.
| [
{
"created": "Mon, 27 Jun 2005 23:55:56 GMT",
"version": "v1"
}
] | 2016-09-08 | [
[
"Bartha",
"K.",
""
],
[
"Rieger",
"H.",
""
]
] | The transformation of the regular vasculature in normal tissue into a highly inhomogeneous tumor specific capillary network is described by a theoretical model incorporating tumor growth, vessel cooption, neo-vascularization, vessel collapse and cell death. Compartmentalization of the tumor into several regions differing in vessel density, diameter and in necrosis is observed for a wide range of parameters in agreement with the vessel morphology found in human melanoma. In accord with data for human melanoma, the model predicts that microvascular density (MVD), regarded as an important diagnostic tool in cancer treatment, does not necessarily determine the tempo of tumor progression. Instead it is suggested that the MVD of the original tissue as well as the metabolic demand of the individual tumor cell plays the major role in the initial stages of tumor growth. |
2009.05847 | Arash Hooshmand | Arash Hooshmand | Machine Learning Against Cancer: Accurate Diagnosis of Cancer by Machine
Learning Classification of the Whole Genome Sequencing Data | 29 pages, 3 figures, 45 tables | null | null | null | q-bio.GN cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning can precisely identify different cancer tumors at any stage
by classifying cancerous and healthy samples based on their genomic profile. We
have developed novel methods of MLAC (Machine Learning Against Cancer)
achieving perfect results with perfect precision, sensitivity, and specificity.
We have used the whole genome sequencing data acquired by next-generation RNA
sequencing techniques in The Cancer Genome Atlas and Genotype-Tissue Expression
projects for cancerous and healthy tissues respectively. Moreover, we have
shown that unsupervised machine learning clustering has great potential to be
used for cancer diagnosis. Indeed, a creative way to work with data and general
algorithms has resulted in perfect classification i.e. all precision,
sensitivity, and specificity are equal to 1 for most of the different tumor
types even with a modest amount of data, and the same method works well on a
series of cancers and results in great clustering of cancerous and healthy
samples too. Our system can be used in practice because once the classifier is
trained, it can be used to classify any new sample of new potential patients.
One advantage of our work is that the aforementioned perfect precision and
recall are obtained on samples of all stages including very early stages of
cancer; therefore, it is a promising tool for diagnosis of cancers in early
stages. Another advantage of our novel model is that it works with normalized
values of RNA sequencing data, hence people's private sensitive medical data
will remain hidden, protected, and safe. This type of analysis will be
widespread and economical in the future and people can even learn to receive
their RNA sequencing data and do their own preliminary cancer studies
themselves which have the potential to help the healthcare systems. It is a
great step forward toward good health that is the main base of sustainable
societies.
| [
{
"created": "Sat, 12 Sep 2020 18:51:47 GMT",
"version": "v1"
}
] | 2020-09-15 | [
[
"Hooshmand",
"Arash",
""
]
] | Machine learning can precisely identify different cancer tumors at any stage by classifying cancerous and healthy samples based on their genomic profile. We have developed novel methods of MLAC (Machine Learning Against Cancer) achieving perfect results with perfect precision, sensitivity, and specificity. We have used the whole genome sequencing data acquired by next-generation RNA sequencing techniques in The Cancer Genome Atlas and Genotype-Tissue Expression projects for cancerous and healthy tissues respectively. Moreover, we have shown that unsupervised machine learning clustering has great potential to be used for cancer diagnosis. Indeed, a creative way to work with data and general algorithms has resulted in perfect classification i.e. all precision, sensitivity, and specificity are equal to 1 for most of the different tumor types even with a modest amount of data, and the same method works well on a series of cancers and results in great clustering of cancerous and healthy samples too. Our system can be used in practice because once the classifier is trained, it can be used to classify any new sample of new potential patients. One advantage of our work is that the aforementioned perfect precision and recall are obtained on samples of all stages including very early stages of cancer; therefore, it is a promising tool for diagnosis of cancers in early stages. Another advantage of our novel model is that it works with normalized values of RNA sequencing data, hence people's private sensitive medical data will remain hidden, protected, and safe. This type of analysis will be widespread and economical in the future and people can even learn to receive their RNA sequencing data and do their own preliminary cancer studies themselves which have the potential to help the healthcare systems. It is a great step forward toward good health that is the main base of sustainable societies. |
1305.2677 | Ralf Metzler | Otto Pulkkinen and Ralf Metzler | Distance matters: the impact of gene proximity in bacterial gene
regulation | 5 pages, 2 figures; Supplementary material contained in the source
files | Phys Rev Lett 110, 198101 (2013) | 10.1103/PhysRevLett.110.198101 | null | q-bio.SC cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Following recent discoveries of colocalization of downstream-regulating genes
in living cells, the impact of the spatial distance between such genes on the
kinetics of gene product formation is increasingly recognized. We here show
from analytical and numerical analysis that the distance between a
transcription factor (TF) gene and its target gene drastically affects the
speed and reliability of transcriptional regulation in bacterial cells. For an
explicit model system we develop a general theory for the interactions between
a TF and a transcription unit. The observed variations in regulation efficiency
are linked to the magnitude of the variation of the TF concentration peaks as a
function of the binding site distance from the signal source. Our results
support the role of rapid binding site search for gene colocalization and
emphasize the role of local concentration differences.
| [
{
"created": "Mon, 13 May 2013 05:35:32 GMT",
"version": "v1"
}
] | 2015-06-15 | [
[
"Pulkkinen",
"Otto",
""
],
[
"Metzler",
"Ralf",
""
]
] | Following recent discoveries of colocalization of downstream-regulating genes in living cells, the impact of the spatial distance between such genes on the kinetics of gene product formation is increasingly recognized. We here show from analytical and numerical analysis that the distance between a transcription factor (TF) gene and its target gene drastically affects the speed and reliability of transcriptional regulation in bacterial cells. For an explicit model system we develop a general theory for the interactions between a TF and a transcription unit. The observed variations in regulation efficiency are linked to the magnitude of the variation of the TF concentration peaks as a function of the binding site distance from the signal source. Our results support the role of rapid binding site search for gene colocalization and emphasize the role of local concentration differences. |
0903.2379 | Szymon {\L}{\ke}ski | Daniel K. Wojcik, Szymon Leski | Current source density reconstruction from incomplete data | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose two ways of estimating the current source density (CSD) from
measurements of voltage on a Cartesian grid with missing recording points using
the inverse CSD method. The simplest approach is to substitute local averages
(LA) in place of missing data. A more elaborate alternative is to estimate a
smaller number of CSD parameters than the actual number of recordings and to
take the least-squares fit (LS). We compare the two approaches in the three
dimensional case on several sets of surrogate and experimental data, for
varying numbers of missing data points, and discuss their advantages and
drawbacks. One can construct CSD distributions for which one or the other
approach is better. However, in general, the LA method is to be recommended,
being more stable and more robust to variations in the recorded fields.
| [
{
"created": "Fri, 13 Mar 2009 14:18:21 GMT",
"version": "v1"
}
] | 2009-03-16 | [
[
"Wojcik",
"Daniel K.",
""
],
[
"Leski",
"Szymon",
""
]
] | We propose two ways of estimating the current source density (CSD) from measurements of voltage on a Cartesian grid with missing recording points using the inverse CSD method. The simplest approach is to substitute local averages (LA) in place of missing data. A more elaborate alternative is to estimate a smaller number of CSD parameters than the actual number of recordings and to take the least-squares fit (LS). We compare the two approaches in the three dimensional case on several sets of surrogate and experimental data, for varying numbers of missing data points, and discuss their advantages and drawbacks. One can construct CSD distributions for which one or the other approach is better. However, in general, the LA method is to be recommended, being more stable and more robust to variations in the recorded fields. |
2201.10598 | Adam Thomas | Nikhil Goyal, Dustin Moraczewski, Peter A. Bandettini, Emily S. Finn,
Adam G. Thomas | The positive-negative mode link between brain connectivity,
demographics, and behavior: A pre-registered replication of Smith et al. 2015 | Accepted for publication in Royal Society Open Science on 2021-12-21 | null | 10.1098/rsos.201090 | null | q-bio.NC | http://creativecommons.org/publicdomain/zero/1.0/ | In mental health research, it has proven difficult to find measures of brain
function that provide reliable indicators of mental health and well-being,
including susceptibility to mental health disorders. Recently, a family of
data-driven analyses has provided such reliable measures when applied to
large, population-level datasets. In the current pre-registered replication
study, we show that the canonical correlation analysis (CCA) methods previously
developed using resting-state MRI functional connectivity and subject measures
of cognition and behavior from healthy adults are also effective in measuring
well-being (a "positive-negative axis") in an independent developmental
dataset. Our replication was successful in two out of three of our
pre-registered criteria, such that a primary CCA mode's weights displayed a
significant positive relationship and explained a significant amount of
variance in both functional connectivity and subject measures. The only
criterion that was not met was that, compared to other modes, the magnitude
of variance explained by the primary CCA mode was smaller than predicted, a
result which could indicate a developmental trajectory of a primary mode. This
replication establishes a signature neurotypical relationship between
connectivity and phenotype, opening new avenues of research in neuroscience
with clear clinical applications.
| [
{
"created": "Tue, 25 Jan 2022 19:45:43 GMT",
"version": "v1"
}
] | 2022-01-27 | [
[
"Goyal1",
"Nikhil",
""
],
[
"Moraczewski",
"Dustin",
""
],
[
"Bandettini",
"Peter A.",
""
],
[
"Finn",
"Emily S.",
""
],
[
"Thomas",
"Adam G.",
""
]
] | In mental health research, it has proven difficult to find measures of brain function that provide reliable indicators of mental health and well-being, including susceptibility to mental health disorders. Recently, a family of data-driven analyses has provided such reliable measures when applied to large, population-level datasets. In the current pre-registered replication study, we show that the canonical correlation analysis (CCA) methods previously developed using resting-state MRI functional connectivity and subject measures of cognition and behavior from healthy adults are also effective in measuring well-being (a "positive-negative axis") in an independent developmental dataset. Our replication was successful in two out of three of our pre-registered criteria, such that a primary CCA mode's weights displayed a significant positive relationship and explained a significant amount of variance in both functional connectivity and subject measures. The only criterion that was not met was that, compared to other modes, the magnitude of variance explained by the primary CCA mode was smaller than predicted, a result which could indicate a developmental trajectory of a primary mode. This replication establishes a signature neurotypical relationship between connectivity and phenotype, opening new avenues of research in neuroscience with clear clinical applications. |
1407.7988 | Luke Jostins | Luke Jostins, Yali Xu, Shane McCarthy, Qasim Ayub, Richard Durbin,
Jeff Barrett, Chris Tyler-Smith | YFitter: Maximum likelihood assignment of Y chromosome haplogroups from
low-coverage sequence data | null | null | null | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Low-coverage short-read resequencing experiments have the potential to expand
our understanding of Y chromosome haplogroups. However, the uncertainty
associated with these experiments means that haplogroups must be assigned
probabilistically to avoid false inferences. We propose an efficient dynamic
programming algorithm that can assign haplogroups by maximum likelihood, and
represent the uncertainty in assignment. We apply this to both genotype and
low-coverage sequencing data, and show that it can assign haplogroups
accurately and with high resolution. The method is implemented as the program
YFitter, which can be downloaded from http://sourceforge.net/projects/yfitter/
| [
{
"created": "Wed, 30 Jul 2014 10:20:44 GMT",
"version": "v1"
}
] | 2014-07-31 | [
[
"Jostins",
"Luke",
""
],
[
"Xu",
"Yali",
""
],
[
"McCarthy",
"Shane",
""
],
[
"Ayub",
"Qasim",
""
],
[
"Durbin",
"Richard",
""
],
[
"Barrett",
"Jeff",
""
],
[
"Tyler-Smith",
"Chris",
""
]
] | Low-coverage short-read resequencing experiments have the potential to expand our understanding of Y chromosome haplogroups. However, the uncertainty associated with these experiments means that haplogroups must be assigned probabilistically to avoid false inferences. We propose an efficient dynamic programming algorithm that can assign haplogroups by maximum likelihood, and represent the uncertainty in assignment. We apply this to both genotype and low-coverage sequencing data, and show that it can assign haplogroups accurately and with high resolution. The method is implemented as the program YFitter, which can be downloaded from http://sourceforge.net/projects/yfitter/ |
2308.09379 | Aram Mohammed | Anwar Mohammed Raouf, Kocher Omer Salih, Aram Akram Mohammad | Examination of Some Nut Traits and Release From Dormancy Along With
Germination Capacity in Some Bitter Almond Genotypes | null | null | 10.25130/tjas.21.4.4 | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | This study was conducted at College of Agricultural Engineering Sciences,
University of Sulaimani, Kurdistan Region, Iraq, to investigate some nut
traits in 10 bitter almond genotypes, their capacity to release from dormancy
and, finally, their germination ability. Nut traits were measured; the nuts
were stratified in a sand medium at 6 C in a refrigerator for 55 days, then
sown in fine sand on August 22, 2021 for 29 days to calculate germination
percentage. There
were great discrepancies among genotypes in nut traits. Nut length was between
(23.66-32.73 mm), nut width (18.77-21.84 mm), nut size (3.16-4.26 cm3), nut
weight (2.67-4.13 g), kernel weight (0.6-0.99 g), shell weight (1.94-3.27 g)
and shell thickness (2.31-3.37 mm). The results of release from dormancy and
calculation of germination percentage trials showed that the highest nut
numbers from G6 (100%), G4 (86.11%) and G2 (65.22%) were released from dormancy
and the same genotypes gave the best germination percentage; in particular,
G6 and G2 both gave 73.33% germination. Based on the release percentage from
dormancy and the germination percentage, G6 and G2, along with G4, were the
best genotypes.
| [
{
"created": "Fri, 18 Aug 2023 08:15:07 GMT",
"version": "v1"
}
] | 2023-08-21 | [
[
"Raouf",
"Anwar Mohammed",
""
],
[
"Salih",
"Kocher Omer",
""
],
[
"Mohammad",
"Aram Akram",
""
]
] | This study was conducted at College of Agricultural Engineering Sciences, University of Sulaimani, Kurdistan Region, Iraq, to investigate some nut traits in 10 bitter almond genotypes, their capacity to release from dormancy and, finally, their germination ability. Nut traits were measured; the nuts were stratified in a sand medium at 6 C in a refrigerator for 55 days, then sown in fine sand on August 22, 2021 for 29 days to calculate germination percentage. There were great discrepancies among genotypes in nut traits. Nut length was between (23.66-32.73 mm), nut width (18.77-21.84 mm), nut size (3.16-4.26 cm3), nut weight (2.67-4.13 g), kernel weight (0.6-0.99 g), shell weight (1.94-3.27 g) and shell thickness (2.31-3.37 mm). The results of release from dormancy and calculation of germination percentage trials showed that the highest nut numbers from G6 (100%), G4 (86.11%) and G2 (65.22%) were released from dormancy and the same genotypes gave the best germination percentage; in particular, G6 and G2 both gave 73.33% germination. Based on the release percentage from dormancy and the germination percentage, G6 and G2, along with G4, were the best genotypes. |
0909.1442 | Steven Kelk | Harry Buhrman, Peter T. S. van der Gulik, Steven M. Kelk, Wouter M.
Koolen, Leen Stougie | Some mathematical refinements concerning error minimization in the
genetic code | Substantially revised with respect to the earlier version. Currently
in review | null | null | null | q-bio.QM q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The genetic code has been shown to be very error robust compared to randomly
selected codes, but to be significantly less error robust than a certain code
found by a heuristic algorithm. We formulate this optimisation problem as a
Quadratic Assignment Problem and thus verify that the code found by the
heuristic is the global optimum. We also argue that it is strongly misleading
to compare the genetic code only with codes sampled from the fixed block model,
because the real code space is orders of magnitude larger. We thus enlarge the
space from which random codes can be sampled from approximately 2.433 x 10^18
codes to approximately 5.908 x 10^45 codes. We do this by leaving the fixed
block model, and using the wobble rules to formulate the characteristics
acceptable for a genetic code. By relaxing more constraints three larger spaces
are also constructed. Using a modified error function, the genetic code is
found to be more error robust compared to a background of randomly generated
codes with increasing space size. We point out that these results do not
necessarily imply that the code was optimized during evolution for error
minimization, but that other mechanisms could explain this error robustness.
| [
{
"created": "Tue, 8 Sep 2009 09:54:21 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Jul 2010 09:21:22 GMT",
"version": "v2"
}
] | 2015-03-13 | [
[
"Buhrman",
"Harry",
""
],
[
"van der Gulik",
"Peter T. S.",
""
],
[
"Kelk",
"Steven M.",
""
],
[
"Koolen",
"Wouter M.",
""
],
[
"Stougie",
"Leen",
""
]
] | The genetic code has been shown to be very error robust compared to randomly selected codes, but to be significantly less error robust than a certain code found by a heuristic algorithm. We formulate this optimisation problem as a Quadratic Assignment Problem and thus verify that the code found by the heuristic is the global optimum. We also argue that it is strongly misleading to compare the genetic code only with codes sampled from the fixed block model, because the real code space is orders of magnitude larger. We thus enlarge the space from which random codes can be sampled from approximately 2.433 x 10^18 codes to approximately 5.908 x 10^45 codes. We do this by leaving the fixed block model, and using the wobble rules to formulate the characteristics acceptable for a genetic code. By relaxing more constraints three larger spaces are also constructed. Using a modified error function, the genetic code is found to be more error robust compared to a background of randomly generated codes with increasing space size. We point out that these results do not necessarily imply that the code was optimized during evolution for error minimization, but that other mechanisms could explain this error robustness. |
1106.4450 | Matthias Jorg Fuhr | M. J. Fuhr, C. St\"uhrk, B. M\"unch, F. W. M. R. Schwarze and M.
Schubert | Automated Quantification of the Impact of the Wood-decay fungus
Physisporinus vitreus on the Cell Wall Structure of Norway spruce by
Tomographic Microscopy | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wood-decay fungi decompose their substrate by extracellular, degradative
enzymes and play an important role in natural ecosystems by recycling carbon
and minerals fixed in plants. Thereby, they cause significant damage to the
wood structure and limit the use of wood as building material. Besides their
role as biodeteriorators wood-decay fungi can be used for biotechnological
purposes, e.g. the white-rot fungus Physisporinus vitreus for improving the
uptake of preservatives and wood-modification substances of refractory wood.
Therefore, the visualization and the quantification of microscopic decay
patterns are important for the study of the impact of wood-decay fungi in
general, as well as for wood-decay fungi and microorganisms with possible
applications in biotechnology. In the present work, we developed a method for
the automated localization and quantification of microscopic cell wall elements
(CWE) of Norway spruce wood such as bordered pits, intrinsic defects, hyphae or
alterations induced by P. vitreus using high resolution X-ray computed
tomographic microscopy. In addition to classical destructive wood anatomical
methods such as light or laser scanning microscopy, our method allows, for the
first time, computing the properties (e.g. area, orientation and
size-distribution) of the CWE of the tracheids in a sample. This is essential
for modeling the influence of microscopic CWE on macroscopic properties such as
wood strength and permeability.
| [
{
"created": "Wed, 22 Jun 2011 14:01:34 GMT",
"version": "v1"
}
] | 2011-06-23 | [
[
"Fuhr",
"M. J.",
""
],
[
"Stührk",
"C.",
""
],
[
"Münch",
"B.",
""
],
[
"Schwarze",
"F. W. M. R.",
""
],
[
"Schubert",
"M.",
""
]
] | Wood-decay fungi decompose their substrate by extracellular, degradative enzymes and play an important role in natural ecosystems by recycling carbon and minerals fixed in plants. Thereby, they cause significant damage to the wood structure and limit the use of wood as building material. Besides their role as biodeteriorators wood-decay fungi can be used for biotechnological purposes, e.g. the white-rot fungus Physisporinus vitreus for improving the uptake of preservatives and wood-modification substances of refractory wood. Therefore, the visualization and the quantification of microscopic decay patterns are important for the study of the impact of wood-decay fungi in general, as well as for wood-decay fungi and microorganisms with possible applications in biotechnology. In the present work, we developed a method for the automated localization and quantification of microscopic cell wall elements (CWE) of Norway spruce wood such as bordered pits, intrinsic defects, hyphae or alterations induced by P. vitreus using high resolution X-ray computed tomographic microscopy. In addition to classical destructive wood anatomical methods such as light or laser scanning microscopy, our method allows, for the first time, computing the properties (e.g. area, orientation and size-distribution) of the CWE of the tracheids in a sample. This is essential for modeling the influence of microscopic CWE on macroscopic properties such as wood strength and permeability. |
2102.02649 | Kleber Padovani | Kleber Padovani, Roberto Xavier, Rafael Cabral Borges, Andre Carvalho,
Anna Reali, Annie Chateau, Ronnie Alves | A step toward a reinforcement learning de novo genome assembler | null | null | null | null | q-bio.GN cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | De novo genome assembly is a relevant but computationally complex task in
genomics. Although de novo assemblers have been used successfully in several
genomics projects, there is still no 'best assembler', and the choice and setup
of assemblers still rely on bioinformatics experts. Thus, as with other
computationally complex problems, machine learning may emerge as an alternative
(or complementary) way for developing more accurate and automated assemblers.
Reinforcement learning has proven promising for solving complex activities
without supervision - such as games - and there is a pressing need to understand
the limits of this approach to 'real' problems, such as the DFA problem. This
study aimed to shed light on the application of machine learning, using
reinforcement learning (RL), in genome assembly. We expanded upon the sole
previous approach found in the literature to solve this problem by carefully
exploring the learning aspects of the proposed intelligent agent, which uses
the Q-learning algorithm, and we provided insights for the next steps of
automated genome assembly development. We improved the reward system and
optimized the exploration of the state space based on pruning and in
collaboration with evolutionary computing. We tested the new approaches on 23
new larger environments, which are all available on the internet. Our results
suggest consistent performance progress; however, we also found limitations,
especially concerning the high dimensionality of state and action spaces.
Finally, we discuss paths for achieving efficient and automated genome assembly
in real scenarios considering successful RL applications - including deep
reinforcement learning.
| [
{
"created": "Tue, 2 Feb 2021 23:43:42 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Jun 2021 23:16:39 GMT",
"version": "v2"
},
{
"created": "Thu, 3 Nov 2022 17:23:25 GMT",
"version": "v3"
},
{
"created": "Thu, 7 Mar 2024 20:47:45 GMT",
"version": "v4"
}
] | 2024-03-11 | [
[
"Padovani",
"Kleber",
""
],
[
"Xavier",
"Roberto",
""
],
[
"Borges",
"Rafael Cabral",
""
],
[
"Carvalho",
"Andre",
""
],
[
"Reali",
"Anna",
""
],
[
"Chateau",
"Annie",
""
],
[
"Alves",
"Ronnie",
""
]
] | De novo genome assembly is a relevant but computationally complex task in genomics. Although de novo assemblers have been used successfully in several genomics projects, there is still no 'best assembler', and the choice and setup of assemblers still rely on bioinformatics experts. Thus, as with other computationally complex problems, machine learning may emerge as an alternative (or complementary) way for developing more accurate and automated assemblers. Reinforcement learning has proven promising for solving complex activities without supervision - such as games - and there is a pressing need to understand the limits of this approach to 'real' problems, such as the DFA problem. This study aimed to shed light on the application of machine learning, using reinforcement learning (RL), in genome assembly. We expanded upon the sole previous approach found in the literature to solve this problem by carefully exploring the learning aspects of the proposed intelligent agent, which uses the Q-learning algorithm, and we provided insights for the next steps of automated genome assembly development. We improved the reward system and optimized the exploration of the state space based on pruning and in collaboration with evolutionary computing. We tested the new approaches on 23 new larger environments, which are all available on the internet. Our results suggest consistent performance progress; however, we also found limitations, especially concerning the high dimensionality of state and action spaces. Finally, we discuss paths for achieving efficient and automated genome assembly in real scenarios considering successful RL applications - including deep reinforcement learning.
q-bio/0607004 | Eugene Shakhnovich | Konstantin B. Zeldovich, Igor N. Berezovsky, Eugene I. Shakhnovich | Protein and DNA sequence determinants of thermophilic adaptation | in press PLoS Computational Biology; revised version | null | 10.1371/journal.pcbi.0030005 | null | q-bio.BM q-bio.GN | null | Prokaryotes living at extreme environmental temperatures exhibit pronounced
signatures in the amino acid composition of their proteins and nucleotide
compositions of their genomes reflective of adaptation to their thermal
environments. However, despite significant efforts, a definitive answer as to
the genomic and proteomic compositional determinants of Optimal Growth
Temperature of prokaryotic organisms has remained elusive. Here the authors
performed a comprehensive analysis of amino acid and nucleotide compositional
signatures of thermophilic adaptation by exhaustively evaluating all
combinations of amino acids and nucleotides as possible determinants of Optimal
Growth Temperature for all prokaryotic organisms with fully sequenced genomes.
The authors discovered that the total concentration of seven amino acids in
proteomes, IVYWREL, serves as a universal proteomic predictor of Optimal Growth
Temperature in prokaryotes. Resolving the long-standing controversy, the authors
determined that the variation in nucleotide composition (increase of purine
load, or A+G content, with temperature) is largely a consequence of thermal
adaptation of proteins. However, the frequency with which A and G nucleotides
appear as nearest neighbors in genome sequences is strongly and independently
correlated with Optimal Growth Temperature, as a result of codon bias in
the corresponding genomes. Together these results provide a complete picture of
proteomic and genomic determinants of thermophilic adaptation.
| [
{
"created": "Tue, 4 Jul 2006 19:49:14 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Nov 2006 03:51:45 GMT",
"version": "v2"
}
] | 2015-06-26 | [
[
"Zeldovich",
"Konstantin B.",
""
],
[
"Berezovsky",
"Igor N.",
""
],
[
"Shakhnovich",
"Eugene I.",
""
]
] | Prokaryotes living at extreme environmental temperatures exhibit pronounced signatures in the amino acid composition of their proteins and nucleotide compositions of their genomes reflective of adaptation to their thermal environments. However, despite significant efforts, a definitive answer as to the genomic and proteomic compositional determinants of Optimal Growth Temperature of prokaryotic organisms has remained elusive. Here the authors performed a comprehensive analysis of amino acid and nucleotide compositional signatures of thermophilic adaptation by exhaustively evaluating all combinations of amino acids and nucleotides as possible determinants of Optimal Growth Temperature for all prokaryotic organisms with fully sequenced genomes. The authors discovered that the total concentration of seven amino acids in proteomes, IVYWREL, serves as a universal proteomic predictor of Optimal Growth Temperature in prokaryotes. Resolving the long-standing controversy, the authors determined that the variation in nucleotide composition (increase of purine load, or A+G content, with temperature) is largely a consequence of thermal adaptation of proteins. However, the frequency with which A and G nucleotides appear as nearest neighbors in genome sequences is strongly and independently correlated with Optimal Growth Temperature, as a result of codon bias in the corresponding genomes. Together these results provide a complete picture of proteomic and genomic determinants of thermophilic adaptation.
1607.08840 | Christian Donner | Christian Donner, Klaus Obermayer, Hideaki Shimazaki | Approximate Inference for Time-varying Interactions and Macroscopic
Dynamics of Neural Populations | 28 pages, 7 figures | null | 10.1371/journal.pcbi.1005309 | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The models in statistical physics such as an Ising model offer a convenient
way to characterize stationary activity of neural populations. Such stationary
activity of neurons may be expected for recordings from in vitro slices or
anesthetized animals. However, modeling activity of cortical circuitries of
awake animals has been more challenging because both spike-rates and
interactions can change according to sensory stimulation, behavior, or an
internal state of the brain. Previous approaches modeling the dynamics of
neural interactions suffer from high computational cost; therefore, their application
was limited to only a dozen neurons. Here by introducing multiple analytic
approximation methods to a state-space model of neural population activity, we
make it possible to estimate dynamic pairwise interactions of up to 60 neurons.
More specifically, we applied the pseudolikelihood approximation to the
state-space model, and combined it with the Bethe or TAP mean-field
approximation to make the sequential Bayesian estimation of the model
parameters possible. The large-scale analysis allows us to investigate dynamics
of macroscopic properties of neural circuitries underlying stimulus processing
and behavior. We show that the model accurately estimates dynamics of network
properties such as sparseness, entropy, and heat capacity using simulated data,
and demonstrate the utility of these measures by analyzing activity of monkey V4
neurons as well as a simulated balanced network of spiking neurons.
| [
{
"created": "Fri, 29 Jul 2016 15:03:49 GMT",
"version": "v1"
},
{
"created": "Thu, 4 May 2017 16:25:34 GMT",
"version": "v2"
}
] | 2017-05-05 | [
[
"Donner",
"Christian",
""
],
[
"Obermayer",
"Klaus",
""
],
[
"Shimazaki",
"Hideaki",
""
]
] | The models in statistical physics such as an Ising model offer a convenient way to characterize stationary activity of neural populations. Such stationary activity of neurons may be expected for recordings from in vitro slices or anesthetized animals. However, modeling activity of cortical circuitries of awake animals has been more challenging because both spike-rates and interactions can change according to sensory stimulation, behavior, or an internal state of the brain. Previous approaches modeling the dynamics of neural interactions suffer from high computational cost; therefore, their application was limited to only a dozen neurons. Here by introducing multiple analytic approximation methods to a state-space model of neural population activity, we make it possible to estimate dynamic pairwise interactions of up to 60 neurons. More specifically, we applied the pseudolikelihood approximation to the state-space model, and combined it with the Bethe or TAP mean-field approximation to make the sequential Bayesian estimation of the model parameters possible. The large-scale analysis allows us to investigate dynamics of macroscopic properties of neural circuitries underlying stimulus processing and behavior. We show that the model accurately estimates dynamics of network properties such as sparseness, entropy, and heat capacity using simulated data, and demonstrate the utility of these measures by analyzing activity of monkey V4 neurons as well as a simulated balanced network of spiking neurons.
1502.05656 | Michael Schaub | Michael T. Schaub, Yazan N. Billeh, Costas A. Anastassiou, Christof
Koch, and Mauricio Barahona | Emergence of slow-switching assemblies in structured neuronal networks | The first two authors contributed equally -- 18 pages, including
supplementary material, 10 Figures + 2 SI Figures | PLoS Comput Biol 11(7): e1004196 (2015) | 10.1371/journal.pcbi.1004196 | null | q-bio.NC cond-mat.dis-nn nlin.PS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unraveling the interplay between connectivity and spatio-temporal dynamics in
neuronal networks is a key step to advance our understanding of neuronal
information processing. Here we investigate how particular features of network
connectivity underpin the propensity of neural networks to generate
slow-switching assembly (SSA) dynamics, i.e., sustained epochs of increased
firing within assemblies of neurons which transition slowly between different
assemblies throughout the network. We show that the emergence of SSA activity
is linked to spectral properties of the asymmetric synaptic weight matrix. In
particular, the leading eigenvalues that dictate the slow dynamics exhibit a
gap with respect to the bulk of the spectrum, and the associated Schur vectors
exhibit a measure of block-localization on groups of neurons, thus resulting in
coherent dynamical activity on those groups. Through simple rate models, we
gain analytical understanding of the origin and importance of the spectral gap,
and use these insights to develop new network topologies with alternative
connectivity paradigms which also display SSA activity. Specifically, SSA
dynamics involving excitatory and inhibitory neurons can be achieved by
modifying the connectivity patterns between both types of neurons. We also show
that SSA activity can occur at multiple timescales reflecting a hierarchy in
the connectivity, and demonstrate the emergence of SSA in small-world like
networks. Our work provides a step towards understanding how network structure
(uncovered through advancements in neuroanatomy and connectomics) can impact on
spatio-temporal neural activity and constrain the resulting dynamics.
| [
{
"created": "Thu, 19 Feb 2015 18:04:25 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Feb 2015 19:51:13 GMT",
"version": "v2"
},
{
"created": "Mon, 20 Jul 2015 09:41:29 GMT",
"version": "v3"
}
] | 2015-07-21 | [
[
"Schaub",
"Michael T.",
""
],
[
"Billeh",
"Yazan N.",
""
],
[
"Anastassiou",
"Costas A.",
""
],
[
"Koch",
"Christof",
""
],
[
"Barahona",
"Mauricio",
""
]
] | Unraveling the interplay between connectivity and spatio-temporal dynamics in neuronal networks is a key step to advance our understanding of neuronal information processing. Here we investigate how particular features of network connectivity underpin the propensity of neural networks to generate slow-switching assembly (SSA) dynamics, i.e., sustained epochs of increased firing within assemblies of neurons which transition slowly between different assemblies throughout the network. We show that the emergence of SSA activity is linked to spectral properties of the asymmetric synaptic weight matrix. In particular, the leading eigenvalues that dictate the slow dynamics exhibit a gap with respect to the bulk of the spectrum, and the associated Schur vectors exhibit a measure of block-localization on groups of neurons, thus resulting in coherent dynamical activity on those groups. Through simple rate models, we gain analytical understanding of the origin and importance of the spectral gap, and use these insights to develop new network topologies with alternative connectivity paradigms which also display SSA activity. Specifically, SSA dynamics involving excitatory and inhibitory neurons can be achieved by modifying the connectivity patterns between both types of neurons. We also show that SSA activity can occur at multiple timescales reflecting a hierarchy in the connectivity, and demonstrate the emergence of SSA in small-world like networks. Our work provides a step towards understanding how network structure (uncovered through advancements in neuroanatomy and connectomics) can impact on spatio-temporal neural activity and constrain the resulting dynamics. |
2107.10169 | Kelvin Sarink | Tim Hahn, Nils R. Winter, Jan Ernsting, Marius Gruber, Marco J.
Mauritz, Lukas Fisch, Ramona Leenings, Kelvin Sarink, Julian Blanke, Vincent
Holstein, Daniel Emden, Marie Beisemann, Nils Opel, Dominik Grotegerd,
Susanne Meinert, Walter Heindel, Stephanie Witt, Marcella Rietschel, Markus
M. N\"othen, Andreas J. Forstner, Tilo Kircher, Igor Nenadic, Andreas Jansen,
Bertram M\"uller-Myhsok, Till F. M. Andlauer, Martin Walter, Martijn P. van
den Heuvel, Hamidreza Jamalabadi, Udo Dannlowski, Jonathan Repple | Genetic, Individual, and Familial Risk Correlates of Brain Network
Controllability in Major Depressive Disorder | 24 pages, 1 figure | null | null | null | q-bio.NC cs.SY eess.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Background: A therapeutic intervention in psychiatry can be viewed as an
attempt to influence the brain's large-scale, dynamic network state transitions
underlying cognition and behavior. Building on connectome-based graph analysis
and control theory, Network Control Theory is emerging as a powerful tool to
quantify network controllability - i.e., the influence of one brain region over
others regarding dynamic network state transitions. If and how network
controllability is related to mental health remains elusive.
Methods: From Diffusion Tensor Imaging data, we inferred structural
connectivity and calculated network controllability parameters to
investigate their association with genetic and familial risk in patients
diagnosed with major depressive disorder (MDD, n=692) and healthy controls
(n=820).
Results: First, we establish that controllability measures differ between
healthy controls and MDD patients while not varying with current symptom
severity or remission status. Second, we show that controllability in MDD
patients is associated with polygenic scores for MDD and psychiatric
cross-disorder risk. Finally, we provide evidence that controllability varies
with familial risk of MDD and bipolar disorder as well as with body mass index.
Conclusions: We show that network controllability is related to genetic,
individual, and familial risk in MDD patients. We discuss how these insights
into individual variation of network controllability may inform mechanistic
models of treatment response prediction and personalized intervention-design in
mental health.
| [
{
"created": "Wed, 21 Jul 2021 15:53:49 GMT",
"version": "v1"
}
] | 2021-07-22 | [
[
"Hahn",
"Tim",
""
],
[
"Winter",
"Nils R.",
""
],
[
"Ernsting",
"Jan",
""
],
[
"Gruber",
"Marius",
""
],
[
"Mauritz",
"Marco J.",
""
],
[
"Fisch",
"Lukas",
""
],
[
"Leenings",
"Ramona",
""
],
[
"Sar... | Background: A therapeutic intervention in psychiatry can be viewed as an attempt to influence the brain's large-scale, dynamic network state transitions underlying cognition and behavior. Building on connectome-based graph analysis and control theory, Network Control Theory is emerging as a powerful tool to quantify network controllability - i.e., the influence of one brain region over others regarding dynamic network state transitions. If and how network controllability is related to mental health remains elusive. Methods: From Diffusion Tensor Imaging data, we inferred structural connectivity and calculated network controllability parameters to investigate their association with genetic and familial risk in patients diagnosed with major depressive disorder (MDD, n=692) and healthy controls (n=820). Results: First, we establish that controllability measures differ between healthy controls and MDD patients while not varying with current symptom severity or remission status. Second, we show that controllability in MDD patients is associated with polygenic scores for MDD and psychiatric cross-disorder risk. Finally, we provide evidence that controllability varies with familial risk of MDD and bipolar disorder as well as with body mass index. Conclusions: We show that network controllability is related to genetic, individual, and familial risk in MDD patients. We discuss how these insights into individual variation of network controllability may inform mechanistic models of treatment response prediction and personalized intervention-design in mental health.
1505.02195 | Dervis Vural | Dervis Can Vural, Alexander Isakov, L. Mahadevan | The Organization and Control of an Evolving Interdependent Population | To download simulation code cf. article in Proceedings of the Royal
Society, Interface | Journal of the Royal Society Interface 12: 20150044 (2015) | 10.1098/rsif.2015.0044 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Starting with Darwin, biologists have asked how populations evolve from a low
fitness state that is evolutionarily stable to a high fitness state that is
not. Specifically of interest is the emergence of cooperation and
multicellularity where the fitness of individuals often appears in conflict
with that of the population. Theories of social evolution and evolutionary game
theory have produced a number of fruitful results employing two-state two-body
frameworks. In this study we depart from this tradition and instead consider a
multi-player, multi-state evolutionary game, in which the fitness of an agent
is determined by its relationship to an arbitrary number of other agents. We
show that populations organize themselves in one of four distinct phases of
interdependence depending on one parameter, selection strength. Some of these
phases involve the formation of specialized large-scale structures. We then
describe how the evolution of independence can be manipulated through various
external perturbations.
| [
{
"created": "Fri, 8 May 2015 21:53:06 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Nov 2015 14:34:18 GMT",
"version": "v2"
}
] | 2015-11-10 | [
[
"Vural",
"Dervis Can",
""
],
[
"Isakov",
"Alexander",
""
],
[
"Mahadevan",
"L.",
""
]
] | Starting with Darwin, biologists have asked how populations evolve from a low fitness state that is evolutionarily stable to a high fitness state that is not. Specifically of interest is the emergence of cooperation and multicellularity where the fitness of individuals often appears in conflict with that of the population. Theories of social evolution and evolutionary game theory have produced a number of fruitful results employing two-state two-body frameworks. In this study we depart from this tradition and instead consider a multi-player, multi-state evolutionary game, in which the fitness of an agent is determined by its relationship to an arbitrary number of other agents. We show that populations organize themselves in one of four distinct phases of interdependence depending on one parameter, selection strength. Some of these phases involve the formation of specialized large-scale structures. We then describe how the evolution of independence can be manipulated through various external perturbations. |
2001.04020 | Liang Huang | Liang Huang, He Zhang, Dezhong Deng, Kai Zhao, Kaibo Liu, David A.
Hendrix, David H. Mathews | LinearFold: linear-time approximate RNA folding by 5'-to-3' dynamic
programming and beam search | 10 pages main text (8 figures); 5 pages supplementary information (7
figures). In Proceedings of ISMB 2019 | Bioinformatics, Volume 35, Issue 14, July 2019, Pages i295--i304 | 10.1093/bioinformatics/btz375 | null | q-bio.BM cs.DS math.CO physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Predicting the secondary structure of an RNA sequence is useful
in many applications. Existing algorithms (based on dynamic programming) suffer
from a major limitation: their runtimes scale cubically with the RNA length,
and this slowness limits their use in genome-wide applications.
Results: We present a novel alternative $O(n^3)$-time dynamic programming
algorithm for RNA folding that is amenable to heuristics that make it run in
$O(n)$ time and $O(n)$ space, while producing a high-quality approximation to
the optimal solution. Inspired by incremental parsing for context-free grammars
in computational linguistics, our alternative dynamic programming algorithm
scans the sequence in a left-to-right (5'-to-3') direction rather than in a
bottom-up fashion, which allows us to employ the effective beam pruning
heuristic. Our work, though inexact, is the first RNA folding algorithm to
achieve linear runtime (and linear space) without imposing constraints on the
output structure. Surprisingly, our approximate search results in even higher
overall accuracy on a diverse database of sequences with known structures. More
interestingly, it leads to significantly more accurate predictions on the
longest sequence families in that database (16S and 23S Ribosomal RNAs), as
well as improved accuracies for long-range base pairs (500+ nucleotides apart),
both of which are well known to be challenging for the current models.
Availability: Our source code is available at
https://github.com/LinearFold/LinearFold, and our webserver is at
http://linearfold.org (sequence limit: 100,000nt).
| [
{
"created": "Sun, 22 Dec 2019 00:03:23 GMT",
"version": "v1"
}
] | 2020-01-14 | [
[
"Huang",
"Liang",
""
],
[
"Zhang",
"He",
""
],
[
"Deng",
"Dezhong",
""
],
[
"Zhao",
"Kai",
""
],
[
"Liu",
"Kaibo",
""
],
[
"Hendrix",
"David A.",
""
],
[
"Mathews",
"David H.",
""
]
] | Motivation: Predicting the secondary structure of an RNA sequence is useful in many applications. Existing algorithms (based on dynamic programming) suffer from a major limitation: their runtimes scale cubically with the RNA length, and this slowness limits their use in genome-wide applications. Results: We present a novel alternative $O(n^3)$-time dynamic programming algorithm for RNA folding that is amenable to heuristics that make it run in $O(n)$ time and $O(n)$ space, while producing a high-quality approximation to the optimal solution. Inspired by incremental parsing for context-free grammars in computational linguistics, our alternative dynamic programming algorithm scans the sequence in a left-to-right (5'-to-3') direction rather than in a bottom-up fashion, which allows us to employ the effective beam pruning heuristic. Our work, though inexact, is the first RNA folding algorithm to achieve linear runtime (and linear space) without imposing constraints on the output structure. Surprisingly, our approximate search results in even higher overall accuracy on a diverse database of sequences with known structures. More interestingly, it leads to significantly more accurate predictions on the longest sequence families in that database (16S and 23S Ribosomal RNAs), as well as improved accuracies for long-range base pairs (500+ nucleotides apart), both of which are well known to be challenging for the current models. Availability: Our source code is available at https://github.com/LinearFold/LinearFold, and our webserver is at http://linearfold.org (sequence limit: 100,000nt). |
1710.01951 | Sara Zannone | Sara Zannone, Zuzanna Brzosko, Ole Paulsen, Claudia Clopath | Acetylcholine-modulated plasticity in reward-driven navigation: a
computational study | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuromodulation plays a fundamental role in the acquisition of new
behaviours. Our experimental findings show that, whereas acetylcholine biases
hippocampal synaptic plasticity towards depression, the subsequent application
of dopamine can retroactively convert depression into potentiation. We
previously demonstrated that incorporating this sequentially neuromodulated
Spike-Timing-Dependent Plasticity (STDP) rule in a network model of navigation
yields effective learning of changing reward locations. Here, we further
characterize the effects of cholinergic depression on behaviour. We find that
acetylcholine, by allowing learning from negative outcomes, influences
exploration in a non-trivial manner that highly depends on the specifics of the
model, the environment and the task. Interestingly, sequentially neuromodulated
STDP also yields flexible learning, surpassing the performance of other
reward-modulated plasticity rules.
| [
{
"created": "Thu, 5 Oct 2017 10:27:10 GMT",
"version": "v1"
}
] | 2017-10-06 | [
[
"Zannone",
"Sara",
""
],
[
"Brzosko",
"Zuzanna",
""
],
[
"Paulsen",
"Ole",
""
],
[
"Clopath",
"Claudia",
""
]
] | Neuromodulation plays a fundamental role in the acquisition of new behaviours. Our experimental findings show that, whereas acetylcholine biases hippocampal synaptic plasticity towards depression, the subsequent application of dopamine can retroactively convert depression into potentiation. We previously demonstrated that incorporating this sequentially neuromodulated Spike-Timing-Dependent Plasticity (STDP) rule in a network model of navigation yields effective learning of changing reward locations. Here, we further characterize the effects of cholinergic depression on behaviour. We find that acetylcholine, by allowing learning from negative outcomes, influences exploration in a non-trivial manner that highly depends on the specifics of the model, the environment and the task. Interestingly, sequentially neuromodulated STDP also yields flexible learning, surpassing the performance of other reward-modulated plasticity rules. |
0810.2760 | Jingshan Zhang | Jingshan Zhang, Eugene I. Shakhnovich | Slowly replicating lytic viruses: pseudolysogenic persistence and
within-host competition | 3 figures, 16 pages (4 pages in Phys. Rev. Lett. format) | Phys. Rev. Lett. 102, 178103 (2009) | 10.1103/PhysRevLett.102.178103 | null | q-bio.PE | http://creativecommons.org/licenses/by/3.0/ | We study the population dynamics of lytic viruses which replicate slowly in
dividing host cells within an organism or cell culture, and find a range of
viral replication rates that allows viruses to persist, avoiding extinction of
host cells or dilution of viruses at too rapid or too slow viral replication.
For the within-host competition between multiple viral strains, a strain with a
"stable" replication rate could outcompete another strain with a higher or
lower replication rate, therefore natural selection of viruses stabilizes the
viral persistence. However, when strains with higher and lower than the
"stable" value replication rates are both present, competition between strains
does not result in dominance of one strain, but in their coexistence.
| [
{
"created": "Wed, 15 Oct 2008 18:05:36 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Apr 2009 15:19:41 GMT",
"version": "v2"
}
] | 2013-05-29 | [
[
"Zhang",
"Jingshan",
""
],
[
"Shakhnovich",
"Eugene I.",
""
]
] | We study the population dynamics of lytic viruses which replicate slowly in dividing host cells within an organism or cell culture, and find a range of viral replication rates that allows viruses to persist, avoiding extinction of host cells or dilution of viruses at too rapid or too slow viral replication. For the within-host competition between multiple viral strains, a strain with a "stable" replication rate could outcompete another strain with a higher or lower replication rate, therefore natural selection of viruses stabilizes the viral persistence. However, when strains with higher and lower than the "stable" value replication rates are both present, competition between strains does not result in dominance of one strain, but in their coexistence. |
1405.3226 | Khem Raj Ghusinga | Khem Raj Ghusinga and Abhyudai Singh | Optimal first-passage time in gene regulatory networks | 8 pages, 3 figures, Submitted to Conference on Decision and Control
2014 | null | 10.1109/CDC.2014.7039858 | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The inherent probabilistic nature of the biochemical reactions, and low copy
number of species can lead to stochasticity in gene expression across identical
cells. As a result, after induction of gene expression, the time at which a
specific protein count is reached is stochastic as well. Therefore events
taking place at a critical protein level will see stochasticity in their
timing. First-passage time (FPT), the time at which a stochastic process hits a
critical threshold, provides a framework to model such events. Here, we
investigate stochasticity in FPT. Particularly, we consider events for which
controlling stochasticity is advantageous. As a possible regulatory mechanism,
we also investigate the effect of auto-regulation, where the transcription
rate of a gene depends on the protein count, on the stochasticity of FPT.
Specifically, we search for an optimal auto-regulation that minimizes the
stochasticity in FPT, given a fixed mean FPT and threshold.
For this purpose, we model the gene expression at a single cell level. We
find analytic formulas for statistical moments of the FPT in terms of model
parameters. Moreover, we examine the gene expression model with
auto-regulation. Interestingly, our results show that the stochasticity in FPT,
for a fixed mean, is minimized when the transcription rate is independent of
protein count. Further, we discuss the results in the context of the lysis
time of an \textit{E. coli} cell infected by a $\lambda$ phage virus. An
optimal lysis time provides an evolutionary advantage to the $\lambda$ phage,
suggesting a
possible regulation to minimize its stochasticity. Our results indicate that
there is no auto-regulation of the protein responsible for lysis. Moreover,
congruent with experimental evidence, our analysis predicts that the expression
of the lysis protein should have a small burst size.
| [
{
"created": "Tue, 13 May 2014 16:49:26 GMT",
"version": "v1"
}
] | 2016-07-28 | [
[
"Ghusinga",
"Khem Raj",
""
],
[
"Singh",
"Abhyudai",
""
]
] | The inherent probabilistic nature of the biochemical reactions, and low copy number of species can lead to stochasticity in gene expression across identical cells. As a result, after induction of gene expression, the time at which a specific protein count is reached is stochastic as well. Therefore events taking place at a critical protein level will see stochasticity in their timing. First-passage time (FPT), the time at which a stochastic process hits a critical threshold, provides a framework to model such events. Here, we investigate stochasticity in FPT. Particularly, we consider events for which controlling stochasticity is advantageous. As a possible regulatory mechanism, we also investigate effect of auto-regulation, where the transcription rate of gene depends on protein count, on stochasticity of FPT. Specifically, we investigate for an optimal auto-regulation which minimizes stochasticity in FPT, given fixed mean FPT and threshold. For this purpose, we model the gene expression at a single cell level. We find analytic formulas for statistical moments of the FPT in terms of model parameters. Moreover, we examine the gene expression model with auto-regulation. Interestingly, our results show that the stochasticity in FPT, for a fixed mean, is minimized when the transcription rate is independent of protein count. Further, we discuss the results in context of lysis time of an \textit{E. coli} cell infected by a $\lambda$ phage virus. An optimal lysis time provides evolutionary advantage to the $\lambda$ phage, suggesting a possible regulation to minimize its stochasticity. Our results indicate that there is no auto-regulation of the protein responsible for lysis. Moreover, congruent to experimental evidences, our analysis predicts that the expression of the lysis protein should have a small burst size. |
1607.07806 | Peter Swain | Peter S Swain | Lecture notes on stochastic models in systems biology | 24 pages; 7 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | These notes provide a short, focused introduction to modelling stochastic
gene expression, including a derivation of the master equation, the recovery of
deterministic dynamics, birth-and-death processes, and Langevin theory. The
notes were last updated around 2010 and written for lectures given at summer
schools held at McGill University's Centre for Non-linear Dynamics in 2004,
2006, and 2008.
| [
{
"created": "Tue, 26 Jul 2016 17:06:45 GMT",
"version": "v1"
}
] | 2016-07-27 | [
[
"Swain",
"Peter S",
""
]
] | These notes provide a short, focused introduction to modelling stochastic gene expression, including a derivation of the master equation, the recovery of deterministic dynamics, birth-and-death processes, and Langevin theory. The notes were last updated around 2010 and written for lectures given at summer schools held at McGill University's Centre for Non-linear Dynamics in 2004, 2006, and 2008. |
2406.10696 | Giovanna Maria Dimitri Dr | Giovanna Maria Dimitri | Mining comorbidities: a brief survey | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | In this manuscript we will present a brief overview of the comorbidity
concept. We will start by laying out its foundations and definitions, and then
describe the role that machine learning can play in mining and defining it.
The purpose of this short survey is to present a brief overview of the
definition of comorbidity as a concept and to show some of the latest
applications and potential uses of natural language processing and text
mining techniques.
| [
{
"created": "Sat, 15 Jun 2024 17:31:43 GMT",
"version": "v1"
}
] | 2024-06-18 | [
[
"Dimitri",
"Giovanna Maria",
""
]
] | In this manuscript we will present a brief overview of the comorbidity concept. We will start by laying its foundations and its definitions and then describing the role that machine learning can hold in mining and defining it. The purpose of this short survey is to present a brief overview of the definition of comorbidity as a concept, and showing some of the latest applications and potentialities for the application of natural language processing and text mining techniques. |
q-bio/0702059 | Felix Naef | Gautier Stoll, Jacques Rougemont, Felix Naef | Representing perturbed dynamics in biological network models | presented at CompBioNets, Dec 2004, Recife, Brazil | null | 10.1103/PhysRevE.76.011917 | null | q-bio.MN | null | We study the dynamics of gene activities in relatively small biological
networks (up to a few tens of nodes), e.g. the activities of cell-cycle
proteins during the mitotic cell-cycle progression. Using the framework of
deterministic discrete dynamical models, we characterize the dynamical
modifications in response to structural perturbations in the network
connectivities. In particular, we focus on how perturbations affect the set of
fixed points and sizes of the basins of attraction. Our approach uses two
analytical measures: the basin entropy $H$ and the perturbation size $\Delta$,
a quantity that reflects the distance between the set of fixed points of the
perturbed network and that of the unperturbed network. Applying our approach to
the yeast-cell cycle network introduced by Li \textit{et al.} provides a low
dimensional and informative fingerprint of network behavior under large classes
of perturbations. We identify interactions that are crucial for proper network
function, and also pinpoint functionally redundant network connections.
Selected perturbations exemplify the breadth of dynamical responses in this
cell-cycle model.
| [
{
"created": "Wed, 28 Feb 2007 15:44:23 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Stoll",
"Gautier",
""
],
[
"Rougemont",
"Jacques",
""
],
[
"Naef",
"Felix",
""
]
] | We study the dynamics of gene activities in relatively small size biological networks (up to a few tens of nodes), e.g. the activities of cell-cycle proteins during the mitotic cell-cycle progression. Using the framework of deterministic discrete dynamical models, we characterize the dynamical modifications in response to structural perturbations in the network connectivities. In particular, we focus on how perturbations affect the set of fixed points and sizes of the basins of attraction. Our approach uses two analytical measures: the basin entropy $H$ and the perturbation size $\Delta$, a quantity that reflects the distance between the set of fixed points of the perturbed network to that of the unperturbed network. Applying our approach to the yeast-cell cycle network introduced by Li \textit{et al.} provides a low dimensional and informative fingerprint of network behavior under large classes of perturbations. We identify interactions that are crucial for proper network function, and also pinpoints functionally redundant network connections. Selected perturbations exemplify the breadth of dynamical responses in this cell-cycle model. |
2212.01497 | Jian Jiang | Zhu Zailiang, Dou Bozheng, Cao Yukang, Jiang Jian, Zhu Yueying, Chen
Dong, Feng Hongsong, Liu Jie, Zhang Bengong, Zhou Tianshou, Wei Guowei | TIDAL: Topology-Inferred Drug Addiction Learning | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Drug addiction or drug overdose is a global public health crisis, and the
design of anti-addiction drugs remains a major challenge due to intricate
mechanisms. Since experimental drug screening and optimization are too
time-consuming and expensive, there is an urgent need to develop innovative
artificial intelligence (AI) methods for addressing the challenge. We tackle
this challenge by topology-inferred drug addiction learning (TIDAL) built from
integrating topological Laplacian, deep bidirectional transformer, and
ensemble-assisted neural networks (EANNs). The topological Laplacian is a novel
algebraic topology tool that embeds molecular topological invariants and
algebraic invariants into its harmonic spectra and non-harmonic spectra,
respectively. These invariants complement sequence information extracted from a
bidirectional transformer. We validate the proposed TIDAL framework on 22 drug
addiction related, 4 hERG, and 12 DAT datasets, showing that TIDAL is a
state-of-the-art framework for the modeling and analysis of drug addiction
data. We carry out a cross-target analysis of the current drug addiction
candidates to flag their side effects and identify their repurposing
potential, revealing drug-mediated linear and bilinear target correlations.
Finally, TIDAL is applied to shed light on relative efficacy, repurposing
potential, and potential side effects of 12 existing anti-addiction
medications. Our results suggest that TIDAL provides a new computational
strategy for pressingly-needed anti-substance addiction drug development.
| [
{
"created": "Sat, 3 Dec 2022 01:09:21 GMT",
"version": "v1"
}
] | 2022-12-06 | [
[
"Zailiang",
"Zhu",
""
],
[
"Bozheng",
"Dou",
""
],
[
"Yukang",
"Cao",
""
],
[
"Jian",
"Jiang",
""
],
[
"Yueying",
"Zhu",
""
],
[
"Dong",
"Chen",
""
],
[
"Hongsong",
"Feng",
""
],
[
"Jie",
"Liu",... | Drug addiction or drug overdose is a global public health crisis, and the design of anti-addiction drugs remains a major challenge due to intricate mechanisms. Since experimental drug screening and optimization are too time-consuming and expensive, there is urgent need to develop innovative artificial intelligence (AI) methods for addressing the challenge. We tackle this challenge by topology-inferred drug addiction learning (TIDAL) built from integrating topological Laplacian, deep bidirectional transformer, and ensemble-assisted neural networks (EANNs). The topological Laplacian is a novel algebraic topology tool that embeds molecular topological invariants and algebraic invariants into its harmonic spectra and non-harmonic spectra, respectively. These invariants complement sequence information extracted from a bidirectional transformer. We validate the proposed TIDAL framework on 22 drug addiction related, 4 hERG, and 12 DAT datasets, showing that TIDAL is a state-of-the-art framework for the modeling and analysis of drug addiction data. We carry out cross-target analysis of the current drug addiction candidates to alert their side effects and identify their repurposing potentials, revealing drugmediated linear and bilinear target correlations. Finally, TIDAL is applied to shed light on relative efficacy, repurposing potential, and potential side effects of 12 existing anti-addiction medications. Our results suggest that TIDAL provides a new computational strategy for pressingly-needed anti-substance addiction drug development. |
2212.13261 | Md. Rezaul Karim | Md. Rezaul Karim, Tanhim Islam, Oya Beyan, Christoph Lange, Michael
Cochez, Dietrich Rebholz-Schuhmann and Stefan Decker | Explainable AI for Bioinformatics: Methods, Tools, and Applications | null | null | null | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Artificial intelligence (AI) systems utilizing deep neural networks (DNNs)
and machine learning (ML) algorithms are widely used for solving important
problems in bioinformatics, biomedical informatics, and precision medicine.
However, complex DNNs or ML models, which are often perceived as opaque and
black-box, can make it difficult to understand the reasoning behind their
decisions. This lack of transparency can be a challenge for both end-users and
decision-makers, as well as AI developers. Additionally, in sensitive areas
like healthcare, explainability and accountability are not only desirable but
also legally required for AI systems that can have a significant impact on
human lives. Fairness is another growing concern, as algorithmic decisions
should not show bias or discrimination towards certain groups or individuals
based on sensitive attributes. Explainable artificial intelligence (XAI) aims
to overcome the opaqueness of black-box models and provide transparency in how
AI systems make decisions. Interpretable ML models can explain how they make
predictions and the factors that influence their outcomes. However, most
state-of-the-art interpretable ML methods are domain-agnostic and evolved from
fields like computer vision, automated reasoning, or statistics, making direct
application to bioinformatics problems challenging without customization and
domain-specific adaptation. In this paper, we discuss the importance of
explainability in the context of bioinformatics, provide an overview of
model-specific and model-agnostic interpretable ML methods and tools, and
outline their potential caveats and drawbacks. In addition, we discuss how to
customize existing interpretable ML methods for bioinformatics problems.
Finally, we demonstrate how XAI methods can improve transparency through
case studies in bioimaging, cancer genomics, and text mining.
| [
{
"created": "Sun, 25 Dec 2022 21:00:36 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Feb 2023 14:10:57 GMT",
"version": "v2"
},
{
"created": "Thu, 23 Feb 2023 08:48:26 GMT",
"version": "v3"
}
] | 2023-02-24 | [
[
"Karim",
"Md. Rezaul",
""
],
[
"Islam",
"Tanhim",
""
],
[
"Beyan",
"Oya",
""
],
[
"Lange",
"Christoph",
""
],
[
"Cochez",
"Michael",
""
],
[
"Rebholz-Schuhmann",
"Dietrich",
""
],
[
"Decker",
"Stefan",
""
... | Artificial intelligence (AI) systems utilizing deep neural networks (DNNs) and machine learning (ML) algorithms are widely used for solving important problems in bioinformatics, biomedical informatics, and precision medicine. However, complex DNNs or ML models, which are often perceived as opaque and black-box, can make it difficult to understand the reasoning behind their decisions. This lack of transparency can be a challenge for both end-users and decision-makers, as well as AI developers. Additionally, in sensitive areas like healthcare, explainability and accountability are not only desirable but also legally required for AI systems that can have a significant impact on human lives. Fairness is another growing concern, as algorithmic decisions should not show bias or discrimination towards certain groups or individuals based on sensitive attributes. Explainable artificial intelligence (XAI) aims to overcome the opaqueness of black-box models and provide transparency in how AI systems make decisions. Interpretable ML models can explain how they make predictions and the factors that influence their outcomes. However, most state-of-the-art interpretable ML methods are domain-agnostic and evolved from fields like computer vision, automated reasoning, or statistics, making direct application to bioinformatics problems challenging without customization and domain-specific adaptation. In this paper, we discuss the importance of explainability in the context of bioinformatics, provide an overview of model-specific and model-agnostic interpretable ML methods and tools, and outline their potential caveats and drawbacks. Besides, we discuss how to customize existing interpretable ML methods for bioinformatics problems. Nevertheless, we demonstrate how XAI methods can improve transparency through case studies in bioimaging, cancer genomics, and text mining. |
1211.5073 | R. Mulet | L\'idice Cruz-Rodr\'iguez, Nuris Figueroa-Morales and Roberto Mulet | On the role of intrinsic noise on the response of the p53-Mdm2 module | 10 pages, 9 figures | null | null | null | q-bio.MN cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The protein p53 has a well established role in protecting genomic integrity
in human cells. When DNA is damaged p53 induces the cell cycle arrest to
prevent the transmission of the damage to cell progeny, triggers the production
of proteins for DNA repair and ultimately calls for apoptosis. In particular,
the p53-Mdm2 feedback loop seems to be the key circuit in this response of
cells to damage. For many years, based on measurements over populations of
cells, it was believed that the p53-Mdm2 feedback loop was responsible for
the existence of damped oscillations in the levels of p53 and Mdm2 after DNA
damage. However, recent measurements in individual human cells have shown that
p53 and its regulator Mdm2 develop sustained oscillations over long periods of
time even in the absence of stress. These results have attracted a lot of
interest, first because they open a new experimental framework to study the p53
and its interactions and second because they challenge years of mathematical
models with new and accurate data on single cells. Inspired by these
experiments, standard models of the p53-Mdm2 circuit were modified by
introducing, ad hoc, some biologically motivated noise that becomes responsible for the
stability of the oscillations. Here, we follow an alternative approach
proposing that the noise that stabilizes the fluctuations is the intrinsic
noise due to the finite nature of the populations of p53 and Mdm2 in a single
cell.
| [
{
"created": "Wed, 21 Nov 2012 16:18:30 GMT",
"version": "v1"
}
] | 2012-11-22 | [
[
"Cruz-Rodríguez",
"Lídice",
""
],
[
"Figueroa-Morales",
"Nuris",
""
],
[
"Mulet",
"Roberto",
""
]
] | The protein p53 has a well established role in protecting genomic integrity in human cells. When DNA is damaged p53 induces the cell cycle arrest to prevent the transmission of the damage to cell progeny, triggers the production of proteins for DNA repair and ultimately calls for apoptosis. In particular, the p53-Mdm2 feedback loop seems to be the key circuit in this response of cells to damage. For many years, based on measurements over populations of cells it was believed that the p53-Mdm2 feedback loop was the responsible for the existence of damped oscillations in the levels of p53 and Mdm2 after DNA damage. However, recent measurements in individual human cells have shown that p53 and its regulator Mdm2 develop sustained oscillations over long periods of time even in the absence of stress. These results have attracted a lot of interest, first because they open a new experimental framework to study the p53 and its interactions and second because they challenge years of mathematical models with new and accurate data on single cells. Inspired by these experiments standard models of the p53-Mdm2 circuit were modified introducing ad-hoc some biologically motivated noise that becomes responsible for the stability of the oscillations. Here, we follow an alternative approach proposing that the noise that stabilizes the fluctuations is the intrinsic noise due to the finite nature of the populations of p53 and Mdm2 in a single cell. |
1707.01484 | Xerxes D. Arsiwalla | Ivan Herreros, Xerxes D. Arsiwalla, Cosimo Della Santina, Jordi-Ysard
Puigbo, Antonio Bicchi, Paul Verschure | Cerebellar-Inspired Learning Rule for Gain Adaptation of Feedback
Controllers | null | null | null | null | q-bio.NC cond-mat.dis-nn cs.SY nlin.AO physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How does our nervous system successfully acquire feedback control strategies
in spite of a wide spectrum of response dynamics from different
musculo-skeletal systems? The cerebellum is a crucial brain structure in
enabling precise motor control in animals. Recent advances suggest that
synaptic plasticity of cerebellar Purkinje cells involves molecular mechanisms
that mimic the dynamics of the efferent motor system that they control, allowing
them to match the timing of their learning rule to behavior. Counter-Factual
Predictive Control (CFPC) is a cerebellum-based feed-forward control scheme
that exploits that principle for acquiring anticipatory actions. CFPC extends
the classical Widrow-Hoff/Least Mean Squares by inserting a forward model of
the downstream closed-loop system in its learning rule. Here we apply that same
insight to the problem of learning the gains of a feedback controller. To that
end, we frame a Model-Reference Adaptive Control (MRAC) problem and derive an
adaptive control scheme treating the gains of a feedback controller as if they
were the weights of an adaptive linear unit. Our results demonstrate that
rather than being exclusively confined to cerebellar learning, the approach of
controlling plasticity with a forward model of the subsystem controlled, an
approach that we term as Model-Enhanced Least Mean Squares (ME-LMS), can
provide a solution to a wide set of adaptive control problems.
| [
{
"created": "Wed, 5 Jul 2017 17:34:14 GMT",
"version": "v1"
}
] | 2017-07-06 | [
[
"Herreros",
"Ivan",
""
],
[
"Arsiwalla",
"Xerxes D.",
""
],
[
"Della Santina",
"Cosimo",
""
],
[
"Puigbo",
"Jordi-Ysard",
""
],
[
"Bicchi",
"Antonio",
""
],
[
"Verschure",
"Paul",
""
]
] | How does our nervous system successfully acquire feedback control strategies in spite of a wide spectrum of response dynamics from different musculo-skeletal systems? The cerebellum is a crucial brain structure in enabling precise motor control in animals. Recent advances suggest that synaptic plasticity of cerebellar Purkinje cells involves molecular mechanisms that mimic the dynamics of the efferent motor system that they control allowing them to match the timing of their learning rule to behavior. Counter-Factual Predictive Control (CFPC) is a cerebellum-based feed-forward control scheme that exploits that principle for acquiring anticipatory actions. CFPC extends the classical Widrow-Hoff/Least Mean Squares by inserting a forward model of the downstream closed-loop system in its learning rule. Here we apply that same insight to the problem of learning the gains of a feedback controller. To that end, we frame a Model-Reference Adaptive Control (MRAC) problem and derive an adaptive control scheme treating the gains of a feedback controller as if they were the weights of an adaptive linear unit. Our results demonstrate that rather than being exclusively confined to cerebellar learning, the approach of controlling plasticity with a forward model of the subsystem controlled, an approach that we term as Model-Enhanced Least Mean Squares (ME-LMS), can provide a solution to wide set of adaptive control problems. |
2306.02893 | Peter Kevei | P\'eter Kevei and M\'at\'e Szalai | Branching model with state dependent offspring distribution for
Chlamydia spread | 16 pages | null | null | null | q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chlamydiae are bacteria with an interesting, unusual developmental cycle. A
single bacterium in its infectious form (elementary body, EB) enters the host
cell, where it converts into its dividing form (reticulate body, RB), and
divides by binary fission. Since only the EB form is infectious, before the
host cell dies, RBs start to convert into EBs. After the host cell dies RBs do
not survive. We model the population growth by a 2-type discrete-time branching
process, where the probability of duplication depends on the state. Maximizing
the EB production leads to a stochastic optimization problem. A simulation
study shows that our novel model is able to reproduce the main features of the
development of the population.
| [
{
"created": "Mon, 5 Jun 2023 14:02:20 GMT",
"version": "v1"
}
] | 2023-06-06 | [
[
"Kevei",
"Péter",
""
],
[
"Szalai",
"Máté",
""
]
] | Chlamydiae are bacteria with an interesting unusual developmental cycle. A single bacterium in its infectious form (elementary body, EB) enters the host cell, where it converts into its dividing form (reticulate body, RB), and divides by binary fission. Since only the EB form is infectious, before the host cell dies, RBs start to convert into EBs. After the host cell dies RBs do not survive. We model the population growth by a 2-type discrete-time branching process, where the probability of duplication depends on the state. Maximizing the EB production leads to a stochastic optimization problem. Simulation study shows that our novel model is able to reproduce the main features of the development of the population. |
1210.5665 | Maria Rita Fumagalli | Maria Rita Fumagalli and Matteo Osella and Philippe Thomen and
Francois Heslot and Marco Cosentino Lagomarsino | Speed of evolution in large asexual populations with diminishing returns | null | null | null | null | q-bio.PE cond-mat.stat-mech q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The adaptive evolution of large asexual populations is generally
characterized by competition between clones carrying different beneficial
mutations. This interference phenomenon slows down the adaptation speed and
makes the theoretical description of the dynamics more complex with respect to
the successional occurrence and fixation of beneficial mutations typical of
small populations. A simplified modeling framework considering multiple
beneficial mutations with equal and constant fitness advantage captures some of
the essential features of the actual complex dynamics, and some key predictions
from this model are verified in laboratory evolution experiments. However, in
these experiments the relative advantage of a beneficial mutation is generally
dependent on the genetic background. In particular, the general pattern is
that, as mutations in different loci accumulate, the relative advantage of new
mutations decreases, a trend often referred to as "diminishing return" epistasis.
In this paper, we propose a phenomenological model that generalizes the
fixed-advantage framework to include in a simple way this feature. To evaluate
the quantitative consequences of diminishing returns on the evolutionary
dynamics, we approach the model analytically as well as with direct
simulations. Finally, we show how the model parameters can be matched with data
from evolutionary experiments in order to infer the mean effect of epistasis
and derive order-of-magnitude estimates of the rate of beneficial mutations.
Applying this procedure to two experimental data sets gives values of the
beneficial mutation rate within the range of previous measurements.
| [
{
"created": "Sat, 20 Oct 2012 22:58:04 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Dec 2012 18:51:36 GMT",
"version": "v2"
},
{
"created": "Wed, 19 Dec 2012 18:58:10 GMT",
"version": "v3"
}
] | 2012-12-20 | [
[
"Fumagalli",
"Maria Rita",
""
],
[
"Osella",
"Matteo",
""
],
[
"Thomen",
"Philippe",
""
],
[
"Heslot",
"Francois",
""
],
[
"Lagomarsino",
"Marco Cosentino",
""
]
] | The adaptive evolution of large asexual populations is generally characterized by competition between clones carrying different beneficial mutations. This interference phenomenon slows down the adaptation speed and makes the theoretical description of the dynamics more complex with respect to the successional occurrence and fixation of beneficial mutations typical of small populations. A simplified modeling framework considering multiple beneficial mutations with equal and constant fitness advantage captures some of the essential features of the actual complex dynamics, and some key predictions from this model are verified in laboratory evolution experiments. However, in these experiments the relative advantage of a beneficial mutation is generally dependent on the genetic background. In particular, the general pattern is that, as mutations in different loci accumulate, the relative advantage of new mutations decreases, trend often referred to as "diminishing return" epistasis. In this paper, we propose a phenomenological model that generalizes the fixed-advantage framework to include in a simple way this feature. To evaluate the quantitative consequences of diminishing returns on the evolutionary dynamics, we approach the model analytically as well as with direct simulations. Finally, we show how the model parameters can be matched with data from evolutionary experiments in order to infer the mean effect of epistasis and derive order-of-magnitude estimates of the rate of beneficial mutations. Applying this procedure to two experimental data sets gives values of the beneficial mutation rate within the range of previous measurements. |
2303.14590 | Gustavo Caetano-Anoll\'es | Gustavo Caetano-Anoll\'es | A note on retrodiction and machine evolution | 7 pages, 1 figure | Annals of the New York Academy of Sciences 1525(1): 88-103, 2023,
supporting information | 10.1111/nyas.15005 | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Biomolecular communication demands that interactions between parts of a
molecular system act as scaffolds for message transmission. It also requires an
evolving and organized system of signs - a communicative agency - for creating
and transmitting meaning. Here I explore the need to dissect biomolecular
communication with retrodiction approaches that make claims about the past
given information that is available in the present. While the passage of time
restricts the explanatory power of retrodiction, the use of molecular structure
in biology offsets information erosion. This allows description of the gradual
evolutionary rise of structural and functional innovations in RNA and proteins.
The resulting chronologies can also describe the gradual rise of molecular
machines of increasing complexity and computation capabilities. For example,
the accretion of rRNA substructures and ribosomal proteins can be traced in
time and placed within a geological timescale. Phylogenetic, algorithmic and
theoretical-inspired accretion models can be reconciled into a congruent
evolutionary model. Remarkably, the time of origin of enzymes, functional RNA,
non-ribosomal peptide synthetase (NRPS) complexes, and ribosomes suggest they
gradually climbed Chomsky's hierarchy of formal grammars, supporting the
gradual complexification of machines and communication in molecular biology.
Future retrodiction approaches and in-depth exploration of theoretical models
of computation will need to confirm such evolutionary progression.
| [
{
"created": "Sun, 26 Mar 2023 00:09:45 GMT",
"version": "v1"
}
] | 2023-09-04 | [
[
"Caetano-Anollés",
"Gustavo",
""
]
] | Biomolecular communication demands that interactions between parts of a molecular system act as scaffolds for message transmission. It also requires an evolving and organized system of signs - a communicative agency - for creating and transmitting meaning. Here I explore the need to dissect biomolecular communication with retrodiction approaches that make claims about the past given information that is available in the present. While the passage of time restricts the explanatory power of retrodiction, the use of molecular structure in biology offsets information erosion. This allows description of the gradual evolutionary rise of structural and functional innovations in RNA and proteins. The resulting chronologies can also describe the gradual rise of molecular machines of increasing complexity and computation capabilities. For example, the accretion of rRNA substructures and ribosomal proteins can be traced in time and placed within a geological timescale. Phylogenetic, algorithmic and theoretical-inspired accretion models can be reconciled into a congruent evolutionary model. Remarkably, the time of origin of enzymes, functional RNA, non-ribosomal peptide synthetase (NRPS) complexes, and ribosomes suggest they gradually climbed Chomsky's hierarchy of formal grammars, supporting the gradual complexification of machines and communication in molecular biology. Future retrodiction approaches and in-depth exploration of theoretical models of computation will need to confirm such evolutionary progression. |
2104.12003 | Cameron Mura | Cameron Mura, Saskia Preissner, Robert Preissner, Philip E. Bourne | A Birds-eye (Re)View of Acid-suppression Drugs, COVID-19, and the Highly
Variable Literature | 10 pages, 1 figure | null | null | null | q-bio.TO q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the recent surge of information on the potential benefits of
acid-suppression drugs in the context of COVID-19, with an eye on the
variability (and confusion) across the reported findings--at least as regards
the popular antacid famotidine. The inconsistencies reflect contradictory
conclusions from independent clinical-based studies that took roughly similar
approaches, in terms of experimental design (retrospective, cohort-based, etc.)
and statistical analyses (propensity-score matching and stratification, etc.).
The confusion has significant ramifications in choosing therapeutic
interventions: e.g., do potential benefits of famotidine indicate its use in a
particular COVID-19 case? Beyond this pressing therapeutic issue, conflicting
information on famotidine must be resolved before its integration in
ontological and knowledge graph-based frameworks, which in turn are useful in
drug repurposing efforts. To begin systematically structuring the rapidly
accumulating information, in the hopes of clarifying and reconciling the
discrepancies, we consider the contradictory information along three proposed
'axes': (1) a context-of-disease axis, (2) a degree-of-[therapeutic]-benefit
axis, and (3) a mechanism-of-action axis. We suspect that incongruencies in how
these axes have been (implicitly) treated in past studies has led to the
contradictory indications for famotidine and COVID-19. We also trace the
evolution of information on acid-suppression agents as regards the
transmission, severity, and mortality of COVID-19, given the many literature
reports that have accumulated. By grouping the studies conceptually and
thematically, we identify three eras in the progression of our understanding of
famotidine and COVID-19. Harmonizing these findings is a key goal for both
clinical standards-of-care (COVID and beyond) as well as ontological and
knowledge graph-based approaches.
| [
{
"created": "Sat, 24 Apr 2021 19:08:46 GMT",
"version": "v1"
}
] | 2021-04-27 | [
[
"Mura",
"Cameron",
""
],
[
"Preissner",
"Saskia",
""
],
[
"Preissner",
"Robert",
""
],
[
"Bourne",
"Philip E.",
""
]
] | We consider the recent surge of information on the potential benefits of acid-suppression drugs in the context of COVID-19, with an eye on the variability (and confusion) across the reported findings--at least as regards the popular antacid famotidine. The inconsistencies reflect contradictory conclusions from independent clinical-based studies that took roughly similar approaches, in terms of experimental design (retrospective, cohort-based, etc.) and statistical analyses (propensity-score matching and stratification, etc.). The confusion has significant ramifications in choosing therapeutic interventions: e.g., do potential benefits of famotidine indicate its use in a particular COVID-19 case? Beyond this pressing therapeutic issue, conflicting information on famotidine must be resolved before its integration in ontological and knowledge graph-based frameworks, which in turn are useful in drug repurposing efforts. To begin systematically structuring the rapidly accumulating information, in the hopes of clarifying and reconciling the discrepancies, we consider the contradictory information along three proposed 'axes': (1) a context-of-disease axis, (2) a degree-of-[therapeutic]-benefit axis, and (3) a mechanism-of-action axis. We suspect that incongruencies in how these axes have been (implicitly) treated in past studies has led to the contradictory indications for famotidine and COVID-19. We also trace the evolution of information on acid-suppression agents as regards the transmission, severity, and mortality of COVID-19, given the many literature reports that have accumulated. By grouping the studies conceptually and thematically, we identify three eras in the progression of our understanding of famotidine and COVID-19. Harmonizing these findings is a key goal for both clinical standards-of-care (COVID and beyond) as well as ontological and knowledge graph-based approaches. |
2106.13649 | Jaroslav Budi\v{s} | Jaroslav Budis, Werner Krampl, Marcel Kucharik, Rastislav Hekel,
Adrian Goga, Michal Lichvar, David Smolak, Miroslav Bohmer, Andrej Balaz,
Frantisek Duris, Juraj Gazdarica, Katarina Soltys, Jan Turna, Jan Radvanszky,
Tomas Szemes | SnakeLines: integrated set of computational pipelines for sequencing
reads | 22 pages, 3 figures, 1 table | null | null | null | q-bio.GN cs.CE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Background: With the rapid growth of massively parallel sequencing
technologies, still more laboratories are utilizing sequenced DNA fragments for
genomic analyses. Interpretation of sequencing data is, however, strongly
dependent on bioinformatics processing, which is often too demanding for
clinicians and researchers without a computational background. Another problem
represents the reproducibility of computational analyses across separated
computational centers with inconsistent versions of installed libraries and
bioinformatics tools.
Results: We propose an easily extensible set of computational pipelines,
called SnakeLines, for processing sequencing reads; including mapping,
assembly, variant calling, viral identification, transcriptomics, metagenomics,
and methylation analysis. Individual steps of an analysis, along with methods
and their parameters can be readily modified in a single configuration file.
Provided pipelines are embedded in virtual environments that ensure isolation
of required resources from the host operating system, rapid deployment, and
reproducibility of analysis across different Unix-based platforms.
Conclusion: SnakeLines is a powerful framework for the automation of
bioinformatics analyses, with emphasis on a simple set-up, modifications,
extensibility, and reproducibility.
Keywords: Computational pipeline, framework, massively parallel sequencing,
reproducibility, virtual environment
| [
{
"created": "Fri, 25 Jun 2021 14:10:19 GMT",
"version": "v1"
}
] | 2021-06-28 | [
[
"Budis",
"Jaroslav",
""
],
[
"Krampl",
"Werner",
""
],
[
"Kucharik",
"Marcel",
""
],
[
"Hekel",
"Rastislav",
""
],
[
"Goga",
"Adrian",
""
],
[
"Lichvar",
"Michal",
""
],
[
"Smolak",
"David",
""
],
[
... | Background: With the rapid growth of massively parallel sequencing technologies, still more laboratories are utilizing sequenced DNA fragments for genomic analyses. Interpretation of sequencing data is, however, strongly dependent on bioinformatics processing, which is often too demanding for clinicians and researchers without a computational background. Another problem represents the reproducibility of computational analyses across separated computational centers with inconsistent versions of installed libraries and bioinformatics tools. Results: We propose an easily extensible set of computational pipelines, called SnakeLines, for processing sequencing reads; including mapping, assembly, variant calling, viral identification, transcriptomics, metagenomics, and methylation analysis. Individual steps of an analysis, along with methods and their parameters can be readily modified in a single configuration file. Provided pipelines are embedded in virtual environments that ensure isolation of required resources from the host operating system, rapid deployment, and reproducibility of analysis across different Unix-based platforms. Conclusion: SnakeLines is a powerful framework for the automation of bioinformatics analyses, with emphasis on a simple set-up, modifications, extensibility, and reproducibility. Keywords: Computational pipeline, framework, massively parallel sequencing, reproducibility, virtual environment |
1305.0490 | Ido Kanter | Roni Vardi, Shoshana Guberman, Amir Goldental and Ido Kanter | An experimental evidence-based computational paradigm for new
logic-gates in neuronal activity | 10 pages, 4 figures, 1 table | EPL 103, 66001 (2013) | 10.1209/0295-5075/103/66001 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a new experimentally corroborated paradigm in which the
functionality of the brain's logic-gates depends on the history of their
activity, e.g. an OR-gate that turns into a XOR-gate over time. Our results are
based on an experimental procedure where conditioned stimulations were enforced
on circuits of neurons embedded within a large-scale network of cortical cells
in-vitro. The underlying biological mechanism is the unavoidable increase of
neuronal response latency to ongoing stimulations, which imposes a non-uniform
gradual stretching of network delays.
| [
{
"created": "Thu, 2 May 2013 15:54:20 GMT",
"version": "v1"
}
] | 2015-06-15 | [
[
"Vardi",
"Roni",
""
],
[
"Guberman",
"Shoshana",
""
],
[
"Goldental",
"Amir",
""
],
[
"Kanter",
"Ido",
""
]
] | We propose a new experimentally corroborated paradigm in which the functionality of the brain's logic-gates depends on the history of their activity, e.g. an OR-gate that turns into a XOR-gate over time. Our results are based on an experimental procedure where conditioned stimulations were enforced on circuits of neurons embedded within a large-scale network of cortical cells in-vitro. The underlying biological mechanism is the unavoidable increase of neuronal response latency to ongoing stimulations, which imposes a non-uniform gradual stretching of network delays. |
1603.05659 | Rashid Williams-Garcia | Rashid V. Williams-Garcia, John M. Beggs, and Gerardo Ortiz | Unveiling causal activity of complex networks | null | null | 10.1209/0295-5075/119/18003 | null | q-bio.NC nlin.AO physics.bio-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel tool for analyzing complex network dynamics, allowing
for cascades of causally-related events, which we call causal webs (c-webs), to
be separated from other non-causally-related events. This tool shows that
traditionally-conceived avalanches may contain mixtures of spatially-distinct
but temporally-overlapping cascades of events, and dynamical disorder or noise.
In contrast, c-webs separate these components, unveiling previously hidden
features of the network and dynamics. We apply our method to mouse cortical
data with resulting statistics which demonstrate for the first time that
neuronal avalanches are not merely composed of causally-related events.
| [
{
"created": "Thu, 17 Mar 2016 20:00:05 GMT",
"version": "v1"
},
{
"created": "Mon, 9 May 2016 17:02:06 GMT",
"version": "v2"
},
{
"created": "Fri, 29 Jul 2016 19:15:38 GMT",
"version": "v3"
},
{
"created": "Thu, 9 Feb 2017 20:23:29 GMT",
"version": "v4"
},
{
"cre... | 2017-10-11 | [
[
"Williams-Garcia",
"Rashid V.",
""
],
[
"Beggs",
"John M.",
""
],
[
"Ortiz",
"Gerardo",
""
]
] | We introduce a novel tool for analyzing complex network dynamics, allowing for cascades of causally-related events, which we call causal webs (c-webs), to be separated from other non-causally-related events. This tool shows that traditionally-conceived avalanches may contain mixtures of spatially-distinct but temporally-overlapping cascades of events, and dynamical disorder or noise. In contrast, c-webs separate these components, unveiling previously hidden features of the network and dynamics. We apply our method to mouse cortical data with resulting statistics which demonstrate for the first time that neuronal avalanches are not merely composed of causally-related events. |
1308.6033 | Caterina La Porta AM | Elena Monzani, Riccardo Bazzotti, Carla Perego, Caterina A. M. La
Porta | AQP1 Is Not Only a Water Channel: It Contributes to Cell Migration
through Lin7/Beta-Catenin | null | PLoS ONE 4(7): e6167, 2009 | 10.1371/journal.pone.0006167 | null | q-bio.CB q-bio.QM q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: AQP1 belongs to aquaporins family, water-specific,
membrane-channel proteins expressed in diverse tissues. Recent papers showed
that during angiogenesis, AQP1 is expressed preferentially by microvessels,
favoring angiogenesis via the increase of permeability In particular, in AQP1
null mice, endothelial cell migration is impaired without altering their
proliferation or adhesion. Therefore, AQP1 has been proposed as a novel
promoter of tumor angiogenesis. Methods/Findings: Using targeted silencing of
AQP1 gene expression, an impairment in the organization of F-actin and a
reduced migration capacity was demonstrated in human endothelial and melanoma
cell lines. Interestingly, we showed, for the first time, that AQP1
co-immunoprecipitated with Lin-7. Lin7-GFP experiments confirmed
co-immunoprecipitation. In addition, the knock down of AQP1 decreased the level
of expression of Lin-7 and b-catenin and the inhibition of proteasome
contrasted partially such a decrease. Conclusions/Significance: All together,
our findings show that AQP1 plays a role inside the cells through
Lin-7/b-catenin interaction. Such a role of AQP1 is the same in human melanoma
and endothelial cells, suggesting that AQP1 plays a global physiological role.
A model is presented.
| [
{
"created": "Wed, 28 Aug 2013 02:33:24 GMT",
"version": "v1"
}
] | 2013-08-29 | [
[
"Monzani",
"Elena",
""
],
[
"Bazzotti",
"Riccardo",
""
],
[
"Perego",
"Carla",
""
],
[
"La Porta",
"Caterina A. M.",
""
]
] | Background: AQP1 belongs to aquaporins family, water-specific, membrane-channel proteins expressed in diverse tissues. Recent papers showed that during angiogenesis, AQP1 is expressed preferentially by microvessels, favoring angiogenesis via the increase of permeability In particular, in AQP1 null mice, endothelial cell migration is impaired without altering their proliferation or adhesion. Therefore, AQP1 has been proposed as a novel promoter of tumor angiogenesis. Methods/Findings: Using targeted silencing of AQP1 gene expression, an impairment in the organization of F-actin and a reduced migration capacity was demonstrated in human endothelial and melanoma cell lines. Interestingly, we showed, for the first time, that AQP1 co-immunoprecipitated with Lin-7. Lin7-GFP experiments confirmed co-immunoprecipitation. In addition, the knock down of AQP1 decreased the level of expression of Lin-7 and b-catenin and the inhibition of proteasome contrasted partially such a decrease. Conclusions/Significance: All together, our findings show that AQP1 plays a role inside the cells through Lin-7/b-catenin interaction. Such a role of AQP1 is the same in human melanoma and endothelial cells, suggesting that AQP1 plays a global physiological role. A model is presented. |