id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1801.01533 | D'Artagnan Greene | D'Artagnan Greene, Theodora Po, Jennifer Pan, Tanya Tabibian, Ray Luo | Computational Analysis for the Rational Design of Anti-Amyloid Beta
(ABeta) Antibodies | null | null | 10.1021/acs.jpcb.8b01837 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Alzheimer's Disease (AD) is a neurodegenerative disorder that lacks effective
treatment options. Anti-amyloid beta (ABeta) antibodies are the leading drug
candidates to treat AD, but the results of clinical trials have been
disappointing. Introducing rational mutations into anti-ABeta antibodies to
increase their effectiveness is a way forward, but the path to take is unclear.
In this study, we demonstrate the use of computational fragment-based docking
and MMPBSA binding free energy calculations in the analysis of anti-ABeta
antibodies for rational drug design efforts. Our fragment-based docking method
successfully predicted the emergence of the common EFRH epitope, MD simulations
coupled with MMPBSA binding free energy calculations were used to analyze
scenarios described in prior studies, and we introduced rational mutations into
PFA1 to improve its calculated binding affinity towards the pE3-ABeta3-8 form
of ABeta. Two out of four proposed mutations stabilized binding. Our study
demonstrates that a computational approach may lead to an improved drug
candidate for AD in the future.
| [
{
"created": "Thu, 4 Jan 2018 20:11:54 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Feb 2018 19:25:51 GMT",
"version": "v2"
}
] | 2018-04-24 | [
[
"Greene",
"D'Artagnan",
""
],
[
"Po",
"Theodora",
""
],
[
"Pan",
"Jennifer",
""
],
[
"Tabibian",
"Tanya",
""
],
[
"Luo",
"Ray",
""
]
] | Alzheimer's Disease (AD) is a neurodegenerative disorder that lacks effective treatment options. Anti-amyloid beta (ABeta) antibodies are the leading drug candidates to treat AD, but the results of clinical trials have been disappointing. Introducing rational mutations into anti-ABeta antibodies to increase their effectiveness is a way forward, but the path to take is unclear. In this study, we demonstrate the use of computational fragment-based docking and MMPBSA binding free energy calculations in the analysis of anti-ABeta antibodies for rational drug design efforts. Our fragment-based docking method successfully predicted the emergence of the common EFRH epitope, MD simulations coupled with MMPBSA binding free energy calculations were used to analyze scenarios described in prior studies, and we introduced rational mutations into PFA1 to improve its calculated binding affinity towards the pE3-ABeta3-8 form of ABeta. Two out of four proposed mutations stabilized binding. Our study demonstrates that a computational approach may lead to an improved drug candidate for AD in the future. |
q-bio/0601034 | Hiroshi Fujisaki | Hiroshi Fujisaki, Yong Zhang, John E. Straub | Time-dependent perturbation theory for vibrational energy relaxation and
dephasing in peptides and proteins | 16 pages, 5 figures, 5 tables, submitted to J. Chem. Phys | null | 10.1063/1.2191038 | null | q-bio.BM | null | Without invoking the Markov approximation, we derive formulas for vibrational
energy relaxation (VER) and dephasing for an anharmonic system oscillator using
a time-dependent perturbation theory. The system-bath Hamiltonian contains more
than the third order coupling terms since we take a normal mode picture as a
zeroth order approximation. When we invoke the Markov approximation, our theory
reduces to the Maradudin-Fein formula which is used to describe VER properties
of glass and proteins. When the system anharmonicity and the renormalization
effect due to the environment vanish, our formulas reduce to those derived by
Mikami and Okazaki invoking the path-integral influence functional method [J.
Chem. Phys. 121 (2004) 10052]. We apply our formulas to VER of the amide I mode
of a small amino-acid-like molecule, N-methylacetamide, in heavy water.
| [
{
"created": "Sat, 21 Jan 2006 23:06:18 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Fujisaki",
"Hiroshi",
""
],
[
"Zhang",
"Yong",
""
],
[
"Straub",
"John E.",
""
]
] | Without invoking the Markov approximation, we derive formulas for vibrational energy relaxation (VER) and dephasing for an anharmonic system oscillator using a time-dependent perturbation theory. The system-bath Hamiltonian contains more than the third order coupling terms since we take a normal mode picture as a zeroth order approximation. When we invoke the Markov approximation, our theory reduces to the Maradudin-Fein formula which is used to describe VER properties of glass and proteins. When the system anharmonicity and the renormalization effect due to the environment vanish, our formulas reduce to those derived by Mikami and Okazaki invoking the path-integral influence functional method [J. Chem. Phys. 121 (2004) 10052]. We apply our formulas to VER of the amide I mode of a small amino-acid-like molecule, N-methylacetamide, in heavy water. |
1806.00128 | Emma Towlson | Emma K. Towlson, Petra E. V\'ertes, Ulrich M\"uller, and Sebastian E.
Ahnert | Brain networks reveal the effects of antipsychotic drugs on
schizophrenia patients and controls | 18 pages, 2 figures, 3 tables | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study of brain networks, including those derived from functional neuroimaging
data, attracts broad interest and represents a rapidly growing
interdisciplinary field. Comparing networks of healthy volunteers with those of
patients can potentially offer new, quantitative diagnostic methods, and a
framework for better understanding brain and mind disorders. We explore resting
state fMRI data through network measures, and demonstrate that not only is
there a distinctive network architecture in the healthy brain that is disrupted
in schizophrenia, but also that both networks respond to medication. We
construct networks representing 15 healthy individuals and 12 schizophrenia
patients (males and females), all of whom are administered three drug
treatments: (i) a placebo, and two antipsychotic medications, (ii) aripiprazole
and (iii) sulpiride. We first reproduce the established finding that brain
networks of schizophrenia patients exhibit increased efficiency and reduced
clustering compared to controls. Our data then reveals that the antipsychotic
medications mitigate this effect, shifting the metrics towards those observed
in healthy volunteers, with a marked difference in efficacy between the two
drugs. Additionally, we find that aripiprazole considerably alters the network
statistics of healthy controls. Using a test of cognitive ability, we establish
that aripiprazole also adversely affects their performance. This provides
evidence that changes to macroscopic brain network architecture result in
measurable behavioural differences. This is the first time different
medications have been assessed in this way. Our results lay the groundwork for
an objective methodology with which to calculate and compare the efficacy of
different treatments of mind and brain disorders.
| [
{
"created": "Thu, 31 May 2018 23:05:04 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Jun 2018 00:58:10 GMT",
"version": "v2"
}
] | 2018-06-05 | [
[
"Towlson",
"Emma K.",
""
],
[
"Vértes",
"Petra E.",
""
],
[
"Müller",
"Ulrich",
""
],
[
"Ahnert",
"Sebastian E.",
""
]
] | The study of brain networks, including those derived from functional neuroimaging data, attracts broad interest and represents a rapidly growing interdisciplinary field. Comparing networks of healthy volunteers with those of patients can potentially offer new, quantitative diagnostic methods, and a framework for better understanding brain and mind disorders. We explore resting state fMRI data through network measures, and demonstrate that not only is there a distinctive network architecture in the healthy brain that is disrupted in schizophrenia, but also that both networks respond to medication. We construct networks representing 15 healthy individuals and 12 schizophrenia patients (males and females), all of whom are administered three drug treatments: (i) a placebo, and two antipsychotic medications, (ii) aripiprazole and (iii) sulpiride. We first reproduce the established finding that brain networks of schizophrenia patients exhibit increased efficiency and reduced clustering compared to controls. Our data then reveals that the antipsychotic medications mitigate this effect, shifting the metrics towards those observed in healthy volunteers, with a marked difference in efficacy between the two drugs. Additionally, we find that aripiprazole considerably alters the network statistics of healthy controls. Using a test of cognitive ability, we establish that aripiprazole also adversely affects their performance. This provides evidence that changes to macroscopic brain network architecture result in measurable behavioural differences. This is the first time different medications have been assessed in this way. Our results lay the groundwork for an objective methodology with which to calculate and compare the efficacy of different treatments of mind and brain disorders. |
2011.05595 | Yu Liu | Yu Liu, Yinghong Zhao and Mo Chen | Desires and Motivation: The Computational Rule, the Underlying Neural
Circuitry, and the Relevant Clinical Disorders | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | An organism is a dissipative system. The process from multiple desires to
exclusive motivation is of great importance among all sensory-action loops. In
this paper we argue that a proper Desire-Motivation model should be a
continuous dynamic mapping from the dynamic desire vector to the sparse
motivation vector. Meanwhile, it should at least have specific stability and
adjustability of motivation intensity. Besides, neuroscience evidence
suggests that the Desire-Motivation model should have dynamic information
acquisition and should be a recurrent neural network. A five-equation model is
built based on the above arguments, namely the Recurrent Gating
Desire-Motivation (RGDM) model. Additionally, a heuristic speculation based on
the RGDM model about corresponding brain regions is carried out. It posits
that the tonic and phasic firing of ventral tegmental area dopamine neurons
should execute the respective and collective feedback functions of recurrent
processing. The analysis of the RGDM model yields expectations about
individual personality from three dimensions, namely stability, intensity, and
motivation decision speed. These three dimensions can be combined and create
eight different personalities, which corresponds to Jung's personality
structure theorem. Furthermore, the RGDM model can be used to predict three
different brand-new types of depressive disorder with different phenotypes.
Moreover, it can also explain several other psychiatric disorders from new
perspectives.
| [
{
"created": "Wed, 11 Nov 2020 06:46:51 GMT",
"version": "v1"
}
] | 2020-11-12 | [
[
"Liu",
"Yu",
""
],
[
"Zhao",
"Yinghong",
""
],
[
"Chen",
"Mo",
""
]
] | An organism is a dissipative system. The process from multiple desires to exclusive motivation is of great importance among all sensory-action loops. In this paper we argue that a proper Desire-Motivation model should be a continuous dynamic mapping from the dynamic desire vector to the sparse motivation vector. Meanwhile, it should at least have specific stability and adjustability of motivation intensity. Besides, neuroscience evidence suggests that the Desire-Motivation model should have dynamic information acquisition and should be a recurrent neural network. A five-equation model is built based on the above arguments, namely the Recurrent Gating Desire-Motivation (RGDM) model. Additionally, a heuristic speculation based on the RGDM model about corresponding brain regions is carried out. It posits that the tonic and phasic firing of ventral tegmental area dopamine neurons should execute the respective and collective feedback functions of recurrent processing. The analysis of the RGDM model yields expectations about individual personality from three dimensions, namely stability, intensity, and motivation decision speed. These three dimensions can be combined and create eight different personalities, which corresponds to Jung's personality structure theorem. Furthermore, the RGDM model can be used to predict three different brand-new types of depressive disorder with different phenotypes. Moreover, it can also explain several other psychiatric disorders from new perspectives. |
2110.05231 | Shentong Mo | Shentong Mo, Xi Fu, Chenyang Hong, Yizhen Chen, Yuxuan Zheng, Xiangru
Tang, Zhiqiang Shen, Eric P Xing, Yanyan Lan | Multi-modal Self-supervised Pre-training for Regulatory Genome Across
Cell Types | null | null | null | null | q-bio.GN cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In genome biology research, regulatory genome modeling is an important
topic for many regulatory downstream tasks, such as promoter classification and
transcription factor binding site prediction. The core problem is to model how
regulatory elements interact with each other and how this interaction varies across
different cell types. However, current deep learning methods often focus on
modeling genome sequences of a fixed set of cell types and do not account for
the interaction between multiple regulatory elements, making them only perform
well on the cell types in the training set and lack the generalizability
required in biological applications. In this work, we propose a simple yet
effective approach for pre-training genome data in a multi-modal and
self-supervised manner, which we call GeneBERT. Specifically, we simultaneously
take the 1d sequence of genome data and a 2d matrix of (transcription factors x
regions) as the input, where three pre-training tasks are proposed to improve
the robustness and generalizability of our model. We pre-train our model on the
ATAC-seq dataset with 17 million genome sequences. We evaluate our GeneBERT on
regulatory downstream tasks across different cell types, including promoter
classification, transcription factor binding site prediction, disease risk
estimation, and splicing site prediction. Extensive experiments demonstrate
the effectiveness of multi-modal and self-supervised pre-training for
large-scale regulatory genomics data.
| [
{
"created": "Mon, 11 Oct 2021 12:48:44 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Nov 2021 08:27:38 GMT",
"version": "v2"
}
] | 2021-11-04 | [
[
"Mo",
"Shentong",
""
],
[
"Fu",
"Xi",
""
],
[
"Hong",
"Chenyang",
""
],
[
"Chen",
"Yizhen",
""
],
[
"Zheng",
"Yuxuan",
""
],
[
"Tang",
"Xiangru",
""
],
[
"Shen",
"Zhiqiang",
""
],
[
"Xing",
"Eric P",
""
],
[
"Lan",
"Yanyan",
""
]
] | In genome biology research, regulatory genome modeling is an important topic for many regulatory downstream tasks, such as promoter classification and transcription factor binding site prediction. The core problem is to model how regulatory elements interact with each other and how this interaction varies across different cell types. However, current deep learning methods often focus on modeling genome sequences of a fixed set of cell types and do not account for the interaction between multiple regulatory elements, making them only perform well on the cell types in the training set and lack the generalizability required in biological applications. In this work, we propose a simple yet effective approach for pre-training genome data in a multi-modal and self-supervised manner, which we call GeneBERT. Specifically, we simultaneously take the 1d sequence of genome data and a 2d matrix of (transcription factors x regions) as the input, where three pre-training tasks are proposed to improve the robustness and generalizability of our model. We pre-train our model on the ATAC-seq dataset with 17 million genome sequences. We evaluate our GeneBERT on regulatory downstream tasks across different cell types, including promoter classification, transcription factor binding site prediction, disease risk estimation, and splicing site prediction. Extensive experiments demonstrate the effectiveness of multi-modal and self-supervised pre-training for large-scale regulatory genomics data. |
1810.12441 | Gustav Markkula | Gustav Markkula, Richard Romano, Rachel Waldram, Oscar Giles, Callum
Mole, Richard Wilkie | Modelling visual-vestibular integration and behavioural adaptation in
the driving simulator | Changes in v2: Minor language improvements to Abstract and
Conclusion; Changes in v3: Added acknowledgments and data statement | null | null | null | q-bio.NC cs.CE cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is well established that not only vision but also other sensory modalities
affect drivers' control of their vehicles, and that drivers adapt over time to
persistent changes in sensory cues (for example in driving simulators), but the
mechanisms underlying these behavioural phenomena are poorly understood. Here,
we consider the existing literature on how driver steering in slalom tasks is
affected by the down-scaling of vestibular cues, and propose a driver model
that can explain the empirically observed effects, namely: decreased task
performance and increased steering effort during initial exposure, followed by
a partial reversal of these effects as task exposure is prolonged.
Unexpectedly, the model also reproduced another empirical finding: a local
optimum for motion down-scaling, where path-tracking is better than when
one-to-one motion cues are available. Overall, the results imply that: (1)
drivers make direct use of vestibular information as part of determining
appropriate steering, and (2) motion down-scaling causes a yaw rate
underestimation phenomenon, where drivers behave as if the simulated vehicle is
rotating more slowly than it is. However, (3) in the slalom task, a certain
degree of such yaw rate underestimation is beneficial to path tracking
performance. Furthermore, (4) behavioural adaptation, as empirically observed
in slalom tasks, may occur due to (a) down-weighting of vestibular cues, and/or
(b) increased sensitivity to control errors, in determining when to adjust
steering and by how much, but (c) seemingly not in the form of a full
compensatory rescaling of the received vestibular input. The analyses presented
here provide new insights and hypotheses about simulator driving, and the
developed models can be used to support research on multisensory integration
and behavioural adaptation in both driving and other task domains.
| [
{
"created": "Mon, 29 Oct 2018 22:30:06 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Nov 2018 16:58:36 GMT",
"version": "v2"
},
{
"created": "Wed, 7 Nov 2018 09:14:46 GMT",
"version": "v3"
}
] | 2018-11-08 | [
[
"Markkula",
"Gustav",
""
],
[
"Romano",
"Richard",
""
],
[
"Waldram",
"Rachel",
""
],
[
"Giles",
"Oscar",
""
],
[
"Mole",
"Callum",
""
],
[
"Wilkie",
"Richard",
""
]
] | It is well established that not only vision but also other sensory modalities affect drivers' control of their vehicles, and that drivers adapt over time to persistent changes in sensory cues (for example in driving simulators), but the mechanisms underlying these behavioural phenomena are poorly understood. Here, we consider the existing literature on how driver steering in slalom tasks is affected by the down-scaling of vestibular cues, and propose a driver model that can explain the empirically observed effects, namely: decreased task performance and increased steering effort during initial exposure, followed by a partial reversal of these effects as task exposure is prolonged. Unexpectedly, the model also reproduced another empirical finding: a local optimum for motion down-scaling, where path-tracking is better than when one-to-one motion cues are available. Overall, the results imply that: (1) drivers make direct use of vestibular information as part of determining appropriate steering, and (2) motion down-scaling causes a yaw rate underestimation phenomenon, where drivers behave as if the simulated vehicle is rotating more slowly than it is. However, (3) in the slalom task, a certain degree of such yaw rate underestimation is beneficial to path tracking performance. Furthermore, (4) behavioural adaptation, as empirically observed in slalom tasks, may occur due to (a) down-weighting of vestibular cues, and/or (b) increased sensitivity to control errors, in determining when to adjust steering and by how much, but (c) seemingly not in the form of a full compensatory rescaling of the received vestibular input. The analyses presented here provide new insights and hypotheses about simulator driving, and the developed models can be used to support research on multisensory integration and behavioural adaptation in both driving and other task domains. |
1705.05191 | Philippe Terrier PhD | Fabienne Reynard and Philippe Terrier | Determinants of gait stability while walking on a treadmill: a machine
learning approach | This is the author's version of a manuscript published in the Journal
of Biomechanics | Journal of Biomechanics, Volume 65, 8 December 2017, Pages 212-215 | 10.1016/j.jbiomech.2017.10.020 | null | q-bio.QM q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Dynamic balance in human locomotion can be assessed through the local dynamic
stability (LDS) method. Whereas gait LDS has been used successfully in many
settings and applications, little is known about its sensitivity to individual
characteristics of healthy adults. Therefore, we reanalyzed a large dataset of
accelerometric data measured for 100 healthy adults from 20 to 70 years of age
performing 10 min. treadmill walking. We sought to assess the extent to which
the variations of age, body mass and height, sex, and preferred walking speed
(PWS) could influence gait LDS. The random forest (RF) and multiple adaptive
regression splines (MARS) algorithms were selected for their good bias-variance
tradeoff and their capabilities to handle nonlinear associations. First,
through variable importance measure (VIM), we used RF to evaluate which
individual characteristics had the highest influence on gait LDS. Second, we
used MARS to detect potential interactions among individual characteristics
that may influence LDS. The VIM and MARS results indicated that PWS and age
correlated with LDS, whereas no associations were found for sex, body height,
and body mass. Further, the MARS model detected an age by PWS interaction: on
one hand, at high PWS, gait stability is constant across age while, on the
other hand, at low PWS, gait instability increases substantially with age. We
conclude that it is advisable to consider the participants' age as well as
their PWS to avoid potential biases in evaluating dynamic balance through LDS.
| [
{
"created": "Mon, 15 May 2017 12:47:46 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Oct 2017 12:43:14 GMT",
"version": "v2"
}
] | 2018-12-27 | [
[
"Reynard",
"Fabienne",
""
],
[
"Terrier",
"Philippe",
""
]
] | Dynamic balance in human locomotion can be assessed through the local dynamic stability (LDS) method. Whereas gait LDS has been used successfully in many settings and applications, little is known about its sensitivity to individual characteristics of healthy adults. Therefore, we reanalyzed a large dataset of accelerometric data measured for 100 healthy adults from 20 to 70 years of age performing 10 min. treadmill walking. We sought to assess the extent to which the variations of age, body mass and height, sex, and preferred walking speed (PWS) could influence gait LDS. The random forest (RF) and multiple adaptive regression splines (MARS) algorithms were selected for their good bias-variance tradeoff and their capabilities to handle nonlinear associations. First, through variable importance measure (VIM), we used RF to evaluate which individual characteristics had the highest influence on gait LDS. Second, we used MARS to detect potential interactions among individual characteristics that may influence LDS. The VIM and MARS results indicated that PWS and age correlated with LDS, whereas no associations were found for sex, body height, and body mass. Further, the MARS model detected an age by PWS interaction: on one hand, at high PWS, gait stability is constant across age while, on the other hand, at low PWS, gait instability increases substantially with age. We conclude that it is advisable to consider the participants' age as well as their PWS to avoid potential biases in evaluating dynamic balance through LDS. |
2305.19394 | Roman Pogodin | Roman Pogodin, Jonathan Cornford, Arna Ghosh, Gauthier Gidel,
Guillaume Lajoie, Blake Richards | Synaptic Weight Distributions Depend on the Geometry of Plasticity | ICLR 2024 | The Twelfth International Conference on Learning Representations,
2024 | null | null | q-bio.NC cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A growing literature in computational neuroscience leverages gradient descent
and learning algorithms that approximate it to study synaptic plasticity in the
brain. However, the vast majority of this work ignores a critical underlying
assumption: the choice of distance for synaptic changes - i.e. the geometry of
synaptic plasticity. Gradient descent assumes that the distance is Euclidean,
but many other distances are possible, and there is no reason that biology
necessarily uses Euclidean geometry. Here, using the theoretical tools provided
by mirror descent, we show that the distribution of synaptic weights will
depend on the geometry of synaptic plasticity. We use these results to show
that experimentally-observed log-normal weight distributions found in several
brain areas are not consistent with standard gradient descent (i.e. a Euclidean
geometry), but rather with non-Euclidean distances. Finally, we show that it
should be possible to experimentally test for different synaptic geometries by
comparing synaptic weight distributions before and after learning. Overall, our
work shows that the current paradigm in theoretical work on synaptic plasticity
that assumes Euclidean synaptic geometry may be misguided and that it should be
possible to experimentally determine the true geometry of synaptic plasticity
in the brain.
| [
{
"created": "Tue, 30 May 2023 20:16:23 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Mar 2024 20:22:16 GMT",
"version": "v2"
}
] | 2024-03-06 | [
[
"Pogodin",
"Roman",
""
],
[
"Cornford",
"Jonathan",
""
],
[
"Ghosh",
"Arna",
""
],
[
"Gidel",
"Gauthier",
""
],
[
"Lajoie",
"Guillaume",
""
],
[
"Richards",
"Blake",
""
]
] | A growing literature in computational neuroscience leverages gradient descent and learning algorithms that approximate it to study synaptic plasticity in the brain. However, the vast majority of this work ignores a critical underlying assumption: the choice of distance for synaptic changes - i.e. the geometry of synaptic plasticity. Gradient descent assumes that the distance is Euclidean, but many other distances are possible, and there is no reason that biology necessarily uses Euclidean geometry. Here, using the theoretical tools provided by mirror descent, we show that the distribution of synaptic weights will depend on the geometry of synaptic plasticity. We use these results to show that experimentally-observed log-normal weight distributions found in several brain areas are not consistent with standard gradient descent (i.e. a Euclidean geometry), but rather with non-Euclidean distances. Finally, we show that it should be possible to experimentally test for different synaptic geometries by comparing synaptic weight distributions before and after learning. Overall, our work shows that the current paradigm in theoretical work on synaptic plasticity that assumes Euclidean synaptic geometry may be misguided and that it should be possible to experimentally determine the true geometry of synaptic plasticity in the brain. |
2107.02995 | Kaiyuan Yang | Zhanghao Yu, Joshua C. Chen, Fatima T. Alrashdan, Benjamin W. Avants,
Yan He, Amanda Singer, Jacob T. Robinson, Kaiyuan Yang | MagNI: A Magnetoelectrically Powered and Controlled Wireless
Neurostimulating Implant | This work has been accepted to 2020 IEEE Transactions on Biomedical
Circuits and Systems (TBioCAS) | IEEE Transactions on Biomedical Circuits and Systems (TBioCAS),
Volume: 14, Issue: 6, Pages: 1241-1252, Dec. 2020 | 10.1109/TBCAS.2020.3037862 | null | q-bio.NC eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents the first wireless and programmable neural stimulator
leveraging magnetoelectric (ME) effects for power and data transfer. Thanks to
low tissue absorption, low misalignment sensitivity and high power transfer
efficiency, the ME effect enables safe delivery of high power levels (a few
milliwatts) at low resonant frequencies (~250 kHz) to mm-sized implants deep
inside the body (30-mm depth). The presented MagNI (Magnetoelectric Neural
Implant) consists of a 1.5-mm$^2$ 180-nm CMOS chip, an in-house built 4x2 mm ME
film, an energy storage capacitor, and on-board electrodes on a flexible
polyimide substrate with a total volume of 8.2 mm$^3$. The chip with a power
consumption of 23.7 $\mu$W includes robust system control and data recovery
mechanisms under source amplitude variations (1-V variation tolerance). The
system delivers fully-programmable bi-phasic current-controlled stimulation
with patterns covering 0.05-to-1.5-mA amplitude, 64-to-512-$\mu$s pulse width,
and 0-to-200-Hz repetition frequency for neurostimulation.
| [
{
"created": "Wed, 7 Jul 2021 03:30:10 GMT",
"version": "v1"
}
] | 2021-07-08 | [
[
"Yu",
"Zhanghao",
""
],
[
"Chen",
"Joshua C.",
""
],
[
"Alrashdan",
"Fatima T.",
""
],
[
"Avants",
"Benjamin W.",
""
],
[
"He",
"Yan",
""
],
[
"Singer",
"Amanda",
""
],
[
"Robinson",
"Jacob T.",
""
],
[
"Yang",
"Kaiyuan",
""
]
] | This paper presents the first wireless and programmable neural stimulator leveraging magnetoelectric (ME) effects for power and data transfer. Thanks to low tissue absorption, low misalignment sensitivity and high power transfer efficiency, the ME effect enables safe delivery of high power levels (a few milliwatts) at low resonant frequencies (~250 kHz) to mm-sized implants deep inside the body (30-mm depth). The presented MagNI (Magnetoelectric Neural Implant) consists of a 1.5-mm$^2$ 180-nm CMOS chip, an in-house built 4x2 mm ME film, an energy storage capacitor, and on-board electrodes on a flexible polyimide substrate with a total volume of 8.2 mm$^3$. The chip with a power consumption of 23.7 $\mu$W includes robust system control and data recovery mechanisms under source amplitude variations (1-V variation tolerance). The system delivers fully-programmable bi-phasic current-controlled stimulation with patterns covering 0.05-to-1.5-mA amplitude, 64-to-512-$\mu$s pulse width, and 0-to-200-Hz repetition frequency for neurostimulation. |
2006.14295 | Gemma Massonis | Gemma Massonis, Julio R. Banga, Alejandro F. Villaverde | Structural Identifiability and Observability of Compartmental Models of
the COVID-19 Pandemic | 25 pages, 7 figures | null | null | null | q-bio.PE physics.soc-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent coronavirus disease (COVID-19) outbreak has dramatically increased
the public awareness and appreciation of the utility of dynamic models. At the
same time, the dissemination of contradictory model predictions has highlighted
their limitations. If some parameters and/or state variables of a model cannot
be determined from output measurements, its ability to yield correct insights
-- as well as the possibility of controlling the system -- may be compromised.
Epidemic dynamics are commonly analysed using compartmental models, and many
variations of such models have been used for analysing and predicting the
evolution of the COVID-19 pandemic. In this paper we survey the different
models proposed in the literature, assembling a list of 36 model structures and
assessing their ability to provide reliable information. We address the problem
using the control theoretic concepts of structural identifiability and
observability. Since some parameters can vary during the course of an epidemic,
we consider both the constant and time-varying parameter assumptions. We
analyse the structural identifiability and observability of all of the models,
considering all plausible choices of outputs and time-varying parameters, which
leads us to analyse 255 different model versions. We classify the models
according to their structural identifiability and observability under the
different assumptions and discuss the implications of the results. We also
illustrate with an example several alternative ways of remedying the lack of
observability of a model. Our analyses provide guidelines for choosing the most
informative model for each purpose, taking into account the available knowledge
and measurements.
| [
{
"created": "Thu, 25 Jun 2020 10:30:24 GMT",
"version": "v1"
}
] | 2020-06-26 | [
[
"Massonis",
"Gemma",
""
],
[
"Banga",
"Julio R.",
""
],
[
"Villaverde",
"Alejandro F.",
""
]
] | The recent coronavirus disease (COVID-19) outbreak has dramatically increased the public awareness and appreciation of the utility of dynamic models. At the same time, the dissemination of contradictory model predictions has highlighted their limitations. If some parameters and/or state variables of a model cannot be determined from output measurements, its ability to yield correct insights -- as well as the possibility of controlling the system -- may be compromised. Epidemic dynamics are commonly analysed using compartmental models, and many variations of such models have been used for analysing and predicting the evolution of the COVID-19 pandemic. In this paper we survey the different models proposed in the literature, assembling a list of 36 model structures and assessing their ability to provide reliable information. We address the problem using the control theoretic concepts of structural identifiability and observability. Since some parameters can vary during the course of an epidemic, we consider both the constant and time-varying parameter assumptions. We analyse the structural identifiability and observability of all of the models, considering all plausible choices of outputs and time-varying parameters, which leads us to analyse 255 different model versions. We classify the models according to their structural identifiability and observability under the different assumptions and discuss the implications of the results. We also illustrate with an example several alternative ways of remedying the lack of observability of a model. Our analyses provide guidelines for choosing the most informative model for each purpose, taking into account the available knowledge and measurements. |
1710.03786 | Brett McClintock | Brett T. McClintock and Theo Michelot | momentuHMM: R package for generalized hidden Markov models of animal
movement | null | Methods in Ecology and Evolution 2018, Vol. 9, No. 6, 1518-1530 | 10.1111/2041-210X.12995 | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discrete-time hidden Markov models (HMMs) have become an immensely popular
tool for inferring latent animal behaviors from telemetry data. Here we
introduce an open-source R package, momentuHMM, that addresses many of the
deficiencies in existing HMM software. Features include: 1) data pre-processing
and visualization; 2) user-specified probability distributions for an unlimited
number of data streams and latent behavior states; 3) biased and correlated
random walk movement models, including "activity centers" associated with
attractive or repulsive forces; 4) user-specified design matrices and
constraints for covariate modelling of parameters using formulas familiar to
most R users; 5) multiple imputation methods that account for measurement error
and temporally-irregular or missing data; 6) seamless integration of
spatio-temporal covariate raster data; 7) cosinor and spline models for
cyclical and other complicated patterns; 8) model checking and selection; and
9) simulation. momentuHMM considerably extends the capabilities of existing HMM
software while accounting for common challenges associated with telemetry
data. It therefore facilitates more realistic hypothesis-driven animal movement
analyses that have hitherto been largely inaccessible to non-statisticians.
While motivated by telemetry data, the package can be used for analyzing any
type of data that is amenable to HMMs. Practitioners interested in additional
features are encouraged to contact the authors.
| [
{
"created": "Tue, 10 Oct 2017 19:03:53 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Mar 2018 23:51:40 GMT",
"version": "v2"
}
] | 2018-06-13 | [
[
"McClintock",
"Brett T.",
""
],
[
"Michelot",
"Theo",
""
]
] | Discrete-time hidden Markov models (HMMs) have become an immensely popular tool for inferring latent animal behaviors from telemetry data. Here we introduce an open-source R package, momentuHMM, that addresses many of the deficiencies in existing HMM software. Features include: 1) data pre-processing and visualization; 2) user-specified probability distributions for an unlimited number of data streams and latent behavior states; 3) biased and correlated random walk movement models, including "activity centers" associated with attractive or repulsive forces; 4) user-specified design matrices and constraints for covariate modelling of parameters using formulas familiar to most R users; 5) multiple imputation methods that account for measurement error and temporally-irregular or missing data; 6) seamless integration of spatio-temporal covariate raster data; 7) cosinor and spline models for cyclical and other complicated patterns; 8) model checking and selection; and 9) simulation. momentuHMM considerably extends the capabilities of existing HMM software while accounting for common challenges associated with telemetry data. It therefore facilitates more realistic hypothesis-driven animal movement analyses that have hitherto been largely inaccessible to non-statisticians. While motivated by telemetry data, the package can be used for analyzing any type of data that is amenable to HMMs. Practitioners interested in additional features are encouraged to contact the authors. |
0806.0108 | Dirson Jian Li | Dirson Jian Li, Shengli Zhang | The C-value enigma and timing of the Cambrian explosion | 46 pages, 10 figures | Biochemical and Biophysical Research Communications 392, 240-245
(2010) | 10.1016/j.bbrc.2010.01.032 | null | q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Cambrian explosion is a grand challenge to science today and involves
multidisciplinary study. This event is generally believed to be a result of
genetic innovations, environmental factors and ecological interactions, even
though there are many conflicts over the nature and timing of metazoan origins. The
crux of the matter is that an entire roadmap of the evolution is missing to
discern the biological complexity transition and to evaluate the critical role
of the Cambrian explosion in the overall evolutionary context. Here we
calculate the time of the Cambrian explosion by an innovative and accurate
"C-value clock"; our result (560 million years ago) quite fits the fossil
records. We clarify that the intrinsic reason of genome evolution determined
the Cambrian explosion. A general formula for evaluating genome size of
different species has been found, by which major questions of the C-value
enigma can be solved and the genome size evolution can be illustrated. The
Cambrian explosion is essentially a major transition of biological complexity,
which corresponds to a turning point in genome size evolution. The observed
maximum prokaryotic complexity is just a relic of the Cambrian explosion and it
is supervised by the maximum information storage capability in the observed
universe. Our results open a new prospect of studying metazoan origins and
molecular evolution.
| [
{
"created": "Sat, 31 May 2008 21:13:31 GMT",
"version": "v1"
}
] | 2010-02-10 | [
[
"Li",
"Dirson Jian",
""
],
[
"Zhang",
"Shengli",
""
]
] | The Cambrian explosion is a grand challenge to science today and involves multidisciplinary study. This event is generally believed to be a result of genetic innovations, environmental factors and ecological interactions, even though there are many conflicts over the nature and timing of metazoan origins. The crux of the matter is that an entire roadmap of the evolution is missing to discern the biological complexity transition and to evaluate the critical role of the Cambrian explosion in the overall evolutionary context. Here we calculate the time of the Cambrian explosion by an innovative and accurate "C-value clock"; our result (560 million years ago) fits the fossil record well. We clarify that the intrinsic reason of genome evolution determined the Cambrian explosion. A general formula for evaluating genome size of different species has been found, by which major questions of the C-value enigma can be solved and the genome size evolution can be illustrated. The Cambrian explosion is essentially a major transition of biological complexity, which corresponds to a turning point in genome size evolution. The observed maximum prokaryotic complexity is just a relic of the Cambrian explosion and it is supervised by the maximum information storage capability in the observed universe. Our results open a new prospect of studying metazoan origins and molecular evolution. |
1311.3952 | Agnieszka Szymanska | Chang Won Lee, Agnieszka A. Szymanska, Shun Chi Wu, A. Lee
Swindlehurst, Zoran Nenadic | A Method for Neuronal Source Identification | 14 pages, 5 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-sensor microelectrodes for extracellular action potential recording
have significantly improved the quality of in vivo recorded neuronal signals.
These microelectrodes have also been instrumental in the localization of
neuronal signal sources. However, existing neuron localization methods have
been mostly utilized in vivo, where the true neuron location remains unknown.
Therefore, these methods could not be experimentally validated. This article
presents experimental validation of a method capable of estimating both the
location and intensity of an electrical signal source. A four-sensor
microelectrode (tetrode) immersed in a saline solution was used to record
stimulus patterns at multiple intensity levels generated by a stimulating
electrode. The location of the tetrode was varied with respect to the
stimulator. The location and intensity of the stimulator were estimated using
the Multiple Signal Classification (MUSIC) algorithm, and the results were
quantified by comparison to the true values. The localization results, with an
accuracy and precision of ~ 10 microns, and ~ 11 microns respectively, imply
that MUSIC can resolve individual neuronal sources. Similarly, source intensity
estimations indicate that this approach can track changes in signal amplitude
over time. Together, these results suggest that MUSIC can be used to
characterize neuronal signal sources in vivo.
| [
{
"created": "Fri, 15 Nov 2013 19:07:41 GMT",
"version": "v1"
}
] | 2013-11-18 | [
[
"Lee",
"Chang Won",
""
],
[
"Szymanska",
"Agnieszka A.",
""
],
[
"Wu",
"Shun Chi",
""
],
[
"Swindlehurst",
"A. Lee",
""
],
[
"Nenadic",
"Zoran",
""
]
] | Multi-sensor microelectrodes for extracellular action potential recording have significantly improved the quality of in vivo recorded neuronal signals. These microelectrodes have also been instrumental in the localization of neuronal signal sources. However, existing neuron localization methods have been mostly utilized in vivo, where the true neuron location remains unknown. Therefore, these methods could not be experimentally validated. This article presents experimental validation of a method capable of estimating both the location and intensity of an electrical signal source. A four-sensor microelectrode (tetrode) immersed in a saline solution was used to record stimulus patterns at multiple intensity levels generated by a stimulating electrode. The location of the tetrode was varied with respect to the stimulator. The location and intensity of the stimulator were estimated using the Multiple Signal Classification (MUSIC) algorithm, and the results were quantified by comparison to the true values. The localization results, with an accuracy and precision of ~ 10 microns, and ~ 11 microns respectively, imply that MUSIC can resolve individual neuronal sources. Similarly, source intensity estimations indicate that this approach can track changes in signal amplitude over time. Together, these results suggest that MUSIC can be used to characterize neuronal signal sources in vivo. |
1501.00202 | Alexander Iomin | V. M\'endez and A. Iomin | Comb models for transport along spiny dendrites | null | null | null | null | q-bio.NC cond-mat.dis-nn physics.bio-ph physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This chapter is a contribution to the "Handbook of Applications of Chaos
Theory" ed. by Prof. Christos H Skiadas. The chapter is organized as follows.
First we study the statistical properties of combs and explain how to reduce
the effect of teeth on the movement along the backbone as a waiting time
distribution between consecutive jumps. Second, we justify an employment of a
comb-like structure as a paradigm for further exploration of a spiny dendrite.
In particular, we show how a comb-like structure can sustain the phenomenon of
the anomalous diffusion, reaction-diffusion and L\'evy walks. Finally, we
illustrate how the same models can also be useful to deal with the mechanism of
the translocation waves of CaMKII and their propagation
failure. We also present a brief introduction to the fractional
integro-differentiation in appendix at the end of the chapter.
| [
{
"created": "Wed, 31 Dec 2014 20:34:56 GMT",
"version": "v1"
}
] | 2015-01-05 | [
[
"Méndez",
"V.",
""
],
[
"Iomin",
"A.",
""
]
] | This chapter is a contribution to the "Handbook of Applications of Chaos Theory" ed. by Prof. Christos H Skiadas. The chapter is organized as follows. First we study the statistical properties of combs and explain how to reduce the effect of teeth on the movement along the backbone as a waiting time distribution between consecutive jumps. Second, we justify an employment of a comb-like structure as a paradigm for further exploration of a spiny dendrite. In particular, we show how a comb-like structure can sustain the phenomenon of the anomalous diffusion, reaction-diffusion and L\'evy walks. Finally, we illustrate how the same models can also be useful to deal with the mechanism of the translocation waves of CaMKII and their propagation failure. We also present a brief introduction to the fractional integro-differentiation in appendix at the end of the chapter. |
2207.07504 | Anton Arkhipov | Anton Arkhipov | Non-separability of Physical Systems as a Foundation of Consciousness | null | null | 10.3390/e24111539 | null | q-bio.NC cond-mat.dis-nn | http://creativecommons.org/licenses/by/4.0/ | A hypothesis is presented that non-separability of degrees of freedom is the
fundamental property underlying consciousness in physical systems. The amount
of consciousness in a system is determined by the extent of non-separability
and the number of degrees of freedom involved. Non-interacting and feedforward
systems have zero consciousness, whereas most systems of interacting particles
appear to have low non-separability and consciousness. By contrast, brain
circuits exhibit high complexity and weak but tightly coordinated interactions,
which appear to support high non-separability and therefore a high amount of
consciousness. The hypothesis applies to both classical and quantum cases, and
we highlight the formalism employing the Wigner function (which in the
classical limit becomes the Liouville density function) as a potentially
fruitful framework for characterizing non-separability and, thus, the amount of
consciousness in a system. The hypothesis appears to be consistent with both
the Integrated Information Theory and the Orchestrated Objective Reduction
Theory and may help reconcile the two. It offers a natural explanation for the
physical properties underlying the amount of consciousness and points to
methods of estimating the amount of non-separability as promising ways of
characterizing the amount of consciousness.
| [
{
"created": "Tue, 28 Jun 2022 19:37:45 GMT",
"version": "v1"
}
] | 2022-11-09 | [
[
"Arkhipov",
"Anton",
""
]
] | A hypothesis is presented that non-separability of degrees of freedom is the fundamental property underlying consciousness in physical systems. The amount of consciousness in a system is determined by the extent of non-separability and the number of degrees of freedom involved. Non-interacting and feedforward systems have zero consciousness, whereas most systems of interacting particles appear to have low non-separability and consciousness. By contrast, brain circuits exhibit high complexity and weak but tightly coordinated interactions, which appear to support high non-separability and therefore high amount of consciousness. The hypothesis applies to both classical and quantum cases, and we highlight the formalism employing the Wigner function (which in the classical limit becomes the Liouville density function) as a potentially fruitful framework for characterizing non-separability and, thus, the amount of consciousness in a system. The hypothesis appears to be consistent with both the Integrated Information Theory and the Orchestrated Objective Reduction Theory and may help reconcile the two. It offers a natural explanation for the physical properties underlying the amount of consciousness and points to methods of estimating the amount of non-separability as promising ways of characterizing the amount of consciousness. |
2001.11125 | Alex Stivala | Alex Stivala and Alessandro Lomi | Testing biological network motif significance with exponential random
graph models | Major revision after reviewer comments. Includes supplementary tables
and figures | Applied Network Science (2021) 6:91 | 10.1007/s41109-021-00434-y | null | q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analysis of the structure of biological networks often uses statistical tests
to establish the over-representation of motifs, which are thought to be
important building blocks of such networks, related to their biological
functions. However, there is disagreement as to the statistical significance of
these motifs, and there are potential problems with standard methods for
estimating this significance. Exponential random graph models (ERGMs) are a
class of statistical model that can overcome some of the shortcomings of
commonly used methods for testing the statistical significance of motifs. ERGMs
were first introduced into the bioinformatics literature over ten years ago but
have had limited application to biological networks, possibly due to the
practical difficulty of estimating model parameters. Advances in estimation
algorithms now afford analysis of much larger networks in practical time. We
illustrate the application of ERGM to both an undirected protein-protein
interaction (PPI) network and directed gene regulatory networks. ERGM models
indicate over-representation of triangles in the PPI network, and confirm
results from previous research as to over-representation of transitive
triangles (feed-forward loop) in an E. coli and a yeast regulatory network. We
also confirm, using ERGMs, previous research showing that under-representation
of the cyclic triangle (feedback loop) can be explained as a consequence of
other topological features.
| [
{
"created": "Wed, 29 Jan 2020 23:05:26 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Aug 2020 05:27:08 GMT",
"version": "v2"
},
{
"created": "Sun, 28 Feb 2021 23:36:56 GMT",
"version": "v3"
},
{
"created": "Wed, 9 Jun 2021 06:11:06 GMT",
"version": "v4"
},
{
"created": "Tue, 9 Nov 2021 01:36:28 GMT",
"version": "v5"
}
] | 2021-11-24 | [
[
"Stivala",
"Alex",
""
],
[
"Lomi",
"Alessandro",
""
]
] | Analysis of the structure of biological networks often uses statistical tests to establish the over-representation of motifs, which are thought to be important building blocks of such networks, related to their biological functions. However, there is disagreement as to the statistical significance of these motifs, and there are potential problems with standard methods for estimating this significance. Exponential random graph models (ERGMs) are a class of statistical model that can overcome some of the shortcomings of commonly used methods for testing the statistical significance of motifs. ERGMs were first introduced into the bioinformatics literature over ten years ago but have had limited application to biological networks, possibly due to the practical difficulty of estimating model parameters. Advances in estimation algorithms now afford analysis of much larger networks in practical time. We illustrate the application of ERGM to both an undirected protein-protein interaction (PPI) network and directed gene regulatory networks. ERGM models indicate over-representation of triangles in the PPI network, and confirm results from previous research as to over-representation of transitive triangles (feed-forward loop) in an E. coli and a yeast regulatory network. We also confirm, using ERGMs, previous research showing that under-representation of the cyclic triangle (feedback loop) can be explained as a consequence of other topological features. |
q-bio/0401036 | Axel Brandenburg | A. Brandenburg, A. C. Andersen, S. H\"ofner, and M. Nilsson (all at
NORDITA) | Homochiral growth through enantiomeric cross-inhibition | 20 pages, 6 figures, Orig. Life Evol. Biosph. (accepted) | Orig. Life Evol. Biosph. 35, 225-241 (2005) | 10.1007/s11084-005-0656-9 | NORDITA-2004-5 | q-bio.BM astro-ph physics.bio-ph | null | The stability and conservation properties of a recently proposed
polymerization model are studied. The achiral (racemic) solution is linearly
unstable once the relevant control parameter (here the fidelity of the
catalyst) exceeds a critical value. The growth rate is calculated for different
fidelity parameters and cross-inhibition rates. A chirality parameter is
defined and shown to be conserved by the nonlinear terms of the model. Finally,
a truncated version of the model is used to derive a set of two ordinary
differential equations and it is argued that these equations are more realistic
than those used in earlier models of that form.
| [
{
"created": "Tue, 27 Jan 2004 15:12:24 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Apr 2004 19:08:20 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Brandenburg",
"A.",
"",
"all at\n NORDITA"
],
[
"Andersen",
"A. C.",
"",
"all at\n NORDITA"
],
[
"Höfner",
"S.",
"",
"all at\n NORDITA"
],
[
"Nilsson",
"M.",
"",
"all at\n NORDITA"
]
] | The stability and conservation properties of a recently proposed polymerization model are studied. The achiral (racemic) solution is linearly unstable once the relevant control parameter (here the fidelity of the catalyst) exceeds a critical value. The growth rate is calculated for different fidelity parameters and cross-inhibition rates. A chirality parameter is defined and shown to be conserved by the nonlinear terms of the model. Finally, a truncated version of the model is used to derive a set of two ordinary differential equations and it is argued that these equations are more realistic than those used in earlier models of that form. |
2211.03208 | Christoph Gorgulla | Christoph Gorgulla | Recent Developments in Structure-Based Virtual Screening Approaches | 22 pages, 2 figures | null | null | null | q-bio.BM cs.LG physics.bio-ph physics.chem-ph q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Drug development is a wide scientific field that faces many challenges these
days. Among them are extremely high development costs, long development times,
as well as a low number of new drugs that are approved each year. To solve
these problems, new and innovative technologies are needed that make the drug
discovery process for small molecules more time- and cost-efficient, and which
allow targeting of previously undruggable target classes such as protein-protein
interactions. Structure-based virtual screenings have become a leading
contender in this context. In this review, we give an introduction to the
foundations of structure-based virtual screenings, and survey their progress in
the past few years. We outline key principles, recent success stories, new
methods, available software, and promising future research directions. Virtual
screenings have an enormous potential for the development of new small-molecule
drugs, and are already starting to transform early-stage drug discovery.
| [
{
"created": "Sun, 6 Nov 2022 19:28:25 GMT",
"version": "v1"
}
] | 2022-11-08 | [
[
"Gorgulla",
"Christoph",
""
]
] | Drug development is a wide scientific field that faces many challenges these days. Among them are extremely high development costs, long development times, as well as a low number of new drugs that are approved each year. To solve these problems, new and innovative technologies are needed that make the drug discovery process for small molecules more time- and cost-efficient, and which allow targeting of previously undruggable target classes such as protein-protein interactions. Structure-based virtual screenings have become a leading contender in this context. In this review, we give an introduction to the foundations of structure-based virtual screenings, and survey their progress in the past few years. We outline key principles, recent success stories, new methods, available software, and promising future research directions. Virtual screenings have an enormous potential for the development of new small-molecule drugs, and are already starting to transform early-stage drug discovery. |
1402.6752 | Alexander Goltsev | M. A. Lopes, K.-E. Lee, A. V. Goltsev, and J. F. F. Mendes | Noise-enhanced nonlinear response and the role of modular structure for
signal detection in neuronal networks | 10 pages, 8 figures | Phys Rev E 90, 052709 (2014) | 10.1103/PhysRevE.90.052709 | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We find that sensory noise delivered together with a weak periodic signal not
only enhances nonlinear response of neuronal networks, but also improves the
synchronization of the response to the signal. We reveal this phenomenon in
neuronal networks that are in a dynamical state near a saddle-node bifurcation
corresponding to appearance of sustained network oscillations. In this state,
even a weak periodic signal can evoke sharp nonlinear oscillations of neuronal
activity. These sharp network oscillations have a deterministic form and
amplitude determined by nonlinear dynamical equations. The signal-to-noise
ratio reaches a maximum at an optimum level of sensory noise, manifesting
stochastic resonance (SR) at the population level. We demonstrate SR by use of
simulations and numerical integration of rate equations in a cortical model
with stochastic neurons. Using this model, we mimic the experiments of Gluckman
et al. [B. J. Gluckman et al, Phys. Rev. Lett., v. 77, 4098 (1996)] that have
given evidence of SR in mammalian brain. We also study neuronal networks in
which neurons are grouped in modules and every module works in the regime of
SR. We find that even a few modules can strongly enhance the reliability of
signal detection in comparison with the case when a modular organization is
absent.
| [
{
"created": "Thu, 27 Feb 2014 00:42:07 GMT",
"version": "v1"
}
] | 2015-08-25 | [
[
"Lopes",
"M. A.",
""
],
[
"Lee",
"K. -E.",
""
],
[
"Goltsev",
"A. V.",
""
],
[
"Mendes",
"J. F. F.",
""
]
] | We find that sensory noise delivered together with a weak periodic signal not only enhances nonlinear response of neuronal networks, but also improves the synchronization of the response to the signal. We reveal this phenomenon in neuronal networks that are in a dynamical state near a saddle-node bifurcation corresponding to appearance of sustained network oscillations. In this state, even a weak periodic signal can evoke sharp nonlinear oscillations of neuronal activity. These sharp network oscillations have a deterministic form and amplitude determined by nonlinear dynamical equations. The signal-to-noise ratio reaches a maximum at an optimum level of sensory noise, manifesting stochastic resonance (SR) at the population level. We demonstrate SR by use of simulations and numerical integration of rate equations in a cortical model with stochastic neurons. Using this model, we mimic the experiments of Gluckman et al. [B. J. Gluckman et al, Phys. Rev. Lett., v. 77, 4098 (1996)] that have given evidence of SR in mammalian brain. We also study neuronal networks in which neurons are grouped in modules and every module works in the regime of SR. We find that even a few modules can strongly enhance the reliability of signal detection in comparison with the case when a modular organization is absent. |
2302.00960 | Abhishek Senapati | Biplab Maity, Swarnendu Banerjee, Abhishek Senapati, Joydev
Chattopadhyay | Quantifying optimal resource allocation strategies for controlling
epidemics | 44 Pages, 10 Figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Frequent emergence of communicable diseases has been a major concern
worldwide. Lack of sufficient resources to mitigate the disease-burden makes
the situation even more challenging for lower-income countries. Hence, strategy
development towards disease eradication and optimal management of the social
and economic burden has garnered a lot of attention in recent years. In this
context, we quantify the optimal fraction of resources that can be allocated to
two major intervention measures, namely reduction of disease transmission and
improvement of healthcare infrastructure. Our results demonstrate that the
effectiveness of each of the interventions has a significant impact on the
optimal resource allocation in both long-term disease dynamics and outbreak
scenarios. Often allocating resources to both strategies is optimal. For
long-term dynamics, a non-monotonic behavior of optimal resource allocation
with intervention effectiveness is observed which is different from the more
intuitive strategy recommended in case of outbreaks. Further, our result
indicates that the relationship between investment into interventions and the
corresponding outcomes has a decisive role in determining optimal strategies.
Intervention programs with decreasing returns promote the necessity for
resource sharing. Our study provides a fundamental insight into determining the
best response strategy for controlling epidemics under
resource-constrained situations.
| [
{
"created": "Thu, 2 Feb 2023 09:01:58 GMT",
"version": "v1"
}
] | 2023-02-03 | [
[
"Maity",
"Biplab",
""
],
[
"Banerjee",
"Swarnendu",
""
],
[
"Senapati",
"Abhishek",
""
],
[
"Chattopadhyay",
"Joydev",
""
]
] | Frequent emergence of communicable diseases has been a major concern worldwide. Lack of sufficient resources to mitigate the disease-burden makes the situation even more challenging for lower-income countries. Hence, strategy development towards disease eradication and optimal management of the social and economic burden has garnered a lot of attention in recent years. In this context, we quantify the optimal fraction of resources that can be allocated to two major intervention measures, namely reduction of disease transmission and improvement of healthcare infrastructure. Our results demonstrate that the effectiveness of each of the interventions has a significant impact on the optimal resource allocation in both long-term disease dynamics and outbreak scenarios. Often allocating resources to both strategies is optimal. For long-term dynamics, a non-monotonic behavior of optimal resource allocation with intervention effectiveness is observed which is different from the more intuitive strategy recommended in case of outbreaks. Further, our result indicates that the relationship between investment into interventions and the corresponding outcomes has a decisive role in determining optimal strategies. Intervention programs with decreasing returns promote the necessity for resource sharing. Our study provides a fundamental insight into determining the best response strategy in case of controlling epidemics under resource-constrained situations. |
2006.09928 | Biao Cai | Biao Cai, Gemeng Zhang, Aiying Zhang, Li Xiao, Wenxing Hu, Julia M.
Stephen, Tony W. Wilson, Vince D. Calhoun, Yu-Ping Wang | Functional connectome fingerprinting: Identifying individuals and
predicting cognitive function via deep learning | null | null | null | null | q-bio.NC eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dynamic characteristics of functional network connectivity have been
widely acknowledged and studied. Both shared and unique information has been
shown to be present in the connectomes. However, very little has been known
about whether and how this common pattern can predict the individual
variability of the brain, i.e. "brain fingerprinting", which attempts to
reliably identify a particular individual from a pool of subjects. In this
paper, we propose to enhance the individual uniqueness based on an autoencoder
network. More specifically, we rely on the hypothesis that the common neural
activities shared across individuals may lessen individual discrimination. By
reducing contributions from shared activities, inter-subject variability can be
enhanced. Results show that refined connectomes utilizing an autoencoder
with sparse dictionary learning can successfully distinguish one individual
from the remaining participants with reasonably high accuracy (up to 99.5% for
the rest-rest pair). Furthermore, high-level cognitive behavior (e.g., fluid
intelligence, executive function, and language comprehension) can also be
better predicted using refined functional connectivity profiles. As expected,
the high-order association cortices contributed more to both individual
discrimination and behavior prediction. The proposed approach provides a
promising way to enhance and leverage the individualized characteristics of
brain networks.
| [
{
"created": "Wed, 17 Jun 2020 15:15:35 GMT",
"version": "v1"
}
] | 2020-06-18 | [
[
"Cai",
"Biao",
""
],
[
"Zhang",
"Gemeng",
""
],
[
"Zhang",
"Aiying",
""
],
[
"Xiao",
"Li",
""
],
[
"Hu",
"Wenxing",
""
],
[
"Stephen",
"Julia M.",
""
],
[
"Wilson",
"Tony W.",
""
],
[
"Calhoun",
"Vince D.",
""
],
[
"Wang",
"Yu-Ping",
""
]
] | The dynamic characteristics of functional network connectivity have been widely acknowledged and studied. Both shared and unique information has been shown to be present in the connectomes. However, very little has been known about whether and how this common pattern can predict the individual variability of the brain, i.e. "brain fingerprinting", which attempts to reliably identify a particular individual from a pool of subjects. In this paper, we propose to enhance the individual uniqueness based on an autoencoder network. More specifically, we rely on the hypothesis that the common neural activities shared across individuals may lessen individual discrimination. By reducing contributions from shared activities, inter-subject variability can be enhanced. Results show that refined connectomes utilizing an autoencoder with sparse dictionary learning can successfully distinguish one individual from the remaining participants with reasonably high accuracy (up to 99.5% for the rest-rest pair). Furthermore, high-level cognitive behavior (e.g., fluid intelligence, executive function, and language comprehension) can also be better predicted using refined functional connectivity profiles. As expected, the high-order association cortices contributed more to both individual discrimination and behavior prediction. The proposed approach provides a promising way to enhance and leverage the individualized characteristics of brain networks. |
1308.5257 | James Fowler | Nicholas A. Christakis and James H. Fowler | Friendship and Natural Selection | null | PNAS July 22, 2014 vol. 111 no. Supplement 3 10796-10801 | 10.1073/pnas.1400825111 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | More than any other species, humans form social ties to individuals who are
neither kin nor mates, and these ties tend to be with similar people. Here, we
show that this similarity extends to genotypes. Across the whole genome,
friends' genotypes at the SNP level tend to be positively correlated
(homophilic); however, certain genotypes are negatively correlated
(heterophilic). A focused gene set analysis suggests that some of the overall
correlation can be explained by specific systems; for example, an olfactory
gene set is homophilic and an immune system gene set is heterophilic. Finally,
homophilic genotypes exhibit significantly higher measures of positive
selection, suggesting that, on average, they may yield a synergistic fitness
advantage that has been helping to drive recent human evolution.
| [
{
"created": "Fri, 23 Aug 2013 22:28:34 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Dec 2014 16:33:13 GMT",
"version": "v2"
}
] | 2014-12-05 | [
[
"Christakis",
"Nicholas A.",
""
],
[
"Fowler",
"James H.",
""
]
] | More than any other species, humans form social ties to individuals who are neither kin nor mates, and these ties tend to be with similar people. Here, we show that this similarity extends to genotypes. Across the whole genome, friends' genotypes at the SNP level tend to be positively correlated (homophilic); however, certain genotypes are negatively correlated (heterophilic). A focused gene set analysis suggests that some of the overall correlation can be explained by specific systems; for example, an olfactory gene set is homophilic and an immune system gene set is heterophilic. Finally, homophilic genotypes exhibit significantly higher measures of positive selection, suggesting that, on average, they may yield a synergistic fitness advantage that has been helping to drive recent human evolution. |
2310.18278 | Cecilia Clementi | Nicholas E. Charron, Felix Musil, Andrea Guljas, Yaoyi Chen, Klara
Bonneau, Aldo S. Pasos-Trejo, Jacopo Venturin, Daria Gusew, Iryna
Zaporozhets, Andreas Kr\"amer, Clark Templeton, Atharva Kelkar, Aleksander E.
P. Durumeric, Simon Olsson, Adri\`a P\'erez, Maciej Majewski, Brooke E.
Husic, Ankit Patel, Gianni De Fabritiis, Frank No\'e, Cecilia Clementi | Navigating protein landscapes with a machine-learned transferable
coarse-grained model | null | null | null | null | q-bio.BM physics.bio-ph physics.chem-ph stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The most popular and universally predictive protein simulation models employ
all-atom molecular dynamics (MD), but they come at extreme computational cost.
The development of a universal, computationally efficient coarse-grained (CG)
model with similar prediction performance has been a long-standing challenge.
By combining recent deep learning methods with a large and diverse training set
of all-atom protein simulations, we here develop a bottom-up CG force field
with chemical transferability, which can be used for extrapolative molecular
dynamics on new sequences not used during model parametrization. We demonstrate
that the model successfully predicts folded structures, intermediates,
metastable folded and unfolded basins, and the fluctuations of intrinsically
disordered proteins while it is several orders of magnitude faster than an
all-atom model. This showcases the feasibility of a universal and
computationally efficient machine-learned CG model for proteins.
| [
{
"created": "Fri, 27 Oct 2023 17:10:23 GMT",
"version": "v1"
}
] | 2023-10-30 | [
[
"Charron",
"Nicholas E.",
""
],
[
"Musil",
"Felix",
""
],
[
"Guljas",
"Andrea",
""
],
[
"Chen",
"Yaoyi",
""
],
[
"Bonneau",
"Klara",
""
],
[
"Pasos-Trejo",
"Aldo S.",
""
],
[
"Venturin",
"Jacopo",
""
],
[
"Gusew",
"Daria",
""
],
[
"Zaporozhets",
"Iryna",
""
],
[
"Krämer",
"Andreas",
""
],
[
"Templeton",
"Clark",
""
],
[
"Kelkar",
"Atharva",
""
],
[
"Durumeric",
"Aleksander E. P.",
""
],
[
"Olsson",
"Simon",
""
],
[
"Pérez",
"Adrià",
""
],
[
"Majewski",
"Maciej",
""
],
[
"Husic",
"Brooke E.",
""
],
[
"Patel",
"Ankit",
""
],
[
"De Fabritiis",
"Gianni",
""
],
[
"Noé",
"Frank",
""
],
[
"Clementi",
"Cecilia",
""
]
] | The most popular and universally predictive protein simulation models employ all-atom molecular dynamics (MD), but they come at extreme computational cost. The development of a universal, computationally efficient coarse-grained (CG) model with similar prediction performance has been a long-standing challenge. By combining recent deep learning methods with a large and diverse training set of all-atom protein simulations, we here develop a bottom-up CG force field with chemical transferability, which can be used for extrapolative molecular dynamics on new sequences not used during model parametrization. We demonstrate that the model successfully predicts folded structures, intermediates, metastable folded and unfolded basins, and the fluctuations of intrinsically disordered proteins while it is several orders of magnitude faster than an all-atom model. This showcases the feasibility of a universal and computationally efficient machine-learned CG model for proteins. |
1810.12758 | Cheng Qian | Cheng Qian, Nicholas D. Sidiropoulos, Magda Amiridi, Amin Emad | From Gene Expression to Drug Response: A Collaborative Filtering
Approach | null | null | null | null | q-bio.QM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the response of cancer cells to drugs is an important problem in
pharmacogenomics. Recent efforts in generation of large scale datasets
profiling gene expression and drug sensitivity in cell lines have provided a
unique opportunity to study this problem. However, one major challenge is the
small number of samples (cell lines) compared to the number of features (genes)
even in these large datasets. We propose a collaborative filtering (CF) like
algorithm for modeling gene-drug relationship to identify patients most likely
to benefit from a treatment. Due to the correlation of gene expressions in
different cell lines, the gene expression matrix is approximately low-rank,
which suggests that drug responses could be estimated from a reduced dimension
latent space of the gene expression. Towards this end, we propose a joint
low-rank matrix factorization and latent linear regression approach.
Experiments with data from the Genomics of Drug Sensitivity in Cancer database
are included to show that the proposed method can predict drug-gene
associations better than the state-of-the-art methods.
| [
{
"created": "Mon, 29 Oct 2018 13:25:35 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Oct 2018 03:05:47 GMT",
"version": "v2"
}
] | 2018-11-01 | [
[
"Qian",
"Cheng",
""
],
[
"Sidiropoulos",
"Nicholas D.",
""
],
[
"Amiridi",
"Magda",
""
],
[
"Emad",
"Amin",
""
]
] | Predicting the response of cancer cells to drugs is an important problem in pharmacogenomics. Recent efforts in generation of large scale datasets profiling gene expression and drug sensitivity in cell lines have provided a unique opportunity to study this problem. However, one major challenge is the small number of samples (cell lines) compared to the number of features (genes) even in these large datasets. We propose a collaborative filtering (CF) like algorithm for modeling gene-drug relationship to identify patients most likely to benefit from a treatment. Due to the correlation of gene expressions in different cell lines, the gene expression matrix is approximately low-rank, which suggests that drug responses could be estimated from a reduced dimension latent space of the gene expression. Towards this end, we propose a joint low-rank matrix factorization and latent linear regression approach. Experiments with data from the Genomics of Drug Sensitivity in Cancer database are included to show that the proposed method can predict drug-gene associations better than the state-of-the-art methods. |
1208.0520 | Simon Powers | Simon T. Powers and Christopher Heys and Richard A. Watson | How to measure group selection in real-world populations | pp. 672-679 in Proceedings of the Eleventh European Conference on the
Synthesis and Simulation of Living Systems (Advances in Artificial Life, ECAL
2011). Edited by Tom Lenaerts, Mario Giacobini, Hugues Bersini, Paul
Bourgine, Marco Dorigo and Ren\'e Doursat. MIT Press (2011).
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=12760. 8 pages,
5 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multilevel selection and the evolution of cooperation are fundamental to the
formation of higher-level organisation and the evolution of biocomplexity, but
such notions are controversial and poorly understood in natural populations.
The theoretic principles of group selection are well developed in idealised
models where a population is neatly divided into multiple semi-isolated
sub-populations. But since such models can be explained by individual selection
given the localised frequency-dependent effects involved, some argue that the
group selection concepts offered are, even in the idealised case, redundant and
that in natural conditions where groups are not well-defined that a group
selection framework is entirely inapplicable. This does not necessarily mean,
however, that a natural population is not subject to some interesting localised
frequency-dependent effects -- but how could we formally quantify this under
realistic conditions? Here we focus on the presence of a Simpson's Paradox
where, although the local proportion of cooperators decreases at all locations,
the global proportion of cooperators increases. We illustrate this principle in
a simple individual-based model of bacterial biofilm growth and discuss various
complicating factors in moving from theory to practice of measuring group
selection.
| [
{
"created": "Thu, 2 Aug 2012 15:48:18 GMT",
"version": "v1"
}
] | 2012-08-03 | [
[
"Powers",
"Simon T.",
""
],
[
"Heys",
"Christopher",
""
],
[
"Watson",
"Richard A.",
""
]
] | Multilevel selection and the evolution of cooperation are fundamental to the formation of higher-level organisation and the evolution of biocomplexity, but such notions are controversial and poorly understood in natural populations. The theoretic principles of group selection are well developed in idealised models where a population is neatly divided into multiple semi-isolated sub-populations. But since such models can be explained by individual selection given the localised frequency-dependent effects involved, some argue that the group selection concepts offered are, even in the idealised case, redundant and that in natural conditions where groups are not well-defined that a group selection framework is entirely inapplicable. This does not necessarily mean, however, that a natural population is not subject to some interesting localised frequency-dependent effects -- but how could we formally quantify this under realistic conditions? Here we focus on the presence of a Simpson's Paradox where, although the local proportion of cooperators decreases at all locations, the global proportion of cooperators increases. We illustrate this principle in a simple individual-based model of bacterial biofilm growth and discuss various complicating factors in moving from theory to practice of measuring group selection. |
1010.5602 | Pedro Marijuan | Pedro C. Marijuan and Jorge Navarro | The Bonds of Laughter: A Multidisciplinary Inquiry into the Information
Processes of Human Laughter | 14 pages, 5 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new core hypothesis on laughter is presented. It has been built by putting
together ideas from several disciplines: neurodynamics, evolutionary
neurobiology, paleoanthropology, social networks, and communication studies.
The hypothesis contributes to ascertaining the evolutionary origins
laughter in connection with its cognitive emotional signaling functions. The
new behavioral and neurodynamic tenets introduced about this unusual sound
feature of our species justify the ubiquitous presence it has in social
interactions and along the life cycle of the individual. Laughter, far from
being a curious evolutionary relic or a rather trivial innate behavior, should
be considered as a highly efficient tool for inter-individual problem solving
and for maintenance of social bonds.
| [
{
"created": "Wed, 27 Oct 2010 08:29:30 GMT",
"version": "v1"
}
] | 2010-10-28 | [
[
"Marijuan",
"Pedro C.",
""
],
[
"Navarro",
"Jorge",
""
]
] | A new core hypothesis on laughter is presented. It has been built by putting together ideas from several disciplines: neurodynamics, evolutionary neurobiology, paleoanthropology, social networks, and communication studies. The hypothesis contributes to ascertaining the evolutionary origins of human laughter in connection with its cognitive emotional signaling functions. The new behavioral and neurodynamic tenets introduced about this unusual sound feature of our species justify the ubiquitous presence it has in social interactions and along the life cycle of the individual. Laughter, far from being a curious evolutionary relic or a rather trivial innate behavior, should be considered as a highly efficient tool for inter-individual problem solving and for maintenance of social bonds. |
1008.0431 | Carsten Wiuf | Elisenda Feliu, Michael Knudsen, Lars N. Andersen and Carsten Wiuf | An Algebraic Approach to Signaling Cascades with n Layers | null | null | null | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Posttranslational modification of proteins is key in transmission of signals
in cells. Many signaling pathways contain several layers of modification cycles
that mediate and change the signal through the pathway. Here, we study a simple
signaling cascade consisting of n layers of modification cycles, such that the
modified protein of one layer acts as modifier in the next layer. Assuming
mass-action kinetics and taking the formation of intermediate complexes into
account, we show that the steady states are solutions to a polynomial in one
variable, and in fact that there is exactly one steady state for any given
total amounts of substrates and enzymes.
We demonstrate that many steady state concentrations are related through
rational functions, which can be found recursively. For example,
stimulus-response curves arise as inverse functions to explicit rational
functions. We show that the stimulus-response curves of the modified substrates
are shifted to the left as we move down the cascade. Further, our approach
allows us to study enzyme competition, sequestration and how the steady state
changes in response to changes in the total amount of substrates.
Our approach is essentially algebraic and follows recent trends in the study
of posttranslational modification systems.
| [
{
"created": "Tue, 3 Aug 2010 00:37:02 GMT",
"version": "v1"
},
{
"created": "Sat, 18 Dec 2010 18:31:23 GMT",
"version": "v2"
}
] | 2010-12-21 | [
[
"Feliu",
"Elisenda",
""
],
[
"Knudsen",
"Michael",
""
],
[
"Andersen",
"Lars N.",
""
],
[
"Wiuf",
"Carsten",
""
]
] | Posttranslational modification of proteins is key in transmission of signals in cells. Many signaling pathways contain several layers of modification cycles that mediate and change the signal through the pathway. Here, we study a simple signaling cascade consisting of n layers of modification cycles, such that the modified protein of one layer acts as modifier in the next layer. Assuming mass-action kinetics and taking the formation of intermediate complexes into account, we show that the steady states are solutions to a polynomial in one variable, and in fact that there is exactly one steady state for any given total amounts of substrates and enzymes. We demonstrate that many steady state concentrations are related through rational functions, which can be found recursively. For example, stimulus-response curves arise as inverse functions to explicit rational functions. We show that the stimulus-response curves of the modified substrates are shifted to the left as we move down the cascade. Further, our approach allows us to study enzyme competition, sequestration and how the steady state changes in response to changes in the total amount of substrates. Our approach is essentially algebraic and follows recent trends in the study of posttranslational modification systems. |
1302.4051 | Alexander E. Hramov | Evgenia Sitnikova, Alexander E. Hramov, Alexey A. Ovchinnikov, Alexey
A. Koronovskii | On-off intermittency of thalamo-cortical oscillations in the
electroencephalogram of rats with genetic predisposition to absence epilepsy | 15 pages, 5 figures | Brain research. 1436 (2012) 147-156 | 10.1016/j.brainres.2011.12.006 | null | q-bio.NC nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spike-wave discharges (SWD) are electroencephalographic hallmarks of absence
epilepsy. SWD are known to originate from thalamo-cortical neuronal network
that normally produce sleep spindle oscillations. Although both sleep spindles
and SWD are considered as thalamo-cortical oscillations, functional
relationship between them is still uncertain. The present study describes
temporal dynamics of SWD and sleep spindles as determined in long-term EEG
recordings in WAG/Rij rat model of absence epilepsy. It was found that
non-linear dynamics of SWD fits well to the law of 'on-off intermittency'.
Typical sleep spindles that occur during slow-wave sleep (SWS) also
demonstrated 'on-off intermittency' behavior, in contrast to high-voltage
spindles during intermediate sleep stage, whose dynamics was uncertain. This
implies that both SWS sleep spindles and SWD are controlled by a system-level
mechanism that is responsible for regulating circadian activity and/or
sleep-wake transitions.
| [
{
"created": "Sun, 17 Feb 2013 10:17:58 GMT",
"version": "v1"
}
] | 2013-02-19 | [
[
"Sitnikova",
"Evgenia",
""
],
[
"Hramov",
"Alexander E.",
""
],
[
"Ovchinnikov",
"Alexey A.",
""
],
[
"Koronovskii",
"Alexey A.",
""
]
] | Spike-wave discharges (SWD) are electroencephalographic hallmarks of absence epilepsy. SWD are known to originate from thalamo-cortical neuronal network that normally produce sleep spindle oscillations. Although both sleep spindles and SWD are considered as thalamo-cortical oscillations, functional relationship between them is still uncertain. The present study describes temporal dynamics of SWD and sleep spindles as determined in long-term EEG recordings in WAG/Rij rat model of absence epilepsy. It was found that non-linear dynamics of SWD fits well to the law of 'on-off intermittency'. Typical sleep spindles that occur during slow-wave sleep (SWS) also demonstrated 'on-off intermittency' behavior, in contrast to high-voltage spindles during intermediate sleep stage, whose dynamics was uncertain. This implies that both SWS sleep spindles and SWD are controlled by a system-level mechanism that is responsible for regulating circadian activity and/or sleep-wake transitions. |
1708.03593 | Hugo Fort | Hugo Fort | Quantitative predictions from competition theory with incomplete
information on model parameters tested against experiments across diverse
taxa | 15 pages, 2 figures, one table, two boxes | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We derive an analytical approximation for making quantitative predictions for
ecological communities as a function of the mean intensity of the
inter-specific competition and the species richness. This method, with only a
fraction of the model parameters (carrying capacities and competition
coefficients), is able to predict accurately empirical measurements covering a
wide variety of taxa (algae, plants, protozoa).
| [
{
"created": "Fri, 11 Aug 2017 16:07:59 GMT",
"version": "v1"
}
] | 2017-08-14 | [
[
"Fort",
"Hugo",
""
]
] | We derive an analytical approximation for making quantitative predictions for ecological communities as a function of the mean intensity of the inter-specific competition and the species richness. This method, with only a fraction of the model parameters (carrying capacities and competition coefficients), is able to predict accurately empirical measurements covering a wide variety of taxa (algae, plants, protozoa). |
q-bio/0611026 | Sergei Maslov | Sergei Maslov, Kim Sneppen, Iaroslav Ispolatov | Propagation of fluctuations in interaction networks governed by the law
of mass action | 4 pages; 2 figures | New J. Phys. v.9, 273 (2007). | 10.1088/1367-2630/9/8/273 | null | q-bio.MN cond-mat.stat-mech physics.bio-ph q-bio.GN | null | Using an example of physical interactions between proteins, we study how
perturbations propagate in interconnected networks whose equilibrium state is
governed by the law of mass action. We introduce a comprehensive matrix
formalism which predicts the response of this equilibrium to small changes in
total concentrations of individual molecules, and explain it using a heuristic
analogy to a current flow in a network of resistors. Our main conclusion is
that on average changes in free concentrations exponentially decay with the
distance from the source of perturbation. We then study how this decay is
influenced by such factors as the topology of a network, binding strength, and
correlations between concentrations of neighboring nodes. An exact analytic
expression for the decay constant is obtained for the case of uniform
interactions on the Bethe lattice. Our general findings are illustrated using a
real biological network of protein-protein interactions in baker's yeast with
experimentally determined protein concentrations.
| [
{
"created": "Tue, 7 Nov 2006 23:09:26 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Maslov",
"Sergei",
""
],
[
"Sneppen",
"Kim",
""
],
[
"Ispolatov",
"Iaroslav",
""
]
] | Using an example of physical interactions between proteins, we study how perturbations propagate in interconnected networks whose equilibrium state is governed by the law of mass action. We introduce a comprehensive matrix formalism which predicts the response of this equilibrium to small changes in total concentrations of individual molecules, and explain it using a heuristic analogy to a current flow in a network of resistors. Our main conclusion is that on average changes in free concentrations exponentially decay with the distance from the source of perturbation. We then study how this decay is influenced by such factors as the topology of a network, binding strength, and correlations between concentrations of neighboring nodes. An exact analytic expression for the decay constant is obtained for the case of uniform interactions on the Bethe lattice. Our general findings are illustrated using a real biological network of protein-protein interactions in baker's yeast with experimentally determined protein concentrations. |
1901.07274 | Andras Jakab | Eliane Meuwly, Maria Feldmann, Walter Knirsch, Michael von Rhein,
Kelly Payette, Hitendu Dave, Ruth Tuura, Raimund Kottke, Cornelia Hagmann,
Beatrice Latal, Andras Jakab | Postoperative brain volumes are associated with one-year
neurodevelopmental outcome in children with severe congenital heart disease | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Children with congenital heart disease (CHD) remain at risk for
neurodevelopmental impairment despite improved peri- and intraoperative care.
Our prospective cohort study aimed to determine the relationship between
perioperative brain volumes and neurodevelopmental outcome in neonates with
severe CHD. Pre- and postoperative cerebral MRI was acquired in term born
neonates with CHD undergoing neonatal cardiopulmonary bypass surgery. Brain
volumes were measured using an atlas prior based automated method. One-year
outcome was assessed with the Bayley-III. CHD infants (n=77) had lower pre- and
postoperative total and regional brain volumes compared to controls (n=44, all
p<0.01). CHD infants had poorer cognitive and motor outcome (p<=0.0001) and a
trend towards lower language composite score compared to controls (p=0.06). The
total and selected regional postoperative brain volumes predicted cognitive and
language outcome (all p<0.04). This association was independent of length of
intensive care unit stay for total, cortical, temporal, frontal and cerebellar
volumes. In CHD neonates undergoing cardiopulmonary bypass surgery, pre- and
postoperative brain volumes are reduced, and postoperative brain volumes
predict cognitive and language outcome at one year. Reduced cerebral volumes in
CHD patients could serve as a biomarker for impaired outcome.
| [
{
"created": "Tue, 22 Jan 2019 11:55:59 GMT",
"version": "v1"
}
] | 2019-01-23 | [
[
"Meuwly",
"Eliane",
""
],
[
"Feldmann",
"Maria",
""
],
[
"Knirsch",
"Walter",
""
],
[
"von Rhein",
"Michael",
""
],
[
"Payette",
"Kelly",
""
],
[
"Dave",
"Hitendu",
""
],
[
"Tuura",
"Ruth",
""
],
[
"Kottke",
"Raimund",
""
],
[
"Hagmann",
"Cornelia",
""
],
[
"Latal",
"Beatrice",
""
],
[
"Jakab",
"Andras",
""
]
] | Children with congenital heart disease (CHD) remain at risk for neurodevelopmental impairment despite improved peri- and intraoperative care. Our prospective cohort study aimed to determine the relationship between perioperative brain volumes and neurodevelopmental outcome in neonates with severe CHD. Pre- and postoperative cerebral MRI was acquired in term born neonates with CHD undergoing neonatal cardiopulmonary bypass surgery. Brain volumes were measured using an atlas prior based automated method. One-year outcome was assessed with the Bayley-III. CHD infants (n=77) had lower pre- and postoperative total and regional brain volumes compared to controls (n=44, all p<0.01). CHD infants had poorer cognitive and motor outcome (p<=0.0001) and a trend towards lower language composite score compared to controls (p=0.06). The total and selected regional postoperative brain volumes predicted cognitive and language outcome (all p<0.04). This association was independent of length of intensive care unit stay for total, cortical, temporal, frontal and cerebellar volumes. In CHD neonates undergoing cardiopulmonary bypass surgery, pre- and postoperative brain volumes are reduced, and postoperative brain volumes predict cognitive and language outcome at one year. Reduced cerebral volumes in CHD patients could serve as a biomarker for impaired outcome. |
1805.01552 | Don Krieger | Don Krieger, Paul Shepard, David O. Okonkwo | Normative atlases of neuroelectric brain activity and connectivity from
a large human cohort | 37 pages, 13 figures, 13 tables | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Magnetoencephalographic (MEG) recordings from a large normative cohort (n =
619) were processed to extract measures of regional neuroelectric activity. The
overall objective of the effort was to use these measures to identify normative
human neuroelectric brain function. The aims were (a) to identify and measure
the values and range of those neuroelectric properties which are common to the
cohort, (b) to identify and measure the values and range of those neuroelectric
properties which distinguish one individual from another, and (c) to identify
relationships of the measures to properties of the individual, e.g. sex,
biological age, and sleep symptoms. It is hoped that comparison of the
resultant established norms to measures from recordings of symptomatic
individuals will enable advances in our understanding of pathology.
MEG recordings during resting and task conditions were provided by The
Cambridge (UK) Centre for Ageing and Neuroscience Stage 2 cohort study. Referee
consensus processing was used to localize and validate neuroelectric currents,
p < 10^-12 for each, p < 10^-4 for each when corrected for multiple comparisons.
Comparisons of regional activity and connectivity within-subjects produced
profuse reliable measures detailing differences between individuals, p < 10^-8
for each comparison, p < 0.005 for each when corrected for a total of 5 x 10^5
comparisons. Cohort-wide regional comparisons (p < 10^-8 for each) produced
profuse measures which were common to the preponderance of individuals,
detailing normative commonalities in brain function. Comparisons of regional
gray matter (cellular) vs white matter (communication fibers) activity produced
robust differences both cohort-wide and for each individual. These results
validate the high spatial resolution of the results and establish the
unprecedented ability to obtain neuroelectric measures from the white matter.
| [
{
"created": "Thu, 3 May 2018 21:47:09 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Aug 2018 14:15:41 GMT",
"version": "v2"
},
{
"created": "Sat, 17 Nov 2018 04:03:38 GMT",
"version": "v3"
}
] | 2018-11-20 | [
[
"Krieger",
"Don",
""
],
[
"Shepard",
"Paul",
""
],
[
"Okonkwo",
"David O.",
""
]
] | Magnetoencephalographic (MEG) recordings from a large normative cohort (n = 619) were processed to extract measures of regional neuroelectric activity. The overall objective of the effort was to use these measures to identify normative human neuroelectric brain function. The aims were (a) to identify and measure the values and range of those neuroelectric properties which are common to the cohort, (b) to identify and measure the values and range of those neuroelectric properties which distinguish one individual from another, and (c) to identify relationships of the measures to properties of the individual, e.g. sex, biological age, and sleep symptoms. It is hoped that comparison of the resultant established norms to measures from recordings of symptomatic individuals will enable advances in our understanding of pathology. MEG recordings during resting and task conditions were provided by The Cambridge (UK) Centre for Ageing and Neuroscience Stage 2 cohort study. Referee consensus processing was used to localize and validate neuroelectric currents, p < 10^-12 for each, p < 10^-4 for each when corrected for multiple comparisons. Comparisons of regional activity and connectivity within-subjects produced profuse reliable measures detailing differences between individuals, p < 10^-8 for each comparison, p < 0.005 for each when corrected for a total of 5 x 10^5 comparisons. Cohort-wide regional comparisons (p < 10^-8 for each) produced profuse measures which were common to the preponderance of individuals, detailing normative commonalities in brain function. Comparisons of regional gray matter (cellular) vs white matter (communication fibers) activity produced robust differences both cohort-wide and for each individual. These results validate the high spatial resolution of the results and establish the unprecedented ability to obtain neuroelectric measures from the white matter. |
q-bio/0503041 | Luciano da Fontoura Costa | Luciano da Fontoura Costa | Morphological complex networks: Can individual morphology determine the
general connectivity and dynamics of networks? | 17 pages, 2 figures. Presented at the COSIN final meeting, Salou,
Spain, March 2005 | null | null | null | q-bio.MN cond-mat.dis-nn | null | This article discusses how the individual morphological properties of basic
objects (e.g. neurons, molecules and aggregates), jointly with their particular
spatial distribution, can determine the connectivity and dynamics of systems
composed by those objects. This problem is characterized as a particular case
of the more general shape and function paradigm, which emphasizes the interplay
between shape and function in nature and evolution. Five key issues are
addressed: (a) how to measure shapes; (b) how to obtain stochastic models of
classes of shapes; (c) how to simulate morphologically realistic systems of
multiple objects; (d) how to characterize the connectivity and topology of such
systems in terms of complex network concepts and measurements; and (e) how the
dynamics of such systems can be ultimately affected, and even determined, by
the individual morphological features of the basic objects. Although emphasis
is placed on neuromorphic systems, the presented concepts and methods are
useful also for several other multiple object systems, such as protein-protein
interaction, tissues, aggregates and polymers.
| [
{
"created": "Thu, 31 Mar 2005 18:17:56 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Costa",
"Luciano da Fontoura",
""
]
] | This article discusses how the individual morphological properties of basic objects (e.g. neurons, molecules and aggregates), jointly with their particular spatial distribution, can determine the connectivity and dynamics of systems composed by those objects. This problem is characterized as a particular case of the more general shape and function paradigm, which emphasizes the interplay between shape and function in nature and evolution. Five key issues are addressed: (a) how to measure shapes; (b) how to obtain stochastic models of classes of shapes; (c) how to simulate morphologically realistic systems of multiple objects; (d) how to characterize the connectivity and topology of such systems in terms of complex network concepts and measurements; and (e) how the dynamics of such systems can be ultimately affected, and even determined, by the individual morphological features of the basic objects. Although emphasis is placed on neuromorphic systems, the presented concepts and methods are useful also for several other multiple object systems, such as protein-protein interaction, tissues, aggregates and polymers. |
q-bio/0505012 | Ines Samengo | Marcos A. Trevisan, Sebastian Bouzat, Ines Samengo, Gabriel B. Mindlin | Dynamics of learning in coupled oscillators tutored with delayed
reinforcements | 10 pages, 6 figures, to be published in Phys. Rev. E | null | 10.1103/PhysRevE.72.011907 | null | q-bio.NC | null | In this work we analyze the solutions of a simple system of coupled phase
oscillators in which the connectivity is learned dynamically. The model is
inspired by the process of learning of birdsong by oscine birds. An oscillator
acts as the generator of a basic rhythm, and drives slave oscillators which are
responsible for different motor actions. The driving signal arrives to each
driven oscillator through two different pathways. One of them is a "direct"
pathway. The other one is a "reinforcement" pathway, through which the signal
arrives delayed. The coupling coefficients between the driving oscillator and
the slave ones evolve in time following a Hebbian-like rule. We discuss the
conditions under which a driven oscillator is capable of learning to lock to
the driver. The resulting phase difference and connectivity is a function of
the delay of the reinforcement. Around some specific delays, the system is
capable of generating dramatic changes in the phase difference between the driver
and the driven systems. We discuss the dynamical mechanism responsible for this
effect, and possible applications of this learning scheme.
| [
{
"created": "Thu, 5 May 2005 14:25:26 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Trevisan",
"Marcos A.",
""
],
[
"Bouzat",
"Sebastian",
""
],
[
"Samengo",
"Ines",
""
],
[
"Mindlin",
"Gabriel B.",
""
]
] | In this work we analyze the solutions of a simple system of coupled phase oscillators in which the connectivity is learned dynamically. The model is inspired by the process of learning of birdsong by oscine birds. An oscillator acts as the generator of a basic rhythm, and drives slave oscillators which are responsible for different motor actions. The driving signal arrives to each driven oscillator through two different pathways. One of them is a "direct" pathway. The other one is a "reinforcement" pathway, through which the signal arrives delayed. The coupling coefficients between the driving oscillator and the slave ones evolve in time following a Hebbian-like rule. We discuss the conditions under which a driven oscillator is capable of learning to lock to the driver. The resulting phase difference and connectivity is a function of the delay of the reinforcement. Around some specific delays, the system is capable of generating dramatic changes in the phase difference between the driver and the driven systems. We discuss the dynamical mechanism responsible for this effect, and possible applications of this learning scheme. |
1510.01420 | Debora Marks | Caleb Weinreb, Adam J. Riesselman, John B. Ingraham, Torsten Gross,
Chris Sander, Debora S. Marks | 3D RNA and functional interactions from evolutionary couplings | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-coding RNAs are ubiquitous, but the discovery of new RNA gene sequences
far outpaces research on their structure and functional interactions. We mine
the evolutionary sequence record to derive precise information about function
and structure of RNAs and RNA-protein complexes. As in protein structure
prediction, we use maximum entropy global probability models of sequence
co-variation to infer evolutionarily constrained nucleotide-nucleotide
interactions within RNA molecules, and nucleotide-amino acid interactions in
RNA-protein complexes. The predicted contacts allow all-atom blinded 3D
structure prediction at good accuracy for several known RNA structures and
RNA-protein complexes. For unknown structures, we predict contacts in 160
non-coding RNA families. Beyond 3D structure prediction, evolutionary couplings
help identify important functional interactions, e.g., at switch points in
riboswitches and at a complex nucleation site in HIV. Aided by accelerating
sequence accumulation, evolutionary coupling analysis can accelerate the
discovery of functional interactions and 3D structures involving RNA.
| [
{
"created": "Tue, 6 Oct 2015 03:43:41 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Oct 2015 21:30:24 GMT",
"version": "v2"
},
{
"created": "Thu, 21 Apr 2016 18:27:30 GMT",
"version": "v3"
}
] | 2016-04-22 | [
[
"Weinreb",
"Caleb",
""
],
[
"Riesselman",
"Adam J.",
""
],
[
"Ingraham",
"John B.",
""
],
[
"Gross",
"Torsten",
""
],
[
"Sander",
"Chris",
""
],
[
"Marks",
"Debora S.",
""
]
] | Non-coding RNAs are ubiquitous, but the discovery of new RNA gene sequences far outpaces research on their structure and functional interactions. We mine the evolutionary sequence record to derive precise information about function and structure of RNAs and RNA-protein complexes. As in protein structure prediction, we use maximum entropy global probability models of sequence co-variation to infer evolutionarily constrained nucleotide-nucleotide interactions within RNA molecules, and nucleotide-amino acid interactions in RNA-protein complexes. The predicted contacts allow all-atom blinded 3D structure prediction at good accuracy for several known RNA structures and RNA-protein complexes. For unknown structures, we predict contacts in 160 non-coding RNA families. Beyond 3D structure prediction, evolutionary couplings help identify important functional interactions, e.g., at switch points in riboswitches and at a complex nucleation site in HIV. Aided by accelerating sequence accumulation, evolutionary coupling analysis can accelerate the discovery of functional interactions and 3D structures involving RNA. |
1708.06579 | Mihalis Kavousanakis | Panagiotis Chrysinas, Michail E. Kavousanakis, Andreas G. Boudouvis | Effect of cell heterogeneity on isogenic populations with the synthetic
genetic toggle switch network: bifurcation analysis of two-dimensional Cell
Population Balance Models | null | null | null | null | q-bio.PE q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dynamics of gene regulatory networks are often modeled with the
assumption of cellular homogeneity. However, this assumption contradicts the
plethora of experimental results in a variety of systems, which designates that
cell populations are heterogeneous systems in the sense that properties such as
size, shape, and DNA/RNA content are unevenly distributed amongst their
individuals. In order to address the implications of heterogeneity, we utilize
the so-called cell population balance (CPB) models. Here, we solve numerically
multivariable CPB models to study the effect of heterogeneity on populations
carrying the toggle switch network, which features nonlinear behavior at the
single-cell level. In order to answer whether this nonlinear behavior is
inherited to the heterogeneous population level, we perform bifurcation
analysis on the steady-state solutions of the CPB model. We show that
bistability is present at the population level with the pertinent bistability
region shrinking when the impact of heterogeneity is enhanced.
| [
{
"created": "Tue, 22 Aug 2017 12:22:58 GMT",
"version": "v1"
}
] | 2017-08-23 | [
[
"Chrysinas",
"Panagiotis",
""
],
[
"Kavousanakis",
"Michail E.",
""
],
[
"Boudouvis",
"Andreas G.",
""
]
] | The dynamics of gene regulatory networks are often modeled with the assumption of cellular homogeneity. However, this assumption contradicts the plethora of experimental results in a variety of systems, which designates that cell populations are heterogeneous systems in the sense that properties such as size, shape, and DNA/RNA content are unevenly distributed amongst their individuals. In order to address the implications of heterogeneity, we utilize the so-called cell population balance (CPB) models. Here, we solve numerically multivariable CPB models to study the effect of heterogeneity on populations carrying the toggle switch network, which features nonlinear behavior at the single-cell level. In order to answer whether this nonlinear behavior is inherited to the heterogeneous population level, we perform bifurcation analysis on the steady-state solutions of the CPB model. We show that bistability is present at the population level with the pertinent bistability region shrinking when the impact of heterogeneity is enhanced. |
2009.11531 | Ana Carpio | A. Carpio, A. Sim\'on, L.F. Villa | Clustering methods and Bayesian inference for the analysis of the
evolution of immune disorders | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Choosing appropriate hyperparameters for unsupervised clustering algorithms
in an optimal way depending on the problem under study is a long standing
challenge, which we tackle while adapting clustering algorithms for immune
disorder diagnoses. We compare the potential ability of unsupervised clustering
algorithms to detect disease flares and remission periods through analysis of
laboratory data from systemic lupus erythematosus patients' records with
different hyperparameter choices. To determine which clustering strategy is the
best one we resort to a Bayesian analysis based on the Plackett-Luce model
applied to rankings. This analysis quantifies the uncertainty in the choice of
clustering methods for a given problem.
| [
{
"created": "Thu, 24 Sep 2020 07:43:14 GMT",
"version": "v1"
}
] | 2020-09-25 | [
[
"Carpio",
"A.",
""
],
[
"Simón",
"A.",
""
],
[
"Villa",
"L. F.",
""
]
] | Choosing appropriate hyperparameters for unsupervised clustering algorithms in an optimal way depending on the problem under study is a long standing challenge, which we tackle while adapting clustering algorithms for immune disorder diagnoses. We compare the potential ability of unsupervised clustering algorithms to detect disease flares and remission periods through analysis of laboratory data from systemic lupus erythematosus patients' records with different hyperparameter choices. To determine which clustering strategy is the best one we resort to a Bayesian analysis based on the Plackett-Luce model applied to rankings. This analysis quantifies the uncertainty in the choice of clustering methods for a given problem. |
2202.11635 | Tom Burkart | Tom Burkart, Jan Willeke, and Erwin Frey | Periodic temporal environmental variations induce coexistence in
resource competition models | 21 pages, 10 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Natural ecosystems, in particular on the microbial scale, are inhabited by a
large number of species. The population size of each species is affected by
interactions of individuals with each other and by spatial and temporal changes
in environmental conditions, such as resource abundance. Here, we use a generic
population dynamics model to study how, and under what conditions, a periodic
temporal environmental variation can alter an ecosystem's composition and
biodiversity. We demonstrate that using time scale separation allows one to
qualitatively predict the long-term population dynamics of interacting species
in varying environments. We show that the notion of Tilman's R* rule, a
well-known principle that applies for constant environments, can be extended to
periodically varying environments if the time scale of environmental changes
(e.g., seasonal variations) is much faster than the time scale of population
growth (doubling time in bacteria). When these time scales are similar, our
analysis shows that a varying environment deters the system from reaching a
steady state, and stable coexistence between multiple species becomes possible.
Our results posit that biodiversity can in part be attributed to natural
environmental variations.
| [
{
"created": "Wed, 23 Feb 2022 17:10:03 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Feb 2022 10:03:53 GMT",
"version": "v2"
},
{
"created": "Thu, 22 Jun 2023 10:28:46 GMT",
"version": "v3"
}
] | 2023-06-23 | [
[
"Burkart",
"Tom",
""
],
[
"Willeke",
"Jan",
""
],
[
"Frey",
"Erwin",
""
]
] | Natural ecosystems, in particular on the microbial scale, are inhabited by a large number of species. The population size of each species is affected by interactions of individuals with each other and by spatial and temporal changes in environmental conditions, such as resource abundance. Here, we use a generic population dynamics model to study how, and under what conditions, a periodic temporal environmental variation can alter an ecosystem's composition and biodiversity. We demonstrate that using time scale separation allows one to qualitatively predict the long-term population dynamics of interacting species in varying environments. We show that the notion of Tilman's R* rule, a well-known principle that applies for constant environments, can be extended to periodically varying environments if the time scale of environmental changes (e.g., seasonal variations) is much faster than the time scale of population growth (doubling time in bacteria). When these time scales are similar, our analysis shows that a varying environment deters the system from reaching a steady state, and stable coexistence between multiple species becomes possible. Our results posit that biodiversity can in part be attributed to natural environmental variations. |
1710.02387 | Chrystopher L. Nehaniv | Chrystopher L. Nehaniv and Elena Antonova | Simulating and Reconstructing Neurodynamics with Epsilon-Automata
Applied to Electroencephalography (EEG) Microstate Sequences | null | null | null | null | q-bio.NC cs.FL nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce new techniques to the analysis of neural spatiotemporal dynamics
via applying $\epsilon$-machine reconstruction to electroencephalography (EEG)
microstate sequences. Microstates are short duration quasi-stable states of the
dynamically changing electrical field topographies recorded via an array of
electrodes from the human scalp, and cluster into four canonical classes. The
sequence of microstates observed under particular conditions can be considered
an information source with unknown underlying structure. $\epsilon$-machines
are discrete dynamical system automata with state-dependent probabilities on
different future observations (in this case the next measured EEG microstate).
They artificially reproduce underlying structure in an optimally predictive
manner as generative models exhibiting dynamics emulating the behaviour of the
source. Here we present experiments using both simulations and empirical data
supporting the value of associating these discrete dynamical systems with
mental states (e.g. mind-wandering, focused attention, etc.) and with clinical
populations. The neurodynamics of mental states and clinical populations can
then be further characterized by properties of these dynamical systems,
including: i) statistical complexity (determined by the number of states of the
corresponding $\epsilon$-automaton); ii) entropy rate; iii) characteristic
sequence patterning (syntax, probabilistic grammars); iv) duration, persistence
and stability of dynamical patterns; and v) algebraic measures such as
Krohn-Rhodes complexity or holonomy length of the decompositions of these. The
potential applications include the characterization of mental states in
neurodynamic terms for mental health diagnostics, well-being interventions,
human-machine interface, and others on both subject-specific and
group/population-level.
| [
{
"created": "Fri, 29 Sep 2017 09:19:14 GMT",
"version": "v1"
}
] | 2017-10-09 | [
[
"Nehaniv",
"Chrystopher L.",
""
],
[
"Antonova",
"Elena",
""
]
] | We introduce new techniques to the analysis of neural spatiotemporal dynamics via applying $\epsilon$-machine reconstruction to electroencephalography (EEG) microstate sequences. Microstates are short duration quasi-stable states of the dynamically changing electrical field topographies recorded via an array of electrodes from the human scalp, and cluster into four canonical classes. The sequence of microstates observed under particular conditions can be considered an information source with unknown underlying structure. $\epsilon$-machines are discrete dynamical system automata with state-dependent probabilities on different future observations (in this case the next measured EEG microstate). They artificially reproduce underlying structure in an optimally predictive manner as generative models exhibiting dynamics emulating the behaviour of the source. Here we present experiments using both simulations and empirical data supporting the value of associating these discrete dynamical systems with mental states (e.g. mind-wandering, focused attention, etc.) and with clinical populations. The neurodynamics of mental states and clinical populations can then be further characterized by properties of these dynamical systems, including: i) statistical complexity (determined by the number of states of the corresponding $\epsilon$-automaton); ii) entropy rate; iii) characteristic sequence patterning (syntax, probabilistic grammars); iv) duration, persistence and stability of dynamical patterns; and v) algebraic measures such as Krohn-Rhodes complexity or holonomy length of the decompositions of these. The potential applications include the characterization of mental states in neurodynamic terms for mental health diagnostics, well-being interventions, human-machine interface, and others on both subject-specific and group/population-level. |
1906.09622 | Anne Churchland | Simon Musall, Anne Urai, David Sussillo, Anne Churchland | Harnessing behavioral diversity to understand circuits for cognition | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the increasing acquisition of large-scale neural recordings comes the
challenge of inferring the computations they perform and understanding how
these give rise to behavior. Here, we review emerging conceptual and
technological advances that begin to address this challenge, garnering insights
from both biological and artificial neural networks. We argue that neural data
should be recorded during rich behavioral tasks, to model cognitive processes
and estimate latent behavioral variables. Careful quantification of animal
movements can also provide a more complete picture of how movements shape
neural dynamics and reflect changes in brain state, such as arousal or stress.
Artificial neural networks (ANNs) could serve as an important tool to connect
neural dynamics and rich behavioral data. ANNs have already begun to reveal how
particular behaviors can be optimally solved, generating hypotheses about how
observed neural activity might drive behavior and explaining diversity in
behavioral strategies.
| [
{
"created": "Sun, 23 Jun 2019 18:14:37 GMT",
"version": "v1"
}
] | 2019-06-25 | [
[
"Musall",
"Simon",
""
],
[
"Urai",
"Anne",
""
],
[
"Sussillo",
"David",
""
],
[
"Churchland",
"Anne",
""
]
] | With the increasing acquisition of large-scale neural recordings comes the challenge of inferring the computations they perform and understanding how these give rise to behavior. Here, we review emerging conceptual and technological advances that begin to address this challenge, garnering insights from both biological and artificial neural networks. We argue that neural data should be recorded during rich behavioral tasks, to model cognitive processes and estimate latent behavioral variables. Careful quantification of animal movements can also provide a more complete picture of how movements shape neural dynamics and reflect changes in brain state, such as arousal or stress. Artificial neural networks (ANNs) could serve as an important tool to connect neural dynamics and rich behavioral data. ANNs have already begun to reveal how particular behaviors can be optimally solved, generating hypotheses about how observed neural activity might drive behavior and explaining diversity in behavioral strategies. |
2208.14996 | Thomas Fink | Thomas M. A. Fink | Regulatory motifs: structural and functional building blocks of genetic
computation | null | null | null | null | q-bio.MN cond-mat.stat-mech | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Developing and maintaining life requires a lot of computation. This is done
by gene regulatory networks. But we have little understanding of how this
computation is organized. I show that there is a direct correspondence between
the structural and functional building blocks of regulatory networks, which I
call regulatory motifs. I derive a simple bound on the range of function that
these motifs can perform, in terms of the local network structure. I prove that
this range is a small fraction of all possible functions, which severely
constrains global network behavior. Part of this restriction is due to
redundancy in the function that regulatory motifs can achieve - there are many
ways to perform the same task. Regulatory motifs help us understand how
genetic computation is organized and what it can achieve.
| [
{
"created": "Wed, 31 Aug 2022 17:51:18 GMT",
"version": "v1"
}
] | 2022-09-01 | [
[
"Fink",
"Thomas M. A.",
""
]
] | Developing and maintaining life requires a lot of computation. This is done by gene regulatory networks. But we have little understanding of how this computation is organized. I show that there is a direct correspondence between the structural and functional building blocks of regulatory networks, which I call regulatory motifs. I derive a simple bound on the range of function that these motifs can perform, in terms of the local network structure. I prove that this range is a small fraction of all possible functions, which severely constrains global network behavior. Part of this restriction is due to redundancy in the function that regulatory motifs can achieve - there are many ways to perform the same task. Regulatory motifs help us understand how genetic computation is organized and what it can achieve. |
2107.06025 | Manh Hong Duong | Manh Hong Duong and The Anh Han | Statistics of the number of equilibria in random social dilemma
evolutionary games with mutation | 17 pages | null | 10.1140/epjb/s10051-021-00181-0 | null | q-bio.PE math.DS math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we study analytically the statistics of the number of
equilibria in pairwise social dilemma evolutionary games with mutation where a
game's payoff entries are random variables. Using the replicator-mutator
equations, we provide explicit formulas for the probability distributions of
the number of equilibria as well as other statistical quantities. This analysis
is highly relevant assuming that one might know the nature of a social dilemma
game at hand (e.g., cooperation vs coordination vs anti-coordination), but
measuring the exact values of its payoff entries is difficult. Our delicate
analysis shows clearly the influence of the mutation probability on these
probability distributions, providing insights into how varying this important
factor impacts the overall behavioural or biological diversity of the
underlying evolutionary systems.
| [
{
"created": "Tue, 13 Jul 2021 12:23:30 GMT",
"version": "v1"
}
] | 2021-09-15 | [
[
"Duong",
"Manh Hong",
""
],
[
"Han",
"The Anh",
""
]
] | In this paper, we study analytically the statistics of the number of equilibria in pairwise social dilemma evolutionary games with mutation where a game's payoff entries are random variables. Using the replicator-mutator equations, we provide explicit formulas for the probability distributions of the number of equilibria as well as other statistical quantities. This analysis is highly relevant assuming that one might know the nature of a social dilemma game at hand (e.g., cooperation vs coordination vs anti-coordination), but measuring the exact values of its payoff entries is difficult. Our delicate analysis shows clearly the influence of the mutation probability on these probability distributions, providing insights into how varying this important factor impacts the overall behavioural or biological diversity of the underlying evolutionary systems. |
1807.03174 | Tito Arecchi | F. T. Arecchi | A quantum uncertainty entails entangled linguistic sequences | 14 pages. arXiv admin note: substantial text overlap with
arXiv:1506.00610 | null | null | null | q-bio.NC quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synchronization of finite spike sequences is the way two brain regions
compare their content and extract the most suitable sequence. This is the core
of the linguistic comparison between a word and a previous one retrieved by
memory. Classifying the information content of neural spike trains, an
uncertainty relation emerges between the bit size of a word and its duration.
This uncertainty affects the task of synchronizing spike trains of different
duration representing different words, entailing the occurrence of entangled
sequences, so that word comparison amounts to a measurement based quantum
computation. Entanglement explains the inverse Bayes inference that connects
different words in a linguistic search. The behaviour here discussed provides
an explanation for other reported evidences of quantum effects in human
cognitive processes lacking a plausible framework, since either no assignment
of an appropriate quantum constant had been associated, or speculating on
microscopic processes dependent on Planck's constant resulted in unrealistic
de-coherence times.
| [
{
"created": "Fri, 6 Jul 2018 09:04:59 GMT",
"version": "v1"
}
] | 2018-07-10 | [
[
"Arecchi",
"F. T.",
""
]
] | Synchronization of finite spike sequences is the way two brain regions compare their content and extract the most suitable sequence. This is the core of the linguistic comparison between a word and a previous one retrieved by memory. Classifying the information content of neural spike trains, an uncertainty relation emerges between the bit size of a word and its duration. This uncertainty affects the task of synchronizing spike trains of different duration representing different words, entailing the occurrence of entangled sequences, so that word comparison amounts to a measurement-based quantum computation. Entanglement explains the inverse Bayes inference that connects different words in a linguistic search. The behaviour here discussed provides an explanation for other reported evidence of quantum effects in human cognitive processes lacking a plausible framework, since either no assignment of an appropriate quantum constant had been associated, or speculating on microscopic processes dependent on Planck's constant resulted in unrealistic de-coherence times.
1504.00165 | Andy Chung | Stephanie Portet and Anotida Madzvamuse and Andy Chung and Rudolf E.
Leube and Reinhard Windoffer | Keratin Dynamics: Modeling the Interplay between Turnover and Transport | 27 pages, 11 Figures | null | 10.1371/journal.pone.0121090 | null | q-bio.SC q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Keratin are among the most abundant proteins in epithelial cells. Functions
of the keratin network in cells are shaped by their dynamical organization.
Using a collection of experimentally-driven mathematical models, different
hypotheses for the turnover and transport of the keratin material in epithelial
cells are tested. The interplay between turnover and transport and their
effects on the keratin organization in cells are hence investigated by
combining mathematical modeling and experimental data. Amongst the collection
of mathematical models considered, a best model strongly supported by
experimental data is identified. Fundamental to this approach is the fact that
optimal parameter values associated with the best fit for each model are
established. The best candidate among the best fits is characterized by the
disassembly of the assembled keratin material in the perinuclear region and an
active transport of the assembled keratin. Our study shows that an active
transport of the assembled keratin is required to explain the experimentally
observed keratin organization.
| [
{
"created": "Wed, 1 Apr 2015 09:54:36 GMT",
"version": "v1"
}
] | 2015-08-19 | [
[
"Portet",
"Stephanie",
""
],
[
"Madzvamuse",
"Anotida",
""
],
[
"Chung",
"Andy",
""
],
[
"Leube",
"Rudolf E.",
""
],
[
"Windoffer",
"Reinhard",
""
]
] | Keratins are among the most abundant proteins in epithelial cells. Functions of the keratin network in cells are shaped by their dynamical organization. Using a collection of experimentally-driven mathematical models, different hypotheses for the turnover and transport of the keratin material in epithelial cells are tested. The interplay between turnover and transport and their effects on the keratin organization in cells are hence investigated by combining mathematical modeling and experimental data. Amongst the collection of mathematical models considered, a best model strongly supported by experimental data is identified. Fundamental to this approach is the fact that optimal parameter values associated with the best fit for each model are established. The best candidate among the best fits is characterized by the disassembly of the assembled keratin material in the perinuclear region and an active transport of the assembled keratin. Our study shows that an active transport of the assembled keratin is required to explain the experimentally observed keratin organization.
0906.4875 | Mike Steel Prof. | Mike Steel and Michael J. Sanderson | Characterizing phylogenetically decisive taxon coverage | 6 pages, 1 figure | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Increasingly, biologists are constructing evolutionary trees on large numbers
of overlapping sets of taxa, and then combining them into a `supertree' that
classifies all the taxa. In this paper, we ask how much coverage of the total
set of taxa is required by these subsets in order to ensure we have enough
information to reconstruct the supertree uniquely. We describe two results - a
combinatorial characterization of the covering subsets to ensure that at most
one supertree can be constructed from the smaller trees (whatever trees these
may be) and a more liberal analysis that asks only that the supertree is highly
likely to be uniquely specified by the tree structure on the covering subsets.
| [
{
"created": "Fri, 26 Jun 2009 08:39:35 GMT",
"version": "v1"
}
] | 2009-06-29 | [
[
"Steel",
"Mike",
""
],
[
"Sanderson",
"Michael J.",
""
]
] | Increasingly, biologists are constructing evolutionary trees on large numbers of overlapping sets of taxa, and then combining them into a `supertree' that classifies all the taxa. In this paper, we ask how much coverage of the total set of taxa is required by these subsets in order to ensure we have enough information to reconstruct the supertree uniquely. We describe two results - a combinatorial characterization of the covering subsets to ensure that at most one supertree can be constructed from the smaller trees (whatever trees these may be) and a more liberal analysis that asks only that the supertree is highly likely to be uniquely specified by the tree structure on the covering subsets. |
2110.12927 | Sitabhra Sinha | Chandrashekar Kuyyamudi, Shakti N. Menon and Sitabhra Sinha | Precision of morphogen-driven tissue patterning during development is
enhanced through contact-mediated cellular interactions | 7 pages, 3 figures + 4 pages Supplementary Information | Phys. Rev. E 107, 024407 (2023) | 10.1103/PhysRevE.107.024407 | null | q-bio.TO nlin.PS physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Embryonic development involves pattern formation characterized by the
emergence of spatially localized domains characterized by distinct cell fates
resulting from differential gene expression. The boundaries demarcating these
domains are precise and consistent within a species despite stochastic
fluctuations in the morphogen molecular concentration that provides positional
information to the cells, as well as, the intrinsic noise in molecular
processes that interpret this information to guide fate determination. We show
that local interactions between physically adjacent cells mediated by
receptor-ligand binding utilizes the asymmetry between the fate-determining
genes to yield a switch-like response to the global signal provided by the
morphogen. This results in robust developmental outcomes with a consistent
identity of the gene that is dominantly expressed at each cellular location,
thereby substantially reducing the uncertainty in the location of the boundary
between distinct fates.
| [
{
"created": "Mon, 25 Oct 2021 13:04:03 GMT",
"version": "v1"
}
] | 2024-06-11 | [
[
"Kuyyamudi",
"Chandrashekar",
""
],
[
"Menon",
"Shakti N.",
""
],
[
"Sinha",
"Sitabhra",
""
]
] | Embryonic development involves pattern formation characterized by the emergence of spatially localized domains with distinct cell fates resulting from differential gene expression. The boundaries demarcating these domains are precise and consistent within a species despite stochastic fluctuations in the morphogen molecular concentration that provides positional information to the cells, as well as the intrinsic noise in molecular processes that interpret this information to guide fate determination. We show that local interactions between physically adjacent cells mediated by receptor-ligand binding utilize the asymmetry between the fate-determining genes to yield a switch-like response to the global signal provided by the morphogen. This results in robust developmental outcomes with a consistent identity of the gene that is dominantly expressed at each cellular location, thereby substantially reducing the uncertainty in the location of the boundary between distinct fates.
2306.00046 | Helen Moore | Raja Al-Bahou, Julia Bruner, Helen Moore, Ali Zarrinpar | Quantitative Methods for Optimizing Patient Outcomes in Liver
Transplantation | 2 figures, including a graphical abstract | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Liver transplantation continues to be the gold standard for treating patients
with end-stage liver diseases. However, despite the huge success of liver
transplantation in improving patient outcomes, long term graft survival
continues to be a major problem. The current clinical practice in the
management of liver transplant patients is centered around immunosuppressive
multidrug regimens. Current research has been focusing on phenotypic
personalized medicine as a novel approach in the optimization of
immunosuppression, a regressional math modeling focusing on individual patient
dose and response using specific markers like transaminases. A prospective area
of study includes the development of a mechanistic computational math modeling
for optimizing immunosuppression to improve patient outcomes and increase
long-term graft survival by exploring the intricate immune/drug interactions to
help us further our understanding and management of medical problems like
transplants, autoimmunity, and cancer therapy. Thus, by increasing long-term
graft survival, the need for redo transplants will decrease, which will free up
organs and potentially help with the organ shortage problem promoting equity
and equal opportunity for transplants, as well as decreasing the medical costs
associated with additional testing and hospital admissions. Although long-term
graft survival remains challenging, computational and quantitative methods have
led to significant improvements. In this article, we review recent advances and
remaining opportunities. We focus on the following topics: donor organ
availability and allocation with a focus on equity, monitoring of patient and
graft health, and optimization of immunosuppression dosing.
| [
{
"created": "Wed, 31 May 2023 16:45:44 GMT",
"version": "v1"
}
] | 2023-06-02 | [
[
"Al-Bahou",
"Raja",
""
],
[
"Bruner",
"Julia",
""
],
[
"Moore",
"Helen",
""
],
[
"Zarrinpar",
"Ali",
""
]
] | Liver transplantation continues to be the gold standard for treating patients with end-stage liver diseases. However, despite the huge success of liver transplantation in improving patient outcomes, long-term graft survival continues to be a major problem. The current clinical practice in the management of liver transplant patients is centered around immunosuppressive multidrug regimens. Current research has been focusing on phenotypic personalized medicine as a novel approach in the optimization of immunosuppression, a regression-based math model focusing on individual patient dose and response using specific markers like transaminases. A prospective area of study includes the development of a mechanistic computational math model for optimizing immunosuppression to improve patient outcomes and increase long-term graft survival by exploring the intricate immune/drug interactions to help us further our understanding and management of medical problems like transplants, autoimmunity, and cancer therapy. Thus, by increasing long-term graft survival, the need for redo transplants will decrease, which will free up organs and potentially help with the organ shortage problem, promoting equity and equal opportunity for transplants, as well as decreasing the medical costs associated with additional testing and hospital admissions. Although long-term graft survival remains challenging, computational and quantitative methods have led to significant improvements. In this article, we review recent advances and remaining opportunities. We focus on the following topics: donor organ availability and allocation with a focus on equity, monitoring of patient and graft health, and optimization of immunosuppression dosing.
1802.04532 | Luca Telesca | Luca Telesca, Kati Michalek, Trystan Sanders, Lloyd S. Peck, Jakob
Thyrring, Elizabeth M. Harper | Blue mussel shell shape plasticity and natural environments: a
quantitative approach | null | Scientific Reports 8 (2018) 2865 | 10.1038/s41598-018-20122-9 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Shape variability represents an important direct response of organisms to
selective environments. Here, we use a combination of geometric morphometrics
and generalised additive mixed models (GAMMs) to identify spatial patterns of
natural shell shape variation in the North Atlantic and Arctic blue mussels,
Mytilus edulis and M. trossulus, with environmental gradients of temperature,
salinity and food availability across 3980 km of coastlines. New statistical
methods and multiple study systems at various geographical scales allowed the
uncoupling of the developmental and genetic contributions to shell shape and
made it possible to identify general relationships between blue mussel shape
variation and environment that are independent of age and species influences.
We find salinity had the strongest effect on the latitudinal patterns of
Mytilus shape, producing shells that were more elongated, narrower and with
more parallel dorsoventral margins at lower salinities. Temperature and food
supply, however, were the main drivers of mussel shape heterogeneity. Our
findings revealed similar shell shape responses in Mytilus to less favourable
environmental conditions across the different geographical scales analysed. Our
results show how shell shape plasticity represents a powerful indicator to
understand the alterations of blue mussel communities in rapidly changing
environments.
| [
{
"created": "Tue, 13 Feb 2018 10:06:11 GMT",
"version": "v1"
}
] | 2018-02-14 | [
[
"Telesca",
"Luca",
""
],
[
"Michalek",
"Kati",
""
],
[
"Sanders",
"Trystan",
""
],
[
"Peck",
"Lloyd S.",
""
],
[
"Thyrring",
"Jakob",
""
],
[
"Harper",
"Elizabeth M.",
""
]
] | Shape variability represents an important direct response of organisms to selective environments. Here, we use a combination of geometric morphometrics and generalised additive mixed models (GAMMs) to identify spatial patterns of natural shell shape variation in the North Atlantic and Arctic blue mussels, Mytilus edulis and M. trossulus, with environmental gradients of temperature, salinity and food availability across 3980 km of coastlines. New statistical methods and multiple study systems at various geographical scales allowed the uncoupling of the developmental and genetic contributions to shell shape and made it possible to identify general relationships between blue mussel shape variation and environment that are independent of age and species influences. We find salinity had the strongest effect on the latitudinal patterns of Mytilus shape, producing shells that were more elongated, narrower and with more parallel dorsoventral margins at lower salinities. Temperature and food supply, however, were the main drivers of mussel shape heterogeneity. Our findings revealed similar shell shape responses in Mytilus to less favourable environmental conditions across the different geographical scales analysed. Our results show how shell shape plasticity represents a powerful indicator to understand the alterations of blue mussel communities in rapidly changing environments. |
2212.13617 | Li Shen | Li Shen, Hongsong Feng, Yuchi Qiu, Guo-Wei Wei | SVSBI: Sequence-based virtual screening of biomolecular interactions | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Virtual screening (VS) is an essential technique for understanding
biomolecular interactions, particularly, drug design and discovery. The
best-performing VS models depend vitally on three-dimensional (3D) structures,
which are not available in general but can be obtained from molecular docking.
However, current docking accuracy is relatively low, rendering unreliable VS
models. We introduce sequence-based virtual screening (SVS) as a new generation
of VS models for modeling biomolecular interactions. The SVS model utilizes
advanced natural language processing (NLP) algorithms and optimizes deep
$K$-embedding strategies to encode biomolecular interactions without invoking
3D structure-based docking. We demonstrate the state-of-art performance of SVS
for four regression datasets involving protein-ligand binding, protein-protein,
protein-nucleic acid binding, and ligand inhibition of protein-protein
interactions and five classification datasets for the protein-protein
interactions in five biological species. SVS has the potential to dramatically
change the current practice in drug discovery and protein engineering.
| [
{
"created": "Tue, 27 Dec 2022 21:18:47 GMT",
"version": "v1"
}
] | 2022-12-29 | [
[
"Shen",
"Li",
""
],
[
"Feng",
"Hongsong",
""
],
[
"Qiu",
"Yuchi",
""
],
[
"Wei",
"Guo-Wei",
""
]
] | Virtual screening (VS) is an essential technique for understanding biomolecular interactions, particularly for drug design and discovery. The best-performing VS models depend vitally on three-dimensional (3D) structures, which are not available in general but can be obtained from molecular docking. However, current docking accuracy is relatively low, rendering unreliable VS models. We introduce sequence-based virtual screening (SVS) as a new generation of VS models for modeling biomolecular interactions. The SVS model utilizes advanced natural language processing (NLP) algorithms and optimizes deep $K$-embedding strategies to encode biomolecular interactions without invoking 3D structure-based docking. We demonstrate the state-of-the-art performance of SVS for four regression datasets involving protein-ligand binding, protein-protein, protein-nucleic acid binding, and ligand inhibition of protein-protein interactions and five classification datasets for the protein-protein interactions in five biological species. SVS has the potential to dramatically change the current practice in drug discovery and protein engineering.
q-bio/0406031 | Guido Tiana | R. A. Broglia and G. Tiana | Simple models of protein folding and of non--conventional drug design | null | null | 10.1088/0953-8984/16/6/R02 | null | q-bio.BM | null | While all the information required for the folding of a protein is contained
in its amino acid sequence, one has not yet learned how to extract this
information to predict the three--dimensional, biologically active, native
conformation of a protein whose sequence is known. Using insight obtained from
simple model simulations of the folding of proteins, in particular of the fact
that this phenomenon is essentially controlled by conserved (native) contacts
among (few) strongly interacting ("hot"), as a rule hydrophobic, amino acids,
which also stabilize local elementary structures (LES, hidden, incipient
secondary structures like $\alpha$--helices and $\beta$--sheets) formed early
in the folding process and leading to the postcritical folding nucleus (i.e.,
the minimum set of native contacts which bring the system pass beyond the
highest free--energy barrier found in the whole folding process) it is possible
to work out a succesful strategy for reading the native structure of designed
proteins from the knowledge of only their amino acid sequence and of the
contact energies among the amino acids. Because LES have undergone millions of
years of evolution to selectively dock to their complementary structures, small
peptides made out of the same amino acids as the LES are expected to
selectively attach to the newly expressed (unfolded) protein and inhibit its
folding, or to the native (fluctuating) native conformation and denaturate it.
These peptides, or their mimetic molecules, can thus be used as effective
non--conventional drugs to those already existing (and directed at neutralizing
the active site of enzymes), displaying the advantage of not suffering from the
uprise of resistance.
| [
{
"created": "Tue, 15 Jun 2004 12:26:48 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Broglia",
"R. A.",
""
],
[
"Tiana",
"G.",
""
]
] | While all the information required for the folding of a protein is contained in its amino acid sequence, one has not yet learned how to extract this information to predict the three--dimensional, biologically active, native conformation of a protein whose sequence is known. Using insight obtained from simple model simulations of the folding of proteins, in particular of the fact that this phenomenon is essentially controlled by conserved (native) contacts among (few) strongly interacting ("hot"), as a rule hydrophobic, amino acids, which also stabilize local elementary structures (LES, hidden, incipient secondary structures like $\alpha$--helices and $\beta$--sheets) formed early in the folding process and leading to the postcritical folding nucleus (i.e., the minimum set of native contacts which carry the system past the highest free--energy barrier found in the whole folding process), it is possible to work out a successful strategy for reading the native structure of designed proteins from the knowledge of only their amino acid sequence and of the contact energies among the amino acids. Because LES have undergone millions of years of evolution to selectively dock to their complementary structures, small peptides made out of the same amino acids as the LES are expected to selectively attach to the newly expressed (unfolded) protein and inhibit its folding, or to the native (fluctuating) conformation and denature it. These peptides, or their mimetic molecules, can thus be used as effective non--conventional drugs, complementary to those already existing (and directed at neutralizing the active site of enzymes), displaying the advantage of not suffering from the rise of resistance.
1109.5633 | Steven Frank | Steven A. Frank | Natural selection. II. Developmental variability and evolutionary rate | null | Journal of Evolutionary Biology 24:2310-2320 (2011) | 10.1111/j.1420-9101.2011.02373.x | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In classical evolutionary theory, genetic variation provides the source of
heritable phenotypic variation on which natural selection acts. Against this
classical view, several theories have emphasized that developmental variability
and learning enhance nonheritable phenotypic variation, which in turn can
accelerate evolutionary response. In this paper, I show how developmental
variability alters evolutionary dynamics by smoothing the landscape that
relates genotype to fitness. In a fitness landscape with multiple peaks and
valleys, developmental variability can smooth the landscape to provide a
directly increasing path of fitness to the highest peak. Developmental
variability also allows initial survival of a genotype in response to novel or
extreme environmental challenge, providing an opportunity for subsequent
adaptation. This initial survival advantage arises from the way in which
developmental variability smooths and broadens the fitness landscape.
Ultimately, the synergism between developmental processes and genetic variation
sets evolutionary rate.
| [
{
"created": "Mon, 26 Sep 2011 16:47:29 GMT",
"version": "v1"
}
] | 2011-11-08 | [
[
"Frank",
"Steven A.",
""
]
] | In classical evolutionary theory, genetic variation provides the source of heritable phenotypic variation on which natural selection acts. Against this classical view, several theories have emphasized that developmental variability and learning enhance nonheritable phenotypic variation, which in turn can accelerate evolutionary response. In this paper, I show how developmental variability alters evolutionary dynamics by smoothing the landscape that relates genotype to fitness. In a fitness landscape with multiple peaks and valleys, developmental variability can smooth the landscape to provide a directly increasing path of fitness to the highest peak. Developmental variability also allows initial survival of a genotype in response to novel or extreme environmental challenge, providing an opportunity for subsequent adaptation. This initial survival advantage arises from the way in which developmental variability smooths and broadens the fitness landscape. Ultimately, the synergism between developmental processes and genetic variation sets evolutionary rate. |
1803.00871 | Paolo Olivero | L. Guarina, C. Calorio, D. Gavello, E. Moreva, P. Traina, A. Battiato,
S. Ditalia Tchernij, J. Forneris, M. Gai, F. Picollo, P. Olivero, M.
Genovese, E. Carbone, A. Marcantoni, V. Carabelli | Nanodiamonds-induced effects on neuronal firing of mouse hippocampal
microcircuits | 34 pages, 9 figures | Scientific Reports 8, 2221 (2018) | 10.1038/s41598-018-20528-5 | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fluorescent nanodiamonds (FND) are carbon-based nanomaterials that can
efficiently incorporate optically active photoluminescent centers such as the
nitrogen-vacancy complex, thus making them promising candidates as optical
biolabels and drug-delivery agents. FNDs exhibit bright fluorescence without
photobleaching combined with high uptake rate and low cytotoxicity. Focusing on
FNDs interference with neuronal function, here we examined their effect on
cultured hippocampal neurons, monitoring the whole network development as well
as the electrophysiological properties of single neurons. We observed that FNDs
drastically decreased the frequency of inhibitory (from 1.81 Hz to 0.86 Hz) and
excitatory (from 1.61 Hz to 0.68 Hz) miniature postsynaptic currents, and
consistently reduced action potential (AP) firing frequency (by 36%), as
measured by microelectrode arrays. On the contrary, bursts synchronization was
preserved, as well as the amplitude of spontaneous inhibitory and excitatory
events. Current-clamp recordings revealed that the ratio of neurons responding
with AP trains of high-frequency (fast-spiking) versus neurons responding with
trains of low-frequency (slow-spiking) was unaltered, suggesting that FNDs
exerted a comparable action on neuronal subpopulations. At the single cell
level, rapid onset of the somatic AP ("kink") was drastically reduced in
FND-treated neurons, suggesting a reduced contribution of axonal and dendritic
components while preserving neuronal excitability.
| [
{
"created": "Fri, 2 Mar 2018 14:50:56 GMT",
"version": "v1"
}
] | 2018-03-05 | [
[
"Guarina",
"L.",
""
],
[
"Calorio",
"C.",
""
],
[
"Gavello",
"D.",
""
],
[
"Moreva",
"E.",
""
],
[
"Traina",
"P.",
""
],
[
"Battiato",
"A.",
""
],
[
"Tchernij",
"S. Ditalia",
""
],
[
"Forneris",
"J.",
""
],
[
"Gai",
"M.",
""
],
[
"Picollo",
"F.",
""
],
[
"Olivero",
"P.",
""
],
[
"Genovese",
"M.",
""
],
[
"Carbone",
"E.",
""
],
[
"Marcantoni",
"A.",
""
],
[
"Carabelli",
"V.",
""
]
] | Fluorescent nanodiamonds (FND) are carbon-based nanomaterials that can efficiently incorporate optically active photoluminescent centers such as the nitrogen-vacancy complex, thus making them promising candidates as optical biolabels and drug-delivery agents. FNDs exhibit bright fluorescence without photobleaching combined with high uptake rate and low cytotoxicity. Focusing on FNDs interference with neuronal function, here we examined their effect on cultured hippocampal neurons, monitoring the whole network development as well as the electrophysiological properties of single neurons. We observed that FNDs drastically decreased the frequency of inhibitory (from 1.81 Hz to 0.86 Hz) and excitatory (from 1.61 Hz to 0.68 Hz) miniature postsynaptic currents, and consistently reduced action potential (AP) firing frequency (by 36%), as measured by microelectrode arrays. On the contrary, bursts synchronization was preserved, as well as the amplitude of spontaneous inhibitory and excitatory events. Current-clamp recordings revealed that the ratio of neurons responding with AP trains of high-frequency (fast-spiking) versus neurons responding with trains of low-frequency (slow-spiking) was unaltered, suggesting that FNDs exerted a comparable action on neuronal subpopulations. At the single cell level, rapid onset of the somatic AP ("kink") was drastically reduced in FND-treated neurons, suggesting a reduced contribution of axonal and dendritic components while preserving neuronal excitability. |
1910.11180 | Eric Batsche | Eric Batsch\'e, Oriane Mauger, Etienne Kornobis, Benjamin Hopkins,
Charlotte Hanmer-Lloyd, Christian Muchardt | CD44 alternative splicing is a sensor of intragenic DNA methylation in
tumors | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DNA methylation (meDNA) is a suspected modulator of alternative splicing,
while splicing in turn is involved in tumour formations nearly as frequently as
DNA mutations. Yet, the impact of meDNA on tumorigenesis via its effect on
splicing has not been thoroughly explored. Here, we find that HCT116 colon
carcinoma cells inactivated for the DNA methylases DNMT1 and DNMT3b undergo a
partial epithelial to mesenchymal transition (EMT) associated with alternative
splicing of the CD44 transmembrane receptor. The skipping of CD44 variant exons
is in part explained by altered expression or splicing of splicing and
chromatin factors. A direct effect of meDNA on alternative splicing was
sustained by transient depletion of DNMT1 and the methyl-binding genes MBD1,
MBD2, and MBD3. Yet, local changes in intragenic meDNA also altered recruitment
of MBD1 protein and of the chromatin factor HP1$\gamma$ known to alter
transcriptional pausing and alternative splicing decisions. We further tested
if meDNA level has sufficiently strong direct impact on the outcome of
alternative splicing to have a predictive value in the MCF10A model for breast
cancer progression and in patients with acute lymphoblastic leukemia (B ALL).
We found that a small number of differentially spliced genes mostly involved in
splicing and signal transduction is systematically correlated with local meDNA.
Altogether, our observations suggest that, although DNA methylation has
multiple avenues to alternative splicing, its indirect effect may be also
mediated through alternative splicing isoforms of these sensors of meDNA.
| [
{
"created": "Thu, 24 Oct 2019 14:35:26 GMT",
"version": "v1"
}
] | 2019-10-25 | [
[
"Batsché",
"Eric",
""
],
[
"Mauger",
"Oriane",
""
],
[
"Kornobis",
"Etienne",
""
],
[
"Hopkins",
"Benjamin",
""
],
[
"Hanmer-Lloyd",
"Charlotte",
""
],
[
"Muchardt",
"Christian",
""
]
] | DNA methylation (meDNA) is a suspected modulator of alternative splicing, while splicing in turn is involved in tumour formation nearly as frequently as DNA mutations. Yet, the impact of meDNA on tumorigenesis via its effect on splicing has not been thoroughly explored. Here, we find that HCT116 colon carcinoma cells inactivated for the DNA methylases DNMT1 and DNMT3b undergo a partial epithelial to mesenchymal transition (EMT) associated with alternative splicing of the CD44 transmembrane receptor. The skipping of CD44 variant exons is in part explained by altered expression or splicing of splicing and chromatin factors. A direct effect of meDNA on alternative splicing was sustained by transient depletion of DNMT1 and the methyl-binding genes MBD1, MBD2, and MBD3. Yet, local changes in intragenic meDNA also altered recruitment of MBD1 protein and of the chromatin factor HP1$\gamma$ known to alter transcriptional pausing and alternative splicing decisions. We further tested if meDNA level has a sufficiently strong direct impact on the outcome of alternative splicing to have a predictive value in the MCF10A model for breast cancer progression and in patients with acute lymphoblastic leukemia (B ALL). We found that a small number of differentially spliced genes mostly involved in splicing and signal transduction is systematically correlated with local meDNA. Altogether, our observations suggest that, although DNA methylation has multiple avenues to alternative splicing, its indirect effect may also be mediated through alternative splicing isoforms of these sensors of meDNA.
1105.2830 | Oren Elrad | Michael F. Hagan, Oren M. Elrad and Robert L. Jack | Mechanisms of kinetic trapping in self-assembly and phase transformation | null | null | 10.1063/1.3635775 | null | q-bio.BM cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In self-assembly processes, kinetic trapping effects often hinder the
formation of thermodynamically stable ordered states. In a model of viral
capsid assembly and in the phase transformation of a lattice gas, we show how
simulations in a self-assembling steady state can be used to identify two
distinct mechanisms of kinetic trapping. We argue that one of these mechanisms
can be adequately captured by kinetic rate equations, while the other involves
a breakdown of theories that rely on cluster size as a reaction coordinate. We
discuss how these observations might be useful in designing and optimising
self-assembly reactions.
| [
{
"created": "Fri, 13 May 2011 20:14:50 GMT",
"version": "v1"
}
] | 2015-05-28 | [
[
"Hagan",
"Michael F.",
""
],
[
"Elrad",
"Oren M.",
""
],
[
"Jack",
"Robert L.",
""
]
] | In self-assembly processes, kinetic trapping effects often hinder the formation of thermodynamically stable ordered states. In a model of viral capsid assembly and in the phase transformation of a lattice gas, we show how simulations in a self-assembling steady state can be used to identify two distinct mechanisms of kinetic trapping. We argue that one of these mechanisms can be adequately captured by kinetic rate equations, while the other involves a breakdown of theories that rely on cluster size as a reaction coordinate. We discuss how these observations might be useful in designing and optimising self-assembly reactions. |
2010.13458 | Cintya Dutta | Cintya Nirvana Dutta, Pamela K. Douglas, Hernando Ombao | Structural Brain Asymmetries in Youths with Combined and Inattentive
Presentations of Attention Deficit Hyperactivity Disorder | 5 pages, 3 figures, 1 table, submitted to ISBI conference | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Alterations in structural brain laterality are reported in
attention-deficit/hyperactivity disorder (ADHD). However, few studies examined
differences within presentations of ADHD. We investigate asymmetry index (AI)
across 13 subcortical and 33 cortical regions from anatomical metrics of
volume, surface area, and thickness. Structural T1-weighted MRI data were
obtained from youths with inattentive (n = 64) and combined (n = 51)
presentations, and age-matched controls (n = 298). We used a linear mixed
effect model that accounts for data site heterogeneity, while studying
associations between AI and covariates of presentation and age. Our paper
contributes to the functional results seen among ADHD presentations evidencing
disrupted connectivity in motor networks from ADHD-C and cingulo-frontal
networks from ADHD-I, as well as new findings in the temporal cortex and
default mode subnetworks. Age patterns of structural asymmetries vary with
presentation type. Linear mixed effects model is a practical tool for
characterizing associations between brain asymmetries, diagnosis, and
neurodevelopment.
| [
{
"created": "Mon, 26 Oct 2020 09:54:18 GMT",
"version": "v1"
}
] | 2020-10-27 | [
[
"Dutta",
"Cintya Nirvana",
""
],
[
"Douglas",
"Pamela K.",
""
],
[
"Ombao",
"Hernando",
""
]
] | Alterations in structural brain laterality are reported in attention-deficit/hyperactivity disorder (ADHD). However, few studies examined differences within presentations of ADHD. We investigate asymmetry index (AI) across 13 subcortical and 33 cortical regions from anatomical metrics of volume, surface area, and thickness. Structural T1-weighted MRI data were obtained from youths with inattentive (n = 64) and combined (n = 51) presentations, and age-matched controls (n = 298). We used a linear mixed effect model that accounts for data site heterogeneity, while studying associations between AI and covariates of presentation and age. Our paper contributes to the functional results seen among ADHD presentations evidencing disrupted connectivity in motor networks from ADHD-C and cingulo-frontal networks from ADHD-I, as well as new findings in the temporal cortex and default mode subnetworks. Age patterns of structural asymmetries vary with presentation type. Linear mixed effects model is a practical tool for characterizing associations between brain asymmetries, diagnosis, and neurodevelopment. |
1904.08190 | Bosco Emmanuel | Bosco Emmanuel | Regulation of Heart Beats by the Autonomous Nervous System in Health and
Disease: Point-Process-Theory based Models and Simulation [V-I] | 42 pages 20 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have advanced a point-process based framework for the regulation of heart
beats by the autonomous nervous system and analyzed the model with and without
feedback. The model without feedback was found amenable to several analytical
results that help develop an intuition about the way the heart interacts with
the nervous system. However, in reality, feedback, baroreflex and chemoreflex
controls are important to model healthy and unhealthy scenarios for the heart.
Based on the Hurst exponent as an index of health of the heart we show how the
state of the nervous system may tune it in health and disease. Monte Carlo
simulation is used to generate RR interval series of the Electrocardiogram
(ECG) for different sympathetic and parasympathetic nerve excitations.
| [
{
"created": "Wed, 17 Apr 2019 11:20:41 GMT",
"version": "v1"
}
] | 2019-04-18 | [
[
"Emmanuel",
"Bosco",
""
]
] | We have advanced a point-process based framework for the regulation of heart beats by the autonomous nervous system and analyzed the model with and without feedback. The model without feedback was found amenable to several analytical results that help develop an intuition about the way the heart interacts with the nervous system. However, in reality, feedback, baroreflex and chemoreflex controls are important to model healthy and unhealthy scenarios for the heart. Based on the Hurst exponent as an index of health of the heart we show how the state of the nervous system may tune it in health and disease. Monte Carlo simulation is used to generate RR interval series of the Electrocardiogram (ECG) for different sympathetic and parasympathetic nerve excitations. |
2306.15485 | Diego Vidaurre | Diego Vidaurre | Dynamic functional connectivity: why the controversy? | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In principle, dynamic functional connectivity in fMRI is just a statistical
measure. A passer-by might think it to be a specialist topic, but it continues
to attract widespread attention and spark controversy. Why?
| [
{
"created": "Tue, 27 Jun 2023 14:06:05 GMT",
"version": "v1"
}
] | 2023-06-28 | [
[
"Vidaurre",
"Diego",
""
]
] | In principle, dynamic functional connectivity in fMRI is just a statistical measure. A passer-by might think it to be a specialist topic, but it continues to attract widespread attention and spark controversy. Why? |
1705.03115 | Eli Shlizerman | David Blaszka, Elischa Sanders, Jeffrey Riffell, Eli Shlizerman | Classification of Fixed Point Network Dynamics From Multiple Node
Timeseries Data | submitted for publication | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fixed point networks are dynamic networks that encode stimuli via distinct
output patterns. Although such networks are omnipresent in neural systems,
their structures are typically unknown or poorly characterized. It is therefore
valuable to use a supervised approach for resolving how a network encodes
distinct inputs of interest, and the superposition of those inputs from sampled
multiple node time series. In this paper we show that accomplishing such a task
involves finding a low-dimensional state space from supervised recordings. We
demonstrate that standard methods for dimension reduction are unable to provide
the desired functionality of optimal separation of the fixed points and
transient trajectories to them. However, the combination of dimension reduction
with selection and optimization can successfully provide such functionality.
Specifically, we propose two methods: Exclusive Threshold Reduction (ETR) and
Optimal Exclusive Threshold Reduction (OETR) for finding a basis for the
classification state space. We show that the classification space constructed
upon the combination of dimension reduction and optimal separation can directly
facilitate recognition of stimuli, and classify complex inputs (mixtures) into
similarity classes. We test our methodology and compare it to standard
state-of-the-art methods on a benchmark dataset - an experimental neuronal
network (the olfactory system) that we recorded from to test these methods. We
show that our methods are capable of providing a basis for the classification
space in such network, and to perform recognition at a significantly better
rate than previously proposed approaches.
| [
{
"created": "Mon, 8 May 2017 22:47:40 GMT",
"version": "v1"
}
] | 2017-05-10 | [
[
"Blaszka",
"David",
""
],
[
"Sanders",
"Elischa",
""
],
[
"Riffell",
"Jeffrey",
""
],
[
"Shlizerman",
"Eli",
""
]
] | Fixed point networks are dynamic networks that encode stimuli via distinct output patterns. Although such networks are omnipresent in neural systems, their structures are typically unknown or poorly characterized. It is therefore valuable to use a supervised approach for resolving how a network encodes distinct inputs of interest, and the superposition of those inputs from sampled multiple node time series. In this paper we show that accomplishing such a task involves finding a low-dimensional state space from supervised recordings. We demonstrate that standard methods for dimension reduction are unable to provide the desired functionality of optimal separation of the fixed points and transient trajectories to them. However, the combination of dimension reduction with selection and optimization can successfully provide such functionality. Specifically, we propose two methods: Exclusive Threshold Reduction (ETR) and Optimal Exclusive Threshold Reduction (OETR) for finding a basis for the classification state space. We show that the classification space constructed upon the combination of dimension reduction and optimal separation can directly facilitate recognition of stimuli, and classify complex inputs (mixtures) into similarity classes. We test our methodology and compare it to standard state-of-the-art methods on a benchmark dataset - an experimental neuronal network (the olfactory system) that we recorded from to test these methods. We show that our methods are capable of providing a basis for the classification space in such network, and to perform recognition at a significantly better rate than previously proposed approaches. |
q-bio/0404031 | Paul Higgs | Cendrine Hudelot, Vivek Gowri-Shankar, Howsun Jow, Magnus Rattray and
Paul G. Higgs | RNA-based Phylogenetic Methods: Application to Mammalian Mitochondrial
RNA Sequences | null | Mol. Phyl. Evol. 28, 241-252. (2003) | null | null | q-bio.PE | null | The PHASE software package allows phylogenetic tree construction with a
number of evolutionary models designed specifically for use with RNA sequences
that have conserved secondary structure. Evolution in the paired regions of
RNAs occurs via compensatory substitutions, hence changes on either side of a
pair are correlated. Accounting for this correlation is important for
phylogenetic inference because it affects the likelihood calculation. In the
present study we use the complete set of tRNA and rRNA sequences from 69
complete mammalian mitochondrial genomes. The likelihood calculation uses two
evolutionary models simultaneously for different parts of the sequence: a
paired-site model for the paired sites and a single-site model for the unpaired
sites. We use Bayesian phylogenetic methods and a Markov chain Monte Carlo
algorithm to obtain the most probable trees and posterior probabilities
of clades. The results are well resolved for almost all the important branches
on the mammalian tree. They support the arrangement of mammalian orders within
the four supra-ordinal clades that have been identified by studies of much
larger data sets mainly comprising nuclear genes. Groups such as the hedgehogs
and the murid rodents, which have been problematic in previous studies with
mitochondrial proteins, appear in their expected position with the other
members of their order. Our choice of genes and evolutionary model appears to
be more reliable and less subject to biases caused by variation in base
composition than previous studies with mitochondrial genomes.
| [
{
"created": "Fri, 23 Apr 2004 14:39:02 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Hudelot",
"Cendrine",
""
],
[
"Gowri-Shankar",
"Vivek",
""
],
[
"Jow",
"Howsun",
""
],
[
"Rattray",
"Magnus",
""
],
[
"Higgs",
"Paul G.",
""
]
] | The PHASE software package allows phylogenetic tree construction with a number of evolutionary models designed specifically for use with RNA sequences that have conserved secondary structure. Evolution in the paired regions of RNAs occurs via compensatory substitutions, hence changes on either side of a pair are correlated. Accounting for this correlation is important for phylogenetic inference because it affects the likelihood calculation. In the present study we use the complete set of tRNA and rRNA sequences from 69 complete mammalian mitochondrial genomes. The likelihood calculation uses two evolutionary models simultaneously for different parts of the sequence: a paired-site model for the paired sites and a single-site model for the unpaired sites. We use Bayesian phylogenetic methods and a Markov chain Monte Carlo algorithm to obtain the most probable trees and posterior probabilities of clades. The results are well resolved for almost all the important branches on the mammalian tree. They support the arrangement of mammalian orders within the four supra-ordinal clades that have been identified by studies of much larger data sets mainly comprising nuclear genes. Groups such as the hedgehogs and the murid rodents, which have been problematic in previous studies with mitochondrial proteins, appear in their expected position with the other members of their order. Our choice of genes and evolutionary model appears to be more reliable and less subject to biases caused by variation in base composition than previous studies with mitochondrial genomes. |
1706.05656 | Stefan Frank | Stefan Frank, Jinbiao Yang | Lexical representation explains cortical entrainment during speech
comprehension | Submitted for publication | null | 10.1371/journal.pone.0197304 | null | q-bio.NC cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Results from a recent neuroimaging study on spoken sentence comprehension
have been interpreted as evidence for cortical entrainment to hierarchical
syntactic structure. We present a simple computational model that predicts the
power spectra from this study, even though the model's linguistic knowledge is
restricted to the lexical level, and word-level representations are not
combined into higher-level units (phrases or sentences). Hence, the cortical
entrainment results can also be explained from the lexical properties of the
stimuli, without recourse to hierarchical syntax.
| [
{
"created": "Sun, 18 Jun 2017 14:04:09 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Jul 2017 07:33:02 GMT",
"version": "v2"
},
{
"created": "Thu, 20 Jul 2017 11:26:48 GMT",
"version": "v3"
},
{
"created": "Mon, 9 Oct 2017 14:24:45 GMT",
"version": "v4"
},
{
"created": "Wed, 3 Jan 2018 13:53:14 GMT",
"version": "v5"
},
{
"created": "Wed, 10 Jan 2018 13:35:26 GMT",
"version": "v6"
}
] | 2018-06-15 | [
[
"Frank",
"Stefan",
""
],
[
"Yang",
"Jinbiao",
""
]
] | Results from a recent neuroimaging study on spoken sentence comprehension have been interpreted as evidence for cortical entrainment to hierarchical syntactic structure. We present a simple computational model that predicts the power spectra from this study, even though the model's linguistic knowledge is restricted to the lexical level, and word-level representations are not combined into higher-level units (phrases or sentences). Hence, the cortical entrainment results can also be explained from the lexical properties of the stimuli, without recourse to hierarchical syntax. |
2301.02623 | Rudy Arthur | Rudy Arthur and Arwen Nicholson | Does Gaia Play Dice? : Simple Models of non-Darwinian Selection | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | In this paper we introduce some simple models, based on rolling dice, to
explore mechanisms proposed to explain planetary habitability. The idea is to
study these selection mechanisms in an analytically tractable setting,
isolating their consequences from other details which can confound or obscure
their effect in more realistic models. We find that while the observable of
interest, the face value shown on the die, `improves' over time in all models,
for two of the more popular ideas, Selection by Survival and Sequential
Selection, this is down to sampling effects. A modified version of Sequential
Selection, Sequential Selection with Memory, implies a statistical tendency for
systems to improve over time. We discuss the implications of this and its
relationship to the ideas of the `Inhabitance Paradox' and the `Gaian
bottleneck'.
| [
{
"created": "Fri, 6 Jan 2023 17:58:36 GMT",
"version": "v1"
},
{
"created": "Sat, 27 Jul 2024 14:18:40 GMT",
"version": "v2"
}
] | 2024-07-30 | [
[
"Arthur",
"Rudy",
""
],
[
"Nicholson",
"Arwen",
""
]
] | In this paper we introduce some simple models, based on rolling dice, to explore mechanisms proposed to explain planetary habitability. The idea is to study these selection mechanisms in an analytically tractable setting, isolating their consequences from other details which can confound or obscure their effect in more realistic models. We find that while the observable of interest, the face value shown on the die, `improves' over time in all models, for two of the more popular ideas, Selection by Survival and Sequential Selection, this is down to sampling effects. A modified version of Sequential Selection, Sequential Selection with Memory, implies a statistical tendency for systems to improve over time. We discuss the implications of this and its relationship to the ideas of the `Inhabitance Paradox' and the `Gaian bottleneck'. |
1809.09321 | Bryan Daniels | Bryan C. Daniels, William S. Ryu, and Ilya Nemenman | Automated, predictive, and interpretable inference of C. elegans escape
dynamics | 19 pages, 5 figures | null | 10.1073/pnas.1816531116 | null | q-bio.NC q-bio.QM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The roundworm C. elegans exhibits robust escape behavior in response to
rapidly rising temperature. The behavior lasts for a few seconds, shows history
dependence, involves both sensory and motor systems, and is too complicated to
model mechanistically using currently available knowledge. Instead we model the
process phenomenologically, and we use the Sir Isaac dynamical inference
platform to infer the model in a fully automated fashion directly from
experimental data. The inferred model requires incorporation of an unobserved
dynamical variable, and is biologically interpretable. The model makes accurate
predictions about the dynamics of the worm behavior, and it can be used to
characterize the functional logic of the dynamical system underlying the escape
response. This work illustrates the power of modern artificial intelligence to
aid in discovery of accurate and interpretable models of complex natural
systems.
| [
{
"created": "Tue, 25 Sep 2018 05:15:40 GMT",
"version": "v1"
}
] | 2019-06-19 | [
[
"Daniels",
"Bryan C.",
""
],
[
"Ryu",
"William S.",
""
],
[
"Nemenman",
"Ilya",
""
]
] | The roundworm C. elegans exhibits robust escape behavior in response to rapidly rising temperature. The behavior lasts for a few seconds, shows history dependence, involves both sensory and motor systems, and is too complicated to model mechanistically using currently available knowledge. Instead we model the process phenomenologically, and we use the Sir Isaac dynamical inference platform to infer the model in a fully automated fashion directly from experimental data. The inferred model requires incorporation of an unobserved dynamical variable, and is biologically interpretable. The model makes accurate predictions about the dynamics of the worm behavior, and it can be used to characterize the functional logic of the dynamical system underlying the escape response. This work illustrates the power of modern artificial intelligence to aid in discovery of accurate and interpretable models of complex natural systems. |
2407.12148 | Daniel Villela | Daniel A.M. Villela | On the Evolution of Virulence in Vector-borne Diseases | 14 pages; 5 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The emergence or adaptation of pathogens may lead to epidemics or even
pandemics, highlighting the need for a thorough understanding of pathogen
evolution. The tradeoff hypothesis suggests that virulence evolves to reach an
optimal transmission intensity relative to the mortality caused by the disease.
This study introduces a mathematical model that incorporates key factors such
as recovery times and mortality rates, focusing on the diminishing effects of
parasite growth on transmission, with a focus on vector-borne diseases. The
analysis reveals conditions under which heightened virulence occurs in hosts,
indicating that these factors can support vector-host transmission of a
pathogen, even if the host-only component is insufficient for sustainable
transmission. This insight helps explain the increased presence of pathogens
with high fatality rates, especially in vector-borne diseases. The findings
emphasize an elevated risk in major pandemics involving vector-host diseases.
Enhanced surveillance of mortality rates and techniques to monitor pathogen
evolution are vital to effectively control future epidemics. This study
provides essential insights for pandemic preparedness and highlights the need
for ongoing research into pathogen evolution.
| [
{
"created": "Tue, 16 Jul 2024 20:06:48 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Aug 2024 20:21:59 GMT",
"version": "v2"
}
] | 2024-08-16 | [
[
"Villela",
"Daniel A. M.",
""
]
] | The emergence or adaptation of pathogens may lead to epidemics or even pandemics, highlighting the need for a thorough understanding of pathogen evolution. The tradeoff hypothesis suggests that virulence evolves to reach an optimal transmission intensity relative to the mortality caused by the disease. This study introduces a mathematical model that incorporates key factors such as recovery times and mortality rates, focusing on the diminishing effects of parasite growth on transmission, with a focus on vector-borne diseases. The analysis reveals conditions under which heightened virulence occurs in hosts, indicating that these factors can support vector-host transmission of a pathogen, even if the host-only component is insufficient for sustainable transmission. This insight helps explain the increased presence of pathogens with high fatality rates, especially in vector-borne diseases. The findings emphasize an elevated risk in major pandemics involving vector-host diseases. Enhanced surveillance of mortality rates and techniques to monitor pathogen evolution are vital to effectively control future epidemics. This study provides essential insights for pandemic preparedness and highlights the need for ongoing research into pathogen evolution. |
q-bio/0412010 | Hernan Garcia | Lacramioara Bintu, Nicolas E. Buchler, Hernan G. Garcia, Ulrich
Gerland, Terence Hwa, Jane' Kondev, and Rob Phillips | Transcriptional Regulation by the Numbers 1: Models | 11 pages and 4 figures in PDF format | null | null | null | q-bio.MN q-bio.QM | null | The study of gene regulation and expression is often discussed in
quantitative terms. In particular, the expression of genes is regularly
characterized with respect to how much, how fast, when and where. Whether
discussing the level of gene expression in a bacterium or its precise location
within a developing embryo, the natural language for these experiments is that
of numbers. Such quantitative data demands quantitative models. We review a
class of models ("thermodynamic models") which exploit statistical mechanics to
compute the probability that RNA polymerase is at the appropriate promoter.
This provides a mathematically precise elaboration of the idea that activators
are agents of recruitment which increase the probability that RNA polymerase
will be found at the promoter of interest. We discuss a framework which
describes the interactions of repressors, activators, helper molecules and RNA
polymerase using the concept of effective concentrations, expressed in terms of
a function we call the "regulation factor". This analysis culminates in an
expression for the probability of RNA polymerase binding at the promoter of
interest as a function of the number of regulatory proteins in the cell. In a
companion paper [1], these ideas are applied to several case studies which
illustrate the use of the general formalism.
| [
{
"created": "Sun, 5 Dec 2004 23:20:53 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Bintu",
"Lacramioara",
""
],
[
"Buchler",
"Nicolas E.",
""
],
[
"Garcia",
"Hernan G.",
""
],
[
"Gerland",
"Ulrich",
""
],
[
"Hwa",
"Terence",
""
],
[
"Kondev",
"Jane'",
""
],
[
"Phillips",
"Rob",
""
]
] | The study of gene regulation and expression is often discussed in quantitative terms. In particular, the expression of genes is regularly characterized with respect to how much, how fast, when and where. Whether discussing the level of gene expression in a bacterium or its precise location within a developing embryo, the natural language for these experiments is that of numbers. Such quantitative data demands quantitative models. We review a class of models ("thermodynamic models") which exploit statistical mechanics to compute the probability that RNA polymerase is at the appropriate promoter. This provides a mathematically precise elaboration of the idea that activators are agents of recruitment which increase the probability that RNA polymerase will be found at the promoter of interest. We discuss a framework which describes the interactions of repressors, activators, helper molecules and RNA polymerase using the concept of effective concentrations, expressed in terms of a function we call the "regulation factor". This analysis culminates in an expression for the probability of RNA polymerase binding at the promoter of interest as a function of the number of regulatory proteins in the cell. In a companion paper [1], these ideas are applied to several case studies which illustrate the use of the general formalism. |
2006.02357 | Seungho Choe | Seungho Choe | Molecular dynamics studies of interactions between Arg9(nona-arginine)
and a DOPC/DOPG(4:1) membrane | 10 pages, 12 figures | AIP Advances 10, 105103 (2020) | null | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been known that the uptake mechanisms of cell-penetrating
peptides(CPPs) depend on the experimental conditions such as concentration of
peptides, lipid composition, temperature, etc. In this study we investigate the
temperature dependence of the penetration of Arg9s into a DOPC/DOPG(4:1)
membrane using molecular dynamics(MD) simulations at two different
temperatures, T = 310 K and T = 288 K. Although it is difficult to identify the
temperature dependence because of having only a single simulation at each
temperature and no evidence of translocation of Arg9s across the membrane at
both temperatures, our simulations suggest that the following are strongly
correlated with the penetration of Arg9s: the number of water molecules
coordinated by Arg9s and the electrostatic energy between Arg9s and the lipid
molecules. We also present how Arg9s change the bending rigidity of the membrane
and how a collective behavior between Arg9s enhances the penetration and the
membrane bending. Our analyses can be applicable to any cell-penetrating
peptides(CPPs) to investigate their interactions with various membranes using
MD simulations.
| [
{
"created": "Wed, 3 Jun 2020 16:04:56 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Oct 2020 07:24:53 GMT",
"version": "v2"
}
] | 2020-10-08 | [
[
"Choe",
"Seungho",
""
]
] | It has been known that the uptake mechanisms of cell-penetrating peptides(CPPs) depend on the experimental conditions such as concentration of peptides, lipid composition, temperature, etc. In this study we investigate the temperature dependence of the penetration of Arg9s into a DOPC/DOPG(4:1) membrane using molecular dynamics(MD) simulations at two different temperatures, T = 310 K and T = 288 K. Although it is difficult to identify the temperature dependence because of having only a single simulation at each temperature and no evidence of translocation of Arg9s across the membrane at both temperatures, our simulations suggest that the following are strongly correlated with the penetration of Arg9s: the number of water molecules coordinated by Arg9s and the electrostatic energy between Arg9s and the lipid molecules. We also present how Arg9s change the bending rigidity of the membrane and how a collective behavior between Arg9s enhances the penetration and the membrane bending. Our analyses can be applicable to any cell-penetrating peptides(CPPs) to investigate their interactions with various membranes using MD simulations. |
2005.02271 | Augusto Gonzalez | Augusto Gonzalez, Frank Quintela, Dario A. Leon, Maria Luisa Bringas
Vega, Pedro Valdes Sosa | Estimating the number of available states for normal and tumor tissues
in gene expression space | null | Biophysical Reports Volume 2, Issue 2, 8 June 2022, 100053 | 10.1016/j.bpr.2022.100053 | null | q-bio.TO q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The topology of gene expression space for a set of 12 cancer types is studied
by means of an entropy-like magnitude, which allows the characterization of the
regions occupied by tumor and normal samples. The comparison indicates that the
number of available states in gene expression space is much greater for tumors
than for normal tissues, suggesting the irreversibility of the progression to
the tumor phase. The entropy is nearly constant for tumors, whereas it exhibits a
higher variability in normal tissues, probably due to tissue differentiation.
In addition, we show an interesting correlation between the fraction of
available states and the overlapping between the tumor and normal sample
clouds, interpreted as a way of reducing the decay rate to the tumor phase in
more ordered or structured tissues.
| [
{
"created": "Tue, 5 May 2020 15:07:05 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Nov 2021 22:00:16 GMT",
"version": "v2"
},
{
"created": "Sat, 13 Nov 2021 22:40:36 GMT",
"version": "v3"
}
] | 2022-05-19 | [
[
"Gonzalez",
"Augusto",
""
],
[
"Quintela",
"Frank",
""
],
[
"Leon",
"Dario A.",
""
],
[
"Vega",
"Maria Luisa Bringas",
""
],
[
"Sosa",
"Pedro Valdes",
""
]
] | The topology of gene expression space for a set of 12 cancer types is studied by means of an entropy-like magnitude, which allows the characterization of the regions occupied by tumor and normal samples. The comparison indicates that the number of available states in gene expression space is much greater for tumors than for normal tissues, suggesting the irreversibility of the progression to the tumor phase. The entropy is nearly constant for tumors, whereas it exhibits a higher variability in normal tissues, probably due to tissue differentiation. In addition, we show an interesting correlation between the fraction of available states and the overlapping between the tumor and normal sample clouds, interpreted as a way of reducing the decay rate to the tumor phase in more ordered or structured tissues. |
1703.03444 | Fernanda Matias | Fernanda S. Matias, Pedro V. Carelli, Claudio R. Mirasso, Mauro
Copelli | Anticipated synchronization in neuronal circuits unveiled by a
phase-resetting curve analysis | null | Phys. Rev. E 95, 052410 (2017) | 10.1103/PhysRevE.95.052410 | null | q-bio.NC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anticipated synchronization (AS) is a counterintuitive behavior that has
been observed in several systems. When AS is established in a sender-receiver
configuration, the latter can predict the future dynamics of the former for
certain parameter values. In particular, in neuroscience AS was proposed to
explain the apparent discrepancy between information flow and time lag in the
cortical activity recorded in monkeys. Despite its success, a clear
understanding of the mechanisms yielding AS in neuronal circuits is still
missing. Here we use the well-known phase-resetting-curve (PRC) approach to
study the prototypical sender-receiver-interneuron neuronal motif. Our aim is
to better understand how the transitions from delayed to anticipated
synchronization and from anticipated synchronization to phase-drift regimes occur.
We construct a map based on the PRC method to predict the phase-locking regimes
and their stability. We find that a PRC function of two variables, accounting
simultaneously for the inputs from sender and interneuron into the receiver, is
essential to reproduce the numerical results obtained using a Hodgkin-Huxley
model for the neurons. On the contrary, the typical approximation that
considers a sum of two independent single-variable PRCs fails for intermediate
to high values of the inhibitory connectivity between the interneuron and the
receiver. In particular, it loses the delayed-synchronization to
anticipated-synchronization transition.
| [
{
"created": "Thu, 9 Mar 2017 19:58:42 GMT",
"version": "v1"
}
] | 2017-05-31 | [
[
"Matias",
"Fernanda S.",
""
],
[
"Carelli",
"Pedro V.",
""
],
[
"Mirasso",
"Claudio R.",
""
],
[
"Copelli",
"Mauro",
""
]
] | Anticipated synchronization (AS) is a counterintuitive behavior that has been observed in several systems. When AS is established in a sender-receiver configuration, the latter can predict the future dynamics of the former for certain parameter values. In particular, in neuroscience AS was proposed to explain the apparent discrepancy between information flow and time lag in the cortical activity recorded in monkeys. Despite its success, a clear understanding of the mechanisms yielding AS in neuronal circuits is still missing. Here we use the well-known phase-resetting-curve (PRC) approach to study the prototypical sender-receiver-interneuron neuronal motif. Our aim is to better understand how the transitions from delayed to anticipated synchronization and from anticipated synchronization to phase-drift regimes occur. We construct a map based on the PRC method to predict the phase-locking regimes and their stability. We find that a PRC function of two variables, accounting simultaneously for the inputs from sender and interneuron into the receiver, is essential to reproduce the numerical results obtained using a Hodgkin-Huxley model for the neurons. On the contrary, the typical approximation that considers a sum of two independent single-variable PRCs fails for intermediate to high values of the inhibitory connectivity between the interneuron and the receiver. In particular, it loses the delayed-synchronization to anticipated-synchronization transition.
0801.0654 | Michael Welter | Michael Welter, Katalin Bartha and Heiko Rieger | Hot spot formation in tumor vasculature during tumor growth in an
arterio-venous-network environment | 37 pages, 16 figures (full paper with higher resolution figures at
http://www.uni-saarland.de/fak7/rieger/Paper/JTB-welter2-subm.pdf) | null | null | null | q-bio.TO | null | Hot spots in tumors are regions of high vascular density in the center of the
tumor and their analysis is an important diagnostic tool in cancer treatment.
We present a model for vascular remodeling in tumors predicting that the
formation of hot spots correlates with local inhomogeneities of the original
arterio-venous vasculature of the healthy tissue. Probable locations for hot
spots in the late stages of the tumor are locations of increased blood pressure
gradients. The developing tumor vasculature is non-hierarchical but still
complex, displaying algebraically decaying density distributions.
| [
{
"created": "Fri, 4 Jan 2008 11:22:49 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Jan 2008 11:13:47 GMT",
"version": "v2"
}
] | 2008-01-07 | [
[
"Welter",
"Michael",
""
],
[
"Bartha",
"Katalin",
""
],
[
"Rieger",
"Heiko",
""
]
] | Hot spots in tumors are regions of high vascular density in the center of the tumor and their analysis is an important diagnostic tool in cancer treatment. We present a model for vascular remodeling in tumors predicting that the formation of hot spots correlates with local inhomogeneities of the original arterio-venous vasculature of the healthy tissue. Probable locations for hot spots in the late stages of the tumor are locations of increased blood pressure gradients. The developing tumor vasculature is non-hierarchical but still complex, displaying algebraically decaying density distributions.
2107.03334 | Andrea Allen | Andrea J. Allen, Mariah C. Boudreau, Nicholas J. Roberts, Antoine
Allard, Laurent H\'ebert-Dufresne | Predicting the diversity of early epidemic spread on networks | null | Phys. Rev. Research 4, 013123 (2022) | 10.1103/PhysRevResearch.4.013123 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The interplay of biological, social, structural and random factors makes
disease forecasting extraordinarily complex. The course of an epidemic exhibits
average growth dynamics determined by features of the pathogen and the
population, yet also features significant variability reflecting the stochastic
nature of disease spread. In this work, we reframe a stochastic branching
process analysis in terms of probability generating functions and compare it to
continuous time epidemic simulations on networks. In doing so, we predict the
diversity of emerging epidemic courses on both homogeneous and heterogeneous
networks. We show how the challenge of inferring the early course of an
epidemic falls on the randomness of disease spread more so than on the
heterogeneity of contact patterns. We provide an analysis which helps quantify,
in real time, the probability that an epidemic goes supercritical or
conversely, dies stochastically. These probabilities are often assumed to be
one and zero, respectively, if the basic reproduction number, or R0, is greater
than 1, ignoring the heterogeneity and randomness inherent to disease spread.
This framework can give more insight into early epidemic spread by weighting
standard deterministic models with likelihood to inform pandemic preparedness
with probabilistic forecasts.
| [
{
"created": "Wed, 7 Jul 2021 16:23:27 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Feb 2022 18:45:07 GMT",
"version": "v2"
}
] | 2022-02-24 | [
[
"Allen",
"Andrea J.",
""
],
[
"Boudreau",
"Mariah C.",
""
],
[
"Roberts",
"Nicholas J.",
""
],
[
"Allard",
"Antoine",
""
],
[
"Hébert-Dufresne",
"Laurent",
""
]
] | The interplay of biological, social, structural and random factors makes disease forecasting extraordinarily complex. The course of an epidemic exhibits average growth dynamics determined by features of the pathogen and the population, yet also features significant variability reflecting the stochastic nature of disease spread. In this work, we reframe a stochastic branching process analysis in terms of probability generating functions and compare it to continuous time epidemic simulations on networks. In doing so, we predict the diversity of emerging epidemic courses on both homogeneous and heterogeneous networks. We show how the challenge of inferring the early course of an epidemic falls on the randomness of disease spread more so than on the heterogeneity of contact patterns. We provide an analysis which helps quantify, in real time, the probability that an epidemic goes supercritical or conversely, dies stochastically. These probabilities are often assumed to be one and zero, respectively, if the basic reproduction number, or R0, is greater than 1, ignoring the heterogeneity and randomness inherent to disease spread. This framework can give more insight into early epidemic spread by weighting standard deterministic models with likelihood to inform pandemic preparedness with probabilistic forecasts. |
1704.05912 | Konstantin Blyuss | G.O. Agaba, Y.N. Kyrychko, K.B. Blyuss | Time-delayed SIS epidemic model with population awareness | 15 pages, 5 figures | Ecol. Compl. 31, 50-56 (2017) | 10.1016/j.ecocom.2017.03.002 | null | q-bio.PE nlin.CD q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper analyses the dynamics of infectious disease with a concurrent
spread of disease awareness. The model includes local awareness due to contacts
with aware individuals, as well as global awareness due to reported cases of
infection and awareness campaigns. We investigate the effects of time delay in
response of unaware individuals to available information on the epidemic
dynamics by establishing conditions for the Hopf bifurcation of the endemic
steady state of the model. Analytical results are supported by numerical
bifurcation analysis and simulations.
| [
{
"created": "Wed, 19 Apr 2017 19:45:52 GMT",
"version": "v1"
}
] | 2017-04-21 | [
[
"Agaba",
"G. O.",
""
],
[
"Kyrychko",
"Y. N.",
""
],
[
"Blyuss",
"K. B.",
""
]
] | This paper analyses the dynamics of infectious disease with a concurrent spread of disease awareness. The model includes local awareness due to contacts with aware individuals, as well as global awareness due to reported cases of infection and awareness campaigns. We investigate the effects of time delay in response of unaware individuals to available information on the epidemic dynamics by establishing conditions for the Hopf bifurcation of the endemic steady state of the model. Analytical results are supported by numerical bifurcation analysis and simulations. |
2101.11526 | Catherine Beauchemin | Daniel Cresta, Donald C. Warren, Christian Quirouette, Amanda P.
Smith, Lindey C. Lane, Amber M. Smith, and Catherine A. A. Beauchemin | Time to revisit the endpoint dilution assay and to replace TCID$_{50}$
and PFU as measures of a virus sample's infection concentration | 22 pages, 8 figures | PLOS Comput. Biol., 17(10):e1009480, October, 2021 | 10.1371/journal.pcbi.1009480 | RIKEN-iTHEMS-Report-21 | q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | The infectivity of a virus sample is measured by the infections it causes,
via a plaque or focus forming assay (PFU or FFU) or an endpoint dilution (ED)
assay (TCID$_{50}$, CCID$_{50}$, EID$_{50}$, etc., hereafter collectively
ID$_{50}$). The counting of plaques or foci at a given dilution intuitively and
directly provides the concentration of infectious doses in the undiluted
sample. However, it has many technical and experimental limitations. For
example, it relies on one's judgement in distinguishing between two merged
plaques and a larger one, or between small plaques and staining artifacts. In
this regard, ED assays are more robust because one need only determine whether
infection occurred. The output of the ED assay, the 50% infectious dose
(ID$_{50}$), is calculated using either the Spearman-Karber (SK, 1908,1931) or
Reed-Muench (RM, 1938) mathematical approximations. However, these are often
miscalculated and their ID$_{50}$ approximation is biased. We propose that the
PFU and FFU assays be abandoned, and that the measured output of the ED assay,
the ID$_{50}$, be replaced by a more useful measure we coined Specific
INfections (SIN). We introduce a free, open-source web-application, midSIN,
that computes the SIN concentration in a virus sample from a standard ED assay,
requiring no changes to current experimental protocols. We demonstrate that the
SIN/mL of a sample reliably corresponds to the number of infections the sample
will cause per unit volume, and directly relates to the multiplicity of
infection. midSIN estimates are shown to be more accurate and robust than those
using the RM and SK approximations. The impact of ED plate design choices
(dilution factor, replicates per dilution) on measurement accuracy is also
explored. The simplicity of SIN as a measure and the greater accuracy of midSIN
make them an easy, superior replacement for the PFU, FFU, and ID$_{50}$
measures.
| [
{
"created": "Wed, 27 Jan 2021 16:31:51 GMT",
"version": "v1"
}
] | 2024-04-02 | [
[
"Cresta",
"Daniel",
""
],
[
"Warren",
"Donald C.",
""
],
[
"Quirouette",
"Christian",
""
],
[
"Smith",
"Amanda P.",
""
],
[
"Lane",
"Lindey C.",
""
],
[
"Smith",
"Amber M.",
""
],
[
"Beauchemin",
"Catherine A. A.",
""
]
] | The infectivity of a virus sample is measured by the infections it causes, via a plaque or focus forming assay (PFU or FFU) or an endpoint dilution (ED) assay (TCID$_{50}$, CCID$_{50}$, EID$_{50}$, etc., hereafter collectively ID$_{50}$). The counting of plaques or foci at a given dilution intuitively and directly provides the concentration of infectious doses in the undiluted sample. However, it has many technical and experimental limitations. For example, it relies on one's judgement in distinguishing between two merged plaques and a larger one, or between small plaques and staining artifacts. In this regard, ED assays are more robust because one need only determine whether infection occurred. The output of the ED assay, the 50% infectious dose (ID$_{50}$), is calculated using either the Spearman-Karber (SK, 1908,1931) or Reed-Muench (RM, 1938) mathematical approximations. However, these are often miscalculated and their ID$_{50}$ approximation is biased. We propose that the PFU and FFU assays be abandoned, and that the measured output of the ED assay, the ID$_{50}$, be replaced by a more useful measure we coined Specific INfections (SIN). We introduce a free, open-source web-application, midSIN, that computes the SIN concentration in a virus sample from a standard ED assay, requiring no changes to current experimental protocols. We demonstrate that the SIN/mL of a sample reliably corresponds to the number of infections the sample will cause per unit volume, and directly relates to the multiplicity of infection. midSIN estimates are shown to be more accurate and robust than those using the RM and SK approximations. The impact of ED plate design choices (dilution factor, replicates per dilution) on measurement accuracy is also explored. The simplicity of SIN as a measure and the greater accuracy of midSIN make them an easy, superior replacement for the PFU, FFU, and ID$_{50}$ measures. |
2101.08815 | Vasudevan Lakshminarayanan | Sam Yu and Vasudevan Lakshminarayanan | Fractal Dimension and Retinal Pathology: A Meta-analysis | submitted for publication to MDPI Applied Sciences | null | null | null | q-bio.QM physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Due to the fractal nature of retinal blood vessels, the retinal fractal
dimension is a natural parameter for researchers to explore and has garnered
interest as a potential diagnostic tool. This review aims to summarize the
current scientific evidence regarding the relationship between fractal
dimension and retinal pathology and thus assess the clinical value of retinal
fractal dimension. Following the PRISMA guidelines, a literature search for
research articles was conducted in several internet databases (Embase, PubMed,
Web of Science, Scopus). This resulted in 28 studies being included in the
final review, which were analyzed via meta-analysis to determine whether the
fractal dimension changes significantly in retinal disease versus normal
individuals.
| [
{
"created": "Thu, 21 Jan 2021 19:11:16 GMT",
"version": "v1"
}
] | 2021-01-25 | [
[
"Yu",
"Sam",
""
],
[
"Lakshminarayanan",
"Vasudevan",
""
]
] | Due to the fractal nature of retinal blood vessels, the retinal fractal dimension is a natural parameter for researchers to explore and has garnered interest as a potential diagnostic tool. This review aims to summarize the current scientific evidence regarding the relationship between fractal dimension and retinal pathology and thus assess the clinical value of retinal fractal dimension. Following the PRISMA guidelines, a literature search for research articles was conducted in several internet databases (Embase, PubMed, Web of Science, Scopus). This resulted in 28 studies being included in the final review, which were analyzed via meta-analysis to determine whether the fractal dimension changes significantly in retinal disease versus normal individuals.
2010.07143 | Simon Wein | Simon Wein, Wilhelm Malloni, Ana Maria Tom\'e, Sebastian M. Frank,
Gina-Isabelle Henze, Stefan W\"ust, Mark W. Greenlee, Elmar W. Lang | A Graph Neural Network Framework for Causal Inference in Brain Networks | null | null | null | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A central question in neuroscience is how self-organizing dynamic
interactions in the brain emerge on their relatively static structural
backbone. Due to the complexity of spatial and temporal dependencies between
different brain areas, fully comprehending the interplay between structure and
function is still challenging and an area of intense research. In this paper we
present a graph neural network (GNN) framework to describe functional
interactions based on the structural anatomical layout. A GNN allows us to
process graph-structured spatio-temporal signals, providing a possibility to
combine structural information derived from diffusion tensor imaging (DTI) with
temporal neural activity profiles, as observed in functional magnetic
resonance imaging (fMRI). Moreover, dynamic interactions between different
brain regions learned by this data-driven approach can provide a multi-modal
measure of causal connectivity strength. We assess the proposed model's
accuracy by evaluating its capabilities to replicate empirically observed
neural activation profiles, and compare the performance to that of a vector
autoregression (VAR), as typically used in Granger causality. We show that
GNNs are able to capture long-term dependencies in data and also
computationally scale up to the analysis of large-scale networks. Finally we
confirm that features learned by a GNN can generalize across MRI scanner types
and acquisition protocols, by demonstrating that the performance on small
datasets can be improved by pre-training the GNN on data from an earlier and
different study. We conclude that the proposed multi-modal GNN framework can
provide a novel perspective on the structure-function relationship in the
brain. This approach is therefore promising for the characterization of the
information flow in brain networks.
| [
{
"created": "Wed, 14 Oct 2020 15:01:21 GMT",
"version": "v1"
}
] | 2020-10-15 | [
[
"Wein",
"Simon",
""
],
[
"Malloni",
"Wilhelm",
""
],
[
"Tomé",
"Ana Maria",
""
],
[
"Frank",
"Sebastian M.",
""
],
[
"Henze",
"Gina-Isabelle",
""
],
[
"Wüst",
"Stefan",
""
],
[
"Greenlee",
"Mark W.",
""
],
[
"Lang",
"Elmar W.",
""
]
] | A central question in neuroscience is how self-organizing dynamic interactions in the brain emerge on their relatively static structural backbone. Due to the complexity of spatial and temporal dependencies between different brain areas, fully comprehending the interplay between structure and function is still challenging and an area of intense research. In this paper we present a graph neural network (GNN) framework to describe functional interactions based on the structural anatomical layout. A GNN allows us to process graph-structured spatio-temporal signals, providing a possibility to combine structural information derived from diffusion tensor imaging (DTI) with temporal neural activity profiles, as observed in functional magnetic resonance imaging (fMRI). Moreover, dynamic interactions between different brain regions learned by this data-driven approach can provide a multi-modal measure of causal connectivity strength. We assess the proposed model's accuracy by evaluating its capabilities to replicate empirically observed neural activation profiles, and compare the performance to that of a vector autoregression (VAR), as typically used in Granger causality. We show that GNNs are able to capture long-term dependencies in data and also computationally scale up to the analysis of large-scale networks. Finally we confirm that features learned by a GNN can generalize across MRI scanner types and acquisition protocols, by demonstrating that the performance on small datasets can be improved by pre-training the GNN on data from an earlier and different study. We conclude that the proposed multi-modal GNN framework can provide a novel perspective on the structure-function relationship in the brain. This approach is therefore promising for the characterization of the information flow in brain networks.
q-bio/0606040 | Francoise Tchang | Mohamed Doulazmi (NPA), Francesca Capone (NPA), Florence Frederic
(NPA), Jo\"Elle Bakouche (NPA), Yolande Lemaigre-Dubreuil (NPA), Jean Mariani
(NPA) | Cerebellar Purkinje Cell Loss in Heterozygous Rora+/-Mice: A
Longitudinal Study | null | null | null | null | q-bio.NC | null | The staggerer (sg/sg) mutation is a spontaneous deletion in the Rora gene
that prevents the translation of the ligand-binding domain (LBD), leading to
the loss of ROR\alpha activity. The homozygous Rorasg/sg mutant mouse, whose
most obvious phenotype is ataxia associated with cerebellar degeneration, also
displays a variety of other phenotypes. The heterozygous Rora+/sg is able to
develop a cerebellum which is qualitatively normal but with advancing age
suffers a significant loss of cerebellar neuronal cells. A truncated protein
synthesized by the mutated allele may play a role, both in Rorasg/sg and
Rora+/sg. To determine the effects during life span of true haplo-insufficiency
of the ROR\alpha protein, derived from the invalidation of the gene, we
compared the evolution of Purkinje cell numbers in heterozygous Rora knock-out
males (Rora+/-) and in their wildtype counterparts from 1 to 24 months of age.
We also compared the evolution of Purkinje cell numbers in Rora+/- and Rora+/sg
males from 1 to 9 months. The main finding is that in Rora+/- mice, when only a
half dose of protein is synthesized, the deficit was already established at 1
month and did not change during life span....
| [
{
"created": "Thu, 29 Jun 2006 09:36:48 GMT",
"version": "v1"
}
] | 2016-08-16 | [
[
"Doulazmi",
"Mohamed",
"",
"NPA"
],
[
"Capone",
"Francesca",
"",
"NPA"
],
[
"Frederic",
"Florence",
"",
"NPA"
],
[
"Bakouche",
"JoËlle",
"",
"NPA"
],
[
"Lemaigre-Dubreuil",
"Yolande",
"",
"NPA"
],
[
"Mariani",
"Jean",
"",
"NPA"
]
] | The staggerer (sg/sg) mutation is a spontaneous deletion in the Rora gene that prevents the translation of the ligand-binding domain (LBD), leading to the loss of ROR\alpha activity. The homozygous Rorasg/sg mutant mouse, whose most obvious phenotype is ataxia associated with cerebellar degeneration, also displays a variety of other phenotypes. The heterozygous Rora+/sg is able to develop a cerebellum which is qualitatively normal but with advancing age suffers a significant loss of cerebellar neuronal cells. A truncated protein synthesized by the mutated allele may play a role, both in Rorasg/sg and Rora+/sg. To determine the effects during life span of true haplo-insufficiency of the ROR\alpha protein, derived from the invalidation of the gene, we compared the evolution of Purkinje cell numbers in heterozygous Rora knock-out males (Rora+/-) and in their wildtype counterparts from 1 to 24 months of age. We also compared the evolution of Purkinje cell numbers in Rora+/- and Rora+/sg males from 1 to 9 months. The main finding is that in Rora+/- mice, when only a half dose of protein is synthesized, the deficit was already established at 1 month and did not change during life span.... |
q-bio/0605031 | Fumiko Takagi | Fumiko Takagi, Macoto Kikuchi | Structural Change of Myosin Motor Domain and Nucleotide Dissociation | null | null | null | null | q-bio.BM | null | We investigated the structural relaxation of myosin motor domain from the
pre-power stroke state to the near-rigor state using molecular dynamics
simulation of a coarse-grained protein model. To describe the structural
change, we propose a "dual Go-model," a variant of the Go-like model that has
two reference structures. The nucleotide dissociation process is also studied
by introducing a coarse-grained nucleotide in the simulation. We found that the
myosin structural relaxation toward the near-rigor conformation cannot be
completed before the nucleotide dissociation. Moreover, the relaxation and the
dissociation occurred cooperatively when the nucleotide was tightly bound to
the myosin head. The result suggested that the primary role of the nucleotide
is to suppress the structural relaxation.
| [
{
"created": "Fri, 19 May 2006 06:11:31 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Aug 2006 08:11:04 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Takagi",
"Fumiko",
""
],
[
"Kikuchi",
"Macoto",
""
]
] | We investigated the structural relaxation of myosin motor domain from the pre-power stroke state to the near-rigor state using molecular dynamics simulation of a coarse-grained protein model. To describe the structural change, we propose a "dual Go-model," a variant of the Go-like model that has two reference structures. The nucleotide dissociation process is also studied by introducing a coarse-grained nucleotide in the simulation. We found that the myosin structural relaxation toward the near-rigor conformation cannot be completed before the nucleotide dissociation. Moreover, the relaxation and the dissociation occurred cooperatively when the nucleotide was tightly bound to the myosin head. The result suggested that the primary role of the nucleotide is to suppress the structural relaxation. |
1002.1100 | Vittorio Loreto | F. Tria, E. Caglioti, V. Loreto and A. Pagnani | A Stochastic Local Search algorithm for distance-based phylogeny
reconstruction | 13 pages, 8 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many interesting cases the reconstruction of a correct phylogeny is
blurred by high mutation rates and/or horizontal transfer events. As a
consequence, a divergence arises between the true evolutionary distances and the
differences between pairs of taxa as inferred from available data, making the
phylogenetic reconstruction a challenging problem. Mathematically, this
divergence translates into a loss of additivity of the actual distances between
taxa. In distance-based reconstruction methods, two properties of additive
distances were extensively exploited as antagonist criteria to drive phylogeny
reconstruction: on the one hand a local property of quartets, i.e., sets of
four taxa in a tree, the four-points condition; on the other hand a recently
proposed formula that allows one to write the tree length as a function of the
distances between taxa, Pauplin's formula. Here we introduce a new
reconstruction scheme that exploits, in a unified framework, both the
four-points condition and Pauplin's formula. We propose, in particular, a
new general class of distance-based Stochastic Local Search algorithms, which
reduces in a limiting case to the minimization of Pauplin's length. When
tested on artificially generated phylogenies, our Stochastic Big-Quartet
Swapping algorithmic scheme significantly outperforms state-of-the-art
distance-based algorithms in cases of deviation from additivity due to a high
rate of back mutations. A significant improvement is also observed with respect
to the state-of-the-art algorithms in the case of a high rate of horizontal transfer.
| [
{
"created": "Thu, 4 Feb 2010 23:17:03 GMT",
"version": "v1"
}
] | 2010-02-08 | [
[
"Tria",
"F.",
""
],
[
"Caglioti",
"E.",
""
],
[
"Loreto",
"V.",
""
],
[
"Pagnani",
"A.",
""
]
] | In many interesting cases the reconstruction of a correct phylogeny is blurred by high mutation rates and/or horizontal transfer events. As a consequence, a divergence arises between the true evolutionary distances and the differences between pairs of taxa as inferred from available data, making the phylogenetic reconstruction a challenging problem. Mathematically, this divergence translates into a loss of additivity of the actual distances between taxa. In distance-based reconstruction methods, two properties of additive distances were extensively exploited as antagonist criteria to drive phylogeny reconstruction: on the one hand a local property of quartets, i.e., sets of four taxa in a tree, the four-points condition; on the other hand a recently proposed formula that allows one to write the tree length as a function of the distances between taxa, Pauplin's formula. Here we introduce a new reconstruction scheme that exploits, in a unified framework, both the four-points condition and Pauplin's formula. We propose, in particular, a new general class of distance-based Stochastic Local Search algorithms, which reduces in a limiting case to the minimization of Pauplin's length. When tested on artificially generated phylogenies, our Stochastic Big-Quartet Swapping algorithmic scheme significantly outperforms state-of-the-art distance-based algorithms in cases of deviation from additivity due to a high rate of back mutations. A significant improvement is also observed with respect to the state-of-the-art algorithms in the case of a high rate of horizontal transfer.
1706.02912 | Julia Gallinaro | J\'ulia V Gallinaro and Stefan Rotter | Associative properties of structural plasticity based on firing rate
homeostasis in recurrent neuronal networks | 19 pages, 4 figures | null | 10.1038/s41598-018-22077-3 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Correlation-based Hebbian plasticity is thought to shape neuronal
connectivity during development and learning, whereas homeostatic plasticity
would stabilize network activity. Here we investigate another, new aspect of
this dichotomy: Can Hebbian associative properties also emerge as a network
effect from a plasticity rule based on homeostatic principles on the neuronal
level? To address this question, we simulated a recurrent network of leaky
integrate-and-fire neurons, in which excitatory connections are subject to a
structural plasticity rule based on firing rate homeostasis. We show that a
subgroup of neurons develops stronger within-group connectivity as a consequence
of receiving stronger external stimulation. In an experimentally
well-documented scenario we show that feature specific connectivity, similar to
what has been observed in rodent visual cortex, can emerge from such a
plasticity rule. The experience-dependent structural changes triggered by
stimulation are long-lasting and decay only slowly when the neurons are exposed
again to unspecific external inputs.
| [
{
"created": "Fri, 9 Jun 2017 12:03:23 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Sep 2017 10:59:32 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Jan 2018 14:29:28 GMT",
"version": "v3"
},
{
"created": "Sat, 24 Feb 2018 09:42:39 GMT",
"version": "v4"
}
] | 2018-03-02 | [
[
"Gallinaro",
"Júlia V",
""
],
[
"Rotter",
"Stefan",
""
]
] | Correlation-based Hebbian plasticity is thought to shape neuronal connectivity during development and learning, whereas homeostatic plasticity would stabilize network activity. Here we investigate another, new aspect of this dichotomy: Can Hebbian associative properties also emerge as a network effect from a plasticity rule based on homeostatic principles on the neuronal level? To address this question, we simulated a recurrent network of leaky integrate-and-fire neurons, in which excitatory connections are subject to a structural plasticity rule based on firing rate homeostasis. We show that a subgroup of neurons develops stronger within-group connectivity as a consequence of receiving stronger external stimulation. In an experimentally well-documented scenario we show that feature specific connectivity, similar to what has been observed in rodent visual cortex, can emerge from such a plasticity rule. The experience-dependent structural changes triggered by stimulation are long-lasting and decay only slowly when the neurons are exposed again to unspecific external inputs.
1409.4251 | Stephan Porz | Stephan Porz, Matth\"aus Kiel, Klaus Lehnertz | Can spurious indications for phase synchronization due to superimposed
signals be avoided? | null | Porz S, Kiel M, Lehnertz K, Chaos 24, 033112 (2014) | 10.1063/1.4890568 | null | q-bio.NC nlin.CD physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the relative merit of phase-based methods---mean phase
coherence, unweighted and weighted phase lag index---for estimating the
strength of interactions between dynamical systems from empirical time series
which are affected by common sources and noise. By numerically analyzing the
interaction dynamics of coupled model systems, we compare these methods to each
other with respect to their ability to distinguish between different levels of
coupling for various simulated experimental situations. We complement our
numerical studies by investigating consistency and temporal variations of the
strength of interactions within and between brain regions using intracranial
electroencephalographic recordings from an epilepsy patient. Our findings
indicate that the unweighted and weighted phase lag index are less prone to the
influence of common sources but that this advantage may lead to constrictions
limiting the applicability of these methods.
| [
{
"created": "Mon, 15 Sep 2014 13:36:00 GMT",
"version": "v1"
}
] | 2014-09-16 | [
[
"Porz",
"Stephan",
""
],
[
"Kiel",
"Matthäus",
""
],
[
"Lehnertz",
"Klaus",
""
]
] | We investigate the relative merit of phase-based methods---mean phase coherence, unweighted and weighted phase lag index---for estimating the strength of interactions between dynamical systems from empirical time series which are affected by common sources and noise. By numerically analyzing the interaction dynamics of coupled model systems, we compare these methods to each other with respect to their ability to distinguish between different levels of coupling for various simulated experimental situations. We complement our numerical studies by investigating consistency and temporal variations of the strength of interactions within and between brain regions using intracranial electroencephalographic recordings from an epilepsy patient. Our findings indicate that the unweighted and weighted phase lag index are less prone to the influence of common sources but that this advantage may lead to constrictions limiting the applicability of these methods. |
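The three phase-based measures compared in the abstract above have standard definitions; the following is a minimal numpy sketch of them on synthetic phase differences (my own illustration with invented parameters, not the authors' code), including a zero-mean-lag case that mimics a superimposed common source:

```python
import numpy as np

def mean_phase_coherence(dphi):
    # R = |<exp(i*dphi)>|: near 1 for strong phase locking, near 0 otherwise
    return np.abs(np.mean(np.exp(1j * dphi)))

def phase_lag_index(dphi):
    # PLI = |<sign(sin(dphi))>|: discards zero-lag (common-source) coupling
    return np.abs(np.mean(np.sign(np.sin(dphi))))

def weighted_phase_lag_index(cross):
    # wPLI: phase-lag contributions weighted by |Im| of the cross-spectrum
    im = np.imag(cross)
    return np.abs(np.mean(im)) / np.mean(np.abs(im))

rng = np.random.default_rng(0)
n = 10_000
lag = 0.5 + 0.3 * rng.standard_normal(n)   # consistent nonzero phase lag
zero_lag = 0.3 * rng.standard_normal(n)    # common source: zero mean lag

print(mean_phase_coherence(lag))                   # high
print(phase_lag_index(lag))                        # high: lag keeps its sign
print(weighted_phase_lag_index(np.exp(1j * lag)))  # high
print(phase_lag_index(zero_lag))                   # near 0
print(mean_phase_coherence(zero_lag))              # still high: spurious
```

The last two lines show the effect the paper examines: a superimposed common source yields high mean phase coherence but a phase lag index near zero, which is why the (w)PLI is less prone to spurious indications of synchronization.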
1702.03409 | Kate Inasaridze | Vera Bzhalava, Ketevan Inasaridze | Disruptive Behavior Disorder (DBD) Rating Scale for Georgian Population | 9 pages, 2 tables, 2 figures | null | null | null | q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the presented study, the Parent/Teacher Disruptive Behavior Disorder (DBD)
rating scale, based on the Diagnostic and Statistical Manual of Mental
Disorders (DSM-IV-TR [APA, 2000]) and developed by Pelham and his colleagues
(Pelham et al., 1992), was translated and adapted for the assessment of
childhood behavioral abnormalities, especially ADHD, ODD and CD, in Georgian
children and adolescents. The DBD rating scale was translated into the
Georgian language using the back-translation technique by English-language
philologists and was checked and corrected by qualified Georgian psychologists
and psychiatrists. Children and adolescents in the age range of 6 to 16 years
(N 290; Mean Age 10.50, SD=2.88), including 153 males (Mean Age 10.42, SD=
2.62) and 141 females (Mean Age 10.60, SD=3.14), were recruited from different
public schools of Tbilisi and the Neurology Department of the Pediatric Clinic
of the Tbilisi State Medical University. Participants were objectively
assessed through interviews with parents/teachers and by qualified
psychologists in three different settings: school, home and clinic. DBD total
scores revealed statistically significant differences between healthy controls
(M=27.71, SD=17.26) and children and adolescents with ADHD (M=61.51, SD=
22.79). Statistically significant differences were also found for the
inattentive subtype between the control (M=8.68, SD=5.68) and ADHD (M=18.15,
SD=6.57) groups. In general, children and adolescents with ADHD scored higher
on the DBD scale than typically developing persons. The study also determined
the gender-wise prevalence of ADHD, ODD and CD in children and adolescents,
revealing a higher prevalence in males than in females in all investigated
categories.
| [
{
"created": "Sat, 11 Feb 2017 10:52:36 GMT",
"version": "v1"
}
] | 2017-02-20 | [
[
"Bzhalava",
"Vera",
""
],
[
"Inasaridze",
"Ketevan",
""
]
] | In the presented study, the Parent/Teacher Disruptive Behavior Disorder (DBD) rating scale, based on the Diagnostic and Statistical Manual of Mental Disorders (DSM-IV-TR [APA, 2000]) and developed by Pelham and his colleagues (Pelham et al., 1992), was translated and adapted for the assessment of childhood behavioral abnormalities, especially ADHD, ODD and CD, in Georgian children and adolescents. The DBD rating scale was translated into the Georgian language using the back-translation technique by English-language philologists and was checked and corrected by qualified Georgian psychologists and psychiatrists. Children and adolescents in the age range of 6 to 16 years (N 290; Mean Age 10.50, SD=2.88), including 153 males (Mean Age 10.42, SD= 2.62) and 141 females (Mean Age 10.60, SD=3.14), were recruited from different public schools of Tbilisi and the Neurology Department of the Pediatric Clinic of the Tbilisi State Medical University. Participants were objectively assessed through interviews with parents/teachers and by qualified psychologists in three different settings: school, home and clinic. DBD total scores revealed statistically significant differences between healthy controls (M=27.71, SD=17.26) and children and adolescents with ADHD (M=61.51, SD= 22.79). Statistically significant differences were also found for the inattentive subtype between the control (M=8.68, SD=5.68) and ADHD (M=18.15, SD=6.57) groups. In general, children and adolescents with ADHD scored higher on the DBD scale than typically developing persons. The study also determined the gender-wise prevalence of ADHD, ODD and CD in children and adolescents, revealing a higher prevalence in males than in females in all investigated categories.
1610.04160 | Zarrin Basharat | Zarrin Basharat, Azra Yasmin | Pan-genome Analysis of the Genus Serratia | 17 pages, 5 figures, 3 Tables | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pan-genome analysis is a standard procedure to decipher genome heterogeneity
and diversification of bacterial species. Species evolution is traced by
defining and comparing the core (conserved), accessory (dispensable) and unique
(strain-specific) gene pool with other strains of interest. Here, we present
pan-genome analysis of the genus Serratia, comprising a dataset of 100
genomes. The isolates have clinical to environmental origin and consist of ten
different species from the genus, along with two subspecies of the
representative strain Serratia marcescens. Out of 19430 non-redundant coding
DNA sequences (CDS) from the dataset, 972 (5%) belonged to the core genome.
The majority of these genes were linked to metabolic function, followed by cellular
processes/signalling and information storage/processing, while the rest were
poorly characterized. 10,135 CDSs (52.16%) were associated with the
dispensable genome, while 8,321 CDSs (42.82%) were singletons or
strain-specific. The
pan-genome orthologs indicated a positive correlation with the number of
genomes, whereas a negative correlation was obtained for the core genome. Genomes were aligned
to obtain information about synteny, insertion/inversion, deletion and
duplications. This study provides insights into the variation of Serratia
species and paves the way for pan-genome analysis of other bacterial species
at the genus level.
| [
{
"created": "Thu, 13 Oct 2016 16:32:47 GMT",
"version": "v1"
}
] | 2016-10-14 | [
[
"Basharat",
"Zarrin",
""
],
[
"Yasmin",
"Azra",
""
]
] | Pan-genome analysis is a standard procedure to decipher genome heterogeneity and diversification of bacterial species. Species evolution is traced by defining and comparing the core (conserved), accessory (dispensable) and unique (strain-specific) gene pool with other strains of interest. Here, we present pan-genome analysis of the genus Serratia, comprising a dataset of 100 genomes. The isolates have clinical to environmental origin and consist of ten different species from the genus, along with two subspecies of the representative strain Serratia marcescens. Out of 19430 non-redundant coding DNA sequences (CDS) from the dataset, 972 (5%) belonged to the core genome. The majority of these genes were linked to metabolic function, followed by cellular processes/signalling and information storage/processing, while the rest were poorly characterized. 10,135 CDSs (52.16%) were associated with the dispensable genome, while 8,321 CDSs (42.82%) were singletons or strain-specific. The pan-genome orthologs indicated a positive correlation with the number of genomes, whereas a negative correlation was obtained for the core genome. Genomes were aligned to obtain information about synteny, insertion/inversion, deletion and duplications. This study provides insights into the variation of Serratia species and paves the way for pan-genome analysis of other bacterial species at the genus level.
1611.02735 | Giacomo Falcucci | Nicole Jannelli and Rosa Anna Nastro and Viviana Cigolotti and
Mariagiovanna Minutillo and Giacomo Falcucci | Low pH, high salinity: too much for Microbial Fuel Cells? | 13 pages, 10 Figures | null | 10.1016/j.apenergy.2016.07.079 | null | q-bio.OT physics.bio-ph physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Twelve single chambered, air-cathode Tubular Microbial Fuel Cells (TMFCs)
have been filled up with fruit and vegetable residues. The anodes were realized
by means of a carbon fiber brush, while the cathodes were realized through a
graphite-based porous ceramic disk with Nafion membranes (117 Dupont). The
performances in terms of polarization curves and power production were assessed
according to different operating conditions: percentage of solid substrate
water dilution, adoption of freshwater and a 35mg/L NaCl water solution and,
finally, the effect of an initial potentiostatic growth.
All TMFCs operated at low pH (pH$=3.0 \pm 0.5$), as no pH amendment was
carried out. Despite the harsh environmental conditions, our TMFCs showed a
Power Density (PD) ranging from 20 to 55~mW/m$^2 \cdot$kg$_{\text{waste}}$ and
a maximum CD of 20~mA/m$^2 \cdot$kg$_{\text{waste}}$, referred to the cathodic
surface. COD removal after a $28-$day period was about $45 \%$.
The remarkably low pH values as well as the fouling of Nafion membrane very
likely limited TMFC performances. However, a scale-up estimation of our
reactors provides interesting values in terms of power production, compared to
actual anaerobic digestion plants. These results encourage further studies to
characterize the graphite-based porous ceramic cathodes and to optimize the
global TMFC performances, as they may provide a valid and sustainable
alternative to anaerobic digestion technologies.
| [
{
"created": "Fri, 4 Nov 2016 15:00:00 GMT",
"version": "v1"
}
] | 2016-11-10 | [
[
"Jannelli",
"Nicole",
""
],
[
"Nastro",
"Rosa Anna",
""
],
[
"Cigolotti",
"Viviana",
""
],
[
"Minutillo",
"Mariagiovanna",
""
],
[
"Falcucci",
"Giacomo",
""
]
] | Twelve single chambered, air-cathode Tubular Microbial Fuel Cells (TMFCs) have been filled up with fruit and vegetable residues. The anodes were realized by means of a carbon fiber brush, while the cathodes were realized through a graphite-based porous ceramic disk with Nafion membranes (117 Dupont). The performances in terms of polarization curves and power production were assessed according to different operating conditions: percentage of solid substrate water dilution, adoption of freshwater and a 35mg/L NaCl water solution and, finally, the effect of an initial potentiostatic growth. All TMFCs operated at low pH (pH$=3.0 \pm 0.5$), as no pH amendment was carried out. Despite the harsh environmental conditions, our TMFCs showed a Power Density (PD) ranging from 20 to 55~mW/m$^2 \cdot$kg$_{\text{waste}}$ and a maximum CD of 20~mA/m$^2 \cdot$kg$_{\text{waste}}$, referred to the cathodic surface. COD removal after a $28-$day period was about $45 \%$. The remarkably low pH values as well as the fouling of Nafion membrane very likely limited TMFC performances. However, a scale-up estimation of our reactors provides interesting values in terms of power production, compared to actual anaerobic digestion plants. These results encourage further studies to characterize the graphite-based porous ceramic cathodes and to optimize the global TMFC performances, as they may provide a valid and sustainable alternative to anaerobic digestion technologies. |
0807.1759 | Vladimir Ivancevic | Vladimir G. Ivancevic | New Mechanics of Generic Musculo-Skeletal Injury | 13 pages, 1 figure, latex - major corrections | null | null | null | q-bio.TO q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Prediction and prevention of musculo-skeletal injuries is an important aspect
of preventive health science. Using as an example a human knee joint, this
paper proposes a new coupled-loading-rate hypothesis, which states that a
generic cause of any musculo-skeletal injury is a Euclidean jolt, or
SE(3)-jolt, an impulsive loading that hits a joint in several coupled
degrees-of-freedom simultaneously. Informally, it is a rate-of-change of joint
acceleration in all 6-degrees-of-freedom simultaneously, times the
corresponding portion of the body mass. In the case of a human knee, this
happens when most of the body mass is on one leg with a semi-flexed knee -- and
then, caused by some external shock, the knee suddenly `jerks'; this can happen
in running, skiing, sports games (e.g., soccer, rugby) and various
crashes/impacts. To show this formally, based on the previously defined
covariant force law and its application to traumatic brain injury (Ivancevic,
2008), we formulate the coupled Newton--Euler dynamics of human joint motions
and derive from it the corresponding coupled SE(3)-jolt dynamics of the joint
in case. The SE(3)-jolt is the main cause of two forms of discontinuous joint
injury: (i) mild rotational disclinations and (ii) severe translational
dislocations. Both the joint disclinations and dislocations, as caused by the
SE(3)-jolt, are described using the Cosserat multipolar viscoelastic continuum
joint model.
Keywords: musculo-skeletal injury, coupled-loading--rate hypothesis, coupled
Newton-Euler dynamics, Euclidean jolt dynamics, joint dislocations and
disclinations
| [
{
"created": "Fri, 11 Jul 2008 01:24:45 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Aug 2008 04:16:31 GMT",
"version": "v2"
},
{
"created": "Tue, 18 Nov 2008 02:32:22 GMT",
"version": "v3"
},
{
"created": "Thu, 18 Dec 2008 02:01:20 GMT",
"version": "v4"
},
{
"created": "Thu, 9 Apr 2009 00:48:15 GMT",
"version": "v5"
}
] | 2009-04-09 | [
[
"Ivancevic",
"Vladimir G.",
""
]
] | Prediction and prevention of musculo-skeletal injuries is an important aspect of preventive health science. Using as an example a human knee joint, this paper proposes a new coupled-loading-rate hypothesis, which states that a generic cause of any musculo-skeletal injury is a Euclidean jolt, or SE(3)-jolt, an impulsive loading that hits a joint in several coupled degrees-of-freedom simultaneously. Informally, it is a rate-of-change of joint acceleration in all 6-degrees-of-freedom simultaneously, times the corresponding portion of the body mass. In the case of a human knee, this happens when most of the body mass is on one leg with a semi-flexed knee -- and then, caused by some external shock, the knee suddenly `jerks'; this can happen in running, skiing, sports games (e.g., soccer, rugby) and various crashes/impacts. To show this formally, based on the previously defined covariant force law and its application to traumatic brain injury (Ivancevic, 2008), we formulate the coupled Newton--Euler dynamics of human joint motions and derive from it the corresponding coupled SE(3)-jolt dynamics of the joint in case. The SE(3)-jolt is the main cause of two forms of discontinuous joint injury: (i) mild rotational disclinations and (ii) severe translational dislocations. Both the joint disclinations and dislocations, as caused by the SE(3)-jolt, are described using the Cosserat multipolar viscoelastic continuum joint model. Keywords: musculo-skeletal injury, coupled-loading--rate hypothesis, coupled Newton-Euler dynamics, Euclidean jolt dynamics, joint dislocations and disclinations |
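The SE(3)-jolt is, informally, the time derivative of the full 6-degree-of-freedom joint acceleration, times the loaded mass portion. Ignoring the covariant correction terms of the full Newton--Euler treatment, a flat-space finite-difference sketch (all numbers invented for illustration) looks like:

```python
import numpy as np

def flat_jolt(accel, dt):
    # accel: (T, 6) array of [linear(3), angular(3)] joint accelerations;
    # returns (T-1, 6) finite-difference jolt (SE(3) curvature terms ignored)
    return np.diff(accel, axis=0) / dt

dt = 1e-3
t = np.arange(0, 0.1, dt)
accel = np.zeros((t.size, 6))
accel[:, 0] = np.where(t > 0.05, 50.0, 0.0)  # sudden linear loading (m/s^2)
accel[:, 3] = np.where(t > 0.05, 30.0, 0.0)  # coupled angular loading (rad/s^2)

jolt = flat_jolt(accel, dt)
peak = np.abs(jolt).max(axis=0)
print(peak[0], peak[3])  # the impulsive step dominates both components
```

The point of the toy example is only that an impulsive, coupled change in acceleration produces a jolt orders of magnitude larger than smooth loading; the paper's actual formulation works on the Lie group SE(3) rather than in flat coordinates.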
0809.1718 | Philippe Rondard | Haijun Tu, Philippe Rondard (IGF), Chanjuan Xu, Federica Bertaso
(IGF), Fangli Cao, Xueying Zhang, Jean-Philippe Pin (IGF), Jianfeng Liu | Dominant role of GABAB2 and Gbetagamma for GABAB
receptor-mediated-ERK1/2/CREB pathway in cerebellar neurons | null | Cellular Signalling 19, 9 (2007) 1996-2002 | 10.1016/j.cellsig.2007.05.004 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | gamma-aminobutyric acid type B (GABA(B)) receptor is an allosteric complex
made of two subunits, GABA(B1) and GABA(B2). GABA(B2) plays a major role in the
coupling to G protein whereas GABA(B1) binds GABA. It has been shown that
GABA(B) receptor activates ERK(1/2) in neurons of the central nervous system,
but the molecular mechanisms underlying this event are poorly characterized.
Here, we demonstrate that activation of GABA(B) receptor by either GABA or the
selective agonist baclofen induces ERK(1/2) phosphorylation in cultured
cerebellar granule neurons. We also show that CGP7930, a positive allosteric
regulator specific of GABA(B2), alone can induce the phosphorylation of
ERK(1/2). PTX, a G(i/o) inhibitor, abolishes both baclofen and
CGP7930-mediated-ERK(1/2) phosphorylation. Moreover, both baclofen and CGP7930
induce ERK-dependent CREB phosphorylation. Furthermore, by using LY294002, a
PI-3 kinase inhibitor, and a C-term of GRK-2 that has been reported to
sequester Gbetagamma subunits, we demonstrate the role of Gbetagamma in GABA(B)
receptor-mediated-ERK(1/2) phosphorylation. In conclusion, the activation of
GABA(B) receptor leads to ERK(1/2) phosphorylation via the coupling of GABA(B2)
to G(i/o) and by releasing Gbetagamma subunits which in turn induce the
activation of CREB. These findings suggest a role of GABA(B) receptor in
long-term change in the central nervous system.
| [
{
"created": "Wed, 10 Sep 2008 07:05:12 GMT",
"version": "v1"
}
] | 2008-09-11 | [
[
"Tu",
"Haijun",
"",
"IGF"
],
[
"Rondard",
"Philippe",
"",
"IGF"
],
[
"Xu",
"Chanjuan",
"",
"IGF"
],
[
"Bertaso",
"Federica",
"",
"IGF"
],
[
"Cao",
"Fangli",
"",
"IGF"
],
[
"Zhang",
"Xueying",
"",
"IGF"
],
[
"Pin",
"Jean-Philippe",
"",
"IGF"
],
[
"Liu",
"Jianfeng",
""
]
] | gamma-aminobutyric acid type B (GABA(B)) receptor is an allosteric complex made of two subunits, GABA(B1) and GABA(B2). GABA(B2) plays a major role in the coupling to G protein whereas GABA(B1) binds GABA. It has been shown that GABA(B) receptor activates ERK(1/2) in neurons of the central nervous system, but the molecular mechanisms underlying this event are poorly characterized. Here, we demonstrate that activation of GABA(B) receptor by either GABA or the selective agonist baclofen induces ERK(1/2) phosphorylation in cultured cerebellar granule neurons. We also show that CGP7930, a positive allosteric regulator specific of GABA(B2), alone can induce the phosphorylation of ERK(1/2). PTX, a G(i/o) inhibitor, abolishes both baclofen and CGP7930-mediated-ERK(1/2) phosphorylation. Moreover, both baclofen and CGP7930 induce ERK-dependent CREB phosphorylation. Furthermore, by using LY294002, a PI-3 kinase inhibitor, and a C-term of GRK-2 that has been reported to sequester Gbetagamma subunits, we demonstrate the role of Gbetagamma in GABA(B) receptor-mediated-ERK(1/2) phosphorylation. In conclusion, the activation of GABA(B) receptor leads to ERK(1/2) phosphorylation via the coupling of GABA(B2) to G(i/o) and by releasing Gbetagamma subunits which in turn induce the activation of CREB. These findings suggest a role of GABA(B) receptor in long-term change in the central nervous system. |
1105.4680 | Jeremy Sumner | Jeremy Sumner, Jesus Fernandez-Sanchez, and Peter Jarvis | Lie Markov Models | 33 Pages, V2: Completed proofs of GTR non-closure | J. Theor. Biol., 298, 16--31, 2012 | null | null | q-bio.PE math.GR math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent work has discussed the importance of multiplicative closure for the
Markov models used in phylogenetics. For continuous-time Markov chains, a
sufficient condition for multiplicative closure of a model class is ensured by
demanding that the set of rate-matrices belonging to the model class form a Lie
algebra. It is the case that some well-known Markov models do form Lie algebras
and we refer to such models as "Lie Markov models". However it is also the case
that some other well-known Markov models unequivocally do not form Lie
algebras. In this paper, we will discuss how to generate Lie Markov models by
demanding that the models have certain symmetries under nucleotide
permutations. We show that the Lie Markov models include, and hence provide a
unifying concept for, "group-based" and "equivariant" models. For each of two,
three and four character states, the full list of Lie Markov models with
maximal symmetry is presented and shown to include interesting examples that
are neither group-based nor equivariant. We also argue that our scheme is
pleasing in the context of applied phylogenetics, as, for a given symmetry of
nucleotide substitution, it provides a natural hierarchy of models with
increasing number of parameters.
| [
{
"created": "Tue, 24 May 2011 05:09:31 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Jul 2011 05:47:30 GMT",
"version": "v2"
}
] | 2012-04-24 | [
[
"Sumner",
"Jeremy",
""
],
[
"Fernandez-Sanchez",
"Jesus",
""
],
[
"Jarvis",
"Peter",
""
]
] | Recent work has discussed the importance of multiplicative closure for the Markov models used in phylogenetics. For continuous-time Markov chains, a sufficient condition for multiplicative closure of a model class is ensured by demanding that the set of rate-matrices belonging to the model class form a Lie algebra. It is the case that some well-known Markov models do form Lie algebras and we refer to such models as "Lie Markov models". However it is also the case that some other well-known Markov models unequivocally do not form Lie algebras. In this paper, we will discuss how to generate Lie Markov models by demanding that the models have certain symmetries under nucleotide permutations. We show that the Lie Markov models include, and hence provide a unifying concept for, "group-based" and "equivariant" models. For each of two, three and four character states, the full list of Lie Markov models with maximal symmetry is presented and shown to include interesting examples that are neither group-based nor equivariant. We also argue that our scheme is pleasing in the context of applied phylogenetics, as, for a given symmetry of nucleotide substitution, it provides a natural hierarchy of models with increasing number of parameters. |
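The closure condition above can be probed numerically: a model class is multiplicatively closed if its rate matrices span a Lie algebra, i.e. the commutator of any two rate matrices stays in the model's linear span. A sketch (my own illustration, not from the paper) contrasting the group-based K80 model, whose rate matrices commute, with symmetric GTR-like matrices, whose commutator is antisymmetric and hence falls outside the symmetric span:

```python
import numpy as np

def bracket(a, b):
    # Lie bracket (commutator) of two rate matrices
    return a @ b - b @ a

def k80(alpha, beta):
    # Kimura 2-parameter rate matrix, state order A, G, C, T
    # (alpha: transitions A<->G, C<->T; beta: transversions)
    q = np.array([[0, alpha, beta, beta],
                  [alpha, 0, beta, beta],
                  [beta, beta, 0, alpha],
                  [beta, beta, alpha, 0]], float)
    np.fill_diagonal(q, -q.sum(axis=1))
    return q

# group-based K80: any two rate matrices commute, so the span is
# trivially closed under the bracket (an abelian Lie algebra)
q1, q2 = k80(1.0, 0.3), k80(0.2, 0.7)
print(np.allclose(bracket(q1, q2), 0))   # True

# symmetric GTR-like matrices (uniform base frequencies): the bracket of
# two symmetric matrices is antisymmetric, hence outside the symmetric span
def random_symmetric_q(rng):
    m = np.triu(rng.random((4, 4)), 1)
    q = m + m.T
    np.fill_diagonal(q, -q.sum(axis=1))
    return q

rng = np.random.default_rng(1)
g1, g2 = random_symmetric_q(rng), random_symmetric_q(rng)
c = bracket(g1, g2)
print(np.allclose(c, 0), np.allclose(c, -c.T))
```

The antisymmetry argument is only a quick way to see non-closure for the uniform-frequency case; the paper's v2 gives the full proofs of GTR non-closure.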
2310.00939 | Quratul Ain Dr. | Qurat-ul-Ain, Munira Taj Muhammad Khalid Mohammed Khan and M. Iqbal
Choudhary | Estimation of In-Vitro Free Radical Scavenging and Cytotoxic Activities
of Thiocarbohydrazones Derivatives | 1 figure, 3 Schemes, 3 tables | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-sa/4.0/ | Objective: To investigate the free radical scavenging potential of
thiocarbohydrazone derivatives for the discovery of antioxidant compounds of
synthetic origin. Method: Eighteen thiocarbohydrazone derivatives were
screened for antioxidant activity using an in vitro radical inhibition assay.
The cytotoxicity of all derivatives was evaluated using an in vitro
cytotoxicity assay on a mouse fibroblast cell line. Results: Several compounds
of the series showed significant radical scavenging activity compared with the
standard drug butylated hydroxytoluene. In addition, in an in vitro cell
cytotoxicity assay the selected analogues were found to be non-cytotoxic
against the fibroblast 3T3 cell line. Conclusion: Taken together, these
findings suggest that the compounds of this series could be further evaluated
in an in vivo model of oxidative stress and developed as therapeutic agents
against oxidative stress related disorders, owing to their excellent free
radical scavenging properties and their non-cytotoxic nature against normal
cells.
| [
{
"created": "Mon, 2 Oct 2023 07:09:54 GMT",
"version": "v1"
}
] | 2023-10-03 | [
[
"Qurat-ul-Ain",
"",
""
],
[
"Khan",
"Munira Taj Muhammad Khalid Mohammed",
""
],
[
"Choudhary",
"M. Iqbal",
""
]
] | Objective: To investigate the free radical scavenging potential of thiocarbohydrazone derivatives for the discovery of antioxidant compounds of synthetic origin. Method: Eighteen thiocarbohydrazone derivatives were screened for antioxidant activity using an in vitro radical inhibition assay. The cytotoxicity of all derivatives was evaluated using an in vitro cytotoxicity assay on a mouse fibroblast cell line. Results: Several compounds of the series showed significant radical scavenging activity compared with the standard drug butylated hydroxytoluene. In addition, in an in vitro cell cytotoxicity assay the selected analogues were found to be non-cytotoxic against the fibroblast 3T3 cell line. Conclusion: Taken together, these findings suggest that the compounds of this series could be further evaluated in an in vivo model of oxidative stress and developed as therapeutic agents against oxidative stress related disorders, owing to their excellent free radical scavenging properties and their non-cytotoxic nature against normal cells.
1304.0132 | Shampa Ghosh | Shampa M. Ghosh, Nicholas D. Testa, Alexander W. Shingleton | Temperature Size Rule is mediated by thermal plasticity of critical size
in Drosophila melanogaster | Accepted in Proceedings of Royal Society B: Biological Sciences | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most ectotherms show an inverse relationship between developmental
temperature and body size, a phenomenon known as the temperature size rule
(TSR). Several competing hypotheses have been proposed to explain its
occurrence. According to one set of views, the TSR results from inevitable
biophysical effects of temperature on the rates of growth and differentiation,
whereas other views suggest the TSR is an adaptation that can be achieved by a
diversity of mechanisms in different taxa. Our data reveal that the fruit fly,
Drosophila melanogaster, obeys the TSR using a novel mechanism: reduction of
critical size at higher temperatures. In holometabolous insects, attainment of
critical size initiates the hormonal cascade that terminates growth, and hence,
Drosophila larvae appear to instigate the signal to stop growth at a smaller
size at higher temperatures. This is in contrast to findings from another
holometabolous insect, Manduca sexta, in which the TSR results from the effect
of temperature on the rate and duration of growth. This contrast suggests that
there is no single mechanism that accounts for the TSR. Instead, the TSR
appears to be an adaptation that is achieved at a proximate level through
different mechanisms in different taxa.
| [
{
"created": "Sat, 30 Mar 2013 19:43:20 GMT",
"version": "v1"
}
] | 2013-04-02 | [
[
"Ghosh",
"Shampa M.",
""
],
[
"Testa",
"Nicholas D.",
""
],
[
"Shingleton",
"Alexander W.",
""
]
] | Most ectotherms show an inverse relationship between developmental temperature and body size, a phenomenon known as the temperature size rule (TSR). Several competing hypotheses have been proposed to explain its occurrence. According to one set of views, the TSR results from inevitable biophysical effects of temperature on the rates of growth and differentiation, whereas other views suggest the TSR is an adaptation that can be achieved by a diversity of mechanisms in different taxa. Our data reveal that the fruit fly, Drosophila melanogaster, obeys the TSR using a novel mechanism: reduction of critical size at higher temperatures. In holometabolous insects, attainment of critical size initiates the hormonal cascade that terminates growth, and hence, Drosophila larvae appear to instigate the signal to stop growth at a smaller size at higher temperatures. This is in contrast to findings from another holometabolous insect, Manduca sexta, in which the TSR results from the effect of temperature on the rate and duration of growth. This contrast suggests that there is no single mechanism that accounts for the TSR. Instead, the TSR appears to be an adaptation that is achieved at a proximate level through different mechanisms in different taxa. |
1607.00128 | Andrea De Martino | Matteo Mori, Terence Hwa, Olivier C. Martin, Andrea De Martino, Enzo
Marinari | Constrained Allocation Flux Balance Analysis | 21 pages, 6 figures (main) + 33 pages, various figures and tables
(supporting); for the supplementary MatLab code, see
http://tinyurl.com/h763es8 | PLoS Comput Biol 12(6): e1004913 (2016) | 10.1371/journal.pcbi.1004913 | null | q-bio.MN cond-mat.dis-nn physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | New experimental results on bacterial growth inspire a novel top-down
approach to study cell metabolism, combining mass balance and proteomic
constraints to extend and complement Flux Balance Analysis. We introduce here
Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic
costs associated with growth are accounted for in an effective way through a
single additional genome-wide constraint. Its roots lie in the experimentally
observed pattern of proteome allocation for metabolic functions, making it
possible to bridge regulation and metabolism in a transparent way under the principle of
growth-rate maximization. We provide a simple method to solve CAFBA efficiently
and propose an "ensemble averaging" procedure to account for unknown protein
costs. Applying this approach to modeling E. coli metabolism, we find that, as
the growth rate increases, CAFBA solutions cross over from respiratory,
growth-yield maximizing states (preferred at slow growth) to fermentative
states with carbon overflow (preferred at fast growth). In addition, CAFBA
allows for quantitatively accurate predictions on the rate of acetate excretion
and growth yield based on only 3 parameters determined by empirical growth
laws.
| [
{
"created": "Fri, 1 Jul 2016 07:08:39 GMT",
"version": "v1"
}
] | 2016-07-04 | [
[
"Mori",
"Matteo",
""
],
[
"Hwa",
"Terence",
""
],
[
"Martin",
"Olivier C.",
""
],
[
"De Martino",
"Andrea",
""
],
[
"Marinari",
"Enzo",
""
]
] | New experimental results on bacterial growth inspire a novel top-down approach to study cell metabolism, combining mass balance and proteomic constraints to extend and complement Flux Balance Analysis. We introduce here Constrained Allocation Flux Balance Analysis, CAFBA, in which the biosynthetic costs associated with growth are accounted for in an effective way through a single additional genome-wide constraint. Its roots lie in the experimentally observed pattern of proteome allocation for metabolic functions, making it possible to bridge regulation and metabolism in a transparent way under the principle of growth-rate maximization. We provide a simple method to solve CAFBA efficiently and propose an "ensemble averaging" procedure to account for unknown protein costs. Applying this approach to modeling E. coli metabolism, we find that, as the growth rate increases, CAFBA solutions cross over from respiratory, growth-yield maximizing states (preferred at slow growth) to fermentative states with carbon overflow (preferred at fast growth). In addition, CAFBA allows for quantitatively accurate predictions on the rate of acetate excretion and growth yield based on only 3 parameters determined by empirical growth laws.
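The respiration-to-fermentation crossover described in the abstract above can be caricatured as a two-reaction linear program with one carbon-uptake bound and one proteome-budget constraint (all yields and costs invented for illustration; the real CAFBA model is genome-scale):

```python
import numpy as np
from scipy.optimize import linprog

def optimal_mix(uptake):
    # maximize growth y.v over v >= 0 subject to:
    #   v_resp + v_ferm       <= uptake  (carbon availability)
    #   1.0*v_resp + 0.2*v_ferm <= 1     (proteome budget; invented costs)
    y = np.array([1.0, 0.5])       # biomass yield per unit flux
    cost = np.array([1.0, 0.2])    # proteome cost per unit flux
    res = linprog(c=-y, A_ub=[[1.0, 1.0], cost], b_ub=[uptake, 1.0],
                  bounds=[(0, None), (0, None)])
    return res.x                   # (v_resp, v_ferm)

print(optimal_mix(0.5))  # carbon-limited: pure respiration
print(optimal_mix(5.0))  # proteome-limited: pure fermentation (overflow)
```

Respiration has the better yield per unit carbon, but fermentation the better yield per unit proteome; which constraint binds determines the optimal pathway, mirroring the slow-growth/fast-growth crossover the paper reports.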
2003.07912 | Gabor Vattay | Gabor Vattay | Predicting the ultimate outcome of the COVID-19 outbreak in Italy | 2 pages | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During the COVID-19 outbreak, it is essential to monitor the effectiveness of
measures taken by governments on the course of the epidemic. Here we show that
there is already a sufficient amount of data collected in Italy to predict the
outcome of the process. We show that, using the proper metric, the data from
Hubei Province and Italy exhibit striking similarity, which enables us to calculate
the expected number of confirmed cases and the number of deaths by the end of
the process. Our predictions will improve as new data points are generated day
by day, which can help to make further public decisions. The method is based on
the data analysis of logistic growth equations describing the process on the
macroscopic level. At the time of writing of the first version, the number of
fatalities in Italy was expected to be 6000, and the predicted end of the
crisis was April 15, 2020. In this new version, we discuss what changed in the
two weeks which passed since then. The trend changed drastically on March 17,
2020, when the Italian health system reached its capacity limit. Without this
limit, probably 3500 more people would have died. Instead, due to the
limitations, 17,000 people are expected to die now, which is a five-fold
increase. The predicted end of the crisis now shifted to May 8, 2020.
| [
{
"created": "Tue, 17 Mar 2020 19:43:51 GMT",
"version": "v1"
},
{
"created": "Tue, 31 Mar 2020 18:57:34 GMT",
"version": "v2"
}
] | 2020-04-02 | [
[
"Vattay",
"Gabor",
""
]
] ] | During the COVID-19 outbreak, it is essential to monitor the effectiveness of measures taken by governments on the course of the epidemic. Here we show that there is already a sufficient amount of data collected in Italy to predict the outcome of the process. We show that, using the proper metric, the data from Hubei Province and Italy exhibit striking similarity, which enables us to calculate the expected number of confirmed cases and the number of deaths by the end of the process. Our predictions will improve as new data points are generated day by day, which can help to make further public decisions. The method is based on the data analysis of logistic growth equations describing the process on the macroscopic level. At the time of writing of the first version, the number of fatalities in Italy was expected to be 6000, and the predicted end of the crisis was April 15, 2020. In this new version, we discuss what changed in the two weeks which passed since then. The trend changed drastically on March 17, 2020, when the Italian health system reached its capacity limit. Without this limit, probably 3500 more people would have died. Instead, due to the limitations, 17,000 people are expected to die now, which is a five-fold increase. The predicted end of the crisis now shifted to May 8, 2020. |
1307.6786 | Alfredo Braunstein | Fabrizio Altarelli, Alfredo Braunstein, Luca Dall'Asta, Alejandro
Lage-Castellanos, Riccardo Zecchina | Bayesian inference of epidemics on networks via Belief Propagation | null | Phys. Rev. Lett. 112, 118701 (2014) | 10.1103/PhysRevLett.112.118701 | null | q-bio.QM cond-mat.stat-mech cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study several Bayesian inference problems for irreversible stochastic
epidemic models on networks from a statistical physics viewpoint. We derive
equations which allow us to accurately compute the posterior distribution of the
time evolution of the state of each node given some observations. In contrast
with most existing methods, we allow very general observation models, including
unobserved nodes, state observations made at different or unknown times, and
observations of infection times, possibly mixed together. Our method, which is
based on the Belief Propagation algorithm, is efficient, naturally distributed,
and exact on trees. As a particular case, we consider the problem of finding
the "zero patient" of a SIR or SI epidemic given a snapshot of the state of the
network at a later unknown time. Numerical simulations show that our method
outperforms previous ones on both synthetic and real networks, often by a very
large margin.
| [
{
"created": "Thu, 25 Jul 2013 15:21:52 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Mar 2014 18:12:12 GMT",
"version": "v2"
}
] | 2014-03-28 | [
[
"Altarelli",
"Fabrizio",
""
],
[
"Braunstein",
"Alfredo",
""
],
[
"Dall'Asta",
"Luca",
""
],
[
"Lage-Castellanos",
"Alejandro",
""
],
[
"Zecchina",
"Riccardo",
""
]
] ] | We study several Bayesian inference problems for irreversible stochastic epidemic models on networks from a statistical physics viewpoint. We derive equations which allow us to accurately compute the posterior distribution of the time evolution of the state of each node given some observations. In contrast with most existing methods, we allow very general observation models, including unobserved nodes, state observations made at different or unknown times, and observations of infection times, possibly mixed together. Our method, which is based on the Belief Propagation algorithm, is efficient, naturally distributed, and exact on trees. As a particular case, we consider the problem of finding the "zero patient" of a SIR or SI epidemic given a snapshot of the state of the network at a later unknown time. Numerical simulations show that our method outperforms previous ones on both synthetic and real networks, often by a very large margin. |
1310.2024 | Silvia Scarpetta | Silvia Scarpetta and Antonio de Candia | Neural Avalanches at the Critical Point between Replay and Non-Replay of
Spatiotemporal Patterns | null | PLOS ONE June 2013 | Volume 8 | Issue 6 | e64162 | 10.1371/journal.pone.0064162 | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We model spontaneous cortical activity with a network of coupled spiking
units, in which multiple spatio-temporal patterns are stored as dynamical
attractors. We introduce an order parameter, which measures the overlap
(similarity) between the activity of the network and the stored patterns. We
find that, depending on the excitability of the network, different working
regimes are possible. For high excitability, the dynamical attractors are
stable, and a collective activity that replays one of the stored patterns
emerges spontaneously, while for low excitability, no replay is induced.
Between these two regimes, there is a critical region in which the dynamical
attractors are unstable, and intermittent short replays are induced by noise.
At the critical spiking threshold, the order parameter goes from zero to one,
and its fluctuations are maximized, as expected for a phase transition (and as
observed in recent experimental results in the brain). Notably, in this
critical region, the avalanche size and duration distributions follow power
laws. Critical exponents are consistent with a scaling relationship observed
recently in neural avalanche measurements. In conclusion, our simple model
suggests that avalanche power laws in cortical spontaneous activity may be the
effect of a network at the critical point between the replay and non-replay of
spatio-temporal patterns.
| [
{
"created": "Tue, 8 Oct 2013 07:58:14 GMT",
"version": "v1"
}
] | 2015-06-17 | [
[
"Scarpetta",
"Silvia",
""
],
[
"de Candia",
"Antonio",
""
]
] ] | We model spontaneous cortical activity with a network of coupled spiking units, in which multiple spatio-temporal patterns are stored as dynamical attractors. We introduce an order parameter, which measures the overlap (similarity) between the activity of the network and the stored patterns. We find that, depending on the excitability of the network, different working regimes are possible. For high excitability, the dynamical attractors are stable, and a collective activity that replays one of the stored patterns emerges spontaneously, while for low excitability, no replay is induced. Between these two regimes, there is a critical region in which the dynamical attractors are unstable, and intermittent short replays are induced by noise. At the critical spiking threshold, the order parameter goes from zero to one, and its fluctuations are maximized, as expected for a phase transition (and as observed in recent experimental results in the brain). Notably, in this critical region, the avalanche size and duration distributions follow power laws. Critical exponents are consistent with a scaling relationship observed recently in neural avalanche measurements. In conclusion, our simple model suggests that avalanche power laws in cortical spontaneous activity may be the effect of a network at the critical point between the replay and non-replay of spatio-temporal patterns. |
1711.10386 | Congrui Jin | Rakenth R. Menon, Jing Luo, Xiaobo Chen, Hui Zhou, Zhiyong Liu,
Guangwen Zhou, Ning Zhang, Congrui Jin | Screening of Fungi for the Application of Self-Healing Concrete | 21 pages. arXiv admin note: text overlap with arXiv:1708.01337 | null | null | null | q-bio.OT physics.app-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Concrete is susceptible to cracking owing to drying shrinkage, freeze-thaw
cycles, delayed ettringite formation, reinforcement corrosion, creep and
fatigue, etc. Since maintenance and inspection of concrete infrastructure
require onerous labor and high costs, self-healing of harmful cracks without
human interference or intervention would be highly attractive. The goal of
this study is to explore a new self-healing approach in which fungi are used as
a self-healing agent to promote calcium carbonate precipitation to fill the
cracks in concrete structures. Recent research results in the field of
geomycology have shown that many species of fungi could play an important role
in promoting calcium carbonate mineralization, but their application in
self-healing concrete has not been reported. Therefore, a screening of
different species of fungi has been conducted in this study. Our results showed
that, despite the drastic pH increase owing to the leaching of calcium
hydroxide from concrete, Aspergillus nidulans (MAD1445), a pH regulatory
mutant, could grow on concrete plates and promote calcium carbonate
precipitation.
| [
{
"created": "Thu, 23 Nov 2017 20:26:27 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Dec 2017 08:21:14 GMT",
"version": "v2"
},
{
"created": "Fri, 22 Dec 2017 03:45:44 GMT",
"version": "v3"
},
{
"created": "Thu, 1 Mar 2018 02:05:01 GMT",
"version": "v4"
},
{
"created": "Fri, 2 Mar 2018 22:49:45 GMT",
"version": "v5"
},
{
"created": "Mon, 16 Jul 2018 16:46:34 GMT",
"version": "v6"
}
] | 2018-07-17 | [
[
"Menon",
"Rakenth R.",
""
],
[
"Luo",
"Jing",
""
],
[
"Chen",
"Xiaobo",
""
],
[
"Zhou",
"Hui",
""
],
[
"Liu",
"Zhiyong",
""
],
[
"Zhou",
"Guangwen",
""
],
[
"Zhang",
"Ning",
""
],
[
"Jin",
"Congrui",
""
]
] ] | Concrete is susceptible to cracking owing to drying shrinkage, freeze-thaw cycles, delayed ettringite formation, reinforcement corrosion, creep and fatigue, etc. Since maintenance and inspection of concrete infrastructure require onerous labor and high costs, self-healing of harmful cracks without human interference or intervention would be highly attractive. The goal of this study is to explore a new self-healing approach in which fungi are used as a self-healing agent to promote calcium carbonate precipitation to fill the cracks in concrete structures. Recent research results in the field of geomycology have shown that many species of fungi could play an important role in promoting calcium carbonate mineralization, but their application in self-healing concrete has not been reported. Therefore, a screening of different species of fungi has been conducted in this study. Our results showed that, despite the drastic pH increase owing to the leaching of calcium hydroxide from concrete, Aspergillus nidulans (MAD1445), a pH regulatory mutant, could grow on concrete plates and promote calcium carbonate precipitation. |
1712.00336 | Junfang Chen | Junfang Chen and Emanuel Schwarz | BioMM: Biologically-informed Multi-stage Machine learning for
identification of epigenetic fingerprints | 6 pages, NIPS 2017 ML4H | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The identification of reproducible biological patterns from high-dimensional
data is a bottleneck for understanding the biology of complex illnesses such as
schizophrenia. To address this, we developed a biologically informed,
multi-stage machine learning (BioMM) framework. BioMM incorporates biological
pathway information to stratify and aggregate high-dimensional biological data.
We demonstrate the utility of this method using genome-wide DNA methylation
data and show that it substantially outperforms conventional machine learning
approaches. Therefore, the BioMM framework may be a fruitful machine learning
strategy in high-dimensional data and be the basis for future, integrative
analysis approaches.
| [
{
"created": "Fri, 1 Dec 2017 14:31:27 GMT",
"version": "v1"
}
] | 2017-12-04 | [
[
"Chen",
"Junfang",
""
],
[
"Schwarz",
"Emanuel",
""
]
] | The identification of reproducible biological patterns from high-dimensional data is a bottleneck for understanding the biology of complex illnesses such as schizophrenia. To address this, we developed a biologically informed, multi-stage machine learning (BioMM) framework. BioMM incorporates biological pathway information to stratify and aggregate high-dimensional biological data. We demonstrate the utility of this method using genome-wide DNA methylation data and show that it substantially outperforms conventional machine learning approaches. Therefore, the BioMM framework may be a fruitful machine learning strategy in high-dimensional data and be the basis for future, integrative analysis approaches. |
2212.04827 | Cyprien Tamekue | Cyprien Tamekue, Dario Prandi, Yacine Chitour | Cortical origins of MacKay-type visual illusions: A case for the
non-linearity | null | null | null | null | q-bio.NC cs.NA math.NA math.OC nlin.PS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To study the interaction between retinal stimulation by redundant geometrical
patterns and the cortical response in the primary visual cortex (V1), we focus
on the MacKay effect (Nature, 1957) and Billock and Tsou's experiments (PNAS,
2007). We use a controllability approach to describe these phenomena starting
from a classical biological model of neuronal field equations with a non-linear
response function. The external input containing a localised control function
is interpreted as a cortical representation of the static visual stimuli used
in these experiments. We prove that while the MacKay effect is essentially a
linear phenomenon (i.e., the nonlinear nature of the activation does not play
any role in its reproduction), the phenomena reported by Billock and Tsou are
wholly nonlinear and depend strongly on the shape of the nonlinearity used to
model the response function.
| [
{
"created": "Fri, 9 Dec 2022 12:57:14 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Jun 2023 14:26:16 GMT",
"version": "v2"
}
] | 2023-06-19 | [
[
"Tamekue",
"Cyprien",
""
],
[
"Prandi",
"Dario",
""
],
[
"Chitour",
"Yacine",
""
]
] | To study the interaction between retinal stimulation by redundant geometrical patterns and the cortical response in the primary visual cortex (V1), we focus on the MacKay effect (Nature, 1957) and Billock and Tsou's experiments (PNAS, 2007). We use a controllability approach to describe these phenomena starting from a classical biological model of neuronal field equations with a non-linear response function. The external input containing a localised control function is interpreted as a cortical representation of the static visual stimuli used in these experiments. We prove that while the MacKay effect is essentially a linear phenomenon (i.e., the nonlinear nature of the activation does not play any role in its reproduction), the phenomena reported by Billock and Tsou are wholly nonlinear and depend strongly on the shape of the nonlinearity used to model the response function. |
2303.09669 | Thomas Bury | Thomas M. Bury, Daniel Dylewsky, Chris T. Bauch, Madhur Anand, Leon
Glass, Alvin Shrier, Gil Bub | Predicting discrete-time bifurcations with deep learning | null | Nat Commun 14, 6331 (2023) | 10.1038/s41467-023-42020-z | null | q-bio.QM cs.LG math.DS | http://creativecommons.org/licenses/by/4.0/ | Many natural and man-made systems are prone to critical transitions -- abrupt
and potentially devastating changes in dynamics. Deep learning classifiers can
provide an early warning signal (EWS) for critical transitions by learning
generic features of bifurcations (dynamical instabilities) from large simulated
training data sets. So far, classifiers have only been trained to predict
continuous-time bifurcations, ignoring rich dynamics unique to discrete-time
bifurcations. Here, we train a deep learning classifier to provide an EWS for
the five local discrete-time bifurcations of codimension-1. We test the
classifier on simulation data from discrete-time models used in physiology,
economics and ecology, as well as experimental data of spontaneously beating
chick-heart aggregates that undergo a period-doubling bifurcation. The
classifier outperforms commonly used EWS under a wide range of noise
intensities and rates of approach to the bifurcation. It also predicts the
correct bifurcation in most cases, with particularly high accuracy for the
period-doubling, Neimark-Sacker and fold bifurcations. Deep learning as a tool
for bifurcation prediction is still in its nascence and has the potential to
transform the way we monitor systems for critical transitions.
| [
{
"created": "Thu, 16 Mar 2023 22:08:41 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Feb 2024 20:59:42 GMT",
"version": "v2"
}
] | 2024-02-12 | [
[
"Bury",
"Thomas M.",
""
],
[
"Dylewsky",
"Daniel",
""
],
[
"Bauch",
"Chris T.",
""
],
[
"Anand",
"Madhur",
""
],
[
"Glass",
"Leon",
""
],
[
"Shrier",
"Alvin",
""
],
[
"Bub",
"Gil",
""
]
] | Many natural and man-made systems are prone to critical transitions -- abrupt and potentially devastating changes in dynamics. Deep learning classifiers can provide an early warning signal (EWS) for critical transitions by learning generic features of bifurcations (dynamical instabilities) from large simulated training data sets. So far, classifiers have only been trained to predict continuous-time bifurcations, ignoring rich dynamics unique to discrete-time bifurcations. Here, we train a deep learning classifier to provide an EWS for the five local discrete-time bifurcations of codimension-1. We test the classifier on simulation data from discrete-time models used in physiology, economics and ecology, as well as experimental data of spontaneously beating chick-heart aggregates that undergo a period-doubling bifurcation. The classifier outperforms commonly used EWS under a wide range of noise intensities and rates of approach to the bifurcation. It also predicts the correct bifurcation in most cases, with particularly high accuracy for the period-doubling, Neimark-Sacker and fold bifurcations. Deep learning as a tool for bifurcation prediction is still in its nascence and has the potential to transform the way we monitor systems for critical transitions. |
1802.07562 | Sirio Orozco-Fuentes | S. Orozco-Fuentes, G. Griffiths, M. J. Holmes, R. Ettelaie, J. Smith,
A. W. Baggaley and N. G. Parker | Early warning signals in plant disease outbreaks | 11 pages | null | 10.1016/j.ecolmodel.2018.11.003 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Summary
1. Infectious disease outbreaks in plants threaten ecosystems, agricultural
crops and food trade. Currently, several fungal diseases are affecting forests
worldwide, posing a major risk to tree species, habitats and consequently
ecosystem decay. Prediction and control of disease spread are difficult, mainly
due to the complexity of the interaction between individual components
involved.
2. In this work, we introduce a lattice-based epidemic model coupled with a
stochastic process that mimics, in a very simplified way, the interaction
between the hosts and pathogen. We studied the disease spread by measuring the
propagation velocity of the pathogen on the susceptible hosts. Quantitative
results indicate the occurrence of a critical transition between two stable
phases: local confinement and an extended epiphytotic outbreak that depends on
the density of the susceptible individuals.
3. Quantitative predictions of epiphytotics are performed using the framework
of early-warning indicators for impending regime shifts, widely applied to
dynamical systems. These signals forecast successfully the outcome of the
critical shift between the two stable phases before the system enters the
epiphytotic regime.
4. Synthesis: Our study demonstrates that early-warning indicators could be
useful for the prediction of forest disease epidemics through mathematical and
computational models suited to more specific pathogen-host-environmental
interactions.
| [
{
"created": "Wed, 21 Feb 2018 13:29:49 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Dec 2018 22:06:14 GMT",
"version": "v2"
}
] | 2018-12-18 | [
[
"Orozco-Fuentes",
"S.",
""
],
[
"Griffiths",
"G.",
""
],
[
"Holmes",
"M. J.",
""
],
[
"Ettelaie",
"R.",
""
],
[
"Smith",
"J.",
""
],
[
"Baggaley",
"A. W.",
""
],
[
"Parker",
"N. G.",
""
]
] ] | Summary 1. Infectious disease outbreaks in plants threaten ecosystems, agricultural crops and food trade. Currently, several fungal diseases are affecting forests worldwide, posing a major risk to tree species, habitats and consequently ecosystem decay. Prediction and control of disease spread are difficult, mainly due to the complexity of the interaction between individual components involved. 2. In this work, we introduce a lattice-based epidemic model coupled with a stochastic process that mimics, in a very simplified way, the interaction between the hosts and pathogen. We studied the disease spread by measuring the propagation velocity of the pathogen on the susceptible hosts. Quantitative results indicate the occurrence of a critical transition between two stable phases: local confinement and an extended epiphytotic outbreak that depends on the density of the susceptible individuals. 3. Quantitative predictions of epiphytotics are performed using the framework of early-warning indicators for impending regime shifts, widely applied to dynamical systems. These signals forecast successfully the outcome of the critical shift between the two stable phases before the system enters the epiphytotic regime. 4. Synthesis: Our study demonstrates that early-warning indicators could be useful for the prediction of forest disease epidemics through mathematical and computational models suited to more specific pathogen-host-environmental interactions. |
2301.04739 | Mike Klymkowsky | Michael W. Klymkowsky | Making sense of noise: introducing students to stochastic processes in
order to better understand biological behaviors | 10 pages, 7 embedded figures, 3 text boxes | null | null | null | q-bio.CB q-bio.MN | http://creativecommons.org/licenses/by-sa/4.0/ | Biological systems are characterized by the ubiquitous roles of weak, that
is, non-covalent molecular interactions, small, often very small, numbers of
specific molecules per cell, and Brownian motion. These combine to produce
stochastic behaviors at all levels from the molecular and cellular to the
behavioral. That said, students are rarely introduced to the ubiquitous role of
stochastic processes in biological systems, and how they produce unpredictable
behaviors. Here I present the case that they need to be and provide some
suggestions as to how it might be approached.
| [
{
"created": "Wed, 11 Jan 2023 22:11:05 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Apr 2023 15:03:22 GMT",
"version": "v2"
}
] | 2023-04-27 | [
[
"Klymkowsky",
"Michael W.",
""
]
] | Biological systems are characterized by the ubiquitous roles of weak, that is, non-covalent molecular interactions, small, often very small, numbers of specific molecules per cell, and Brownian motion. These combine to produce stochastic behaviors at all levels from the molecular and cellular to the behavioral. That said, students are rarely introduced to the ubiquitous role of stochastic processes in biological systems, and how they produce unpredictable behaviors. Here I present the case that they need to be and provide some suggestions as to how it might be approached. |
1708.01502 | Liang Yu | Liang Yu, Jin Zhao, Lin Gao | Predicting potential treatments for complex diseases based on miRNA and
tissue specificity | null | null | null | null | q-bio.GN cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Drug repositioning means finding new uses for existing drugs to treat more
patients. Cumulative studies demonstrate that mature miRNAs as well as
their precursors can be targeted by small-molecule drugs. At the same time,
human diseases result from the disordered interplay of tissue- and cell
lineage-specific processes. However, few computational studies predict
potential drug-disease relationships based on miRNA data and tissue
specificity. Therefore, based on miRNA data and the tissue specificity of
diseases, we propose a new method, named miTS, to predict the potential
treatments for diseases. Firstly, based on miRNA data, target genes and
information of FDA approved drugs, we evaluate the relationships between miRNAs
and drugs in the tissue-specific PPI network. Then, we construct a tripartite
network: drug-miRNA-disease. Finally, we obtain the potential drug-disease
associations based on the tripartite network. In this paper, we take breast
cancer as a case study and focus on the top-30 predicted drugs. 25 of them
(83.3%) are found to have known connections with breast cancer in the CTD benchmark
and the other 5 drugs are potential drugs for breast cancer. We further
evaluate the 5 newly predicted drugs from clinical records, literature mining,
KEGG pathways enrichment analysis and overlapping genes between enriched
pathways. For each of the 5 new drugs, strong supporting evidence can be
found in three or more aspects. In particular, Regorafenib has 15 overlapping
KEGG pathways with breast cancer and their p-values are all very small. In
addition, whether in the literature curation or clinical validation,
Regorafenib has a strong correlation with breast cancer. All the facts show
that Regorafenib is likely to be a truly effective drug, worthy of our further
study. It further follows that our method miTS is effective and practical for
predicting new drug indications.
| [
{
"created": "Fri, 30 Jun 2017 09:57:54 GMT",
"version": "v1"
}
] | 2017-08-07 | [
[
"Yu",
"Liang",
""
],
[
"Zhao",
"Jin",
""
],
[
"Gao",
"Lin",
""
]
] ] | Drug repositioning means finding new uses for existing drugs to treat more patients. Cumulative studies demonstrate that mature miRNAs as well as their precursors can be targeted by small-molecule drugs. At the same time, human diseases result from the disordered interplay of tissue- and cell lineage-specific processes. However, few computational studies predict potential drug-disease relationships based on miRNA data and tissue specificity. Therefore, based on miRNA data and the tissue specificity of diseases, we propose a new method, named miTS, to predict the potential treatments for diseases. Firstly, based on miRNA data, target genes and information of FDA approved drugs, we evaluate the relationships between miRNAs and drugs in the tissue-specific PPI network. Then, we construct a tripartite network: drug-miRNA-disease. Finally, we obtain the potential drug-disease associations based on the tripartite network. In this paper, we take breast cancer as a case study and focus on the top-30 predicted drugs. 25 of them (83.3%) are found to have known connections with breast cancer in the CTD benchmark and the other 5 drugs are potential drugs for breast cancer. We further evaluate the 5 newly predicted drugs from clinical records, literature mining, KEGG pathways enrichment analysis and overlapping genes between enriched pathways. For each of the 5 new drugs, strong supporting evidence can be found in three or more aspects. In particular, Regorafenib has 15 overlapping KEGG pathways with breast cancer and their p-values are all very small. In addition, whether in the literature curation or clinical validation, Regorafenib has a strong correlation with breast cancer. All the facts show that Regorafenib is likely to be a truly effective drug, worthy of our further study. It further follows that our method miTS is effective and practical for predicting new drug indications. |
2002.04564 | John Rhodes | Samaneh Yourdkhani and John A. Rhodes | Inferring metric trees from weighted quartets via an intertaxon distance | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A metric phylogenetic tree relating a collection of taxa induces weighted
rooted triples and weighted quartets for all subsets of three and four taxa,
respectively. New intertaxon distances are defined that can be calculated from
these weights, and shown to exactly fit the same tree topology, but with edge
weights rescaled by certain factors dependent on the associated split size.
These distances are analogs for metric trees of similar ones recently
introduced for topological trees that are based on induced unweighted rooted
triples and quartets. The distances introduced here lead to new statistically
consistent methods of inferring a metric species tree from a collection of
topological gene trees generated under the multispecies coalescent model of
incomplete lineage sorting. Simulations provide insight into their potential.
| [
{
"created": "Tue, 11 Feb 2020 17:39:24 GMT",
"version": "v1"
}
] | 2020-02-12 | [
[
"Yourdkhani",
"Samaneh",
""
],
[
"Rhodes",
"John A.",
""
]
] | A metric phylogenetic tree relating a collection of taxa induces weighted rooted triples and weighted quartets for all subsets of three and four taxa, respectively. New intertaxon distances are defined that can be calculated from these weights, and shown to exactly fit the same tree topology, but with edge weights rescaled by certain factors dependent on the associated split size. These distances are analogs for metric trees of similar ones recently introduced for topological trees that are based on induced unweighted rooted triples and quartets. The distances introduced here lead to new statistically consistent methods of inferring a metric species tree from a collection of topological gene trees generated under the multispecies coalescent model of incomplete lineage sorting. Simulations provide insight into their potential. |
1105.0627 | Bernhard Mehlig | E. Schaper, A. Eriksson, M. Rafajlovic, S. Sagitov, and B. Mehlig | Linkage disequilibrium under recurrent bottlenecks | 34 pages, 7 figures | Genetics 190, 217 (2012) | 10.1534/genetics.111.134437 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding patterns of selectively neutral genetic variation is essential
in order to model deviations from neutrality, caused for example by different
forms of selection. Best understood is neutral genetic variation at a single
locus, but additional insights can be gained by investigating genetic variation
at multiple loci. The corresponding patterns of variation reflect linkage
disequilibrium and provide information about the underlying multi-locus gene
genealogies. The statistical properties of two-locus genealogies have been
intensively studied for populations of constant census size, as well as for
simple demographic histories such as exponential population growth, and single
bottlenecks. By contrast, the combined effect of recombination and sustained
demographic fluctuations is poorly understood. Addressing this issue, we study
a two-locus Wright-Fisher model of a population subject to recurrent
bottlenecks. We derive coalescent approximations for the covariance of the
times to the most recent common ancestor at two loci. We find, first, that an
effective population-size approximation describes the numerically observed
linkage disequilibrium provided that recombination occurs either much faster or
much more slowly than the population size changes. Second, when recombination
occurs frequently between bottlenecks but rarely within bottlenecks, we observe
long-range linkage disequilibrium. Third, we show that in the latter case, a
commonly used measure of linkage disequilibrium, sigma_d^2 (closely related to
r^2), fails to capture long-range linkage disequilibrium because constituent
terms, each reflecting long-range linkage disequilibrium, cancel. Fourth, we
analyse a limiting case in which long-range linkage disequilibrium can be
described in terms of a Xi-coalescent process allowing for simultaneous
multiple mergers of ancestral lines.
| [
{
"created": "Tue, 3 May 2011 16:30:25 GMT",
"version": "v1"
}
] | 2012-06-13 | [
[
"Schaper",
"E.",
""
],
[
"Eriksson",
"A.",
""
],
[
"Rafajlovic",
"M.",
""
],
[
"Sagitov",
"S.",
""
],
[
"Mehlig",
"B.",
""
]
] | Understanding patterns of selectively neutral genetic variation is essential in order to model deviations from neutrality, caused for example by different forms of selection. Best understood is neutral genetic variation at a single locus, but additional insights can be gained by investigating genetic variation at multiple loci. The corresponding patterns of variation reflect linkage disequilibrium and provide information about the underlying multi-locus gene genealogies. The statistical properties of two-locus genealogies have been intensively studied for populations of constant census size, as well as for simple demographic histories such as exponential population growth, and single bottlenecks. By contrast, the combined effect of recombination and sustained demographic fluctuations is poorly understood. Addressing this issue, we study a two-locus Wright-Fisher model of a population subject to recurrent bottlenecks. We derive coalescent approximations for the covariance of the times to the most recent common ancestor at two loci. We find, first, that an effective population-size approximation describes the numerically observed linkage disequilibrium provided that recombination occurs either much faster or much more slowly than the population size changes. Second, when recombination occurs frequently between bottlenecks but rarely within bottlenecks, we observe long-range linkage disequilibrium. Third, we show that in the latter case, a commonly used measure of linkage disequilibrium, sigma_d^2 (closely related to r^2), fails to capture long-range linkage disequilibrium because constituent terms, each reflecting long-range linkage disequilibrium, cancel. Fourth, we analyse a limiting case in which long-range linkage disequilibrium can be described in terms of a Xi-coalescent process allowing for simultaneous multiple mergers of ancestral lines. |
2408.07250 | Alejandro Leon PhD | Carlos Alberto Hern\'andez-Linares, Porfirio Toledo, Brenda Zarah\'i
Medina-P\'erez, Varsovia Hern\'andez, Martha Lorena Avenda\~no Garrido,
V\'ictor Quintero, Alejandro Le\'on | Spatial Dynamics Behavioral Analysis of Motivational Operations Using
Weighted Voronoi Diagrams | 10 pages, 3 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a novel approach to the analysis of spatial behavior
distribution, utilizing weighted Voronoi diagrams. The objective is to map and
understand how an experimental subject moves and spends time in various areas
of a given space, thus identifying the areas of greatest behavioral interest.
The technique entails the partitioning of the space into a grid, the
designation of generator points, and the assignment of weights based on the
time the subject spends in each region. The data analyzed were derived from
multiple experimental sessions in which subjects were exposed to various
conditions, including food deprivation, water deprivation, and combined
deprivation and no deprivation. The aforementioned conditions resulted in the
formation of clearly delineated spatial patterns. Weighted Voronoi diagrams
provided a comprehensive and precise representation of these areas of interest,
facilitating an in-depth examination of the evolution of behavioral patterns in
diverse contexts, such as under different Motivational Operations. This tool
offers a valuable perspective for the dynamic study of spatial behaviors in
variable experimental settings.
| [
{
"created": "Wed, 14 Aug 2024 01:26:22 GMT",
"version": "v1"
}
] | 2024-08-15 | [
[
"Hernández-Linares",
"Carlos Alberto",
""
],
[
"Toledo",
"Porfirio",
""
],
[
"Medina-Pérez",
"Brenda Zarahí",
""
],
[
"Hernández",
"Varsovia",
""
],
[
"Garrido",
"Martha Lorena Avendaño",
""
],
[
"Quintero",
"Víctor",
""
],
[
"León",
"Alejandro",
""
]
] | This paper presents a novel approach to the analysis of spatial behavior distribution, utilizing weighted Voronoi diagrams. The objective is to map and understand how an experimental subject moves and spends time in various areas of a given space, thus identifying the areas of greatest behavioral interest. The technique entails the partitioning of the space into a grid, the designation of generator points, and the assignment of weights based on the time the subject spends in each region. The data analyzed were derived from multiple experimental sessions in which subjects were exposed to various conditions, including food deprivation, water deprivation, and combined deprivation and no deprivation. The aforementioned conditions resulted in the formation of clearly delineated spatial patterns. Weighted Voronoi diagrams provided a comprehensive and precise representation of these areas of interest, facilitating an in-depth examination of the evolution of behavioral patterns in diverse contexts, such as under different Motivational Operations. This tool offers a valuable perspective for the dynamic study of spatial behaviors in variable experimental settings. |