id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2102.00570 | Tommy Peng | Tommy Peng, Avinash Malik, Laura R. Bear, Mark L. Trew | Impulse data models for the inverse problem of electrocardiography | Accepted by IEEE Journal of Biomedical and Health Informatics | null | 10.1109/JBHI.2021.3106645 | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | The proposed method re-frames traditional inverse problems of
electrocardiography into regression problems, constraining the solution space
by decomposing signals with multidimensional Gaussian impulse basis functions.
Impulse HSPs were generated with single Gaussian basis functions at discrete
heart surface locations and projected to corresponding BSPs using a volume
conductor torso model. Both BSP (inputs) and HSP (outputs) were mapped to
regular 2D surface meshes and used to train a neural network. Predictive
capabilities of the network were tested with unseen synthetic and experimental
data. A dense fully connected single hidden layer neural network was trained to
map body surface impulses to heart surface Gaussian basis functions for
reconstructing HSP. Synthetic pulses moving across the heart surface were
predicted from the neural network with root mean squared error of $9.1\pm1.4$%.
Predicted signals were robust to noise up to 20 dB and errors due to
displacement and rotation of the heart within the torso were bounded and
predictable. A shift of the heart 40 mm toward the spine resulted in a 4\%
increase in signal feature localization error. The set of training impulse
function data could be reduced and prediction error remained bounded. Recorded
HSPs from in-vitro pig hearts were reliably decomposed using space-time
Gaussian basis functions. Predicted HSPs for left-ventricular pacing had a mean
absolute error of $10.4\pm11.4$ ms. Other pacing scenarios were analyzed with
similar success. Conclusion: Impulses from Gaussian basis functions are
potentially an effective and robust way to train simple neural network data
models for reconstructing HSPs from decomposed BSPs. The HSPs predicted by the
neural network can be used to generate activation maps that non-invasively
identify features of cardiac electrical dysfunction and can guide subsequent
treatment options.
| [
{
"created": "Mon, 1 Feb 2021 00:25:54 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Jul 2021 23:05:04 GMT",
"version": "v2"
},
{
"created": "Thu, 19 Aug 2021 11:02:31 GMT",
"version": "v3"
}
] | 2021-08-20 | [
[
"Peng",
"Tommy",
""
],
[
"Malik",
"Avinash",
""
],
[
"Bear",
"Laura R.",
""
],
[
"Trew",
"Mark L.",
""
]
] | The proposed method re-frames traditional inverse problems of electrocardiography into regression problems, constraining the solution space by decomposing signals with multidimensional Gaussian impulse basis functions. Impulse HSPs were generated with single Gaussian basis functions at discrete heart surface locations and projected to corresponding BSPs using a volume conductor torso model. Both BSP (inputs) and HSP (outputs) were mapped to regular 2D surface meshes and used to train a neural network. Predictive capabilities of the network were tested with unseen synthetic and experimental data. A dense fully connected single hidden layer neural network was trained to map body surface impulses to heart surface Gaussian basis functions for reconstructing HSP. Synthetic pulses moving across the heart surface were predicted from the neural network with root mean squared error of $9.1\pm1.4$%. Predicted signals were robust to noise up to 20 dB and errors due to displacement and rotation of the heart within the torso were bounded and predictable. A shift of the heart 40 mm toward the spine resulted in a 4\% increase in signal feature localization error. The set of training impulse function data could be reduced and prediction error remained bounded. Recorded HSPs from in-vitro pig hearts were reliably decomposed using space-time Gaussian basis functions. Predicted HSPs for left-ventricular pacing had a mean absolute error of $10.4\pm11.4$ ms. Other pacing scenarios were analyzed with similar success. Conclusion: Impulses from Gaussian basis functions are potentially an effective and robust way to train simple neural network data models for reconstructing HSPs from decomposed BSPs. The HSPs predicted by the neural network can be used to generate activation maps that non-invasively identify features of cardiac electrical dysfunction and can guide subsequent treatment options. |
1710.07841 | Abhilash Patel | Abhilash Patel, Shaunak Sen | Non-normality Can Facilitate Pulsing in Biomolecular Circuits | null | null | 10.1049/iet-syb.2018.0008 | null | q-bio.MN cs.SY math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-normality can underlie pulse dynamics in many engineering contexts.
However, its role in pulses generated in biomolecular contexts is generally
unclear. Here, we address this issue using the mathematical tools of linear
algebra and systems theory on simple computational models of biomolecular
circuits. We find that non-normality is present in standard models of
feedforward loops. We used a generalized framework and pseudospectrum analysis
to identify non-normality in larger biomolecular circuit models, finding that
it correlates well with pulsing dynamics. Finally, we illustrate how these
methods can be used to provide analytical support to numerical screens for
pulsing dynamics as well as provide guidelines for design.
| [
{
"created": "Sat, 21 Oct 2017 18:47:35 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Feb 2018 07:09:03 GMT",
"version": "v2"
},
{
"created": "Sat, 2 Jun 2018 15:17:06 GMT",
"version": "v3"
}
] | 2018-06-05 | [
[
"Patel",
"Abhilash",
""
],
[
"Sen",
"Shaunak",
""
]
] | Non-normality can underlie pulse dynamics in many engineering contexts. However, its role in pulses generated in biomolecular contexts is generally unclear. Here, we address this issue using the mathematical tools of linear algebra and systems theory on simple computational models of biomolecular circuits. We find that non-normality is present in standard models of feedforward loops. We used a generalized framework and pseudospectrum analysis to identify non-normality in larger biomolecular circuit models, finding that it correlates well with pulsing dynamics. Finally, we illustrate how these methods can be used to provide analytical support to numerical screens for pulsing dynamics as well as provide guidelines for design. |
1206.4539 | Pietro Faccioli | P. Faccioli and F. Pederiva | Microscopically Computing Free-energy Profiles and Transition Path Time
of Rare Macromolecular Transitions | Accepted for publication in Physical Review E | null | 10.1103/PhysRevE.86.061916 | null | q-bio.BM physics.chem-ph physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a rigorous method to microscopically compute the observables
which characterize the thermodynamics and kinetics of rare macromolecular
transitions for which it is possible to identify a priori a slow reaction
coordinate. In order to sample the ensemble of statistically significant
reaction pathways, we define a biased molecular dynamics (MD) in which
barrier-crossing transitions are accelerated without introducing any unphysical
external force. In contrast to other biased MD methods, in the present approach
the systematic errors which are generated in order to accelerate the transition
can be analytically calculated and therefore can be corrected for. This allows
for a computationally efficient reconstruction of the free-energy profile as a
function of the reaction coordinate and for the calculation of the
corresponding diffusion coefficient. The transition path time can then be
readily evaluated within the Dominant Reaction Pathways (DRP) approach. We
illustrate and test this method by characterizing a thermally activated
transition on a two-dimensional energy surface and the folding of a small
protein fragment within a coarse-grained model.
| [
{
"created": "Wed, 20 Jun 2012 15:34:26 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Dec 2012 14:06:53 GMT",
"version": "v2"
}
] | 2015-06-05 | [
[
"Faccioli",
"P.",
""
],
[
"Pederiva",
"F.",
""
]
] | We introduce a rigorous method to microscopically compute the observables which characterize the thermodynamics and kinetics of rare macromolecular transitions for which it is possible to identify a priori a slow reaction coordinate. In order to sample the ensemble of statistically significant reaction pathways, we define a biased molecular dynamics (MD) in which barrier-crossing transitions are accelerated without introducing any unphysical external force. In contrast to other biased MD methods, in the present approach the systematic errors which are generated in order to accelerate the transition can be analytically calculated and therefore can be corrected for. This allows for a computationally efficient reconstruction of the free-energy profile as a function of the reaction coordinate and for the calculation of the corresponding diffusion coefficient. The transition path time can then be readily evaluated within the Dominant Reaction Pathways (DRP) approach. We illustrate and test this method by characterizing a thermally activated transition on a two-dimensional energy surface and the folding of a small protein fragment within a coarse-grained model. |
0906.0564 | Giuseppe Vitiello | Giuseppe Vitiello | Coherent states, fractals and brain waves | null | New Mathematics and Natural Computing Vol.5, No. 1 (2009) 245-264 | null | null | q-bio.NC q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | I show that a functional representation of self-similarity (as the one
occurring in fractals) is provided by squeezed coherent states. In this way,
the dissipative model of the brain is shown to account for the self-similarity in
brain background activity suggested by power-law distributions of power
spectral densities of electrocorticograms. I also briefly discuss the
action-perception cycle in the dissipative model with reference to
intentionality in terms of trajectories in the memory state space.
| [
{
"created": "Tue, 2 Jun 2009 19:19:00 GMT",
"version": "v1"
}
] | 2009-06-03 | [
[
"Vitiello",
"Giuseppe",
""
]
] | I show that a functional representation of self-similarity (as the one occurring in fractals) is provided by squeezed coherent states. In this way, the dissipative model of the brain is shown to account for the self-similarity in brain background activity suggested by power-law distributions of power spectral densities of electrocorticograms. I also briefly discuss the action-perception cycle in the dissipative model with reference to intentionality in terms of trajectories in the memory state space. |
0805.0530 | Prashanth Alluvada | Prashanth Alluvada | Analytical Equations to the Chromaticity Cone: Algebraic Methods for
Describing Color | 20 pages, 17 figures, 1 table | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We describe an affine transformation on the (CIE) color matching functions
and map the spectral locus as a circle. We then homogenize the right circular
cylinder erected by the circle, with respect to a normalizing plane and develop
an analytical equation to the chromaticity cone, for the spectral colors. In
the interior of the (CIE) chromaticity diagram, by homogenizing elliptic
cylinders with respect to the normalizing planes, analytical equations to
subsets (also cones) of the chromaticity cone are developed. These equations
provide an algebraic method for describing color perception. As an application
of the interior chromaticity cones, we demonstrate that by sectioning
homogenized cones with planes and projecting, analytical equations to the
Macadam ellipses may be derived. Further, the cone equations are used to
propose new types of color order systems.
| [
{
"created": "Mon, 5 May 2008 13:42:02 GMT",
"version": "v1"
},
{
"created": "Sun, 1 Jun 2008 19:25:20 GMT",
"version": "v2"
}
] | 2008-06-01 | [
[
"Alluvada",
"Prashanth",
""
]
] | We describe an affine transformation on the (CIE) color matching functions and map the spectral locus as a circle. We then homogenize the right circular cylinder erected by the circle, with respect to a normalizing plane and develop an analytical equation to the chromaticity cone, for the spectral colors. In the interior of the (CIE) chromaticity diagram, by homogenizing elliptic cylinders with respect to the normalizing planes, analytical equations to subsets (also cones) of the chromaticity cone are developed. These equations provide an algebraic method for describing color perception. As an application of the interior chromaticity cones, we demonstrate that by sectioning homogenized cones with planes and projecting, analytical equations to the Macadam ellipses may be derived. Further, the cone equations are used to propose new types of color order systems. |
1809.04613 | Daniele Cappelletti | David Anderson and Daniele Cappelletti | Discrepancies between extinction events and boundary equilibria in
reaction networks | null | null | null | null | q-bio.MN math.DS math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reaction networks are mathematical models of interacting chemical species
that are primarily used in biochemistry. There are two modeling regimes that
are typically used, one of which is deterministic and one that is stochastic.
In particular, the deterministic model consists of an autonomous system of
differential equations, whereas the stochastic system is a continuous time
Markov chain. Connections between the two modeling regimes have been studied
since the seminal paper by Kurtz (1972), where the deterministic model is shown
to be a limit of a properly rescaled stochastic model over compact time
intervals. Further, more recent studies have connected the long-term behaviors
of the two models when the reaction network satisfies certain graphical
properties, such as weak reversibility and a deficiency of zero.
These connections have led some to conjecture that a link between the long-term
behavior of the two models exists, in some sense. In particular, one is tempted
to believe that positive recurrence of all states for the stochastic model
implies the existence of positive equilibria in the deterministic setting, and
that boundary equilibria of the deterministic model imply the occurrence of an
extinction event in the stochastic setting. We prove in this paper that these
implications do not hold in general, even if restricting the analysis to
networks that are bimolecular and that conserve the total mass. In particular,
we disprove the implications in the special case of models that have absolute
concentration robustness, thus answering in the negative a conjecture stated in
the literature in 2014.
| [
{
"created": "Wed, 12 Sep 2018 18:01:37 GMT",
"version": "v1"
}
] | 2018-09-14 | [
[
"Anderson",
"David",
""
],
[
"Cappelletti",
"Daniele",
""
]
] | Reaction networks are mathematical models of interacting chemical species that are primarily used in biochemistry. There are two modeling regimes that are typically used, one of which is deterministic and one that is stochastic. In particular, the deterministic model consists of an autonomous system of differential equations, whereas the stochastic system is a continuous time Markov chain. Connections between the two modeling regimes have been studied since the seminal paper by Kurtz (1972), where the deterministic model is shown to be a limit of a properly rescaled stochastic model over compact time intervals. Further, more recent studies have connected the long-term behaviors of the two models when the reaction network satisfies certain graphical properties, such as weak reversibility and a deficiency of zero. These connections have led some to conjecture that a link between the long-term behavior of the two models exists, in some sense. In particular, one is tempted to believe that positive recurrence of all states for the stochastic model implies the existence of positive equilibria in the deterministic setting, and that boundary equilibria of the deterministic model imply the occurrence of an extinction event in the stochastic setting. We prove in this paper that these implications do not hold in general, even if restricting the analysis to networks that are bimolecular and that conserve the total mass. In particular, we disprove the implications in the special case of models that have absolute concentration robustness, thus answering in the negative a conjecture stated in the literature in 2014. |
2208.09338 | Christoph Kaleta | Helena U. Zacharias, Christoph Kaleta, Francois Cossais, Eva
Schaeffer, Henry Berndt, Lena Best, Thomas Dost, Svea Gl\"using, Mathieu
Groussin, Mathilde Poyet, Sebastian Heinzel, Corinna Bang, Leonard Siebert,
Tobias Demetrowitsch, Frank Leypoldt, Rainer Adelung, Thorsten Bartsch, Anja
Bosy-Westphal, Karin Schwarz, Daniela Berg | Microbiome and metabolome insights into the role of the
gastrointestinal-brain axis in neurodegenerative diseases: unveiling
potential therapeutic targets | null | null | null | null | q-bio.TO q-bio.MN q-bio.NC | http://creativecommons.org/licenses/by-sa/4.0/ | Due to the aging of the world population and westernization of lifestyles,
the prevalence of neurodegenerative diseases such as Alzheimer's disease (AD)
and Parkinson's disease (PD) is rapidly rising and is expected to put a strong
socioeconomic burden on health systems worldwide. Due to the limited success of
clinical trials of therapies against neurodegenerative diseases, research has
extended its scope to a systems medicine point of view, with a particular focus
on the gastrointestinal-brain axis as a potential main actor in disease
development and progression. Microbiome as well as metabolome studies along the
gastrointestinal-brain axis have already revealed important insights into
disease pathomechanisms. Both the microbiome and metabolome can be easily
manipulated by dietary and lifestyle interventions, and might thus offer novel,
readily available therapeutic options to prevent the onset as well as the
progression of PD and AD. This review summarizes our current knowledge on the
association between microbiota, metabolites, and neurodegeneration in light of
the gastrointestinal-brain axis. In this context, we also illustrate
state-of-the-art methods of microbiome and metabolome research as well as
metabolic modeling that facilitate the identification of disease
pathomechanisms. We conclude our review with therapeutic options to modulate
microbiome composition to prevent or delay neurodegeneration and illustrate
potential future research directions to fight PD and AD.
| [
{
"created": "Fri, 19 Aug 2022 13:39:23 GMT",
"version": "v1"
}
] | 2022-08-22 | [
[
"Zacharias",
"Helena U.",
""
],
[
"Kaleta",
"Christoph",
""
],
[
"Cossais",
"Francois",
""
],
[
"Schaeffer",
"Eva",
""
],
[
"Berndt",
"Henry",
""
],
[
"Best",
"Lena",
""
],
[
"Dost",
"Thomas",
""
],
[
  ... | Due to the aging of the world population and westernization of lifestyles, the prevalence of neurodegenerative diseases such as Alzheimer's disease (AD) and Parkinson's disease (PD) is rapidly rising and is expected to put a strong socioeconomic burden on health systems worldwide. Due to the limited success of clinical trials of therapies against neurodegenerative diseases, research has extended its scope to a systems medicine point of view, with a particular focus on the gastrointestinal-brain axis as a potential main actor in disease development and progression. Microbiome as well as metabolome studies along the gastrointestinal-brain axis have already revealed important insights into disease pathomechanisms. Both the microbiome and metabolome can be easily manipulated by dietary and lifestyle interventions, and might thus offer novel, readily available therapeutic options to prevent the onset as well as the progression of PD and AD. This review summarizes our current knowledge on the association between microbiota, metabolites, and neurodegeneration in light of the gastrointestinal-brain axis. In this context, we also illustrate state-of-the-art methods of microbiome and metabolome research as well as metabolic modeling that facilitate the identification of disease pathomechanisms. We conclude our review with therapeutic options to modulate microbiome composition to prevent or delay neurodegeneration and illustrate potential future research directions to fight PD and AD. |
1412.2587 | Laurent Noe | Laurent No\'e (LIFL, INRIA Lille - Nord Europe), Donald E. K. Martin | A Coverage Criterion for Spaced Seeds and its Applications to Support
Vector Machine String Kernels and k-Mer Distances | http://online.liebertpub.com/doi/abs/10.1089/cmb.2014.0173 | Journal of Computational Biology, Mary Ann Liebert, 2014, 21 (12),
pp.28 | 10.1089/cmb.2014.0173 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spaced seeds have been recently shown to not only detect more alignments, but
also to give a more accurate measure of phylogenetic distances (Boden et al.,
2013, Horwege et al., 2014, Leimeister et al., 2014), and to provide a lower
misclassification rate when used with Support Vector Machines (SVMs) (Onodera
and Shibuya, 2013). We confirm these two results by independent experiments,
and propose in this article to use a coverage criterion (Benson and Mak, 2008,
Martin, 2013, Martin and No{\'e}, 2014) to measure the seed efficiency in both
cases in order to design better seed patterns. We show first how this coverage
criterion can be directly measured by a full automaton-based approach. We then
illustrate how this criterion performs when compared with two other criteria
frequently used, namely the single-hit and multiple-hit criteria, through
correlation coefficients with the correct classification/the true distance. At
the end, for alignment-free distances, we propose an extension by adopting the
coverage criterion, show how it performs, and indicate how it can be
efficiently computed.
| [
{
"created": "Mon, 8 Dec 2014 14:43:56 GMT",
"version": "v1"
}
] | 2014-12-09 | [
[
"Noé",
"Laurent",
"",
"LIFL, INRIA Lille - Nord Europe"
],
[
"Martin",
"Donald E. K.",
""
]
] | Spaced seeds have been recently shown to not only detect more alignments, but also to give a more accurate measure of phylogenetic distances (Boden et al., 2013, Horwege et al., 2014, Leimeister et al., 2014), and to provide a lower misclassification rate when used with Support Vector Machines (SVMs) (Onodera and Shibuya, 2013). We confirm these two results by independent experiments, and propose in this article to use a coverage criterion (Benson and Mak, 2008, Martin, 2013, Martin and No{\'e}, 2014) to measure the seed efficiency in both cases in order to design better seed patterns. We show first how this coverage criterion can be directly measured by a full automaton-based approach. We then illustrate how this criterion performs when compared with two other criteria frequently used, namely the single-hit and multiple-hit criteria, through correlation coefficients with the correct classification/the true distance. At the end, for alignment-free distances, we propose an extension by adopting the coverage criterion, show how it performs, and indicate how it can be efficiently computed. |
0908.2378 | Yves-Henri Sanejouand | Francesco Piazza, Yves-Henri Sanejouand | Long-range energy transfer in proteins | 7 pages, 5 figures | Phys. Biol. 2009, vol.6, 046014 | 10.1088/1478-3975/6/4/046014 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Proteins are large and complex molecular machines. In order to perform their
function, most of them need energy, e.g. either in the form of a photon, like
in the case of the visual pigment rhodopsin, or through the breaking of a
chemical bond, as in the presence of adenosine triphosphate (ATP). Such energy,
in turn, has to be transmitted to specific locations, often several tens of
Angstroms away from where it is initially released. Here we show, within the
framework of a coarse-grained nonlinear network model, that energy in a protein
can jump from site to site with high yields, covering in many instances
remarkably large distances. Following single-site excitations, few specific
sites are targeted, systematically within the stiffest regions. Such energy
transfers mark the spontaneous formation of a localized mode of nonlinear
origin at the destination site, which acts as an efficient energy-accumulating
centre. Interestingly, yields are found to be optimum for excitation energies
in the range of biologically relevant ones.
| [
{
"created": "Mon, 17 Aug 2009 15:59:53 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Piazza",
"Francesco",
""
],
[
"Sanejouand",
"Yves-Henri",
""
]
] | Proteins are large and complex molecular machines. In order to perform their function, most of them need energy, e.g. either in the form of a photon, like in the case of the visual pigment rhodopsin, or through the breaking of a chemical bond, as in the presence of adenosine triphosphate (ATP). Such energy, in turn, has to be transmitted to specific locations, often several tens of Angstroms away from where it is initially released. Here we show, within the framework of a coarse-grained nonlinear network model, that energy in a protein can jump from site to site with high yields, covering in many instances remarkably large distances. Following single-site excitations, few specific sites are targeted, systematically within the stiffest regions. Such energy transfers mark the spontaneous formation of a localized mode of nonlinear origin at the destination site, which acts as an efficient energy-accumulating centre. Interestingly, yields are found to be optimum for excitation energies in the range of biologically relevant ones. |
1802.00753 | Adam Steel | Adam Steel, Cibu Thomas, Chris I Baker | Effect of time of day on reward circuitry. A discussion of Byrne et al.
2017 | 9 pages, 0 figures | null | null | null | q-bio.NC | http://creativecommons.org/publicdomain/zero/1.0/ | Byrne and colleagues present a paper on a timely topic with potentially
important results. However, we think that issues in the design and analysis
complicate the interpretation and limit the generalizability of the findings.
Specifically, the details of the small volume correction used in the primary
analysis are not adequately described and, moreover, the results do not appear
to be corroborated by the whole-brain analysis. In addition, the follow-up
multilevel modeling, which is fundamental to the conclusions of the paper, is
inherently circular thereby guaranteeing discovery of the reported effect.
Finally, the study does not control for other factors that vary over the course
of the day and are known to impact MRI measurements, and fails to link the
neural results directly to any relevant behavior.
| [
{
"created": "Fri, 2 Feb 2018 16:20:59 GMT",
"version": "v1"
}
] | 2018-02-05 | [
[
"Steel",
"Adam",
""
],
[
"Thomas",
"Cibu",
""
],
[
"Baker",
"Chris I",
""
]
] | Byrne and colleagues present a paper on a timely topic with potentially important results. However, we think that issues in the design and analysis complicate the interpretation and limit the generalizability of the findings. Specifically, the details of the small volume correction used in the primary analysis are not adequately described and, moreover, the results do not appear to be corroborated by the whole-brain analysis. In addition, the follow-up multilevel modeling, which is fundamental to the conclusions of the paper, is inherently circular thereby guaranteeing discovery of the reported effect. Finally, the study does not control for other factors that vary over the course of the day and are known to impact MRI measurements, and fails to link the neural results directly to any relevant behavior. |
2004.08730 | Jian Ma | Jian Ma | Predicting MMSE Score from Finger-Tapping Measurement | 11 pages, 4 figures, 2 tables | Proceedings of 2021 Chinese Intelligent Automation Conference.
Lecture Notes in Electrical Engineering, vol 801 | 10.1007/978-981-16-6372-7_34 | null | q-bio.QM cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dementia is a leading cause of disease among the elderly. Early diagnosis is
very important for the elderly living with dementia. In this paper, we propose
a method for dementia diagnosis by predicting MMSE score from finger-tapping
measurement with a machine learning pipeline. Based on measurement of finger
tapping movement, the pipeline is first to select finger-tapping attributes
with copula entropy and then to predict MMSE score from the selected attributes
with predictive models. Experiments on real-world data show that the predictive
models thus developed present good prediction performance. As a byproduct, the
associations between certain finger-tapping attributes ('Number of taps',
'Average of intervals', and 'Frequency of taps' of both hands of bimanual
in-phase task) and MMSE score are discovered with copula entropy, which may be
interpreted as the biological relationship between cognitive ability and motor
ability and therefore makes the predictive models explainable. The selected
finger-tapping attributes can be considered as dementia biomarkers.
| [
{
"created": "Sat, 18 Apr 2020 23:30:57 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Nov 2021 02:52:54 GMT",
"version": "v2"
}
] | 2021-11-17 | [
[
"Ma",
"Jian",
""
]
] | Dementia is a leading cause of disease among the elderly. Early diagnosis is very important for the elderly living with dementia. In this paper, we propose a method for dementia diagnosis by predicting MMSE score from finger-tapping measurement with a machine learning pipeline. Based on measurement of finger tapping movement, the pipeline is first to select finger-tapping attributes with copula entropy and then to predict MMSE score from the selected attributes with predictive models. Experiments on real-world data show that the predictive models thus developed present good prediction performance. As a byproduct, the associations between certain finger-tapping attributes ('Number of taps', 'Average of intervals', and 'Frequency of taps' of both hands of bimanual in-phase task) and MMSE score are discovered with copula entropy, which may be interpreted as the biological relationship between cognitive ability and motor ability and therefore makes the predictive models explainable. The selected finger-tapping attributes can be considered as dementia biomarkers. |
2107.06773 | Yan Ding | Yan Ding, Xiaoqian Jiang and Yejin Kim | Relational graph convolutional networks for predicting blood-brain
barrier penetration of drug molecules | null | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evaluating the blood-brain barrier (BBB) permeability of drug molecules is a
critical step in brain drug development. Traditional methods for the evaluation
require complicated in vitro or in vivo testing. Alternatively, in silico
predictions based on machine learning have proved to be a cost-efficient way to
complement the in vitro and in vivo methods. However, the performance of the
established models has been limited by their incapability of dealing with the
interactions between drugs and proteins, which play an important role in the
mechanism behind the BBB penetrating behaviors. To address this limitation, we
employed the relational graph convolutional network (RGCN) to handle the
drug-protein interactions as well as the properties of each individual drug.
The RGCN model achieved an overall accuracy of 0.872, an AUROC of 0.919 and an
AUPRC of 0.838 for the testing dataset with the drug-protein interactions and
the Mordred descriptors as the input. Introducing drug-drug similarity to
connect structurally similar drugs in the data graph further improved the
testing results, giving an overall accuracy of 0.876, an AUROC of 0.926 and an
AUPRC of 0.865. In particular, the RGCN model was found to greatly outperform
the LightGBM base model when evaluated with the drugs whose BBB penetration was
dependent on drug-protein interactions. Our model is expected to provide
high-confidence predictions of BBB permeability for drug prioritization in the
experimental screening of BBB-penetrating drugs.
| [
{
"created": "Sun, 4 Jul 2021 15:56:02 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Apr 2022 16:13:14 GMT",
"version": "v2"
}
] | 2022-04-07 | [
[
"Ding",
"Yan",
""
],
[
"Jiang",
"Xiaoqian",
""
],
[
"Kim",
"Yejin",
""
]
] | Evaluating the blood-brain barrier (BBB) permeability of drug molecules is a critical step in brain drug development. Traditional methods for the evaluation require complicated in vitro or in vivo testing. Alternatively, in silico predictions based on machine learning have proved to be a cost-efficient way to complement the in vitro and in vivo methods. However, the performance of the established models has been limited by their incapability of dealing with the interactions between drugs and proteins, which play an important role in the mechanism behind the BBB penetrating behaviors. To address this limitation, we employed the relational graph convolutional network (RGCN) to handle the drug-protein interactions as well as the properties of each individual drug. The RGCN model achieved an overall accuracy of 0.872, an AUROC of 0.919 and an AUPRC of 0.838 for the testing dataset with the drug-protein interactions and the Mordred descriptors as the input. Introducing drug-drug similarity to connect structurally similar drugs in the data graph further improved the testing results, giving an overall accuracy of 0.876, an AUROC of 0.926 and an AUPRC of 0.865. In particular, the RGCN model was found to greatly outperform the LightGBM base model when evaluated with the drugs whose BBB penetration was dependent on drug-protein interactions. Our model is expected to provide high-confidence predictions of BBB permeability for drug prioritization in the experimental screening of BBB-penetrating drugs. |
2008.10530 | Liam Jones | Liam Dowling Jones, Malik Magdon-Ismail, Laura Mersini-Houghton and
Steven Meshnick | A New Mathematical Model for Controlled Pandemics Like COVID-19 : AI
Implemented Predictions | null | null | null | null | q-bio.PE cs.LG physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a new mathematical model to explicitly capture the effects that
the three restriction measures: the lockdown date and duration, social
distancing and masks, and, schools and border closing, have in controlling the
spread of COVID-19 infections $i(r, t)$. Before restrictions were introduced,
the random spread of infections as described by the SEIR model grew
exponentially. The addition of control measures introduces a mixing of order
and disorder in the system's evolution which fall under a different
mathematical class of models that can eventually lead to critical phenomena. A
generic analytical solution is hard to obtain. We use machine learning to solve
the new equations for $i(r,t)$, the infections $i$ in any region $r$ at time
$t$ and derive predictions for the spread of infections over time as a function
of the strength of the specific measure taken and their duration. The machine
is trained in all of the COVID-19 published data for each region, county,
state, and country in the world. It utilizes optimization to learn the best-fit
values of the model's parameters from past data in each region in the world,
and it updates the predicted infections curves for any future restrictions that
may be added or relaxed anywhere. We hope this interdisciplinary effort, a new
mathematical model that predicts the impact of each measure in slowing down
infection spread combined with the solving power of machine learning, is a
useful tool in the fight against the current pandemic and potentially future
ones.
| [
{
"created": "Mon, 24 Aug 2020 16:07:00 GMT",
"version": "v1"
}
] | 2020-08-25 | [
[
"Jones",
"Liam Dowling",
""
],
[
"Magdon-Ismail",
"Malik",
""
],
[
"Mersini-Houghton",
"Laura",
""
],
[
"Meshnick",
"Steven",
""
]
] | We present a new mathematical model to explicitly capture the effects that the three restriction measures: the lockdown date and duration, social distancing and masks, and, schools and border closing, have in controlling the spread of COVID-19 infections $i(r, t)$. Before restrictions were introduced, the random spread of infections as described by the SEIR model grew exponentially. The addition of control measures introduces a mixing of order and disorder in the system's evolution which fall under a different mathematical class of models that can eventually lead to critical phenomena. A generic analytical solution is hard to obtain. We use machine learning to solve the new equations for $i(r,t)$, the infections $i$ in any region $r$ at time $t$ and derive predictions for the spread of infections over time as a function of the strength of the specific measure taken and their duration. The machine is trained in all of the COVID-19 published data for each region, county, state, and country in the world. It utilizes optimization to learn the best-fit values of the model's parameters from past data in each region in the world, and it updates the predicted infections curves for any future restrictions that may be added or relaxed anywhere. We hope this interdisciplinary effort, a new mathematical model that predicts the impact of each measure in slowing down infection spread combined with the solving power of machine learning, is a useful tool in the fight against the current pandemic and potentially future ones. |
2309.05102 | S\"oren Christensen | S\"oren Christensen and Jan Kallsen | Is Learning in Biological Neural Networks based on Stochastic Gradient
Descent? An analysis using stochastic processes | null | null | null | null | q-bio.NC cs.LG cs.NE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, there has been an intense debate about how learning in
biological neural networks (BNNs) differs from learning in artificial neural
networks. It is often argued that the updating of connections in the brain
relies only on local information, and therefore a stochastic gradient-descent
type optimization method cannot be used. In this paper, we study a stochastic
model for supervised learning in BNNs. We show that a (continuous) gradient
step occurs approximately when each learning opportunity is processed by many
local updates. This result suggests that stochastic gradient descent may indeed
play a role in optimizing BNNs.
| [
{
"created": "Sun, 10 Sep 2023 18:12:52 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Feb 2024 14:06:38 GMT",
"version": "v2"
},
{
"created": "Wed, 10 Apr 2024 15:02:35 GMT",
"version": "v3"
}
] | 2024-04-11 | [
[
"Christensen",
"Sören",
""
],
[
"Kallsen",
"Jan",
""
]
] | In recent years, there has been an intense debate about how learning in biological neural networks (BNNs) differs from learning in artificial neural networks. It is often argued that the updating of connections in the brain relies only on local information, and therefore a stochastic gradient-descent type optimization method cannot be used. In this paper, we study a stochastic model for supervised learning in BNNs. We show that a (continuous) gradient step occurs approximately when each learning opportunity is processed by many local updates. This result suggests that stochastic gradient descent may indeed play a role in optimizing BNNs. |
2112.03214 | Rolf Bader | Niko Plath, Rolf Bader | Piano Timbre Development Analysis using Machine Learning | null | null | null | null | q-bio.NC cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A data set of recorded single played tones of a concert grand piano is
investigated using Machine Learning (ML) on psychoacoustic timbre features. The
examined instrument has been recorded at two stages: firstly right after
manufacture and secondly after being played in a concert hall for one year. A
previous study [Plath2019] revealed that listeners clearly distinguished both
stages but no clear correlation with acoustics, signal processing tools or
verbalizations of perceived differences could be found. Using a Self-Organizing
Map (SOM), training single as well as double feature sets, it can be shown that
spectral flux is able to perfectly cluster the two stages. Sound Pressure Level
(SPL), roughness, and fractal correlation dimension (as a measure for initial
transient chaoticity) are furthermore able to order the keys with respect to
high and low notes. Combining spectral flux with the three other features in
double-feature training sets maintains stage clustering only for SPL and
fractal dimension, showing sub-clusters for both stages. These sub-clusters
point to a homogenization of SPL for stage 2 with respect to stage 1 and a
pronounced ordering and sub-clustering of key regions with respect to initial
transient chaoticity.
| [
{
"created": "Mon, 6 Dec 2021 18:14:52 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Dec 2021 07:49:11 GMT",
"version": "v2"
}
] | 2021-12-17 | [
[
"Plath",
"Niko",
""
],
[
"Bader",
"Rolf",
""
]
] | A data set of recorded single played tones of a concert grand piano is investigated using Machine Learning (ML) on psychoacoustic timbre features. The examined instrument has been recorded at two stages: firstly right after manufacture and secondly after being played in a concert hall for one year. A previous study [Plath2019] revealed that listeners clearly distinguished both stages but no clear correlation with acoustics, signal processing tools or verbalizations of perceived differences could be found. Using a Self-Organizing Map (SOM), training single as well as double feature sets, it can be shown that spectral flux is able to perfectly cluster the two stages. Sound Pressure Level (SPL), roughness, and fractal correlation dimension (as a measure for initial transient chaoticity) are furthermore able to order the keys with respect to high and low notes. Combining spectral flux with the three other features in double-feature training sets maintains stage clustering only for SPL and fractal dimension, showing sub-clusters for both stages. These sub-clusters point to a homogenization of SPL for stage 2 with respect to stage 1 and a pronounced ordering and sub-clustering of key regions with respect to initial transient chaoticity. |
2201.04230 | Stuart Hall Dr | Daniel J. Cole, Stuart J. Hall, Rachael Pirie | Riemannian Geometry and Molecular Surfaces I: Spectrum of the Laplacian | 21 pages, 10 figures | null | null | null | q-bio.QM math.DG | http://creativecommons.org/licenses/by/4.0/ | Ligand-based virtual screening aims to reduce the cost and duration of drug
discovery campaigns. Shape similarity can be used to screen large databases,
with the goal of predicting potential new hits by comparing to molecules with
known favourable properties. This paper presents the theory underpinning
RGMolSA, a new alignment-free and mesh-free surface-based molecular shape
descriptor derived from the mathematical theory of Riemannian geometry. The
treatment of a molecule as a series of intersecting spheres allows the
description of its surface geometry using the Riemannian metric, obtained by
considering the spectrum of the Laplacian. This gives a simple vector
descriptor constructed of the weighted surface area and eight non-zero
eigenvalues, which capture the surface shape. We demonstrate the potential of
our method by considering a series of PDE5 inhibitors that are known to have
similar shape as an initial test case. RGMolSA displays promise when compared
to existing shape descriptors and in its capability to handle different
molecular conformers. The code and data used to produce the results are
available via GitHub: https://github.com/RPirie96/RGMolSA.
| [
{
"created": "Mon, 10 Jan 2022 18:08:10 GMT",
"version": "v1"
}
] | 2022-01-13 | [
[
"Cole",
"Daniel J.",
""
],
[
"Hall",
"Stuart J.",
""
],
[
"Pirie",
"Rachael",
""
]
] | Ligand-based virtual screening aims to reduce the cost and duration of drug discovery campaigns. Shape similarity can be used to screen large databases, with the goal of predicting potential new hits by comparing to molecules with known favourable properties. This paper presents the theory underpinning RGMolSA, a new alignment-free and mesh-free surface-based molecular shape descriptor derived from the mathematical theory of Riemannian geometry. The treatment of a molecule as a series of intersecting spheres allows the description of its surface geometry using the Riemannian metric, obtained by considering the spectrum of the Laplacian. This gives a simple vector descriptor constructed of the weighted surface area and eight non-zero eigenvalues, which capture the surface shape. We demonstrate the potential of our method by considering a series of PDE5 inhibitors that are known to have similar shape as an initial test case. RGMolSA displays promise when compared to existing shape descriptors and in its capability to handle different molecular conformers. The code and data used to produce the results are available via GitHub: https://github.com/RPirie96/RGMolSA. |
1502.01685 | David Williams | C. David Williams and Andrew A. Biewener | Pigeons trade efficiency for stability in response to level of challenge
during confined flight | null | null | 10.1073/pnas.1407298112 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Individuals traversing challenging obstacles are faced with a decision: they
can adopt traversal strategies that minimally disrupt their normal locomotion
patterns or they can adopt strategies that substantially alter their gait,
conferring new advantages and disadvantages. We flew pigeons (Columba livia)
through an array of vertical obstacles in a flight arena, presenting them with
this choice. The pigeons selected either a strategy involving only a slight
pause in the normal wingbeat cycle, or a wings folded posture granting reduced
efficiency but greater stability should a misjudgment lead to collision. The
more stable but less efficient flight strategy was not employed to traverse
easy obstacles with wide gaps for passage, but came to dominate the postures
used as obstacle challenge increased with narrower gaps and there was a greater
chance of a collision. These results indicate that birds weigh potential
obstacle negotiation strategies and estimate task difficulty during locomotor
pattern selection.
| [
{
"created": "Thu, 5 Feb 2015 19:26:24 GMT",
"version": "v1"
}
] | 2015-08-19 | [
[
"Williams",
"C. David",
""
],
[
"Biewener",
"Andrew A.",
""
]
] | Individuals traversing challenging obstacles are faced with a decision: they can adopt traversal strategies that minimally disrupt their normal locomotion patterns or they can adopt strategies that substantially alter their gait, conferring new advantages and disadvantages. We flew pigeons (Columba livia) through an array of vertical obstacles in a flight arena, presenting them with this choice. The pigeons selected either a strategy involving only a slight pause in the normal wingbeat cycle, or a wings folded posture granting reduced efficiency but greater stability should a misjudgment lead to collision. The more stable but less efficient flight strategy was not employed to traverse easy obstacles with wide gaps for passage, but came to dominate the postures used as obstacle challenge increased with narrower gaps and there was a greater chance of a collision. These results indicate that birds weigh potential obstacle negotiation strategies and estimate task difficulty during locomotor pattern selection. |
1706.08584 | Stephanie Elizabeth Palmer | Audrey J. Sederberg, Jason N. MacLean, Stephanie E. Palmer | Learning to make external sensory stimulus predictions using internal
correlations in populations of neurons | 36 pages, 5 figures, 3 supplemental figures | Proc Natl Acad Sci USA January 30, 2018 115 (5) 1105-1110 | 10.1073/pnas.1710779115 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To compensate for sensory processing delays, the visual system must make
predictions to ensure timely and appropriate behaviors. Recent work has found
predictive information about the stimulus in neural populations early in vision
processing, starting in the retina. However, to utilize this information, cells
downstream must in turn be able to read out the predictive information from the
spiking activity of retinal ganglion cells. Here we investigate whether a
downstream cell could learn efficient encoding of predictive information in its
inputs in the absence of other instructive signals, from the correlations in
the inputs themselves. We simulate learning driven by spiking activity recorded
in salamander retina. We model a downstream cell as a binary neuron receiving a
small group of weighted inputs and quantify the predictive information between
activity in the binary neuron and future input. Input weights change according
to spike timing-dependent learning rules during a training period. We
characterize the readouts learned under spike timing-dependent learning rules,
finding that although the fixed points of learning dynamics are not associated
with absolute optimal readouts, they convey nearly all the information conveyed
by the optimal readout. Moreover, we find that learned perceptrons transmit
position and velocity information of a moving bar stimulus nearly as
efficiently as optimal perceptrons. We conclude that predictive information is,
in principle, readable from the perspective of downstream neurons in the
absence of other inputs, and consequently suggests that bottom-up prediction
may play an important role in sensory processing.
| [
{
"created": "Mon, 26 Jun 2017 20:32:40 GMT",
"version": "v1"
}
] | 2018-10-05 | [
[
"Sederberg",
"Audrey J.",
""
],
[
"MacLean",
"Jason N.",
""
],
[
"Palmer",
"Stephanie E.",
""
]
] | To compensate for sensory processing delays, the visual system must make predictions to ensure timely and appropriate behaviors. Recent work has found predictive information about the stimulus in neural populations early in vision processing, starting in the retina. However, to utilize this information, cells downstream must in turn be able to read out the predictive information from the spiking activity of retinal ganglion cells. Here we investigate whether a downstream cell could learn efficient encoding of predictive information in its inputs in the absence of other instructive signals, from the correlations in the inputs themselves. We simulate learning driven by spiking activity recorded in salamander retina. We model a downstream cell as a binary neuron receiving a small group of weighted inputs and quantify the predictive information between activity in the binary neuron and future input. Input weights change according to spike timing-dependent learning rules during a training period. We characterize the readouts learned under spike timing-dependent learning rules, finding that although the fixed points of learning dynamics are not associated with absolute optimal readouts, they convey nearly all the information conveyed by the optimal readout. Moreover, we find that learned perceptrons transmit position and velocity information of a moving bar stimulus nearly as efficiently as optimal perceptrons. We conclude that predictive information is, in principle, readable from the perspective of downstream neurons in the absence of other inputs, and consequently suggests that bottom-up prediction may play an important role in sensory processing. |
q-bio/0402024 | Hakho Lee | H. Lee, A.M. Purdon, R.M. Westervelt | Microelectromagnets for the manipulation of biological systems | 20 pages, 6 figures | null | null | null | q-bio.QM | null | Microelectromagnet devices, a ring trap and a matrix, were developed for the
microscopic control of biological systems. The ring trap is a circular Au wire
with an insulator on top. The matrix has two arrays of straight Au wires, one
array perpendicular to the other, that are separated and topped by insulating
layers. Microelectromagnets can produce strong magnetic fields to stably
manipulate magnetically tagged biological systems in a fluid. Moreover, by
controlling the currents flowing through the wires, a microelectromagnet matrix
can move a peak in the magnetic field magnitude continuously over the surface
of the device, generate multiple peaks simultaneously and control them
independently. These capabilities of a matrix can be used to trap, continuously
transport, assemble, separate and sort biological samples on micrometer length
scales. Combining microelectromagnets with microfluidic systems, chip-based
experimental systems can be realized for novel applications in biological and
biomedical studies.
| [
{
"created": "Tue, 10 Feb 2004 23:04:00 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Lee",
"H.",
""
],
[
"Purdon",
"A. M.",
""
],
[
"Westervelt",
"R. M.",
""
]
] | Microelectromagnet devices, a ring trap and a matrix, were developed for the microscopic control of biological systems. The ring trap is a circular Au wire with an insulator on top. The matrix has two arrays of straight Au wires, one array perpendicular to the other, that are separated and topped by insulating layers. Microelectromagnets can produce strong magnetic fields to stably manipulate magnetically tagged biological systems in a fluid. Moreover, by controlling the currents flowing through the wires, a microelectromagnet matrix can move a peak in the magnetic field magnitude continuously over the surface of the device, generate multiple peaks simultaneously and control them independently. These capabilities of a matrix can be used to trap, continuously transport, assemble, separate and sort biological samples on micrometer length scales. Combining microelectromagnets with microfluidic systems, chip-based experimental systems can be realized for novel applications in biological and biomedical studies. |
1608.03236 | Andrew D. Rutenberg | Andrew D Rutenberg, Aidan I Brown, and Laurent Kreplak | Uniform spatial distribution of collagen fibril radii within tendon
implies local activation of pC-collagen at individual fibrils | 10 pages, 2 figures | Physical Biology, v. 13 (2016) 046008 | 10.1088/1478-3975/13/4/046008 | null | q-bio.TO physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Collagen fibril cross-sectional radii show no systematic variation between
the interior and the periphery of fibril bundles, indicating an effectively
constant rate of collagen incorporation into fibrils throughout the bundle.
Such spatially homogeneous incorporation constrains the extracellular diffusion
of collagen precursors from sources at the bundle boundary to sinks at the
growing fibrils. With a coarse-grained diffusion equation we determine
stringent bounds, using parameters extracted from published experimental
measurements of tendon development. From the lack of new fibril formation after
birth, we further require that the concentration of diffusing precursors stays
below the critical concentration for fibril nucleation. We find that the
combination of the diffusive bound, which requires larger concentrations to
ensure homogeneous fibril radii, and lack of nucleation, which requires lower
concentrations, is only marginally consistent with fully-processed collagen
using conservative bounds. More realistic bounds may leave no consistent
concentrations. Therefore, we propose that unprocessed pC-collagen diffuses
from the bundle periphery followed by local C-proteinase activity and
subsequent collagen incorporation at each fibril. We suggest that C-proteinase
is localized within bundles, at fibril surfaces, during radial fibrillar
growth. The much greater critical concentration of pC-collagen, as compared to
fully-processed collagen, then provides broad consistency between homogeneous
fibril radii and the lack of fibril nucleation during fibril growth.
| [
{
"created": "Wed, 10 Aug 2016 16:48:45 GMT",
"version": "v1"
}
] | 2016-09-13 | [
[
"Rutenberg",
"Andrew D",
""
],
[
"Brown",
"Aidan I",
""
],
[
"Kreplak",
"Laurent",
""
]
] | Collagen fibril cross-sectional radii show no systematic variation between the interior and the periphery of fibril bundles, indicating an effectively constant rate of collagen incorporation into fibrils throughout the bundle. Such spatially homogeneous incorporation constrains the extracellular diffusion of collagen precursors from sources at the bundle boundary to sinks at the growing fibrils. With a coarse-grained diffusion equation we determine stringent bounds, using parameters extracted from published experimental measurements of tendon development. From the lack of new fibril formation after birth, we further require that the concentration of diffusing precursors stays below the critical concentration for fibril nucleation. We find that the combination of the diffusive bound, which requires larger concentrations to ensure homogeneous fibril radii, and lack of nucleation, which requires lower concentrations, is only marginally consistent with fully-processed collagen using conservative bounds. More realistic bounds may leave no consistent concentrations. Therefore, we propose that unprocessed pC-collagen diffuses from the bundle periphery followed by local C-proteinase activity and subsequent collagen incorporation at each fibril. We suggest that C-proteinase is localized within bundles, at fibril surfaces, during radial fibrillar growth. The much greater critical concentration of pC-collagen, as compared to fully-processed collagen, then provides broad consistency between homogeneous fibril radii and the lack of fibril nucleation during fibril growth. |
0903.1519 | David Lusseau | David Lusseau, Hal Whitehead, Shane Gero | Incorporating uncertainty into the study of animal social networks | 13 pages, 3 figures. published in Animal Behaviour | Lusseau D., Whitehead H. & Gero S. 2008. Incorporating uncertainty
into the study of animal social networks. Animal Behaviour 75(5): 1809-1815 | null | null | q-bio.PE physics.soc-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past decade network theory has been applied successfully to the
study of a variety of complex adaptive systems. However, the application of
these techniques to non-human social networks has several shortfalls. Firstly,
in most cases the strength of associations between individuals is disregarded.
Secondly, present techniques assume that observed interactions are invariant
values and not statistical samples taken from a population. These two
simplifications have weakened the value of these techniques when applied to the
study of animal social systems. Here we introduce a set of behaviorally
meaningful weighted network statistics that can be readily applied to matrices
of association indices between pairs of individual animals. We also introduce
bootstrapping techniques that estimate the effects of sampling uncertainty on
the network statistics and structure. Finally, we discuss the use of
randomisation tests to detect the departure of observed network statistics from
expected values under null hypotheses of random association given the sampling
structure of the data. We use two case studies to show that these techniques
provide invaluable insight in the dynamics of interactions within social units
and in the community structure of societies.
| [
{
"created": "Mon, 9 Mar 2009 10:51:27 GMT",
"version": "v1"
}
] | 2009-03-10 | [
[
"Lusseau",
"David",
""
],
[
"Whitehead",
"Hal",
""
],
[
"Gero",
"Shane",
""
]
] | Over the past decade network theory has been applied successfully to the study of a variety of complex adaptive systems. However, the application of these techniques to non-human social networks has several shortfalls. Firstly, in most cases the strength of associations between individuals is disregarded. Secondly, present techniques assume that observed interactions are invariant values and not statistical samples taken from a population. These two simplifications have weakened the value of these techniques when applied to the study of animal social systems. Here we introduce a set of behaviorally meaningful weighted network statistics that can be readily applied to matrices of association indices between pairs of individual animals. We also introduce bootstrapping techniques that estimate the effects of sampling uncertainty on the network statistics and structure. Finally, we discuss the use of randomisation tests to detect the departure of observed network statistics from expected values under null hypotheses of random association given the sampling structure of the data. We use two case studies to show that these techniques provide invaluable insight in the dynamics of interactions within social units and in the community structure of societies. |
1706.00133 | Zoe Tosi | Zoe Tosi, John Beggs | Cortical Circuits from Scratch: A Metaplastic Architecture for the
Emergence of Lognormal Firing Rates and Realistic Topology | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our current understanding of neuroplasticity paints a picture of a complex
interconnected system of dependent processes which shape cortical structure so
as to produce an efficient information processing system. Indeed, the
cooperation of these processes is associated with robust, stable, adaptable
networks with characteristic features of activity and synaptic topology.
However, combining the actions of these mechanisms in models has proven
exceptionally difficult and to date no model has been able to do so without
significant hand-tuning. Until such a model exists that can successfully
combine these mechanisms to form a stable circuit with realistic features, our
ability to study neuroplasticity in the context of (more realistic) dynamic
networks and potentially reap whatever rewards these features and mechanisms
imbue biological networks with is hindered. We introduce a model which combines
five known plasticity mechanisms that act on the network as well as a unique
metaplastic mechanism which acts on other plasticity mechanisms, to produce a
neural circuit model which is both stable and capable of broadly reproducing
many characteristic features of cortical networks. The MANA (metaplastic
artificial neural architecture) represents the first model of its kind in that
it is able to self-organize realistic, nonrandom features of cortical networks,
from a null initial state (no synaptic connectivity or neuronal
differentiation). In the same vein as models like the SORN (self-organizing
recurrent network) MANA represents further progress toward the reverse
engineering of the brain at the network level.
| [
{
"created": "Thu, 1 Jun 2017 00:32:02 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Mar 2018 21:34:00 GMT",
"version": "v2"
}
] | 2018-04-03 | [
[
"Tosi",
"Zoe",
""
],
[
"Beggs",
"John",
""
]
] | Our current understanding of neuroplasticity paints a picture of a complex interconnected system of dependent processes which shape cortical structure so as to produce an efficient information processing system. Indeed, the cooperation of these processes is associated with robust, stable, adaptable networks with characteristic features of activity and synaptic topology. However, combining the actions of these mechanisms in models has proven exceptionally difficult and to date no model has been able to do so without significant hand-tuning. Until such a model exists that can successfully combine these mechanisms to form a stable circuit with realistic features, our ability to study neuroplasticity in the context of (more realistic) dynamic networks and potentially reap whatever rewards these features and mechanisms imbue biological networks with is hindered. We introduce a model which combines five known plasticity mechanisms that act on the network as well as a unique metaplastic mechanism which acts on other plasticity mechanisms, to produce a neural circuit model which is both stable and capable of broadly reproducing many characteristic features of cortical networks. The MANA (metaplastic artificial neural architecture) represents the first model of its kind in that it is able to self-organize realistic, nonrandom features of cortical networks, from a null initial state (no synaptic connectivity or neuronal differentiation). In the same vein as models like the SORN (self-organizing recurrent network) MANA represents further progress toward the reverse engineering of the brain at the network level. |
1509.01381 | Davide Vergni | Stefano Berti, Massimo Cencini, Davide Vergni and Angelo Vulpiani | Extinction dynamics of a discrete population in an oasis | 9 pages, 7 figures | Phys. Rev. E 92, 012722 (2015) | 10.1103/PhysRevE.92.012722 | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the conditions ensuring the persistence of a population is an
issue of primary importance in population biology. The first theoretical
approach to the problem dates back to the 50's with the KiSS (after Kierstead,
Slobodkin and Skellam) model, namely a continuous reaction-diffusion equation
for a population growing on a patch of finite size $L$ surrounded by a deadly
environment with infinite mortality -- i.e. an oasis in a desert. The main
outcome of the model is that only patches above a critical size allow for
population persistence. Here, we introduce an individual-based analogue of the
KiSS model to investigate the effects of discreteness and demographic
stochasticity. In particular, we study the average time to extinction both
above and below the critical patch size of the continuous model and investigate
the quasi-stationary distribution of the number of individuals for patch sizes
above the critical threshold.
| [
{
"created": "Fri, 4 Sep 2015 09:34:58 GMT",
"version": "v1"
}
] | 2015-09-07 | [
[
"Berti",
"Stefano",
""
],
[
"Cencini",
"Massimo",
""
],
[
"Vergni",
"Davide",
""
],
[
"Vulpiani",
"Angelo",
""
]
] | Understanding the conditions ensuring the persistence of a population is an issue of primary importance in population biology. The first theoretical approach to the problem dates back to the 50's with the KiSS (after Kierstead, Slobodkin and Skellam) model, namely a continuous reaction-diffusion equation for a population growing on a patch of finite size $L$ surrounded by a deadly environment with infinite mortality -- i.e. an oasis in a desert. The main outcome of the model is that only patches above a critical size allow for population persistence. Here, we introduce an individual-based analogue of the KiSS model to investigate the effects of discreteness and demographic stochasticity. In particular, we study the average time to extinction both above and below the critical patch size of the continuous model and investigate the quasi-stationary distribution of the number of individuals for patch sizes above the critical threshold. |
2402.17842 | Christo Morison | Christo Morison, Ma{\l}gorzata Fic, Thomas Marcou, Javad
Mohamadichamgavi, Javier Redondo Ant\'on, Golsa Sayyar, Alexander Stein,
Frank Bastian, Hana Krakovsk\'a, Nandakishor Krishnan, Diogo L. Pires,
Mohammadreza Satouri, Frederik J. Thomsen, Kausutua Tjikundi and Wajid Ali | Public Goods Games in Disease Evolution and Spread | 12 pages, 2 figures, 3 tables | null | 10.5281/zenodo.10719143 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Cooperation arises in nature at every scale, from within cells to entire
ecosystems. In the framework of evolutionary game theory, public goods games
(PGGs) are used to analyse scenarios where individuals can cooperate or defect,
and can predict when and how these behaviours emerge. However, too few examples
motivate the transferal of knowledge from one application of PGGs to another.
Here, we focus on PGGs arising in disease modelling of cancer evolution and the
spread of infectious diseases. We use these two systems as case studies for the
development of the theory and applications of PGGs, which we succinctly review
and compare. We also posit that applications of evolutionary game theory to
decision-making in cancer, such as interactions between a clinician and a
tumour, can learn from the PGGs studied in epidemiology, where cooperative
behaviours such as quarantine and vaccination compliance have been more
thoroughly investigated. Furthermore, instances of cellular-level cooperation
observed in cancers point to a corresponding area of potential interest for
modellers of other diseases, be they viral, bacterial or otherwise. We aim to
demonstrate the breadth of applicability of PGGs in disease modelling while
providing a starting point for those interested in quantifying cooperation
arising in healthcare.
| [
{
"created": "Tue, 27 Feb 2024 19:09:49 GMT",
"version": "v1"
}
] | 2024-02-29 | [
[
"Morison",
"Christo",
""
],
[
"Fic",
"Małgorzata",
""
],
[
"Marcou",
"Thomas",
""
],
[
"Mohamadichamgavi",
"Javad",
""
],
[
"Antón",
"Javier Redondo",
""
],
[
"Sayyar",
"Golsa",
""
],
[
"Stein",
"Alexander",
... | Cooperation arises in nature at every scale, from within cells to entire ecosystems. In the framework of evolutionary game theory, public goods games (PGGs) are used to analyse scenarios where individuals can cooperate or defect, and can predict when and how these behaviours emerge. However, too few examples motivate the transferal of knowledge from one application of PGGs to another. Here, we focus on PGGs arising in disease modelling of cancer evolution and the spread of infectious diseases. We use these two systems as case studies for the development of the theory and applications of PGGs, which we succinctly review and compare. We also posit that applications of evolutionary game theory to decision-making in cancer, such as interactions between a clinician and a tumour, can learn from the PGGs studied in epidemiology, where cooperative behaviours such as quarantine and vaccination compliance have been more thoroughly investigated. Furthermore, instances of cellular-level cooperation observed in cancers point to a corresponding area of potential interest for modellers of other diseases, be they viral, bacterial or otherwise. We aim to demonstrate the breadth of applicability of PGGs in disease modelling while providing a starting point for those interested in quantifying cooperation arising in healthcare. |
1511.01056 | Sergio Gabriel Quesada Acuna | Juli\'an Monge-N\'ajera | Reproductive trends, habitat type and body characteristics in velvet
worms (Onychophora) | 12 pages, 2 figures | Rev. Biol.Trop.42 L3: 611-622. 1994 | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A quantitative analysis of several onychophoran characteristics shows that in
habitats with lower rain levels females reproduce at an older age, are more
fecund and tend to have reproductive diapause where rain does not exceed a mean
of 200 cm/year. These habitat characteristics are associated with the southern
family Peripatopsidae. Sex ratio and parental investment per young are not
correlated with general environmental conditions. A comparison of 72 species
showed that larger species are often more variable in morphometry, but species
with the longest females do not always have the longest males. Larger Peripatus
acacioi females (Peripatidae: Brazil) produce more and heavier offspring.
Intrapopulation morphology was studied in 12 peripatid species for which
samples of between 11 and 798 individuals were available. In general, within
populations the females are more variable than males in length and weight, but
similarly variable in the number of legs. The number of legs has a low
variability (1.73-2.45%), length is intermediate (22.4-25.3%) and weight is
very variable (49.41-75.17%). When sexes are compared within a population,
females can have 1.4-8.9% more leg pairs, and be 47-63% heavier and 26%
longer than males.
| [
{
"created": "Tue, 3 Nov 2015 19:59:06 GMT",
"version": "v1"
}
] | 2015-11-04 | [
[
"Monge-Nájera",
"Julián",
""
]
] | A quantitative analysis of several onychophoran characteristics shows that in habitats with lower rain levels females reproduce at an older age, are more fecund and tend to have reproductive diapause where rain does not exceed a mean of 200 cm/year. These habitat characteristics are associated with the southern family Peripatopsidae. Sex ratio and parental investment per young are not correlated with general environmental conditions. A comparison of 72 species showed that larger species are often more variable in morphometry, but species with the longest females do not always have the longest males. Larger Peripatus acacioi females (Peripatidae: Brazil) produce more and heavier offspring. Intrapopulation morphology was studied in 12 peripatid species for which samples of between 11 and 798 individuals were available. In general, within populations the females are more variable than males in length and weight, but similarly variable in the number of legs. The number of legs has a low variability (1.73-2.45%), length is intermediate (22.4-25.3%) and weight is very variable (49.41-75.17%). When sexes are compared within a population, females can have 1.4-8.9% more leg pairs, and be 47-63% heavier and 26% longer than males. |
1404.4116 | Caterina La Porta AM | Alessandro Taloni, Alexander A. Alemi, Emilio Ciusani, James P.
Sethna, Stefano Zapperi, Caterina A. M. La Porta | Mechanical Properties of Growing Melanocytic Nevi and the Progression to
Melanoma | null | null | 10.1371/journal.pone.0094229 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Melanocytic nevi are benign proliferations that sometimes turn into malignant
melanoma in a way that is still unclear from the biochemical and genetic point
of view. Diagnostic and prognostic tools are then mostly based on dermoscopic
examination and morphological analysis of histological tissues. To investigate
the role of mechanics and geometry in the morphological dynamics of melanocytic
nevi, we study a computational model for cell proliferation in a layered
non-linear elastic tissue. Numerical simulations suggest that the morphology of
the nevus is correlated to the initial location of the proliferating cell
starting the growth process and to the mechanical properties of the tissue. Our
results also support that melanocytes are subject to compressive stresses that
fluctuate widely in the nevus and depend on the growth stage. Numerical
simulations of cells in the epidermis releasing matrix metalloproteinases
display an accelerated invasion of the dermis by destroying the basal membrane.
Moreover, we suggest experimentally that osmotic stress and collagen inhibit
growth in primary melanoma cells while the effect is much weaker in metastatic
cells. Knowing that morphological features of nevi might also reflect geometry
and mechanics rather than malignancy could be relevant for diagnostic purposes.
| [
{
"created": "Wed, 16 Apr 2014 00:27:49 GMT",
"version": "v1"
}
] | 2014-04-17 | [
[
"Taloni",
"Alessandro",
""
],
[
"Alemi",
"Alexander A.",
""
],
[
"Ciusani",
"Emilio",
""
],
[
"Sethna",
"James P.",
""
],
[
"Zapperi",
"Stefano",
""
],
[
"La Porta",
"Caterina A. M.",
""
]
] | Melanocytic nevi are benign proliferations that sometimes turn into malignant melanoma in a way that is still unclear from the biochemical and genetic point of view. Diagnostic and prognostic tools are then mostly based on dermoscopic examination and morphological analysis of histological tissues. To investigate the role of mechanics and geometry in the morphological dynamics of melanocytic nevi, we study a computational model for cell proliferation in a layered non-linear elastic tissue. Numerical simulations suggest that the morphology of the nevus is correlated to the initial location of the proliferating cell starting the growth process and to the mechanical properties of the tissue. Our results also support that melanocytes are subject to compressive stresses that fluctuate widely in the nevus and depend on the growth stage. Numerical simulations of cells in the epidermis releasing matrix metalloproteinases display an accelerated invasion of the dermis by destroying the basal membrane. Moreover, we suggest experimentally that osmotic stress and collagen inhibit growth in primary melanoma cells while the effect is much weaker in metastatic cells. Knowing that morphological features of nevi might also reflect geometry and mechanics rather than malignancy could be relevant for diagnostic purposes. |
2212.01027 | Rahmad Akbar | Chung Yuen Khew, Rahmad Akbar, Norfarhan Mohd. Assaad | Progress and Challenges for the Application of Machine Learning for
Neglected Tropical Diseases | null | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Neglected tropical diseases (NTDs) continue to affect the livelihood of
individuals in countries in the Southeast Asia and Western Pacific region.
These diseases have been long existing and have caused devastating health
problems and economic decline to people in low- and middle-income (developing)
countries. An estimated 1.7 billion of the world's population suffer one or
more NTDs annually; this puts approximately one in five individuals at risk for
NTDs. In addition to health and social impact, NTDs inflict significant
financial burden to patients, close relatives, and are responsible for billions
of dollars lost in revenue from reduced labor productivity in developing
countries alone. There is an urgent need to improve the control and
eradication or elimination efforts towards NTDs. This can be achieved by
utilizing machine learning tools to better the surveillance, prediction and
detection program, and combat NTDs through the discovery of new therapeutics
against these pathogens. This review surveys the current application of machine
learning tools for NTDs and the challenges to elevate the state-of-the-art of
NTDs surveillance, management, and treatment.
| [
{
"created": "Fri, 2 Dec 2022 08:48:22 GMT",
"version": "v1"
}
] | 2022-12-05 | [
[
"Khew",
"Chung Yuen",
""
],
[
"Akbar",
"Rahmad",
""
],
[
"Assaad",
"Norfarhan Mohd.",
""
]
] | Neglected tropical diseases (NTDs) continue to affect the livelihood of individuals in countries in the Southeast Asia and Western Pacific region. These diseases have been long existing and have caused devastating health problems and economic decline to people in low- and middle-income (developing) countries. An estimated 1.7 billion of the world's population suffer one or more NTDs annually; this puts approximately one in five individuals at risk for NTDs. In addition to health and social impact, NTDs inflict significant financial burden to patients, close relatives, and are responsible for billions of dollars lost in revenue from reduced labor productivity in developing countries alone. There is an urgent need to improve the control and eradication or elimination efforts towards NTDs. This can be achieved by utilizing machine learning tools to better the surveillance, prediction and detection program, and combat NTDs through the discovery of new therapeutics against these pathogens. This review surveys the current application of machine learning tools for NTDs and the challenges to elevate the state-of-the-art of NTDs surveillance, management, and treatment. |
0903.3017 | Dietrich Stauffer | S. Cebrat and D. Stauffer | Influence of a small fraction of individuals with enhanced mutations on
a population genetic pool | 10 pages including 6 figures; draft | null | 10.1142/S0129183109014333 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computer simulations of the Penna ageing model suggest that even a small
fraction of births with an enhanced number of new mutations can negatively
influence the whole population.
| [
{
"created": "Tue, 17 Mar 2009 18:14:45 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Cebrat",
"S.",
""
],
[
"Stauffer",
"D.",
""
]
] | Computer simulations of the Penna ageing model suggest that even a small fraction of births with an enhanced number of new mutations can negatively influence the whole population. |
0901.4362 | Andrew Pomerance | Andrew Pomerance, Edward Ott, Michelle Girvan, and Wolfgang Losert | The effect of network topology on the stability of discrete state models
of genetic control | 25 pages, 4 figures; added supplementary information, fixed typos and
figure, reformatted | null | 10.1073/pnas.0900142106 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Boolean networks have been proposed as potentially useful models for genetic
control. An important aspect of these networks is the stability of their
dynamics in response to small perturbations. Previous approaches to stability
have assumed uncorrelated random network structure. Real gene networks
typically have nontrivial topology significantly different from the random
network paradigm. In order to address such situations, we present a general
method for determining the stability of large Boolean networks of any specified
network topology and predicting their steady-state behavior in response to
small perturbations. Additionally, we generalize to the case where individual
genes have a distribution of `expression biases,' and we consider
non-synchronous update, as well as extension of our method to non-Boolean
models in which there are more than two possible gene states. We find that
stability is governed by the maximum eigenvalue of a modified adjacency matrix,
and we test this result by comparison with numerical simulations. We also
discuss the possible application of our work to experimentally inferred gene
networks.
| [
{
"created": "Tue, 27 Jan 2009 21:59:55 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Feb 2009 19:30:21 GMT",
"version": "v2"
}
] | 2015-05-13 | [
[
"Pomerance",
"Andrew",
""
],
[
"Ott",
"Edward",
""
],
[
"Girvan",
"Michelle",
""
],
[
"Losert",
"Wolfgang",
""
]
] | Boolean networks have been proposed as potentially useful models for genetic control. An important aspect of these networks is the stability of their dynamics in response to small perturbations. Previous approaches to stability have assumed uncorrelated random network structure. Real gene networks typically have nontrivial topology significantly different from the random network paradigm. In order to address such situations, we present a general method for determining the stability of large Boolean networks of any specified network topology and predicting their steady-state behavior in response to small perturbations. Additionally, we generalize to the case where individual genes have a distribution of `expression biases,' and we consider non-synchronous update, as well as extension of our method to non-Boolean models in which there are more than two possible gene states. We find that stability is governed by the maximum eigenvalue of a modified adjacency matrix, and we test this result by comparison with numerical simulations. We also discuss the possible application of our work to experimentally inferred gene networks. |
1910.03129 | Jennifer Hsiao | Jennifer Hsiao, Abigail L.S. Swann, Soo-Hyung Kim | Maize yield under a changing climate: The hidden role of vapor pressure
deficit | null | Agric. For. Meteorol. 279, 107692 (2019) | 10.1016/j.agrformet.2019.107692 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temperatures over the next century are expected to rise to levels detrimental
to crop growth and yield. As the atmosphere warms without additional water
vapor input, vapor pressure deficit (VPD) increases as well. Increased
temperatures and accompanied elevated VPD levels can both lead to negative
impacts on crop yield. The independent importance of VPD, however, is often
neglected or conflated with that from temperature due to a tight correlation
between the two climate factors. We used a coupled process-based crop (MAIZSIM)
and soil (2DSOIL) model to gain a mechanistic understanding of the independent
roles temperature and VPD play in crop yield projections, as well as their
interactions with rising CO2 levels and changing precipitation patterns. We
found that by separating out the VPD effect from rising temperatures, VPD
increases had a greater negative impact on yield compared to that from warming.
The negative impact of these two factors varied with precipitation levels and
influenced yield through separate mechanisms. Warmer temperatures caused yield
loss mainly through shortening the growing season, while elevated VPD increased
water loss and triggered several water stress responses such as reduced
photosynthetic rates, lowered leaf area development, and shortened growing
season length. Elevated CO2 concentrations partially alleviated yield loss
under warming or increased VPD conditions through water savings, but the impact
level varied with precipitation levels and was most pronounced under drier
conditions. These results demonstrate the key role VPD plays in crop growth and
yield, displaying a magnitude of impact comparable to temperature and CO2. A
mechanistic understanding of the function of VPD and its relation with other
climate factors and management practices is critical to improving crop yield
projections under a changing climate.
| [
{
"created": "Mon, 7 Oct 2019 23:24:14 GMT",
"version": "v1"
}
] | 2019-10-09 | [
[
"Hsiao",
"Jennifer",
""
],
[
"Swann",
"Abigail L. S.",
""
],
[
"Kim",
"Soo-Hyung",
""
]
] | Temperatures over the next century are expected to rise to levels detrimental to crop growth and yield. As the atmosphere warms without additional water vapor input, vapor pressure deficit (VPD) increases as well. Increased temperatures and accompanied elevated VPD levels can both lead to negative impacts on crop yield. The independent importance of VPD, however, is often neglected or conflated with that from temperature due to a tight correlation between the two climate factors. We used a coupled process-based crop (MAIZSIM) and soil (2DSOIL) model to gain a mechanistic understanding of the independent roles temperature and VPD play in crop yield projections, as well as their interactions with rising CO2 levels and changing precipitation patterns. We found that by separating out the VPD effect from rising temperatures, VPD increases had a greater negative impact on yield compared to that from warming. The negative impact of these two factors varied with precipitation levels and influenced yield through separate mechanisms. Warmer temperatures caused yield loss mainly through shortening the growing season, while elevated VPD increased water loss and triggered several water stress responses such as reduced photosynthetic rates, lowered leaf area development, and shortened growing season length. Elevated CO2 concentrations partially alleviated yield loss under warming or increased VPD conditions through water savings, but the impact level varied with precipitation levels and was most pronounced under drier conditions. These results demonstrate the key role VPD plays in crop growth and yield, displaying a magnitude of impact comparable to temperature and CO2. A mechanistic understanding of the function of VPD and its relation with other climate factors and management practices is critical to improving crop yield projections under a changing climate. |
2007.03367 | Franz Paul Spitzner | F. P. Spitzner, J. Dehning, J. Wilting, A. Hagemann, J. P. Neto, J.
Zierenberg, V. Priesemann | MR. Estimator, a toolbox to determine intrinsic timescales from
subsampled spiking activity | null | PLOS ONE 16, e0249447 (2021) | 10.1371/journal.pone.0249447 | null | q-bio.NC physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here we present our Python toolbox "MR. Estimator" to reliably estimate the
intrinsic timescale from electrophysiological recordings of heavily subsampled
systems. Originally intended for the analysis of time series from neuronal
spiking activity, our toolbox is applicable to a wide range of systems where
subsampling -- the difficulty to observe the whole system in full detail --
limits our capability to record. Applications range from epidemic spreading to
any system that can be represented by an autoregressive process.
In the context of neuroscience, the intrinsic timescale can be thought of as
the duration over which any perturbation reverberates within the network; it
has been used as a key observable to investigate a functional hierarchy across
the primate cortex and serves as a measure of working memory. It is also a
proxy for the distance to criticality and quantifies a system's dynamic working
point.
| [
{
"created": "Tue, 7 Jul 2020 11:57:31 GMT",
"version": "v1"
},
{
"created": "Fri, 7 May 2021 18:35:11 GMT",
"version": "v2"
}
] | 2021-05-11 | [
[
"Spitzner",
"F. P.",
""
],
[
"Dehning",
"J.",
""
],
[
"Wilting",
"J.",
""
],
[
"Hagemann",
"A.",
""
],
[
"Neto",
"J. P.",
""
],
[
"Zierenberg",
"J.",
""
],
[
"Priesemann",
"V.",
""
]
] | Here we present our Python toolbox "MR. Estimator" to reliably estimate the intrinsic timescale from electrophysiological recordings of heavily subsampled systems. Originally intended for the analysis of time series from neuronal spiking activity, our toolbox is applicable to a wide range of systems where subsampling -- the difficulty to observe the whole system in full detail -- limits our capability to record. Applications range from epidemic spreading to any system that can be represented by an autoregressive process. In the context of neuroscience, the intrinsic timescale can be thought of as the duration over which any perturbation reverberates within the network; it has been used as a key observable to investigate a functional hierarchy across the primate cortex and serves as a measure of working memory. It is also a proxy for the distance to criticality and quantifies a system's dynamic working point. |
1505.02521 | Adriana Supady | Adriana Supady, Volker Blum, Carsten Baldauf | First-principles molecular structure search with a genetic algorithm | null | J. Chem. Inf. Model. 55 (2015) 2338-2348 | 10.1021/acs.jcim.5b00243 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The identification of low-energy conformers for a given molecule is a
fundamental problem in computational chemistry and cheminformatics. We assess
here a conformer search that employs a genetic algorithm for sampling the
low-energy segment of the conformation space of molecules. The algorithm is
designed to work with first-principles methods, facilitated by the
incorporation of local optimization and blacklisting conformers to prevent
repeated evaluations of very similar solutions. The aim of the search is not
only to find the global minimum, but to predict all conformers within an energy
window above the global minimum. The performance of the search strategy is: (i)
evaluated for a reference data set extracted from a database with amino acid
dipeptide conformers obtained by an extensive combined force field and
first-principles search and (ii) compared to the performance of a systematic
search and a random conformer generator for the example of a drug-like ligand
with 43 atoms, 8 rotatable bonds and 1 cis/trans bond.
| [
{
"created": "Mon, 11 May 2015 08:27:17 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Oct 2015 14:05:13 GMT",
"version": "v2"
}
] | 2015-11-24 | [
[
"Supady",
"Adriana",
""
],
[
"Blum",
"Volker",
""
],
[
"Baldauf",
"Carsten",
""
]
] | The identification of low-energy conformers for a given molecule is a fundamental problem in computational chemistry and cheminformatics. We assess here a conformer search that employs a genetic algorithm for sampling the low-energy segment of the conformation space of molecules. The algorithm is designed to work with first-principles methods, facilitated by the incorporation of local optimization and blacklisting conformers to prevent repeated evaluations of very similar solutions. The aim of the search is not only to find the global minimum, but to predict all conformers within an energy window above the global minimum. The performance of the search strategy is: (i) evaluated for a reference data set extracted from a database with amino acid dipeptide conformers obtained by an extensive combined force field and first-principles search and (ii) compared to the performance of a systematic search and a random conformer generator for the example of a drug-like ligand with 43 atoms, 8 rotatable bonds and 1 cis/trans bond. |
2208.04236 | Iryna Polishchuk | Daniela Dobrynin, Iryna Polishchuk, Boaz Pokroy | A Comparison Study of the Detection Limit of Omicron SARS-CoV-2
Nucleocapsid by various Rapid Antigen Tests | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | Since the first case of COVID-19 disease in Wuhan in December 2019, there has
been a worldwide struggle to reduce the transmission of the severe acute
respiratory syndrome coronavirus SARS-CoV-2. Many countries worldwide decided to impose local
lockdowns in order to reduce person-to-person interactions, masks became
obligatory especially in closed spaces, and there was a general requirement for
social distance. However, the most efficient method to reduce continuing
spreading of infection among the population, and in the meantime maintain a
regular daily life, is early detection of infected contagious people. Up to
now, the most reliable method for SARS-CoV-2 detection is reverse-transcriptase
PCR test (RT-PCR). It is possible to detect the virus even if there is only one
RNA strand in the sample, and run hundreds of samples simultaneously. This
method has a few disadvantages, such as high cost, long processing time, the need
for medical laboratories and skilled staff to perform the test, and the major
flaw: the lack of an appropriate number of available tests. The recently prominent
Omicron variant (B.1.1.529) and its derivatives have caused a tremendous
increase in the number of infected people due to its enhanced transmissibility.
This all emphasizes the high demand for easy-to-use, cheap and available
detection tests.
| [
{
"created": "Fri, 29 Jul 2022 14:46:07 GMT",
"version": "v1"
}
] | 2022-08-09 | [
[
"Dobrynin",
"Daniela",
""
],
[
"Polishchuk",
"Iryna",
""
],
[
"Pokroy",
"Boaz",
""
]
] | Since the first case of COVID-19 disease in Wuhan in December 2019, there has been a worldwide struggle to reduce the transmission of the severe acute respiratory syndrome coronavirus SARS-CoV-2. Many countries worldwide decided to impose local lockdowns in order to reduce person-to-person interactions, masks became obligatory especially in closed spaces, and there was a general requirement for social distance. However, the most efficient method to reduce continuing spreading of infection among the population, and in the meantime maintain a regular daily life, is early detection of infected contagious people. Up to now, the most reliable method for SARS-CoV-2 detection is reverse-transcriptase PCR test (RT-PCR). It is possible to detect the virus even if there is only one RNA strand in the sample, and run hundreds of samples simultaneously. This method has a few disadvantages, such as high cost, long processing time, the need for medical laboratories and skilled staff to perform the test, and the major flaw: the lack of an appropriate number of available tests. The recently prominent Omicron variant (B.1.1.529) and its derivatives have caused a tremendous increase in the number of infected people due to its enhanced transmissibility. This all emphasizes the high demand for easy-to-use, cheap and available detection tests. |
2302.00826 | Norio Yoshida | Norio Yoshida | Existence of exact solution of the
Susceptible-Exposed-Infectious-Recovered (SEIR) epidemic model | 40 pages, 9 figures | J. Differential Equations 355(2023), 103-143 | 10.1016/j.jde.2023.01.017 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Exact solutions of the SEIR epidemic model are derived, and various
properties of solutions are obtained directly from the exact solution. In this
paper Abel differential equations play an important role in establishing the
exact solution of the SEIR differential system; in particular, the number of
infected individuals can be represented in a simple form by using a positive
solution of an Abel differential equation. It is shown that the parametric form
of the exact solution satisfies a linear differential system including a
positive solution of an Abel differential equation.
| [
{
"created": "Thu, 2 Feb 2023 02:20:38 GMT",
"version": "v1"
}
] | 2023-02-03 | [
[
"Yoshida",
"Norio",
""
]
] | Exact solutions of the SEIR epidemic model are derived, and various properties of solutions are obtained directly from the exact solution. In this paper Abel differential equations play an important role in establishing the exact solution of SEIR differential system, in particular the number of infected individuals can be represented in a simple form by using a positive solution of an Abel differential equation. It is shown that the parametric form of the exact solution satisfies some linear differential system including a positive solution of an Abel differential equation. |
2112.04327 | Liran Szlak | Liran Szlak, Kristoffer Aberg, Rony Paz | Suboptimal and trait-like reinforcement learning strategies correlate
with midbrain encoding of prediction errors | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | During probabilistic learning organisms often apply a sub-optimal
"probability-matching" strategy, where selection rates match reward
probabilities, rather than engaging in the optimal "maximization" strategy,
where the option with the highest reward probability is always selected.
Despite decades of research, the mechanisms contributing to
probability-matching are still under debate, and particularly noteworthy is
that no differences between probability-matching and maximization strategies
have been reported at the level of the brain. Here, we provide theoretical
proof for a computational model that explains the complete range of behaviors
between pure maximization and pure probability-matching. Fitting this model to
behavior of 60 participants performing a probabilistic reinforcement learning
task during fMRI scanning confirmed the model-derived prediction that
probability-matching relates to an increased integration of negative outcomes
during learning, as indicated by a stronger coupling between midbrain BOLD
signal and negative prediction errors. Because the degree of
probability-matching was consistent within an individual across nine different
conditions, our results further suggest that the tendency to express a
particular learning strategy is a trait-like feature of an individual.
| [
{
"created": "Wed, 8 Dec 2021 15:15:02 GMT",
"version": "v1"
}
] | 2021-12-09 | [
[
"Szlak",
"Liran",
""
],
[
"Aberg",
"Kristoffer",
""
],
[
"Paz",
"Rony",
""
]
] | During probabilistic learning organisms often apply a sub-optimal "probability-matching" strategy, where selection rates match reward probabilities, rather than engaging in the optimal "maximization" strategy, where the option with the highest reward probability is always selected. Despite decades of research, the mechanisms contributing to probability-matching are still under debate, and particularly noteworthy is that no differences between probability-matching and maximization strategies have been reported at the level of the brain. Here, we provide theoretical proof for a computational model that explains the complete range of behaviors between pure maximization and pure probability-matching. Fitting this model to behavior of 60 participants performing a probabilistic reinforcement learning task during fMRI scanning confirmed the model-derived prediction that probability-matching relates to an increased integration of negative outcomes during learning, as indicated by a stronger coupling between midbrain BOLD signal and negative prediction errors. Because the degree of probability-matching was consistent within an individual across nine different conditions, our results further suggest that the tendency to express a particular learning strategy is a trait-like feature of an individual. |
1210.7508 | Nico Riedel | Nico Riedel and Johannes Berg | A statistical mechanics approach to the sample deconvolution problem | 8 pages, 4 figures | Phys. Rev. E 87, 042715 (2013) | 10.1103/PhysRevE.87.042715 | null | q-bio.QM cond-mat.dis-nn physics.data-an stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a multicellular organism different cell types express a gene in different
amounts. Samples from which gene expression levels can be measured typically
contain a mixture of different cell types, the resulting measurements thus give
only averages over the different cell types present. Based on fluctuations in
the mixture proportions from sample to sample it is in principle possible to
reconstruct the underlying expression levels of each cell type: to deconvolute
the sample. We use a statistical mechanics approach to the problem of
deconvoluting such partial concentrations from mixed samples, give analytical
results for when and how well samples can be unmixed, and suggest an algorithm
for sample deconvolution.
| [
{
"created": "Sun, 28 Oct 2012 20:43:03 GMT",
"version": "v1"
}
] | 2017-08-09 | [
[
"Riedel",
"Nico",
""
],
[
"Berg",
"Johannes",
""
]
] | In a multicellular organism different cell types express a gene in different amounts. Samples from which gene expression levels can be measured typically contain a mixture of different cell types, the resulting measurements thus give only averages over the different cell types present. Based on fluctuations in the mixture proportions from sample to sample it is in principle possible to reconstruct the underlying expression levels of each cell type: to deconvolute the sample. We use a statistical mechanics approach to the problem of deconvoluting such partial concentrations from mixed samples, give analytical results for when and how well samples can be unmixed, and suggest an algorithm for sample deconvolution. |
2312.02187 | Albert Kao | Albert B. Kao, Shoubhik Banerjee, Fritz Francisco, Andrew M. Berdahl | Timing decisions as the next frontier for collective intelligence | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | The past decade has witnessed a dramatically growing interest in collective
intelligence - the phenomenon of groups having an ability to make more accurate
decisions than isolated individuals. However, the vast majority of studies to
date have focused, either explicitly or implicitly, on spatial decisions (e.g.,
potential nest sites, food patches, or migration directions). We highlight the
equally important, but severely understudied, realm of temporal collective
decision-making, i.e., decisions about when to perform an action. We argue that
temporal collective decision making is likely to differ from spatial decision
making in several crucial ways and probably involves different mechanisms,
model predictions, and experimental outcomes. We anticipate that research
focused on temporal decisions should lead to a radically expanded understanding
of the adaptiveness and constraints of living in groups.
| [
{
"created": "Fri, 1 Dec 2023 23:02:59 GMT",
"version": "v1"
}
] | 2023-12-06 | [
[
"Kao",
"Albert B.",
""
],
[
"Banerjee",
"Shoubhik",
""
],
[
"Francisco",
"Fritz",
""
],
[
"Berdahl",
"Andrew M.",
""
]
] | The past decade has witnessed a dramatically growing interest in collective intelligence - the phenomenon of groups having an ability to make more accurate decisions than isolated individuals. However, the vast majority of studies to date have focused, either explicitly or implicitly, on spatial decisions (e.g., potential nest sites, food patches, or migration directions). We highlight the equally important, but severely understudied, realm of temporal collective decision-making, i.e., decisions about when to perform an action. We argue that temporal collective decision making is likely to differ from spatial decision making in several crucial ways and probably involves different mechanisms, model predictions, and experimental outcomes. We anticipate that research focused on temporal decisions should lead to a radically expanded understanding of the adaptiveness and constraints of living in groups. |
1010.1239 | Martin Zumsande | Martin Zumsande, Dirk Stiefs, Stefan Siegmund, Thilo Gross | General analysis of mathematical models for bone remodeling | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bone remodeling is regulated by pathways controlling the interplay of
osteoblasts and osteoclasts. In this work, we apply the method of generalized
modelling to systematically analyse a large class of models of bone remodeling.
Our analysis shows that osteoblast precursors can play an important role in the
regulation of bone remodeling. Further, we find that the parameter regime most
likely realized in nature lies very close to bifurcation lines, marking
qualitative changes in the dynamics. Although proximity to a bifurcation
facilitates adaptive responses to changing external conditions, it entails the
danger of losing dynamical stability. Some evidence implicates such dynamical
transitions as a potential mechanism leading to forms of Paget's disease.
| [
{
"created": "Wed, 6 Oct 2010 19:32:11 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Dec 2010 16:58:46 GMT",
"version": "v2"
}
] | 2010-12-08 | [
[
"Zumsande",
"Martin",
""
],
[
"Stiefs",
"Dirk",
""
],
[
"Siegmund",
"Stefan",
""
],
[
"Gross",
"Thilo",
""
]
] | Bone remodeling is regulated by pathways controlling the interplay of osteoblasts and osteoclasts. In this work, we apply the method of generalized modelling to systematically analyse a large class of models of bone remodeling. Our analysis shows that osteoblast precursors can play an important role in the regulation of bone remodeling. Further, we find that the parameter regime most likely realized in nature lies very close to bifurcation lines, marking qualitative changes in the dynamics. Although proximity to a bifurcation facilitates adaptive responses to changing external conditions, it entails the danger of losing dynamical stability. Some evidence implicates such dynamical transitions as a potential mechanism leading to forms of Paget's disease. |
q-bio/0310023 | Ruxandra Dima | Ruxandra I. Dima and D. Thirumalai | Asymmetry in the shapes of folded and denatured states of proteins | 22 pages, 7 figures | null | null | null | q-bio.BM | null | The asymmetry in the shapes of folded and unfolded states is probed using
two parameters, one being a measure of the sphericity and the other that
describes the shape. For the folded states, whose interiors are densely packed,
the radii of gyration (Rg) and these two parameters are calculated using the
coordinates of the experimentally determined structures. Although Rg scales as
expected for maximally compact structures, the distributions of the shape
parameters show that there is considerable asymmetry in the shapes of folded
structures. The degree of asymmetry is greater for proteins that form
oligomers. Analysis of the two- and three-body contacts in the native
structures shows that the presence of near equal number of contacts between
backbone and side-chains and between side-chains gives rise to dense packing.
We suggest that proteins with relatively large values of shape parameters can
tolerate volume mutations without greatly affecting the network of contacts or
their stability. To probe shape characteristics of denatured states we have
developed a model of a WW-like domain. The shape parameters, which are
calculated using Langevin simulations, change dramatically in the course of
coil to globule transition. Comparison of the values of shape parameters
between the globular state and the folded state of WW domain shows that both
energetic (especially dispersion in the hydrophobic interactions) and steric
effects are important in determining packing in proteins.
| [
{
"created": "Thu, 16 Oct 2003 22:40:01 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Dima",
"Ruxandra I.",
""
],
[
"Thirumalai",
"D.",
""
]
] | The asymmetry in the shapes of folded and unfolded states is probed using two parameters, one being a measure of the sphericity and the other that describes the shape. For the folded states, whose interiors are densely packed, the radii of gyration (Rg) and these two parameters are calculated using the coordinates of the experimentally determined structures. Although Rg scales as expected for maximally compact structures, the distributions of the shape parameters show that there is considerable asymmetry in the shapes of folded structures. The degree of asymmetry is greater for proteins that form oligomers. Analysis of the two- and three-body contacts in the native structures shows that the presence of near equal number of contacts between backbone and side-chains and between side-chains gives rise to dense packing. We suggest that proteins with relatively large values of shape parameters can tolerate volume mutations without greatly affecting the network of contacts or their stability. To probe shape characteristics of denatured states we have developed a model of a WW-like domain. The shape parameters, which are calculated using Langevin simulations, change dramatically in the course of coil to globule transition. Comparison of the values of shape parameters between the globular state and the folded state of WW domain shows that both energetic (especially dispersion in the hydrophobic interactions) and steric effects are important in determining packing in proteins. |
2405.06457 | Masato Suzuki | Makiko Aoki, Masato Suzuki, Satoshi Suzuki, Kosuke Oiwa, Yoshitaka
Maeda, Hisayo Okayama | Characterization of mood and emotion regulation in females with PMS/PMDD
using near-infrared spectroscopy to assess prefrontal cerebral blood flow and
the mood questionnaire | 5 pages, 4 figures, 1 table, 2024 IEEE International Conference on
Robotics and Automation (ICRA 2024) | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many sexually mature women experience premenstrual syndrome (PMS) or
premenstrual dysphoric disorder (PMDD). Current approaches for managing
PMS and PMDD rely on daily mental condition recording, which many discontinue
due to its impracticality. Hence, there's a critical need for a simple,
objective method to monitor mental symptoms. One of the principal symptoms of
PMDD is a dysfunction in emotional regulation, which has been demonstrated
through brain-function imaging measurements to involve hyperactivity in the
amygdala and a decrease in functionality in the prefrontal cortex (PFC).
However, most research has been focused on PMDD, leaving a gap in understanding
of PMS. Near-infrared spectroscopy (NIRS) measures brain activity by
spectroscopically determining the amount of hemoglobin in the blood vessels.
This study aimed to characterize the emotional regulation function in PMS. We
measured brain activity in the PFC region using NIRS when participants were
presented with emotion-inducing pictures. Furthermore, moods highly associated
with emotions were assessed through questionnaires. Forty-six participants were
categorized into non-PMS, PMS, and PMDD groups based on the gynecologist's
diagnosis. POMS2 scores revealed higher negative mood and lower positive mood
in the follicular phase for the PMS group, while the PMDD group exhibited
heightened negative mood during the luteal phase. NIRS results showed reduced
emotional expression in the PMS group during both phases, while no significant
differences were observed in the PMDD group compared to non-PMS. It was found
that there are differences in the distribution of mood during the luteal and
follicular phase and in cerebral blood flow responses to emotional stimuli
between PMS and PMDD. These findings suggest the potential for providing
individuals with awareness of PMS or PMDD through scores on the POMS2 and NIRS
measurements.
| [
{
"created": "Fri, 10 May 2024 13:07:30 GMT",
"version": "v1"
}
] | 2024-05-13 | [
[
"Aoki",
"Makiko",
""
],
[
"Suzuki",
"Masato",
""
],
[
"Suzuki",
"Satoshi",
""
],
[
"Oiwa",
"Kosuke",
""
],
[
"Maeda",
"Yoshitaka",
""
],
[
"Okayama",
"Hisayo",
""
]
] | Many sexually mature women experience premenstrual syndrome (PMS) or premenstrual dysphoric disorder (PMDD). Current approaches for managing PMS and PMDD rely on daily mental condition recording, which many discontinue due to its impracticality. Hence, there's a critical need for a simple, objective method to monitor mental symptoms. One of the principal symptoms of PMDD is a dysfunction in emotional regulation, which has been demonstrated through brain-function imaging measurements to involve hyperactivity in the amygdala and a decrease in functionality in the prefrontal cortex (PFC). However, most research has been focused on PMDD, leaving a gap in understanding of PMS. Near-infrared spectroscopy (NIRS) measures brain activity by spectroscopically determining the amount of hemoglobin in the blood vessels. This study aimed to characterize the emotional regulation function in PMS. We measured brain activity in the PFC region using NIRS when participants were presented with emotion-inducing pictures. Furthermore, moods highly associated with emotions were assessed through questionnaires. Forty-six participants were categorized into non-PMS, PMS, and PMDD groups based on the gynecologist's diagnosis. POMS2 scores revealed higher negative mood and lower positive mood in the follicular phase for the PMS group, while the PMDD group exhibited heightened negative mood during the luteal phase. NIRS results showed reduced emotional expression in the PMS group during both phases, while no significant differences were observed in the PMDD group compared to non-PMS. It was found that there are differences in the distribution of mood during the luteal and follicular phase and in cerebral blood flow responses to emotional stimuli between PMS and PMDD. These findings suggest the potential for providing individuals with awareness of PMS or PMDD through scores on the POMS2 and NIRS measurements. |
2304.05311 | Jos\'e A. Cuesta | Pablo Catal\'an, Juan Antonio Garc\'ia-Mart\'in, Jacobo Aguirre,
Jos\'e A. Cuesta, Susanna Manrubia | Entropic contribution to phenotype fitness | 25 pages, 10 figures, uses iopart.cls, iopart10.clo, iopart12.clo,
iopams.sty, setstack.sty | null | 10.1088/1751-8121/ace8d6 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | All possible phenotypes are not equally accessible to evolving populations.
In fact, only phenotypes of large size, i.e. those resulting from many
different genotypes, are found in populations of sequences, presumably because
they are easier to discover and maintain. Genotypes that map to these
phenotypes usually form mostly connected genotype networks that percolate the
space of sequences, thus guaranteeing access to a large set of alternative
phenotypes. Within a given environment, where specific phenotypic traits become
relevant for adaptation, the replicative ability of a phenotype and its overall
fitness (in competition experiments with alternative phenotypes) can be
estimated. Two primary questions arise: how do phenotype size, reproductive
capability and topology of the genotype network affect the fitness of a
phenotype? And, assuming that evolution is only able to access large
phenotypes, what is the range of unattainable fitness values? In order to
address these questions, we quantify the adaptive advantage of phenotypes of
varying size and spectral radius in a two-peak landscape. We derive analytical
relationships between the three variables (size, topology, and replicative
ability) which are then tested through analysis of genotype-phenotype maps and
simulations of population dynamics on such maps. Finally, we analytically show
that the fraction of attainable phenotypes decreases with the length of the
genotype, though its absolute number increases. The fact that most phenotypes
are not visible to evolution very likely forbids the attainment of the highest
peak in the landscape. Nevertheless, our results indicate that the relative
fitness loss due to this limited accessibility is largely inconsequential for
adaptation.
| [
{
"created": "Tue, 11 Apr 2023 16:10:06 GMT",
"version": "v1"
}
] | 2023-08-16 | [
[
"Catalán",
"Pablo",
""
],
[
"García-Martín",
"Juan Antonio",
""
],
[
"Aguirre",
"Jacobo",
""
],
[
"Cuesta",
"José A.",
""
],
[
"Manrubia",
"Susanna",
""
]
] | All possible phenotypes are not equally accessible to evolving populations. In fact, only phenotypes of large size, i.e. those resulting from many different genotypes, are found in populations of sequences, presumably because they are easier to discover and maintain. Genotypes that map to these phenotypes usually form mostly connected genotype networks that percolate the space of sequences, thus guaranteeing access to a large set of alternative phenotypes. Within a given environment, where specific phenotypic traits become relevant for adaptation, the replicative ability of a phenotype and its overall fitness (in competition experiments with alternative phenotypes) can be estimated. Two primary questions arise: how do phenotype size, reproductive capability and topology of the genotype network affect the fitness of a phenotype? And, assuming that evolution is only able to access large phenotypes, what is the range of unattainable fitness values? In order to address these questions, we quantify the adaptive advantage of phenotypes of varying size and spectral radius in a two-peak landscape. We derive analytical relationships between the three variables (size, topology, and replicative ability) which are then tested through analysis of genotype-phenotype maps and simulations of population dynamics on such maps. Finally, we analytically show that the fraction of attainable phenotypes decreases with the length of the genotype, though its absolute number increases. The fact that most phenotypes are not visible to evolution very likely forbids the attainment of the highest peak in the landscape. Nevertheless, our results indicate that the relative fitness loss due to this limited accessibility is largely inconsequential for adaptation. |
1711.08941 | Marta Diaz Ms | Marta Diaz-delCastillo, David P.D. Woldbye, Anne Marie Heegaard | Neuropeptide Y and its involvement in chronic pain | 18 pages, 1 figure. In press | 2017,Neuroscience | 10.1016/j.neuroscience.2017.08.050 | null | q-bio.NC q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chronic pain is a serious condition that significantly impairs the quality of
life, affecting an estimated 1.5 billion people worldwide. Despite the
physiological, emotional and financial burden of chronic pain, there is still a
lack of efficient treatments. Neuropeptide Y (NPY) is a highly conserved
endogenous peptide in the central and peripheral nervous system of all mammals,
which has been implicated in both pro- and antinociceptive effects. NPY is
expressed in the superficial laminae of the dorsal horn of the spinal cord,
where it appears to mediate its antinociceptive actions via the Y1 and Y2
receptors. Intrathecal administration of NPY in animal models of neuropathic,
inflammatory or post-operative pain has been shown to cause analgesia, even
though its exact mechanisms are still unclear. It remains to be seen whether
these promising central antinociceptive effects of NPY can be transferred into
a future treatment for chronic pain.
| [
{
"created": "Fri, 24 Nov 2017 12:30:48 GMT",
"version": "v1"
}
] | 2017-11-27 | [
[
"Diaz-delCastillo",
"Marta",
""
],
[
"Woldbye",
"David P. D.",
""
],
[
"Heegaard",
"Anne Marie",
""
]
] | Chronic pain is a serious condition that significantly impairs the quality of life, affecting an estimated 1.5 billion people worldwide. Despite the physiological, emotional and financial burden of chronic pain, there is still a lack of efficient treatments. Neuropeptide Y (NPY) is a highly conserved endogenous peptide in the central and peripheral nervous system of all mammals, which has been implicated in both pro- and antinociceptive effects. NPY is expressed in the superficial laminae of the dorsal horn of the spinal cord, where it appears to mediate its antinociceptive actions via the Y1 and Y2 receptors. Intrathecal administration of NPY in animal models of neuropathic, inflammatory or post-operative pain has been shown to cause analgesia, even though its exact mechanisms are still unclear. It remains to be seen whether these promising central antinociceptive effects of NPY can be transferred into a future treatment for chronic pain. |
1803.04377 | Cheng Ly | Cheng Ly, Seth H. Weinberg | Analysis of Heterogeneous Cardiac Pacemaker Tissue Models and Traveling
Wave Dynamics | 34 pages, 11 figures | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The sinoatrial-node (SAN) is a complex heterogeneous tissue that generates a
stable rhythm in healthy hearts, yet a general mechanistic explanation for when
and how this tissue remains stable is lacking. Although computational and
theoretical analyses could elucidate these phenomena, such methods have rarely
been used in realistic (large-dimensional) gap-junction coupled heterogeneous
pacemaker tissue models. In this study, we adapt a recent model of pacemaker
cells (Severi et al. 2012), incorporating biophysical representations of ion
channel and intracellular calcium dynamics, to capture physiological features
of a heterogeneous population of pacemaker cells, in particular "center" and
"peripheral" cells with distinct intrinsic frequencies and action potential
morphology. Large-scale simulations of the SAN tissue, represented by a
heterogeneous tissue structure of pacemaker cells, exhibit a rich repertoire of
behaviors, including complete synchrony, traveling waves of activity
originating from periphery to center, and transient traveling waves originating
from the center. We use phase reduction methods that do not require fully
simulating the large-scale model to capture these observations. Moreover, the
phase reduced models accurately predict key properties of the tissue electrical
dynamics, including wave frequencies when synchronization occurs, and wave
propagation direction in a variety of tissue models. With the reduced phase
models, we analyze the relationship between cell distributions and coupling
strengths and the resulting transient dynamics. Further, the reduced phase
model predicts parameter regimes of irregular electrical dynamics. Thus, we
demonstrate that phase reduced oscillator models applied to realistic pacemaker
tissue is a useful tool for investigating the spatial-temporal dynamics of
cardiac pacemaker activity.
| [
{
"created": "Mon, 12 Mar 2018 17:11:20 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Jul 2018 02:43:21 GMT",
"version": "v2"
},
{
"created": "Wed, 12 Sep 2018 19:56:42 GMT",
"version": "v3"
}
] | 2018-09-14 | [
[
"Ly",
"Cheng",
""
],
[
"Weinberg",
"Seth H.",
""
]
] | The sinoatrial-node (SAN) is a complex heterogeneous tissue that generates a stable rhythm in healthy hearts, yet a general mechanistic explanation for when and how this tissue remains stable is lacking. Although computational and theoretical analyses could elucidate these phenomena, such methods have rarely been used in realistic (large-dimensional) gap-junction coupled heterogeneous pacemaker tissue models. In this study, we adapt a recent model of pacemaker cells (Severi et al. 2012), incorporating biophysical representations of ion channel and intracellular calcium dynamics, to capture physiological features of a heterogeneous population of pacemaker cells, in particular "center" and "peripheral" cells with distinct intrinsic frequencies and action potential morphology. Large-scale simulations of the SAN tissue, represented by a heterogeneous tissue structure of pacemaker cells, exhibit a rich repertoire of behaviors, including complete synchrony, traveling waves of activity originating from periphery to center, and transient traveling waves originating from the center. We use phase reduction methods that do not require fully simulating the large-scale model to capture these observations. Moreover, the phase reduced models accurately predict key properties of the tissue electrical dynamics, including wave frequencies when synchronization occurs, and wave propagation direction in a variety of tissue models. With the reduced phase models, we analyze the relationship between cell distributions and coupling strengths and the resulting transient dynamics. Further, the reduced phase model predicts parameter regimes of irregular electrical dynamics. Thus, we demonstrate that phase reduced oscillator models applied to realistic pacemaker tissue is a useful tool for investigating the spatial-temporal dynamics of cardiac pacemaker activity. |
2005.06297 | Jos\'e Carcione M | Juan E. Santos, Jose' M. Carcione, Gabriela B. Savioli, Patricia M.
Gauzellino, Alejandro Ravecca, Alfredo Moras | A numerical simulation of the COVID-19 epidemic in Argentina using the
SEIR model | arXiv admin note: substantial text overlap with arXiv:2004.03575 | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A pandemic caused by a new coronavirus has spread worldwide, affecting
Argentina. We implement an SEIR model to analyze the disease evolution in
Buenos Aires and neighbouring cities. The model parameters are calibrated using
the number of casualties officially reported. Since infinite solutions honour
the data, we show different cases. In all of them the reproduction ratio $R_0$
decreases after early lockdown, but then rises, probably due to an increase in
contagion in highly populated slums. Therefore it is mandatory to reverse this
growing trend in $R_0$ by applying control strategies to avoid a high number of
infectious and dead individuals. The model provides an effective procedure to
estimate epidemic parameters (fatality rate, transmission probability,
infection and incubation periods) and monitor control measures during the
epidemic evolution.
| [
{
"created": "Tue, 12 May 2020 08:34:09 GMT",
"version": "v1"
},
{
"created": "Thu, 14 May 2020 13:41:14 GMT",
"version": "v2"
},
{
"created": "Tue, 26 May 2020 07:43:11 GMT",
"version": "v3"
},
{
"created": "Wed, 15 Jul 2020 07:13:23 GMT",
"version": "v4"
}
] | 2020-07-16 | [
[
"Santos",
"Juan E.",
""
],
[
"Carcione",
"Jose' M.",
""
],
[
"Savioli",
"Gabriela B.",
""
],
[
"Gauzellino",
"Patricia M.",
""
],
[
"Ravecca",
"Alejandro",
""
],
[
"Moras",
"Alfredo",
""
]
] | A pandemic caused by a new coronavirus has spread worldwide, affecting Argentina. We implement an SEIR model to analyze the disease evolution in Buenos Aires and neighbouring cities. The model parameters are calibrated using the number of casualties officially reported. Since infinite solutions honour the data, we show different cases. In all of them the reproduction ratio $R_0$ decreases after early lockdown, but then rises, probably due to an increase in contagion in highly populated slums. Therefore it is mandatory to reverse this growing trend in $R_0$ by applying control strategies to avoid a high number of infectious and dead individuals. The model provides an effective procedure to estimate epidemic parameters (fatality rate, transmission probability, infection and incubation periods) and monitor control measures during the epidemic evolution. |
0909.1945 | Naoki Masuda Dr. | Naoki Masuda | Immunization of networks with community structure | 3 figures, 1 table | New Journal of Physics, 11, 123018 (2009) | 10.1088/1367-2630/11/12/123018 | null | q-bio.PE cond-mat.dis-nn physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, an efficient method to immunize modular networks (i.e.,
networks with community structure) is proposed. The immunization of networks
aims at fragmenting networks into small parts with a small number of removed
nodes. Its applications include prevention of epidemic spreading, intentional
attacks on networks, and conservation of ecosystems. Although preferential
immunization of hubs is efficient, good immunization strategies for modular
networks have not been established. On the basis of an immunization strategy
based on the eigenvector centrality, we develop an analytical framework for
immunizing modular networks. To this end, we quantify the contribution of each
node to the connectivity in a coarse-grained network among modules. We verify
the effectiveness of the proposed method by applying it to model and real
networks with modular structure.
| [
{
"created": "Thu, 10 Sep 2009 13:46:15 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Dec 2009 02:08:33 GMT",
"version": "v2"
}
] | 2010-03-24 | [
[
"Masuda",
"Naoki",
""
]
] | In this study, an efficient method to immunize modular networks (i.e., networks with community structure) is proposed. The immunization of networks aims at fragmenting networks into small parts with a small number of removed nodes. Its applications include prevention of epidemic spreading, intentional attacks on networks, and conservation of ecosystems. Although preferential immunization of hubs is efficient, good immunization strategies for modular networks have not been established. On the basis of an immunization strategy based on the eigenvector centrality, we develop an analytical framework for immunizing modular networks. To this end, we quantify the contribution of each node to the connectivity in a coarse-grained network among modules. We verify the effectiveness of the proposed method by applying it to model and real networks with modular structure. |
1910.04839 | Alexey Chernov | Aleksandr A. Shemendyuk, Alexey A. Chernov, Mark Y Kelbert | Fair Insurance Premium Level in Connected SIR Model under Epidemic
Outbreak | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we aim to study an optimal insurance premium level for
health-care in deterministic and stochastic SIR models with migration fluxes
and vaccination of the population. The studied model considers two standard SIR
centres connected via links and continuous migration fluxes. The premium is
calculated using the basic equivalence principle. Even in this simple setup
there are non-intuitive results that illustrate how the premium depends on
migration rates, severity of a disease and initial distribution of healthy
and infected individuals through the centres. We investigate how the
vaccination program affects the insurance costs by comparing the savings in
benefits with the expenses for vaccination. We compare the results of
deterministic and stochastic models.
| [
{
"created": "Thu, 10 Oct 2019 20:16:47 GMT",
"version": "v1"
}
] | 2019-10-14 | [
[
"Shemendyuk",
"Aleksandr A.",
""
],
[
"Chernov",
"Alexey A.",
""
],
[
"Kelbert",
"Mark Y",
""
]
] | In this paper we aim to study an optimal insurance premium level for health-care in deterministic and stochastic SIR models with migration fluxes and vaccination of the population. The studied model considers two standard SIR centres connected via links and continuous migration fluxes. The premium is calculated using the basic equivalence principle. Even in this simple setup there are non-intuitive results that illustrate how the premium depends on migration rates, severity of a disease and initial distribution of healthy and infected individuals through the centres. We investigate how the vaccination program affects the insurance costs by comparing the savings in benefits with the expenses for vaccination. We compare the results of deterministic and stochastic models. |
2105.08801 | Shannon Cartwright | Shannon L. Cartwright, Marnie McKechnie, Julie Schmied, Alexandra M.
Livernois and Bonnie A. Mallard | Effect of In-vitro Heat Stress Challenge on the function of Blood
Mononuclear Cells from Dairy Cattle ranked as High, Average and Low Immune
Responders | 37 pages, 3 figures, submitted to BMC Journal of Veterinary Research | BMC Vet Res 17, 233 (2021) | null | null | q-bio.CB | http://creativecommons.org/licenses/by-sa/4.0/ | The warming climate is causing livestock to experience heat stress at an
increasing frequency. Holstein cows are particularly susceptible to heat stress
because of their high metabolic rate. Heat stress negatively affects immune
function, particularly with respect to the cell-mediated immune response, which
leads to increased susceptibility to disease. Cattle identified as having
enhanced immune response have lower incidence of disease. Therefore, the
objective of this study was to evaluate the impact of in vitro heat challenge
on blood mononuclear cells from dairy cattle that had previously been ranked
for immune response, in terms of heat shock protein 70 concentration, nitric
oxide production, and cell proliferation. Bovine blood mononuclear cells, from
Holstein dairy cattle previously ranked for immune response based on their
estimated breeding values, were subjected to three heat treatments:
thermoneutral, heat stress 1 and heat stress 2. Cells of each treatment were
evaluated for heat shock protein 70, cell proliferation and nitric oxide
production. Blood mononuclear cells from dairy cattle classified as high immune
responders, based on their estimated breeding values for antibody and
cell-mediated responses, produced a significantly greater concentration of heat
shock protein 70 under most heat stress treatments compared to average and low
responders, and greater cell-proliferation across all treatments. Similarly, a
trend was observed where high responders displayed greater nitric oxide
production compared to average and low responders across heat treatments.
Overall, these results suggest that blood mononuclear cells from high immune
responder dairy cows are more thermotolerant compared to average and low immune
responders.
| [
{
"created": "Tue, 18 May 2021 19:42:44 GMT",
"version": "v1"
}
] | 2021-07-08 | [
[
"Cartwright",
"Shannon L.",
""
],
[
"McKechnie",
"Marnie",
""
],
[
"Schmied",
"Julie",
""
],
[
"Livernois",
"Alexandra M.",
""
],
[
"Mallard",
"Bonnie A.",
""
]
] | The warming climate is causing livestock to experience heat stress at an increasing frequency. Holstein cows are particularly susceptible to heat stress because of their high metabolic rate. Heat stress negatively affects immune function, particularly with respect to the cell-mediated immune response, which leads to increased susceptibility to disease. Cattle identified as having enhanced immune response have lower incidence of disease. Therefore, the objective of this study was to evaluate the impact of in vitro heat challenge on blood mononuclear cells from dairy cattle, that had previously been ranked for immune response, in terms of heat shock protein 70 concentration, nitric oxide production, and cell proliferation. Bovine blood mononuclear cells, from Holstein dairy cattle previously ranked for immune response based on their estimated breeding values, were subjected to three heat treatments: thermoneutral, heat stress 1 and heat stress 2. Cells of each treatment were evaluated for heat shock protein 70, cell proliferation and nitric oxide production. Blood mononuclear cells from dairy cattle classified as high immune responders, based on their estimated breeding values for antibody and cell-mediated responses, produced a significantly greater concentration of heat shock protein 70 under most heat stress treatments compared to average and low responders, and greater cell-proliferation across all treatments. Similarly, a trend was observed where high responders displayed greater nitric oxide production compared to average and low responders across heat treatments. Overall, these results suggest that blood mononuclear cells from high immune responder dairy cows are more thermotolerant compared to average and low immune responders |
2311.06521 | Steven Rossi | Steven P. Rossi, Sean P. Cox, Hugues P. Beno\^it | Extirpation of Atlantic Cod from a Northwest Atlantic ecosystem in the
absence of predator control: inference from an ecosystem model of
intermediate complexity | 41 pages, 10 figures, 2 tables, 5 appendices | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Atlantic cod (Gadus morhua) in the southern Gulf of St. Lawrence (sGSL)
declined to low abundance in the early 1990s and have since failed to recover
due to high natural mortality, which has been linked to grey seal (Halichoerus
grypus) predation. Increased grey seal harvests have been suggested to improve
cod survival, however, predicting the response of cod to seal abundance changes
in the sGSL is complicated by a hypothesized triangular food web involving
seals, cod, and small pelagic fishes, wherein the pelagic fishes are prey for
cod and grey seals, but may also prey on young cod. Grey seals may therefore
have an indirect positive effect on prerecruit cod survival via predation on
pelagic fish. Using a multispecies model of intermediate complexity fitted to
various scientific and fisheries data, we found that seal predation accounted
for the majority of recent cod mortality and that cod will likely be extirpated
without a strong and rapid reduction in grey seal abundance. We did not find
evidence that reducing grey seal abundance will result in large increases to
herring biomass that could impair cod recovery.
| [
{
"created": "Sat, 11 Nov 2023 09:46:36 GMT",
"version": "v1"
}
] | 2023-11-14 | [
[
"Rossi",
"Steven P.",
""
],
  [
    "Cox",
    "Sean P.",
    ""
  ],
  [
    "Benoît",
    "Hugues P.",
    ""
  ]
] | Atlantic cod (Gadus morhua) in the southern Gulf of St. Lawrence (sGSL) declined to low abundance in the early 1990s and have since failed to recover due to high natural mortality, which has been linked to grey seal (Halichoerus grypus) predation. Increased grey seal harvests have been suggested to improve cod survival, however, predicting the response of cod to seal abundance changes in the sGSL is complicated by a hypothesized triangular food web involving seals, cod, and small pelagic fishes, wherein the pelagic fishes are prey for cod and grey seals, but may also prey on young cod. Grey seals may therefore have an indirect positive effect on prerecruit cod survival via predation on pelagic fish. Using a multispecies model of intermediate complexity fitted to various scientific and fisheries data, we found that seal predation accounted for the majority of recent cod mortality and that cod will likely be extirpated without a strong and rapid reduction in grey seal abundance. We did not find evidence that reducing grey seal abundance will result in large increases to herring biomass that could impair cod recovery. |
1408.2552 | Amit Gulab Deshwar | Amit G. Deshwar, Shankar Vembu, Quaid Morris | Comparing Nonparametric Bayesian Tree Priors for Clonal Reconstruction
of Tumors | Preprint of an article submitted for consideration in the Pacific
Symposium on Biocomputing \c{opyright} 2015; World Scientific Publishing Co.,
Singapore, 2015; http://psb.stanford.edu/ | null | null | null | q-bio.PE cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Statistical machine learning methods, especially nonparametric Bayesian
methods, have become increasingly popular to infer clonal population structure
of tumors. Here we describe the treeCRP, an extension of the Chinese restaurant
process (CRP), a popular construction used in nonparametric mixture models, to
infer the phylogeny and genotype of major subclonal lineages represented in the
population of cancer cells. We also propose new split-merge updates tailored to
the subclonal reconstruction problem that improve the mixing time of Markov
chains. In comparisons with the tree-structured stick breaking prior used in
PhyloSub, we demonstrate superior mixing and running time using the treeCRP
with our new split-merge procedures. We also show that given the same number of
samples, TSSB and treeCRP have similar ability to recover the subclonal
structure of a tumor.
| [
{
"created": "Mon, 11 Aug 2014 20:39:42 GMT",
"version": "v1"
}
] | 2014-08-14 | [
[
"Deshwar",
"Amit G.",
""
],
[
"Vembu",
"Shankar",
""
],
[
"Morris",
"Quaid",
""
]
] | Statistical machine learning methods, especially nonparametric Bayesian methods, have become increasingly popular to infer clonal population structure of tumors. Here we describe the treeCRP, an extension of the Chinese restaurant process (CRP), a popular construction used in nonparametric mixture models, to infer the phylogeny and genotype of major subclonal lineages represented in the population of cancer cells. We also propose new split-merge updates tailored to the subclonal reconstruction problem that improve the mixing time of Markov chains. In comparisons with the tree-structured stick breaking prior used in PhyloSub, we demonstrate superior mixing and running time using the treeCRP with our new split-merge procedures. We also show that given the same number of samples, TSSB and treeCRP have similar ability to recover the subclonal structure of a tumor. |
2402.17807 | Md. Alamin Talukder | Abanti Bhattacharjya, Md Manowarul Islam, Md Ashraf Uddin, Md. Alamin
Talukder, AKM Azad, Sunil Aryal, Bikash Kumar Paul, Wahia Tasnim, Muhammad
Ali Abdulllah Almoyad, Mohammad Ali Moni | Exploring Gene Regulatory Interaction Networks and predicting
therapeutic molecules for Hypopharyngeal Cancer and EGFR-mutated lung
adenocarcinoma | Accepted In The FEBS OPEN BIO (Q2, SCOPUS, SCIE, IF: 2.6, CS: 4.7),
Wiley Journal, On FEB 25, 2024 | null | null | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by/4.0/ | With the advent of Information technology, the Bioinformatics research field
is becoming increasingly attractive to researchers and academicians. The recent
development of various Bioinformatics toolkits has facilitated the rapid
processing and analysis of vast quantities of biological data for human
perception. Most studies focus on locating two connected diseases and making
some observations to construct diverse gene regulatory interaction networks, a
forerunner to general drug design for curing illness. For instance,
Hypopharyngeal cancer is a disease that is associated with EGFR-mutated lung
adenocarcinoma. In this study, we select EGFR-mutated lung adenocarcinoma and
Hypopharyngeal cancer by finding lung metastases in hypopharyngeal cancer.
To conduct this study, we collect Microarray datasets from GEO (Gene
Expression Omnibus), an online database controlled by NCBI. Differentially
expressed genes, common genes, and hub genes between the selected two diseases
are detected for the subsequent step. Our research findings have suggested
common therapeutic molecules for the selected diseases based on 10 hub genes
with the highest interactions according to the degree topology method and the
maximum clique centrality (MCC). Our suggested therapeutic molecules will be
fruitful for patients with those two diseases simultaneously.
| [
{
"created": "Tue, 27 Feb 2024 11:29:36 GMT",
"version": "v1"
}
] | 2024-02-29 | [
[
"Bhattacharjya",
"Abanti",
""
],
[
"Islam",
"Md Manowarul",
""
],
[
"Uddin",
"Md Ashraf",
""
],
[
"Talukder",
"Md. Alamin",
""
],
[
"Azad",
"AKM",
""
],
[
"Aryal",
"Sunil",
""
],
[
"Paul",
"Bikash Kumar",
"... | With the advent of Information technology, the Bioinformatics research field is becoming increasingly attractive to researchers and academicians. The recent development of various Bioinformatics toolkits has facilitated the rapid processing and analysis of vast quantities of biological data for human perception. Most studies focus on locating two connected diseases and making some observations to construct diverse gene regulatory interaction networks, a forerunner to general drug design for curing illness. For instance, Hypopharyngeal cancer is a disease that is associated with EGFR-mutated lung adenocarcinoma. In this study, we select EGFR-mutated lung adenocarcinoma and Hypopharyngeal cancer by finding the Lung metastases in hypopharyngeal cancer. To conduct this study, we collect Mircorarray datasets from GEO (Gene Expression Omnibus), an online database controlled by NCBI. Differentially expressed genes, common genes, and hub genes between the selected two diseases are detected for the succeeding move. Our research findings have suggested common therapeutic molecules for the selected diseases based on 10 hub genes with the highest interactions according to the degree topology method and the maximum clique centrality (MCC). Our suggested therapeutic molecules will be fruitful for patients with those two diseases simultaneously. |
q-bio/0403001 | Eivind Almaas | E. Almaas, B. Kovacs, T. Vicsek, Z. N. Oltvai and A.-L. Barabasi | Global organization of metabolic fluxes in the bacterium, Escherichia
coli | 15 pages 4 figures | Nature 427, 839-843 (2004) | 10.1038/nature02289 | null | q-bio.MN cond-mat.dis-nn q-bio.CB | null | Cellular metabolism, the integrated interconversion of thousands of metabolic
substrates through enzyme-catalyzed biochemical reactions, is the most
investigated complex intracellular web of molecular interactions. While the
topological organization of individual reactions into metabolic networks is
increasingly well understood, the principles governing their global functional
utilization under different growth conditions pose many open questions. We
implement a flux balance analysis of the E. coli MG1655 metabolism, finding
that the network utilization is highly uneven: while most metabolic reactions
have small fluxes, the metabolism's activity is dominated by several reactions
with very high fluxes. E. coli responds to changes in growth conditions by
reorganizing the rates of selected fluxes predominantly within this high flux
backbone. The identified behavior likely represents a universal feature of
metabolic activity in all cells, with potential implications to metabolic
engineering.
| [
{
"created": "Sat, 28 Feb 2004 20:28:57 GMT",
"version": "v1"
}
] | 2015-06-26 | [
[
"Almaas",
"E.",
""
],
[
"Kovacs",
"B.",
""
],
[
"Vicsek",
"T.",
""
],
[
"Oltvai",
"Z. N.",
""
],
[
"Barabasi",
"A. -L.",
""
]
] | Cellular metabolism, the integrated interconversion of thousands of metabolic substrates through enzyme-catalyzed biochemical reactions, is the most investigated complex intercellular web of molecular interactions. While the topological organization of individual reactions into metabolic networks is increasingly well understood, the principles governing their global functional utilization under different growth conditions pose many open questions. We implement a flux balance analysis of the E. coli MG1655 metabolism, finding that the network utilization is highly uneven: while most metabolic reactions have small fluxes, the metabolism's activity is dominated by several reactions with very high fluxes. E. coli responds to changes in growth conditions by reorganizing the rates of selected fluxes predominantly within this high flux backbone. The identified behavior likely represents a universal feature of metabolic activity in all cells, with potential implications to metabolic engineering. |
1611.09052 | Nikolaos Sfakianakis PhD | Jan Werner, Nikolaos Sfakianakis, Alan Rendall, Eva Maria Griebeler | Energy intake functions of ectotherms and endotherms derived from their
body mass growth | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How animals allocate energy to different body functions is still not
completely understood and remains a challenging topic. Here, we
investigate in more detail the allocation of energy intake to growth,
reproduction or heat production by developing energy budget models for
ectothermic and endothermic vertebrates using a mathematical approach. We
calculated energy intake functions of ectotherms and endotherms derived from
their body mass growth. We show that our energy budget model produces energy
intake patterns and distributions as observed in ectothermic and endothermic
species. Our results comply consistently with some empirical studies that in
endothermic species, like birds and mammals, energy is used for heat production
instead of growth. Our model additionally offers an explanation on known
differences in absolute energy intake between ectothermic fish and reptiles and
endothermic birds and mammals. From a mathematical point of view, the model
comes in two equivalent formulations, a differential and an integral one. It is
derived from a discrete level approach, and it is shown to be well-posed and to
attain a unique solution for (almost) every parameter set. Numerically, the
integral formulation of the model is considered as an inverse problem with
unknown parameters that are estimated using a series of experiments/realistic
data.
| [
{
"created": "Mon, 28 Nov 2016 10:30:37 GMT",
"version": "v1"
}
] | 2016-11-29 | [
[
"Werner",
"Jan",
""
],
[
"Sfakianakis",
"Nikolaos",
""
],
[
"Rendall",
"Alan",
""
],
[
"Griebeler",
"Eva Maria",
""
]
] | How animals allocate energy to different body functions is still not completely understood and a challenging topic until recently. Here, we investigate in more detail the allocation of energy intake to growth, reproduction or heat production by developing energy budget models for ectothermic and endothermic vertebrates using a mathematical approach. We calculated energy intake functions of ectotherms and endotherms derived from their body mass growth. We show that our energy budget model produces energy intake patterns and distributions as observed in ectothermic and endothermic species. Our results comply consistently with some empirical studies that in endothermic species, like birds and mammals, energy is used for heat production instead of growth. Our model additionally offers an explanation on known differences in absolute energy intake between ectothermic fish and reptiles and endothermic birds and mammals. From a mathematical point of view, the model comes in two equivalent formulations, a differential and an integral one. It is derived from a discrete level approach, and it is shown to be well-posed and to attain a unique solution for (almost) every parameter set. Numerically, the integral formulation of the model is considered as an inverse problem with unknown parameters that are estimated using a series of experiments/realistic data. |
1103.1167 | Alexander Bershadskii | A. Bershadskii | Prime numbers and spontaneous neuron activity | extended | Adv. Math. Phys., 519178, (2011) | 10.1155/2011/519178 | null | q-bio.NC math.NT nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Logarithmic gaps have been used in order to find a periodic component of the
sequence of prime numbers, hidden by a random noise (stochastic or chaotic). It
is shown that multiplicative nature of the noise is the main reason for the
successful application of the logarithmic gaps transforming the multiplicative
noise into an additive one. A relation of this phenomenon to spontaneous neuron
activity and to chaotic brain computations has been discussed.
| [
{
"created": "Sun, 6 Mar 2011 21:56:07 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Mar 2011 12:02:21 GMT",
"version": "v2"
},
{
"created": "Fri, 6 May 2011 15:16:57 GMT",
"version": "v3"
}
] | 2012-06-26 | [
[
"Bershadskii",
"A.",
""
]
] | Logarithmic gaps have been used in order to find a periodic component of the sequence of prime numbers, hidden by a random noise (stochastic or chaotic). It is shown that multiplicative nature of the noise is the main reason for the successful application of the logarithmic gaps transforming the multiplicative noise into an additive one. A relation of this phenomenon to spontaneous neuron activity and to chaotic brain computations has been discussed. |
2405.01896 | Eric Hermand | Eric Hermand (URePSSS, H&P), L\'eo Lesaint, Laura Denis (H&P),
Jean-Paul Richalet (INSEP), Fran\c{c}ois Lhuissier (H&P) | A Step Test to Evaluate the Susceptibility to Severe High-Altitude
Illness in Field Conditions | High Altitude Medicine and Biology, 2024 | null | 10.1089/ham.2023.0065 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A laboratory-based hypoxic exercise test, performed on a cycle ergometer, can
be used to predict susceptibility to severe high-altitude illness (SHAI)
through the calculation of a clinicophysiological SHAI score. Our objective was
to design a field-condition test and compare its derived SHAI score and various
physiological parameters, such as peripheral oxygen saturation (SpO2), and
cardiac and ventilatory responses to hypoxia during exercise (HCRe and HVRe,
respectively), to the laboratory test. A group of 43 healthy subjects (15
females and 28 males), with no prior experience at high altitude, performed a
hypoxic cycle ergometer test (simulated altitude of 4,800 m) and step tests (20
cm high step) at 3,000, 4,000, and 4,800 m simulated altitudes. According to
tested altitudes, differences were observed in O2 desaturation, heart rate, and
minute ventilation (p < 0.001), whereas the computed HCRe and HVRe were not
different (p = 0.075 and p = 0.203, respectively). From the linear
relationships between the step test and SHAI scores, we defined a risk zone,
allowing us to evaluate the risk of developing SHAI and take adequate
preventive measures in field conditions, from the calculated step test score
for the given altitude. The predictive value of this new field test remains to
be validated in real high-altitude conditions.
| [
{
"created": "Fri, 3 May 2024 07:39:16 GMT",
"version": "v1"
}
] | 2024-05-06 | [
[
"Hermand",
"Eric",
"",
"URePSSS, H&P"
],
[
"Lesaint",
"Léo",
"",
"H&P"
],
[
"Denis",
"Laura",
"",
"H&P"
],
[
"Richalet",
"Jean-Paul",
"",
"INSEP"
],
[
"Lhuissier",
"François",
"",
"H&P"
]
] | A laboratory-based hypoxic exercise test, performed on a cycle ergometer, can be used to predict susceptibility to severe high-altitude illness (SHAI) through the calculation of a clinicophysiological SHAI score. Our objective was to design a field-condition test and compare its derived SHAI score and various physiological parameters, such as peripheral oxygen saturation (SpO2), and cardiac and ventilatory responses to hypoxia during exercise (HCRe and HVRe, respectively), to the laboratory test. A group of 43 healthy subjects (15 females and 28 males), with no prior experience at high altitude, performed a hypoxic cycle ergometer test (simulated altitude of 4,800 m) and step tests (20 cm high step) at 3,000, 4,000, and 4,800 m simulated altitudes. According to tested altitudes, differences were observed in O2 desaturation, heart rate, and minute ventilation (p < 0.001), whereas the computed HCRe and HVRe were not different (p = 0.075 and p = 0.203, respectively). From the linear relationships between the step test and SHAI scores, we defined a risk zone, allowing us to evaluate the risk of developing SHAI and take adequate preventive measures in field conditions, from the calculated step test score for the given altitude. The predictive value of this new field test remains to be validated in real high-altitude conditions. |
1606.03565 | Marco Zoli | Marco Zoli | Flexibility of short DNA helices under mechanical stretching | Physical Chemistry Chemical Physics (2016). Final version available
at DOI below. arXiv admin note: text overlap with arXiv:1606.01357 | Phys. Chem. Chem. Phys. vol.18, 17666 - 17677 (2016) | 10.1039/C6CP02981G | null | q-bio.BM cond-mat.soft physics.bio-ph physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The flexibility of short DNA fragments is studied by a Hamiltonian model
which treats the inter-strand and intra-strand forces at the level of the base
pair. The elastic response of a set of homogeneous helices to externally
applied forces is obtained by computing the average bending angles between
adjacent base pairs along the molecule axis. The ensemble averages are
performed over a room temperature equilibrium distribution of base pair
separations and bending fluctuations. The analysis of the end-to-end distances
and persistence lengths shows that even short sequences with less than $100$
base pairs maintain a significant bendability ascribed to thermal fluctuational
effects and kinks with large bending angles. The discrepancies between the
outcomes of the discrete model and those of the worm-like-chain model are
examined pointing out the inadequacy of the latter on short length scales.
| [
{
"created": "Sat, 11 Jun 2016 08:02:26 GMT",
"version": "v1"
}
] | 2016-06-30 | [
[
"Zoli",
"Marco",
""
]
] | The flexibility of short DNA fragments is studied by a Hamiltonian model which treats the inter-strand and intra-strand forces at the level of the base pair. The elastic response of a set of homogeneous helices to externally applied forces is obtained by computing the average bending angles between adjacent base pairs along the molecule axis. The ensemble averages are performed over a room temperature equilibrium distribution of base pair separations and bending fluctuations. The analysis of the end-to-end distances and persistence lengths shows that even short sequences with less than $100$ base pairs maintain a significant bendability ascribed to thermal fluctuational effects and kinks with large bending angles. The discrepancies between the outcomes of the discrete model and those of the worm-like-chain model are examined pointing out the inadequacy of the latter on short length scales. |
2404.14686 | Thomas McAndrew PhD | Thomas McAndrew and Maimuna S. Majumder and Andrew A. Lover and Srini
Venkatramanan and Paolo Bocchini and Tamay Besiroglu and Allison Codi and
Gaia Dempsey and Sam Abbott and Sylvain Chevalier and Nikos I. Bosse and Juan
Cambeiro and David Braun | Assessing Human Judgment Forecasts in the Rapid Spread of the Mpox
Outbreak: Insights and Challenges for Pandemic Preparedness | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | In May 2022, mpox (formerly monkeypox) spread to non-endemic countries
rapidly. Human judgment is a forecasting approach that has been sparsely
evaluated during the beginning of an outbreak. We collected -- between May 19,
2022 and July 31, 2022 -- 1275 forecasts from 442 individuals across six questions
about the mpox outbreak where ground truth data are now available. Individual
human judgment forecasts and an equally weighted ensemble were evaluated, as
well as compared to a random walk, autoregressive, and doubling time model. We
found (1) individual human judgment forecasts underestimated outbreak size, (2)
the ensemble forecast median moved closer to the ground truth over time but
uncertainty around the median did not appreciably decrease, and (3) compared to
computational models, for 2-8 week ahead forecasts, the human judgment ensemble
outperformed all three models when using median absolute error and weighted
interval score; for one week ahead forecasts a random walk outperformed human
judgment. We propose two possible explanations: at the time a forecast was
submitted, the mode was correlated with the most recent (and smaller)
observation that would eventually determine ground truth. Several forecasts
were solicited on a logarithmic scale which may have caused humans to generate
forecasts with unintended, large uncertainty intervals. To aid in outbreak
preparedness, platforms that solicit human judgment forecasts may wish to
assess whether specifying a forecast on logarithmic scale matches an
individual's intended forecast, support human judgment by finding cues that are
typically used to build forecasts, and, to improve performance, tailor their
platform to allow forecasters to assign zero probability to events.
| [
{
"created": "Tue, 23 Apr 2024 02:32:27 GMT",
"version": "v1"
}
] | 2024-04-24 | [
[
"McAndrew",
"Thomas",
""
],
[
"Majumder",
"Maimuna S.",
""
],
[
"Lover",
"Andrew A.",
""
],
[
"Venkatramanan",
"Srini",
""
],
[
"Bocchini",
"Paolo",
""
],
[
"Besiroglu",
"Tamay",
""
],
[
"Codi",
"Allison",
... | In May 2022, mpox (formerly monkeypox) spread to non-endemic countries rapidly. Human judgment is a forecasting approach that has been sparsely evaluated during the beginning of an outbreak. We collected -- between May 19, 2022 and July 31, 2022 -- 1275 forecasts from 442 individuals of six questions about the mpox outbreak where ground truth data are now available. Individual human judgment forecasts and an equally weighted ensemble were evaluated, as well as compared to a random walk, autoregressive, and doubling time model. We found (1) individual human judgment forecasts underestimated outbreak size, (2) the ensemble forecast median moved closer to the ground truth over time but uncertainty around the median did not appreciably decrease, and (3) compared to computational models, for 2-8 week ahead forecasts, the human judgment ensemble outperformed all three models when using median absolute error and weighted interval score; for one week ahead forecasts a random walk outperformed human judgment. We propose two possible explanations: at the time a forecast was submitted, the mode was correlated with the most recent (and smaller) observation that would eventually determine ground truth. Several forecasts were solicited on a logarithmic scale which may have caused humans to generate forecasts with unintended, large uncertainty intervals. To aid in outbreak preparedness, platforms that solicit human judgment forecasts may wish to assess whether specifying a forecast on logarithmic scale matches an individual's intended forecast, support human judgment by finding cues that are typically used to build forecasts, and, to improve performance, tailor their platform to allow forecasters to assign zero probability to events. |
1811.00077 | John Treado | John D. Treado, Zhe Mei, Lynne Regan, and Corey S. O'Hern | Void distributions reveal structural link between jammed packings and
protein cores | 14 pages, 11 figures | Phys. Rev. E 99, 022416 (2019) | 10.1103/PhysRevE.99.022416 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dense packing of hydrophobic residues in the cores of globular proteins
determines their stability. Recently, we have shown that protein cores possess
packing fraction $\phi \approx 0.56$, which is the same as dense, random
packing of amino acid-shaped particles. In this article, we compare the
structural properties of protein cores and jammed packings of amino acid-shaped
particles in much greater depth by measuring their local and connected void
regions. We find that the distributions of surface Voronoi cell volumes and
local porosities obey similar statistics in both systems. We also measure the
probability that accessible, connected void regions percolate as a function of
the size of a spherical probe particle and show that both systems possess the
same critical probe size. By measuring the critical exponent $\tau$ that
characterizes the size distribution of connected void clusters at the onset of
percolation, we show that void percolation in packings of amino acid-shaped
particles and protein cores belong to the same universality class, which is
different from that for void percolation in jammed sphere packings. We propose
that the connected void regions of proteins are a defining feature of proteins
and can be used to differentiate experimentally observed proteins from decoy
structures that are generated using computational protein design software. This
work emphasizes that jammed packings of amino acid-shaped particles can serve
as structural and mechanical analogs of protein cores, and could therefore be
useful in modeling the response of protein cores to cavity-expanding and
-reducing mutations.
| [
{
"created": "Wed, 31 Oct 2018 19:29:05 GMT",
"version": "v1"
}
] | 2019-02-22 | [
[
"Treado",
"John D.",
""
],
[
"Mei",
"Zhe",
""
],
[
"Regan",
"Lynne",
""
],
[
"O'Hern",
"Corey S.",
""
]
] | Dense packing of hydrophobic residues in the cores of globular proteins determines their stability. Recently, we have shown that protein cores possess packing fraction $\phi \approx 0.56$, which is the same as dense, random packing of amino acid-shaped particles. In this article, we compare the structural properties of protein cores and jammed packings of amino acid-shaped particles in much greater depth by measuring their local and connected void regions. We find that the distributions of surface Voronoi cell volumes and local porosities obey similar statistics in both systems. We also measure the probability that accessible, connected void regions percolate as a function of the size of a spherical probe particle and show that both systems possess the same critical probe size. By measuring the critical exponent $\tau$ that characterizes the size distribution of connected void clusters at the onset of percolation, we show that void percolation in packings of amino acid-shaped particles and protein cores belong to the same universality class, which is different from that for void percolation in jammed sphere packings. We propose that the connected void regions of proteins are a defining feature of proteins and can be used to differentiate experimentally observed proteins from decoy structures that are generated using computational protein design software. This work emphasizes that jammed packings of amino acid-shaped particles can serve as structural and mechanical analogs of protein cores, and could therefore be useful in modeling the response of protein cores to cavity-expanding and -reducing mutations. |
2401.02096 | Nur Adeela Yasid | Nur Ain Shuhada Ab Razak, Syahir Habib, Mohd Yunus Abd Shukor, Siti
Aisyah Alias, Jerzy Smykla, Nur Adeela Yasid | Isolation and Characterisation of Polypropylene Microplastic-Utilising
Bacterium from the Antarctic Soil | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Despite its remoteness from other continents, the Antarctic region cannot
escape the aftermath of human activities as it is highly influenced by
anthropogenic impacts that occur both in the regional and global context.
Contamination by microplastics, mostly caused by the improper disposal of
plastic waste, is widely recognised as a serious environmental threat due to
its ubiquity. In recent years, most researchers have focused on microplastic
pollution in the marine ecosystem of Antarctica, while pollution in the
terrestrial environment continues to be neglected. This study was conducted to
investigate the ability of Antarctic soil bacteria to use polypropylene (PP)
microplastics as the sole carbon source. Bushnell Haas (BH) medium inoculated
with bacteria and supplemented with PP-microplastics as the sole carbon source was
used in the utilisation test. In this study, the growth response of Dermacoccus
sp. strain AYDL3 was assessed after exposure to PP-microplastics in a basal
medium for 40 days. The weight reduction of the polymer was determined to
further support the growth response. The highest and lowest weight loss
percentages were observed on day 20 (23.0%) and day 10 (7.75%), respectively.
Fourier transform infrared (FTIR) spectroscopy and scanning electron
microscopy (SEM) analyses were used to confirm the utilisation of
PP-microplastics by strain AYDL3. Results indicate that the soil bacteria
possess a mechanism for breaking down microplastics allowing them to utilise
plastics as energy sources without any pre-treatment. This emphasises the
significance of these soil bacteria to adapt and subsequently manage the
plastic fragments in the soil in the future.
| [
{
"created": "Thu, 4 Jan 2024 06:54:56 GMT",
"version": "v1"
}
] | 2024-01-05 | [
[
"Razak",
"Nur Ain Shuhada Ab",
""
],
[
"Habib",
"Syahir",
""
],
[
"Shukor",
"Mohd Yunus Abd",
""
],
[
"Alias",
"Siti Aisyah",
""
],
[
"Smykla",
"Jerzy",
""
],
[
"Yasid",
"Nur Adeela",
""
]
] | Despite its remoteness from other continents, the Antarctic region cannot escape the aftermath of human activities as it is highly influenced by anthropogenic impacts that occur both in the regional and global context. Contamination by microplastics, mostly caused by the improper disposal of plastic waste, is widely recognised as a serious environmental threat due to its ubiquity. In recent years, most researchers have focused on microplastic pollution in the marine ecosystem of Antarctica, while pollution in the terrestrial environment continues to be neglected. This study was conducted to investigate the ability of Antarctic soil bacteria to use polypropylene (PP) microplastics as the sole carbon source. Bushnell Haas (BH) medium inoculated with bacteria and supplemented with PP-microplastics as the sole carbon source was used in the utilisation test. In this study, the growth response of Dermacoccus sp. strain AYDL3 was assessed after exposure to PP-microplastics in a basal medium for 40 days. The weight reduction of the polymer was determined to further support the growth response. The highest and lowest weight loss percentages were observed on day 20 (23.0%) and day 10 (7.75%), respectively. Fourier transform infrared (FTIR) spectroscopy and scanning electron microscopy (SEM) analyses were used to confirm the utilisation of PP-microplastics by strain AYDL3. Results indicate that the soil bacteria possess a mechanism for breaking down microplastics allowing them to utilise plastics as energy sources without any pre-treatment. This emphasises the significance of these soil bacteria to adapt and subsequently manage the plastic fragments in the soil in the future. |
1810.01984 | Koichi Fujimoto | Satoru Okuda, Koichi Fujimoto | A Mechanical Instability in Planar Epithelial Monolayers Leads to Cell
Extrusion | 16 pages, 5 figures. Biophysical Journal (2020) | null | 10.1016/j.bpj.2020.03.028 | null | q-bio.CB cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In cell extrusion, a cell embedded in an epithelial monolayer loses its
apical or basal surface and is subsequently squeezed out of the monolayer by
neighboring cells. Cell extrusions occur during apoptosis,
epithelial-mesenchymal transition, or pre-cancerous cell invasion. They play
important roles in embryogenesis, homeostasis, carcinogenesis, and many other
biological processes. Although many of the molecular factors involved in cell
extrusion are known, little is known about the mechanical basis of cell
extrusion. We used a three-dimensional (3D) vertex model to investigate the
mechanical stability of cells arranged in a monolayer with 3D foam geometry. We
found that when the cells composing the monolayer have homogeneous mechanical
properties, cells are extruded from the monolayer when the symmetry of the 3D
geometry is broken due to an increase in cell density or a decrease in the
number of topological neighbors around single cells. Those results suggest that
mechanical instability inherent in the 3D foam geometry of epithelial
monolayers is sufficient to drive epithelial cell extrusion. In the situation
where cells in the monolayer actively generate contractile or adhesive forces
under the control of intrinsic genetic programs, the forces act to break the
symmetry of the monolayer, leading to cell extrusion that is directed to the
apical or basal side of the monolayer by the balance of contractile and
adhesive forces on the apical and basal sides. Although our analyses are based
on a simple mechanical model, our results are in accordance with observations
of epithelial monolayers {\it in vivo} and consistently explain cell extrusions
under a wide range of physiological and pathophysiological conditions. Our
results illustrate the importance of a mechanical understanding of cell
extrusion and provide a basis by which to link molecular regulation to physical
processes.
| [
{
"created": "Wed, 3 Oct 2018 21:42:48 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Apr 2020 07:02:54 GMT",
"version": "v2"
}
] | 2020-04-24 | [
[
"Okuda",
"Satoru",
""
],
[
"Fujimoto",
"Koichi",
""
]
] | In cell extrusion, a cell embedded in an epithelial monolayer loses its apical or basal surface and is subsequently squeezed out of the monolayer by neighboring cells. Cell extrusions occur during apoptosis, epithelial-mesenchymal transition, or pre-cancerous cell invasion. They play important roles in embryogenesis, homeostasis, carcinogenesis, and many other biological processes. Although many of the molecular factors involved in cell extrusion are known, little is known about the mechanical basis of cell extrusion. We used a three-dimensional (3D) vertex model to investigate the mechanical stability of cells arranged in a monolayer with 3D foam geometry. We found that when the cells composing the monolayer have homogeneous mechanical properties, cells are extruded from the monolayer when the symmetry of the 3D geometry is broken due to an increase in cell density or a decrease in the number of topological neighbors around single cells. Those results suggest that mechanical instability inherent in the 3D foam geometry of epithelial monolayers is sufficient to drive epithelial cell extrusion. In the situation where cells in the monolayer actively generate contractile or adhesive forces under the control of intrinsic genetic programs, the forces act to break the symmetry of the monolayer, leading to cell extrusion that is directed to the apical or basal side of the monolayer by the balance of contractile and adhesive forces on the apical and basal sides. Although our analyses are based on a simple mechanical model, our results are in accordance with observations of epithelial monolayers {\it in vivo} and consistently explain cell extrusions under a wide range of physiological and pathophysiological conditions. Our results illustrate the importance of a mechanical understanding of cell extrusion and provide a basis by which to link molecular regulation to physical processes. |
q-bio/0410037 | Ilya M. Nemenman | Adam A. Margolin, Ilya Nemenman, Katia Basso, Ulf Klein, Chris
Wiggins, Gustavo Stolovitzky, Riccardo Dalla Favera, Andrea Califano | ARACNE: An Algorithm for the Reconstruction of Gene Regulatory Networks
in a Mammalian Cellular Context | accepted version; minor revisions following referee suggestions; 28
pages, 9 figures; detailed version of q-bio.MN/0410036 | BMC Bioinformatics 2006, 7(Suppl 1):S7 | 10.1186/1471-2105-7-S1-S7 | null | q-bio.MN q-bio.GN q-bio.QM | null | Background: Elucidating gene regulatory networks is crucial for understanding
normal cell physiology and complex pathologic phenotypes. Existing
computational methods for the genome-wide ``reverse engineering'' of such
networks have been successful only for lower eukaryotes with simple genomes.
Here we present ARACNE, a novel algorithm, using microarray expression
profiles, specifically designed to scale up to the complexity of regulatory
networks in mammalian cells, yet general enough to address a wider range of
network deconvolution problems. This method uses an information theoretic
approach to eliminate the majority of indirect interactions inferred by
co-expression methods.
Results: We prove that ARACNE reconstructs the network exactly
(asymptotically) if the effect of loops in the network topology is negligible,
and we show that the algorithm works well in practice, even in the presence of
numerous loops and complex topologies. We assess ARACNE's ability to
reconstruct transcriptional regulatory networks using both a realistic
synthetic dataset and a microarray dataset from human B cells. On synthetic
datasets ARACNE achieves very low error rates and outperforms established
methods, such as Relevance Networks and Bayesian Networks. Application to the
deconvolution of genetic networks in human B cells demonstrates ARACNE's
ability to infer validated transcriptional targets of the c-MYC proto-oncogene.
We also study the effects of misestimation of mutual information on network
reconstruction, and show that algorithms based on mutual information ranking
are more resilient to estimation errors.
| [
{
"created": "Thu, 28 Oct 2004 17:23:40 GMT",
"version": "v1"
},
{
"created": "Wed, 29 Dec 2004 06:03:35 GMT",
"version": "v2"
},
{
"created": "Fri, 7 Oct 2005 19:36:07 GMT",
"version": "v3"
}
] | 2007-05-23 | [
[
"Margolin",
"Adam A.",
""
],
[
"Nemenman",
"Ilya",
""
],
[
"Basso",
"Katia",
""
],
[
"Klein",
"Ulf",
""
],
[
"Wiggins",
"Chris",
""
],
[
"Stolovitzky",
"Gustavo",
""
],
[
"Favera",
"Riccardo Dalla",
""
],... | Background: Elucidating gene regulatory networks is crucial for understanding normal cell physiology and complex pathologic phenotypes. Existing computational methods for the genome-wide ``reverse engineering'' of such networks have been successful only for lower eukaryotes with simple genomes. Here we present ARACNE, a novel algorithm, using microarray expression profiles, specifically designed to scale up to the complexity of regulatory networks in mammalian cells, yet general enough to address a wider range of network deconvolution problems. This method uses an information theoretic approach to eliminate the majority of indirect interactions inferred by co-expression methods. Results: We prove that ARACNE reconstructs the network exactly (asymptotically) if the effect of loops in the network topology is negligible, and we show that the algorithm works well in practice, even in the presence of numerous loops and complex topologies. We assess ARACNE's ability to reconstruct transcriptional regulatory networks using both a realistic synthetic dataset and a microarray dataset from human B cells. On synthetic datasets ARACNE achieves very low error rates and outperforms established methods, such as Relevance Networks and Bayesian Networks. Application to the deconvolution of genetic networks in human B cells demonstrates ARACNE's ability to infer validated transcriptional targets of the c-MYC proto-oncogene. We also study the effects of misestimation of mutual information on network reconstruction, and show that algorithms based on mutual information ranking are more resilient to estimation errors. |
1209.0831 | Kazuhiro Takemoto | Kazuhiro Takemoto | Metabolic network modularity arising from simple growth processes | 11 pages, 7 figures | Phys. Rev. E 86, 036107 (2012) | 10.1103/PhysRevE.86.036107 | null | q-bio.MN physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metabolic networks consist of linked functional components, or modules. The
mechanism underlying metabolic network modularity is of great interest not only
to researchers of basic science but also to those in fields of engineering.
Previous studies have suggested a theoretical model, which proposes that a
change in the evolutionary goal (system-specific purpose) increases network
modularity, and this hypothesis was supported by statistical data analysis.
Nevertheless, further investigation has uncovered additional possibilities that
might explain the origin of network modularity. In this work, we propose an
evolving network model without tuning parameters to describe metabolic
networks. We demonstrate, quantitatively, that metabolic network modularity can
arise from simple growth processes, independent of the change in the
evolutionary goal. Our model is applicable to a wide range of organisms, and
appears to suggest that metabolic network modularity can be more simply
determined than previously thought. Nonetheless, our proposition does not serve
to contradict the previous model; it strives to provide an insight from a
different angle in the ongoing efforts to understand metabolic evolution, with
the hope of eventually achieving the synthetic engineering of metabolic
networks.
| [
{
"created": "Tue, 4 Sep 2012 23:48:37 GMT",
"version": "v1"
}
] | 2012-09-14 | [
[
"Takemoto",
"Kazuhiro",
""
]
] | Metabolic networks consist of linked functional components, or modules. The mechanism underlying metabolic network modularity is of great interest not only to researchers of basic science but also to those in fields of engineering. Previous studies have suggested a theoretical model, which proposes that a change in the evolutionary goal (system-specific purpose) increases network modularity, and this hypothesis was supported by statistical data analysis. Nevertheless, further investigation has uncovered additional possibilities that might explain the origin of network modularity. In this work, we propose an evolving network model without tuning parameters to describe metabolic networks. We demonstrate, quantitatively, that metabolic network modularity can arise from simple growth processes, independent of the change in the evolutionary goal. Our model is applicable to a wide range of organisms, and appears to suggest that metabolic network modularity can be more simply determined than previously thought. Nonetheless, our proposition does not serve to contradict the previous model; it strives to provide an insight from a different angle in the ongoing efforts to understand metabolic evolution, with the hope of eventually achieving the synthetic engineering of metabolic networks. |
1610.08566 | Nat\'alia Mota Msr | Natalia B. Mota, Mauro Copelli, Sidarta Ribeiro | Quantifying word salad: The structural randomness of verbal reports
predicts negative symptoms and Schizophrenia diagnosis 6 months later | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: The precise quantification of negative symptoms is necessary to
improve differential diagnosis and prognosis prediction in Schizophrenia. In
chronic psychotic patients, the representation of verbal reports as word graphs
provides automated sorting of schizophrenia, bipolar disorder and control
groups based on the degree of speech connectedness. Here we aim to use machine
learning to verify whether speech connectedness during first clinical contact
can predict negative symptoms and Schizophrenia diagnosis six months later.
Methods: PANSS scores and memory reports were collected from 21 patients
undergoing first clinical contact for recent-onset psychosis and followed for 6
months to establish DSM-IV diagnosis, and 21 healthy controls. Each report was
represented as a graph in which words corresponded to nodes, and node temporal
succession corresponded to edges. Three connectedness attributes were extracted
from each graph, z-scores to random graph distributions were measured,
correlated with the PANSS negative subscale, combined into a single
Fragmentation Index, and used for predictions. Findings: Random-like speech was
prevalent among Schizophrenia patients (64% x 5% in Control group, p=0.0002).
Connectedness explained 92% of the PANSS negative subscale variance (p=0.0001).
The Fragmentation Index classified low versus high scores of PANSS negative
subscale with 93% accuracy (AUC=1), predicted Schizophrenia diagnosis with 89%
accuracy (AUC=0.89), and was validated in an independent cohort of chronic
psychotic patients. Interpretation: The structural randomness of speech graph
connectedness is increased in Schizophrenia. It provides a quantitative
measurement of word salad as a Fragmentation Index that tightly correlates with
negative symptoms and predicts Schizophrenia diagnosis during first clinical
contact of recent-onset psychosis.
| [
{
"created": "Wed, 26 Oct 2016 22:39:17 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Oct 2016 14:59:51 GMT",
"version": "v2"
}
] | 2016-11-01 | [
[
"Mota",
"Natalia B.",
""
],
[
"Copelli",
"Mauro",
""
],
[
"Ribeiro",
"Sidarta",
""
]
] | Background: The precise quantification of negative symptoms is necessary to improve differential diagnosis and prognosis prediction in Schizophrenia. In chronic psychotic patients, the representation of verbal reports as word graphs provides automated sorting of schizophrenia, bipolar disorder and control groups based on the degree of speech connectedness. Here we aim to use machine learning to verify whether speech connectedness during first clinical contact can predict negative symptoms and Schizophrenia diagnosis six months later. Methods: PANSS scores and memory reports were collected from 21 patients undergoing first clinical contact for recent-onset psychosis and followed for 6 months to establish DSM-IV diagnosis, and 21 healthy controls. Each report was represented as a graph in which words corresponded to nodes, and node temporal succession corresponded to edges. Three connectedness attributes were extracted from each graph, z-scores to random graph distributions were measured, correlated with the PANSS negative subscale, combined into a single Fragmentation Index, and used for predictions. Findings: Random-like speech was prevalent among Schizophrenia patients (64% x 5% in Control group, p=0.0002). Connectedness explained 92% of the PANSS negative subscale variance (p=0.0001). The Fragmentation Index classified low versus high scores of PANSS negative subscale with 93% accuracy (AUC=1), predicted Schizophrenia diagnosis with 89% accuracy (AUC=0.89), and was validated in an independent cohort of chronic psychotic patients. Interpretation: The structural randomness of speech graph connectedness is increased in Schizophrenia. It provides a quantitative measurement of word salad as a Fragmentation Index that tightly correlates with negative symptoms and predicts Schizophrenia diagnosis during first clinical contact of recent-onset psychosis. |
1801.00982 | Philip Ernst | Philip A. Ernst, Marek Kimmel, Monika Kurpas, and Quan Zhou | Thick distribution tails in models of cancer secondary tumors | 24 pages | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent progress in microdissection and in DNA sequencing has enabled
subsampling of multi-focal cancers in organs such as the liver in several
hundred spots, helping to determine the pattern of mutations in each of these
spots. This has led to the construction of genealogies of the primary,
secondary, tertiary and so forth, foci of the tumor. These studies have led to
diverse conclusions concerning the Darwinian (selective) or neutral evolution
in cancer. Mathematical models of development of multifocal tumors have been
developed to support these claims. We report a model of development of a
multifocal tumor, which is a mathematically rigorous refinement of a model of
Ling et al. (2015). Guided by numerical studies and simulations, we show that
the rigorous model, in the form of an infinite-type branching process, displays
distributions of tumor size which have heavy tails and moments that become
infinite in finite time. To demonstrate these points, we obtain bounds on the
tails of the distributions of the process and an infinite-series expression for
the first moments. In addition to its inherent mathematical interest, the model
is corroborated by recent reports of apparent super-exponential growth in
cancer metastases.
| [
{
"created": "Wed, 3 Jan 2018 12:56:41 GMT",
"version": "v1"
}
] | 2018-01-04 | [
[
"Ernst",
"Philip A.",
""
],
[
"Kimmel",
"Marek",
""
],
[
"Kurpas",
"Monika",
""
],
[
"Zhou",
"Quan",
""
]
] | Recent progress in microdissection and in DNA sequencing has enabled subsampling of multi-focal cancers in organs such as the liver in several hundred spots, helping to determine the pattern of mutations in each of these spots. This has led to the construction of genealogies of the primary, secondary, tertiary and so forth, foci of the tumor. These studies have led to diverse conclusions concerning the Darwinian (selective) or neutral evolution in cancer. Mathematical models of development of multifocal tumors have been developed to support these claims. We report a model of development of a multifocal tumor, which is a mathematically rigorous refinement of a model of Ling et al. (2015). Guided by numerical studies and simulations, we show that the rigorous model, in the form of an infinite-type branching process, displays distributions of tumor size which have heavy tails and moments that become infinite in finite time. To demonstrate these points, we obtain bounds on the tails of the distributions of the process and an infinite-series expression for the first moments. In addition to its inherent mathematical interest, the model is corroborated by recent reports of apparent super-exponential growth in cancer metastases. |
1409.1470 | Max Souza | Abderrahman Iggidr, Gauthier Sallet and Max O. Souza | On the dynamics of a class of multi-group models for vector-borne
diseases | null | J. Math. Anal. Appl. 441(2):723-743 (2016) | 10.1016/j.jmaa.2016.04.003 | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The resurgence of vector-borne diseases is an increasing public health
concern, and there is a need for a better understanding of their dynamics. For
a number of diseases, e.g. dengue and chikungunya, this resurgence occurs
mostly in urban environments, which are naturally very heterogeneous,
particularly due to population circulation. In this scenario, there is an
increasing interest in both multi-patch and multi-group models for such
diseases. In this work, we study the dynamics of a vector borne disease within
a class of multi-group models that extends the classical Bailey-Dietz model.
This class includes many of the proposed models in the literature, and it can
accommodate various functional forms of the infection force. For such models,
the vector-host/host-vector contact network topology gives rise to a bipartite
graph which has different properties from the ones usually found in directly
transmitted diseases. Under the assumption that the contact network is strongly
connected, we can define the basic reproductive number $\mathcal{R}_0$ and show
that this system has only two equilibria: the so called disease free
equilibrium (DFE); and a unique interior equilibrium---usually termed the
endemic equilibrium (EE)---that exists if, and only if, $\mathcal{R}_0>1$. We
also show that, if $\mathcal{R}_0\leq1$, then the DFE equilibrium is globally
asymptotically stable, while when $\mathcal{R}_0>1$, we have that the EE is
globally asymptotically stable.
| [
{
"created": "Thu, 4 Sep 2014 15:48:17 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Sep 2014 04:05:07 GMT",
"version": "v2"
},
{
"created": "Tue, 7 Apr 2015 20:21:39 GMT",
"version": "v3"
}
] | 2017-04-14 | [
[
"Iggidr",
"Abderrahman",
""
],
[
"Sallet",
"Gauthier",
""
],
[
"Souza",
"Max O.",
""
]
] | The resurgence of vector-borne diseases is an increasing public health concern, and there is a need for a better understanding of their dynamics. For a number of diseases, e.g. dengue and chikungunya, this resurgence occurs mostly in urban environments, which are naturally very heterogeneous, particularly due to population circulation. In this scenario, there is an increasing interest in both multi-patch and multi-group models for such diseases. In this work, we study the dynamics of a vector borne disease within a class of multi-group models that extends the classical Bailey-Dietz model. This class includes many of the proposed models in the literature, and it can accommodate various functional forms of the infection force. For such models, the vector-host/host-vector contact network topology gives rise to a bipartite graph which has different properties from the ones usually found in directly transmitted diseases. Under the assumption that the contact network is strongly connected, we can define the basic reproductive number $\mathcal{R}_0$ and show that this system has only two equilibria: the so called disease free equilibrium (DFE); and a unique interior equilibrium---usually termed the endemic equilibrium (EE)---that exists if, and only if, $\mathcal{R}_0>1$. We also show that, if $\mathcal{R}_0\leq1$, then the DFE equilibrium is globally asymptotically stable, while when $\mathcal{R}_0>1$, we have that the EE is globally asymptotically stable. |
2008.05937 | Efim Pelinovsky | Efim Pelinovsky, Andrey Kurkin, Oxana Kurkina, Maria Kokoulina and
Anastasia Epifanova | Logistic equation and COVID-19 | Submitted for Chaos, Solitons and Fractals | null | 10.1016/j.chaos.2020.110241 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The generalized logistic equation is used to interpret the COVID-19 epidemic
data in several countries: Austria, Switzerland, the Netherlands, Italy, Turkey
and South Korea. The model coefficients are calculated: the growth rate and the
expected number of infected people, as well as the exponent indexes in the
generalized logistic equation. It is shown that the dependence of the number of
the infected people on time is well described on average by the logistic curve
(within the framework of a simple or generalized logistic equation) with a
determination coefficient exceeding 0.8. At the same time, the dependence of
the number of the infected people per day on time has a very uneven character
and can be described very roughly by the logistic curve. To describe it, it is
necessary to take into account the dependence of the model coefficients on time
or on the total number of cases. Variations, for example, of the growth rate
can reach 60%. The variability spectra of the coefficients have characteristic
peaks at periods of several days, which corresponds to the observed serial
intervals. The use of the stochastic logistic equation is proposed to estimate
the number of probable peaks in the coronavirus incidence.
| [
{
"created": "Thu, 13 Aug 2020 14:46:10 GMT",
"version": "v1"
}
] | 2021-02-03 | [
[
"Pelinovsky",
"Efim",
""
],
[
"Kurkin",
"Andrey",
""
],
[
"Kurkina",
"Oxana",
""
],
[
"Kokoulina",
"Maria",
""
],
[
"Epifanova",
"Anastasia",
""
]
] | The generalized logistic equation is used to interpret the COVID-19 epidemic data in several countries: Austria, Switzerland, the Netherlands, Italy, Turkey and South Korea. The model coefficients are calculated: the growth rate and the expected number of infected people, as well as the exponent indexes in the generalized logistic equation. It is shown that the dependence of the number of the infected people on time is well described on average by the logistic curve (within the framework of a simple or generalized logistic equation) with a determination coefficient exceeding 0.8. At the same time, the dependence of the number of the infected people per day on time has a very uneven character and can be described very roughly by the logistic curve. To describe it, it is necessary to take into account the dependence of the model coefficients on time or on the total number of cases. Variations, for example, of the growth rate can reach 60%. The variability spectra of the coefficients have characteristic peaks at periods of several days, which corresponds to the observed serial intervals. The use of the stochastic logistic equation is proposed to estimate the number of probable peaks in the coronavirus incidence. |
1406.5619 | Peter Ashcroft | Peter Ashcroft, Philipp M Altrock, and Tobias Galla | Fixation in finite populations evolving in fluctuating environments | Published in J. R. Soc. Interface. 30 pages, 5 figures | Ashcroft P, Altrock PM, Galla T. 2014 Fixation in finite
populations evolving in fluctuating environments. J. R. Soc. Interface 11:
20140663 | 10.1098/rsif.2014.0663 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The environment in which a population evolves can have a crucial impact on
selection. We study evolutionary dynamics in finite populations of fixed size
in a changing environment. The population dynamics are driven by birth and
death events. The rates of these events may vary in time depending on the state
of the environment, which follows an independent Markov process. We develop a
general theory for the fixation probability of a mutant in a population of
wild-types, and for mean unconditional and conditional fixation times. We apply
our theory to evolutionary games for which the payoff structure varies in time.
The mutant can exploit the environmental noise; a dynamic environment that
switches between two states can lead to a probability of fixation that is
higher than in any of the individual environmental states. We provide an
intuitive interpretation of this surprising effect. We also investigate
stationary distributions when mutations are present in the dynamics. In this
regime, we find two approximations of the stationary measure. One works well
for rapid switching, the other for slowly fluctuating environments.
| [
{
"created": "Sat, 21 Jun 2014 14:45:02 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Aug 2014 11:38:25 GMT",
"version": "v2"
}
] | 2014-09-01 | [
[
"Ashcroft",
"Peter",
""
],
[
"Altrock",
"Philipp M",
""
],
[
"Galla",
"Tobias",
""
]
] | The environment in which a population evolves can have a crucial impact on selection. We study evolutionary dynamics in finite populations of fixed size in a changing environment. The population dynamics are driven by birth and death events. The rates of these events may vary in time depending on the state of the environment, which follows an independent Markov process. We develop a general theory for the fixation probability of a mutant in a population of wild-types, and for mean unconditional and conditional fixation times. We apply our theory to evolutionary games for which the payoff structure varies in time. The mutant can exploit the environmental noise; a dynamic environment that switches between two states can lead to a probability of fixation that is higher than in any of the individual environmental states. We provide an intuitive interpretation of this surprising effect. We also investigate stationary distributions when mutations are present in the dynamics. In this regime, we find two approximations of the stationary measure. One works well for rapid switching, the other for slowly fluctuating environments. |
2209.06318 | Camille Marchet | Camille Marchet | Sneak peek at the tig sequences: useful sequences built from nucleic
acid data | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This manuscript is a tutorial on tig sequences that emerged after the name
"contig", and are of diverse purposes in sequence bioinformatics. We review
these different sequences (unitigs, simplitigs, monotigs, omnitigs to cite a
few), give intuitions of their construction and interest, and provide some
examples of applications.
| [
{
"created": "Tue, 13 Sep 2022 21:47:31 GMT",
"version": "v1"
}
] | 2022-09-15 | [
[
"Marchet",
"Camille",
""
]
] | This manuscript is a tutorial on tig sequences that emerged after the name "contig", and are of diverse purposes in sequence bioinformatics. We review these different sequences (unitigs, simplitigs, monotigs, omnitigs to cite a few), give intuitions of their construction and interest, and provide some examples of applications. |
0705.2907 | Tom Chou | Tom Chou | Peeling and Sliding in Nucleosome Repositioning | 5 pp, 4 figs | Phys. Rev. Lett., 99, 058105, (2007) | 10.1103/PhysRevLett.99.058105 | null | q-bio.SC q-bio.BM | null | We investigate the mechanisms of histone sliding and detachment with a
stochastic model that couples thermally-induced, passive histone sliding with
active motor-driven histone unwrapping. Analysis of a passive loop or twist
defect-mediated histone sliding mechanism shows that diffusional sliding is
enhanced as larger portions of the DNA are peeled off the histone. The mean
times to histone detachment and the mean distance traveled by the motor complex
prior to histone detachment are computed as functions of the intrinsic speed of
the motor. Fast motors preferentially induce detachment over sliding. However,
for a fixed motor speed, increasing the histone-DNA affinity (and thereby
decreasing the passive sliding rate) increases the mean distance traveled by
the motor.
| [
{
"created": "Mon, 21 May 2007 03:03:34 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Chou",
"Tom",
""
]
] | We investigate the mechanisms of histone sliding and detachment with a stochastic model that couples thermally-induced, passive histone sliding with active motor-driven histone unwrapping. Analysis of a passive loop or twist defect-mediated histone sliding mechanism shows that diffusional sliding is enhanced as larger portions of the DNA are peeled off the histone. The mean times to histone detachment and the mean distance traveled by the motor complex prior to histone detachment are computed as functions of the intrinsic speed of the motor. Fast motors preferentially induce detachment over sliding. However, for a fixed motor speed, increasing the histone-DNA affinity (and thereby decreasing the passive sliding rate) increases the mean distance traveled by the motor.
1504.00826 | Simona Constantinescu | Simona Constantinescu, Ewa Szczurek, Pejman Mohammadi, J\"org
Rahnenf\"uhrer and Niko Beerenwinkel | TiMEx: A Waiting Time Model for Mutually Exclusive Groups of Cancer
Alterations | Paper accepted for oral presentation at RECOMB CCB Satellite Meeting
(April 2015, Warsaw) | null | null | null | q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite recent technological advances in genomic sciences, our understanding
of cancer progression and its driving genetic alterations remains incomplete.
Here, we introduce TiMEx, a generative probabilistic model for detecting
patterns of various degrees of mutual exclusivity across genetic alterations,
which can indicate pathways involved in cancer progression. TiMEx explicitly
accounts for the temporal interplay between the waiting times to alterations
and the observation time. In simulation studies, we show that our model
outperforms previous methods for detecting mutual exclusivity. On large-scale
biological datasets, TiMEx identifies gene groups with strong functional
biological relevance, while also proposing many new candidates for biological
validation. TiMEx possesses several advantages over previous methods, including
a novel generative probabilistic model of tumorigenesis, direct estimation of
the probability of mutual exclusivity interaction, computational efficiency, as
well as high sensitivity in detecting gene groups involving low-frequency
alterations. R code is available at www.cbg.bsse.ethz.ch/software/TiMEx.
| [
{
"created": "Fri, 3 Apr 2015 11:58:10 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Apr 2015 22:25:35 GMT",
"version": "v2"
},
{
"created": "Tue, 27 Oct 2015 09:24:31 GMT",
"version": "v3"
}
] | 2015-10-28 | [
[
"Constantinescu",
"Simona",
""
],
[
"Szczurek",
"Ewa",
""
],
[
"Mohammadi",
"Pejman",
""
],
[
"Rahnenführer",
"Jörg",
""
],
[
"Beerenwinkel",
"Niko",
""
]
] | Despite recent technological advances in genomic sciences, our understanding of cancer progression and its driving genetic alterations remains incomplete. Here, we introduce TiMEx, a generative probabilistic model for detecting patterns of various degrees of mutual exclusivity across genetic alterations, which can indicate pathways involved in cancer progression. TiMEx explicitly accounts for the temporal interplay between the waiting times to alterations and the observation time. In simulation studies, we show that our model outperforms previous methods for detecting mutual exclusivity. On large-scale biological datasets, TiMEx identifies gene groups with strong functional biological relevance, while also proposing many new candidates for biological validation. TiMEx possesses several advantages over previous methods, including a novel generative probabilistic model of tumorigenesis, direct estimation of the probability of mutual exclusivity interaction, computational efficiency, as well as high sensitivity in detecting gene groups involving low-frequency alterations. R code is available at www.cbg.bsse.ethz.ch/software/TiMEx. |
2003.07140 | Alan D. Rendall | Alan D. Rendall and Pia Brechmann | Unbounded solutions of models for glycolysis | 21 pages | null | null | null | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Selkov oscillator, a simple description of glycolysis, is a system of two
ordinary differential equations with mass action kinetics. In previous work the
authors established several properties of the solutions of this system. In the
present paper we extend this to prove that this system has solutions which
diverge to infinity in an oscillatory manner at late times. This system was
originally derived from another system with Michaelis-Menten kinetics. It is
shown that the Michaelis-Menten system, like that with mass action, has
solutions which diverge to infinity in a monotone manner. It is also shown to
admit subcritical Hopf bifurcations and thus unstable periodic solutions. We
discuss to what extent the unbounded solutions cast doubt on the biological
relevance of the Selkov oscillator and compare it with other models in the
literature.
| [
{
"created": "Mon, 16 Mar 2020 12:18:45 GMT",
"version": "v1"
}
] | 2020-03-17 | [
[
"Rendall",
"Alan D.",
""
],
[
"Brechmann",
"Pia",
""
]
] | The Selkov oscillator, a simple description of glycolysis, is a system of two ordinary differential equations with mass action kinetics. In previous work the authors established several properties of the solutions of this system. In the present paper we extend this to prove that this system has solutions which diverge to infinity in an oscillatory manner at late times. This system was originally derived from another system with Michaelis-Menten kinetics. It is shown that the Michaelis-Menten system, like that with mass action, has solutions which diverge to infinity in a monotone manner. It is also shown to admit subcritical Hopf bifurcations and thus unstable periodic solutions. We discuss to what extent the unbounded solutions cast doubt on the biological relevance of the Selkov oscillator and compare it with other models in the literature. |
1412.5995 | Panagiotis Papastamoulis | James Hensman, Panagiotis Papastamoulis, Peter Glaus, Antti Honkela
and Magnus Rattray | Fast and accurate approximate inference of transcript expression from
RNA-seq data | Main changes: (a) shuffling of reads simulated from spanki and repeat
the analysis for sailfish and eXpress. Now both methods yield better point
estimates. (b) including the Markov chain Monte Carlo sampler of rsem
(RSEM-PME). (c) including the Kallisto method (d) adding alternative measures
of transcript expression (TPM) and filtering out low expressed transcripts
(supplementary material). arXiv admin note: substantial text overlap with
arXiv:1308.5953 | null | null | null | q-bio.QM q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Assigning RNA-seq reads to their transcript of origin is a
fundamental task in transcript expression estimation. Where ambiguities in
assignments exist due to transcripts sharing sequence, e.g. alternative
isoforms or alleles, the problem can be solved through probabilistic inference.
Bayesian methods have been shown to provide accurate transcript abundance
estimates compared to competing methods. However, exact Bayesian inference is
intractable and approximate methods such as Markov chain Monte Carlo (MCMC) and
Variational Bayes (VB) are typically used. While providing a high degree of
accuracy and modelling flexibility, standard implementations can be
prohibitively slow for large datasets and complex transcriptome annotations.
Results: We propose a novel approximate inference scheme based on VB and
apply it to an existing model of transcript expression inference from RNA-seq
data. Recent advances in VB algorithmics are used to improve the convergence of
the algorithm beyond the standard Variational Bayes Expectation Maximisation
(VBEM) algorithm. We apply our algorithm to simulated and biological datasets,
demonstrating a significant increase in speed with only very small loss in
accuracy of expression level estimation. We carry out a comparative study
against seven popular alternative methods and demonstrate that our new
algorithm provides excellent accuracy and inter-replicate consistency while
remaining competitive in computation time.
Availability: The methods were implemented in R and C++, and are available as
part of the BitSeq project at \url{https://github.com/BitSeq}. The method is
also available through the BitSeq Bioconductor package. The source code to
reproduce all simulation results can be accessed via
\url{https://github.com/BitSeq/BitSeqVB_benchmarking}.
| [
{
"created": "Thu, 18 Dec 2014 18:48:48 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Jan 2015 09:55:43 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Jun 2015 13:13:09 GMT",
"version": "v3"
}
] | 2015-07-01 | [
[
"Hensman",
"James",
""
],
[
"Papastamoulis",
"Panagiotis",
""
],
[
"Glaus",
"Peter",
""
],
[
"Honkela",
"Antti",
""
],
[
"Rattray",
"Magnus",
""
]
] | Motivation: Assigning RNA-seq reads to their transcript of origin is a fundamental task in transcript expression estimation. Where ambiguities in assignments exist due to transcripts sharing sequence, e.g. alternative isoforms or alleles, the problem can be solved through probabilistic inference. Bayesian methods have been shown to provide accurate transcript abundance estimates compared to competing methods. However, exact Bayesian inference is intractable and approximate methods such as Markov chain Monte Carlo (MCMC) and Variational Bayes (VB) are typically used. While providing a high degree of accuracy and modelling flexibility, standard implementations can be prohibitively slow for large datasets and complex transcriptome annotations. Results: We propose a novel approximate inference scheme based on VB and apply it to an existing model of transcript expression inference from RNA-seq data. Recent advances in VB algorithmics are used to improve the convergence of the algorithm beyond the standard Variational Bayes Expectation Maximisation (VBEM) algorithm. We apply our algorithm to simulated and biological datasets, demonstrating a significant increase in speed with only very small loss in accuracy of expression level estimation. We carry out a comparative study against seven popular alternative methods and demonstrate that our new algorithm provides excellent accuracy and inter-replicate consistency while remaining competitive in computation time. Availability: The methods were implemented in R and C++, and are available as part of the BitSeq project at \url{https://github.com/BitSeq}. The method is also available through the BitSeq Bioconductor package. The source code to reproduce all simulation results can be accessed via \url{https://github.com/BitSeq/BitSeqVB_benchmarking}. |
1608.07499 | Ankur Patel | Ankur Patel, Grishma joshi, Rupali Ugile | Stem Cell Therapy for Alzheimer's Disease | null | null | null | null | q-bio.NC q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The loss of neuronal cells in the central nervous system occurs in
numerous neurodegenerative illnesses. Alzheimer's disease (AD) is a complex,
irreversible, progressive neurodegenerative disease. It is the main cause of
age-related dementia, affecting roughly 5.3 million individuals in the United
States alone. AD is a common debilitating illness in individuals over 65
years, causing disability characterized by decline in memory, inability to
learn and carry out daily activities, and cognitive impairment, and it reduces
the quality of life of patients. Pathologic hallmarks of AD are abnormal
accumulations of specific proteins, called beta-amyloid "plaques" and tau
"tangles", in the brain. However, current treatments for AD only relieve
symptoms and are palliative rather than curative, and several promising drug
candidates have failed in recent clinical trials. There is consequently a
critical need to improve our understanding of the pathogenesis of this
disease, creating new and innovative predictive models alongside effective
treatments. Recently, stem cell therapy has been shown to be a potential
approach to various illnesses, including neurodegenerative disorders. Given
the widespread nature of AD pathology, stem cell replacement strategies have
been regarded as an extraordinarily difficult treatment approach. Stem cells
may also offer an effective new way to model and study AD. Patient-derived
induced pluripotent stem cells (iPSCs), for instance, may advance our
understanding of disease mechanisms. In this review we examine the potential
of stem cells to assist in these challenging endeavors.
| [
{
"created": "Fri, 26 Aug 2016 15:54:07 GMT",
"version": "v1"
}
] | 2016-08-29 | [
[
"Patel",
"Ankur",
""
],
[
"joshi",
"Grishma",
""
],
[
"Ugile",
"Rupali",
""
]
] | The loss of neuronal cells in the central nervous system occurs in numerous neurodegenerative illnesses. Alzheimer's disease (AD) is a complex, irreversible, progressive neurodegenerative disease. It is the main cause of age-related dementia, affecting roughly 5.3 million individuals in the United States alone. AD is a common debilitating illness in individuals over 65 years, causing disability characterized by decline in memory, inability to learn and carry out daily activities, and cognitive impairment, and it reduces the quality of life of patients. Pathologic hallmarks of AD are abnormal accumulations of specific proteins, called beta-amyloid "plaques" and tau "tangles", in the brain. However, current treatments for AD only relieve symptoms and are palliative rather than curative, and several promising drug candidates have failed in recent clinical trials. There is consequently a critical need to improve our understanding of the pathogenesis of this disease, creating new and innovative predictive models alongside effective treatments. Recently, stem cell therapy has been shown to be a potential approach to various illnesses, including neurodegenerative disorders. Given the widespread nature of AD pathology, stem cell replacement strategies have been regarded as an extraordinarily difficult treatment approach. Stem cells may also offer an effective new way to model and study AD. Patient-derived induced pluripotent stem cells (iPSCs), for instance, may advance our understanding of disease mechanisms. In this review we examine the potential of stem cells to assist in these challenging endeavors.
2309.00389 | Claudius Kratochwil | Claudius F. Kratochwil, Muktai Kuwalekar, Jan Haege, Nidal Karagic | Building and Managing a Tropical Fish Facility: A Do-It-Yourself Guide | 14 pages, 9 figures, 2 tables | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | At the core of most research in zoological disciplines, ranging from
developmental biology to genetics to behavioral biology, is the ability to keep
animals in captivity. While facilities for traditional model organisms often
benefit from well-established designs, construction of a facility for less
commonly studied organisms can present a challenge. Here, we detail the process
of designing, constructing, and operating a specialized 10,000-liter aquatic
facility dedicated to housing cichlid fishes for research purposes. The
facility, comprising 42 aquaria capable of division into up to 126
compartments, a flow-through rack for juveniles, egg tumblers for eggs and
embryos, and a microinjection setup, provides a comprehensive environment for
all life stages of cichlid fishes. We anticipate that a similar design can
also be used for other tropical teleost fishes. This resource is designed to
promote increased efficiency and success in cichlid fish breeding and research,
thereby offering significant insights for aquatic research labs seeking to
build or optimize their own infrastructures.
| [
{
"created": "Fri, 1 Sep 2023 11:06:10 GMT",
"version": "v1"
}
] | 2023-09-04 | [
[
"Kratochwil",
"Claudius F.",
""
],
[
"Kuwalekar",
"Muktai",
""
],
[
"Haege",
"Jan",
""
],
[
"Karagic",
"Nidal",
""
]
] | At the core of most research in zoological disciplines, ranging from developmental biology to genetics to behavioral biology, is the ability to keep animals in captivity. While facilities for traditional model organisms often benefit from well-established designs, construction of a facility for less commonly studied organisms can present a challenge. Here, we detail the process of designing, constructing, and operating a specialized 10,000-liter aquatic facility dedicated to housing cichlid fishes for research purposes. The facility, comprising 42 aquaria capable of division into up to 126 compartments, a flow-through rack for juveniles, egg tumblers for eggs and embryos, and a microinjection setup, provides a comprehensive environment for all life stages of cichlid fishes. We anticipate that a similar design can also be used for other tropical teleost fishes. This resource is designed to promote increased efficiency and success in cichlid fish breeding and research, thereby offering significant insights for aquatic research labs seeking to build or optimize their own infrastructures.
1711.00773 | Takahiro Wada | Takahiro Wada, Keigo Yoshida | Effect of passengers' active head tilt and opening/closure of eyes on
motion sickness in lateral acceleration environment of cars | null | Ergonomics, 2016 | 10.1080/00140139.2015.1109713 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study examined the effect of passengers' active head-tilt and
eyes-open/closed conditions on the severity of motion sickness in the lateral
acceleration environment of cars. In the centrifugal head-tilt condition,
participants intentionally tilted their heads towards the centrifugal force,
whereas in the centripetal head-tilt condition, the participants tilted their
heads against the centrifugal acceleration. The eyes-open and eyes-closed cases
were investigated for each head-tilt condition. In the experimental runs, the
sickness rating in the centripetal head-tilt condition was significantly lower
than that in the centrifugal head-tilt condition. Moreover, the sickness rating
in the eyes-open condition was significantly lower than that in the eyes-closed
condition. The results suggest that an active head-tilt motion against the
centrifugal acceleration reduces the severity of motion sickness both in the
eyes-open and eyes-closed conditions. They also demonstrate that the eyes-open
condition significantly reduces the motion sickness even when the head-tilt
strategy is used.
| [
{
"created": "Thu, 2 Nov 2017 15:01:01 GMT",
"version": "v1"
}
] | 2017-11-03 | [
[
"Wada",
"Takahiro",
""
],
[
"Yoshida",
"Keigo",
""
]
] | This study examined the effect of passengers' active head-tilt and eyes-open/closed conditions on the severity of motion sickness in the lateral acceleration environment of cars. In the centrifugal head-tilt condition, participants intentionally tilted their heads towards the centrifugal force, whereas in the centripetal head-tilt condition, the participants tilted their heads against the centrifugal acceleration. The eyes-open and eyes-closed cases were investigated for each head-tilt condition. In the experimental runs, the sickness rating in the centripetal head-tilt condition was significantly lower than that in the centrifugal head-tilt condition. Moreover, the sickness rating in the eyes-open condition was significantly lower than that in the eyes-closed condition. The results suggest that an active head-tilt motion against the centrifugal acceleration reduces the severity of motion sickness both in the eyes-open and eyes-closed conditions. They also demonstrate that the eyes-open condition significantly reduces the motion sickness even when the head-tilt strategy is used. |
1909.00042 | Chen Jia | Chen Jia, Le Yi Wang, George G. Yin, Michael Q. Zhang | Single-cell stochastic gene expression kinetics with coupled
positive-plus-negative feedback | 27 pages, 7 figures | Phys. Rev. E 100, 052406 (2019) | 10.1103/PhysRevE.100.052406 | null | q-bio.MN math.PR physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here we investigate single-cell stochastic gene expression kinetics in a
minimal coupled gene circuit with positive-plus-negative feedback. A triphasic
stochastic bifurcation upon the increasing ratio of the positive and negative
feedback strengths is observed, which reveals a strong synergistic interaction
between positive and negative feedback loops. We discover that coupled
positive-plus-negative feedback amplifies gene expression mean but reduces gene
expression noise over a wide range of feedback strengths when promoter
switching is relatively slow, stabilizing gene expression around a relatively
high level. In addition, we study two types of macroscopic limits of the
discrete chemical master equation model: the Kurtz limit applies to proteins
with large burst frequencies and the L\'{e}vy limit applies to proteins with
large burst sizes. We derive the analytic steady-state distributions of the
protein abundance in a coupled gene circuit for both the discrete model and its
two macroscopic limits, generalizing the results obtained in [Chaos 26:043108,
2016]. We also obtain the analytic time-dependent protein distribution for the
classical Friedman-Cai-Xie random bursting model proposed in [Phys. Rev. Lett.
97:168302, 2006]. Our analytic results are further applied to study the
structure of gene expression noise in a coupled gene circuit and a complete
decomposition of noise in terms of five different biophysical origins is
provided.
| [
{
"created": "Fri, 30 Aug 2019 19:21:01 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Oct 2019 19:22:30 GMT",
"version": "v2"
}
] | 2019-11-27 | [
[
"Jia",
"Chen",
""
],
[
"Wang",
"Le Yi",
""
],
[
"Yin",
"George G.",
""
],
[
"Zhang",
"Michael Q.",
""
]
] | Here we investigate single-cell stochastic gene expression kinetics in a minimal coupled gene circuit with positive-plus-negative feedback. A triphasic stochastic bifurcation upon the increasing ratio of the positive and negative feedback strengths is observed, which reveals a strong synergistic interaction between positive and negative feedback loops. We discover that coupled positive-plus-negative feedback amplifies gene expression mean but reduces gene expression noise over a wide range of feedback strengths when promoter switching is relatively slow, stabilizing gene expression around a relatively high level. In addition, we study two types of macroscopic limits of the discrete chemical master equation model: the Kurtz limit applies to proteins with large burst frequencies and the L\'{e}vy limit applies to proteins with large burst sizes. We derive the analytic steady-state distributions of the protein abundance in a coupled gene circuit for both the discrete model and its two macroscopic limits, generalizing the results obtained in [Chaos 26:043108, 2016]. We also obtain the analytic time-dependent protein distribution for the classical Friedman-Cai-Xie random bursting model proposed in [Phys. Rev. Lett. 97:168302, 2006]. Our analytic results are further applied to study the structure of gene expression noise in a coupled gene circuit and a complete decomposition of noise in terms of five different biophysical origins is provided. |
1709.09748 | Rodrigo Felipe de Oliveira Pena | Rodrigo F.O. Pena, Michael A. Zaks, Antonio C. Roque | Dynamics of spontaneous activity in random networks with multiple neuron
subtypes and synaptic noise | 30 pages, 19 figures | Journal of Computational Neuroscience (2018) | 10.1007/s10827-018-0688-6 | null | q-bio.NC nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spontaneous cortical population activity exhibits a multitude of oscillatory
patterns, which often display synchrony during slow-wave sleep or under certain
anesthetics and stay asynchronous during quiet wakefulness. The mechanisms
behind these cortical states and transitions among them are not completely
understood. Here we study spontaneous population activity patterns in random
networks of spiking neurons of mixed types modeled by Izhikevich equations.
Neurons are coupled by conductance-based synapses subject to synaptic noise. We
localize the population activity patterns on the parameter diagram spanned by
the relative inhibitory synaptic strength and the magnitude of synaptic noise.
In the absence of noise, networks display transient activity patterns, either
oscillatory or at constant level. The effect of noise is to turn transient
patterns into persistent ones: for weak noise, all activity patterns are
asynchronous non-oscillatory independently of synaptic strengths; for stronger
noise, patterns have oscillatory and synchrony characteristics that depend on
the relative inhibitory synaptic strength. In the region of parameter space
where inhibitory synaptic strength exceeds the excitatory synaptic strength and
for moderate noise magnitudes networks feature intermittent switches between
oscillatory and quiescent states with characteristics similar to those of
synchronous and asynchronous cortical states, respectively. We explain these
oscillatory and quiescent patterns by combining a phenomenological global
description of the network state with local descriptions of individual neurons
in their partial phase spaces. Our results point to a bridge from events at the
molecular scale of synapses to the cellular scale of individual neurons to the
collective scale of neuronal populations.
| [
{
"created": "Wed, 27 Sep 2017 22:02:38 GMT",
"version": "v1"
},
{
"created": "Sun, 20 May 2018 01:16:06 GMT",
"version": "v2"
}
] | 2018-06-20 | [
[
"Pena",
"Rodrigo F. O.",
""
],
[
"Zaks",
"Michael A.",
""
],
[
"Roque",
"Antonio C.",
""
]
] | Spontaneous cortical population activity exhibits a multitude of oscillatory patterns, which often display synchrony during slow-wave sleep or under certain anesthetics and stay asynchronous during quiet wakefulness. The mechanisms behind these cortical states and transitions among them are not completely understood. Here we study spontaneous population activity patterns in random networks of spiking neurons of mixed types modeled by Izhikevich equations. Neurons are coupled by conductance-based synapses subject to synaptic noise. We localize the population activity patterns on the parameter diagram spanned by the relative inhibitory synaptic strength and the magnitude of synaptic noise. In absence of noise, networks display transient activity patterns, either oscillatory or at constant level. The effect of noise is to turn transient patterns into persistent ones: for weak noise, all activity patterns are asynchronous non-oscillatory independently of synaptic strengths; for stronger noise, patterns have oscillatory and synchrony characteristics that depend on the relative inhibitory synaptic strength. In the region of parameter space where inhibitory synaptic strength exceeds the excitatory synaptic strength and for moderate noise magnitudes networks feature intermittent switches between oscillatory and quiescent states with characteristics similar to those of synchronous and asynchronous cortical states, respectively. We explain these oscillatory and quiescent patterns by combining a phenomenological global description of the network state with local descriptions of individual neurons in their partial phase spaces. Our results point to a bridge from events at the molecular scale of synapses to the cellular scale of individual neurons to the collective scale of neuronal populations. |
2402.11055 | Jessica Royer | Jessica Royer, Casey Paquola, Sofie L. Valk, Matthias Kirschner,
Seok-Jun Hong, Bo-yong Park, Richard A.I. Bethlehem, Robert Leech, B. T.
Thomas Yeo, Elizabeth Jefferies, Jonathan Smallwood, Daniel Margulies, Boris
C. Bernhardt | Gradients of brain organization: Smooth sailing from methods development
to user community | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | Multimodal neuroimaging grants a powerful in vivo window into the structure
and function of the human brain. Recent methodological and conceptual advances
have enabled investigations of the interplay between large-scale spatial
trends, or gradients, in brain structure and function, offering a framework to
unify principles of brain organization across multiple scales. Strong community
enthusiasm for these techniques has been instrumental in their widespread
adoption and implementation to answer key questions in neuroscience. Following
a brief review of current literature on this framework, this perspective paper
will highlight how pragmatic steps aiming to make gradient methods more
accessible to the community propelled these techniques to the forefront of
neuroscientific inquiry. More specifically, we will emphasize how interest in
gradient methods was catalyzed by data sharing, open-source software
development, as well as the organization of dedicated workshops led by a
diverse team of early career researchers. To this end, we argue that the
growing excitement for brain gradients is the result of coordinated and
consistent efforts to build an inclusive community and can serve as a case in
point for future innovations and conceptual advances in neuroinformatics. We
close this perspective paper by discussing challenges for the continuous
refinement of neuroscientific theory, methodological innovation, and real-world
translation to maintain our collective progress towards integrated models of
brain organization.
| [
{
"created": "Fri, 16 Feb 2024 20:10:35 GMT",
"version": "v1"
}
] | 2024-02-20 | [
[
"Royer",
"Jessica",
""
],
[
"Paquola",
"Casey",
""
],
[
"Valk",
"Sofie L.",
""
],
[
"Kirschner",
"Matthias",
""
],
[
"Hong",
"Seok-Jun",
""
],
[
"Park",
"Bo-yong",
""
],
[
"Bethlehem",
"Richard A. I.",
""
... | Multimodal neuroimaging grants a powerful in vivo window into the structure and function of the human brain. Recent methodological and conceptual advances have enabled investigations of the interplay between large-scale spatial trends, or gradients, in brain structure and function, offering a framework to unify principles of brain organization across multiple scales. Strong community enthusiasm for these techniques has been instrumental in their widespread adoption and implementation to answer key questions in neuroscience. Following a brief review of current literature on this framework, this perspective paper will highlight how pragmatic steps aiming to make gradient methods more accessible to the community propelled these techniques to the forefront of neuroscientific inquiry. More specifically, we will emphasize how interest for gradient methods was catalyzed by data sharing, open-source software development, as well as the organization of dedicated workshops led by a diverse team of early career researchers. To this end, we argue that the growing excitement for brain gradients is the result of coordinated and consistent efforts to build an inclusive community and can serve as a case in point for future innovations and conceptual advances in neuroinformatics. We close this perspective paper by discussing challenges for the continuous refinement of neuroscientific theory, methodological innovation, and real-world translation to maintain our collective progress towards integrated models of brain organization. |
0806.3274 | Leonardo Varuzza | Leonardo Varuzza, Arthur Gruber, Carlos A. de B. Pereira | Significance tests for comparing digital gene expression profiles | 16 pages, 5 figures. Implementations of both tests are available
under the GNU General Public License at http://code.google.com/p/kempbasu | null | null | null | q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most of the statistical tests currently used to detect differentially
expressed genes are based on asymptotic results, and perform poorly for low
expression tags. Another problem is the common use of a single canonical cutoff
for the significance level (p-value) of all the tags, without taking into
consideration the type II error and the highly variable character of the sample
size of the tags.
This work reports the development of two significance tests for the
comparison of digital expression profiles, based on frequentist and Bayesian
points of view, respectively. Both tests are exact, and do not use any
asymptotic considerations, thus producing more correct results for low
frequency tags than the chi-square test. The frequentist test uses a
tag-customized critical level which minimizes a linear combination of type I
and type II errors. A comparison of the Bayesian and the frequentist tests
revealed that they are linked by a Beta distribution function. These tests can
be used alone or in conjunction, and represent an improvement over the
currently available methods for comparing digital profiles.
| [
{
"created": "Thu, 19 Jun 2008 20:09:01 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Jun 2008 01:39:02 GMT",
"version": "v2"
},
{
"created": "Mon, 4 Aug 2008 01:56:44 GMT",
"version": "v3"
}
] | 2008-08-04 | [
[
"Varuzza",
"Leonardo",
""
],
[
"Gruber",
"Arthur",
""
],
[
"Pereira",
"Carlos A. de B.",
""
]
] | Most of the statistical tests currently used to detect differentially expressed genes are based on asymptotic results, and perform poorly for low expression tags. Another problem is the common use of a single canonical cutoff for the significance level (p-value) of all the tags, without taking into consideration the type II error and the highly variable character of the sample size of the tags. This work reports the development of two significance tests for the comparison of digital expression profiles, based on frequentist and Bayesian points of view, respectively. Both tests are exact, and do not use any asymptotic considerations, thus producing more correct results for low frequency tags than the chi-square test. The frequentist test uses a tag-customized critical level which minimizes a linear combination of type I and type II errors. A comparison of the Bayesian and the frequentist tests revealed that they are linked by a Beta distribution function. These tests can be used alone or in conjunction, and represent an improvement over the currently available methods for comparing digital profiles. |
2011.05368 | Frank Van Bussel | Zeina S. Khan, Frank Van Bussel and Fazle Hussain | Measuring the Change in European and US COVID-19 Death Rates | null | null | null | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | By fitting a compartment ODE model for Covid-19 propagation to cumulative
case and death data for US states and European countries, we find that the case
mortality rate seems to have decreased by at least 80% in most of the US and at
least 90% in most of Europe. These are much larger and faster changes than
reported in empirical studies, such as the 18% decrease in mortality found for
the New York City hospital system from March to August 2020 (Horwitz et al,
Trends in Covid-19 risk-adjusted mortality rates, J. Hosp. Med. 2020). Our
reported decreases surprisingly do not have strong correlations to other model
parameters (such as contact rate) or other standard state/national metrics such
as population density, GDP, and median age. Almost all the decreases occurred
between mid-April and mid-June, which unexpectedly corresponds to the time when
many state and national lockdowns were released resulting in surges of new
cases. Several plausible causes for this drop are examined, such as
improvements in treatment, face mask wearing, a new virus strain, and
potentially changing demographics of infected patients, but none are
overwhelmingly convincing given the currently available evidence.
| [
{
"created": "Tue, 10 Nov 2020 19:38:05 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Nov 2020 20:36:11 GMT",
"version": "v2"
},
{
"created": "Wed, 6 Jan 2021 23:36:21 GMT",
"version": "v3"
}
] | 2021-01-08 | [
[
"Khan",
"Zeina S.",
""
],
[
"Van Bussel",
"Frank",
""
],
[
"Hussain",
"Fazle",
""
]
] | By fitting a compartment ODE model for Covid-19 propagation to cumulative case and death data for US states and European countries, we find that the case mortality rate seems to have decreased by at least 80% in most of the US and at least 90% in most of Europe. These are much larger and faster changes than reported in empirical studies, such as the 18% decrease in mortality found for the New York City hospital system from March to August 2020 (Horwitz et al, Trends in Covid-19 risk-adjusted mortality rates, J. Hosp. Med. 2020). Our reported decreases surprisingly do not have strong correlations to other model parameters (such as contact rate) or other standard state/national metrics such as population density, GDP, and median age. Almost all the decreases occurred between mid-April and mid-June, which unexpectedly corresponds to the time when many state and national lockdowns were released resulting in surges of new cases. Several plausible causes for this drop are examined, such as improvements in treatment, face mask wearing, a new virus strain, and potentially changing demographics of infected patients, but none are overwhelmingly convincing given the currently available evidence. |
1402.6628 | Alexander Stewart | Alexander J. Stewart and Joshua B. Plotkin | The collapse of cooperation in evolving games | 33 pages, 13 figures | null | 10.1073/pnas.1408618111 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Game theory provides a quantitative framework for analyzing the behavior of
rational agents. The Iterated Prisoner's Dilemma in particular has become a
standard model for studying cooperation and cheating, with cooperation often
emerging as a robust outcome in evolving populations. Here we extend
evolutionary game theory by allowing players' strategies as well as their
payoffs to evolve in response to selection on heritable mutations. In nature,
many organisms engage in mutually beneficial interactions, and individuals may
seek to change the ratio of risk to reward for cooperation by altering the
resources they commit to cooperative interactions. To study this, we construct
a general framework for the co-evolution of strategies and payoffs in arbitrary
iterated games. We show that, as payoffs evolve, a trade-off between the
benefits and costs of cooperation precipitates a dramatic loss of cooperation
under the Iterated Prisoner's Dilemma, and eventually drives evolution away from
the Prisoner's Dilemma altogether. The collapse of cooperation is so extreme
that the average payoff in a population may decline, even as the potential
payoff for mutual cooperation increases. Our work offers a new perspective on
the Prisoner's Dilemma and its predictions for cooperation in natural
populations; and it provides a general framework to understand the co-evolution
of strategies and payoffs in iterated interactions.
| [
{
"created": "Wed, 26 Feb 2014 18:15:43 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Apr 2014 17:37:24 GMT",
"version": "v2"
}
] | 2015-06-18 | [
[
"Stewart",
"Alexander J.",
""
],
[
"Plotkin",
"Joshua B.",
""
]
] | Game theory provides a quantitative framework for analyzing the behavior of rational agents. The Iterated Prisoner's Dilemma in particular has become a standard model for studying cooperation and cheating, with cooperation often emerging as a robust outcome in evolving populations. Here we extend evolutionary game theory by allowing players' strategies as well as their payoffs to evolve in response to selection on heritable mutations. In nature, many organisms engage in mutually beneficial interactions, and individuals may seek to change the ratio of risk to reward for cooperation by altering the resources they commit to cooperative interactions. To study this, we construct a general framework for the co-evolution of strategies and payoffs in arbitrary iterated games. We show that, as payoffs evolve, a trade-off between the benefits and costs of cooperation precipitates a dramatic loss of cooperation under the Iterated Prisoner's Dilemma; and eventually to evolution away from the Prisoner's Dilemma altogether. The collapse of cooperation is so extreme that the average payoff in a population may decline, even as the potential payoff for mutual cooperation increases. Our work offers a new perspective on the Prisoner's Dilemma and its predictions for cooperation in natural populations; and it provides a general framework to understand the co-evolution of strategies and payoffs in iterated interactions. |
2401.03369 | Tianyi Jiang | Zeyu Wang, Tianyi Jiang, Jinhuan Wang, Qi Xuan | Multi-Modal Representation Learning for Molecular Property Prediction:
Sequence, Graph, Geometry | 8 pages, 3 figures | null | null | null | q-bio.MN cs.LG q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular property prediction refers to the task of labeling molecules with
some biochemical properties, playing a pivotal role in the drug discovery and
design process. Recently, with the advancement of machine learning, deep
learning-based molecular property prediction has emerged as a solution to the
resource-intensive nature of traditional methods, garnering significant
attention. Among them, molecular representation learning is the key factor for
molecular property prediction performance, and many sequence-based,
graph-based, and geometry-based methods have been proposed. However, the
majority of existing studies focus solely on one
modality for learning molecular representations, failing to comprehensively
capture molecular characteristics and information. In this paper, a novel
multi-modal representation learning model, which integrates the sequence,
graph, and geometry characteristics, is proposed for molecular property
prediction, called SGGRL. Specifically, we design a fusion layer to fuse the
representations of different modalities. Furthermore, to ensure consistency
across modalities, SGGRL is trained to maximize the similarity of
representations for the same molecule while minimizing similarity for different
molecules. To verify the effectiveness of SGGRL, seven molecular datasets and
several baselines are used for evaluation and comparison. The experimental
results demonstrate that SGGRL consistently outperforms the baselines in most
cases. This further underscores the capability of SGGRL to comprehensively
capture molecular information. Overall, the proposed SGGRL model showcases its
potential to revolutionize molecular property prediction by leveraging
multi-modal representation learning to extract diverse and comprehensive
molecular insights. Our code is released at
https://github.com/Vencent-Won/SGGRL.
| [
{
"created": "Sun, 7 Jan 2024 02:18:00 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Jan 2024 02:20:04 GMT",
"version": "v2"
}
] | 2024-01-10 | [
[
"Wang",
"Zeyu",
""
],
[
"Jiang",
"Tianyi",
""
],
[
"Wang",
"Jinhuan",
""
],
[
"Xuan",
"Qi",
""
]
] | Molecular property prediction refers to the task of labeling molecules with some biochemical properties, playing a pivotal role in the drug discovery and design process. Recently, with the advancement of machine learning, deep learning-based molecular property prediction has emerged as a solution to the resource-intensive nature of traditional methods, garnering significant attention. Among them, molecular representation learning is the key factor for molecular property prediction performance, and many sequence-based, graph-based, and geometry-based methods have been proposed. However, the majority of existing studies focus solely on one modality for learning molecular representations, failing to comprehensively capture molecular characteristics and information. In this paper, a novel multi-modal representation learning model, which integrates the sequence, graph, and geometry characteristics, is proposed for molecular property prediction, called SGGRL. Specifically, we design a fusion layer to fuse the representations of different modalities. Furthermore, to ensure consistency across modalities, SGGRL is trained to maximize the similarity of representations for the same molecule while minimizing similarity for different molecules. To verify the effectiveness of SGGRL, seven molecular datasets and several baselines are used for evaluation and comparison. The experimental results demonstrate that SGGRL consistently outperforms the baselines in most cases. This further underscores the capability of SGGRL to comprehensively capture molecular information. Overall, the proposed SGGRL model showcases its potential to revolutionize molecular property prediction by leveraging multi-modal representation learning to extract diverse and comprehensive molecular insights. Our code is released at https://github.com/Vencent-Won/SGGRL. |
2303.04685 | Belal Abuelnasr | Belal Abuelnasr and Adam R. Stinchcombe | A Multi-Scale Simulation of Retinal Physiology | 24 pages, 9 figures | null | null | null | q-bio.QM cs.NA math.NA q-bio.TO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present a detailed physiological model of the retina that includes the
biochemistry and electrophysiology of phototransduction, neuronal electrical
coupling, and the spherical geometry of the eye. The model is a
parabolic-elliptic system of partial differential equations based on the
mathematical framework of the bi-domain equations, which we have generalized to
account for multiple cell-types. We discretize in space with non-uniform finite
differences and step through time with a custom adaptive time-stepper that
employs a backward differentiation formula and an inexact Newton method. A
refinement study confirms the accuracy and efficiency of our numerical method.
Numerical simulations using the model compare favorably with experimental
findings, such as desensitization to light stimuli and calcium buffering in
photoreceptors. Other numerical simulations suggest an interplay between
photoreceptor gap junctions and inner segment, but not outer segment, calcium
concentration. Applications of this model and simulation include analysis of
retinal calcium imaging experiments, the design of electroretinograms, the
design of visual prosthetics, and studies of ephaptic coupling within the
retina.
| [
{
"created": "Wed, 8 Mar 2023 16:20:43 GMT",
"version": "v1"
}
] | 2023-03-09 | [
[
"Abuelnasr",
"Belal",
""
],
[
"Stinchcombe",
"Adam R.",
""
]
] | We present a detailed physiological model of the retina that includes the biochemistry and electrophysiology of phototransduction, neuronal electrical coupling, and the spherical geometry of the eye. The model is a parabolic-elliptic system of partial differential equations based on the mathematical framework of the bi-domain equations, which we have generalized to account for multiple cell-types. We discretize in space with non-uniform finite differences and step through time with a custom adaptive time-stepper that employs a backward differentiation formula and an inexact Newton method. A refinement study confirms the accuracy and efficiency of our numerical method. Numerical simulations using the model compare favorably with experimental findings, such as desensitization to light stimuli and calcium buffering in photoreceptors. Other numerical simulations suggest an interplay between photoreceptor gap junctions and inner segment, but not outer segment, calcium concentration. Applications of this model and simulation include analysis of retinal calcium imaging experiments, the design of electroretinograms, the design of visual prosthetics, and studies of ephaptic coupling within the retina. |
q-bio/0611001 | Katherine Bold | Katherine A. Bold, Yu Zou, Ioannis G. Kevrekidis, and Michael A.
Henson | Heterogeneous Cell Population Dynamics: Equation-Free Uncertainty
Quantification Computations | 26 pages, 7 figures | null | null | null | q-bio.QM | null | We propose a computational approach to modeling the collective dynamics of
populations of coupled heterogeneous biological oscillators. In contrast to
Monte Carlo simulation, this approach utilizes generalized Polynomial Chaos
(gPC) to represent random properties of the population, thus reducing the
dynamics of ensembles of oscillators to dynamics of their (typically
significantly fewer) representative gPC coefficients. Equation-Free (EF)
methods are employed to efficiently evolve these gPC coefficients in time and
compute their coarse-grained stationary state and/or limit cycle solutions,
circumventing the derivation of explicit, closed-form evolution equations.
Ensemble realizations of the oscillators and their statistics can be readily
reconstructed from these gPC coefficients. We apply this methodology to the
synchronization of yeast glycolytic oscillators coupled by the membrane
exchange of an intracellular metabolite. The heterogeneity consists of a single
random parameter, which accounts for glucose influx into a cell, with a
Gaussian distribution over the population. Coarse projective integration is
used to accelerate the evolution of the population statistics in time. Coarse
fixed-point algorithms in conjunction with a Poincar\'e return map are used to
compute oscillatory solutions for the cell population and to quantify their
stability.
| [
{
"created": "Tue, 31 Oct 2006 21:23:12 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Bold",
"Katherine A.",
""
],
[
"Zou",
"Yu",
""
],
[
"Kevrekidis",
"Ioannis G.",
""
],
[
"Henson",
"Michael A.",
""
]
] | We propose a computational approach to modeling the collective dynamics of populations of coupled heterogeneous biological oscillators. In contrast to Monte Carlo simulation, this approach utilizes generalized Polynomial Chaos (gPC) to represent random properties of the population, thus reducing the dynamics of ensembles of oscillators to dynamics of their (typically significantly fewer) representative gPC coefficients. Equation-Free (EF) methods are employed to efficiently evolve these gPC coefficients in time and compute their coarse-grained stationary state and/or limit cycle solutions, circumventing the derivation of explicit, closed-form evolution equations. Ensemble realizations of the oscillators and their statistics can be readily reconstructed from these gPC coefficients. We apply this methodology to the synchronization of yeast glycolytic oscillators coupled by the membrane exchange of an intracellular metabolite. The heterogeneity consists of a single random parameter, which accounts for glucose influx into a cell, with a Gaussian distribution over the population. Coarse projective integration is used to accelerate the evolution of the population statistics in time. Coarse fixed-point algorithms in conjunction with a Poincar\'e return map are used to compute oscillatory solutions for the cell population and to quantify their stability. |
2202.12371 | Josinaldo Menezes | J. Menezes, E. Rangel, B. Moura | Aggregation as an antipredator strategy in the rock-paper-scissors model | 8 pages, 10 figures | Ecological Informatics 69, 101606 (2022) | 10.1016/j.ecoinf.2022.101606 | null | q-bio.PE nlin.AO nlin.PS physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a nonhierarchical tritrophic system, whose predator-prey
interactions are described by the rock-paper-scissors game rules. In our
stochastic simulations, individuals may move strategically towards the
direction with more conspecifics to form clumps instead of moving aimlessly on
the lattice. Considering that the conditioning to move gregariously depends on
the organism's physical and cognitive abilities, we introduce a maximum
distance an individual can perceive the environment and a minimum conditioning
level to perform the gregarious movement. We investigate the pattern formation
and compute the average size of the single-species spatial domains emerging
from the grouping behaviour. The results reveal that the defence tactic reduces
the predation risk significantly, being more profitable if individuals perceive
further distances, thus creating bigger groups. Our outcomes show that the
species with more conditioned organisms dominate the cyclic spatial game,
controlling most of the territory. On the other hand, the species with fewer
individuals ready to perform the aggregation strategy gives its predator the chance
to fill the more significant fraction of the grid. The spatial interactions
assumed in our numerical experiments constitute a data set that may help
biologists and data scientists understand how local interactions influence
ecosystem dynamics.
| [
{
"created": "Thu, 24 Feb 2022 21:48:00 GMT",
"version": "v1"
}
] | 2022-03-02 | [
[
"Menezes",
"J.",
""
],
[
"Rangel",
"E.",
""
],
[
"Moura",
"B.",
""
]
] | We study a nonhierarchical tritrophic system, whose predator-prey interactions are described by the rock-paper-scissors game rules. In our stochastic simulations, individuals may move strategically towards the direction with more conspecifics to form clumps instead of moving aimlessly on the lattice. Considering that the conditioning to move gregariously depends on the organism's physical and cognitive abilities, we introduce a maximum distance an individual can perceive the environment and a minimum conditioning level to perform the gregarious movement. We investigate the pattern formation and compute the average size of the single-species spatial domains emerging from the grouping behaviour. The results reveal that the defence tactic reduces the predation risk significantly, being more profitable if individuals perceive further distances, thus creating bigger groups. Our outcomes show that the species with more conditioned organisms dominate the cyclic spatial game, controlling most of the territory. On the other hand, the species with fewer individuals ready to perform aggregation strategy gives its predator the chance to fill the more significant fraction of the grid. The spatial interactions assumed in our numerical experiments constitute a data set that may help biologists and data scientists understand how local interactions influence ecosystem dynamics. |
1909.00012 | Haym Benaroya | Haym Benaroya | Brain Energetics, Mitochondria, and Traumatic Brain Injury | null | null | null | null | q-bio.CB q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We review current thinking about, and draw connections between, brain
energetics and metabolism, mitochondria and traumatic brain injury. In addition
to summarizing current thinking in these disciplines, our goal is to suggest a
framework for mechanisms and pathways based on optimal energetic decisions.
| [
{
"created": "Fri, 30 Aug 2019 18:00:40 GMT",
"version": "v1"
}
] | 2019-09-04 | [
[
"Benaroya",
"Haym",
""
]
] | We review current thinking about, and draw connections between, brain energetics and metabolism, mitochondria and traumatic brain injury. In addition to summarizing current thinking in these disciplines, our goal is to suggest a framework for mechanisms and pathways based on optimal energetic decisions. |
1108.5217 | Hieu Dinh | Dolly Sharma and Sanguthevar Rajasekaran and Hieu Dinh | An Experimental Comparison of PMSPrune and Other Algorithms for Motif
Search | null | null | null | null | q-bio.QM cs.CE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extracting meaningful patterns from voluminous amounts of biological data is a
very big challenge. Motifs are biological patterns of great interest to
biologists. Many different versions of the motif finding problem have been
identified by researchers. Examples include the Planted $(l, d)$ Motif version,
those based on position-specific score matrices, etc. A comparative study of
the various motif search algorithms is very important for several reasons. For
example, we could identify the strengths and weaknesses of each. As a result,
we might be able to devise hybrids that will perform better than the individual
components. In this paper we (either directly or indirectly) compare the
performance of PMSprune (an algorithm based on the $(l, d)$ motif model) and
several other algorithms in terms of seven measures and using well established
benchmarks.
In this paper, we (directly or indirectly) compare the quality of motifs
predicted by PMSprune and 14 other algorithms. We have employed several
benchmark datasets including the one used by Tompa et al. These comparisons
show that the performance of PMSprune is competitive when compared to the other
14 algorithms tested.
We have compared (directly or indirectly) the performance of PMSprune and 14
other algorithms using the benchmark dataset provided by Tompa et al. It is
observed that both PMSprune and DME (an algorithm based on position-specific
score matrices) in general perform better than the 13 algorithms reported in
Tompa et al. Subsequently we have compared PMSprune and DME on other
benchmark data sets including ChIP-Chip, ChIP-seq, and ABS. Between PMSprune
and DME, PMSprune performs better than DME on six measures. DME performs better
than PMSprune on one measure (namely, specificity).
| [
{
"created": "Fri, 26 Aug 2011 00:26:44 GMT",
"version": "v1"
}
] | 2011-08-29 | [
[
"Sharma",
"Dolly",
""
],
[
"Rajasekaran",
"Sanguthevar",
""
],
[
"Dinh",
"Hieu",
""
]
] | Extracting meaningful patterns from voluminous amounts of biological data is a very big challenge. Motifs are biological patterns of great interest to biologists. Many different versions of the motif finding problem have been identified by researchers. Examples include the Planted $(l, d)$ Motif version, those based on position-specific score matrices, etc. A comparative study of the various motif search algorithms is very important for several reasons. For example, we could identify the strengths and weaknesses of each. As a result, we might be able to devise hybrids that will perform better than the individual components. In this paper we (either directly or indirectly) compare the performance of PMSprune (an algorithm based on the $(l, d)$ motif model) and several other algorithms in terms of seven measures and using well established benchmarks. In this paper, we (directly or indirectly) compare the quality of motifs predicted by PMSprune and 14 other algorithms. We have employed several benchmark datasets including the one used by Tompa et al. These comparisons show that the performance of PMSprune is competitive when compared to the other 14 algorithms tested. We have compared (directly or indirectly) the performance of PMSprune and 14 other algorithms using the benchmark dataset provided by Tompa et al. It is observed that both PMSprune and DME (an algorithm based on position-specific score matrices) in general perform better than the 13 algorithms reported in Tompa et al. Subsequently we have compared PMSprune and DME on other benchmark data sets including ChIP-Chip, ChIP-seq, and ABS. Between PMSprune and DME, PMSprune performs better than DME on six measures. DME performs better than PMSprune on one measure (namely, specificity). |
2304.10071 | Arshed Nabeel | Arshed Nabeel, Vivek Jadhav, Danny Raj M, Cl\'ement Sire, Guy
Theraulaz, Ram\'on Escobedo, Srikanth K. Iyer, Vishwesha Guttal | Data-driven discovery of stochastic dynamical equations of collective
motion | null | Physical Biology, 20, 056003, 2023 | 10.1088/1478-3975/ace22d | null | q-bio.QM physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Coarse-grained descriptions of collective motion of flocking systems are
often derived for the macroscopic or the thermodynamic limit. However, many
real flocks are small sized (10 to 100 individuals), called the mesoscopic
scales, where stochasticity arising from the finite flock sizes is important.
Developing mesoscopic scale equations, typically in the form of stochastic
differential equations, can be challenging even for the simplest of the
collective motion models. Here, we take a novel data-driven equation learning
approach to construct the stochastic mesoscopic descriptions of a simple
self-propelled particle (SPP) model of collective motion. In our SPP model, a
focal individual can interact with k randomly chosen neighbours within an
interaction radius. We consider k = 1 (called stochastic pairwise
interactions), k = 2 (stochastic ternary interactions), and k equalling all
available neighbours within the interaction radius (equivalent to Vicsek-like
local averaging). The data-driven mesoscopic equations reveal that the
stochastic pairwise interaction model produces a novel form of collective
motion driven by a multiplicative noise term (hence termed, noise-induced
flocking). In contrast, higher-order interactions (k > 1), including
Vicsek-like averaging interactions, yield collective motion driven primarily by
the deterministic forces. We find that the relation between the parameters of
the mesoscopic equations describing the dynamics and the population size is
sensitive to the density and to the interaction radius, exhibiting deviations
from mean-field theoretical expectations. We provide semi-analytic arguments
potentially explaining these observed deviations. In summary, our study
emphasizes the importance of mesoscopic descriptions of flocking systems and
demonstrates the potential of the data-driven equation discovery methods for
complex systems studies.
| [
{
"created": "Thu, 20 Apr 2023 03:51:58 GMT",
"version": "v1"
}
] | 2023-09-25 | [
[
"Nabeel",
"Arshed",
""
],
[
"Jadhav",
"Vivek",
""
],
[
"M",
"Danny Raj",
""
],
[
"Sire",
"Clément",
""
],
[
"Theraulaz",
"Guy",
""
],
[
"Escobedo",
"Ramón",
""
],
[
"Iyer",
"Srikanth K.",
""
],
[
"G... | Coarse-grained descriptions of collective motion of flocking systems are often derived for the macroscopic or the thermodynamic limit. However, many real flocks are small sized (10 to 100 individuals), called the mesoscopic scales, where stochasticity arising from the finite flock sizes is important. Developing mesoscopic scale equations, typically in the form of stochastic differential equations, can be challenging even for the simplest of the collective motion models. Here, we take a novel data-driven equation learning approach to construct the stochastic mesoscopic descriptions of a simple self-propelled particle (SPP) model of collective motion. In our SPP model, a focal individual can interact with k randomly chosen neighbours within an interaction radius. We consider k = 1 (called stochastic pairwise interactions), k = 2 (stochastic ternary interactions), and k equalling all available neighbours within the interaction radius (equivalent to Vicsek-like local averaging). The data-driven mesoscopic equations reveal that the stochastic pairwise interaction model produces a novel form of collective motion driven by a multiplicative noise term (hence termed, noise-induced flocking). In contrast, higher-order interactions (k > 1), including Vicsek-like averaging interactions, yield collective motion driven primarily by the deterministic forces. We find that the relation between the parameters of the mesoscopic equations describing the dynamics and the population size is sensitive to the density and to the interaction radius, exhibiting deviations from mean-field theoretical expectations. We provide semi-analytic arguments potentially explaining these observed deviations. In summary, our study emphasizes the importance of mesoscopic descriptions of flocking systems and demonstrates the potential of the data-driven equation discovery methods for complex systems studies. |
1212.1979 | Anirban Banerji | Charudatta Navare, Anirban Banerji | ProtFract: A server to calculate interior and exterior fractal
properties of proteins: Case study with Ras superfamily proteins | 11 pages, 13 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Protein surface roughness is fractal in nature. Mass,
hydrophobicity, polarizability distributions of protein interior are fractal
too, as are the distributions of dipole moments, aromatic residues, and many
other structural determinants. The open-access server ProtFract presents a
reliable way to obtain numerous fractal-dimension and correlation-dimension
based results to quantify the symmetry of self-similarity in distributions of
various properties of protein interior and exterior.
Results: Fractal dimension based comparative analyses of various biophysical
properties of Ras superfamily proteins were conducted. Though the extent of
sequence and functional overlapping across Ras superfamily structures is
extremely high, results obtained from the ProtFract investigation are found to be
sensitive enough to detect differences in the distribution of each of the properties.
For example, it was found that the RAN proteins are structurally most stable
amongst all Ras superfamily proteins, the ARFs possess maximum extent of unused
hydrophobicity in their structures, RAB protein interiors have
electrostatically least conducive environment, GEM/REM/RAD proteins possess
exceptionally high symmetry in the structural organization of their active
chiral centres, neither hydrophobicity nor columbic interactions play
significant part in stabilizing the RAS proteins but aromatic interactions do,
though cation-pi interactions are found to be more dominant in RAN than in RAS
proteins. Ras superfamily proteins can best be classified with respect to their
class-specific pi-pi and cation-pi interaction symmetries.
Availability: ProtFract is freely available online at the URL:
http://bioinfo.net.in/protfract/index.html
| [
{
"created": "Mon, 10 Dec 2012 06:26:29 GMT",
"version": "v1"
}
] | 2012-12-11 | [
[
"Navare",
"Charudatta",
""
],
[
"Banerji",
"Anirban",
""
]
] | Motivation: Protein surface roughness is fractal in nature. Mass, hydrophobicity, polarizability distributions of protein interior are fractal too, as are the distributions of dipole moments, aromatic residues, and many other structural determinants. The open-access server ProtFract presents a reliable way to obtain numerous fractal-dimension and correlation-dimension based results to quantify the symmetry of self-similarity in distributions of various properties of protein interior and exterior. Results: Fractal dimension based comparative analyses of various biophysical properties of Ras superfamily proteins were conducted. Though the extent of sequence and functional overlapping across Ras superfamily structures is extremely high, results obtained from the ProtFract investigation are found to be sensitive enough to detect differences in the distribution of each of the properties. For example, it was found that the RAN proteins are structurally most stable amongst all Ras superfamily proteins, the ARFs possess maximum extent of unused hydrophobicity in their structures, RAB protein interiors have electrostatically least conducive environment, GEM/REM/RAD proteins possess exceptionally high symmetry in the structural organization of their active chiral centres, neither hydrophobicity nor coulombic interactions play a significant part in stabilizing the RAS proteins but aromatic interactions do, though cation-pi interactions are found to be more dominant in RAN than in RAS proteins. Ras superfamily proteins can best be classified with respect to their class-specific pi-pi and cation-pi interaction symmetries. Availability: ProtFract is freely available online at the URL: http://bioinfo.net.in/protfract/index.html |
2109.07884 | Marc H. E. de Lussanet PhD | Heiko Wagner, Kim Joris Bostr\"om, Marc H.E. de Lussanet, Myriam L. de
Graaf, Christian Puta, and Luis Mochizuki | Optimization reduces knee-joint forces during walking and squatting:
Validating the inverse dynamics approach for full body movements on
instrumented knee prostheses | 19 pages, 4 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Due to the redundancy of our motor system, movements can be performed in many
ways. While multiple motor control strategies can all lead to the desired
behavior, they result in different joint and muscle forces. This creates
opportunities to explore this redundancy, e.g., for pain avoidance or reducing
the risk of further injury. To assess the effect of different motor control
optimization strategies, a direct measurement of muscle and joint forces is
desirable, but problematic for medical and ethical reasons. Computational
modeling might provide a solution by calculating approximations of these
forces. In this study, we used a full-body computational musculoskeletal model
to (1) predict forces measured in knee prostheses during walking and squatting
and (2) to study the effect of different motor control strategies (i.e.,
minimizing joint force vs. muscle activation) on the joint load and prediction
error. We found that musculoskeletal models can accurately predict knee joint
forces with an RMSE of <0.5 BW in the superior direction and about 0.1 BW in
the medial and anterior directions. Generally, minimization of joint forces
produced the best predictions. Furthermore, minimizing muscle activation
resulted in maximum knee forces of about 4 BW for walking and 2.5 BW for
squatting. Minimizing joint forces resulted in maximum knee forces of 2.25 BW
and 2.12 BW, i.e., a reduction of 44% and 15%, respectively. Thus, changing the
muscular coordination strategy can strongly affect knee joint forces. Patients
with a knee prosthesis may adapt their neuromuscular activation to reduce joint
forces during locomotion.
| [
{
"created": "Thu, 16 Sep 2021 11:29:17 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Nov 2021 15:14:06 GMT",
"version": "v2"
},
{
"created": "Thu, 16 Jun 2022 19:54:39 GMT",
"version": "v3"
},
{
"created": "Sun, 3 Jul 2022 14:20:21 GMT",
"version": "v4"
}
] | 2022-07-05 | [
[
"Wagner",
"Heiko",
""
],
[
"Boström",
"Kim Joris",
""
],
[
"de Lussanet",
"Marc H. E.",
""
],
[
"de Graaf",
"Myriam L.",
""
],
[
"Puta",
"Christian",
""
],
[
"Mochizuki",
"Luis",
""
]
] | Due to the redundancy of our motor system, movements can be performed in many ways. While multiple motor control strategies can all lead to the desired behavior, they result in different joint and muscle forces. This creates opportunities to explore this redundancy, e.g., for pain avoidance or reducing the risk of further injury. To assess the effect of different motor control optimization strategies, a direct measurement of muscle and joint forces is desirable, but problematic for medical and ethical reasons. Computational modeling might provide a solution by calculating approximations of these forces. In this study, we used a full-body computational musculoskeletal model to (1) predict forces measured in knee prostheses during walking and squatting and (2) to study the effect of different motor control strategies (i.e., minimizing joint force vs. muscle activation) on the joint load and prediction error. We found that musculoskeletal models can accurately predict knee joint forces with an RMSE of <0.5 BW in the superior direction and about 0.1 BW in the medial and anterior directions. Generally, minimization of joint forces produced the best predictions. Furthermore, minimizing muscle activation resulted in maximum knee forces of about 4 BW for walking and 2.5 BW for squatting. Minimizing joint forces resulted in maximum knee forces of 2.25 BW and 2.12 BW, i.e., a reduction of 44% and 15%, respectively. Thus, changing the muscular coordination strategy can strongly affect knee joint forces. Patients with a knee prosthesis may adapt their neuromuscular activation to reduce joint forces during locomotion. |
1903.08517 | Guillermo Follana Bern\'a | Guillermo Follana-Bern\'a, Miquel Palmer, Andrea Campos-Candela, Pablo
Arechavala-Lopez, Carlos Diaz-Gil, Josep Al\'os, Ignacio A. Catalan, Salvador
Balle, Josep Coll, Gabriel Morey, Francisco Verger, Amalia Grau | Estimating the density of resident coastal fish using underwater
cameras: accounting for individual detectability | null | null | 10.3354/meps12926 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Technological advances in underwater video recording are opening novel
opportunities for monitoring wild fish. However, extracting data from videos is
often challenging. Nevertheless, it has been recently demonstrated that
accurate and precise estimates of density for animals (whose normal activities
are restricted to a bounded area or home range) can be obtained from counts
averaged across a relatively low number of video frames. The method, however,
requires that individual detectability (PID, the probability of detecting a
given animal provided that it is actually within the area surveyed by a camera)
be known. Here we propose a Bayesian implementation for estimating PID
after combining counts from cameras with counts from any reference method. The
proposed framework was demonstrated using Serranus scriba as a case-study, a
widely distributed and resident coastal fish. Density and PID were calculated
after combining fish counts from unbaited remote underwater video (RUV) and
underwater visual censuses (UVC) as reference method. The relevance of the
proposed framework is that after estimating PID, fish density can be estimated
accurately and precisely at the UVC scale (or at the scale of the preferred
reference method) using RUV only. This key statement has been extensively
demonstrated using computer simulations yielded by real empirical data.
Finally, we provide a simulation tool-kit for comparing the expected precision
attainable for different sampling effort and for species with different levels
of PID. Overall, the proposed method may contribute to substantially enlarge
the spatio-temporal scope of density monitoring programs for many resident
fish.
| [
{
"created": "Wed, 20 Mar 2019 14:24:38 GMT",
"version": "v1"
}
] | 2019-03-21 | [
[
"Follana-Berná",
"Guillermo",
""
],
[
"Palmer",
"Miquel",
""
],
[
"Campos-Candela",
"Andrea",
""
],
[
"Arechavala-Lopez",
"Pablo",
""
],
[
"Diaz-Gil",
"Carlos",
""
],
[
"Alós",
"Josep",
""
],
[
"Catalan",
"Igna... | Technological advances in underwater video recording are opening novel opportunities for monitoring wild fish. However, extracting data from videos is often challenging. Nevertheless, it has been recently demonstrated that accurate and precise estimates of density for animals (whose normal activities are restricted to a bounded area or home range) can be obtained from counts averaged across a relatively low number of video frames. The method, however, requires that individual detectability (PID, the probability of detecting a given animal provided that it is actually within the area surveyed by a camera) be known. Here we propose a Bayesian implementation for estimating PID after combining counts from cameras with counts from any reference method. The proposed framework was demonstrated using Serranus scriba as a case-study, a widely distributed and resident coastal fish. Density and PID were calculated after combining fish counts from unbaited remote underwater video (RUV) and underwater visual censuses (UVC) as reference method. The relevance of the proposed framework is that after estimating PID, fish density can be estimated accurately and precisely at the UVC scale (or at the scale of the preferred reference method) using RUV only. This key statement has been extensively demonstrated using computer simulations yielded by real empirical data. Finally, we provide a simulation tool-kit for comparing the expected precision attainable for different sampling effort and for species with different levels of PID. Overall, the proposed method may contribute to substantially enlarge the spatio-temporal scope of density monitoring programs for many resident fish. |
2211.07834 | Zhuojun Yu | Zhuojun Yu, Jonathan E. Rubin, Peter J. Thomas | Sensitivity to control signals in triphasic rhythmic neural systems: a
comparative mechanistic analysis via infinitesimal local timing response
curves | 49 pages, 21 figures, 8 tables | Neural Computation, 2023, Volume 35, Issue 6, Page 1028-1085 | 10.1162/neco_a_01586 | null | q-bio.NC math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Similar activity patterns may arise from model neural networks with distinct
coupling properties and individual unit dynamics. These similar patterns may,
however, respond differently to parameter variations and, specifically, to
tuning of inputs that represent control signals. In this work, we analyze the
responses resulting from modulation of a localized input in each of three
classes of model neural networks that have been recognized in the literature
for their capacity to produce robust three-phase rhythms: coupled fast-slow
oscillators, near-heteroclinic oscillators, and threshold-linear networks.
Triphasic rhythms, in which each phase consists of a prolonged activation of a
corresponding subgroup of neurons followed by a fast transition to another
phase, represent a fundamental activity pattern observed across a range of
central pattern generators underlying behaviors critical to survival, including
respiration, locomotion, and feeding. To perform our analysis, we extend the
recently developed local timing response curve (lTRC), which allows us to
characterize the timing effects due to perturbations, and we complement our
lTRC approach with model-specific dynamical systems analysis. Interestingly, we
observe disparate effects of similar perturbations across distinct model
classes. Thus, this work provides an analytical framework for studying control
of oscillations in nonlinear dynamical systems, and may help guide model
selection in future efforts to study systems exhibiting triphasic rhythmic
activity.
| [
{
"created": "Tue, 15 Nov 2022 01:26:59 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Jun 2023 12:50:58 GMT",
"version": "v2"
}
] | 2023-06-16 | [
[
"Yu",
"Zhuojun",
""
],
[
"Rubin",
"Jonathan E.",
""
],
[
"Thomas",
"Peter J.",
""
]
] | Similar activity patterns may arise from model neural networks with distinct coupling properties and individual unit dynamics. These similar patterns may, however, respond differently to parameter variations and, specifically, to tuning of inputs that represent control signals. In this work, we analyze the responses resulting from modulation of a localized input in each of three classes of model neural networks that have been recognized in the literature for their capacity to produce robust three-phase rhythms: coupled fast-slow oscillators, near-heteroclinic oscillators, and threshold-linear networks. Triphasic rhythms, in which each phase consists of a prolonged activation of a corresponding subgroup of neurons followed by a fast transition to another phase, represent a fundamental activity pattern observed across a range of central pattern generators underlying behaviors critical to survival, including respiration, locomotion, and feeding. To perform our analysis, we extend the recently developed local timing response curve (lTRC), which allows us to characterize the timing effects due to perturbations, and we complement our lTRC approach with model-specific dynamical systems analysis. Interestingly, we observe disparate effects of similar perturbations across distinct model classes. Thus, this work provides an analytical framework for studying control of oscillations in nonlinear dynamical systems, and may help guide model selection in future efforts to study systems exhibiting triphasic rhythmic activity. |
1001.3586 | David Morrison | David A. Morrison | Counting chickens before they hatch: reciprocal consistency of
calibration points for estimating divergence dates | 8 pages, including 1 Table | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There has been concern in the literature about the methodology of using
secondary calibration timepoints when estimating evolutionary divergence dates.
Such timepoints are divergence time estimates that have been derived from one
molecular data set on the basis of a primary external calibration timepoint,
and which are then used independently on a second data set. Logically, the
primary and secondary calibration points must be mutually consistent, in the
sense that it must be possible to predict each time point from the other.
However, the attempt by Shaul and Graur (2002, Gene 300: 59-61) to assess the
reliability of secondary timepoints is flawed because they presented time
estimates without presenting confidence intervals on those estimates, and so it
was not possible to make any explicit hypothesis tests of divergence times.
Also, they inappropriately excluded some of the data, which leads to a very
biased estimate of one of the divergence times. Here, I present a re-analysis
of the same data set, with more appropriate methodology, and come to the
conclusion that no inconsistencies are involved. However, it is clear from the
analysis that molecular data often have such large confidence intervals that
they are uninformative, and thus cannot be used for reliable hypothesis tests.
| [
{
"created": "Wed, 20 Jan 2010 13:52:29 GMT",
"version": "v1"
}
] | 2010-01-21 | [
[
"Morrison",
"David A.",
""
]
] | There has been concern in the literature about the methodology of using secondary calibration timepoints when estimating evolutionary divergence dates. Such timepoints are divergence time estimates that have been derived from one molecular data set on the basis of a primary external calibration timepoint, and which are then used independently on a second data set. Logically, the primary and secondary calibration points must be mutually consistent, in the sense that it must be possible to predict each time point from the other. However, the attempt by Shaul and Graur (2002, Gene 300: 59-61) to assess the reliability of secondary timepoints is flawed because they presented time estimates without presenting confidence intervals on those estimates, and so it was not possible to make any explicit hypothesis tests of divergence times. Also, they inappropriately excluded some of the data, which leads to a very biased estimate of one of the divergence times. Here, I present a re-analysis of the same data set, with more appropriate methodology, and come to the conclusion that no inconsistencies are involved. However, it is clear from the analysis that molecular data often have such large confidence intervals that they are uninformative, and thus cannot be used for reliable hypothesis tests. |
1310.4712 | Adrian Melott | Adrian L. Melott (University of Kansas) and Richard K. Bambach
(Smithsonian) | Analysis of periodicity of extinction using the 2012 geological time
scale | 26 pages, 5 figures, 1 Table. To be published in Paleobiology.
Updated to coincide with edits for final published version | Paleobiology 40, 177-196 (2014) | null | null | q-bio.PE astro-ph.EP astro-ph.GA physics.data-an physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analysis of two independent data sets with increased taxonomic resolution
(genera rather than families) using the revised 2012 time scale reveals that an
extinction periodicity first detected by Raup and Sepkoski (1984) for only the
post-Paleozoic actually runs through the entire Phanerozoic. Although there is
not a local peak of extinction every 27 million years, an excess of the
fraction of genus extinction by interval follows a 27 million year timing
interval and differs from a random distribution at the p ~ 0.02 level. A 27
million year periodicity in the spectrum of interval lengths no longer appears,
removing the question of a possible artifact arising from it. Using a method
originally developed in Bambach (2006) we identify 19 intervals of marked
extinction intensity, including mass extinctions, spanning the last 470 million
years (and with another six present in the Cambrian) and find that 10 of the 19
lie within 3 Myr of the maxima in the spacing of the 27 Myr periodicity, which
differs from a random distribution at the p = 0.004 level. These 19 intervals
of marked extinction intensity also preferentially occur during decreasing
diversity phases of a well-known 62 Myr periodicity in diversity (16 of 19, p =
0.002). Both periodicities appear to enhance the likelihood of increased
severity of extinction, but the cause of neither periodicity is known.
Variations in the strength of the many suggested causes of extinction coupled
to the variation in combined effect of the two different periodicities as they
shift in and out of phase is surely one of the reasons that definitive
comparative study of the causes of major extinction events is so elusive.
| [
{
"created": "Thu, 17 Oct 2013 14:26:16 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Oct 2013 19:35:40 GMT",
"version": "v2"
},
{
"created": "Fri, 6 Dec 2013 18:38:01 GMT",
"version": "v3"
}
] | 2014-11-11 | [
[
"Melott",
"Adrian L.",
"",
"University of Kansas"
],
[
"Bambach",
"Richard K.",
"",
"Smithsonian"
]
] | Analysis of two independent data sets with increased taxonomic resolution (genera rather than families) using the revised 2012 time scale reveals that an extinction periodicity first detected by Raup and Sepkoski (1984) for only the post-Paleozoic actually runs through the entire Phanerozoic. Although there is not a local peak of extinction every 27 million years, an excess of the fraction of genus extinction by interval follows a 27 million year timing interval and differs from a random distribution at the p ~ 0.02 level. A 27 million year periodicity in the spectrum of interval lengths no longer appears, removing the question of a possible artifact arising from it. Using a method originally developed in Bambach (2006) we identify 19 intervals of marked extinction intensity, including mass extinctions, spanning the last 470 million years (and with another six present in the Cambrian) and find that 10 of the 19 lie within 3 Myr of the maxima in the spacing of the 27 Myr periodicity, which differs from a random distribution at the p = 0.004 level. These 19 intervals of marked extinction intensity also preferentially occur during decreasing diversity phases of a well-known 62 Myr periodicity in diversity (16 of 19, p = 0.002). Both periodicities appear to enhance the likelihood of increased severity of extinction, but the cause of neither periodicity is known. Variations in the strength of the many suggested causes of extinction coupled to the variation in combined effect of the two different periodicities as they shift in and out of phase is surely one of the reasons that definitive comparative study of the causes of major extinction events is so elusive. |
2006.05125 | Frederique Viard | Fr\'ed\'erique Viard (AD2M), Cynthia Riginos, Nicolas Bierne (UMR
ISEM) | Anthropogenic Hybridization at Sea: three evolutionary questions
relevant to invasive species | Philosophical Transactions of the Royal Society of London. Series B,
Biological Sciences (1934--1990), Royal Society, The, In press | null | 10.1098/rstb.2019.0547 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Species introductions promote secondary contacts between taxa with long
histories of allopatric divergence. Anthropogenic contact zones thus offer
valuable contrasts to speciation studies in natural systems where past spatial
isolations may have been brief or intermittent. Investigations of anthropogenic
hybridization are rare for marine animals, which have high fecundity and high
dispersal ability, characteristics that contrast to most terrestrial animals.
Genomic studies indicate that gene flow can still occur after millions of years
of divergence, as illustrated by invasive mussels and tunicates. In this
context, we highlight three issues: 1) the effects of high propagule pressure
and demographic asymmetries on introgression directionality, 2) the role of
hybridization in preventing introduced species spread, and 3) the importance of
postzygotic barriers in maintaining reproductive isolation. Anthropogenic
contact zones offer evolutionary biologists unprecedented large scale
hybridization experiments. In addition to breaking the highly effective
reproductive isolating barrier of spatial segregation, they allow researchers
to explore unusual demographic contexts with strong asymmetries. The outcomes
are diverse from introgression swamping to strong barriers to gene flow, and
lead to local containment or widespread invasion. These outcomes should not be
neglected in management policies of marine invasive species.
| [
{
"created": "Tue, 9 Jun 2020 08:58:55 GMT",
"version": "v1"
}
] | 2020-06-11 | [
[
"Viard",
"Frédérique",
"",
"AD2M"
],
[
"Riginos",
"Cynthia",
"",
"UMR\n ISEM"
],
[
"Bierne",
"Nicolas",
"",
"UMR\n ISEM"
]
] | Species introductions promote secondary contacts between taxa with long histories of allopatric divergence. Anthropogenic contact zones thus offer valuable contrasts to speciation studies in natural systems where past spatial isolations may have been brief or intermittent. Investigations of anthropogenic hybridization are rare for marine animals, which have high fecundity and high dispersal ability, characteristics that contrast to most terrestrial animals. Genomic studies indicate that gene flow can still occur after millions of years of divergence, as illustrated by invasive mussels and tunicates. In this context, we highlight three issues: 1) the effects of high propagule pressure and demographic asymmetries on introgression directionality, 2) the role of hybridization in preventing introduced species spread, and 3) the importance of postzygotic barriers in maintaining reproductive isolation. Anthropogenic contact zones offer evolutionary biologists unprecedented large scale hybridization experiments. In addition to breaking the highly effective reproductive isolating barrier of spatial segregation, they allow researchers to explore unusual demographic contexts with strong asymmetries. The outcomes are diverse from introgression swamping to strong barriers to gene flow, and lead to local containment or widespread invasion. These outcomes should not be neglected in management policies of marine invasive species. |
1206.2311 | Erik Aurell | Joseph Xu Zhou, M. D. S. Aliyu, Erik Aurell and Sui Huang | Quasi-potential landscape in complex multi-stable systems | 30 pages, 6 figures | null | null | null | q-bio.MN cond-mat.other | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developmental dynamics of a multicellular organism is a process that takes
place in a multi-stable system in which each attractor state represents a cell
type and attractor transitions correspond to cell differentiation paths. This
new understanding has revived the idea of a quasi-potential landscape, first
proposed by Waddington as a metaphor. To describe development one is interested
in the "relative stabilities" of N attractors (N>2). Existing theories of state
transition between local minima on some potential landscape deal with the exit
in the transition between a pair of attractors but do not offer the notion of a
global potential function that relates more than two attractors to each other.
Several ad hoc methods have been used in systems biology to compute a landscape
in non-gradient systems, such as gene regulatory networks. Here we present an
overview of the currently available methods, discuss their limitations and
propose a new decomposition of vector fields that permits the computation of a
quasi-potential function that is equivalent to the Freidlin-Wentzell potential
but is not limited to two attractors. Several examples of decomposition are
given and the significance of such a quasi-potential function is discussed.
| [
{
"created": "Mon, 11 Jun 2012 18:50:30 GMT",
"version": "v1"
}
] | 2012-06-12 | [
[
"Zhou",
"Joseph Xu",
""
],
[
"Aliyu",
"M. D. S.",
""
],
[
"Aurell",
"Erik",
""
],
[
"Huang",
"Sui",
""
]
] | Developmental dynamics of a multicellular organism is a process that takes place in a multi-stable system in which each attractor state represents a cell type and attractor transitions correspond to cell differentiation paths. This new understanding has revived the idea of a quasi-potential landscape, first proposed by Waddington as a metaphor. To describe development one is interested in the "relative stabilities" of N attractors (N>2). Existing theories of state transition between local minima on some potential landscape deal with the exit in the transition between a pair of attractors but do not offer the notion of a global potential function that relates more than two attractors to each other. Several ad hoc methods have been used in systems biology to compute a landscape in non-gradient systems, such as gene regulatory networks. Here we present an overview of the currently available methods, discuss their limitations and propose a new decomposition of vector fields that permits the computation of a quasi-potential function that is equivalent to the Freidlin-Wentzell potential but is not limited to two attractors. Several examples of decomposition are given and the significance of such a quasi-potential function is discussed. |
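The decomposition described in the abstract above can be illustrated on a toy planar system (a hypothetical example under my own choice of field, not the authors' code): the vector field splits into a gradient part $-\nabla U$ and a rotational part orthogonal to it, so that $U$ plays the role of a quasi-potential valid across all attractors.

```python
import numpy as np

# Toy vector field F = -grad(U) + F_rot with U = (x^2 + y^2)/2 and a
# rotational part F_rot = omega * (-y, x). This is the simplest case in
# which a global quasi-potential exists and the two parts are orthogonal.
omega = 2.0

def F(x, y):
    return np.array([-x - omega * y, -y + omega * x])

def grad_U(x, y):
    return np.array([x, y])          # gradient of U = (x^2 + y^2)/2

def F_rot(x, y):
    return np.array([-omega * y, omega * x])

# Verify F = -grad(U) + F_rot and grad(U) . F_rot = 0 on a grid of points.
pts = [(x, y) for x in np.linspace(-2, 2, 9) for y in np.linspace(-2, 2, 9)]
max_residual = max(np.abs(F(x, y) + grad_U(x, y) - F_rot(x, y)).max()
                   for x, y in pts)
max_dot = max(abs(np.dot(grad_U(x, y), F_rot(x, y))) for x, y in pts)
print(max_residual, max_dot)  # both should be ~0
```

Because the rotational part is everywhere tangent to the level sets of U, trajectories spiral down the potential without the rotation changing the relative depths of attractors.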
1107.1648 | Jean-Charles Walter | Jean-Charles Walter, K. Myriam Kroll, Jef Hooyberghs and Enrico Carlon | Nonequilibrium effects in DNA microarrays: a multiplatform study | 27 pages, 9 figures, 1 table | J. Phys. Chem. B, 2011, 115 (20), pp 6732--6739 | 10.1021/jp2014034 | null | q-bio.BM cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has recently been shown that in some DNA microarrays the time needed to
reach thermal equilibrium may largely exceed the typical experimental time,
which is about 15h in standard protocols (Hooyberghs et al. Phys. Rev. E 81,
012901 (2010)). In this paper we discuss how this breakdown of thermodynamic
equilibrium could be detected in microarray experiments without resorting to
real time hybridization data, which are difficult to implement in standard
experimental conditions. The method is based on the analysis of the
distribution of fluorescence intensities I from different spots for probes
carrying base mismatches. In thermal equilibrium and at sufficiently low
concentrations, log I is expected to be linearly related to the hybridization
free energy $\Delta G$ with a slope equal to $1/RT_{exp}$, where $T_{exp}$ is
the experimental temperature and R is the gas constant. The breakdown of
equilibrium results in the deviation from this law. A model for hybridization
kinetics explaining the observed experimental behavior is discussed, the
so-called 3-state model. It predicts that deviations from equilibrium yield a
proportionality of $\log I$ to $\Delta G/RT_{eff}$. Here, $T_{eff}$ is an
effective temperature, higher than the experimental one. This behavior is
indeed observed in some experiments on Agilent arrays. We analyze experimental
data from two other microarray platforms and discuss, on the basis of the
results, the attainment of equilibrium in these cases. Interestingly, the same
3-state model predicts a (dynamical) saturation of the signal at values below
the expected one at equilibrium.
| [
{
"created": "Fri, 8 Jul 2011 14:40:35 GMT",
"version": "v1"
}
] | 2012-03-22 | [
[
"Walter",
"Jean-Charles",
""
],
[
"Kroll",
"K. Myriam",
""
],
[
"Hooyberghs",
"Jef",
""
],
[
"Carlon",
"Enrico",
""
]
] | It has recently been shown that in some DNA microarrays the time needed to reach thermal equilibrium may largely exceed the typical experimental time, which is about 15h in standard protocols (Hooyberghs et al. Phys. Rev. E 81, 012901 (2010)). In this paper we discuss how this breakdown of thermodynamic equilibrium could be detected in microarray experiments without resorting to real time hybridization data, which are difficult to implement in standard experimental conditions. The method is based on the analysis of the distribution of fluorescence intensities I from different spots for probes carrying base mismatches. In thermal equilibrium and at sufficiently low concentrations, log I is expected to be linearly related to the hybridization free energy $\Delta G$ with a slope equal to $1/RT_{exp}$, where $T_{exp}$ is the experimental temperature and R is the gas constant. The breakdown of equilibrium results in the deviation from this law. A model for hybridization kinetics explaining the observed experimental behavior is discussed, the so-called 3-state model. It predicts that deviations from equilibrium yield a proportionality of $\log I$ to $\Delta G/RT_{eff}$. Here, $T_{eff}$ is an effective temperature, higher than the experimental one. This behavior is indeed observed in some experiments on Agilent arrays. We analyze experimental data from two other microarray platforms and discuss, on the basis of the results, the attainment of equilibrium in these cases. Interestingly, the same 3-state model predicts a (dynamical) saturation of the signal at values below the expected one at equilibrium. |
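The linear law quoted in the abstract above (log I proportional to $\Delta G/RT_{eff}$) suggests a simple recipe for extracting an effective temperature from spot intensities. A minimal sketch on synthetic data, assuming that stated law and an arbitrary illustrative $T_{eff}$ of 700 K:

```python
import numpy as np

R = 8.314e-3  # gas constant in kJ/(mol K)

# Synthetic "microarray" spots: log I = const + dG/(R*T_eff) plus noise,
# i.e. a straight line in dG whose slope encodes the effective temperature.
rng = np.random.default_rng(0)
T_eff_true = 700.0                         # assumed effective temperature (K)
dG = rng.uniform(-60.0, -20.0, size=200)   # free energies in kJ/mol (illustrative)
log_I = 3.0 + dG / (R * T_eff_true) + rng.normal(0.0, 0.05, size=200)

# Fit the line; the slope equals 1/(R*T_eff), so invert it to recover T_eff.
slope, intercept = np.polyfit(dG, log_I, 1)
T_eff_fit = 1.0 / (R * slope)
print(T_eff_fit)  # close to 700 K
```

Comparing the fitted T_eff against the known experimental temperature is exactly the equilibrium diagnostic the abstract describes: T_eff well above T_exp signals a breakdown of thermodynamic equilibrium.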
2203.13186 | Andr\'eas Pastor | Pastor Andr\'eas, Luk\'a\v{s} Krasula, Xiaoqing Zhu, Zhi Li, Patrick
Le Callet | Improving Maximum Likelihood Difference Scaling method to measure inter
content scale | Difference scaling, supra-threshold estimation, human perception,
subjective experiment | null | null | null | q-bio.NC cs.LG | http://creativecommons.org/licenses/by/4.0/ | The goal of most subjective studies is to place a set of stimuli on a
perceptual scale. This is mostly done directly by rating, e.g. using single or
double stimulus methodologies, or indirectly by ranking or pairwise comparison.
All these methods estimate the perceptual magnitudes of the stimuli on a scale.
However, procedures such as Maximum Likelihood Difference Scaling (MLDS) have
shown that considering perceptual distances can bring benefits in terms of
discriminatory power, observers' cognitive load, and the number of trials
required. One of the disadvantages of the MLDS method is that the perceptual
scales obtained for stimuli created from different source content are generally
not comparable. In this paper, we propose an extension of the MLDS method that
ensures inter-content comparability of the results, and we show its usefulness
especially in the presence of observer errors.
| [
{
"created": "Fri, 25 Feb 2022 08:35:32 GMT",
"version": "v1"
}
] | 2022-03-25 | [
[
"Andréas",
"Pastor",
""
],
[
"Krasula",
"Lukáš",
""
],
[
"Zhu",
"Xiaoqing",
""
],
[
"Li",
"Zhi",
""
],
[
"Callet",
"Patrick Le",
""
]
] | The goal of most subjective studies is to place a set of stimuli on a perceptual scale. This is mostly done directly by rating, e.g. using single or double stimulus methodologies, or indirectly by ranking or pairwise comparison. All these methods estimate the perceptual magnitudes of the stimuli on a scale. However, procedures such as Maximum Likelihood Difference Scaling (MLDS) have shown that considering perceptual distances can bring benefits in terms of discriminatory power, observers' cognitive load, and the number of trials required. One of the disadvantages of the MLDS method is that the perceptual scales obtained for stimuli created from different source content are generally not comparable. In this paper, we propose an extension of the MLDS method that ensures inter-content comparability of the results, and we show its usefulness especially in the presence of observer errors. |
1601.07574 | Tiziano Squartini | Giampiero Bardella, Angelo Bifone, Andrea Gabrielli, Alessandro Gozzi,
Tiziano Squartini | Hierarchical organization of functional connectivity in the mouse brain:
a complex network approach | 11 pages, 9 figures | Sci. Rep. 6 (32060) (2016) | 10.1038/srep32060 | null | q-bio.NC physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper represents a contribution to the study of the brain functional
connectivity from the perspective of complex networks theory. More
specifically, we apply graph theoretical analyses to provide evidence of the
modular structure of the mouse brain and to shed light on its hierarchical
organization. We propose a novel percolation analysis and we apply our approach
to the analysis of a resting-state functional MRI data set from 41 mice. This
approach reveals a robust hierarchical structure of modules persistent across
different subjects. Importantly, we test this approach against a statistical
benchmark (or null model) which constrains only the distributions of empirical
correlations. Our results unambiguously show that the hierarchical character of
the mouse brain modular structure is not trivially encoded into this
lower-order constraint. Finally, we investigate the modular structure of the
mouse brain by computing the Minimal Spanning Forest, a technique that
identifies subnetworks characterized by the strongest internal correlations.
This approach represents a faster alternative to other community detection
methods and provides a means to rank modules on the basis of the strength of
their internal edges.
| [
{
"created": "Wed, 27 Jan 2016 21:30:06 GMT",
"version": "v1"
}
] | 2016-09-06 | [
[
"Bardella",
"Giampiero",
""
],
[
"Bifone",
"Angelo",
""
],
[
"Gabrielli",
"Andrea",
""
],
[
"Gozzi",
"Alessandro",
""
],
[
"Squartini",
"Tiziano",
""
]
] | This paper represents a contribution to the study of the brain functional connectivity from the perspective of complex networks theory. More specifically, we apply graph theoretical analyses to provide evidence of the modular structure of the mouse brain and to shed light on its hierarchical organization. We propose a novel percolation analysis and we apply our approach to the analysis of a resting-state functional MRI data set from 41 mice. This approach reveals a robust hierarchical structure of modules persistent across different subjects. Importantly, we test this approach against a statistical benchmark (or null model) which constrains only the distributions of empirical correlations. Our results unambiguously show that the hierarchical character of the mouse brain modular structure is not trivially encoded into this lower-order constraint. Finally, we investigate the modular structure of the mouse brain by computing the Minimal Spanning Forest, a technique that identifies subnetworks characterized by the strongest internal correlations. This approach represents a faster alternative to other community detection methods and provides a means to rank modules on the basis of the strength of their internal edges. |
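The spanning-structure idea in the abstract above — retaining the strongest correlations that do not close a cycle — can be sketched with Kruskal's algorithm on a toy correlation matrix (an illustration of the general technique, not the authors' exact pipeline):

```python
import numpy as np

def max_spanning_tree(C):
    """Kruskal's algorithm on a correlation matrix C: greedily keep the
    strongest edges that do not create a cycle (union-find bookkeeping).
    Returns a list of (i, j, weight) edges."""
    n = C.shape[0]
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted(((C[i, j], i, j) for i in range(n) for j in range(i + 1, n)),
                   reverse=True)
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:            # adding (i, j) does not close a cycle
            parent[ri] = rj
            tree.append((i, j, w))
    return tree

# Toy correlation matrix with two tightly coupled pairs (0,1) and (2,3).
C = np.array([[1.0, 0.9, 0.2, 0.1],
              [0.9, 1.0, 0.3, 0.2],
              [0.2, 0.3, 1.0, 0.8],
              [0.1, 0.2, 0.8, 1.0]])
print(max_spanning_tree(C))
```

The two strongest edges surface first, so modules with the strongest internal correlations appear as the earliest-joined subtrees and can be ranked by their internal edge weights, in the spirit of the abstract's Minimal Spanning Forest analysis.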
2003.03775 | Giuseppe Gaeta | Giuseppe Gaeta | Data Analysis for the COVID-19 early dynamics in Northern Italy. The
effect of first restrictive measures | 13 pages, 5 figures | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a recent report we have collected some data about the COVID-19 epidemics
in Northern Italy; in this follow-up we analyze how these changed after the
mild restrictive measures taken by the Government two weeks ago and the large
campaign of public awareness developed in the meantime.
| [
{
"created": "Sun, 8 Mar 2020 12:52:22 GMT",
"version": "v1"
}
] | 2020-03-26 | [
[
"Gaeta",
"Giuseppe",
""
]
] | In a recent report we have collected some data about the COVID-19 epidemics in Northern Italy; in this follow-up we analyze how these changed after the mild restrictive measures taken by the Government two weeks ago and the large campaign of public awareness developed in the meantime. |
2003.12150 | Nuno Crokidakis | Nuno Crokidakis | Modeling the early evolution of the COVID-19 in Brazil: results from a
Susceptible-Infectious-Quarantined-Recovered (SIQR) model | 8 pages, 2 figures, updated version (v2, with a different title) will
appear in IJMPC | Int. J. Mod. Phys. C 31, 2050135 (2020) | 10.1142/S0129183120501351 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The world evolution of the Severe acute respiratory syndrome coronavirus 2
(SARS-Cov2 or simply COVID-19) led the World Health Organization to declare it
a pandemic. The disease appeared in China in December 2019, and it has spread
fast around the world, especially in European countries like Italy and Spain.
The first reported case in Brazil was recorded on February 26, and after that
the number of cases grew fast. In order to slow down the initial growth of
the disease through the country, confirmed positive cases were isolated so as
not to transmit the disease. To better understand the early evolution of COVID-19 in
Brazil, we apply a Susceptible-Infectious-Quarantined-Recovered (SIQR) model to
the analysis of data from the Brazilian Department of Health, obtained from
February 26, 2020 through March 25, 2020. Based on analytical and numerical
results, as well as on the data, the basic reproduction number is estimated to
be $R_{0}=5.25$. In addition, we estimate that the ratio of unidentified
infectious individuals to confirmed cases at the beginning of the epidemic is about $10$,
in agreement with previous studies. We also estimated the epidemic doubling
time to be $2.72$ days.
| [
{
"created": "Thu, 26 Mar 2020 20:37:45 GMT",
"version": "v1"
},
{
"created": "Fri, 22 May 2020 17:16:25 GMT",
"version": "v2"
}
] | 2020-10-22 | [
[
"Crokidakis",
"Nuno",
""
]
] | The world evolution of the Severe acute respiratory syndrome coronavirus 2 (SARS-Cov2 or simply COVID-19) led the World Health Organization to declare it a pandemic. The disease appeared in China in December 2019, and it has spread fast around the world, especially in European countries like Italy and Spain. The first reported case in Brazil was recorded on February 26, and after that the number of cases grew fast. In order to slow down the initial growth of the disease through the country, confirmed positive cases were isolated so as not to transmit the disease. To better understand the early evolution of COVID-19 in Brazil, we apply a Susceptible-Infectious-Quarantined-Recovered (SIQR) model to the analysis of data from the Brazilian Department of Health, obtained from February 26, 2020 through March 25, 2020. Based on analytical and numerical results, as well as on the data, the basic reproduction number is estimated to be $R_{0}=5.25$. In addition, we estimate that the ratio of unidentified infectious individuals to confirmed cases at the beginning of the epidemic is about $10$, in agreement with previous studies. We also estimated the epidemic doubling time to be $2.72$ days. |
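A generic SIQR model of the kind referenced in the abstract above can be integrated with a simple Euler scheme. This is a sketch of one common SIQR formulation, not necessarily the paper's exact equations, and the rate values are illustrative rather than the fitted Brazilian estimates:

```python
import math

# Generic SIQR model: only infectious (I) individuals transmit; quarantined
# (Q) individuals are isolated and do not. All rates are per day.
beta, eta, alpha, gamma = 0.60, 0.20, 0.15, 0.10  # illustrative values
N = 1.0e6
S, I, Q, Rm = N - 1.0, 1.0, 0.0, 0.0

dt, days = 0.01, 30
history = []
for _ in range(int(days / dt)):
    dS = -beta * S * I / N
    dI = beta * S * I / N - (alpha + eta) * I
    dQ = eta * I - gamma * Q
    dR = alpha * I + gamma * Q
    S, I, Q, Rm = S + dt * dS, I + dt * dI, Q + dt * dQ, Rm + dt * dR
    history.append(I)

# While S ~ N the infectious class grows like exp(r t) with
# r = beta - (alpha + eta), so the doubling time is ln(2) / r.
r = beta - (alpha + eta)
print(math.log(2) / r)  # doubling time in days
```

Fitting the early exponential growth rate r to case data, as the abstract describes, is what pins down both the doubling time and (through the rate parameters) the basic reproduction number.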