id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
q-bio/0608017 | Bhalchandra Thatte | Bhalchandra D. Thatte | Invertibility of the TKF model of sequence evolution | 23 pages | Mathematical Biosciences 200, no. 1 (2006) 58-75 | null | null | q-bio.GN | null | We consider character sequences evolving on a phylogenetic tree under the
TKF91 model. We show that as the sequence lengths tend to infinity, the
topology of the phylogenetic tree and the edge lengths are determined by any
one of (a) the alignment of sequences or (b) the collection of sequence lengths.
We also show that the probability of any homology structure on a collection of
sequences related by a TKF91 process on a tree is independent of the root
location.
Keywords: phylogenetics, DNA sequence evolution models, identifiability,
alignment
| [
{
"created": "Tue, 8 Aug 2006 21:46:36 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Thatte",
"Bhalchandra D.",
""
]
] | We consider character sequences evolving on a phylogenetic tree under the TKF91 model. We show that as the sequence lengths tend to infinity, the topology of the phylogenetic tree and the edge lengths are determined by any one of (a) the alignment of sequences or (b) the collection of sequence lengths. We also show that the probability of any homology structure on a collection of sequences related by a TKF91 process on a tree is independent of the root location. Keywords: phylogenetics, DNA sequence evolution models, identifiability, alignment |
1301.2634 | Andrei Zinovyev Dr. | Andrei Zinovyev, Ulykbek Kairov, Tatiana Karpenyuk and Erlan
Ramanculov | Blind source separation methods for deconvolution of complex signals in
cancer biology | Zinovyev A., Kairov U., Karpenyuk T., Ramanculov E. Blind Source
Separation Methods For Deconvolution Of Complex Signals In Cancer Biology.
2012. Biochemical and Biophysical Research Communications. In Press. DOI:
10.1016/j.bbrc.2012.12.043 | 2013. Biochemical and Biophysical Research Communications 430(3),
1182-1187 | 10.1016/j.bbrc.2012.12.043 | null | q-bio.QM cs.CE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two blind source separation methods (Independent Component Analysis and
Non-negative Matrix Factorization), developed initially for signal processing
in engineering, have recently found a number of applications in the analysis
of large-scale data in molecular biology. In this short review, we present the
common idea behind these methods, describe ways of implementing and applying
them, and point out their advantages compared to more traditional statistical
approaches. We focus more specifically on the analysis of gene expression in
cancer. The review concludes by listing available software implementations
for the methods described.
| [
{
"created": "Fri, 11 Jan 2013 23:47:16 GMT",
"version": "v1"
}
] | 2015-02-03 | [
[
"Zinovyev",
"Andrei",
""
],
[
"Kairov",
"Ulykbek",
""
],
[
"Karpenyuk",
"Tatiana",
""
],
[
"Ramanculov",
"Erlan",
""
]
] | Two blind source separation methods (Independent Component Analysis and Non-negative Matrix Factorization), developed initially for signal processing in engineering, have recently found a number of applications in the analysis of large-scale data in molecular biology. In this short review, we present the common idea behind these methods, describe ways of implementing and applying them, and point out their advantages compared to more traditional statistical approaches. We focus more specifically on the analysis of gene expression in cancer. The review concludes by listing available software implementations for the methods described. |
1701.05400 | Andrea De Martino | Araks Martirosyan, Matteo Marsili, Andrea De Martino | Translating ceRNA susceptibilities into correlation functions | 12 pages, includes supporting text | null | 10.1016/j.bpj.2017.05.042 | null | q-bio.MN cond-mat.dis-nn q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Competition to bind microRNAs induces an effective positive crosstalk between
their targets, which are therefore known as `competing endogenous RNAs' or ceRNAs. While
such an effect is known to play a significant role in specific conditions,
estimating its strength from data and, experimentally, in physiological
conditions appears to be far from simple. Here we show that the susceptibility
of ceRNAs to different types of perturbations affecting their competitors (and
hence their tendency to crosstalk) can be encoded in quantities as intuitive
and as simple to measure as correlation functions. We confirm this scenario by
extensive numerical simulations and validate it by re-analyzing PTEN's
crosstalk pattern from the TCGA breast cancer dataset. These results clarify the
links between different quantities used to estimate the intensity of ceRNA
crosstalk and provide new keys to analyze transcriptional datasets and
effectively probe ceRNA networks in silico.
| [
{
"created": "Thu, 19 Jan 2017 13:07:15 GMT",
"version": "v1"
}
] | 2017-08-02 | [
[
"Martirosyan",
"Araks",
""
],
[
"Marsili",
"Matteo",
""
],
[
"De Martino",
"Andrea",
""
]
] | Competition to bind microRNAs induces an effective positive crosstalk between their targets, which are therefore known as `competing endogenous RNAs' or ceRNAs. While such an effect is known to play a significant role in specific conditions, estimating its strength from data and, experimentally, in physiological conditions appears to be far from simple. Here we show that the susceptibility of ceRNAs to different types of perturbations affecting their competitors (and hence their tendency to crosstalk) can be encoded in quantities as intuitive and as simple to measure as correlation functions. We confirm this scenario by extensive numerical simulations and validate it by re-analyzing PTEN's crosstalk pattern from the TCGA breast cancer dataset. These results clarify the links between different quantities used to estimate the intensity of ceRNA crosstalk and provide new keys to analyze transcriptional datasets and effectively probe ceRNA networks in silico. |
1412.5439 | Charalampos Kyriakopoulos Mr | Charalampos Kyriakopoulos, Verena Wolf | Optimal Observation Time Points in Stochastic Chemical Kinetics | The paper has been presented in HSB 2014 | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Wet-lab experiments, in which the dynamics within living cells are observed,
are usually costly and time consuming. This is particularly true if single-cell
measurements are obtained using experimental techniques such as flow-cytometry
or fluorescence microscopy. It is therefore important to optimize experiments
with respect to the information they provide about the system. In this paper we
make a priori predictions of the amount of information that can be obtained
from measurements. We focus on the case where the measurements are made to
estimate parameters of a stochastic model of the underlying biochemical
reactions. We propose a numerical scheme to approximate the Fisher information
of future experiments at different observation time points and determine
optimal observation time points. To illustrate the usefulness of our approach,
we apply our method to two interesting case studies.
| [
{
"created": "Wed, 17 Dec 2014 15:26:40 GMT",
"version": "v1"
}
] | 2014-12-18 | [
[
"Kyriakopoulos",
"Charalampos",
""
],
[
"Wolf",
"Verena",
""
]
] | Wet-lab experiments, in which the dynamics within living cells are observed, are usually costly and time consuming. This is particularly true if single-cell measurements are obtained using experimental techniques such as flow-cytometry or fluorescence microscopy. It is therefore important to optimize experiments with respect to the information they provide about the system. In this paper we make a priori predictions of the amount of information that can be obtained from measurements. We focus on the case where the measurements are made to estimate parameters of a stochastic model of the underlying biochemical reactions. We propose a numerical scheme to approximate the Fisher information of future experiments at different observation time points and determine optimal observation time points. To illustrate the usefulness of our approach, we apply our method to two interesting case studies. |
2207.03569 | Xin Wang | Yaqian Yang, Zhiming Zheng, Longzhao Liu, Hongwei Zheng, Yi Zhen, Yi
Zheng, Xin Wang, Shaoting Tang | Enhanced brain structure-function tethering in transmodal cortex
revealed by high-frequency eigenmodes | null | null | null | null | q-bio.NC physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | The brain's structural connectome supports signal propagation between
neuronal elements, shaping diverse coactivation patterns that can be captured
as functional connectivity. While the link between structure and function
remains an ongoing challenge, the prevailing hypothesis is that the
structure-function relationship may itself be gradually decoupled along a
macroscale functional gradient spanning unimodal to transmodal regions.
However, this hypothesis is strongly constrained by the underlying models which
may neglect requisite signaling mechanisms. Here, we transform the structural
connectome into a set of orthogonal eigenmodes governing frequency-specific
diffusion patterns and show that regional structure-function relationships vary
markedly under different signaling mechanisms. Specifically, low-frequency
eigenmodes, which are considered sufficient to capture the essence of the
functional network, contribute little to functional connectivity reconstruction
in transmodal regions, resulting in structure-function decoupling along the
unimodal-transmodal gradient. In contrast, high-frequency eigenmodes, which are
usually on the periphery of attention due to their association with noisy and
random dynamical patterns, contribute significantly to functional connectivity
prediction in transmodal regions, inducing gradually convergent
structure-function relationships from unimodal to transmodal regions. Although
the information in high-frequency eigenmodes is weak and scattered, it
effectively enhances the structure-function correspondence by 35% in unimodal
regions and 56% in transmodal regions. Altogether, our findings suggest that
the structure-function divergence in transmodal areas may not be an intrinsic
property of brain organization, but can be narrowed through multiplexed and
regionally specialized signaling mechanisms.
| [
{
"created": "Thu, 7 Jul 2022 20:55:53 GMT",
"version": "v1"
}
] | 2022-07-11 | [
[
"Yang",
"Yaqian",
""
],
[
"Zheng",
"Zhiming",
""
],
[
"Liu",
"Longzhao",
""
],
[
"Zheng",
"Hongwei",
""
],
[
"Zhen",
"Yi",
""
],
[
"Zheng",
"Yi",
""
],
[
"Wang",
"Xin",
""
],
[
"Tang",
"Shaoting",
""
]
] | The brain's structural connectome supports signal propagation between neuronal elements, shaping diverse coactivation patterns that can be captured as functional connectivity. While the link between structure and function remains an ongoing challenge, the prevailing hypothesis is that the structure-function relationship may itself be gradually decoupled along a macroscale functional gradient spanning unimodal to transmodal regions. However, this hypothesis is strongly constrained by the underlying models which may neglect requisite signaling mechanisms. Here, we transform the structural connectome into a set of orthogonal eigenmodes governing frequency-specific diffusion patterns and show that regional structure-function relationships vary markedly under different signaling mechanisms. Specifically, low-frequency eigenmodes, which are considered sufficient to capture the essence of the functional network, contribute little to functional connectivity reconstruction in transmodal regions, resulting in structure-function decoupling along the unimodal-transmodal gradient. In contrast, high-frequency eigenmodes, which are usually on the periphery of attention due to their association with noisy and random dynamical patterns, contribute significantly to functional connectivity prediction in transmodal regions, inducing gradually convergent structure-function relationships from unimodal to transmodal regions. Although the information in high-frequency eigenmodes is weak and scattered, it effectively enhances the structure-function correspondence by 35% in unimodal regions and 56% in transmodal regions. Altogether, our findings suggest that the structure-function divergence in transmodal areas may not be an intrinsic property of brain organization, but can be narrowed through multiplexed and regionally specialized signaling mechanisms. |
2405.03861 | Fernando Antoneli Jr | Fernando Antoneli, Martin Golubitsky, Jiaxin Jin, Ian Stewart | Homeostasis in Input-Output Networks: Structure, Classification and
Applications | 45 pages, 26 figures, submitted to the MBS special issue "Dynamical
Systems in Life Sciences" | null | null | null | q-bio.MN math.CO math.DS physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Homeostasis is concerned with regulatory mechanisms, present in biological
systems, where some specific variable is kept close to a set value as some
external disturbance affects the system. Mathematically, the notion of
homeostasis can be formalized in terms of an input-output function that maps
the parameter representing the external disturbance to the output variable that
must be kept within a fairly narrow range. This observation inspired the
introduction of the notion of infinitesimal homeostasis, namely, the derivative
of the input-output function is zero at an isolated point. This point of view
allows for the application of methods from singularity theory to characterize
infinitesimal homeostasis points (i.e. critical points of the input-output
function). In this paper we review the infinitesimal approach to the study of
homeostasis in input-output networks. An input-output network is a network with
two distinguished nodes `input' and `output', and the dynamics of the network
determines the corresponding input-output function of the system. This class of
dynamical systems provides an appropriate framework to study homeostasis and
several important biological systems can be formulated in this context.
Moreover, this approach, coupled to graph-theoretic ideas from combinatorial
matrix theory, provides a systematic way for classifying different types of
homeostasis (homeostatic mechanisms) in input-output networks, in terms of the
network topology. In turn, this leads to new mathematical concepts, such as
homeostasis subnetworks, homeostasis patterns, and homeostasis mode interaction. We
illustrate the usefulness of this theory with several biological examples:
biochemical networks, chemical reaction networks (CRN), gene regulatory
networks (GRN), intracellular metal ion regulation, and so on.
| [
{
"created": "Mon, 6 May 2024 21:20:25 GMT",
"version": "v1"
}
] | 2024-05-08 | [
[
"Antoneli",
"Fernando",
""
],
[
"Golubitsky",
"Martin",
""
],
[
"Jin",
"Jiaxin",
""
],
[
"Stewart",
"Ian",
""
]
] | Homeostasis is concerned with regulatory mechanisms, present in biological systems, where some specific variable is kept close to a set value as some external disturbance affects the system. Mathematically, the notion of homeostasis can be formalized in terms of an input-output function that maps the parameter representing the external disturbance to the output variable that must be kept within a fairly narrow range. This observation inspired the introduction of the notion of infinitesimal homeostasis, namely, the derivative of the input-output function is zero at an isolated point. This point of view allows for the application of methods from singularity theory to characterize infinitesimal homeostasis points (i.e. critical points of the input-output function). In this paper we review the infinitesimal approach to the study of homeostasis in input-output networks. An input-output network is a network with two distinguished nodes `input' and `output', and the dynamics of the network determines the corresponding input-output function of the system. This class of dynamical systems provides an appropriate framework to study homeostasis and several important biological systems can be formulated in this context. Moreover, this approach, coupled to graph-theoretic ideas from combinatorial matrix theory, provides a systematic way for classifying different types of homeostasis (homeostatic mechanisms) in input-output networks, in terms of the network topology. In turn, this leads to new mathematical concepts, such as homeostasis subnetworks, homeostasis patterns, and homeostasis mode interaction. We illustrate the usefulness of this theory with several biological examples: biochemical networks, chemical reaction networks (CRN), gene regulatory networks (GRN), intracellular metal ion regulation, and so on. |
1811.02923 | Muhammad Saif-Ur-Rehman | Muhammad Saif-ur-Rehman, Robin Lienk\"amper, Yaroslav Parpaley, J\"org
Wellmer, Charles Liu, Brian Lee, Spencer Kellis, Richard Andersen, Ioannis
Iossifidis, Tobias Glasmachers, Christian Klaes | Universal Spike Classifier | 21 Pages, 12 Figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In electrophysiology, microelectrodes are the primary source for recording
neural data of single neurons (single unit activity). These microelectrodes can
be implanted individually, or in the form of microelectrode arrays, consisting
of hundreds of electrodes. During recordings, some channels capture the
activity of neurons, which is usually contaminated with external artifacts and
noise. Another considerable fraction of channels does not record any neural
data, but external artifacts and noise. Therefore, an automatic identification
and tracking of channels containing neural data is of great significance and
can accelerate the process of analysis, e.g. automatic selection of meaningful
channels during offline and online spike sorting. Another important aspect is
the selection of meaningful channels during online decoding in brain-computer
interface applications, where threshold crossing events are usually used for feature
extraction, even though they do not necessarily correspond to neural events.
Here, we propose a novel algorithm based on a newly introduced way of feature
vector extraction and a supervised deep learning method: a universal spike
classifier (USC). The USC enables us to address both above-raised issues. The
USC uses the standard architecture of convolutional neural networks (Conv net).
It takes the batch of the waveforms, instead of a single waveform as an input,
propagates it through the multilayered structure, and finally classifies it as
a channel containing neural spike data or artifacts. We have trained the model
of USC on data recorded from a single tetraplegic patient with Utah arrays
implanted in different brain areas. This trained model was then evaluated
without retraining on the data collected from six epileptic patients implanted
with depth electrodes and two tetraplegic patients implanted with two Utah
arrays, individually.
| [
{
"created": "Wed, 7 Nov 2018 15:09:01 GMT",
"version": "v1"
}
] | 2018-11-08 | [
[
"Saif-ur-Rehman",
"Muhammad",
""
],
[
"Lienkämper",
"Robin",
""
],
[
"Parpaley",
"Yaroslav",
""
],
[
"Wellmer",
"Jörg",
""
],
[
"Liu",
"Charles",
""
],
[
"Lee",
"Brian",
""
],
[
"Kellis",
"Spencer",
""
],
[
"Andersen",
"Richard",
""
],
[
"Iossifidis",
"Ioannis",
""
],
[
"Glasmachers",
"Tobias",
""
],
[
"Klaes",
"Christian",
""
]
] | In electrophysiology, microelectrodes are the primary source for recording neural data of single neurons (single unit activity). These microelectrodes can be implanted individually, or in the form of microelectrode arrays, consisting of hundreds of electrodes. During recordings, some channels capture the activity of neurons, which is usually contaminated with external artifacts and noise. Another considerable fraction of channels does not record any neural data, but external artifacts and noise. Therefore, an automatic identification and tracking of channels containing neural data is of great significance and can accelerate the process of analysis, e.g. automatic selection of meaningful channels during offline and online spike sorting. Another important aspect is the selection of meaningful channels during online decoding in brain-computer interface applications, where threshold crossing events are usually used for feature extraction, even though they do not necessarily correspond to neural events. Here, we propose a novel algorithm based on a newly introduced way of feature vector extraction and a supervised deep learning method: a universal spike classifier (USC). The USC enables us to address both above-raised issues. The USC uses the standard architecture of convolutional neural networks (Conv net). It takes the batch of the waveforms, instead of a single waveform as an input, propagates it through the multilayered structure, and finally classifies it as a channel containing neural spike data or artifacts. We have trained the model of USC on data recorded from a single tetraplegic patient with Utah arrays implanted in different brain areas. This trained model was then evaluated without retraining on the data collected from six epileptic patients implanted with depth electrodes and two tetraplegic patients implanted with two Utah arrays, individually. |
1306.2875 | Joseph Marsh | Joseph A. Marsh | Buried and accessible surface area control intrinsic protein flexibility | 36 pages, 11 figures, author's manuscript, accepted for publication
in Journal of Molecular Biology | J Mol Biol 425, 3250-3263, 9 September 2013 | 10.1016/j.jmb.2013.06.019 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Proteins experience a wide variety of conformational dynamics that can be
crucial for facilitating their diverse functions. How is the intrinsic
flexibility required for these motions encoded in their three-dimensional
structures? Here, the overall flexibility of a protein is demonstrated to be
tightly coupled to the total amount of surface area buried within its fold. A
simple proxy for this, the relative solvent accessible surface area (Arel),
therefore shows excellent agreement with independent measures of global protein
flexibility derived from various experimental and computational methods.
Application of Arel on a large scale demonstrates its utility by revealing
unique sequence and structural properties associated with intrinsic
flexibility. In particular, flexibility as measured by Arel shows little
correspondence with intrinsic disorder, but instead tends to be associated with
multiple domains and increased {\alpha}-helical structure. Furthermore, the
apparent flexibility of monomeric proteins is found to be useful for
identifying quaternary structure errors in published crystal structures. There
is also a strong tendency for the crystal structures of more flexible proteins
to be solved to lower resolutions. Finally, local solvent accessibility is
shown to be a primary determinant of local residue flexibility. Overall this
work provides both fundamental mechanistic insight into the origin of protein
flexibility and a simple, practical method for predicting flexibility from
protein structures.
| [
{
"created": "Wed, 12 Jun 2013 16:02:47 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Aug 2013 17:09:38 GMT",
"version": "v2"
}
] | 2013-08-14 | [
[
"Marsh",
"Joseph A.",
""
]
] | Proteins experience a wide variety of conformational dynamics that can be crucial for facilitating their diverse functions. How is the intrinsic flexibility required for these motions encoded in their three-dimensional structures? Here, the overall flexibility of a protein is demonstrated to be tightly coupled to the total amount of surface area buried within its fold. A simple proxy for this, the relative solvent accessible surface area (Arel), therefore shows excellent agreement with independent measures of global protein flexibility derived from various experimental and computational methods. Application of Arel on a large scale demonstrates its utility by revealing unique sequence and structural properties associated with intrinsic flexibility. In particular, flexibility as measured by Arel shows little correspondence with intrinsic disorder, but instead tends to be associated with multiple domains and increased {\alpha}-helical structure. Furthermore, the apparent flexibility of monomeric proteins is found to be useful for identifying quaternary structure errors in published crystal structures. There is also a strong tendency for the crystal structures of more flexible proteins to be solved to lower resolutions. Finally, local solvent accessibility is shown to be a primary determinant of local residue flexibility. Overall this work provides both fundamental mechanistic insight into the origin of protein flexibility and a simple, practical method for predicting flexibility from protein structures. |
2112.01180 | Tadahaya Mizuno | Iori Azuma, Tadahaya Mizuno, and Hiroyuki Kusuhara | Extraction of diverse gene groups with individual relationship from gene
co-expression networks | 6 pages, 4 figures, correspondence: Tadahaya Mizuno
(tadahaya@mol.f.u-tokyo.ac.jp) | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Modules in gene coexpression networks (GCN) can be regarded as
gene groups with individual relationships. No studies have optimized module
detection methods to extract diverse gene groups from GCN, especially for data
from clinical specimens. Results: Here, we optimized the flow from
transcriptome data to gene modules, aiming to cover diverse gene relationships.
We found that the prediction accuracy of relationships in benchmark networks
of non-mammalian organisms was not always suitable for evaluating gene
relationships in humans, and we instead employed network-based metrics. We
also proposed a module detection
method involving a combination of graphical embedding and recursive
partitioning, and confirmed its stable and high performance in biological
plausibility of gene groupings. Analysis of differentially expressed genes of
several reported cancers using the extracted modules successfully added
relational information consistent with previous reports, confirming the
usefulness of our framework.
| [
{
"created": "Thu, 2 Dec 2021 12:52:22 GMT",
"version": "v1"
},
{
"created": "Sat, 4 Dec 2021 01:41:15 GMT",
"version": "v2"
}
] | 2021-12-07 | [
[
"Azuma",
"Iori",
""
],
[
"Mizuno",
"Tadahaya",
""
],
[
"Kusuhara",
"Hiroyuki",
""
]
] | Motivation: Modules in gene coexpression networks (GCN) can be regarded as gene groups with individual relationships. No studies have optimized module detection methods to extract diverse gene groups from GCN, especially for data from clinical specimens. Results: Here, we optimized the flow from transcriptome data to gene modules, aiming to cover diverse gene relationships. We found that the prediction accuracy of relationships in benchmark networks of non-mammalian organisms was not always suitable for evaluating gene relationships in humans, and we instead employed network-based metrics. We also proposed a module detection method involving a combination of graphical embedding and recursive partitioning, and confirmed its stable and high performance in biological plausibility of gene groupings. Analysis of differentially expressed genes of several reported cancers using the extracted modules successfully added relational information consistent with previous reports, confirming the usefulness of our framework. |
1312.1401 | Jin Yang | Hong Zhang, Wenhong Hou, Laurence Henrot, Marc Dumas, Sylvianne
Schnebert, Catherine Heus, Jin Yang | Modeling Epidermis Homeostasis and Psoriasis Pathogenesis | 22 pages, 11 figures | Journal of Royal Society Interface 2015, 12:20141071 | 10.1098/rsif.2014.1071 | null | q-bio.MN q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a computational model to study the spatiotemporal dynamics of the
epidermis homeostasis under normal and pathological conditions. The model
consists of a population kinetics model of the central transition pathway of
keratinocyte proliferation, differentiation and loss and an agent-based model
that propagates cell movements and generates the stratified epidermis. The
model recapitulates observed homeostatic cell density distribution, the
epidermal turnover time and the multilayered tissue structure. We extend the
model to study the onset, recurrence and phototherapy-induced remission of
psoriasis. The model considers psoriasis as a parallel homeostasis of
normal and psoriatic keratinocytes originating from a shared stem-cell niche
environment and predicts two homeostatic modes of psoriasis: a disease mode
and a quiescent mode. Interconversion between the two modes can be controlled
by interactions between psoriatic stem cells and the immune system and by the
normal and psoriatic stem cells competing for growth niches. The prediction of
a quiescent state potentially explains the efficacy of the multi-episode UVB
irradiation therapy and recurrence of psoriasis plaques, which can further
guide designs of therapeutics that specifically target the immune system and/or
the keratinocytes.
| [
{
"created": "Thu, 5 Dec 2013 00:58:13 GMT",
"version": "v1"
},
{
"created": "Sun, 7 Sep 2014 19:57:57 GMT",
"version": "v2"
},
{
"created": "Tue, 23 Dec 2014 06:20:57 GMT",
"version": "v3"
}
] | 2014-12-24 | [
[
"Zhang",
"Hong",
""
],
[
"Hou",
"Wenhong",
""
],
[
"Henrot",
"Laurence",
""
],
[
"Dumas",
"Marc",
""
],
[
"Schnebert",
"Sylvianne",
""
],
[
"Heus",
"Catherine",
""
],
[
"Yang",
"Jin",
""
]
] | We present a computational model to study the spatiotemporal dynamics of the epidermis homeostasis under normal and pathological conditions. The model consists of a population kinetics model of the central transition pathway of keratinocyte proliferation, differentiation and loss and an agent-based model that propagates cell movements and generates the stratified epidermis. The model recapitulates observed homeostatic cell density distribution, the epidermal turnover time and the multilayered tissue structure. We extend the model to study the onset, recurrence and phototherapy-induced remission of psoriasis. The model considers psoriasis as a parallel homeostasis of normal and psoriatic keratinocytes originating from a shared stem-cell niche environment and predicts two homeostatic modes of psoriasis: a disease mode and a quiescent mode. Interconversion between the two modes can be controlled by interactions between psoriatic stem cells and the immune system and by the normal and psoriatic stem cells competing for growth niches. The prediction of a quiescent state potentially explains the efficacy of the multi-episode UVB irradiation therapy and recurrence of psoriasis plaques, which can further guide designs of therapeutics that specifically target the immune system and/or the keratinocytes. |
0811.3164 | Dirson Jian Li | Dirson Jian Li and Shengli Zhang | Classification of life by the mechanism of genome size evolution | 53 pages, 9 figures, 2 tables | Mod. Phys. Lett. B, Vol. 23, No. 29 and No. 30 (2009) | 10.1142/S0217984909021533 10.1142/S0217984909021612 | null | q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The classification of life should be based upon the fundamental mechanism in
the evolution of life. We found that the global relationships among species
should be circular phylogeny, which is quite different from the common view
based upon phylogenetic trees. The genealogical circles can be observed clearly
according to the analysis of protein length distributions of contemporary
species. Thus, we suggest that domains can be defined by distinguished
phylogenetic circles, which are global and stable characteristics of living
systems. The mechanism of genome size evolution has been clarified; hence the
main component questions of the C-value enigma can be explained. According to the
correlations and quasi-periodicity of protein length distributions, we can also
classify life into three domains.
| [
{
"created": "Wed, 19 Nov 2008 17:54:14 GMT",
"version": "v1"
},
{
"created": "Sun, 17 May 2009 12:54:48 GMT",
"version": "v2"
}
] | 2009-12-16 | [
[
"Li",
"Dirson Jian",
""
],
[
"Zhang",
"Shengli",
""
]
] | The classification of life should be based upon the fundamental mechanism in the evolution of life. We found that the global relationships among species should be circular phylogeny, which is quite different from the common sense based upon phylogenetic trees. The genealogical circles can be observed clearly according to the analysis of protein length distributions of contemporary species. Thus, we suggest that domains can be defined by distinguished phylogenetic circles, which are global and stable characteristics of living systems. The mechanism in genome size evolution has been clarified; hence main component questions on C-value enigma can be explained. According to the correlations and quasi-periodicity of protein length distributions, we can also classify life into three domains. |
1005.1514 | Gianluca Della Vedova | Paola Bonizzoni and Gianluca Della Vedova and Yuri Pirola and
Raffaella Rizzi | PIntron: a Fast Method for Gene Structure Prediction via Maximal
Pairings of a Pattern and a Text | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current computational methods for exon-intron structure prediction from a
cluster of transcript (EST, mRNA) data do not exhibit the time and space
efficiency necessary to process large clusters of more than 20,000 ESTs and
genes longer than 1Mb. Guaranteeing both accuracy and efficiency seems to be a
computational goal quite far from being achieved, since accuracy is strictly
related to exploiting the inherent redundancy of information present in a large
cluster. We propose a fast method for the problem that combines two ideas: a
novel algorithm of provably small time complexity for computing spliced
alignments of a transcript against a genome, and an efficient algorithm that
exploits the inherent redundancy of information in a cluster of transcripts to
select, among all possible factorizations of EST sequences, those that allow
inferring splice site junctions that are highly confirmed by the input data. The
EST alignment procedure is based on the construction of maximal embeddings that
are sequences obtained from paths of a graph structure, called Embedding Graph,
whose vertices are the maximal pairings of a genomic sequence T and an EST P.
The procedure runs in time linear in the size of P, T and of the output.
PIntron, the software tool implementing our methodology, is able to process in
a few seconds some critical genes that are not manageable by other gene
structure prediction tools. At the same time, PIntron exhibits high accuracy
(sensitivity and specificity) when compared with ENCODE data. Detailed
experimental data, additional results and PIntron software are available at
http://www.algolab.eu/PIntron.
| [
{
"created": "Mon, 10 May 2010 12:24:11 GMT",
"version": "v1"
}
] | 2010-05-11 | [
[
"Bonizzoni",
"Paola",
""
],
[
"Della Vedova",
"Gianluca",
""
],
[
"Pirola",
"Yuri",
""
],
[
"Rizzi",
"Raffaella",
""
]
] | Current computational methods for exon-intron structure prediction from a cluster of transcript (EST, mRNA) data do not exhibit the time and space efficiency necessary to process large clusters of over than 20,000 ESTs and genes longer than 1Mb. Guaranteeing both accuracy and efficiency seems to be a computational goal quite far to be achieved, since accuracy is strictly related to exploiting the inherent redundancy of information present in a large cluster. We propose a fast method for the problem that combines two ideas: a novel algorithm of proved small time complexity for computing spliced alignments of a transcript against a genome, and an efficient algorithm that exploits the inherent redundancy of information in a cluster of transcripts to select, among all possible factorizations of EST sequences, those allowing to infer splice site junctions that are highly confirmed by the input data. The EST alignment procedure is based on the construction of maximal embeddings that are sequences obtained from paths of a graph structure, called Embedding Graph, whose vertices are the maximal pairings of a genomic sequence T and an EST P. The procedure runs in time linear in the size of P, T and of the output. PIntron, the software tool implementing our methodology, is able to process in a few seconds some critical genes that are not manageable by other gene structure prediction tools. At the same time, PIntron exhibits high accuracy (sensitivity and specificity) when compared with ENCODE data. Detailed experimental data, additional results and PIntron software are available at http://www.algolab.eu/PIntron. |
1010.1409 | Florian Markowetz | Yinyin Yuan, Christina Curtis, Carlos Caldas, Florian Markowetz | A sparse regulatory network of copy-number driven expression reveals
putative breast cancer oncogenes | Accepted at IEEE International Conference on Bioinformatics &
Biomedicine (BIBM 2010) | null | 10.1109/BIBM.2010.5706612 | null | q-bio.MN stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The influence of DNA cis-regulatory elements on a gene's expression has been
intensively studied. However, little is known about expressions driven by
trans-acting DNA hotspots. DNA hotspots harboring copy number aberrations are
recognized to be important in cancer as they influence multiple genes on a
global scale. The challenge in detecting trans-effects is mainly due to the
computational difficulty in detecting weak and sparse trans-acting signals
amidst co-occurring passenger events. We propose an integrative approach to
learn a sparse interaction network of DNA copy-number regions with their
downstream targets in a breast cancer dataset. Information from this network
helps distinguish copy-number driven from copy-number independent expression
changes on a global scale. Our result further delineates cis- and trans-effects
in a breast cancer dataset, for which important oncogenes such as ESR1 and
ERBB2 appear to be highly copy-number dependent. Further, our model is shown to
be efficient and, in terms of goodness of fit, no worse than other
state-of-the-art predictors and network reconstruction models using both simulated and real
data.
| [
{
"created": "Thu, 7 Oct 2010 12:12:13 GMT",
"version": "v1"
}
] | 2016-11-17 | [
[
"Yuan",
"Yinyin",
""
],
[
"Curtis",
"Christina",
""
],
[
"Caldas",
"Carlos",
""
],
[
"Markowetz",
"Florian",
""
]
] | The influence of DNA cis-regulatory elements on a gene's expression has been intensively studied. However, little is known about expressions driven by trans-acting DNA hotspots. DNA hotspots harboring copy number aberrations are recognized to be important in cancer as they influence multiple genes on a global scale. The challenge in detecting trans-effects is mainly due to the computational difficulty in detecting weak and sparse trans-acting signals amidst co-occuring passenger events. We propose an integrative approach to learn a sparse interaction network of DNA copy-number regions with their downstream targets in a breast cancer dataset. Information from this network helps distinguish copy-number driven from copy-number independent expression changes on a global scale. Our result further delineates cis- and trans-effects in a breast cancer dataset, for which important oncogenes such as ESR1 and ERBB2 appear to be highly copy-number dependent. Further, our model is shown to be efficient and in terms of goodness of fit no worse than other state-of the art predictors and network reconstruction models using both simulated and real data. |
1512.04471 | Denis Tverskoy | Denis Tverskoy | Life history evolution and the origin of multicellularity:
differentiation of types, energy constraints, non-linear curvatures of
trade-off functions | 21 pages | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/3.0/ | A fundamental issue discussed in evolutionary biology is the transition from
unicellular to multicellular organisms. Here we develop the non-robust models
provided in [1] and attempt to obtain robust models that investigate how
differentiation of types and energy constraints influence the optimal behavior
of colonies of different sizes. The constructed models show that each
large-sized colony with high initial costs of reproduction tends to full
specialization, regardless of whether all cells in the colony are identical or
there are cells of different types in the colony. The level of type diversity
determines the number of cells specialized in soma. In small-sized colonies
with low initial costs of reproduction, when type diversity is weak, an
unspecialized state may bring the colony some benefits. However, these benefits
may be only local, and at the optimum some cells in the colony would be
specialized and others unspecialized. The number of specialized cells in a
small-sized colony depends on the level of type diversity in that colony.
Adding an energy constraint, we may obtain robust models even in the convex
case. At the optimum, a colony with different types of cells and an energy
restriction may be indifferent between several optimal patterns of states. An
arbitrarily chosen cell may be soma or germ in some states and unspecialized in
others. Moreover, in each optimal state the levels of fecundity and viability
of each cell lie in limited ranges. This result reflects the fact that some
cells in the colony may lose the potential ability to achieve, for example, a
high level of fecundity, but do not lose the possibility of performing a
reproductive function at all. This means that the provided model can describe
organisms that represent intermediates between unspecialized colonies and
fully specialized multicellular organisms.
| [
{
"created": "Sat, 24 Jan 2015 23:47:17 GMT",
"version": "v1"
}
] | 2015-12-15 | [
[
"Tverskoy",
"Denis",
""
]
] | A fundamental issue discussed in evolutionary biology is the transition from unicellular to multicellular organisms. Here we develop non-robust models provided in [1] and attempt to get robust models investigated how differentiation of types and energy constraints influence on the optimal behavior of colonies with different size. Constructed models show that each large-sized colony with high initial costs of reproduction tends to full specialization, no matter are all cells in this colony identical or are there cells with different types in this colony. The level of type's diversity determines the number of cells specialized in soma. In small - sized colonies with low initial costs of reproduction, when type's diversity is week, an unspecialized state may bring colony some benefits. However, these benefits may be only local and in optimum in the colony some cells would be specialized, others - unspecialized. The amount of specialized cells in small - sized colony depends on the level of type's diversity in this colony. Adding energy constraint, we may receive robust models even in convex case. In optimum, the colony with different types of cells and energy restriction may be indifferent between some optimal patterns of states. Arbitrary chosen cell may be soma or germ in some states or may be unspecialized in other. Moreover, in each optimal state levels of fecundity and viability of each cell lie in limited ranges. This result reflects the fact that some cell in the colony may lose the potential ability to achieve, for example, high level of fecundity, but does not lose the possibility to perform a reproductive function at all. It means that provided model can describe organisms, which represents the intermediate between unspecialized colonies and full-specialized multicellular organisms. |
1509.01351 | Tim Coulson | Tim Coulson, Floriane Plard, Susanne Schindler, Arpat Ozgul and
Jean-Michel Gaillard | Quantitative Genetics Meets Integral Projection Models: Unification of
Widely Used Methods from Ecology and Evolution | 41 pages, 5 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 1) Micro-evolutionary predictions are complicated by ecological feedbacks
like density dependence, while ecological predictions can be complicated by
evolutionary change. A widely used approach in micro-evolution, quantitative
genetics, struggles to incorporate ecological processes into predictive models,
while structured population modelling, a tool widely used in ecology, rarely
incorporates evolution explicitly.
2) In this paper we develop a flexible, general framework that links
quantitative genetics and structured population models. We use the quantitative
genetic approach to write down the phenotype as an additive map. We then
construct integral projection models for each component of the phenotype. The
dynamics of the distribution of the phenotype are generated by combining
distributions of each of its components. Population projection models can be
formulated on per generation or on shorter time steps.
3) We introduce the framework before developing example models with
parameters chosen to exhibit specific dynamics. These models reveal (i) how
evolution of a phenotype can cause populations to move from one dynamical
regime to another (e.g. from stationarity to cycles), (ii) how additive genetic
variances and covariances (the G matrix) are expected to evolve over multiple
generations, (iii) how changing heritability with age can maintain additive
genetic variation in the face of selection and (iv) how life history, population
dynamics, phenotypic characters and parameters in ecological models will change
as adaptation occurs.
4) Our approach unifies population ecology and evolutionary biology providing
a framework allowing a very wide range of questions to be addressed. The next
step is to apply the approach to a variety of laboratory and field systems.
Once this is done we will have a much deeper understanding of eco-evolutionary
dynamics and feedbacks.
| [
{
"created": "Fri, 4 Sep 2015 06:43:29 GMT",
"version": "v1"
}
] | 2015-09-07 | [
[
"Coulson",
"Tim",
""
],
[
"Plard",
"Floriane",
""
],
[
"Schindler",
"Susanne",
""
],
[
"Ozgul",
"Arpat",
""
],
[
"Gaillard",
"Jean-Michel",
""
]
] | 1) Micro-evolutionary predictions are complicated by ecological feedbacks like density dependence, while ecological predictions can be complicated by evolutionary change. A widely used approach in micro-evolution, quantitative genetics, struggles to incorporate ecological processes into predictive models, while structured population modelling, a tool widely used in ecology, rarely incorporates evolution explicitly. 2) In this paper we develop a flexible, general framework that links quantitative genetics and structured population models. We use the quantitative genetic approach to write down the phenotype as an additive map. We then construct integral projection models for each component of the phenotype. The dynamics of the distribution of the phenotype are generated by combining distributions of each of its components. Population projection models can be formulated on per generation or on shorter time steps. 3) We introduce the framework before developing example models with parameters chosen to exhibit specific dynamics. These models reveal (i) how evolution of a phenotype can cause populations to move from one dynamical regime to another (e.g. from stationarity to cycles), (ii) how additive genetic variances and covariances (the G matrix) are expected to evolve over multiple generations, (iii) how changing heritability with age can maintain additive genetic variation in the face of selection and (iii) life history, population dynamics, phenotypic characters and parameters in ecological models will change as adaptation occurs. 4) Our approach unifies population ecology and evolutionary biology providing a framework allowing a very wide range of questions to be addressed. The next step is to apply the approach to a variety of laboratory and field systems. Once this is done we will have a much deeper understanding of eco-evolutionary dynamics and feedbacks. |
q-bio/0610040 | Jean-Philippe Vert | Jean-Philippe Vert (CB), Jian Qiu (GS-UW), William Stafford Noble
(GS-UW) | Metric learning pairwise kernel for graph inference | null | null | null | null | q-bio.QM cs.LG | null | Much recent work in bioinformatics has focused on the inference of various
types of biological networks, representing gene regulation, metabolic
processes, protein-protein interactions, etc. A common setting involves
inferring network edges in a supervised fashion from a set of high-confidence
edges, possibly characterized by multiple, heterogeneous data sets (protein
sequence, gene expression, etc.). Here, we distinguish between two modes of
inference in this setting: direct inference based upon similarities between
nodes joined by an edge, and indirect inference based upon similarities between
one pair of nodes and another pair of nodes. We propose a supervised approach
for the direct case by translating it into a distance metric learning problem.
A relaxation of the resulting convex optimization problem leads to the support
vector machine (SVM) algorithm with a particular kernel for pairs, which we
call the metric learning pairwise kernel (MLPK). We demonstrate, using several
real biological networks, that this direct approach often improves upon the
state-of-the-art SVM for indirect inference with the tensor product pairwise
kernel.
| [
{
"created": "Sat, 21 Oct 2006 06:33:24 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Vert",
"Jean-Philippe",
"",
"CB"
],
[
"Qiu",
"Jian",
"",
"GS-UW"
],
[
"Noble",
"William Stafford",
"",
"GS-UW"
]
] | Much recent work in bioinformatics has focused on the inference of various types of biological networks, representing gene regulation, metabolic processes, protein-protein interactions, etc. A common setting involves inferring network edges in a supervised fashion from a set of high-confidence edges, possibly characterized by multiple, heterogeneous data sets (protein sequence, gene expression, etc.). Here, we distinguish between two modes of inference in this setting: direct inference based upon similarities between nodes joined by an edge, and indirect inference based upon similarities between one pair of nodes and another pair of nodes. We propose a supervised approach for the direct case by translating it into a distance metric learning problem. A relaxation of the resulting convex optimization problem leads to the support vector machine (SVM) algorithm with a particular kernel for pairs, which we call the metric learning pairwise kernel (MLPK). We demonstrate, using several real biological networks, that this direct approach often improves upon the state-of-the-art SVM for indirect inference with the tensor product pairwise kernel. |
2312.14274 | Christof Fehrman | Christof Fehrman and C. Daniel Meliza | Nonlinear Model Predictive Control of a Conductance-Based Neuron Model
via Data-Driven Forecasting | Figures updated; modified structure of noise current; added section
comparing other control strategies; results unchanged | null | null | null | q-bio.NC cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | Objective. Precise control of neural systems is essential to experimental
investigations of how the brain controls behavior and holds the potential for
therapeutic manipulations to correct aberrant network states. Model predictive
control, which employs a dynamical model of the system to find optimal control
inputs, has promise for dealing with the nonlinear dynamics, high levels of
exogenous noise, and limited information about unmeasured states and parameters
that are common in a wide range of neural systems. However, the challenge still
remains of selecting the right model, constraining its parameters, and
synchronizing to the neural system. Approach. As a proof of principle, we used
recent advances in data-driven forecasting to construct a nonlinear
machine-learning model of a Hodgkin-Huxley type neuron when only the membrane
voltage is observable and there are an unknown number of intrinsic currents.
Main Results. We show that this approach is able to learn the dynamics of
different neuron types and can be used with MPC to force the neuron to engage
in arbitrary, researcher-defined spiking behaviors. Significance. To the best
of our knowledge, this is the first application of nonlinear MPC of a
conductance-based model where there is only realistically limited information
about unobservable states and parameters.
| [
{
"created": "Thu, 21 Dec 2023 19:55:05 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Aug 2024 23:29:14 GMT",
"version": "v2"
}
] | 2024-08-06 | [
[
"Fehrman",
"Christof",
""
],
[
"Meliza",
"C. Daniel",
""
]
] | Objective. Precise control of neural systems is essential to experimental investigations of how the brain controls behavior and holds the potential for therapeutic manipulations to correct aberrant network states. Model predictive control, which employs a dynamical model of the system to find optimal control inputs, has promise for dealing with the nonlinear dynamics, high levels of exogenous noise, and limited information about unmeasured states and parameters that are common in a wide range of neural systems. However, the challenge still remains of selecting the right model, constraining its parameters, and synchronizing to the neural system. Approach. As a proof of principle, we used recent advances in data-driven forecasting to construct a nonlinear machine-learning model of a Hodgkin-Huxley type neuron when only the membrane voltage is observable and there are an unknown number of intrinsic currents. Main Results. We show that this approach is able to learn the dynamics of different neuron types and can be used with MPC to force the neuron to engage in arbitrary, researcher-defined spiking behaviors. Significance. To the best of our knowledge, this is the first application of nonlinear MPC of a conductance-based model where there is only realistically limited information about unobservable states and parameters. |
1012.3803 | Leonid Perlovsky | Leonid Perlovsky | Physics of the mind: Concepts, emotions, language, cognition,
consciousness, beauty, music, and symbolic culture | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mathematical approaches to modeling the mind since the 1950s are reviewed.
Difficulties faced by these approaches are related to the fundamental
incompleteness of logic discovered by K. G\"odel. A recent mathematical
advancement, dynamic logic (DL), overcame these past difficulties. DL is
described conceptually and related to neuroscience, psychology, cognitive
science, and philosophy. DL models higher cognitive functions: concepts,
emotions, instincts, understanding, imagination, intuition, consciousness. DL
is related to the knowledge instinct that drives our understanding of the world
and serves as a foundation for higher cognitive functions. Aesthetic emotions
and perception of beauty are related to 'everyday' functioning of the mind. The
article reviews mechanisms of human symbolic ability, language and cognition,
joint evolution of the mind, consciousness, and cultures. It touches on a
manifold of aesthetic emotions in music, their cognitive function, origin, and
evolution. The article concentrates on elucidating the first principles and
reviews aspects of the theory proven in laboratory research.
| [
{
"created": "Fri, 17 Dec 2010 03:45:00 GMT",
"version": "v1"
}
] | 2010-12-20 | [
[
"Perlovsky",
"Leonid",
""
]
] | Mathematical approaches to modeling the mind since the 1950s are reviewed. Difficulties faced by these approaches are related to the fundamental incompleteness of logic discovered by K. G\"odel. A recent mathematical advancement, dynamic logic (DL) overcame these past difficulties. DL is described conceptually and related to neuroscience, psychology, cognitive science, and philosophy. DL models higher cognitive functions: concepts, emotions, instincts, understanding, imagination, intuition, consciousness. DL is related to the knowledge instinct that drives our understanding of the world and serves as a foundation for higher cognitive functions. Aesthetic emotions and perception of beauty are related to 'everyday' functioning of the mind. The article reviews mechanisms of human symbolic ability, language and cognition, joint evolution of the mind, consciousness, and cultures. It touches on a manifold of aesthetic emotions in music, their cognitive function, origin, and evolution. The article concentrates on elucidating the first principles and reviews aspects of the theory proven in laboratory research. |
1703.06270 | Ramin M. Hasani | Ramin M. Hasani, Victoria Beneder, Magdalena Fuchs, David Lung and
Radu Grosu | SIM-CE: An Advanced Simulink Platform for Studying the Brain of
Caenorhabditis elegans | null | null | null | null | q-bio.NC cs.NE q-bio.QM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce SIM-CE, an advanced, user-friendly modeling and simulation
environment in Simulink for performing multi-scale behavioral analysis of the
nervous system of Caenorhabditis elegans (C. elegans). SIM-CE contains an
implementation of the mathematical models of C. elegans's neurons and synapses,
in Simulink, which can be easily extended and particularized by the user. The
Simulink model is able to capture both complex dynamics of ion channels and
additional biophysical detail such as intracellular calcium concentration. We
demonstrate the performance of SIM-CE by carrying out neuronal, synaptic and
neural-circuit-level behavioral simulations. Such an environment enables the user
to capture unknown properties of the neural circuits, test hypotheses and
determine the origin of many behavioral plasticities exhibited by the worm.
| [
{
"created": "Sat, 18 Mar 2017 08:27:42 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Mar 2017 15:49:11 GMT",
"version": "v2"
},
{
"created": "Sat, 25 Mar 2017 13:19:14 GMT",
"version": "v3"
}
] | 2017-03-28 | [
[
"Hasani",
"Ramin M.",
""
],
[
"Beneder",
"Victoria",
""
],
[
"Fuchs",
"Magdalena",
""
],
[
"Lung",
"David",
""
],
[
"Grosu",
"Radu",
""
]
] | We introduce SIM-CE, an advanced, user-friendly modeling and simulation environment in Simulink for performing multi-scale behavioral analysis of the nervous system of Caenorhabditis elegans (C. elegans). SIM-CE contains an implementation of the mathematical models of C. elegans's neurons and synapses, in Simulink, which can be easily extended and particularized by the user. The Simulink model is able to capture both complex dynamics of ion channels and additional biophysical detail such as intracellular calcium concentration. We demonstrate the performance of SIM-CE by carrying out neuronal, synaptic and neural-circuit-level behavioral simulations. Such environment enables the user to capture unknown properties of the neural circuits, test hypotheses and determine the origin of many behavioral plasticities exhibited by the worm. |
1309.4467 | Rogier Braakman | Rogier Braakman, Eric Smith | Metabolic evolution of a deep-branching hyperthermophilic
chemoautotrophic bacterium | 25 pages, 5 figures, 5 tables, 2 supplementary files | null | 10.1371/journal.pone.0087950 | null | q-bio.MN q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aquifex aeolicus is a deep-branching hyperthermophilic chemoautotrophic
bacterium restricted to hydrothermal vents and hot springs. These
characteristics make it an excellent model system for studying the early
evolution of metabolism. Here we present the whole-genome metabolic network of
this organism and examine in detail the driving forces that have shaped it. We
make extensive use of phylometabolic analysis, a method we recently introduced
that generates trees of metabolic phenotypes by integrating phylogenetic and
metabolic constraints. We reconstruct the evolution of a range of metabolic
sub-systems, including the reductive citric acid (rTCA) cycle, as well as the
biosynthesis and functional roles of several amino acids and cofactors. We show
that A. aeolicus uses the reconstructed ancestral pathways within many of these
sub-systems, and highlight how the evolutionary interconnections between
sub-systems facilitated several key innovations. Our analyses further highlight
three general classes of driving forces in metabolic evolution. One is the
duplication and divergence of genes for enzymes as these progress from lower to
higher substrate specificity, improving the kinetics of certain sub-systems. A
second is the kinetic optimization of established pathways through fusion of
enzymes, or their organization into larger complexes. The third is the
minimization of the ATP unit cost to synthesize biomass, improving
thermodynamic efficiency. Quantifying the distribution of these classes of
innovations across metabolic sub-systems and across the tree of life will allow
us to assess how a tradeoff between maximizing growth rate and growth
efficiency has shaped the long-term metabolic evolution of the biosphere.
| [
{
"created": "Tue, 17 Sep 2013 20:08:21 GMT",
"version": "v1"
}
] | 2014-03-05 | [
[
"Braakman",
"Rogier",
""
],
[
"Smith",
"Eric",
""
]
] | Aquifex aeolicus is a deep-branching hyperthermophilic chemoautotrophic bacterium restricted to hydrothermal vents and hot springs. These characteristics make it an excellent model system for studying the early evolution of metabolism. Here we present the whole-genome metabolic network of this organism and examine in detail the driving forces that have shaped it. We make extensive use of phylometabolic analysis, a method we recently introduced that generates trees of metabolic phenotypes by integrating phylogenetic and metabolic constraints. We reconstruct the evolution of a range of metabolic sub-systems, including the reductive citric acid (rTCA) cycle, as well as the biosynthesis and functional roles of several amino acids and cofactors. We show that A. aeolicus uses the reconstructed ancestral pathways within many of these sub-systems, and highlight how the evolutionary interconnections between sub-systems facilitated several key innovations. Our analyses further highlight three general classes of driving forces in metabolic evolution. One is the duplication and divergence of genes for enzymes as these progress from lower to higher substrate specificity, improving the kinetics of certain sub-systems. A second is the kinetic optimization of established pathways through fusion of enzymes, or their organization into larger complexes. The third is the minimization of the ATP unit cost to synthesize biomass, improving thermodynamic efficiency. Quantifying the distribution of these classes of innovations across metabolic sub-systems and across the tree of life will allow us to assess how a tradeoff between maximizing growth rate and growth efficiency has shaped the long-term metabolic evolution of the biosphere. |
2306.13633 | F\'elix Foutel-Rodier | F\'elix Foutel-Rodier, Arthur Charpentier, H\'el\`ene Gu\'erin | Optimal Vaccination Policy to Prevent Endemicity: A Stochastic Model | 51 pages, 7 figures | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We examine here the effects of recurrent vaccination and waning immunity on
the establishment of an endemic equilibrium in a population. An
individual-based model that incorporates memory effects for transmission rate
during infection and subsequent immunity is introduced, considering
stochasticity at the individual level. By letting the population size go to
infinity, we derive a set of equations describing the large scale behavior of
the epidemic. The analysis of the model's equilibria reveals a criterion for
the existence of an endemic equilibrium, which depends on the rate of immunity
loss and the distribution of time between booster doses. The outcome of a
vaccination policy in this context is influenced by the efficiency of the
vaccine in blocking transmissions and the distribution pattern of booster doses
within the population. Strategies with evenly spaced booster shots at the
individual level prove to be more effective in preventing disease spread
compared to irregularly spaced boosters, as longer intervals without
vaccination increase susceptibility and facilitate more efficient disease
transmission. We provide an expression for the critical fraction of the
population required to adhere to the vaccination policy in order to eradicate
the disease, that resembles a well-known threshold for preventing an outbreak
with an imperfect vaccine. We also investigate the consequences of unequal
vaccine access in a population and prove that, under reasonable assumptions,
fair vaccine allocation is the optimal strategy to prevent endemicity.
| [
{
"created": "Fri, 23 Jun 2023 17:38:45 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Apr 2024 11:12:42 GMT",
"version": "v2"
}
] | 2024-04-08 | [
[
"Foutel-Rodier",
"Félix",
""
],
[
"Charpentier",
"Arthur",
""
],
[
"Guérin",
"Hélène",
""
]
] | We examine here the effects of recurrent vaccination and waning immunity on the establishment of an endemic equilibrium in a population. An individual-based model that incorporates memory effects for transmission rate during infection and subsequent immunity is introduced, considering stochasticity at the individual level. By letting the population size go to infinity, we derive a set of equations describing the large scale behavior of the epidemic. The analysis of the model's equilibria reveals a criterion for the existence of an endemic equilibrium, which depends on the rate of immunity loss and the distribution of time between booster doses. The outcome of a vaccination policy in this context is influenced by the efficiency of the vaccine in blocking transmissions and the distribution pattern of booster doses within the population. Strategies with evenly spaced booster shots at the individual level prove to be more effective in preventing disease spread compared to irregularly spaced boosters, as longer intervals without vaccination increase susceptibility and facilitate more efficient disease transmission. We provide an expression for the critical fraction of the population required to adhere to the vaccination policy in order to eradicate the disease, that resembles a well-known threshold for preventing an outbreak with an imperfect vaccine. We also investigate the consequences of unequal vaccine access in a population and prove that, under reasonable assumptions, fair vaccine allocation is the optimal strategy to prevent endemicity.
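The "well-known threshold for preventing an outbreak with an imperfect vaccine" that this abstract alludes to is the classical critical vaccination coverage, p_c = (1 - 1/R0) / efficacy. A minimal sketch for readers of this record — the function name and the numbers below are illustrative, not taken from the paper:

```python
def critical_coverage(r0: float, efficacy: float) -> float:
    """Classical critical vaccination coverage for an imperfect vaccine.

    p_c = (1 - 1/R0) / efficacy; a value above 1 means eradication is
    impossible at that efficacy, however many people are vaccinated.
    """
    if r0 <= 1.0:
        return 0.0  # the disease dies out even without vaccination
    return (1.0 - 1.0 / r0) / efficacy

# A perfect vaccine against R0 = 4 needs 75% coverage; halving the
# efficacy doubles the coverage required.
print(critical_coverage(4.0, 1.0))  # 0.75
print(critical_coverage(4.0, 0.5))  # 1.5 -> eradication impossible
```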
2201.01796 | Marc Howard | Marc W. Howard | Formal models of memory based on temporally-varying representations | In press, In F. G. Ashby, H. Colonius, & E. Dzhafarov (Eds.), The new
handbook of mathematical psychology, Volume 3. Cambridge University Press | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The idea that memory behavior relies on a gradually-changing internal state
has a long history in mathematical psychology. This chapter traces this line of
thought from statistical learning theory in the 1950s, through distributed
memory models in the latter part of the 20th century and early part of the 21st
century through to modern models based on a scale-invariant temporal history.
We discuss the neural phenomena consistent with this form of representation and
sketch the kinds of cognitive models that can be constructed using it and
connections with formal models of various memory tasks.
| [
{
"created": "Wed, 5 Jan 2022 19:28:56 GMT",
"version": "v1"
}
] | 2022-01-07 | [
[
"Howard",
"Marc W.",
""
]
] | The idea that memory behavior relies on a gradually-changing internal state has a long history in mathematical psychology. This chapter traces this line of thought from statistical learning theory in the 1950s, through distributed memory models in the latter part of the 20th century and early part of the 21st century through to modern models based on a scale-invariant temporal history. We discuss the neural phenomena consistent with this form of representation and sketch the kinds of cognitive models that can be constructed using it and connections with formal models of various memory tasks. |
0910.5862 | Henrik Jeldtoft Jensen | Dominic Jones, Henrik Jeldtoft Jensen and Paolo Sibani | Mutual information in the Tangled Nature Model | 7 pages, 5 figures. To appear in Ecological Modelling | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the concept of mutual information in ecological networks, and use
this idea to analyse the Tangled Nature model of co-evolution. We show that
this measure of correlation has two distinct behaviours depending on how we
define the network in question: if we consider only the network of viable
species this measure increases, whereas for the whole system it decreases. It
is suggested that these are complementary behaviours that show how ecosystems
can become both more stable and better adapted.
| [
{
"created": "Fri, 30 Oct 2009 12:50:18 GMT",
"version": "v1"
}
] | 2009-11-02 | [
[
"Jones",
"Dominic",
""
],
[
"Jensen",
"Henrik Jeldtoft",
""
],
[
"Sibani",
"Paolo",
""
]
] | We consider the concept of mutual information in ecological networks, and use this idea to analyse the Tangled Nature model of co-evolution. We show that this measure of correlation has two distinct behaviours depending on how we define the network in question: if we consider only the network of viable species this measure increases, whereas for the whole system it decreases. It is suggested that these are complementary behaviours that show how ecosystems can become both more stable and better adapted.
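For readers unfamiliar with the correlation measure used in this record, mutual information over a discrete joint distribution is straightforward to compute. A minimal sketch — this is the textbook quantity, not the Tangled Nature model's own implementation:

```python
import numpy as np

def mutual_information_bits(joint: np.ndarray) -> float:
    """I(X;Y) = sum_xy p(x,y) log2[ p(x,y) / (p(x) p(y)) ] for a
    discrete joint distribution given as a 2-D array of weights."""
    joint = joint / joint.sum()
    px = joint.sum(axis=1, keepdims=True)  # marginal of X
    py = joint.sum(axis=0, keepdims=True)  # marginal of Y
    mask = joint > 0                       # 0 log 0 is taken as 0
    return float(np.sum(joint[mask] * np.log2(joint[mask] / (px * py)[mask])))

# Independent variables share no information; perfectly correlated
# binary variables share exactly one bit.
print(mutual_information_bits(np.array([[0.25, 0.25], [0.25, 0.25]])))  # 0.0
print(mutual_information_bits(np.array([[0.5, 0.0], [0.0, 0.5]])))      # 1.0
```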
2006.01226 | Fuhai Li | Fuhai Li, Andrew P. Michelson, Randi Foraker, Ming Zhan, Philip R.O.
Payne | Repurposing drugs for COVID-19 based on transcriptional response of host
cells to SARS-CoV-2 | 11 pages, 3 figures | null | null | null | q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Coronavirus Disease 2019 (COVID-19) pandemic has infected over 10 million
people globally with a relatively high mortality rate. There are many
therapeutics undergoing clinical trials, but there is no effective vaccine or
therapy for treatment thus far. After infection by the Severe Acute Respiratory
Syndrome Coronavirus 2 (SARS-CoV-2), molecular signaling of host cells plays
critical roles during the life cycle of SARS-CoV-2. Thus, it is significant to
identify the involved molecular signaling pathways within the host cells, and
drugs targeting these molecular signaling pathways could be potentially
effective for COVID-19 treatment. In this study, we aimed to identify these
potential molecular signaling pathways, and repurpose existing drugs as a
potentially effective treatment of COVID-19 to facilitate the therapeutic
discovery, based on the transcriptional response of host cells. We first
identified dysfunctional signaling pathways associated with the infection
caused by SARS-CoV-2 in human lung epithelial cells through analysis of the
altered gene expression profiles. In addition to the signaling pathway
analysis, the activated gene ontologies (GOs) and super gene ontologies were
identified. Signaling pathways and GOs such as MAPK, JNK, STAT, ERK, JAK-STAT,
IRF7-NFkB signaling, and MYD88/CXCR6 immune signaling were particularly
identified. Based on the identified signaling pathways and GOs, a set of
potentially effective drugs were repurposed by integrating the drug-target and
reverse gene expression data resources. Dexamethasone was top-ranked in the
prediction, which was the first reported drug to be able to significantly
reduce the death rate of COVID-19 patients receiving respiratory support. The
results can be helpful to understand the associated molecular signaling
pathways within host cells, and facilitate the discovery of effective drugs for
COVID-19 treatment.
| [
{
"created": "Mon, 1 Jun 2020 19:56:18 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Jun 2020 02:18:22 GMT",
"version": "v2"
}
] | 2020-07-01 | [
[
"Li",
"Fuhai",
""
],
[
"Michelson",
"Andrew P.",
""
],
[
"Foraker",
"Randi",
""
],
[
"Zhan",
"Ming",
""
],
[
"Payne",
"Philip R. O.",
""
]
] | The Coronavirus Disease 2019 (COVID-19) pandemic has infected over 10 million people globally with a relatively high mortality rate. There are many therapeutics undergoing clinical trials, but there is no effective vaccine or therapy for treatment thus far. After infection by the Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2), molecular signaling of host cells plays critical roles during the life cycle of SARS-CoV-2. Thus, it is significant to identify the involved molecular signaling pathways within the host cells, and drugs targeting these molecular signaling pathways could be potentially effective for COVID-19 treatment. In this study, we aimed to identify these potential molecular signaling pathways, and repurpose existing drugs as a potentially effective treatment of COVID-19 to facilitate the therapeutic discovery, based on the transcriptional response of host cells. We first identified dysfunctional signaling pathways associated with the infection caused by SARS-CoV-2 in human lung epithelial cells through analysis of the altered gene expression profiles. In addition to the signaling pathway analysis, the activated gene ontologies (GOs) and super gene ontologies were identified. Signaling pathways and GOs such as MAPK, JNK, STAT, ERK, JAK-STAT, IRF7-NFkB signaling, and MYD88/CXCR6 immune signaling were particularly identified. Based on the identified signaling pathways and GOs, a set of potentially effective drugs were repurposed by integrating the drug-target and reverse gene expression data resources. Dexamethasone was top-ranked in the prediction, which was the first reported drug to be able to significantly reduce the death rate of COVID-19 patients receiving respiratory support. The results can be helpful to understand the associated molecular signaling pathways within host cells, and facilitate the discovery of effective drugs for COVID-19 treatment.
1908.05307 | David Tourigny | David S. Tourigny | Cooperative metabolic resource allocation in spatially-structured
systems | 26 pages including 4 figures, appendix, and references: v3 revisions
and expansion in line with referee comments | J. Math. Biol. 2021; 82, 5 | 10.1007/s00285-021-01558-6 | null | q-bio.MN math.DS physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Natural selection has shaped the evolution of cells and multi-cellular
organisms such that social cooperation can often be preferred over an
individualistic approach to metabolic regulation. This paper extends a
framework for dynamic metabolic resource allocation based on the maximum
entropy principle to spatiotemporal models of metabolism with cooperation. Much
like the maximum entropy principle encapsulates `bet-hedging' behaviour
displayed by organisms dealing with future uncertainty in a fluctuating
environment, its cooperative extension describes how individuals adapt their
metabolic resource allocation strategy to further accommodate limited knowledge
about the welfare of others within a community. The resulting theory explains
why local regulation of metabolic cross-feeding can fulfil a community-wide
metabolic objective if individuals take into consideration an ensemble measure
of total population performance as the only form of global information. The
latter is likely supplied by quorum sensing in microbial systems or signalling
molecules such as hormones in multi-cellular eukaryotic organisms.
| [
{
"created": "Wed, 14 Aug 2019 18:51:00 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Dec 2019 21:27:19 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Jun 2020 22:21:10 GMT",
"version": "v3"
}
] | 2021-01-28 | [
[
"Tourigny",
"David S.",
""
]
] | Natural selection has shaped the evolution of cells and multi-cellular organisms such that social cooperation can often be preferred over an individualistic approach to metabolic regulation. This paper extends a framework for dynamic metabolic resource allocation based on the maximum entropy principle to spatiotemporal models of metabolism with cooperation. Much like the maximum entropy principle encapsulates `bet-hedging' behaviour displayed by organisms dealing with future uncertainty in a fluctuating environment, its cooperative extension describes how individuals adapt their metabolic resource allocation strategy to further accommodate limited knowledge about the welfare of others within a community. The resulting theory explains why local regulation of metabolic cross-feeding can fulfil a community-wide metabolic objective if individuals take into consideration an ensemble measure of total population performance as the only form of global information. The latter is likely supplied by quorum sensing in microbial systems or signalling molecules such as hormones in multi-cellular eukaryotic organisms. |
2102.02908 | Juan Antonio Arias L\'opez | Juan A. Arias, Carmen Cadarso-Su\'arez, Pablo Aguiar-Fern\'andez | Simultaneous Confidence Corridors for neuroimaging data analysis:
applications to Alzheimer's Disease diagnosis | Working Draft | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Alzheimer's disease (AD) is a chronic neurodegenerative condition responsible
for most cases of dementia and considered as one of the greatest challenges for
neuroscience in this century. Early AD signs are usually mistaken for normal
age-related cognitive dysfunctions, thus patients usually start their treatment
in advanced AD stages, when its benefits are severely limited. AD has no known
cure; as such, hope lies in early diagnosis, which usually depends on
neuroimaging techniques such as Positron Emission Tomography (PET). PET data is
then analyzed with Statistical Parametric Mapping (SPM) software, which uses
mass univariate statistical analysis, inevitably incurring in errors derived
from this multiple testing approach. Recently, Wang et al. (2020) formulated an
alternative: applying functional data analysis (FDA), a relatively new branch
of statistics, to calculate mean function and simultaneous confidence corridors
(SCCs) for the difference between two groups' PET values. Here we test this
approach with a practical application for AD diagnosis, estimating mean
functions and SCCs for the difference between AD and control group's PET
activity and locating regions where this difference falls outside estimated
SCCs, indicating differences in brain activity attributable to AD-derived
neural loss. Our results are consistent with previous literature on AD
pathology and suggest that this FDA approach is more resilient to reductions in
sample size and less dependent on ad hoc selection of an {\alpha} level than
its counterpart, suggesting that this novel technique is a promising avenue for
research in the field of medical imaging.
| [
{
"created": "Fri, 22 Jan 2021 13:22:22 GMT",
"version": "v1"
}
] | 2021-02-08 | [
[
"Arias",
"Juan A.",
""
],
[
"Cadarso-Suárez",
"Carmen",
""
],
[
"Aguiar-Fernández",
"Pablo",
""
]
] | Alzheimer's disease (AD) is a chronic neurodegenerative condition responsible for most cases of dementia and considered as one of the greatest challenges for neuroscience in this century. Early AD signs are usually mistaken for normal age-related cognitive dysfunctions, thus patients usually start their treatment in advanced AD stages, when its benefits are severely limited. AD has no known cure; as such, hope lies in early diagnosis, which usually depends on neuroimaging techniques such as Positron Emission Tomography (PET). PET data is then analyzed with Statistical Parametric Mapping (SPM) software, which uses mass univariate statistical analysis, inevitably incurring in errors derived from this multiple testing approach. Recently, Wang et al. (2020) formulated an alternative: applying functional data analysis (FDA), a relatively new branch of statistics, to calculate mean function and simultaneous confidence corridors (SCCs) for the difference between two groups' PET values. Here we test this approach with a practical application for AD diagnosis, estimating mean functions and SCCs for the difference between AD and control group's PET activity and locating regions where this difference falls outside estimated SCCs, indicating differences in brain activity attributable to AD-derived neural loss. Our results are consistent with previous literature on AD pathology and suggest that this FDA approach is more resilient to reductions in sample size and less dependent on ad hoc selection of an {\alpha} level than its counterpart, suggesting that this novel technique is a promising avenue for research in the field of medical imaging.
0712.4381 | William Bialek | William Bialek, Rob R. de Ruyter van Steveninck and Naftali Tishby | Efficient representation as a design principle for neural coding and
computation | Based on a presentation at the International Symposium on Information
Theory 2006 | null | null | null | q-bio.NC | null | Does the brain construct an efficient representation of the sensory world? We
review progress on this question, focusing on a series of experiments in the
last decade which use fly vision as a model system in which theory and
experiment can confront each other. Although the idea of efficient
representation has been productive, clearly it is incomplete since it doesn't
tell us which bits of sensory information are most valuable to the organism. We
suggest that an organism which maximizes the (biologically meaningful) adaptive
value of its actions given fixed resources should have internal representations
of the outside world that are optimal in a very specific information theoretic
sense: they maximize the information about the future of sensory inputs at a
fixed value of the information about their past. This principle contains as
special cases computations which the brain seems to carry out, and it should be
possible to test this optimization directly. We return to the fly visual system
and report the results of preliminary experiments that are in encouraging
agreement with theory.
| [
{
"created": "Fri, 28 Dec 2007 17:46:41 GMT",
"version": "v1"
}
] | 2007-12-31 | [
[
"Bialek",
"William",
""
],
[
"van Steveninck",
"Rob R. de Ruyter",
""
],
[
"Tishby",
"Naftali",
""
]
] | Does the brain construct an efficient representation of the sensory world? We review progress on this question, focusing on a series of experiments in the last decade which use fly vision as a model system in which theory and experiment can confront each other. Although the idea of efficient representation has been productive, clearly it is incomplete since it doesn't tell us which bits of sensory information are most valuable to the organism. We suggest that an organism which maximizes the (biologically meaningful) adaptive value of its actions given fixed resources should have internal representations of the outside world that are optimal in a very specific information theoretic sense: they maximize the information about the future of sensory inputs at a fixed value of the information about their past. This principle contains as special cases computations which the brain seems to carry out, and it should be possible to test this optimization directly. We return to the fly visual system and report the results of preliminary experiments that are in encouraging agreement with theory. |
q-bio/0701033 | Wei-Rong Zhong | Wei-Rong Zhong, Yuan-Zhi Shao, Zhen-Hui He, Meng-Jie Bie, Dan Huang | Noise Correlation Induced Synchronization in a Mutualism Ecosystem | 6 pages, 4 figures | null | null | null | q-bio.OT | null | Understanding the cause of the synchronization of population evolution is an
important issue for ecological improvement. Here we present a
Lotka-Volterra-type model driven by two correlated environmental noises and
show, via theoretical analysis and direct simulation, that noise correlation
can induce a synchronization of the mutualists. The time series of mutual
species exhibit a chaotic-like fluctuation, which is independent of the noise
correlation, however, the chaotic fluctuation of mutual species ratio decreases
with the noise correlation. A quantitative parameter defined for characterizing
chaotic fluctuation provides a good approach to measure when the complete
synchronization happens.
| [
{
"created": "Sun, 21 Jan 2007 11:43:30 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Zhong",
"Wei-Rong",
""
],
[
"Shao",
"Yuan-Zhi",
""
],
[
"He",
"Zhen-Hui",
""
],
[
"Bie",
"Meng-Jie",
""
],
[
"Huang",
"Dan",
""
]
] | Understanding the cause of the synchronization of population evolution is an important issue for ecological improvement. Here we present a Lotka-Volterra-type model driven by two correlated environmental noises and show, via theoretical analysis and direct simulation, that noise correlation can induce a synchronization of the mutualists. The time series of mutual species exhibit a chaotic-like fluctuation, which is independent of the noise correlation, however, the chaotic fluctuation of mutual species ratio decreases with the noise correlation. A quantitative parameter defined for characterizing chaotic fluctuation provides a good approach to measure when the complete synchronization happens.
1609.06743 | Dima Kozakov | Dmitry Padhorny, Andrey Kazennov, Brandon S. Zerbe, Kathryn Porter,
Bing Xia, Scott E. Mottarella, Yaroslav Kholodov, David W. Ritchie, Sandor
Vajda, Dima Kozakov | Protein-protein docking by generalized Fourier transforms on 5D
rotational manifolds | null | Proc Natl Acad Sci U S A. 2016 Jul 26;113(30):E4286-93 | 10.1073/pnas.1603929113 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Energy evaluation using fast Fourier transforms enables sampling billions of
putative complex structures and hence revolutionized rigid protein-protein
docking. However, in current methods efficient acceleration is achieved only in
either the translational or the rotational subspace. Developing an efficient
and accurate docking method that expands FFT based sampling to 5 rotational
coordinates is an extensively studied but still unsolved problem. The algorithm
presented here retains the accuracy of earlier methods but yields at least
tenfold speedup. The improvement is due to two innovations. First, the search
space is treated as the product manifold $\mathbf{SO(3)x(SO(3)\setminus S^1)}$,
where $\mathbf{SO(3)}$ is the rotation group representing the space of the
rotating ligand, and $\mathbf{(SO(3)\setminus S^1)}$ is the space spanned by
the two Euler angles that define the orientation of the vector from the center
of the fixed receptor toward the center of the ligand. This representation
enables the use of efficient FFT methods developed for $\mathbf{SO(3)}$.
Second, we select the centers of highly populated clusters of docked
structures, rather than the lowest energy conformations, as predictions of the
complex, and hence there is no need for very high accuracy in energy
evaluation. Therefore it is sufficient to use a limited number of spherical
basis functions in the Fourier space, which increases the efficiency of
sampling while retaining the accuracy of docking results. A major advantage of
the method is that, in contrast to classical approaches, increasing the number
of correlation function terms is computationally inexpensive, which enables
using complex energy functions for scoring.
| [
{
"created": "Wed, 21 Sep 2016 20:42:11 GMT",
"version": "v1"
}
] | 2016-09-23 | [
[
"Padhorny",
"Dmitry",
""
],
[
"Kazennov",
"Andrey",
""
],
[
"Zerbe",
"Brandon S.",
""
],
[
"Porter",
"Kathryn",
""
],
[
"Xia",
"Bing",
""
],
[
"Mortadella",
"Scott E.",
""
],
[
"Kholodov",
"Yaroslav",
""
],
[
"Ritchie",
"David W.",
""
],
[
"Vajda",
"Sandor",
""
],
[
"Kozakov",
"Dima",
""
]
] | Energy evaluation using fast Fourier transforms enables sampling billions of putative complex structures and hence revolutionized rigid protein-protein docking. However, in current methods efficient acceleration is achieved only in either the translational or the rotational subspace. Developing an efficient and accurate docking method that expands FFT based sampling to 5 rotational coordinates is an extensively studied but still unsolved problem. The algorithm presented here retains the accuracy of earlier methods but yields at least tenfold speedup. The improvement is due to two innovations. First, the search space is treated as the product manifold $\mathbf{SO(3)x(SO(3)\setminus S^1)}$, where $\mathbf{SO(3)}$ is the rotation group representing the space of the rotating ligand, and $\mathbf{(SO(3)\setminus S^1)}$ is the space spanned by the two Euler angles that define the orientation of the vector from the center of the fixed receptor toward the center of the ligand. This representation enables the use of efficient FFT methods developed for $\mathbf{SO(3)}$. Second, we select the centers of highly populated clusters of docked structures, rather than the lowest energy conformations, as predictions of the complex, and hence there is no need for very high accuracy in energy evaluation. Therefore it is sufficient to use a limited number of spherical basis functions in the Fourier space, which increases the efficiency of sampling while retaining the accuracy of docking results. A major advantage of the method is that, in contrast to classical approaches, increasing the number of correlation function terms is computationally inexpensive, which enables using complex energy functions for scoring. |
q-bio/0406003 | Markus Porto | Ugo Bastolla, Markus Porto, H. Eduardo Roman, and Michele Vendruscolo | The principal eigenvector of contact matrices and hydrophobicity
profiles in proteins | 13 pages, 3 figures, 2 tables | Proteins 58, 22-30 (2005) | null | null | q-bio.BM | null | With the aim to study the relationship between protein sequences and their
native structures, we adopt vectorial representations for both sequence and
structure. The structural representation is based on the Principal Eigenvector
of the fold's contact matrix (PE). As recently shown, the latter encodes
sufficient information for reconstructing the whole contact matrix. The
sequence is represented through a Hydrophobicity Profile (HP), using a
generalized hydrophobicity scale that we obtain from the principal eigenvector
of a residue-residue interaction matrix and denote it as interactivity scale.
Using this novel scale, we define the optimal HP of a protein fold, and
predict, by means of stability arguments, that it is strongly correlated with
the PE of the fold's contact matrix. This prediction is confirmed through an
evolutionary analysis, which shows that the PE correlates with the HP of each
individual sequence adopting the same fold and, even more strongly, with the
average HP of this set of sequences. Thus, protein sequences evolve in such a
way that their average HP is close to the optimal one, implying that neutral
evolution can be viewed as a kind of motion in sequence space around the
optimal HP. Our results indicate that the correlation coefficient between
N-dimensional vectors constitutes a natural metric in the vectorial space in
which we represent both protein sequences and protein structures, which we call
Vectorial Protein Space. In this way, we define a unified framework for
sequence to sequence, sequence to structure, and structure to structure
alignments. We show that the interactivity scale is nearly optimal both for the
comparison of sequences with sequences and sequences with structures.
| [
{
"created": "Tue, 1 Jun 2004 15:23:23 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Bastolla",
"Ugo",
""
],
[
"Porto",
"Markus",
""
],
[
"Roman",
"H. Eduardo",
""
],
[
"Vendruscolo",
"Michele",
""
]
] | With the aim to study the relationship between protein sequences and their native structures, we adopt vectorial representations for both sequence and structure. The structural representation is based on the Principal Eigenvector of the fold's contact matrix (PE). As recently shown, the latter encodes sufficient information for reconstructing the whole contact matrix. The sequence is represented through a Hydrophobicity Profile (HP), using a generalized hydrophobicity scale that we obtain from the principal eigenvector of a residue-residue interaction matrix and denote it as interactivity scale. Using this novel scale, we define the optimal HP of a protein fold, and predict, by means of stability arguments, that it is strongly correlated with the PE of the fold's contact matrix. This prediction is confirmed through an evolutionary analysis, which shows that the PE correlates with the HP of each individual sequence adopting the same fold and, even more strongly, with the average HP of this set of sequences. Thus, protein sequences evolve in such a way that their average HP is close to the optimal one, implying that neutral evolution can be viewed as a kind of motion in sequence space around the optimal HP. Our results indicate that the correlation coefficient between N-dimensional vectors constitutes a natural metric in the vectorial space in which we represent both protein sequences and protein structures, which we call Vectorial Protein Space. In this way, we define a unified framework for sequence to sequence, sequence to structure, and structure to structure alignments. We show that the interactivity scale is nearly optimal both for the comparison of sequences with sequences and sequences with structures. |
2004.02601 | Lenka Pribylova | Lenka Pribylova, Veronika Hajnova | SEIAR model with asymptomatic cohort and consequences to efficiency of
quarantine government measures in COVID-19 epidemic | 9 pages, 7 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a compartmental SEIAR model of epidemic spread as a generalization
of the SEIR model. We believe that the asymptomatic infectious cohort is an
omitted part of the understanding of the epidemic dynamics of the COVID-19 disease.
We introduce and derive the basic reproduction number as the weighted
arithmetic mean of the basic reproduction numbers of the symptomatic and
asymptomatic cohorts. Since the asymptomatic subjects are not detected,
they can spread the disease much longer, and this increases the COVID-19 $R_0$
up to around 9. We show that epidemic outbreaks in various European
countries correspond to the simulations with commonly used parameters based on
clinical characteristics of the disease COVID-19, but $R_0$ is around three
times bigger if the asymptomatic cohort is taken into account. Many voices in
the academic world are drawing attention to the asymptomatic group of
infectious subjects at present. We are convinced that the asymptomatic cohort
plays a crucial role in the spread of the COVID-19 disease, and it has to be
accounted for when designing government measures.
| [
{
"created": "Mon, 6 Apr 2020 12:30:01 GMT",
"version": "v1"
}
] | 2020-04-07 | [
[
"Pribylova",
"Lenka",
""
],
[
"Hajnova",
"Veronika",
""
]
] | We present a compartmental SEIAR model of epidemic spread as a generalization of the SEIR model. We believe that the asymptomatic infectious cohort is an omitted part of the understanding of the epidemic dynamics of the COVID-19 disease. We introduce and derive the basic reproduction number as the weighted arithmetic mean of the basic reproduction numbers of the symptomatic and asymptomatic cohorts. Since the asymptomatic subjects are not detected, they can spread the disease much longer, and this increases the COVID-19 $R_0$ up to around 9. We show that epidemic outbreaks in various European countries correspond to the simulations with commonly used parameters based on clinical characteristics of the disease COVID-19, but $R_0$ is around three times bigger if the asymptomatic cohort is taken into account. Many voices in the academic world are drawing attention to the asymptomatic group of infectious subjects at present. We are convinced that the asymptomatic cohort plays a crucial role in the spread of the COVID-19 disease, and it has to be accounted for when designing government measures.
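The SEIAR record above defines the basic reproduction number as a weighted arithmetic mean of the two cohorts' reproduction numbers. A minimal sketch of that combination — the cohort values and asymptomatic fraction below are illustrative assumptions, not the paper's estimates:

```python
def seiar_r0(r0_symptomatic: float, r0_asymptomatic: float,
             asymptomatic_fraction: float) -> float:
    """Basic reproduction number as the weighted arithmetic mean of the
    cohort-specific reproduction numbers (the two weights sum to one)."""
    p = asymptomatic_fraction
    return p * r0_asymptomatic + (1.0 - p) * r0_symptomatic

# Undetected asymptomatic carriers spread for longer, so their cohort
# R0 can dominate: with two thirds asymptomatic, the combined R0 lands
# near 9, roughly three times the symptomatic-only value.
print(seiar_r0(r0_symptomatic=3.0, r0_asymptomatic=12.0,
               asymptomatic_fraction=2 / 3))
```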
2407.07538 | Su Yang | Su Yang, Weiqi Chu, Panayotis Kevrekidis | An epidemical model with nonlocal spatial infections | 15 pages, 9 figures | null | null | null | q-bio.QM nlin.PS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The SIR model is one of the most prototypical compartmental models in
epidemiology. Generalizing this ordinary differential equation (ODE) framework
into a spatially distributed partial differential equation (PDE) model is a
considerable challenge. In the present work, we extend a recently proposed
model based on nearest-neighbor spatial interactions by one of the authors
in~\cite{vaziry2022modelling} towards a nonlocal, nonlinear PDE variant of the
SIR prototype. We then seek to develop a set of tools that provide insights for
this PDE framework. Stationary states and their stability analysis offer a
perspective on the early spatial growth of the infection. Evolutionary
computational dynamics enable visualization of the spatio-temporal progression
of infection and recovery, allowing for an appreciation of the effect of
varying parameters of the nonlocal kernel, such as, e.g., its width parameter.
These features are explored in both one- and two-dimensional settings. At a
model-reduction level, we develop a sequence of interpretable moment-based
diagnostics to observe how these reflect the total number of infections, the
epidemic's epicenter, and its spread. Finally, we propose a data-driven
methodology based on the sparse identification of nonlinear dynamics (SINDy) to
identify approximate closed-form dynamical equations for such quantities. These
approaches may pave the way for further spatio-temporal studies, enabling the
quantification of epidemics.
| [
{
"created": "Wed, 10 Jul 2024 10:58:30 GMT",
"version": "v1"
}
] | 2024-07-11 | [
[
"Yang",
"Su",
""
],
[
"Chu",
"Weiqi",
""
],
[
"Kevrekidis",
"Panayotis",
""
]
] | The SIR model is one of the most prototypical compartmental models in epidemiology. Generalizing this ordinary differential equation (ODE) framework into a spatially distributed partial differential equation (PDE) model is a considerable challenge. In the present work, we extend a recently proposed model based on nearest-neighbor spatial interactions by one of the authors in~\cite{vaziry2022modelling} towards a nonlocal, nonlinear PDE variant of the SIR prototype. We then seek to develop a set of tools that provide insights for this PDE framework. Stationary states and their stability analysis offer a perspective on the early spatial growth of the infection. Evolutionary computational dynamics enable visualization of the spatio-temporal progression of infection and recovery, allowing for an appreciation of the effect of varying parameters of the nonlocal kernel, such as, e.g., its width parameter. These features are explored in both one- and two-dimensional settings. At a model-reduction level, we develop a sequence of interpretable moment-based diagnostics to observe how these reflect the total number of infections, the epidemic's epicenter, and its spread. Finally, we propose a data-driven methodology based on the sparse identification of nonlinear dynamics (SINDy) to identify approximate closed-form dynamical equations for such quantities. These approaches may pave the way for further spatio-temporal studies, enabling the quantification of epidemics. |
0807.3608 | Nicolas Vuillerme | Nicolas Pinsault (TIMC), Nicolas Vuillerme (TIMC) | Differential postural effects of plantar-flexor muscles fatigue under
normal, altered and improved vestibular and neck somatosensory conditions | null | Experimental Brain Research (2008) 1-31 | 10.1007/s00221-008-1500-z | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of the present study was to assess the effects of plantar-flexor
muscles fatigue on postural control during quiet standing under normal, altered
and improved vestibular and neck somatosensory conditions. To address this
objective, young male university students were asked to stand upright as still
as possible with their eyes closed in two conditions of No Fatigue and Fatigue
of the plantar-flexor muscles. In Experiment 1 (n=15), the postural task was
executed in two Neutral head and Head tilted backward postures, recognized to
degrade vestibular and neck somatosensory information. In Experiment 2 (n=15),
the postural task was executed in two conditions of No tactile and Tactile
stimulation of the neck provided by the application of strips of adhesive
bandage to the skin over and around the neck. Centre of foot pressure
displacements were recorded using a force platform. Results showed that (1) the
Fatigue condition yielded increased CoP displacements relative to the No
Fatigue condition (Experiment 1 and Experiment 2), (2) this destabilizing
effect was more accentuated in the Head tilted backward posture than Neutral
head posture (Experiment 1) and (3) this destabilizing effect was less
accentuated in the condition of Tactile stimulation than that of No tactile
stimulation of the neck (Experiment 2). In the context of the multisensory
control of balance, these results suggest an increased reliance on vestibular
and neck somatosensory information for controlling posture during quiet
standing in condition of altered ankle neuromuscular function.
| [
{
"created": "Wed, 23 Jul 2008 06:30:09 GMT",
"version": "v1"
}
] | 2008-07-24 | [
[
"Pinsault",
"Nicolas",
"",
"TIMC"
],
[
"Vuillerme",
"Nicolas",
"",
"TIMC"
]
] | The aim of the present study was to assess the effects of plantar-flexor muscles fatigue on postural control during quiet standing under normal, altered and improved vestibular and neck somatosensory conditions. To address this objective, young male university students were asked to stand upright as still as possible with their eyes closed in two conditions of No Fatigue and Fatigue of the plantar-flexor muscles. In Experiment 1 (n=15), the postural task was executed in two Neutral head and Head tilted backward postures, recognized to degrade vestibular and neck somatosensory information. In Experiment 2 (n=15), the postural task was executed in two conditions of No tactile and Tactile stimulation of the neck provided by the application of strips of adhesive bandage to the skin over and around the neck. Centre of foot pressure displacements were recorded using a force platform. Results showed that (1) the Fatigue condition yielded increased CoP displacements relative to the No Fatigue condition (Experiment 1 and Experiment 2), (2) this destabilizing effect was more accentuated in the Head tilted backward posture than Neutral head posture (Experiment 1) and (3) this destabilizing effect was less accentuated in the condition of Tactile stimulation than that of No tactile stimulation of the neck (Experiment 2). In the context of the multisensory control of balance, these results suggest an increased reliance on vestibular and neck somatosensory information for controlling posture during quiet standing in condition of altered ankle neuromuscular function. |
2007.04743 | James Unwin | Yunseo Choi and James Unwin | Racial Impact on Infections and Deaths due to COVID-19 in New York City | 6 pages, 7 figures, 1 table | null | null | null | q-bio.PE physics.soc-ph stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Redlining is the discriminatory practice whereby institutions avoided
investment in certain neighborhoods due to their demographics. Here we explore
the lasting impacts of redlining on the spread of COVID-19 in New York City
(NYC). Using data available through the Home Mortgage Disclosure Act, we
construct a redlining index for each NYC census tract via a multi-level
logistical model. We compare this redlining index with the COVID-19 statistics
for each NYC Zip Code Tabulation Area. Accurate mappings of the pandemic would
aid the identification of the most vulnerable areas and permit the most
effective allocation of medical resources, while reducing ethnic health
disparities.
| [
{
"created": "Thu, 9 Jul 2020 12:27:59 GMT",
"version": "v1"
}
] | 2020-07-10 | [
[
"Choi",
"Yunseo",
""
],
[
"Unwin",
"James",
""
]
] | Redlining is the discriminatory practice whereby institutions avoided investment in certain neighborhoods due to their demographics. Here we explore the lasting impacts of redlining on the spread of COVID-19 in New York City (NYC). Using data available through the Home Mortgage Disclosure Act, we construct a redlining index for each NYC census tract via a multi-level logistical model. We compare this redlining index with the COVID-19 statistics for each NYC Zip Code Tabulation Area. Accurate mappings of the pandemic would aid the identification of the most vulnerable areas and permit the most effective allocation of medical resources, while reducing ethnic health disparities. |
1910.11370 | Caterina Strambio-De-Castillia Ph.D. | Maximiliaan Huisman, Mathias Hammer, Alex Rigano, Ulrike Boehm, James
J. Chambers, Nathalie Gaudreault, Alison J. North, Jaime A. Pimentel, Damir
Sudar, Peter Bajcsy, Claire M. Brown, Alexander D. Corbett, Orestis Faklaris,
Judith Lacoste, Alex Laude, Glyn Nelson, Roland Nitschke, David Grunwald, and
Caterina Strambio-De-Castillia | A perspective on Microscopy Metadata: data provenance and quality
control | null | null | null | null | q-bio.QM cs.DB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The application of microscopy in biomedical research has come a long way
since Antonie van Leeuwenhoek discovered unicellular organisms. Countless
innovations have positioned light microscopy as a cornerstone of modern biology
and a method of choice for connecting omics datasets to their biological and
clinical correlates. Still, regardless of how convincing published imaging data
looks, it does not always convey meaningful information about the conditions in
which it was acquired, processed, and analyzed. Adequate record-keeping,
reporting, and quality control are therefore essential to ensure experimental
rigor and data fidelity, allow experiments to be reproducibly repeated, and
promote the proper evaluation, interpretation, comparison, and re-use. To this
end, microscopy images should be accompanied by complete descriptions detailing
experimental procedures, biological samples, microscope hardware
specifications, image acquisition parameters, and image analysis procedures, as
well as metrics accounting for instrument performance and calibration. However,
universal, community-accepted Microscopy Metadata standards and reporting
specifications that would result in Findable Accessible Interoperable and
Reproducible (FAIR) microscopy data have not yet been established. To
understand this shortcoming and to propose a way forward, here we provide an
overview of the nature of microscopy metadata and its importance for fostering
data quality, reproducibility, scientific rigor, and sharing value in light
microscopy. The proposal for tiered Microscopy Metadata Specifications that
extend the OME Data Model put forth by the 4D Nucleome Initiative and by
Bioimaging North America [1-3] as well as a suite of three complementary and
interoperable tools are being developed to facilitate the process of image data
documentation and are presented in related manuscripts [4-6].
| [
{
"created": "Thu, 24 Oct 2019 18:33:17 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Jan 2020 21:22:33 GMT",
"version": "v2"
},
{
"created": "Thu, 14 May 2020 23:35:45 GMT",
"version": "v3"
},
{
"created": "Mon, 26 Apr 2021 22:56:39 GMT",
"version": "v4"
},
{
"created": "Thu, 29 Apr 2021 19:37:00 GMT",
"version": "v5"
},
{
"created": "Tue, 1 Jun 2021 03:36:07 GMT",
"version": "v6"
}
] | 2021-06-02 | [
[
"Huisman",
"Maximiliaan",
""
],
[
"Hammer",
"Mathias",
""
],
[
"Rigano",
"Alex",
""
],
[
"Boehm",
"Ulrike",
""
],
[
"Chambers",
"James J.",
""
],
[
"Gaudreault",
"Nathalie",
""
],
[
"North",
"Alison J.",
""
],
[
"Pimentel",
"Jaime A.",
""
],
[
"Sudar",
"Damir",
""
],
[
"Bajcsy",
"Peter",
""
],
[
"Brown",
"Claire M.",
""
],
[
"Corbett",
"Alexander D.",
""
],
[
"Faklaris",
"Orestis",
""
],
[
"Lacoste",
"Judith",
""
],
[
"Laude",
"Alex",
""
],
[
"Nelson",
"Glyn",
""
],
[
"Nitschke",
"Roland",
""
],
[
"Grunwald",
"David",
""
],
[
"Strambio-De-Castillia",
"Caterina",
""
]
] | The application of microscopy in biomedical research has come a long way since Antonie van Leeuwenhoek discovered unicellular organisms. Countless innovations have positioned light microscopy as a cornerstone of modern biology and a method of choice for connecting omics datasets to their biological and clinical correlates. Still, regardless of how convincing published imaging data looks, it does not always convey meaningful information about the conditions in which it was acquired, processed, and analyzed. Adequate record-keeping, reporting, and quality control are therefore essential to ensure experimental rigor and data fidelity, allow experiments to be reproducibly repeated, and promote the proper evaluation, interpretation, comparison, and re-use. To this end, microscopy images should be accompanied by complete descriptions detailing experimental procedures, biological samples, microscope hardware specifications, image acquisition parameters, and image analysis procedures, as well as metrics accounting for instrument performance and calibration. However, universal, community-accepted Microscopy Metadata standards and reporting specifications that would result in Findable Accessible Interoperable and Reproducible (FAIR) microscopy data have not yet been established. To understand this shortcoming and to propose a way forward, here we provide an overview of the nature of microscopy metadata and its importance for fostering data quality, reproducibility, scientific rigor, and sharing value in light microscopy. The proposal for tiered Microscopy Metadata Specifications that extend the OME Data Model put forth by the 4D Nucleome Initiative and by Bioimaging North America [1-3] as well as a suite of three complementary and interoperable tools are being developed to facilitate the process of image data documentation and are presented in related manuscripts [4-6]. |
1302.5118 | Marco Di Stefano | Marco Di Stefano, Angelo Rosa, Vincenzo Belcastro, Diego di Bernardo,
Cristian Micheletti | Colocalization of coregulated genes: a steered molecular dynamics study
of human chromosome 19 | 24 pages, 11 figures, Accepted for publication in PLoS Computational
Biology | null | 10.1371/journal.pcbi.1003019 | null | q-bio.GN cond-mat.soft cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The connection between chromatin nuclear organization and gene activity is
vividly illustrated by the observation that transcriptional coregulation of
certain genes appears to be directly influenced by their spatial proximity.
This fact poses the more general question of whether it is at all feasible that
the numerous genes that are coregulated on a given chromosome, especially those
at large genomic distances, might become proximate inside the nucleus. This
problem is studied here using steered molecular dynamics simulations in order
to enforce the colocalization of thousands of knowledge-based gene sequences on
a model for the gene-rich human chromosome 19. Remarkably, it is found that
most (~80%) gene pairs can be brought simultaneously into contact. This is made
possible by the low degree of intra-chromosome entanglement and the large
number of cliques in the gene coregulatory network, that is the many groups of
genes that are all mutually coregulated. The constrained conformations for the
model chromosome 19 are further shown to be organised in spatial macrodomains
that are similar to those inferred from recent HiC measurements. The findings
indicate that gene coregulation and colocalization are largely compatible and
that this relationship can be exploited to draft the overall spatial
organization of the chromosome in vivo. The more general validity and
implications of these findings could be investigated by applying to other
eukaryotic chromosomes the general and transferable computational strategy
introduced here.
| [
{
"created": "Wed, 20 Feb 2013 21:05:36 GMT",
"version": "v1"
}
] | 2015-06-15 | [
[
"Di Stefano",
"Marco",
""
],
[
"Rosa",
"Angelo",
""
],
[
"Belcastro",
"Vincenzo",
""
],
[
"di Bernardo",
"Diego",
""
],
[
"Micheletti",
"Cristian",
""
]
] | The connection between chromatin nuclear organization and gene activity is vividly illustrated by the observation that transcriptional coregulation of certain genes appears to be directly influenced by their spatial proximity. This fact poses the more general question of whether it is at all feasible that the numerous genes that are coregulated on a given chromosome, especially those at large genomic distances, might become proximate inside the nucleus. This problem is studied here using steered molecular dynamics simulations in order to enforce the colocalization of thousands of knowledge-based gene sequences on a model for the gene-rich human chromosome 19. Remarkably, it is found that most (~80%) gene pairs can be brought simultaneously into contact. This is made possible by the low degree of intra-chromosome entanglement and the large number of cliques in the gene coregulatory network, that is the many groups of genes that are all mutually coregulated. The constrained conformations for the model chromosome 19 are further shown to be organised in spatial macrodomains that are similar to those inferred from recent HiC measurements. The findings indicate that gene coregulation and colocalization are largely compatible and that this relationship can be exploited to draft the overall spatial organization of the chromosome in vivo. The more general validity and implications of these findings could be investigated by applying to other eukaryotic chromosomes the general and transferable computational strategy introduced here. |
1905.03978 | Zihang Zeng | Zihang Zeng, Jiali Li, Nannan Zhang, Xueping Jiang, Yanping Gao, Liexi
Xu, Xingyu Liu, Jiarui Chen, Yuke Gao, Linzhi Han, Jiangbo Ren, Yan Gong,
Conghua Xie | Tumor Microenvironment-based Gene Signatures Divides Novel Immune and
Stromal Subgroup Classification of Lung Adenocarcinoma | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tumor microenvironment has complex effects on tumorigenesis and metastasis.
However, there is still a lack of comprehensive understanding of the
relationship among molecular and cellular characteristics in tumor
microenvironment, clinical prognosis and immunotherapy response. In this study,
the immune and stromal (non-immune) signatures of tumor microenvironment were
integrated to identify novel subgroups of lung adenocarcinoma by
eigendecomposition and extraction algorithms of bioinformatics and machine
learning, such as non-negative matrix factorization and multitask learning.
Tumors were classified into 4 groups according to the activation of immunity
and stroma by novel signatures. The 4 groups had different mutation landscape,
molecular, cellular characteristics and prognosis, which have been validated
in 6 independent data sets containing 1551 patients. The high-immune and
low-stromal activation group is linked to high immunocyte infiltration, high
immunocompetence, low fibroblasts, endothelial cells, collagen, laminin, tumor
mutation burden, and better overall survival. We developed a novel model based
on tumor microenvironment by integrating immune and stromal activation, namely
PMBT (prognostic model based on tumor microenvironment). The PMBT showed
value in predicting overall survival and immunotherapy responses.
| [
{
"created": "Fri, 10 May 2019 07:27:21 GMT",
"version": "v1"
}
] | 2019-05-13 | [
[
"Zeng",
"Zihang",
""
],
[
"Li",
"Jiali",
""
],
[
"Zhang",
"Nannan",
""
],
[
"Jiang",
"Xueping",
""
],
[
"Gao",
"Yanping",
""
],
[
"Xu",
"Liexi",
""
],
[
"Liu",
"Xingyu",
""
],
[
"Chen",
"Jiarui",
""
],
[
"Gao",
"Yuke",
""
],
[
"Han",
"Linzhi",
""
],
[
"Ren",
"Jiangbo",
""
],
[
"Gong",
"Yan",
""
],
[
"Xie",
"Conghua",
""
]
] | Tumor microenvironment has complex effects on tumorigenesis and metastasis. However, there is still a lack of comprehensive understanding of the relationship among molecular and cellular characteristics in tumor microenvironment, clinical prognosis and immunotherapy response. In this study, the immune and stromal (non-immune) signatures of tumor microenvironment were integrated to identify novel subgroups of lung adenocarcinoma by eigendecomposition and extraction algorithms of bioinformatics and machine learning, such as non-negative matrix factorization and multitask learning. Tumors were classified into 4 groups according to the activation of immunity and stroma by novel signatures. The 4 groups had different mutation landscape, molecular, cellular characteristics and prognosis, which have been validated in 6 independent data sets containing 1551 patients. The high-immune and low-stromal activation group is linked to high immunocyte infiltration, high immunocompetence, low fibroblasts, endothelial cells, collagen, laminin, tumor mutation burden, and better overall survival. We developed a novel model based on tumor microenvironment by integrating immune and stromal activation, namely PMBT (prognostic model based on tumor microenvironment). The PMBT showed value in predicting overall survival and immunotherapy responses.
2102.06592 | Ben Baker | Ben Baker, Benjamin Lansdell, Konrad Kording | A Philosophical Understanding of Representation for Neuroscience | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Neuroscientists often describe neural activity as a representation of
something, or claim to have found evidence for a neural representation. But
what do these statements mean? The reasons to call some neural activity a
representation and the assumptions that come with this term are not generally
made clear from its common uses in neuroscience. Representation is a central
concept in philosophy of mind, with a rich history going back to the ancient
period. In order to clarify its usage in neuroscience, here we advance a link
between the connotations of this term across these disciplines. We draw on a
broad range of discourse in philosophy to distinguish three key aspects of
representation: correspondence, functional role, and teleology. We argue that
each of these aspects are implied by the explanatory role the term plays in
neuroscience. However, evidence related to all three aspects is rarely
presented or discussed in the course of individual studies that aim to identify
representations. Overlooking the significance of all three aspects hinders
communication in neuroscience, as it obscures the limitations of experimental
paradigms and conceals gaps in our understanding of the phenomena of primary
interest. Working from this three-part view, we discuss how to move toward
clearer communication about representations in the brain.
| [
{
"created": "Fri, 12 Feb 2021 16:01:24 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Apr 2021 13:03:55 GMT",
"version": "v2"
}
] | 2021-04-29 | [
[
"Baker",
"Ben",
""
],
[
"Lansdell",
"Benjamin",
""
],
[
"Kording",
"Konrad",
""
]
] | Neuroscientists often describe neural activity as a representation of something, or claim to have found evidence for a neural representation. But what do these statements mean? The reasons to call some neural activity a representation and the assumptions that come with this term are not generally made clear from its common uses in neuroscience. Representation is a central concept in philosophy of mind, with a rich history going back to the ancient period. In order to clarify its usage in neuroscience, here we advance a link between the connotations of this term across these disciplines. We draw on a broad range of discourse in philosophy to distinguish three key aspects of representation: correspondence, functional role, and teleology. We argue that each of these aspects are implied by the explanatory role the term plays in neuroscience. However, evidence related to all three aspects is rarely presented or discussed in the course of individual studies that aim to identify representations. Overlooking the significance of all three aspects hinders communication in neuroscience, as it obscures the limitations of experimental paradigms and conceals gaps in our understanding of the phenomena of primary interest. Working from this three-part view, we discuss how to move toward clearer communication about representations in the brain. |
1403.2197 | Markus Pagitz Dr | Markus Pagitz and Remco I. Leine | Shape Optimization of Compliant Pressure Actuated Cellular Structures | 15 pages, 13 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biologically inspired pressure actuated cellular structures can alter their
shape through pressure variations. Previous work introduced a computational
framework for pressure actuated cellular structures which was limited to two
cell rows and central cell corner hinges. This article rigorously extends these
results by taking into account an arbitrary number of cell rows, a more
complicated cell kinematics that includes hinge eccentricities and varying side
lengths as well as rotational and axial cell side springs. The nonlinear
effects of arbitrary cell deformations are fully considered. Furthermore, the
optimization is considerably improved by using a second-order approach. The
presented framework enables the design of compliant pressure actuated cellular
structures that can change their form from one shape to another within a set of
one-dimensional C1 continuous functions.
| [
{
"created": "Mon, 10 Mar 2014 09:40:47 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Jun 2015 17:50:20 GMT",
"version": "v2"
},
{
"created": "Tue, 19 Jan 2016 08:47:50 GMT",
"version": "v3"
},
{
"created": "Fri, 24 Jun 2016 12:32:37 GMT",
"version": "v4"
},
{
"created": "Fri, 7 Apr 2017 08:24:00 GMT",
"version": "v5"
}
] | 2017-04-10 | [
[
"Pagitz",
"Markus",
""
],
[
"Leine",
"Remco I.",
""
]
] | Biologically inspired pressure actuated cellular structures can alter their shape through pressure variations. Previous work introduced a computational framework for pressure actuated cellular structures which was limited to two cell rows and central cell corner hinges. This article rigorously extends these results by taking into account an arbitrary number of cell rows, a more complicated cell kinematics that includes hinge eccentricities and varying side lengths as well as rotational and axial cell side springs. The nonlinear effects of arbitrary cell deformations are fully considered. Furthermore, the optimization is considerably improved by using a second-order approach. The presented framework enables the design of compliant pressure actuated cellular structures that can change their form from one shape to another within a set of one-dimensional C1 continuous functions. |
1707.08529 | Haroun Mansuar A | Haroun Mansuar | The Detection and Localization of Frost Protein in Drosophila | 10 pages, 3 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In response to cold stress, Drosophila melanogaster increases its expression
of Frost, a candidate gene involved in cold response. The direct role of Frost
in cold tolerance is yet to be determined, and its importance for survival in
cold environments has been questioned. In this study, I attempt to better
understand the molecular machinery of Frost by selecting for its protein in fly
lysate knowing only its RNA has been found. Detection of Frost expression and
subsequently studying its protein will hopefully lead to enhanced comprehension
of cold responses in insects, which is the long-term goal of this research. I
predict that Frost will be expressed in flies that undergo cold stress at 0°C
for 2 hours before recovering at 22°C for 3 hours before lysing. Two Western
blots were executed using fly lysate that underwent the above treatment, whose
antibodies were specific to Frost. No bands were seen in any of the lanes
containing treated samples in either of the immunoblots, indicating that Frost
was either not expressed or not detected in fly lysate.
| [
{
"created": "Wed, 19 Jul 2017 01:38:38 GMT",
"version": "v1"
}
] | 2017-07-27 | [
[
"Mansuar",
"Haroun",
""
]
] | In response to cold stress, Drosophila melanogaster increases its expression of Frost, a candidate gene involved in cold response. The direct role of Frost in cold tolerance is yet to be determined, and its importance for survival in cold environments has been questioned. In this study, I attempt to better understand the molecular machinery of Frost by selecting for its protein in fly lysate knowing only its RNA has been found. Detection of Frost expression and subsequently studying its protein will hopefully lead to enhanced comprehension of cold responses in insects, which is the long-term goal of this research. I predict that Frost will be expressed in flies that undergo cold stress at 0°C for 2 hours before recovering at 22°C for 3 hours before lysing. Two Western blots were executed using fly lysate that underwent the above treatment, whose antibodies were specific to Frost. No bands were seen in any of the lanes containing treated samples in either of the immunoblots, indicating that Frost was either not expressed or not detected in fly lysate.
2301.12892 | Claus Metzner | Claus Metzner, Marius E. Yamakou, Dennis Voelkl, Achim Schilling and
Patrick Krauss | Quantifying and maximizing the information flux in recurrent neural
networks | null | null | null | null | q-bio.NC cs.NE | http://creativecommons.org/licenses/by/4.0/ | Free-running Recurrent Neural Networks (RNNs), especially probabilistic
models, generate an ongoing information flux that can be quantified with the
mutual information $I\left[\vec{x}(t),\vec{x}(t\!+\!1)\right]$ between
subsequent system states $\vec{x}$. Although former studies have shown that
$I$ depends on the statistics of the network's connection weights, it is
unclear (1) how to maximize $I$ systematically and (2) how to quantify the flux
in large systems where computing the mutual information becomes intractable.
Here, we address these questions using Boltzmann machines as model systems. We
find that in networks with moderately strong connections, the mutual
information $I$ is approximately a monotonic transformation of the
root-mean-square averaged Pearson correlations between neuron-pairs, a quantity
that can be efficiently computed even in large systems. Furthermore,
evolutionary maximization of $I\left[\vec{x}(t),\vec{x}(t\!+\!1)\right]$
reveals a general design principle for the weight matrices enabling the
systematic construction of systems with a high spontaneous information flux.
Finally, we simultaneously maximize information flux and the mean period length
of cyclic attractors in the state space of these dynamical networks. Our
results are potentially useful for the construction of RNNs that serve as
short-time memories or pattern generators.
| [
{
"created": "Mon, 30 Jan 2023 13:52:39 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Oct 2023 15:49:24 GMT",
"version": "v2"
}
] | 2023-10-18 | [
[
"Metzner",
"Claus",
""
],
[
"Yamakou",
"Marius E.",
""
],
[
"Voelkl",
"Dennis",
""
],
[
"Schilling",
"Achim",
""
],
[
"Krauss",
"Patrick",
""
]
] | Free-running Recurrent Neural Networks (RNNs), especially probabilistic models, generate an ongoing information flux that can be quantified with the mutual information $I\left[\vec{x}(t),\vec{x}(t\!+\!1)\right]$ between subsequent system states $\vec{x}$. Although former studies have shown that $I$ depends on the statistics of the network's connection weights, it is unclear (1) how to maximize $I$ systematically and (2) how to quantify the flux in large systems where computing the mutual information becomes intractable. Here, we address these questions using Boltzmann machines as model systems. We find that in networks with moderately strong connections, the mutual information $I$ is approximately a monotonic transformation of the root-mean-square averaged Pearson correlations between neuron-pairs, a quantity that can be efficiently computed even in large systems. Furthermore, evolutionary maximization of $I\left[\vec{x}(t),\vec{x}(t\!+\!1)\right]$ reveals a general design principle for the weight matrices enabling the systematic construction of systems with a high spontaneous information flux. Finally, we simultaneously maximize information flux and the mean period length of cyclic attractors in the state space of these dynamical networks. Our results are potentially useful for the construction of RNNs that serve as short-time memories or pattern generators.
1212.1036 | Steffen Rulands | Steffen Rulands, Ben Kl\"under, Erwin Frey | Stability of localized wave fronts in bistable systems | 5 pages, 2 figures | Phys. Rev. Lett. 110, 038102 (2013) | 10.1103/PhysRevLett.110.038102 | null | q-bio.CB cond-mat.stat-mech nlin.PS physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Localized wave fronts are a fundamental feature of biological systems from
cell biology to ecology. Here, we study a broad class of bistable models
subject to self-activation, degradation and spatially inhomogeneous activating
agents. We determine the conditions under which wave-front localization is
possible and analyze the stability thereof with respect to extrinsic
perturbations and internal noise. It is found that stability is enhanced upon
regulating a positional signal and, surprisingly, also for a low degree of
binding cooperativity. We further show a contrasting impact of self-activation
on the stability against these two sources of destabilization.
| [
{
"created": "Wed, 5 Dec 2012 14:28:34 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Feb 2013 12:00:49 GMT",
"version": "v2"
}
] | 2014-05-27 | [
[
"Rulands",
"Steffen",
""
],
[
"Klünder",
"Ben",
""
],
[
"Frey",
"Erwin",
""
]
] | Localized wave fronts are a fundamental feature of biological systems from cell biology to ecology. Here, we study a broad class of bistable models subject to self-activation, degradation and spatially inhomogeneous activating agents. We determine the conditions under which wave-front localization is possible and analyze the stability thereof with respect to extrinsic perturbations and internal noise. It is found that stability is enhanced upon regulating a positional signal and, surprisingly, also for a low degree of binding cooperativity. We further show a contrasting impact of self-activation on the stability against these two sources of destabilization.
1711.10549 | Fenix Huang | Fenix W. Huang, Qijun He, Christopher Barrett and Christian M. Reidys | An efficient dual sampling algorithm with Hamming distance filtration | 8 pages 6 figures | null | null | null | q-bio.BM math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, a framework considering RNA sequences and their RNA secondary
structures as pairs led to some information-theoretic perspectives on how the
semantics encoded in RNA sequences can be inferred. In this context, the
pairing arises naturally from the energy model of RNA secondary structures.
Fixing the sequence in the pairing produces the RNA energy landscape, whose
partition function was discovered by McCaskill. Dually, fixing the structure
induces the energy landscape of sequences. The latter has been considered for
designing more efficient inverse folding algorithms. We present here the
Hamming distance filtered, dual partition function, together with a Boltzmann
sampler using novel dynamic programming routines for the loop-based energy
model. The time complexity of the algorithm is $O(h^2n)$, where $h,n$ are
Hamming distance and sequence length, respectively, reducing the time
complexity of samplers reported in the literature by $O(n^2)$. We then present
two applications, the first being in the context of the evolution of natural
sequence-structure pairs of microRNAs and the second constructing neutral
paths. The former studies the inverse fold rate (IFR) of sequence-structure
pairs, filtered by Hamming distance, observing that such pairs evolve towards
higher levels of robustness, i.e., increasing IFR. The latter is an algorithm
that constructs neutral paths: given two sequences in a neutral network, we
employ the sampler in order to construct short paths connecting them,
consisting of sequences all contained in the neutral network.
| [
{
"created": "Tue, 31 Oct 2017 16:06:12 GMT",
"version": "v1"
}
] | 2017-11-30 | [
[
"Huang",
"Fenix W.",
""
],
[
"He",
"Qijun",
""
],
[
"Barrett",
"Christopher",
""
],
[
"Reidys",
"Christian M.",
""
]
] | Recently, a framework considering RNA sequences and their RNA secondary structures as pairs led to some information-theoretic perspectives on how the semantics encoded in RNA sequences can be inferred. In this context, the pairing arises naturally from the energy model of RNA secondary structures. Fixing the sequence in the pairing produces the RNA energy landscape, whose partition function was discovered by McCaskill. Dually, fixing the structure induces the energy landscape of sequences. The latter has been considered for designing more efficient inverse folding algorithms. We present here the Hamming distance filtered, dual partition function, together with a Boltzmann sampler using novel dynamic programming routines for the loop-based energy model. The time complexity of the algorithm is $O(h^2n)$, where $h,n$ are Hamming distance and sequence length, respectively, reducing the time complexity of samplers reported in the literature by $O(n^2)$. We then present two applications, the first being in the context of the evolution of natural sequence-structure pairs of microRNAs and the second constructing neutral paths. The former studies the inverse fold rate (IFR) of sequence-structure pairs, filtered by Hamming distance, observing that such pairs evolve towards higher levels of robustness, i.e., increasing IFR. The latter is an algorithm that constructs neutral paths: given two sequences in a neutral network, we employ the sampler in order to construct short paths connecting them, consisting of sequences all contained in the neutral network.
1512.08310 | Osamu Narikiyo | Hirotaka Matsufuji and Osamu Narikiyo | Evolutionary simulations of autopoietic cells with cognition | null | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The minimal requirements for life are autopoiesis and cognition. We propose
autopoietic models with cognition and perform three classes of evolutionary
simulation. In our models the plasticity of the metabolic cycle and the
regulation function of the membrane are the bases for cognition. The
cognitive cells show adaptation and evolution. The environment also
evolves via its interaction with the system of cells. This is a
prototype of the co-evolution of the living system and its environment.
| [
{
"created": "Mon, 28 Dec 2015 02:54:23 GMT",
"version": "v1"
}
] | 2015-12-29 | [
[
"Matsufuji",
"Hirotaka",
""
],
[
"Narikiyo",
"Osamu",
""
]
] | The minimal requirements for life are autopoiesis and cognition. We propose autopoietic models with cognition and perform three classes of evolutionary simulation. In our models the plasticity of the metabolic cycle and the regulation function of the membrane are the bases for cognition. The cognitive cells show adaptation and evolution. The environment also evolves via its interaction with the system of cells. This is a prototype of the co-evolution of the living system and its environment.
2301.03511 | BingKan Xue | Leo Law, BingKan Xue | The value of internal memory for population growth in varying
environments | 22 pages, 7 figures | null | null | null | q-bio.PE physics.bio-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In varying environments it is beneficial for organisms to utilize available
cues to infer the conditions they may encounter and express potentially
favorable traits. However, external cues can be unreliable or too costly to
use. We consider an alternative strategy where organisms exploit internal
sources of information. Even without sensing environmental cues, their internal
states may become correlated with the environment as a result of selection,
which then form a memory that helps predict future conditions. To demonstrate
the adaptive value of such internal memory in varying environments, we revisit
the classic example of seed dormancy in annual plants. Previous studies have
considered the germination fraction of seeds and its dependence on
environmental cues. In contrast, we consider a model of germination fraction
that depends on the seed age, which is an internal state that can serve as a
memory. We show that, if the environmental variation has temporal structure,
then age-dependent germination fractions will allow the population to have an
increased long-term growth rate. The more organisms can remember through their
internal states, the higher growth rate a population can potentially achieve.
Our results suggest experimental ways to infer internal memory and its benefit
for adaptation in varying environments.
| [
{
"created": "Mon, 9 Jan 2023 17:01:21 GMT",
"version": "v1"
}
] | 2023-01-10 | [
[
"Law",
"Leo",
""
],
[
"Xue",
"BingKan",
""
]
] | In varying environments it is beneficial for organisms to utilize available cues to infer the conditions they may encounter and express potentially favorable traits. However, external cues can be unreliable or too costly to use. We consider an alternative strategy where organisms exploit internal sources of information. Even without sensing environmental cues, their internal states may become correlated with the environment as a result of selection, which then form a memory that helps predict future conditions. To demonstrate the adaptive value of such internal memory in varying environments, we revisit the classic example of seed dormancy in annual plants. Previous studies have considered the germination fraction of seeds and its dependence on environmental cues. In contrast, we consider a model of germination fraction that depends on the seed age, which is an internal state that can serve as a memory. We show that, if the environmental variation has temporal structure, then age-dependent germination fractions will allow the population to have an increased long-term growth rate. The more organisms can remember through their internal states, the higher growth rate a population can potentially achieve. Our results suggest experimental ways to infer internal memory and its benefit for adaptation in varying environments. |
0903.4131 | Tidjani Negadi | Tidjani Negadi | The genetic code degeneracy and the amino acids chemical composition are
connected | null | Neuroquantology, Vol.7, 1, 181-187, 2009 | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that our recently published Arithmetic Model of the genetic code
based on Godel Encoding is robust against symmetry transformations, especially
Rumer's one, U > G, A > C, and constitutes a link between the degeneracy
structure and the chemical composition of the 20 canonical amino acids. As a
result, several remarkable atomic patterns involving hydrogen, carbon, nucleon
and atom numbers are derived. This study has no obvious practical
application(s) but could, we hope, add some new knowledge concerning the
physico-mathematical structure of the genetic code.
| [
{
"created": "Tue, 24 Mar 2009 16:50:56 GMT",
"version": "v1"
}
] | 2009-03-25 | [
[
"Negadi",
"Tidjani",
""
]
] | We show that our recently published Arithmetic Model of the genetic code based on Godel Encoding is robust against symmetry transformations, especially Rumer's one, U > G, A > C, and constitutes a link between the degeneracy structure and the chemical composition of the 20 canonical amino acids. As a result, several remarkable atomic patterns involving hydrogen, carbon, nucleon and atom numbers are derived. This study has no obvious practical application(s) but could, we hope, add some new knowledge concerning the physico-mathematical structure of the genetic code.
1604.03081 | Jan Hoinka | Phuong Dao, Jan Hoinka, Yijie Wang, Mayumi Takahashi, Jiehua Zhou,
Fabrizio Costa, John Rossi, John Burnett, Rolf Backofen, Teresa M. Przytycka | AptaTRACE: Elucidating Sequence-Structure Binding Motifs by Uncovering
Selection Trends in HT-SELEX Experiments | This paper was selected for oral presentation at RECOMB 2016 and an
abstract is published in the conference proceedings | null | null | null | q-bio.QM cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aptamers, short synthetic RNA/DNA molecules binding specific targets with
high affinity and specificity, are utilized in an increasing spectrum of
bio-medical applications. Aptamers are identified in vitro via the Systematic
Evolution of Ligands by Exponential Enrichment (SELEX) protocol. SELEX selects
binders through an iterative process that, starting from a pool of random
ssDNA/RNA sequences, amplifies target-affine species through a series of
selection cycles. HT-SELEX, which combines SELEX with high throughput
sequencing, has recently transformed aptamer development and has opened the
field to even more applications. HT-SELEX is capable of generating over half a
billion data points, challenging computational scientists with the task of
identifying aptamer properties such as sequence-structure motifs that determine
binding. While currently available motif finding approaches suggest partial
solutions to this question, none possess the generality or scalability required
for HT-SELEX data, and they do not take advantage of important properties of
the experimental procedure.
We present AptaTRACE, a novel approach for the identification of
sequence-structure binding motifs in HT-SELEX derived aptamers. Our approach
leverages the experimental design of the SELEX protocol and identifies
sequence-structure motifs that show a signature of selection. Because of its
unique approach, AptaTRACE can uncover motifs even when these are present in
only a minuscule fraction of the pool. Due to these features, our method can
help to reduce the number of selection cycles required to produce aptamers with
the desired properties, thus reducing cost and time of this rather expensive
procedure. The performance of the method on simulated and real data indicates
that AptaTRACE can detect sequence-structure motifs even in highly challenging
data.
| [
{
"created": "Tue, 5 Apr 2016 18:47:52 GMT",
"version": "v1"
}
] | 2016-04-12 | [
[
"Dao",
"Phuong",
""
],
[
"Hoinka",
"Jan",
""
],
[
"Wang",
"Yijie",
""
],
[
"Takahashi",
"Mayumi",
""
],
[
"Zhou",
"Jiehua",
""
],
[
"Costa",
"Fabrizio",
""
],
[
"Rossi",
"John",
""
],
[
"Burnett",
"John",
""
],
[
"Backofen",
"Rolf",
""
],
[
"Przytycka",
"Teresa M.",
""
]
] | Aptamers, short synthetic RNA/DNA molecules binding specific targets with high affinity and specificity, are utilized in an increasing spectrum of bio-medical applications. Aptamers are identified in vitro via the Systematic Evolution of Ligands by Exponential Enrichment (SELEX) protocol. SELEX selects binders through an iterative process that, starting from a pool of random ssDNA/RNA sequences, amplifies target-affine species through a series of selection cycles. HT-SELEX, which combines SELEX with high throughput sequencing, has recently transformed aptamer development and has opened the field to even more applications. HT-SELEX is capable of generating over half a billion data points, challenging computational scientists with the task of identifying aptamer properties such as sequence structure motifs that determine binding. While currently available motif finding approaches suggest partial solutions to this question, none possess the generality or scalability required for HT-SELEX data, and they do not take advantage of important properties of the experimental procedure. We present AptaTRACE, a novel approach for the identification of sequence-structure binding motifs in HT-SELEX derived aptamers. Our approach leverages the experimental design of the SELEX protocol and identifies sequence-structure motifs that show a signature of selection. Because of its unique approach, AptaTRACE can uncover motifs even when these are present in only a minuscule fraction of the pool. Due to these features, our method can help to reduce the number of selection cycles required to produce aptamers with the desired properties, thus reducing cost and time of this rather expensive procedure. The performance of the method on simulated and real data indicates that AptaTRACE can detect sequence-structure motifs even in highly challenging data. |
1705.00550 | Jacek Bialowas | Jacek Bialowas | Topography of the nuclei and distribution of Acetylcholinesterase
activity in the septum of the telencephalon in man | 12 pages, 4 figures | Folia Morphol. (Warsz.), 1976, 35, 4, 405-412 | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distribution of acetylcholinesterase (AChE) activity in the septum of the
telencephalon in man was studied in 15 human brains using the acetylthiocholine
method. Highest activity of AChE was found in the nucleus of the diagonal band
and nucleus accumbens, and lowest in the lateral nucleus. Comparison of
histochemical results with cellular structure and with the course of fibers
showed absence in man of some of the nuclei described in animals such as the
anterior medial nucleus, triangular nucleus, and marked reduction of the
septo-hippocampal nucleus and fimbriate nucleus. Areas of the septum showing
AChE activity were divided into an anterior and posterior system. The
applicability to man of some neurophysiologic findings in animals is discussed.
| [
{
"created": "Mon, 1 May 2017 14:50:52 GMT",
"version": "v1"
}
] | 2017-05-02 | [
[
"Bialowas",
"Jacek",
""
]
] | Distribution of acetylcholinesterase (AChE) activity in the septum of the telencephalon in man was studied in 15 human brains using the acetylthiocholine method. Highest activity of AChE was found in the nucleus of the diagonal band and nucleus accumbens, and lowest in the lateral nucleus. Comparison of histochemical results with cellular structure and with the course of fibers showed absence in man of some of the nuclei described in animals such as the anterior medial nucleus, triangular nucleus, and marked reduction of the septo-hippocampal nucleus and fimbriate nucleus. Areas of the septum showing AChE activity were divided into an anterior and posterior system. The applicability to man of some neurophysiologic findings in animals is discussed.
1901.02908 | Misha Perepelitsa | Misha Perepelitsa | Adaptive Learning in Large Populations | 17 pages, 1 figure | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the adaptive learning rule of Harley (1981) for behavior
selection in symmetric conflict games in large populations. The rule uses
organisms' past, accumulated rewards as the predictor for the future behavior,
and can be traced in many life forms from bacteria to humans. We derive a
partial differential equation (PDE) that describes the stochastic learning in a
population of agents. The equation has the simple structure of a `conservation of
mass'-type equation in the space of stimuli to engage in a particular type of
behavior. We analyze the solutions of the PDE model for typical 2x2 games. It
is found that in games with small residual stimuli, adaptive learning rules
with faster memory decay have an evolutionary advantage.
| [
{
"created": "Wed, 9 Jan 2019 19:25:48 GMT",
"version": "v1"
},
{
"created": "Fri, 10 May 2019 03:22:26 GMT",
"version": "v2"
}
] | 2019-05-13 | [
[
"Perepelitsa",
"Misha",
""
]
] | We consider the adaptive learning rule of Harley (1981) for behavior selection in symmetric conflict games in large populations. The rule uses organisms' past, accumulated rewards as the predictor for the future behavior, and can be traced in many life forms from bacteria to humans. We derive a partial differential equation (PDE) that describes the stochastic learning in a population of agents. The equation has the simple structure of a `conservation of mass'-type equation in the space of stimuli to engage in a particular type of behavior. We analyze the solutions of the PDE model for typical 2x2 games. It is found that in games with small residual stimuli, adaptive learning rules with faster memory decay have an evolutionary advantage.
1309.7549 | Pavel Golovinski | P. A. Golovinski | Mathematical Model of Age Aggression | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We formulate a mathematical model of competition for resources between
representatives of different age groups. A nonlinear kinetic
integral-differential equation of the age aggression describes the process of
redistribution of resources. It is shown that the equation of the age
aggression has a stationary solution, in the absence of age-dependency in the
interaction of different age groups. A numerical simulation of the evolution of
resources for different initial distributions has been performed. The
instability of the system is demonstrated, along with the existence of regimes
leading to a concentration of resources in certain age groups.
| [
{
"created": "Sun, 29 Sep 2013 07:46:40 GMT",
"version": "v1"
}
] | 2013-10-01 | [
[
"Golovinski",
"P. A.",
""
]
] | We formulate a mathematical model of competition for resources between representatives of different age groups. A nonlinear kinetic integral-differential equation of the age aggression describes the process of redistribution of resources. It is shown that the equation of the age aggression has a stationary solution, in the absence of age-dependency in the interaction of different age groups. A numerical simulation of the evolution of resources for different initial distributions has been performed. The instability of the system is demonstrated, along with the existence of regimes leading to a concentration of resources in certain age groups.
2103.14366 | Aymeric Vie | Aymeric Vie | Emergence of more contagious COVID-19 variants from the coevolution of
viruses and policy interventions | arXiv admin note: text overlap with arXiv:2102.12365 | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | At the end of 2020, policy responses to the SARS-CoV-2 outbreak have been
shaken by the emergence of virus variants. The emergence of these more
contagious, more severe, or even vaccine-resistant strains has challenged
worldwide policy interventions. Anticipating the emergence of these mutations
to plan ahead adequate policies, and understanding how human behaviors may
affect the evolution of viruses by coevolution, are key challenges. In this
article, we propose coevolution with genetic algorithms (GAs) as a credible
approach to model this relationship, highlighting its implications, potential
and challenges. We present a dual GA model in which both viruses aiming for
survival and policy measures aiming at minimising infection rates in the
population competitively evolve. Simulation runs reproduce the emergence of
more contagious variants, and identify the evolution of policy responses as a
determinant cause of this phenomenon. This coevolution opens new possibilities
to visualise the impact of government interventions not only on outbreak
dynamics, but also on its evolution, to improve the efficacy of policies.
| [
{
"created": "Fri, 26 Mar 2021 10:11:54 GMT",
"version": "v1"
}
] | 2021-03-29 | [
[
"Vie",
"Aymeric",
""
]
] | At the end of 2020, policy responses to the SARS-CoV-2 outbreak have been shaken by the emergence of virus variants. The emergence of these more contagious, more severe, or even vaccine-resistant strains has challenged worldwide policy interventions. Anticipating the emergence of these mutations to plan ahead adequate policies, and understanding how human behaviors may affect the evolution of viruses by coevolution, are key challenges. In this article, we propose coevolution with genetic algorithms (GAs) as a credible approach to model this relationship, highlighting its implications, potential and challenges. We present a dual GA model in which both viruses aiming for survival and policy measures aiming at minimising infection rates in the population competitively evolve. Simulation runs reproduce the emergence of more contagious variants, and identify the evolution of policy responses as a determinant cause of this phenomenon. This coevolution opens new possibilities to visualise the impact of government interventions not only on outbreak dynamics, but also on its evolution, to improve the efficacy of policies.
2009.03238 | Niharika Shimona D'Souza | Niharika Shimona D'Souza, Mary Beth Nebel, Nicholas Wymbs, Stewart H.
Mostofsky, Archana Venkataraman | A Joint Network Optimization Framework to Predict Clinical Severity from
Resting State Functional MRI Data | null | null | null | null | q-bio.NC cs.LG eess.SP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel optimization framework to predict clinical severity from
resting state fMRI (rs-fMRI) data. Our model consists of two coupled terms. The
first term decomposes the correlation matrices into a sparse set of
representative subnetworks that define a network manifold. These subnetworks
are modeled as rank-one outer-products which correspond to the elemental
patterns of co-activation across the brain; the subnetworks are combined via
patient-specific non-negative coefficients. The second term is a linear
regression model that uses the patient-specific coefficients to predict a
measure of clinical severity. We validate our framework on two separate
datasets in a ten-fold cross-validation setting. The first is a cohort of
fifty-eight patients diagnosed with Autism Spectrum Disorder (ASD). The second
dataset consists of sixty-three patients from a publicly available ASD
database. Our method outperforms standard semi-supervised frameworks, which
employ conventional graph theoretic and statistical representation learning
techniques to relate the rs-fMRI correlations to behavior. In contrast, our
joint network optimization framework exploits the structure of the rs-fMRI
correlation matrices to simultaneously capture group level effects and patient
heterogeneity. Finally, we demonstrate that our proposed framework robustly
identifies clinically relevant networks characteristic of ASD.
| [
{
"created": "Thu, 27 Aug 2020 23:43:25 GMT",
"version": "v1"
}
] | 2020-09-08 | [
[
"D'Souza",
"Niharika Shimona",
""
],
[
"Nebel",
"Mary Beth",
""
],
[
"Wymbs",
"Nicholas",
""
],
[
"Mostofsky",
"Stewart H.",
""
],
[
"Venkataraman",
"Archana",
""
]
] | We propose a novel optimization framework to predict clinical severity from resting state fMRI (rs-fMRI) data. Our model consists of two coupled terms. The first term decomposes the correlation matrices into a sparse set of representative subnetworks that define a network manifold. These subnetworks are modeled as rank-one outer-products which correspond to the elemental patterns of co-activation across the brain; the subnetworks are combined via patient-specific non-negative coefficients. The second term is a linear regression model that uses the patient-specific coefficients to predict a measure of clinical severity. We validate our framework on two separate datasets in a ten fold cross validation setting. The first is a cohort of fifty-eight patients diagnosed with Autism Spectrum Disorder (ASD). The second dataset consists of sixty three patients from a publicly available ASD database. Our method outperforms standard semi-supervised frameworks, which employ conventional graph theoretic and statistical representation learning techniques to relate the rs-fMRI correlations to behavior. In contrast, our joint network optimization framework exploits the structure of the rs-fMRI correlation matrices to simultaneously capture group level effects and patient heterogeneity. Finally, we demonstrate that our proposed framework robustly identifies clinically relevant networks characteristic of ASD. |
2001.04571 | David Zoltowski | David M. Zoltowski, Jonathan W. Pillow, and Scott W. Linderman | Unifying and generalizing models of neural dynamics during
decision-making | null | null | null | null | q-bio.NC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An open question in systems and computational neuroscience is how neural
circuits accumulate evidence towards a decision. Fitting models of
decision-making theory to neural activity helps answer this question, but
current approaches limit the number of these models that we can fit to neural
data. Here we propose a unifying framework for modeling neural activity during
decision-making tasks. The framework includes the canonical drift-diffusion
model and enables extensions such as multi-dimensional accumulators, variable
and collapsing boundaries, and discrete jumps. Our framework is based on
constraining the parameters of recurrent state-space models, for which we
introduce a scalable variational Laplace-EM inference algorithm. We applied the
modeling approach to spiking responses recorded from monkey parietal cortex
during two decision-making tasks. We found that a two-dimensional accumulator
better captured the trial-averaged responses of a set of parietal neurons than
a single accumulator model. Next, we identified a variable lower boundary in
the responses of an LIP neuron during a random dot motion task.
| [
{
"created": "Mon, 13 Jan 2020 23:57:28 GMT",
"version": "v1"
}
] | 2020-01-15 | [
[
"Zoltowski",
"David M.",
""
],
[
"Pillow",
"Jonathan W.",
""
],
[
"Linderman",
"Scott W.",
""
]
] | An open question in systems and computational neuroscience is how neural circuits accumulate evidence towards a decision. Fitting models of decision-making theory to neural activity helps answer this question, but current approaches limit the number of these models that we can fit to neural data. Here we propose a unifying framework for modeling neural activity during decision-making tasks. The framework includes the canonical drift-diffusion model and enables extensions such as multi-dimensional accumulators, variable and collapsing boundaries, and discrete jumps. Our framework is based on constraining the parameters of recurrent state-space models, for which we introduce a scalable variational Laplace-EM inference algorithm. We applied the modeling approach to spiking responses recorded from monkey parietal cortex during two decision-making tasks. We found that a two-dimensional accumulator better captured the trial-averaged responses of a set of parietal neurons than a single accumulator model. Next, we identified a variable lower boundary in the responses of an LIP neuron during a random dot motion task. |
1805.05303 | David Papo | David Papo | Neurofeedback: principles, appraisal and outstanding issues | 12 pages | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neurofeedback (NF) is a form of brain training in which subjects are fed back
information about some measure of their brain activity which they are
instructed to modify in a way thought to be functionally advantageous. Over the
last twenty years, NF has been used to treat various neurological and
psychiatric conditions, and to improve cognitive function in various contexts.
However, despite its growing popularity, each of the main steps in NF comes
with its own set of often covert assumptions. Here we critically examine some
conceptual and methodological issues associated with the way general objectives
and neural targets of NF are defined, and review the neural mechanisms through
which NF may act, and the way its efficacy is gauged. The NF process is
characterised in terms of functional dynamics, and possible ways in which it
may be controlled are discussed. Finally, it is proposed that improving NF will
require better understanding of various fundamental aspects of brain dynamics
and a more precise definition of functional brain activity and brain-behaviour
relationships.
| [
{
"created": "Mon, 14 May 2018 17:24:42 GMT",
"version": "v1"
}
] | 2018-05-15 | [
[
"Papo",
"David",
""
]
] | Neurofeedback (NF) is a form of brain training in which subjects are fed back information about some measure of their brain activity which they are instructed to modify in a way thought to be functionally advantageous. Over the last twenty years, NF has been used to treat various neurological and psychiatric conditions, and to improve cognitive function in various contexts. However, despite its growing popularity, each of the main steps in NF comes with its own set of often covert assumptions. Here we critically examine some conceptual and methodological issues associated with the way general objectives and neural targets of NF are defined, and review the neural mechanisms through which NF may act, and the way its efficacy is gauged. The NF process is characterised in terms of functional dynamics, and possible ways in which it may be controlled are discussed. Finally, it is proposed that improving NF will require better understanding of various fundamental aspects of brain dynamics and a more precise definition of functional brain activity and brain-behaviour relationships.
2401.15880 | Rosalind Pan | Rosalind Wenshan Pan, Tom Roeschinger, Kian Faizi, Hernan Garcia, Rob
Phillips | Deciphering regulatory architectures from synthetic single-cell
expression patterns | null | null | null | null | q-bio.GN q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | For the vast majority of genes in sequenced genomes, there is limited
understanding of how they are regulated. Without such knowledge, it is not
possible to perform a quantitative theory-experiment dialogue on how such genes
give rise to physiological and evolutionary adaptation. One category of
high-throughput experiments used to understand the sequence-phenotype
relationship of the transcriptome is massively parallel reporter assays
(MPRAs). However, to improve the versatility and scalability of MPRA pipelines,
we need a "theory of the experiment" to help us better understand the impact of
various biological and experimental parameters on the interpretation of
experimental data. To that end, in this paper we create tens of thousands of
synthetic single-cell gene expression outputs using both equilibrium and
out-of-equilibrium models. These models make it possible to imitate the summary
statistics (information footprints and expression shift matrices) used to
characterize the output of MPRAs and from this summary statistic to infer the
underlying regulatory architecture. Specifically, we use a more refined
implementation of the so-called thermodynamic models in which the binding
energies of each sequence variant are derived from energy matrices. Our
simulations reveal important effects of the parameters on MPRA data and we
demonstrate our ability to optimize MPRA experimental designs with the goal of
generating thermodynamic models of the transcriptome with base-pair
specificity. Further, this approach makes it possible to carefully examine the
mapping between mutations in binding sites and their corresponding expression
profiles, a tool useful not only for better designing MPRAs, but also for
exploring regulatory evolution.
| [
{
"created": "Mon, 29 Jan 2024 04:25:29 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jun 2024 23:19:28 GMT",
"version": "v2"
}
] | 2024-06-07 | [
[
"Pan",
"Rosalind Wenshan",
""
],
[
"Roeschinger",
"Tom",
""
],
[
"Faizi",
"Kian",
""
],
[
"Garcia",
"Hernan",
""
],
[
"Phillips",
"Rob",
""
]
] | For the vast majority of genes in sequenced genomes, there is limited understanding of how they are regulated. Without such knowledge, it is not possible to perform a quantitative theory-experiment dialogue on how such genes give rise to physiological and evolutionary adaptation. One category of high-throughput experiments used to understand the sequence-phenotype relationship of the transcriptome is massively parallel reporter assays (MPRAs). However, to improve the versatility and scalability of MPRA pipelines, we need a "theory of the experiment" to help us better understand the impact of various biological and experimental parameters on the interpretation of experimental data. To that end, in this paper we create tens of thousands of synthetic single-cell gene expression outputs using both equilibrium and out-of-equilibrium models. These models make it possible to imitate the summary statistics (information footprints and expression shift matrices) used to characterize the output of MPRAs and from this summary statistic to infer the underlying regulatory architecture. Specifically, we use a more refined implementation of the so-called thermodynamic models in which the binding energies of each sequence variant are derived from energy matrices. Our simulations reveal important effects of the parameters on MPRA data and we demonstrate our ability to optimize MPRA experimental designs with the goal of generating thermodynamic models of the transcriptome with base-pair specificity. Further, this approach makes it possible to carefully examine the mapping between mutations in binding sites and their corresponding expression profiles, a tool useful not only for better designing MPRAs, but also for exploring regulatory evolution. |
q-bio/0504010 | Florentin Smarandache | Sreepurna Malakar, Florentin Smarandache, Sukanto Bhattacharya | Statistical modeling of primary Ewing tumours of the bone | 12 pages | International Journal of Tomography & Statistics, Vol. 3, No.
JJ05, 81-88, 2005. | null | null | q-bio.QM q-bio.CB | null | This short technical paper advocates a bootstrapping algorithm from which we
can form a statistically reliable opinion based on limited clinically observed
data, regarding whether an osteo-hyperplasia could actually be a case of
Ewing's osteosarcoma. The basic premise underlying our methodology is that a
primary bone tumour, if it is indeed Ewing's osteosarcoma, cannot increase in
volume beyond some critical limit without showing metastasis. We propose a
statistical method to extrapolate such a critical limit to primary tumour volume.
Our model does not involve any physiological variables but rather is entirely
based on time series observations of increase in primary tumour volume from the
point of initial detection to the actual detection of metastases.
| [
{
"created": "Wed, 6 Apr 2005 18:12:32 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Malakar",
"Sreepurna",
""
],
[
"Smarandache",
"Florentin",
""
],
[
"Bhattacharya",
"Sukanto",
""
]
] | This short technical paper advocates a bootstrapping algorithm from which we can form a statistically reliable opinion based on limited clinically observed data, regarding whether an osteo-hyperplasia could actually be a case of Ewing's osteosarcoma. The basic premise underlying our methodology is that a primary bone tumour, if it is indeed Ewing's osteosarcoma, cannot increase in volume beyond some critical limit without showing metastasis. We propose a statistical method to extrapolate such a critical limit to primary tumour volume. Our model does not involve any physiological variables but rather is entirely based on time series observations of increase in primary tumour volume from the point of initial detection to the actual detection of metastases.
2103.05754 | Michelle Bartolo | Michelle A. Bartolo, M. Umar Qureshi, Mitchel J. Colebank, Naomi C.
Chesler, and Mette S. Olufsen | Numerical predictions of shear stress and cyclic stretch in the healthy
pulmonary vasculature | 26 pages, 9 figures | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Isolated post-capillary pulmonary hypertension (Ipc-PH) occurs due to left
heart failure, which contributes to 1 out of every 9 deaths in the United
States. In some patients, through unknown mechanisms, Ipc-PH transitions to
combined pre-/post-capillary PH (Cpc-PH), diagnosed by an increase in pulmonary
vascular resistance and associated with a dramatic increase in mortality. We
hypothesize that altered mechanical forces and subsequent vasoactive signaling
in the pulmonary capillary bed drive the transition from Ipc-PH to Cpc-PH.
However, even in a healthy pulmonary circulation, the mechanical forces in the
smallest vessels (the arterioles, venules, and capillary bed) have not been
quantitatively defined. This study is the first to examine this question via a
computational fluid dynamics model of the human pulmonary arteries, veins,
arterioles, and venules. Using this model we predict temporal and spatial
dynamics of cyclic stretch and wall shear stress. In the large vessels,
numerical simulations show that increases in shear stress coincide with larger
flow and pressure. In the microvasculature, we found that as vessel radius
decreases, shear stress increases and flow decreases. In arterioles, this
corresponds with lower pressures; however, the venules and smaller veins have
higher pressure than larger veins. Our model provides predictions for pressure,
flow, shear stress, and cyclic stretch that provide a way to analyze and
investigate hypotheses related to disease progression in the pulmonary
circulation.
| [
{
"created": "Fri, 5 Mar 2021 20:49:20 GMT",
"version": "v1"
}
] | 2021-03-11 | [
[
"Bartolo",
"Michelle A.",
""
],
[
"Qureshi",
"M. Umar",
""
],
[
"Colebank",
"Mitchel J.",
""
],
[
"Chesler",
"Naomi C.",
""
],
[
"Olufsen",
"Mette S.",
""
]
] | Isolated post-capillary pulmonary hypertension (Ipc-PH) occurs due to left heart failure, which contributes to 1 out of every 9 deaths in the United States. In some patients, through unknown mechanisms, Ipc-PH transitions to combined pre-/post-capillary PH (Cpc-PH), diagnosed by an increase in pulmonary vascular resistance and associated with a dramatic increase in mortality. We hypothesize that altered mechanical forces and subsequent vasoactive signaling in the pulmonary capillary bed drive the transition from Ipc-PH to Cpc-PH. However, even in a healthy pulmonary circulation, the mechanical forces in the smallest vessels (the arterioles, venules, and capillary bed) have not been quantitatively defined. This study is the first to examine this question via a computational fluid dynamics model of the human pulmonary arteries, veins, arterioles, and venules. Using this model we predict temporal and spatial dynamics of cyclic stretch and wall shear stress. In the large vessels, numerical simulations show that increases in shear stress coincide with larger flow and pressure. In the microvasculature, we found that as vessel radius decreases, shear stress increases and flow decreases. In arterioles, this corresponds with lower pressures; however, the venules and smaller veins have higher pressure than larger veins. Our model provides predictions for pressure, flow, shear stress, and cyclic stretch that provide a way to analyze and investigate hypotheses related to disease progression in the pulmonary circulation.
1007.5016 | Tsvi Tlusty | Jordi Soriano, Ilan Breskin, Elisha Moses and Tsvi Tlusty | Percolation Approach to Study Connectivity in Living Neural Networks | Keywords: neural networks, graphs, connectivity, percolation, giant
component PACS: 87.18.Sn, 87.19.La, 64.60.Ak
http://www.weizmann.ac.il/complex/tlusty/papers/AIP2006.pdf | 9th Granada Seminar 2006 - Cooperative Behavior in Neural Systems
AIP, eds Garrido PL, Marro J, & Torres JJ (AIP, Granada, SPAIN), Vol 887, pp
96-106 | null | null | q-bio.NC cond-mat.dis-nn physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study neural connectivity in cultures of rat hippocampal neurons. We
measure the neurons' response to an electric stimulation for gradually decreasing
connectivity, and characterize the size of the giant cluster in the network.
The connectivity undergoes a percolation transition described by the critical
exponent $\beta \simeq 0.65$. We use a theoretical approach based on
bond percolation on a graph to describe the process of disintegration of the
network and extract its statistical properties. Together with numerical
simulations we show that the connectivity in the neural culture is local,
characterized by a Gaussian degree distribution and not a power-law one
| [
{
"created": "Wed, 28 Jul 2010 15:58:03 GMT",
"version": "v1"
}
] | 2010-07-30 | [
[
"Soriano",
"Jordi",
""
],
[
"Breskin",
"Ilan",
""
],
[
"Moses",
"Elisha",
""
],
[
"Tlusty",
"Tsvi",
""
]
] | We study neural connectivity in cultures of rat hippocampal neurons. We measure the neurons' response to an electric stimulation for gradually decreasing connectivity, and characterize the size of the giant cluster in the network. The connectivity undergoes a percolation transition described by the critical exponent $\beta \simeq 0.65$. We use a theoretical approach based on bond percolation on a graph to describe the process of disintegration of the network and extract its statistical properties. Together with numerical simulations we show that the connectivity in the neural culture is local, characterized by a Gaussian degree distribution and not a power-law one
1605.05247 | Yuan Wang | Yuan Wang, Guy Carrault, Alain Beuchee, Nathalie Costet, Huazhong Shu,
Lotfi Senhadji | Heart Rate Variability and Respiration Signal as Diagnostic Tools for
Late Onset Sepsis in Neonatal Intensive Care Units | null | null | null | null | q-bio.QM cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Apnea-bradycardia is one of the major clinical early indicators of late-onset
sepsis occurring in approximately 7% to 10% of all neonates and in more than
25% of very low birth weight infants in NICU. The objective of this paper was
to determine if HRV, respiration and their relationships help to diagnose
infection in premature infants via non-invasive ways in NICU. Therefore, we
implement Mono-Channel (MC) and Bi-Channel (BC) Analysis in two groups: sepsis
(S) vs. non-sepsis (NS). Firstly, we studied RR series not only by linear
methods: time domain and frequency domain, but also by non-linear methods:
chaos theory and information theory. The results show that alpha Slow, alpha
Fast and Sample Entropy are significant parameters to distinguish S from NS.
Secondly, the question about the functional coupling of HRV and nasal
respiration is addressed. Local linear correlation coefficient r2t,f has been
explored, while non-linear regression coefficient h2 was calculated in two
directions. It is obvious that r2t,f within the third frequency band (0.2<f<0.4
Hz) and h2 in two directions were complementary approaches to diagnose sepsis.
Thirdly, a feasibility study is carried out on the candidate parameters selected
from MC and BC respectively. We discovered that the proposed test based on
optimal fusion of 6 features shows good performance with the largest AUC and a
reduced probability of false alarm (PFA).
| [
{
"created": "Thu, 12 May 2016 23:56:31 GMT",
"version": "v1"
}
] | 2016-05-18 | [
[
"Wang",
"Yuan",
""
],
[
"Carrault",
"Guy",
""
],
[
"Beuchee",
"Alain",
""
],
[
"Costet",
"Nathalie",
""
],
[
"Shu",
"Huazhong",
""
],
[
"Senhadji",
"Lotfi",
""
]
] | Apnea-bradycardia is one of the major clinical early indicators of late-onset sepsis occurring in approximately 7% to 10% of all neonates and in more than 25% of very low birth weight infants in NICU. The objective of this paper was to determine if HRV, respiration and their relationships help to diagnose infection in premature infants via non-invasive ways in NICU. Therefore, we implement Mono-Channel (MC) and Bi-Channel (BC) Analysis in two groups: sepsis (S) vs. non-sepsis (NS). Firstly, we studied RR series not only by linear methods: time domain and frequency domain, but also by non-linear methods: chaos theory and information theory. The results show that alpha Slow, alpha Fast and Sample Entropy are significant parameters to distinguish S from NS. Secondly, the question about the functional coupling of HRV and nasal respiration is addressed. Local linear correlation coefficient r2t,f has been explored, while non-linear regression coefficient h2 was calculated in two directions. It is obvious that r2t,f within the third frequency band (0.2<f<0.4 Hz) and h2 in two directions were complementary approaches to diagnose sepsis. Thirdly, feasibility study is carried out on the candidate parameters selected from MC and BC respectively. We discovered that the proposed test based on optimal fusion of 6 features shows good performance with the largest AUC and a reduced probability of false alarm (PFA). |
1610.04041 | Wolfgang Halter | Wolfgang Halter, Jan Maximilian Montenbruck, Zoltan A. Tuza and Frank
Allg\"ower | A resource dependent protein synthesis model for evaluating synthetic
circuits | 14 pages, 7 figures | null | null | null | q-bio.MN q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reliable in-silico design of synthetic gene networks necessitates novel
approaches to model the process of protein synthesis under the influence of
limited resources. We present such a novel protein synthesis model which
originates from the Ribosome Flow Model and among other things describes the
movement of RNA polymerase and ribosomes on DNA and mRNA templates,
respectively. By analyzing the convergence properties of this model based upon
geometric considerations we present additional insights into the dynamic
mechanisms of the process of protein synthesis. Further, we exemplarily show
how this model can be used to evaluate the performance of synthetic gene
circuits under different loading scenarios.
| [
{
"created": "Thu, 13 Oct 2016 11:54:06 GMT",
"version": "v1"
}
] | 2016-10-14 | [
[
"Halter",
"Wolfgang",
""
],
[
"Montenbruck",
"Jan Maximilian",
""
],
[
"Tuza",
"Zoltan A.",
""
],
[
"Allgöwer",
"Frank",
""
]
] | Reliable in-silico design of synthetic gene networks necessitates novel approaches to model the process of protein synthesis under the influence of limited resources. We present such a novel protein synthesis model which originates from the Ribosome Flow Model and among other things describes the movement of RNA polymerase and ribosomes on DNA and mRNA templates, respectively. By analyzing the convergence properties of this model based upon geometric considerations we present additional insights into the dynamic mechanisms of the process of protein synthesis. Further, we exemplarily show how this model can be used to evaluate the performance of synthetic gene circuits under different loading scenarios.
1005.2107 | Gregory Batt | Gr\'egory Batt (INRIA Rocquencourt), Michel Page (INRIA Rh\^one-Alpes,
ESA), Irene Cantone, Gregor Goessler (INRIA Rh\^one-Alpes / LIG Laboratoire
d'Informatique de Grenoble), Pedro T. Monteiro (INRIA Rh\^one-Alpes, IST),
Hidde De Jong (INRIA Rh\^one-Alpes) | Efficient parameter search for qualitative models of regulatory networks
using symbolic model checking | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Investigating the relation between the structure and behavior of complex
biological networks often involves posing the following two questions: Is a
hypothesized structure of a regulatory network consistent with the observed
behavior? And can a proposed structure generate a desired behavior? Answering
these questions presupposes that we are able to test the compatibility of
network structure and behavior. We cast these questions into a parameter search
problem for qualitative models of regulatory networks, in particular
piecewise-affine differential equation models. We develop a method based on
symbolic model checking that avoids enumerating all possible parametrizations,
and show that this method performs well on real biological problems, using the
IRMA synthetic network and benchmark experimental data sets. We test the
consistency between the IRMA network structure and the time-series data, and
search for parameter modifications that would improve the robustness of the
external control of the system behavior.
| [
{
"created": "Wed, 12 May 2010 14:07:56 GMT",
"version": "v1"
}
] | 2010-05-20 | [
[
"Batt",
"Grégory",
"",
"INRIA Rocquencourt"
],
[
"Page",
"Michel",
"",
"INRIA Rhône-Alpes,\n ESA"
],
[
"Cantone",
"Irene",
"",
"INRIA Rhône-Alpes / LIG Laboratoire\n d'Informatique de Grenoble"
],
[
"Goessler",
"Gregor",
"",
"INRIA Rhône-Alpes / LIG Laboratoire\n d'Informatique de Grenoble"
],
[
"Monteiro",
"Pedro T.",
"",
"INRIA Rhône-Alpes, IST"
],
[
"De Jong",
"Hidde",
"",
"INRIA Rhône-Alpes"
]
] | Investigating the relation between the structure and behavior of complex biological networks often involves posing the following two questions: Is a hypothesized structure of a regulatory network consistent with the observed behavior? And can a proposed structure generate a desired behavior? Answering these questions presupposes that we are able to test the compatibility of network structure and behavior. We cast these questions into a parameter search problem for qualitative models of regulatory networks, in particular piecewise-affine differential equation models. We develop a method based on symbolic model checking that avoids enumerating all possible parametrizations, and show that this method performs well on real biological problems, using the IRMA synthetic network and benchmark experimental data sets. We test the consistency between the IRMA network structure and the time-series data, and search for parameter modifications that would improve the robustness of the external control of the system behavior. |
2401.02509 | Jitang Li | Jitang Li and Jinzheng Li | Memory, Consciousness and Large Language Model | null | null | null | null | q-bio.NC cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the development in cognitive science and Large Language Models (LLMs),
increasing connections have come to light between these two distinct fields.
Building upon these connections, we propose a conjecture suggesting the
existence of a duality between LLMs and Tulving's theory of memory. We identify
a potential correspondence between Tulving's synergistic ecphory model (SEM) of
retrieval and the emergent abilities observed in LLMs, serving as supporting
evidence for our conjecture. Furthermore, we speculate that consciousness may
be considered a form of emergent ability based on this duality. We also discuss
how other theories of consciousness intersect with our research.
| [
{
"created": "Thu, 4 Jan 2024 19:44:03 GMT",
"version": "v1"
},
{
"created": "Sun, 7 Jul 2024 14:58:22 GMT",
"version": "v2"
}
] | 2024-07-09 | [
[
"Li",
"Jitang",
""
],
[
"Li",
"Jinzheng",
""
]
] | With the development in cognitive science and Large Language Models (LLMs), increasing connections have come to light between these two distinct fields. Building upon these connections, we propose a conjecture suggesting the existence of a duality between LLMs and Tulving's theory of memory. We identify a potential correspondence between Tulving's synergistic ecphory model (SEM) of retrieval and the emergent abilities observed in LLMs, serving as supporting evidence for our conjecture. Furthermore, we speculate that consciousness may be considered a form of emergent ability based on this duality. We also discuss how other theories of consciousness intersect with our research. |
1708.01857 | Peng Chen | Quanya Liu, Peng Chen, Bing Wang and Jinyan Li | dbMPIKT: A web resource for the kinetic and thermodynamic database of
mutant protein interactions | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein-protein interactions (PPIs) perform important roles on biological
functions. Researches of mutants on protein interactions can further understand
PPIs. In the past, many researchers have developed databases that stored
mutants on protein interactions, which are old and not updated till now. To
address the issue, we developed a kinetic and thermodynamic database of mutant
protein interactions (dbMPIKT) that is freely accessible at our website.
This database contains 5291 mutants, integrating data from previous
databases and from the literature of the past three years. Furthermore, the
data were analyzed, involving mutation number, mutation type, protein pair
source and network map construction. On the whole, the database provides new
data to further improve the study of PPIs. Website:
http://210.45.212.128/lqy/index.php
| [
{
"created": "Sun, 6 Aug 2017 07:55:54 GMT",
"version": "v1"
}
] | 2017-08-08 | [
[
"Liu",
"Quanya",
""
],
[
"Chen",
"Peng",
""
],
[
"Wang",
"Bing",
""
],
[
"Li",
"Jinyan",
""
]
] | Protein-protein interactions (PPIs) play important roles in biological functions. Studies of mutations in protein interactions can further our understanding of PPIs. In the past, many researchers developed databases storing mutants of protein interactions, but these are old and no longer updated. To address the issue, we developed a kinetic and thermodynamic database of mutant protein interactions (dbMPIKT) that is freely accessible at our website. This database contains 5291 mutants, integrating data from previous databases and from the literature of the past three years. Furthermore, the data were analyzed, involving mutation number, mutation type, protein pair source and network map construction. On the whole, the database provides new data to further improve the study of PPIs. Website: http://210.45.212.128/lqy/index.php
2309.12195 | Tristan Venot | Tristan Venot, Arthur Desbois, Marie-Constance Corsi, Laurent
Hugueville, Ludovic Saint-Bauzel, Fabrizio De Vico Fallani | Intentional binding enhances hybrid BCI control | 18 pages, 5 figures, 7 supplementary materials | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Mental imagery-based brain-computer interfaces (BCIs) allow users to interact with
the external environment by naturally bypassing the musculoskeletal system.
Making BCIs efficient and accurate is paramount to improve the reliability of
real-life and clinical applications, from open-loop device control to
closed-loop neurorehabilitation. By promoting sense of agency and embodiment,
realistic setups including multimodal channels of communication, such as
eye-gaze, and robotic prostheses aim to improve BCI performance. However, how
the mental imagery command should be integrated in those hybrid systems so as
to ensure the best interaction is still poorly understood. To address this
question, we performed a hybrid EEG-based BCI experiment involving healthy
volunteers enrolled in a reach-and-grasp action operated by a robotic arm. Main
results showed that the hand grasping motor imagery timing significantly
affects the BCI accuracy as well as the spatiotemporal brain dynamics. Higher
control accuracy was obtained when motor imagery is performed just after the
robot reaching, as compared to before or during the movement. The proximity
with the subsequent robot grasping favored intentional binding, led to stronger
motor-related brain activity, and primed the ability of sensorimotor areas to
integrate information from regions implicated in higher-order cognitive
functions. Taken together, these findings provided fresh evidence about the
effects of intentional binding on human behavior and cortical network dynamics
that can be exploited to design a new generation of efficient brain-machine
interfaces.
| [
{
"created": "Thu, 21 Sep 2023 16:02:00 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Sep 2023 11:36:41 GMT",
"version": "v2"
},
{
"created": "Mon, 2 Oct 2023 09:41:13 GMT",
"version": "v3"
},
{
"created": "Fri, 13 Oct 2023 12:19:41 GMT",
"version": "v4"
}
] | 2023-10-16 | [
[
"Venot",
"Tristan",
""
],
[
"Desbois",
"Arthur",
""
],
[
"Corsi",
"Marie-Constance",
""
],
[
"Hugueville",
"Laurent",
""
],
[
"Saint-Bauzel",
"Ludovic",
""
],
[
"Fallani",
"Fabrizio De Vico",
""
]
] | Mental imagery-based brain-computer interfaces (BCIs) allow to interact with the external environment by naturally bypassing the musculoskeletal system. Making BCIs efficient and accurate is paramount to improve the reliability of real-life and clinical applications, from open-loop device control to closed-loop neurorehabilitation. By promoting sense of agency and embodiment, realistic setups including multimodal channels of communication, such as eye-gaze, and robotic prostheses aim to improve BCI performance. However, how the mental imagery command should be integrated in those hybrid systems so as to ensure the best interaction is still poorly understood. To address this question, we performed a hybrid EEG-based BCI experiment involving healthy volunteers enrolled in a reach-and-grasp action operated by a robotic arm. Main results showed that the hand grasping motor imagery timing significantly affects the BCI accuracy as well as the spatiotemporal brain dynamics. Higher control accuracy was obtained when motor imagery is performed just after the robot reaching, as compared to before or during the movement. The proximity with the subsequent robot grasping favored intentional binding, led to stronger motor-related brain activity, and primed the ability of sensorimotor areas to integrate information from regions implicated in higher-order cognitive functions. Taken together, these findings provided fresh evidence about the effects of intentional binding on human behavior and cortical network dynamics that can be exploited to design a new generation of efficient brain-machine interfaces. |
1602.00467 | Giuseppe Jurman | Giuseppe Jurman and Michele Filosi and Samantha Riccadonna and Roberto
Visintainer and Cesare Furlanello | Differential network analysis and graph classification: a glocal
approach | Submitted for BMTL 2015 Proceedings | null | null | null | q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Based on the glocal HIM metric and its induced graph kernel, we propose a
novel solution in differential network analysis that integrates network
comparison and classification tasks. The HIM distance is defined as the
one-parameter family of product metrics linearly combining the normalised
Hamming distance H and the normalised Ipsen-Mikhailov spectral distance IM. The
combination of the two components within a single metric allows overcoming
their drawbacks and obtaining a measure that is simultaneously global and
local. Furthermore, plugging the HIM kernel into a Support Vector Machine gives
us a classification algorithm based on the HIM distance. First, we outline the
theory underlying the metric construction. We introduce two diverse
applications of the HIM distance and the HIM kernel to biological datasets.
This versatility supports the adoption of the HIM family as a general tool for
information extraction, quantifying differences among diverse instances of a
complex system. An Open Source implementation of the HIM metrics is provided by
the R package nettools and its web interface ReNette.
| [
{
"created": "Mon, 1 Feb 2016 10:45:04 GMT",
"version": "v1"
}
] | 2016-02-02 | [
[
"Jurman",
"Giuseppe",
""
],
[
"Filosi",
"Michele",
""
],
[
"Riccadonna",
"Samantha",
""
],
[
"Visintainer",
"Roberto",
""
],
[
"Furlanello",
"Cesare",
""
]
] | Based on the glocal HIM metric and its induced graph kernel, we propose a novel solution in differential network analysis that integrates network comparison and classification tasks. The HIM distance is defined as the one-parameter family of product metrics linearly combining the normalised Hamming distance H and the normalised Ipsen-Mikhailov spectral distance IM. The combination of the two components within a single metric allows overcoming their drawbacks and obtaining a measure that is simultaneously global and local. Furthermore, plugging the HIM kernel into a Support Vector Machine gives us a classification algorithm based on the HIM distance. First, we outline the theory underlying the metric construction. We introduce two diverse applications of the HIM distance and the HIM kernel to biological datasets. This versatility supports the adoption of the HIM family as a general tool for information extraction, quantifying differences among diverse instances of a complex system. An Open Source implementation of the HIM metrics is provided by the R package nettools and its web interface ReNette.
1706.07555 | Chengxu Zhuang | Chengxu Zhuang, Jonas Kubilius, Mitra Hartmann, Daniel Yamins | Toward Goal-Driven Neural Network Models for the Rodent
Whisker-Trigeminal System | 17 pages including supplementary information, 8 figures | null | null | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In large part, rodents see the world through their whiskers, a powerful
tactile sense enabled by a series of brain areas that form the
whisker-trigeminal system. Raw sensory data arrives in the form of mechanical
input to the exquisitely sensitive, actively-controllable whisker array, and is
processed through a sequence of neural circuits, eventually arriving in
cortical regions that communicate with decision-making and memory areas.
Although a long history of experimental studies has characterized many aspects
of these processing stages, the computational operations of the
whisker-trigeminal system remain largely unknown. In the present work, we take
a goal-driven deep neural network (DNN) approach to modeling these
computations. First, we construct a biophysically-realistic model of the rat
whisker array. We then generate a large dataset of whisker sweeps across a wide
variety of 3D objects in highly-varying poses, angles, and speeds. Next, we
train DNNs from several distinct architectural families to solve a shape
recognition task in this dataset. Each architectural family represents a
structurally-distinct hypothesis for processing in the whisker-trigeminal
system, corresponding to different ways in which spatial and temporal
information can be integrated. We find that most networks perform poorly on the
challenging shape recognition task, but that specific architectures from
several families can achieve reasonable performance levels. Finally, we show
that Representational Dissimilarity Matrices (RDMs), a tool for comparing
population codes between neural systems, can separate these higher-performing
networks with data of a type that could plausibly be collected in a
neurophysiological or imaging experiment. Our results are a proof-of-concept
that goal-driven DNN networks of the whisker-trigeminal system are potentially
within reach.
| [
{
"created": "Fri, 23 Jun 2017 03:34:03 GMT",
"version": "v1"
}
] | 2017-06-27 | [
[
"Zhuang",
"Chengxu",
""
],
[
"Kubilius",
"Jonas",
""
],
[
"Hartmann",
"Mitra",
""
],
[
"Yamins",
"Daniel",
""
]
] | In large part, rodents see the world through their whiskers, a powerful tactile sense enabled by a series of brain areas that form the whisker-trigeminal system. Raw sensory data arrives in the form of mechanical input to the exquisitely sensitive, actively-controllable whisker array, and is processed through a sequence of neural circuits, eventually arriving in cortical regions that communicate with decision-making and memory areas. Although a long history of experimental studies has characterized many aspects of these processing stages, the computational operations of the whisker-trigeminal system remain largely unknown. In the present work, we take a goal-driven deep neural network (DNN) approach to modeling these computations. First, we construct a biophysically-realistic model of the rat whisker array. We then generate a large dataset of whisker sweeps across a wide variety of 3D objects in highly-varying poses, angles, and speeds. Next, we train DNNs from several distinct architectural families to solve a shape recognition task in this dataset. Each architectural family represents a structurally-distinct hypothesis for processing in the whisker-trigeminal system, corresponding to different ways in which spatial and temporal information can be integrated. We find that most networks perform poorly on the challenging shape recognition task, but that specific architectures from several families can achieve reasonable performance levels. Finally, we show that Representational Dissimilarity Matrices (RDMs), a tool for comparing population codes between neural systems, can separate these higher-performing networks with data of a type that could plausibly be collected in a neurophysiological or imaging experiment. Our results are a proof-of-concept that goal-driven DNN networks of the whisker-trigeminal system are potentially within reach. |
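The RDM comparison mentioned at the end of the abstract above has a standard, compact form: for a (stimuli x units) matrix of population responses, the dissimilarity between two stimuli is one minus the Pearson correlation of their response patterns. A minimal sketch (correlation distance is an assumption here; other dissimilarity measures are also in use):

```python
import numpy as np

def rdm(responses):
    """Representational Dissimilarity Matrix: for a (stimuli x units)
    response matrix, entry (i, j) is 1 minus the Pearson correlation
    between the population responses to stimuli i and j."""
    return 1.0 - np.corrcoef(responses)
```

Two systems (e.g. a network layer and a recorded neural population) are then compared by correlating the off-diagonal entries of their RDMs, which sidesteps any unit-to-unit mapping between the systems.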
1911.10252 | Shubham Tripathi | Shubham Tripathi, David A. Kessler, and Herbert Levine | Biological Regulatory Networks are Minimally Frustrated | 5 pages, 4 figures | Phys. Rev. Lett. 125, 088101 (2020) | 10.1103/PhysRevLett.125.088101 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Characterization of the differences between biological and random networks
can reveal the design principles that enable the robust realization of crucial
biological functions including the establishment of different cell types.
Previous studies, focusing on identifying topological features that are present
in biological networks but not in random networks, have, however, provided few
functional insights. We use a Boolean modeling framework and ideas from spin
glass literature to identify functional differences between five real
biological networks and random networks with similar topological features. We
show that minimal frustration is a fundamental property that allows biological
networks to robustly establish cell types and regulate cell fate choice, and
this property can emerge in complex networks via Darwinian evolution. The study
also provides clues regarding how the regulation of cell fate choice can go
awry in a disease like cancer and lead to the emergence of aberrant cell types.
| [
{
"created": "Fri, 22 Nov 2019 21:16:00 GMT",
"version": "v1"
}
] | 2020-08-26 | [
[
"Tripathi",
"Shubham",
""
],
[
"Kessler",
"David A.",
""
],
[
"Levine",
"Herbert",
""
]
] | Characterization of the differences between biological and random networks can reveal the design principles that enable the robust realization of crucial biological functions including the establishment of different cell types. Previous studies, focusing on identifying topological features that are present in biological networks but not in random networks, have, however, provided few functional insights. We use a Boolean modeling framework and ideas from spin glass literature to identify functional differences between five real biological networks and random networks with similar topological features. We show that minimal frustration is a fundamental property that allows biological networks to robustly establish cell types and regulate cell fate choice, and this property can emerge in complex networks via Darwinian evolution. The study also provides clues regarding how the regulation of cell fate choice can go awry in a disease like cancer and lead to the emergence of aberrant cell types. |
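Frustration of a spin state on a signed regulatory network, as invoked in the abstract above, has a simple operational form: an interaction J_ij is frustrated by state s when J_ij * s_i * s_j < 0. A minimal sketch of this count (symmetric couplings assumed; the paper's Boolean framework is richer than this):

```python
import numpy as np

def frustration(J, s):
    """Fraction of (undirected) interactions frustrated by spin state s
    on the signed coupling matrix J: edge (i, j) is frustrated when
    J_ij * s_i * s_j < 0, i.e. the state violates that edge's sign."""
    r, c = np.triu_indices_from(J, k=1)
    present = J[r, c] != 0            # only count actual edges
    prod = J[r, c] * s[r] * s[c]
    return np.sum(prod[present] < 0) / present.sum()
```

A "minimally frustrated" state is one that drives this fraction as low as the network's sign structure allows; an all-negative triangle, for instance, cannot go below 1/3.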
2009.09911 | Weitao Sun | Daniel H. Tao and Weitao Sun | Are mouse and cat the missing link in the COVID-19 outbreaks in seafood
markets? | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) caused the
novel coronavirus disease 2019 (COVID-19), affecting the whole world. Like
SARS-CoV and MERS-CoV, SARS-CoV-2 is thought to originate in bats and then
spread to humans through intermediate hosts. Identifying intermediate host
species is critical to understanding the evolution and transmission mechanisms
of COVID-19. However, determining which animals are intermediate hosts remains
a key challenge. Virus host-genome similarity (HGS) is an important factor that
reflects the adaptability of virus to host. SARS-CoV-2 may retain beneficial
mutations to increase HGS and evade the host immune system. This study
investigated the HGSs between 399 SARS-CoV-2 strains and 10 hosts of different
species, including bat, mouse, cat, swine, snake, dog, pangolin, chicken, human
and monkey. The results showed that the HGS between SARS-CoV-2 and bat was the
highest, followed by mouse and cat. Human and monkey had the lowest HGS values.
In terms of genetic similarity, mouse and monkey are halfway between bat and
human. Moreover, given that COVID-19 outbreaks tend to be associated with live
poultry and seafood markets, mouse and cat are more likely sources of infection
in these places. However, more experimental data are needed to confirm whether
mouse and cat are true intermediate hosts. These findings suggest that animals
closely related to human life, especially those with high HGS, need to be
closely monitored.
| [
{
"created": "Fri, 18 Sep 2020 10:23:23 GMT",
"version": "v1"
}
] | 2020-09-22 | [
[
"Tao",
"Daniel H.",
""
],
[
"Sun",
"Weitao",
""
]
] | Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) caused the novel coronavirus disease 2019 (COVID-19), affecting the whole world. Like SARS-CoV and MERS-CoV, SARS-CoV-2 is thought to originate in bats and then spread to humans through intermediate hosts. Identifying intermediate host species is critical to understanding the evolution and transmission mechanisms of COVID-19. However, determining which animals are intermediate hosts remains a key challenge. Virus host-genome similarity (HGS) is an important factor that reflects the adaptability of virus to host. SARS-CoV-2 may retain beneficial mutations to increase HGS and evade the host immune system. This study investigated the HGSs between 399 SARS-CoV-2 strains and 10 hosts of different species, including bat, mouse, cat, swine, snake, dog, pangolin, chicken, human and monkey. The results showed that the HGS between SARS-CoV-2 and bat was the highest, followed by mouse and cat. Human and monkey had the lowest HGS values. In terms of genetic similarity, mouse and monkey are halfway between bat and human. Moreover, given that COVID-19 outbreaks tend to be associated with live poultry and seafood markets, mouse and cat are more likely sources of infection in these places. However, more experimental data are needed to confirm whether mouse and cat are true intermediate hosts. These findings suggest that animals closely related to human life, especially those with high HGS, need to be closely monitored. |
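The abstract above does not define how HGS is computed, so the following is only a hypothetical stand-in: an alignment-free k-mer Jaccard similarity, one of the crudest sequence-similarity proxies. The function names and the choice of k are illustrative and are not the paper's method.

```python
def kmer_set(seq, k=4):
    """All overlapping k-mers of a nucleotide string."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_jaccard(seq_a, seq_b, k=4):
    """Jaccard similarity of two sequences' k-mer sets: a crude,
    alignment-free similarity proxy (hypothetical stand-in; NOT the
    HGS measure, whose definition is not given in the abstract)."""
    a, b = kmer_set(seq_a, k), kmer_set(seq_b, k)
    return len(a & b) / len(a | b)
```

Ranking candidate hosts by any such score against a set of viral genomes would mirror the comparison structure of the study, even though the actual similarity measure differs.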
1709.05264 | Nicholas Battista | Nicholas A. Battista and Laura A. Miller | Bifurcations in valveless pumping techniques from a coupled
fluid-structure-electrophysiology model in heart development | 11 pages, 13 figures. arXiv admin note: text overlap with
arXiv:1610.03427 | null | 10.11145/j.biomath.2017.11.297 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore an embryonic heart model that couples electrophysiology and
muscle-force generation to flow induced using a $2D$ fluid-structure
interaction framework based on the immersed boundary method. The propagation of
action potentials is coupled to muscular contraction and hence the overall
pumping dynamics. In comparison to previous models, the electro-dynamical model
does not use prescribed motion to initiate the pumping motion, but rather the
pumping dynamics are fully coupled to an underlying electrophysiology model,
governed by the FitzHugh-Nagumo equations. Perturbing the diffusion parameter
in the FitzHugh-Nagumo model leads to a bifurcation in dynamics of action
potential propagation. This bifurcation is able to capture a spectrum of
different pumping regimes, with dynamic suction pumping and peristaltic-like
pumping at the extremes. We find that more bulk flow is produced within the
realm of peristaltic-like pumping.
| [
{
"created": "Thu, 14 Sep 2017 17:05:08 GMT",
"version": "v1"
}
] | 2018-09-19 | [
[
"Battista",
"Nicholas A.",
""
],
[
"Miller",
"Laura A.",
""
]
] | We explore an embryonic heart model that couples electrophysiology and muscle-force generation to flow induced using a $2D$ fluid-structure interaction framework based on the immersed boundary method. The propagation of action potentials is coupled to muscular contraction and hence the overall pumping dynamics. In comparison to previous models, the electro-dynamical model does not use prescribed motion to initiate the pumping motion, but rather the pumping dynamics are fully coupled to an underlying electrophysiology model, governed by the FitzHugh-Nagumo equations. Perturbing the diffusion parameter in the FitzHugh-Nagumo model leads to a bifurcation in dynamics of action potential propagation. This bifurcation is able to capture a spectrum of different pumping regimes, with dynamic suction pumping and peristaltic-like pumping at the extremes. We find that more bulk flow is produced within the realm of peristaltic-like pumping. |
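The electrophysiology component named in the abstract above, the FitzHugh-Nagumo system, is easy to simulate in its space-clamped (zero-dimensional) form; the paper's full model additionally couples a diffusive FHN field to an immersed-boundary fluid solver, which is omitted here. A forward-Euler sketch with conventional parameter values (a = 0.7, b = 0.8, eps = 0.08 are standard textbook choices, not taken from the paper):

```python
import numpy as np

def fitzhugh_nagumo(v0=-1.0, w0=1.0, I=0.5, a=0.7, b=0.8, eps=0.08,
                    dt=0.01, steps=30000):
    """Forward-Euler integration of the space-clamped FitzHugh-Nagumo
    system  dv/dt = v - v^3/3 - w + I,  dw/dt = eps*(v + a - b*w).
    For this drive I the rest state is unstable and the model settles
    onto a relaxation limit cycle (repeated 'action potentials')."""
    v, w = v0, w0
    trace = np.empty(steps)
    for t in range(steps):
        v, w = (v + dt * (v - v**3 / 3.0 - w + I),
                w + dt * eps * (v + a - b * w))
        trace[t] = v
    return trace
```

With I = 0.5 the fixed point sits on the unstable middle branch of the cubic nullcline, so the voltage oscillates between roughly -2 and 2.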
1902.03016 | Eduardo Henrique Colombo | Eduardo H. Colombo, Ricardo Mart\'inez-Garc\'ia, Crist\'obal L\'opez,
Emilio Hern\'andez-Garc\'ia | Spatial eco-evolutionary feedbacks mediate coexistence in prey-predator
systems | null | null | 10.1038/s41598-019-54510-6 | null | q-bio.PE cond-mat.stat-mech nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Eco-evolutionary frameworks can explain certain features of communities in
which ecological and evolutionary processes occur over comparable timescales.
Here, we investigate whether an evolutionary dynamics may interact with the
spatial structure of a prey-predator community in which both species show
limited mobility and predator perceptual ranges are subject to natural
selection. In these conditions, our results unveil an eco-evolutionary feedback
between species spatial mixing and predators perceptual range: different levels
of mixing select for different perceptual ranges, which in turn reshape the
spatial distribution of prey and its interaction with predators. This emergent
pattern of interspecific interactions feeds back to the efficiency of the
various perceptual ranges, thus selecting for new ones. Finally, since
prey-predator mixing is the key factor that regulates the intensity of
predation, we explore the community-level implications of such feedback and
show that it controls both coexistence times and species extinction
probabilities.
| [
{
"created": "Fri, 8 Feb 2019 10:51:22 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Nov 2019 13:26:44 GMT",
"version": "v2"
}
] | 2019-12-04 | [
[
"Colombo",
"Eduardo H.",
""
],
[
"Martínez-García",
"Ricardo",
""
],
[
"López",
"Cristóbal",
""
],
[
"Hernández-García",
"Emilio",
""
]
] | Eco-evolutionary frameworks can explain certain features of communities in which ecological and evolutionary processes occur over comparable timescales. Here, we investigate whether an evolutionary dynamics may interact with the spatial structure of a prey-predator community in which both species show limited mobility and predator perceptual ranges are subject to natural selection. In these conditions, our results unveil an eco-evolutionary feedback between species spatial mixing and predators perceptual range: different levels of mixing select for different perceptual ranges, which in turn reshape the spatial distribution of prey and its interaction with predators. This emergent pattern of interspecific interactions feeds back to the efficiency of the various perceptual ranges, thus selecting for new ones. Finally, since prey-predator mixing is the key factor that regulates the intensity of predation, we explore the community-level implications of such feedback and show that it controls both coexistence times and species extinction probabilities. |
2104.09218 | Nikita Novikov | Nikita Novikov, Denis Zakharov, Victoria Moiseeva and Boris Gutkin | Activity stabilization in a population model of working memory by
sinusoidal and noisy inputs | 35 pages, 10 figures, to be published in Frontiers in Neural Circuits | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | According to mechanistic theories of working memory (WM), information is
retained as persistent spiking activity of cortical neural networks. Yet, how
this activity is related to changes in the oscillatory profile observed during
WM tasks remains an open issue. We explore joint effects of input gamma-band
oscillations and noise on the dynamics of several firing rate models of WM. The
considered models have a metastable active regime, i.e. they demonstrate
long-lasting transient post-stimulus firing rate elevation. We start from a
single excitatory-inhibitory circuit and demonstrate that either gamma-band or
noise input could stabilize the active regime, thus supporting WM retention. We
then consider a system of two circuits with excitatory intercoupling. We find
that fast coupling allows for better stabilization by common noise compared to
independent noise and stronger amplification of this effect by in-phase gamma
inputs compared to anti-phase inputs. Finally, we consider a multi-circuit
system comprised of two clusters, each containing a group of circuits receiving
a common noise input and a group of circuits receiving independent noise. Each
cluster is associated with its own local gamma generator, so all its circuits
receive gamma-band input in the same phase. We find that gamma-band input
differentially stabilizes the activity of the "common-noise" groups compared to
the "independent-noise" groups. If the inter-cluster connections are fast, this
effect is more pronounced when the gamma-band input is delivered to the
clusters in the same phase rather than in the anti-phase. Assuming that the
common noise comes from a large-scale distributed WM representation, our
results demonstrate that local gamma oscillations can stabilize the activity of
the corresponding parts of this representation, with stronger effect for fast
long-range connections and synchronized gamma oscillations.
| [
{
"created": "Mon, 19 Apr 2021 11:30:48 GMT",
"version": "v1"
}
] | 2021-04-20 | [
[
"Novikov",
"Nikita",
""
],
[
"Zakharov",
"Denis",
""
],
[
"Moiseeva",
"Victoria",
""
],
[
"Gutkin",
"Boris",
""
]
] | According to mechanistic theories of working memory (WM), information is retained as persistent spiking activity of cortical neural networks. Yet, how this activity is related to changes in the oscillatory profile observed during WM tasks remains an open issue. We explore joint effects of input gamma-band oscillations and noise on the dynamics of several firing rate models of WM. The considered models have a metastable active regime, i.e. they demonstrate long-lasting transient post-stimulus firing rate elevation. We start from a single excitatory-inhibitory circuit and demonstrate that either gamma-band or noise input could stabilize the active regime, thus supporting WM retention. We then consider a system of two circuits with excitatory intercoupling. We find that fast coupling allows for better stabilization by common noise compared to independent noise and stronger amplification of this effect by in-phase gamma inputs compared to anti-phase inputs. Finally, we consider a multi-circuit system comprised of two clusters, each containing a group of circuits receiving a common noise input and a group of circuits receiving independent noise. Each cluster is associated with its own local gamma generator, so all its circuits receive gamma-band input in the same phase. We find that gamma-band input differentially stabilizes the activity of the "common-noise" groups compared to the "independent-noise" groups. If the inter-cluster connections are fast, this effect is more pronounced when the gamma-band input is delivered to the clusters in the same phase rather than in the anti-phase. Assuming that the common noise comes from a large-scale distributed WM representation, our results demonstrate that local gamma oscillations can stabilize the activity of the corresponding parts of this representation, with stronger effect for fast long-range connections and synchronized gamma oscillations. |
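The "metastable active regime" of the abstract above can be illustrated with the simplest possible case: a single population with a steep sigmoid gain, which is bistable, so a transient input switches it into a persistent elevated-rate state. All parameter values below are illustrative and are not taken from the paper:

```python
import numpy as np

def simulate_rate(r0, I=0.0, w=1.0, theta=0.5, beta=0.05,
                  tau=10.0, dt=0.1, steps=2000):
    """Single-population firing-rate model  tau*dr/dt = -r + f(w*r + I)
    with a steep sigmoid f; for these parameters the model is bistable,
    so a transient stimulus can switch it into a persistent 'active'
    state, the metastable working-memory regime referred to above."""
    f = lambda x: 1.0 / (1.0 + np.exp(-(x - theta) / beta))
    r = r0
    for _ in range(steps):
        r += dt / tau * (-r + f(w * r + I))
    return r
```

Starting from rest with no input the rate stays near zero; a stimulus I drives it up, and when the stimulus is removed the recurrent drive w*r keeps it high, which is the persistent-activity picture of retention.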
q-bio/0501015 | Akira Kinjo | Akira R. Kinjo and Ken Nishikawa | Predicting Residue-wise Contact Orders of Native Protein Structure from
Amino Acid Sequence | 22 pages, 5 figures, 2 tables, manuscript submitted | null | null | null | q-bio.BM | null | Residue-wise contact order (RWCO) is a new kind of one-dimensional protein
structure that represents the extent of long-range contacts. We have recently
shown that a set of three types of one-dimensional structures (secondary
structure, contact number, and RWCO) contains sufficient information for
reconstructing the three-dimensional structure of proteins. Currently, there
exist prediction methods for secondary structure and contact number from amino
acid sequence, but none exists for RWCO. Also, the properties of amino acids
that affect RWCO are not clearly understood. Here, we present a linear
regression-based method to predict RWCO from amino acid sequence, and analyze
the regression parameters to identify the properties that correlate with the
RWCO. The present method achieves a significant correlation of 0.59 between
the native and predicted RWCOs on average. An unusual feature of the RWCO
prediction is the remarkably large optimal half window size of 26 residues. The
regression parameters for the central and near-central residues of the local
sequence segment highly correlate with those of the contact number prediction,
and hence with hydrophobicity.
| [
{
"created": "Wed, 12 Jan 2005 02:01:02 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Kinjo",
"Akira R.",
""
],
[
"Nishikawa",
"Ken",
""
]
] | Residue-wise contact order (RWCO) is a new kind of one-dimensional protein structure that represents the extent of long-range contacts. We have recently shown that a set of three types of one-dimensional structures (secondary structure, contact number, and RWCO) contains sufficient information for reconstructing the three-dimensional structure of proteins. Currently, there exist prediction methods for secondary structure and contact number from amino acid sequence, but none exists for RWCO. Also, the properties of amino acids that affect RWCO are not clearly understood. Here, we present a linear regression-based method to predict RWCO from amino acid sequence, and analyze the regression parameters to identify the properties that correlate with the RWCO. The present method achieves a significant correlation of 0.59 between the native and predicted RWCOs on average. An unusual feature of the RWCO prediction is the remarkably large optimal half window size of 26 residues. The regression parameters for the central and near-central residues of the local sequence segment highly correlate with those of the contact number prediction, and hence with hydrophobicity. |
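The window-based linear regression described above can be sketched end-to-end on synthetic data: one-hot encode a sliding sequence window around each residue and fit ordinary least squares. The targets below are synthetic stand-ins for RWCO values (real training would use structure-derived data), and the window half-width of 5 is illustrative, far smaller than the paper's optimal 26:

```python
import numpy as np

AA = "ACDEFGHIKLMNPQRSTVWY"

def encode_windows(seq, half_w):
    """One-hot encode the sliding window of 2*half_w+1 residues around
    each position (zero padding beyond the termini)."""
    n, d = len(seq), len(AA)
    X = np.zeros((n, (2 * half_w + 1) * d))
    for i in range(n):
        for k, j in enumerate(range(i - half_w, i + half_w + 1)):
            if 0 <= j < n:
                X[i, k * d + AA.index(seq[j])] = 1.0
    return X

# Synthetic demonstration: targets are an exact linear function of the
# window, standing in for RWCO values derived from known structures.
rng = np.random.default_rng(0)
seq = "".join(rng.choice(list(AA), size=200))
X = encode_windows(seq, half_w=5)
y = X @ rng.normal(size=X.shape[1])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coef
```

Because the synthetic targets lie exactly in the column space of X, least squares recovers them perfectly; with real RWCO targets the fit is of course imperfect, which is where the reported 0.59 correlation comes in.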
1409.0724 | Mansour Taghavi Azar Sharabiani | Mansour Taghavi Azar Sharabiani | Optimizing Metabonomic Spectral Replacement | 5 pages, 2 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metabonomics, the measure of the fingerprint of biochemical perturbations
caused by disease, drugs or toxins, recently has become a major focus of
research in various areas, especially indications of drug toxicity. Two types of
technology (known by the initials NMR and MS) are employed and both produce
massive data in the form of spectra. Sophisticated statistical models, known as
pattern recognition techniques, are commonly applied for summarizing and
analyzing these multidimensional data. However, strong signals from compounds
that are administered during toxicological trials interfere with these models.
So-called 'spectral replacement' is a method to eliminate these signals by
replacing them with the signals in the corresponding regions of a control
spectrum. The replaced regions are subsequently scaled. However, this scaling
is not accurately measured and often results in overestimation of integrated
intensity of the replaced signals. Here, a novel protocol is proposed which
provides an accurate estimation of the replaced regions.
| [
{
"created": "Tue, 2 Sep 2014 14:24:52 GMT",
"version": "v1"
}
] | 2014-09-03 | [
[
"Sharabiani",
"Mansour Taghavi Azar",
""
]
] | Metabonomics, the measure of the fingerprint of biochemical perturbations caused by disease, drugs or toxins, recently has become a major focus of research in various areas, especially indications of drug toxicity. Two types of technology (known by the initials NMR and MS) are employed and both produce massive data in the form of spectra. Sophisticated statistical models, known as pattern recognition techniques, are commonly applied for summarizing and analyzing these multidimensional data. However, strong signals from compounds that are administered during toxicological trials interfere with these models. So-called 'spectral replacement' is a method to eliminate these signals by replacing them with the signals in the corresponding regions of a control spectrum. The replaced regions are subsequently scaled. However, this scaling is not accurately measured and often results in overestimation of integrated intensity of the replaced signals. Here, a novel protocol is proposed which provides an accurate estimation of the replaced regions. |
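The spectral-replacement operation described above reduces to splicing a control region into the spectrum and rescaling it. The scaling rule below (matching integrated intensity outside the replaced region) is a simple assumed normalisation for illustration, not the paper's improved estimator:

```python
import numpy as np

def replace_region(spec, control, lo, hi):
    """Splice the control spectrum's [lo, hi) region into `spec`,
    scaling the spliced piece by the ratio of the two spectra's
    integrated intensity OUTSIDE the region (an assumed normalisation;
    the paper proposes a more accurate estimate of this factor)."""
    out = spec.copy()
    mask = np.ones_like(spec, dtype=bool)
    mask[lo:hi] = False
    scale = spec[mask].sum() / control[mask].sum()
    out[lo:hi] = control[lo:hi] * scale
    return out
```

In the toy case below, a drug peak in bins 3-4 is removed and the splice inherits the overall intensity level of the treated spectrum.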
0804.3224 | Anne Taormina | Natasha Jonoska, Anne Taormina and Reidun Twarock | DNA cages with icosahedral symmetry in bionanotechnology | Article contributed to `Algorithmic Bioprocesses', A. Condon, D.
Harel, J. N. Kok, A. Salomaa, and E. Winfree Editors, Springer Verlag, 2008 | Algorithmic Bioprocesses Natural Computing Series 2009, pp 141-158 | 10.1007/978-3-540-88869-7_9 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Blueprints of polyhedral cages with icosahedral symmetry made of circular DNA
molecules are provided. The basic rule is that every edge of the cage is met
twice in opposite directions by the DNA strand, and vertex junctions are
realised by a set of admissible junction types. As nanocontainers for cargo
storage and delivery, the icosidodecahedral cages are of special interest as
they have the largest volume per surface ratio of all cages discussed here.
| [
{
"created": "Mon, 21 Apr 2008 00:31:14 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Jonoska",
"Natasha",
""
],
[
"Taormina",
"Anne",
""
],
[
"Twarock",
"Reidun",
""
]
] | Blueprints of polyhedral cages with icosahedral symmetry made of circular DNA molecules are provided. The basic rule is that every edge of the cage is met twice in opposite directions by the DNA strand, and vertex junctions are realised by a set of admissible junction types. As nanocontainers for cargo storage and delivery, the icosidodecahedral cages are of special interest as they have the largest volume per surface ratio of all cages discussed here. |
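The closing claim above about volume-per-surface ratio can be checked with standard solid-geometry formulas. Comparing at equal edge length a (an assumption; the comparison could also be made at equal circumradius), the icosidodecahedron indeed beats, for example, the icosahedron:

```python
from math import sqrt

def icosidodecahedron_ratio(a=1.0):
    """Volume-to-surface ratio of an icosidodecahedron with edge a:
    V = (45 + 17*sqrt(5))/6 * a^3,
    A = (5*sqrt(3) + 3*sqrt(25 + 10*sqrt(5))) * a^2
    (20 equilateral triangles plus 12 regular pentagons)."""
    V = (45 + 17 * sqrt(5)) / 6 * a**3
    A = (5 * sqrt(3) + 3 * sqrt(25 + 10 * sqrt(5))) * a**2
    return V / A

def icosahedron_ratio(a=1.0):
    """Volume-to-surface ratio of a regular icosahedron with edge a:
    V = 5*(3 + sqrt(5))/12 * a^3,  A = 5*sqrt(3) * a^2."""
    V = 5 * (3 + sqrt(5)) / 12 * a**3
    A = 5 * sqrt(3) * a**2
    return V / A
```

At unit edge the icosidodecahedron's ratio is about 0.472 versus about 0.252 for the icosahedron, consistent with its appeal as a cargo container.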
1807.08400 | Alicia Dickenstein | Magal\'i Giaroli, Fr\'ed\'eric Bihan and Alicia Dickenstein | Regions of multistationarity in cascades of Goldbeter-Koshland loops | 22 pages, 7 figures | null | null | null | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider cascades of enzymatic Goldbeter-Koshland loops with any number n
of layers, for which there exist two layers involving the same phosphatase.
Even if the number of variables and the number of conservation laws grow
linearly with n, we find explicit regions in reaction rate constant and total
conservation constant space for which the associated mass-action kinetics
dynamical system is multistationary. Our computations are based on the
theoretical results of our companion paper in arXiv:1807.05157, which are
inspired by results in real algebraic geometry by Bihan, Santos and
Spaenlehauer.
| [
{
"created": "Mon, 23 Jul 2018 01:49:00 GMT",
"version": "v1"
}
] | 2018-07-24 | [
[
"Giaroli",
"Magalí",
""
],
[
"Bihan",
"Frédéric",
""
],
[
"Dickenstein",
"Alicia",
""
]
] | We consider cascades of enzymatic Goldbeter-Koshland loops with any number n of layers, for which there exist two layers involving the same phosphatase. Even if the number of variables and the number of conservation laws grow linearly with n, we find explicit regions in reaction rate constant and total conservation constant space for which the associated mass-action kinetics dynamical system is multistationary. Our computations are based on the theoretical results of our companion paper in arXiv:1807.05157, which are inspired by results in real algebraic geometry by Bihan, Santos and Spaenlehauer. |
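A single Goldbeter-Koshland loop, in its Michaelis-Menten reduction, already shows the switch-like (zero-order ultrasensitive) response that makes such cascades interesting. The paper itself works with the full mass-action system, so the sketch below is only the classical reduced picture:

```python
def gk_steady_state(v1, v2, J1=0.01, J2=0.01, tol=1e-12):
    """Steady-state phosphorylated fraction x* of one Goldbeter-
    Koshland loop, found by bisection on the net rate
    g(x) = v1*(1-x)/(J1+1-x) - v2*x/(J2+x)
    (kinase minus phosphatase rate, Michaelis-Menten in both
    directions; g is strictly decreasing, so the root is unique)."""
    g = lambda x: v1 * (1 - x) / (J1 + 1 - x) - v2 * x / (J2 + x)
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With small Michaelis constants J1, J2 the steady state flips sharply from near 0 to near 1 as the kinase/phosphatase ratio v1/v2 crosses 1, the zero-order ultrasensitivity that cascading loops compound.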
1301.6375 | Dervis Vural | Dervis Can Vural, Greg Morrison, L. Mahadevan | Increased Network Interdependency Leads to Aging | 11 pages, 10 figures, Preprint | null | null | null | q-bio.PE physics.bio-ph q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although species longevity is subject to a diverse range of selective forces,
the mortality curves of a wide variety of organisms are rather similar. We
argue that aging and its universal characteristics may have evolved by means of
a gradual increase in the systemic interdependence between a large collection
of biochemical or mechanical components. Modeling the organism as a dependency
network which we create using a constructive evolutionary process, we age it by
allowing nodes to be broken or repaired according to a probabilistic algorithm
that accounts for random failures/repairs and dependencies. Our simulations
show that the network slowly accumulates damage and then catastrophically
collapses. We use our simulations to fit experimental data for the time
dependent mortality rates of a variety of multicellular organisms and even
complex machines such as automobiles. Our study suggests that aging is an
emergent finite-size effect in networks with dynamical dependencies and that
the qualitative and quantitative features of aging are not sensitively
dependent on the details of system structure.
| [
{
"created": "Sun, 27 Jan 2013 16:58:31 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Jan 2013 16:57:31 GMT",
"version": "v2"
},
{
"created": "Thu, 31 Jan 2013 14:15:07 GMT",
"version": "v3"
}
] | 2013-02-01 | [
[
"Vural",
"Dervis Can",
""
],
[
"Morrison",
"Greg",
""
],
[
"Mahadevan",
"L.",
""
]
] | Although species longevity is subject to a diverse range of selective forces, the mortality curves of a wide variety of organisms are rather similar. We argue that aging and its universal characteristics may have evolved by means of a gradual increase in the systemic interdependence between a large collection of biochemical or mechanical components. Modeling the organism as a dependency network which we create using a constructive evolutionary process, we age it by allowing nodes to be broken or repaired according to a probabilistic algorithm that accounts for random failures/repairs and dependencies. Our simulations show that the network slowly accumulates damage and then catastrophically collapses. We use our simulations to fit experimental data for the time dependent mortality rates of a variety of multicellular organisms and even complex machines such as automobiles. Our study suggests that aging is an emergent finite-size effect in networks with dynamical dependencies and that the qualitative and quantitative features of aging are not sensitively dependent on the details of system structure. |
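The damage dynamics described above can be caricatured in a few lines: nodes fail at random or once too many of their dependencies have failed, and the alive fraction decays slowly before cascading downward. This omits the paper's repair process and its constructive network growth; the dependency count, rates, and majority threshold below are illustrative.

```python
import random

def age_network(n=500, k=5, p_damage=0.005, steps=400, seed=1):
    """Toy damage dynamics on a random dependency network: each node
    depends on k random others; a live node fails at random with
    probability p_damage per step, or deterministically once a strict
    majority of its dependencies are dead.  Returns the alive fraction
    per step.  (Repair and constructive network growth, both present
    in the paper's model, are omitted here.)"""
    rng = random.Random(seed)
    deps = [rng.sample([j for j in range(n) if j != i], k)
            for i in range(n)]
    alive = [True] * n
    history = []
    for _ in range(steps):
        for i in range(n):
            if alive[i] and (rng.random() < p_damage or
                             sum(not alive[j] for j in deps[i]) > k / 2):
                alive[i] = False
        history.append(sum(alive) / n)
    return history
```

Because nodes never recover in this sketch, the survival curve is monotone; the dependency cascades are what bend it from slow attrition into collapse.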
1804.09739 | Anna Kutschireiter | Anna Kutschireiter, Jean-Pascal Pfister | Particle-filtering approaches for nonlinear Bayesian decoding of
neuronal spike trains | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The number of neurons that can be simultaneously recorded doubles every seven
years. This ever-increasing number of recorded neurons opens up the possibility
to address new questions and extract higher dimensional stimuli from the
recordings. Modeling neural spike trains as point processes, this task of
extracting dynamical signals from spike trains is commonly set in the context
of nonlinear filtering theory. Particle filter methods relying on importance
weights are generic algorithms that solve the filtering task numerically, but
exhibit a serious drawback when the problem dimensionality is high: they are
known to suffer from the 'curse of dimensionality' (COD), i.e. the number of
particles required for a certain performance scales exponentially with the
observable dimensions. Here, we first briefly review the theory on filtering
with point process observations in continuous time. Based on this theory, we
investigate both analytically and numerically the reason for the COD of
weighted particle filtering approaches: Similarly to particle filtering with
continuous-time observations, the COD with point-process observations is due to
the decay of effective number of particles, an effect that is stronger when the
number of observable dimensions increases. Given the success of unweighted
particle filtering approaches in overcoming the COD for continuous-time
observations, we introduce an unweighted particle filter for point-process
observations, the spike-based Neural Particle Filter (sNPF), and show that it
exhibits a similar favorable scaling as the number of dimensions grows.
Further, we derive rules for the parameters of the sNPF from a maximum
likelihood learning approach. We finally employ a simple decoding task to
illustrate the capabilities of the sNPF and to highlight one possible future
application of our inference and learning algorithm.
| [
{
"created": "Wed, 25 Apr 2018 18:20:15 GMT",
"version": "v1"
}
] | 2018-04-27 | [
[
"Kutschireiter",
"Anna",
""
],
[
"Pfister",
"Jean-Pascal",
""
]
] | The number of neurons that can be simultaneously recorded doubles every seven years. This ever-increasing number of recorded neurons opens up the possibility to address new questions and extract higher dimensional stimuli from the recordings. Modeling neural spike trains as point processes, this task of extracting dynamical signals from spike trains is commonly set in the context of nonlinear filtering theory. Particle filter methods relying on importance weights are generic algorithms that solve the filtering task numerically, but exhibit a serious drawback when the problem dimensionality is high: they are known to suffer from the 'curse of dimensionality' (COD), i.e. the number of particles required for a certain performance scales exponentially with the observable dimensions. Here, we first briefly review the theory on filtering with point process observations in continuous time. Based on this theory, we investigate both analytically and numerically the reason for the COD of weighted particle filtering approaches: Similarly to particle filtering with continuous-time observations, the COD with point-process observations is due to the decay of effective number of particles, an effect that is stronger when the number of observable dimensions increases. Given the success of unweighted particle filtering approaches in overcoming the COD for continuous-time observations, we introduce an unweighted particle filter for point-process observations, the spike-based Neural Particle Filter (sNPF), and show that it exhibits a similar favorable scaling as the number of dimensions grows. Further, we derive rules for the parameters of the sNPF from a maximum likelihood learning approach. We finally employ a simple decoding task to illustrate the capabilities of the sNPF and to highlight one possible future application of our inference and learning algorithm. |
2403.08392 | Giuseppe Tronci | Michael Phillips, Giuseppe Tronci, Christopher M. Pask and Stephen J.
Russell | Nonwoven Reinforced Photocurable Poly(glycerol sebacate)-Based Hydrogels | 26 pages, 12 figures, 3 tables. Accepted in Polymers | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Implantable hydrogels should ideally possess mechanical properties matched to
the surrounding tissues to enable adequate mechanical function while
regeneration occurs. This can be challenging, especially when degradable
systems with high water content and hydrolysable chemical bonds are required in
anatomical sites under constant mechanical stimulation, e.g. a foot ulcer
cavity. In these circumstances, the design of hydrogel composites is a
promising strategy to provide controlled structural features and macroscopic
properties over time. To explore this strategy, the synthesis of a new
photocurable elastomeric polymer, poly(glycerol-co-sebacic acid-co-lactic
acid-co-polyethylene glycol) acrylate (PGSLPA), is investigated, along with its
processing into UV-cured hydrogels, electrospun nonwovens and fibre-reinforced
variants, without the need for a high temperature curing step or use of
hazardous solvents. The mechanical properties of bioresorbable PGSLPA hydrogels
were studied with and without electrospun nonwoven reinforcement and with
varied layered configurations, aiming to determine the effects of
microstructure on bulk compressive strength and elasticity. The nonwoven
reinforced PGSLPA hydrogels exhibited a 60 % increase in compressive strength
and an 80 % increase in elastic moduli compared to fibre-free PGSLPA samples.
Mechanical properties of the fibre-reinforced hydrogels could also be modulated
by altering the layering arrangement of the nonwoven and hydrogel phase. The
nanofibre reinforced PGSLPA hydrogels also exhibited good elastic recovery, as
evidenced by hysteresis in compression fatigue stress-strain evaluations
showing a return to original dimensions.
| [
{
"created": "Wed, 13 Mar 2024 10:11:31 GMT",
"version": "v1"
},
{
"created": "Sun, 17 Mar 2024 10:29:07 GMT",
"version": "v2"
}
] | 2024-03-19 | [
[
"Phillips",
"Michael",
""
],
[
"Tronci",
"Giuseppe",
""
],
[
"Pask",
"Christopher M.",
""
],
[
"Russell",
"Stephen J.",
""
]
] | Implantable hydrogels should ideally possess mechanical properties matched to the surrounding tissues to enable adequate mechanical function while regeneration occurs. This can be challenging, especially when degradable systems with high water content and hydrolysable chemical bonds are required in anatomical sites under constant mechanical stimulation, e.g. a foot ulcer cavity. In these circumstances, the design of hydrogel composites is a promising strategy to provide controlled structural features and macroscopic properties over time. To explore this strategy, the synthesis of a new photocurable elastomeric polymer, poly(glycerol-co-sebacic acid-co-lactic acid-co-polyethylene glycol) acrylate (PGSLPA), is investigated, along with its processing into UV-cured hydrogels, electrospun nonwovens and fibre-reinforced variants, without the need for a high temperature curing step or use of hazardous solvents. The mechanical properties of bioresorbable PGSLPA hydrogels were studied with and without electrospun nonwoven reinforcement and with varied layered configurations, aiming to determine the effects of microstructure on bulk compressive strength and elasticity. The nonwoven reinforced PGSLPA hydrogels exhibited a 60 % increase in compressive strength and an 80 % increase in elastic moduli compared to fibre-free PGSLPA samples. Mechanical properties of the fibre-reinforced hydrogels could also be modulated by altering the layering arrangement of the nonwoven and hydrogel phase. The nanofibre reinforced PGSLPA hydrogels also exhibited good elastic recovery, as evidenced by hysteresis in compression fatigue stress-strain evaluations showing a return to original dimensions. |
2202.05293 | Josinaldo Menezes | J. Menezes, B. Moura | Pattern formation and coarsening dynamics in apparent competition models | 9 pages, 8 figures | Chaos, Solitons & Fractals 157, 111903 (2022) | 10.1016/j.chaos.2022.111903 | null | q-bio.PE nlin.AO nlin.PS physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Apparent competition is an indirect interaction between species that share
natural resources without any mutual aggression but negatively affect each
other if there is a common enemy. The negative results of the apparent
competition are reflected in the species spatial segregation, which impacts the
dynamics of their populations. Performing a series of stochastic simulations,
we study a model where organisms of two prey species do not compete for space
but share a common predator. Our outcomes elucidate the central role played by
the predator in the pattern formation and coarsening dynamics in apparent
competition models. Investigating the effects of predator mortality on the
persistence of the species, we find a crossover between a curvature driven
scaling regime and a coexistence scenario. For low predator mortality, spatial
domains mainly inhabited by one type of prey arise, surrounded by interfaces
that mostly contain predators. We demonstrate that the dynamics of the
interface network are curvature driven, with coarsening that follows a scaling
law common to other nonlinear systems. The effects of the apparent competition
decrease for high predator mortality, allowing organisms of two prey species to
share a more significant fraction of the lattice. Finally, our results reveal that
predation capacity in single-prey domains influences the scaling power law that
characterises the coarsening dynamics. Our findings may be helpful to
biologists to understand the pattern formation and dynamics of biodiversity in
systems with apparent competition.
| [
{
"created": "Thu, 10 Feb 2022 19:11:34 GMT",
"version": "v1"
}
] | 2022-02-16 | [
[
"Menezes",
"J.",
""
],
[
"Moura",
"B.",
""
]
] | Apparent competition is an indirect interaction between species that share natural resources without any mutual aggression but negatively affect each other if there is a common enemy. The negative results of the apparent competition are reflected in the species spatial segregation, which impacts the dynamics of their populations. Performing a series of stochastic simulations, we study a model where organisms of two prey species do not compete for space but share a common predator. Our outcomes elucidate the central role played by the predator in the pattern formation and coarsening dynamics in apparent competition models. Investigating the effects of predator mortality on the persistence of the species, we find a crossover between a curvature driven scaling regime and a coexistence scenario. For low predator mortality, spatial domains mainly inhabited by one type of prey arise, surrounded by interfaces that mostly contain predators. We demonstrate that the dynamics of the interface network are curvature driven, with coarsening that follows a scaling law common to other nonlinear systems. The effects of the apparent competition decrease for high predator mortality, allowing organisms of two prey species to share a more significant fraction of the lattice. Finally, our results reveal that predation capacity in single-prey domains influences the scaling power law that characterises the coarsening dynamics. Our findings may be helpful to biologists to understand the pattern formation and dynamics of biodiversity in systems with apparent competition. |
2109.12901 | Suman Kumar Banik | Tuhin Subhra Roy, Mintu Nandi, Ayan Biswas, Pinaki Chaudhury and Suman
K Banik | Information transmission in a two-step cascade: Interplay of activation
and repression | 10 pages, 4 figures | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | We present an information-theoretic formalism to study signal transduction in
four architectural variants of a model two-step cascade with increasing input
population. Our results categorize these four types into two classes depending
upon the effect played out by activation and repression on mutual information,
net synergy, and signal-to-noise ratio. Within the Gaussian framework and using
the linear noise approximation, we derive the analytic expressions for these
metrics to establish their underlying relationships in terms of the biochemical
parameters. We also verify our approximations through stochastic simulations.
| [
{
"created": "Mon, 27 Sep 2021 09:37:55 GMT",
"version": "v1"
}
] | 2021-09-28 | [
[
"Roy",
"Tuhin Subhra",
""
],
[
"Nandi",
"Mintu",
""
],
[
"Biswas",
"Ayan",
""
],
[
"Chaudhury",
"Pinaki",
""
],
[
"Banik",
"Suman K",
""
]
] | We present an information-theoretic formalism to study signal transduction in four architectural variants of a model two-step cascade with increasing input population. Our results categorize these four types into two classes depending upon the effect played out by activation and repression on mutual information, net synergy, and signal-to-noise ratio. Within the Gaussian framework and using the linear noise approximation, we derive the analytic expressions for these metrics to establish their underlying relationships in terms of the biochemical parameters. We also verify our approximations through stochastic simulations. |
2309.07470 | Callum Shaw | Callum Shaw, Angus McLure and Kathryn Glass | African swine fever in wild boar: investigating model assumptions and
structure | 37 pages. 11 figures in main, 9 figures in appendix. 3 tables in
main, 8 tables in appendix | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | African swine fever (ASF) is a highly virulent viral disease that affects
both domestic pigs and wild boar. Current ASF transmission in Europe is in part
driven by wild boar populations, which act as a disease reservoir. Wild boar
are abundant throughout Europe and are highly social animals with complex
social organisation. Despite the known importance of wild boar in ASF spread
and persistence, there remain knowledge gaps surrounding wild boar
transmission. To investigate the influence of density-contact functions and
wild boar social structure on disease dynamics, we developed a wild boar
modelling framework. The framework included an ordinary differential equation
model, a homogeneous stochastic model, and various network-based stochastic
models that explicitly included wild boar social grouping. We found that power
law functions (transmission $\propto$ density$^{0.5}$) and frequency-based
density-contact functions were best able to reproduce recent Baltic outbreaks;
however, power law function models predicted considerable carcass transmission,
while frequency-based models had negligible carcass transmission. Furthermore,
increased model heterogeneity caused a decrease in the relative importance of
carcass-based transmission. The different dominant transmission pathways
predicted by each model type affected the efficacy of potential interventions,
which highlights the importance of evaluating model type and structure when
modelling systems with uncertainties.
| [
{
"created": "Thu, 14 Sep 2023 07:00:31 GMT",
"version": "v1"
}
] | 2023-09-15 | [
[
"Shaw",
"Callum",
""
],
[
"McLure",
"Angus",
""
],
[
"Glass",
"Kathryn",
""
]
] | African swine fever (ASF) is a highly virulent viral disease that affects both domestic pigs and wild boar. Current ASF transmission in Europe is in part driven by wild boar populations, which act as a disease reservoir. Wild boar are abundant throughout Europe and are highly social animals with complex social organisation. Despite the known importance of wild boar in ASF spread and persistence, there remain knowledge gaps surrounding wild boar transmission. To investigate the influence of density-contact functions and wild boar social structure on disease dynamics, we developed a wild boar modelling framework. The framework included an ordinary differential equation model, a homogeneous stochastic model, and various network-based stochastic models that explicitly included wild boar social grouping. We found that power law functions (transmission $\propto$ density$^{0.5}$) and frequency-based density-contact functions were best able to reproduce recent Baltic outbreaks; however, power law function models predicted considerable carcass transmission, while frequency-based models had negligible carcass transmission. Furthermore, increased model heterogeneity caused a decrease in the relative importance of carcass-based transmission. The different dominant transmission pathways predicted by each model type affected the efficacy of potential interventions, which highlights the importance of evaluating model type and structure when modelling systems with uncertainties. |
2301.03827 | Yue Wang | Yue Wang | Algorithms for the uniqueness of the longest common subsequence | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Given several number sequences, determining the longest common subsequence is
a classical problem in computer science. This problem has applications in
bioinformatics, especially determining transposable genes. Nevertheless,
related works only consider how to find one longest common subsequence. In this
paper, we consider how to determine the uniqueness of the longest common
subsequence. If there are multiple longest common subsequences, we also
determine which number appears in all/some/none of the longest common
subsequences. We focus on four scenarios: (1) linear sequences without
duplicated numbers; (2) circular sequences without duplicated numbers; (3)
linear sequences with duplicated numbers; (4) circular sequences with
duplicated numbers. We develop corresponding algorithms and apply them to gene
sequencing data.
| [
{
"created": "Tue, 10 Jan 2023 07:47:30 GMT",
"version": "v1"
},
{
"created": "Mon, 6 Mar 2023 19:56:07 GMT",
"version": "v2"
},
{
"created": "Thu, 30 Mar 2023 02:30:21 GMT",
"version": "v3"
},
{
"created": "Sat, 18 Nov 2023 17:10:31 GMT",
"version": "v4"
}
] | 2023-11-21 | [
[
"Wang",
"Yue",
""
]
] | Given several number sequences, determining the longest common subsequence is a classical problem in computer science. This problem has applications in bioinformatics, especially determining transposable genes. Nevertheless, related works only consider how to find one longest common subsequence. In this paper, we consider how to determine the uniqueness of the longest common subsequence. If there are multiple longest common subsequences, we also determine which number appears in all/some/none of the longest common subsequences. We focus on four scenarios: (1) linear sequences without duplicated numbers; (2) circular sequences without duplicated numbers; (3) linear sequences with duplicated numbers; (4) circular sequences with duplicated numbers. We develop corresponding algorithms and apply them to gene sequencing data. |
1602.00096 | Sriganesh Srihari Dr | Sriganesh Srihari | Exploiting synthetic lethal vulnerabilities for cancer therapy | 5 Figures | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthetic lethality refers to a combination of two or more genetic events
(typically affecting different genes) in which the co-occurrence of the events
results in cell or organismal lethality, but the cell or organism remains
viable when only one of the events occurs. Synthetic lethality has gained
attention in the last few years for its value in selective killing of cancer
cells: by targeting the synthetic lethal partner of an altered gene in cancer,
only the cancer cells can be killed while sparing normal cells. In a recent
study, we showed that mutually exclusive combinations of genetic events in cancer
hint at naturally occurring synthetic lethal combinations, and therefore by
systematically mining for these combinations we can identify novel therapeutic
targets for cancer. Based on this, we had identified a list of 718 genes that
are mutually exclusive to six DNA-damage response genes in cancer. Here, we
extend these results to identify a subset of 43 genes whose over-expression
correlates with significantly poor survival in estrogen receptor-negative
breast cancers, and thus provide a promising list of potential therapeutic
targets and/or biomarkers.
| [
{
"created": "Sat, 30 Jan 2016 09:55:06 GMT",
"version": "v1"
}
] | 2016-02-02 | [
[
"Srihari",
"Sriganesh",
""
]
] | Synthetic lethality refers to a combination of two or more genetic events (typically affecting different genes) in which the co-occurrence of the events results in cell or organismal lethality, but the cell or organism remains viable when only one of the events occurs. Synthetic lethality has gained attention in the last few years for its value in selective killing of cancer cells: by targeting the synthetic lethal partner of an altered gene in cancer, only the cancer cells can be killed while sparing normal cells. In a recent study, we showed that mutually exclusive combinations of genetic events in cancer hint at naturally occurring synthetic lethal combinations, and therefore by systematically mining for these combinations we can identify novel therapeutic targets for cancer. Based on this, we had identified a list of 718 genes that are mutually exclusive to six DNA-damage response genes in cancer. Here, we extend these results to identify a subset of 43 genes whose over-expression correlates with significantly poor survival in estrogen receptor-negative breast cancers, and thus provide a promising list of potential therapeutic targets and/or biomarkers. |
1511.02362 | Grzegorz A Rempala | Mark G. Burch, Karly A. Jacobsen, Joseph H. Tien, Grzegorz A. Rempala | Network-Based Analysis of a Small Ebola Outbreak | 11 pages, 3 figures | null | null | null | q-bio.PE math.PR stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a method for estimating epidemic parameters in network-based
stochastic epidemic models when the total number of infections is assumed to be
small. We illustrate the method by reanalyzing the data from the 2014
Democratic Republic of the Congo (DRC) Ebola outbreak described in Maganga et
al. (2014).
| [
{
"created": "Sat, 7 Nov 2015 14:50:55 GMT",
"version": "v1"
}
] | 2015-11-10 | [
[
"Burch",
"Mark G.",
""
],
[
"Jacobsen",
"Karly A.",
""
],
[
"Tien",
"Joseph H.",
""
],
[
"Rempala",
"Grzegorz A.",
""
]
] | We present a method for estimating epidemic parameters in network-based stochastic epidemic models when the total number of infections is assumed to be small. We illustrate the method by reanalyzing the data from the 2014 Democratic Republic of the Congo (DRC) Ebola outbreak described in Maganga et al. (2014). |
2008.01126 | James Brunner | James Brunner and Jacob Kim and Timothy Downing and Eric Mjolsness and
Kord M. Kober | A dynamical system model for predicting gene expression from the
epigenome | 16 pages, 7 figures, 3 tables | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gene regulation is an important fundamental biological process. The
regulation of gene expression is managed through a variety of methods including
epigenetic processes (e.g., DNA methylation). Understanding the role of
epigenetic changes in gene expression is a fundamental question of molecular
biology. Predictions of gene expression values from epigenetic data have
tremendous research and clinical potential. Despite active research, studies to
date have focused on using statistical models to predict gene expression from
methylation data. In contrast, dynamical systems can be used to generate a
model to predict gene expression using epigenetic data and a gene regulatory
network (GRN) which can also serve as a mechanistic hypothesis. Here we present
a novel stochastic dynamical systems model that predicts gene expression levels
from methylation data of genes in a given GRN. We provide an evaluation of the
model using real patient data and a GRN created from robust reference sources.
Software for dataset preparation, model parameter fitting and prediction
generation, and reporting are available at
\verb|https://github.com/kordk/stoch_epi_lib|.
| [
{
"created": "Mon, 3 Aug 2020 18:39:55 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Mar 2021 18:50:27 GMT",
"version": "v2"
},
{
"created": "Fri, 28 May 2021 18:03:26 GMT",
"version": "v3"
}
] | 2021-06-01 | [
[
"Brunner",
"James",
""
],
[
"Kim",
"Jacob",
""
],
[
"Downing",
"Timothy",
""
],
[
"Mjolsness",
"Eric",
""
],
[
"Kober",
"Kord M.",
""
]
] | Gene regulation is an important fundamental biological process. The regulation of gene expression is managed through a variety of methods including epigenetic processes (e.g., DNA methylation). Understanding the role of epigenetic changes in gene expression is a fundamental question of molecular biology. Predictions of gene expression values from epigenetic data have tremendous research and clinical potential. Despite active research, studies to date have focused on using statistical models to predict gene expression from methylation data. In contrast, dynamical systems can be used to generate a model to predict gene expression using epigenetic data and a gene regulatory network (GRN) which can also serve as a mechanistic hypothesis. Here we present a novel stochastic dynamical systems model that predicts gene expression levels from methylation data of genes in a given GRN. We provide an evaluation of the model using real patient data and a GRN created from robust reference sources. Software for dataset preparation, model parameter fitting and prediction generation, and reporting are available at \verb|https://github.com/kordk/stoch_epi_lib|. |
2011.10319 | Vincent Huin | Vincent Huin, Mathieu Barbier, Armand Bottani, Johannes Lobrinus,
Fabienne Clot, Foudil Lamari, Laureen Chat, Beno\^it Rucheton, Fr\'ed\'erique
Fluch\`ere, St\'ephane Auvin, Peter Myers, Antoinette Gelot, Agn\`es Camuzat,
Catherine Caillaud, Ludmila Jorn\'ea, Sylvie Forlani, Dario Saracino, Charles
Duyckaerts, Alexis Brice, Alexandra Durr, Isabelle Le Ber | Homozygous GRN mutations: unexpected phenotypes and new insights into
pathological and molecular mechanisms | null | Brain - A Journal of Neurology , Oxford University Press (OUP),
2020, 143 (1), pp.303-319 | 10.1093/brain/awz377 | null | q-bio.BM q-bio.GN q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Homozygous mutations in the progranulin gene (GRN) are associated with
neuronal ceroid lipofuscinosis 11 (CLN11), a rare lysosomal-storage disorder
characterized by cerebellar ataxia, seizures, retinitis pigmentosa, and
cognitive disorders, usually beginning between 13 and 25 years of age. This is
a rare condition, previously reported in only four families. In contrast,
heterozygous GRN mutations are a major cause of frontotemporal dementia
associated with neuronal cytoplasmic TDP-43 inclusions. We identified
homozygous GRN mutations in six new patients. The phenotypic spectrum is much
broader than previously reported, with two remarkably distinct presentations,
depending on the age of onset. A childhood/juvenile form is characterized by
classical CLN11 symptoms at an early age of onset. Unexpectedly, other
homozygous patients presented a distinct delayed phenotype of frontotemporal
dementia and parkinsonism after 50 years; none had epilepsy or cerebellar
ataxia. Another major finding of this study is that all GRN mutations may not
have the same impact on progranulin protein synthesis. A hypomorphic effect of
some mutations is supported by the presence of residual levels of plasma
progranulin and low levels of normal transcript detected in one case with a
homozygous splice-site mutation and late onset frontotemporal dementia. This is
a new critical finding that must be considered in therapeutic trials based on
replacement strategies. The first neuropathological study in a homozygous
carrier provides new insights into the pathological mechanisms of the disease.
Hallmarks of neuronal ceroid lipofuscinosis were present. The absence of TDP-43
cytoplasmic inclusions markedly differs from observations of heterozygous
mutations, suggesting a pathological shift between lysosomal and TDP-43
pathologies depending on the mono- or bi-allelic status. An intriguing
observation was the loss of normal TDP-43 staining in the nucleus of some
neurons, which could be the first stage of the TDP-43 pathological process
preceding the formation of typical cytoplasmic inclusions. Finally, this study
has important implications for genetic counselling and molecular diagnosis.
Semi-dominant inheritance of GRN mutations implies that specific genetic
counseling should be delivered to children and parents of CLN11 patients, as
they are heterozygous carriers with a high risk of developing dementia. More
broadly, this study illustrates the fact that genetic variants can lead to
different phenotypes according to their mono- or bi-allelic state, which is a
challenge for genetic diagnosis.
| [
{
"created": "Fri, 20 Nov 2020 10:16:03 GMT",
"version": "v1"
}
] | 2020-11-23 | [
[
"Huin",
"Vincent",
""
],
[
"Barbier",
"Mathieu",
""
],
[
"Bottani",
"Armand",
""
],
[
"Lobrinus",
"Johannes",
""
],
[
"Clot",
"Fabienne",
""
],
[
"Lamari",
"Foudil",
""
],
[
"Chat",
"Laureen",
""
],
[
"Rucheton",
"Benoît",
""
],
[
"Fluchère",
"Frédérique",
""
],
[
"Auvin",
"Stéphane",
""
],
[
"Myers",
"Peter",
""
],
[
"Gelot",
"Antoinette",
""
],
[
"Camuzat",
"Agnès",
""
],
[
"Caillaud",
"Catherine",
""
],
[
"Jornéa",
"Ludmila",
""
],
[
"Forlani",
"Sylvie",
""
],
[
"Saracino",
"Dario",
""
],
[
"Duyckaerts",
"Charles",
""
],
[
"Brice",
"Alexis",
""
],
[
"Durr",
"Alexandra",
""
],
[
"Ber",
"Isabelle Le",
""
]
] | Homozygous mutations in the progranulin gene (GRN) are associated with neuronal ceroid lipofuscinosis 11 (CLN11), a rare lysosomal-storage disorder characterized by cerebellar ataxia, seizures, retinitis pigmentosa, and cognitive disorders, usually beginning between 13 and 25 years of age. This is a rare condition, previously reported in only four families. In contrast, heterozygous GRN mutations are a major cause of frontotemporal dementia associated with neuronal cytoplasmic TDP-43 inclusions. We identified homozygous GRN mutations in six new patients. The phenotypic spectrum is much broader than previously reported, with two remarkably distinct presentations, depending on the age of onset. A childhood/juvenile form is characterized by classical CLN11 symptoms at an early age of onset. Unexpectedly, other homozygous patients presented a distinct delayed phenotype of frontotemporal dementia and parkinsonism after 50 years; none had epilepsy or cerebellar ataxia. Another major finding of this study is that all GRN mutations may not have the same impact on progranulin protein synthesis. A hypomorphic effect of some mutations is supported by the presence of residual levels of plasma progranulin and low levels of normal transcript detected in one case with a homozygous splice-site mutation and late onset frontotemporal dementia. This is a new critical finding that must be considered in therapeutic trials based on replacement strategies. The first neuropathological study in a homozygous carrier provides new insights into the pathological mechanisms of the disease. Hallmarks of neuronal ceroid lipofuscinosis were present. The absence of TDP-43 cytoplasmic inclusions markedly differs from observations of heterozygous mutations, suggesting a pathological shift between lysosomal and TDP-43 pathologies depending on the mono- or bi-allelic status. An intriguing observation was the loss of normal TDP-43 staining in the nucleus of some neurons, which could be the first stage of the TDP-43 pathological process preceding the formation of typical cytoplasmic inclusions. Finally, this study has important implications for genetic counselling and molecular diagnosis. Semi-dominant inheritance of GRN mutations implies that specific genetic counseling should be delivered to children and parents of CLN11 patients, as they are heterozygous carriers with a high risk of developing dementia. More broadly, this study illustrates the fact that genetic variants can lead to different phenotypes according to their mono- or bi-allelic state, which is a challenge for genetic diagnosis. |
1912.04949 | Sharmistha Mishra | Huiting Ma (1), Linwei Wang (1), Peter Gichangi (2,3), Vernon Mochache
(4), Griffins Manguro (3), Helgar K Musyoki (5), Parinita Bhattacharjee
(6,7), Fran\c{c}ois Cholette (8,9), Paul Sandstrom (8,9), Marissa L Becker
(7), Sharmistha Mishra (1,10,11,12) (on behalf of the Transitions Study Team,
(1) Li Ka Shing Knowledge Institute, St. Michael's Hospital, Unity Health
Toronto, Toronto, Canada, (2) University of Nairobi, Nairobi, Kenya, (3)
International Centre for Reproductive Health-Kenya, Mombasa, Kenya, (4)
University of Maryland, Centre for International Health, Education and
Biosecurity, College Park, United States, (5) National AIDS & STI Control
Programme, Nairobi, Kenya, (6) Key Populations Technical Support Unit,
Partners for Health and Development in Africa, Nairobi, Kenya, (7) Centre for
Global Public Health, University of Manitoba, Winnipeg, Canada, (8) National
HIV and Retrovirology Laboratory, JC Wilt Infectious Diseases Research
Centre, Public Health Agency of Canada, Winnipeg, Canada, (9) Department of
Medical Microbiology and Infectious Diseases, University of Manitoba,
Winnipeg, Canada, (10) Department of Medicine, University of Toronto,
Toronto, Canada, (11) Institute of Medical Science, University of Toronto,
Toronto, Canada, (12) Institute of Health Policy, Management, and Evaluation,
University of Toronto, Toronto, Canada) | Venue-based HIV testing at sex work hotspots to reach adolescent girls
and young women living with HIV: a cross-sectional study in Mombasa, Kenya | 16 pages, 6 figures, 7 tables | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Background: We estimated the potential number of newly diagnosed HIV
infections among adolescent girls and young women (AGYW) using a venue-based
approach to HIV testing at sex work hotspots.
Methods: We used hotspot enumeration and cross-sectional bio-behavioural
survey data from the 2015 Transitions Study of AGYW aged 14-24 years who
frequented hotspots in Mombasa, Kenya. We compared the HIV cascade among AGYW
who sell sex (YSW, N=408) versus those who do not (NSW, N=891); and
triangulated the potential (100% test acceptance and accuracy) and feasible
(accounting for test acceptance and sensitivity) number of AGYW that could be
newly diagnosed via hotspot-based HIV rapid testing in Mombasa. We identified
the profile of AGYW recently tested for HIV (in the past year) using
multivariable logistic regression.
Results: N=37/365 (10.1%) YSW and N=30/828 (3.6%) NSW were living with HIV,
of whom 27.0% (N=10/37) and 30.0% (N=9/30) were diagnosed and aware (p=0.79).
Rapid test acceptance was 89.3% and sensitivity was 80.4%. Hotspot enumeration
estimated 15,635 (range: 12,172-19,097) AGYW in hotspots in Mombasa. The
potential and feasible numbers of new diagnoses were 627 (310-1,081) and 450
(223-776), respectively. Thus, hotspot-based testing could feasibly reduce the
undiagnosed fraction from 71.6% to 20.2%. The profile of AGYW who recently
tested was similar among YSW and NSW. YSW were two-fold more likely to report a
recent HIV test after adjusting for other determinants [odds ratio (95% CI):
2.1 (1.6-3.1)].
Conclusion: Reaching AGYW via hotspot-based HIV testing could fill gaps left
by traditional, clinic-based HIV prevention and testing services.
| [
{
"created": "Tue, 10 Dec 2019 19:46:24 GMT",
"version": "v1"
}
] | 2019-12-12 | [
[
"Ma",
"Huiting",
""
],
[
"Wang",
"Linwei",
""
],
[
"Gichangi",
"Peter",
""
],
[
"Mochache",
"Vernon",
""
],
[
"Manguro",
"Griffins",
""
],
[
"Musyoki",
"Helgar K",
""
],
[
"Bhattacharjee",
"Parinita",
""
],
[
"Cholette",
"François",
""
],
[
"Sandstrom",
"Paul",
""
],
[
"Becker",
"Marissa L",
""
],
[
"Mishra",
"Sharmistha",
""
]
] | Background: We estimated the potential number of newly diagnosed HIV infections among adolescent girls and young women (AGYW) using a venue-based approach to HIV testing at sex work hotspots. Methods: We used hotspot enumeration and cross-sectional bio-behavioural survey data from the 2015 Transitions Study of AGYW aged 14-24 years who frequented hotspots in Mombasa, Kenya. We compared the HIV cascade among AGYW who sell sex (YSW, N=408) versus those who do not (NSW, N=891); and triangulated the potential (100% test acceptance and accuracy) and feasible (accounting for test acceptance and sensitivity) number of AGYW that could be newly diagnosed via hotspot-based HIV rapid testing in Mombasa. We identified the profile of AGYW recently tested for HIV (in the past year) using multivariable logistic regression. Results: N=37/365 (10.1%) YSW and N=30/828 (3.6%) NSW were living with HIV, of whom 27.0% (N=10/37) and 30.0% (N=9/30) were diagnosed and aware (p=0.79). Rapid test acceptance was 89.3% and sensitivity was 80.4%. Hotspot enumeration estimated 15,635 (range: 12,172-19,097) AGYW in hotspots in Mombasa. The potential and feasible numbers of new diagnoses were 627 (310-1,081) and 450 (223-776), respectively. Thus, hotspot-based testing could feasibly reduce the undiagnosed fraction from 71.6% to 20.2%. The profile of AGYW who recently tested was similar among YSW and NSW. YSW were two-fold more likely to report a recent HIV test after adjusting for other determinants [odds ratio (95% CI): 2.1 (1.6-3.1)]. Conclusion: Reaching AGYW via hotspot-based HIV testing could fill gaps left by traditional, clinic-based HIV prevention and testing services.
1609.04637 | Laszlo P. Csernai | L.P. Csernai, S.F. Spinnangr, S. Velle | Quantitative assessment of increasing complexity | 12 pages, 7 figures | Physica A 473 (2017) 363 | 10.1016/j.physa.2016.12.091 | null | q-bio.OT cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the build-up of complexity using the example of 1 kg of matter in
different forms. We start with the simplest example of ideal gases, and then
continue with more complex chemical, biological, life, social and technical
structures. We assess the complexity of these systems quantitatively, based on
their entropy. We present a method to attribute the same entropy to known
physical systems and to complex organic molecules, up to DNA. The important
steps in this program and the basic obstacles are discussed.
| [
{
"created": "Thu, 8 Sep 2016 07:52:31 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Jan 2017 13:44:57 GMT",
"version": "v2"
}
] | 2017-01-19 | [
[
"Csernai",
"L. P.",
""
],
[
"Spinnangr",
"S. F.",
""
],
[
"Velle",
"S.",
""
]
] | We study the build-up of complexity using the example of 1 kg of matter in different forms. We start with the simplest example of ideal gases, and then continue with more complex chemical, biological, life, social and technical structures. We assess the complexity of these systems quantitatively, based on their entropy. We present a method to attribute the same entropy to known physical systems and to complex organic molecules, up to DNA. The important steps in this program and the basic obstacles are discussed.
2403.19900 | Shi-Lei Xue | Shi-Lei Xue, Qiutan Yang, Prisca Liberali, Edouard Hannezo | Mechanochemical bistability of intestinal organoids enables robust
morphogenesis | null | null | null | null | q-bio.TO physics.app-ph physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How pattern and form are generated in a reproducible manner during
embryogenesis remains poorly understood. Intestinal organoid morphogenesis
involves a number of mechanochemical regulators, including cell-type specific
cytoskeletal forces and osmotically-driven lumen volume changes. However,
whether and how these forces are coordinated in time and space via feedbacks to
ensure robust morphogenesis remains unclear. Here, we propose a minimal
physical model of organoid morphogenesis with local cellular mechano-sensation,
where lumen volume changes can impact epithelial shape via both direct
mechanical (passive) and indirect mechanosensitive (active) mechanisms. We show
how mechano-sensitive feedbacks on cytoskeletal tension generically give rise
to morphological bistability, where both bulged (open) and budded (closed)
crypt states are possible and dependent on the history of volume changes. Such
bistability can explain several paradoxical experimental observations, such as
the importance of the timing of lumen shrinkage and robustness of the final
morphogenetic state to mechanical perturbations. More quantitatively, we
performed mechanical and pharmacological experiments to validate the key
modelling assumptions and make quantitative predictions on organoid
morphogenesis. This suggests that bistability arising from feedbacks between
cellular tensions and fluid pressure could be a general mechanism to allow for
the coordination of multicellular shape changes in developing systems.
| [
{
"created": "Fri, 29 Mar 2024 00:53:15 GMT",
"version": "v1"
}
] | 2024-04-01 | [
[
"Xue",
"Shi-Lei",
""
],
[
"Yang",
"Qiutan",
""
],
[
"Liberali",
"Prisca",
""
],
[
"Hannezo",
"Edouard",
""
]
] | How pattern and form are generated in a reproducible manner during embryogenesis remains poorly understood. Intestinal organoid morphogenesis involves a number of mechanochemical regulators, including cell-type specific cytoskeletal forces and osmotically-driven lumen volume changes. However, whether and how these forces are coordinated in time and space via feedbacks to ensure robust morphogenesis remains unclear. Here, we propose a minimal physical model of organoid morphogenesis with local cellular mechano-sensation, where lumen volume changes can impact epithelial shape via both direct mechanical (passive) and indirect mechanosensitive (active) mechanisms. We show how mechano-sensitive feedbacks on cytoskeletal tension generically give rise to morphological bistability, where both bulged (open) and budded (closed) crypt states are possible and dependent on the history of volume changes. Such bistability can explain several paradoxical experimental observations, such as the importance of the timing of lumen shrinkage and robustness of the final morphogenetic state to mechanical perturbations. More quantitatively, we performed mechanical and pharmacological experiments to validate the key modelling assumptions and make quantitative predictions on organoid morphogenesis. This suggests that bistability arising from feedbacks between cellular tensions and fluid pressure could be a general mechanism to allow for the coordination of multicellular shape changes in developing systems. |
1008.4006 | Mikael Borg | Thomas Hamelryck, Mikael Borg, Martin Paluszewski, Jonas Paulsen, Jes
Frellsen, Christian Andreetta, Wouter Boomsma, Sandro Bottaro and Jesper
Ferkinghoff-Borg | Potentials of Mean Force for Protein Structure Prediction Vindicated,
Formalized and Generalized | null | Hamelryck T, Borg M, Paluszewski M, Paulsen J, Frellsen J, et al.
(2010) Potentials of Mean Force for Protein Structure Prediction Vindicated,
Formalized and Generalized. PLoS ONE 5(11): e13714 | 10.1371/journal.pone.0013714 | null | q-bio.BM cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding protein structure is of crucial importance in science, medicine
and biotechnology. For about two decades, knowledge based potentials based on
pairwise distances -- so-called "potentials of mean force" (PMFs) -- have been
center stage in the prediction and design of protein structure and the
simulation of protein folding. However, the validity, scope and limitations of
these potentials are still vigorously debated and disputed, and the optimal
choice of the reference state -- a necessary component of these potentials --
is an unsolved problem. PMFs are loosely justified by analogy to the reversible
work theorem in statistical physics, or by a statistical argument based on a
likelihood function. Both justifications are insightful but leave many
questions unanswered. Here, we show for the first time that PMFs can be seen as
approximations to quantities that do have a rigorous probabilistic
justification: they naturally arise when probability distributions over
different features of proteins need to be combined. We call these quantities
reference ratio distributions deriving from the application of the reference
ratio method. This new view is not only of theoretical relevance, but leads to
many insights that are of direct practical use: the reference state is uniquely
defined and does not require external physical insights; the approach can be
generalized beyond pairwise distances to arbitrary features of protein
structure; and it becomes clear for which purposes the use of these quantities
is justified. We illustrate these insights with two applications, involving the
radius of gyration and hydrogen bonding. In the latter case, we also show how
the reference ratio method can be iteratively applied to sculpt an energy
funnel. Our results considerably increase the understanding and scope of energy
functions derived from known biomolecular structures.
| [
{
"created": "Tue, 24 Aug 2010 10:49:18 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Aug 2010 10:37:18 GMT",
"version": "v2"
},
{
"created": "Tue, 23 Nov 2010 09:56:55 GMT",
"version": "v3"
}
] | 2010-11-24 | [
[
"Hamelryck",
"Thomas",
""
],
[
"Borg",
"Mikael",
""
],
[
"Paluszewski",
"Martin",
""
],
[
"Paulsen",
"Jonas",
""
],
[
"Frellsen",
"Jes",
""
],
[
"Andreetta",
"Christian",
""
],
[
"Boomsma",
"Wouter",
""
],
[
"Bottaro",
"Sandro",
""
],
[
"Ferkinghoff-Borg",
"Jesper",
""
]
] | Understanding protein structure is of crucial importance in science, medicine and biotechnology. For about two decades, knowledge based potentials based on pairwise distances -- so-called "potentials of mean force" (PMFs) -- have been center stage in the prediction and design of protein structure and the simulation of protein folding. However, the validity, scope and limitations of these potentials are still vigorously debated and disputed, and the optimal choice of the reference state -- a necessary component of these potentials -- is an unsolved problem. PMFs are loosely justified by analogy to the reversible work theorem in statistical physics, or by a statistical argument based on a likelihood function. Both justifications are insightful but leave many questions unanswered. Here, we show for the first time that PMFs can be seen as approximations to quantities that do have a rigorous probabilistic justification: they naturally arise when probability distributions over different features of proteins need to be combined. We call these quantities reference ratio distributions deriving from the application of the reference ratio method. This new view is not only of theoretical relevance, but leads to many insights that are of direct practical use: the reference state is uniquely defined and does not require external physical insights; the approach can be generalized beyond pairwise distances to arbitrary features of protein structure; and it becomes clear for which purposes the use of these quantities is justified. We illustrate these insights with two applications, involving the radius of gyration and hydrogen bonding. In the latter case, we also show how the reference ratio method can be iteratively applied to sculpt an energy funnel. Our results considerably increase the understanding and scope of energy functions derived from known biomolecular structures. |
1703.10966 | Zixuan Cang | Zixuan Cang and Guo-Wei Wei | Analysis and prediction of protein folding energy changes upon mutation
by element specific persistent homology | 11 pages, 6 figures, 3 tables | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Site directed mutagenesis is widely used to understand the
structure and function of biomolecules. Computational prediction of protein
mutation impacts offers a fast, economical and potentially accurate alternative
to laboratory mutagenesis. Most existing methods rely on geometric
descriptions; this work introduces a topology based approach to provide an
entirely new representation of protein mutation impacts that could not be
obtained from conventional techniques. Results: Topology based mutation
predictor (T-MP) is introduced to dramatically reduce the geometric complexity
and number of degrees of freedom of proteins, while element specific persistent
homology is proposed to retain essential biological information. The present
approach is found to outperform other existing methods in globular protein
mutation impact predictions. A Pearson correlation coefficient of 0.82 with an
RMSE of 0.92 kcal/mol is obtained on a test set of 350 mutation samples. For
the prediction of membrane protein stability changes upon mutation, the
proposed topological approach has an 84% higher Pearson correlation coefficient
than the current state-of-the-art empirical methods, achieving a Pearson
correlation of 0.57 and an RMSE of 1.09 kcal/mol in a 5-fold cross validation
on a set of 223 membrane protein mutation samples.
| [
{
"created": "Fri, 31 Mar 2017 16:08:04 GMT",
"version": "v1"
}
] | 2017-04-03 | [
[
"Cang",
"Zixuan",
""
],
[
"Wei",
"Guo-Wei",
""
]
] | Motivation: Site directed mutagenesis is widely used to understand the structure and function of biomolecules. Computational prediction of protein mutation impacts offers a fast, economical and potentially accurate alternative to laboratory mutagenesis. Most existing methods rely on geometric descriptions; this work introduces a topology based approach to provide an entirely new representation of protein mutation impacts that could not be obtained from conventional techniques. Results: Topology based mutation predictor (T-MP) is introduced to dramatically reduce the geometric complexity and number of degrees of freedom of proteins, while element specific persistent homology is proposed to retain essential biological information. The present approach is found to outperform other existing methods in globular protein mutation impact predictions. A Pearson correlation coefficient of 0.82 with an RMSE of 0.92 kcal/mol is obtained on a test set of 350 mutation samples. For the prediction of membrane protein stability changes upon mutation, the proposed topological approach has an 84% higher Pearson correlation coefficient than the current state-of-the-art empirical methods, achieving a Pearson correlation of 0.57 and an RMSE of 1.09 kcal/mol in a 5-fold cross validation on a set of 223 membrane protein mutation samples.
1912.06381 | Chiara Villa | Chiara Villa, Mark A. J. Chaplain, Tommaso Lorenzi | Evolutionary dynamics in vascularised tumours under chemotherapy | 27 pages, 9 figures | null | null | null | q-bio.TO nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider a mathematical model for the evolutionary dynamics of tumour
cells in vascularised tumours under chemotherapy. The model comprises a system
of coupled partial integro-differential equations for the phenotypic
distribution of tumour cells, the concentration of oxygen and the concentration
of a chemotherapeutic agent. In order to disentangle the impact of different
evolutionary parameters on the emergence of intra-tumour phenotypic
heterogeneity and the development of resistance to chemotherapy, we construct
explicit solutions to the equation for the phenotypic distribution of tumour
cells and provide a detailed quantitative characterisation of the long-time
asymptotic behaviour of such solutions. Analytical results are integrated with
numerical simulations of a calibrated version of the model based on
biologically consistent parameter values. The results obtained provide a
theoretical explanation for the observation that the phenotypic properties of
tumour cells in vascularised tumours vary with the distance from the blood
vessels. Moreover, we demonstrate that lower oxygen levels may correlate with
higher levels of phenotypic variability, which suggests that the presence of
hypoxic regions supports intra-tumour phenotypic heterogeneity. Finally, the
results of our analysis put on a rigorous mathematical basis the idea,
previously suggested by formal asymptotic results and numerical simulations,
that hypoxia favours the selection for chemoresistant phenotypic variants prior
to treatment. Consequently, this facilitates the development of resistance
following chemotherapy.
| [
{
"created": "Fri, 13 Dec 2019 09:59:59 GMT",
"version": "v1"
},
{
"created": "Mon, 25 May 2020 14:51:08 GMT",
"version": "v2"
},
{
"created": "Sat, 30 May 2020 15:53:06 GMT",
"version": "v3"
}
] | 2020-06-02 | [
[
"Villa",
"Chiara",
""
],
[
"Chaplain",
"Mark A. J.",
""
],
[
"Lorenzi",
"Tommaso",
""
]
] | We consider a mathematical model for the evolutionary dynamics of tumour cells in vascularised tumours under chemotherapy. The model comprises a system of coupled partial integro-differential equations for the phenotypic distribution of tumour cells, the concentration of oxygen and the concentration of a chemotherapeutic agent. In order to disentangle the impact of different evolutionary parameters on the emergence of intra-tumour phenotypic heterogeneity and the development of resistance to chemotherapy, we construct explicit solutions to the equation for the phenotypic distribution of tumour cells and provide a detailed quantitative characterisation of the long-time asymptotic behaviour of such solutions. Analytical results are integrated with numerical simulations of a calibrated version of the model based on biologically consistent parameter values. The results obtained provide a theoretical explanation for the observation that the phenotypic properties of tumour cells in vascularised tumours vary with the distance from the blood vessels. Moreover, we demonstrate that lower oxygen levels may correlate with higher levels of phenotypic variability, which suggests that the presence of hypoxic regions supports intra-tumour phenotypic heterogeneity. Finally, the results of our analysis put on a rigorous mathematical basis the idea, previously suggested by formal asymptotic results and numerical simulations, that hypoxia favours the selection for chemoresistant phenotypic variants prior to treatment. Consequently, this facilitates the development of resistance following chemotherapy. |
2203.06263 | Soham Mukherjee | Soham Mukherjee, Darren Wethington, Tamal K. Dey and Jayajit Das | Determining clinically relevant features in cytometry data using
persistent homology | 19 pages, 8 figures. Supplementary information contains 15 pages and
17 figures. To be published in PLOS Computational Biology | null | 10.1371/journal.pcbi.1009931 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Cytometry experiments yield high-dimensional point cloud data that is
difficult to interpret manually. Boolean gating techniques coupled with
comparisons of relative abundances of cellular subsets is the current standard
for cytometry data analysis. However, this approach is unable to capture more
subtle topological features hidden in data, especially if those features are
further masked by data transforms or significant batch effects or
donor-to-donor variations in clinical data. Analysis of publicly available
cytometry data describing non-na\"ive CD8+ T cells in COVID-19 patients and
healthy controls shows that systematic structural differences exist between
single cell protein expressions in COVID-19 patients and healthy controls. We
identify proteins of interest by a decision-tree based classifier, sample
points randomly and compute persistence diagrams from these sampled points. The
resulting persistence diagrams identify regions in cytometry datasets of
varying density and identify protruded structures such as `elbows'. We compute
Wasserstein distances between these persistence diagrams for random pairs of
healthy controls and COVID-19 patients and find that systematic structural
differences exist between COVID-19 patients and healthy controls in the
expression data for T-bet, Eomes, and Ki-67. Further analysis shows that
expression of T-bet and Eomes are significantly downregulated in COVID-19
patient non-na\"ive CD8+ T cells compared to healthy controls. This
counter-intuitive finding may indicate that canonical effector CD8+ T cells are
less prevalent in COVID-19 patients than healthy controls. This method is
applicable to any cytometry dataset for discovering novel insights through
topological data analysis which may be difficult to ascertain otherwise with a
standard gating strategy or existing bioinformatic tools.
| [
{
"created": "Fri, 11 Mar 2022 21:42:51 GMT",
"version": "v1"
}
] | 2022-05-04 | [
[
"Mukherjee",
"Soham",
""
],
[
"Wethington",
"Darren",
""
],
[
"Dey",
"Tamal K.",
""
],
[
"Das",
"Jayajit",
""
]
] | Cytometry experiments yield high-dimensional point cloud data that is difficult to interpret manually. Boolean gating techniques coupled with comparisons of relative abundances of cellular subsets is the current standard for cytometry data analysis. However, this approach is unable to capture more subtle topological features hidden in data, especially if those features are further masked by data transforms or significant batch effects or donor-to-donor variations in clinical data. Analysis of publicly available cytometry data describing non-na\"ive CD8+ T cells in COVID-19 patients and healthy controls shows that systematic structural differences exist between single cell protein expressions in COVID-19 patients and healthy controls. We identify proteins of interest by a decision-tree based classifier, sample points randomly and compute persistence diagrams from these sampled points. The resulting persistence diagrams identify regions in cytometry datasets of varying density and identify protruded structures such as `elbows'. We compute Wasserstein distances between these persistence diagrams for random pairs of healthy controls and COVID-19 patients and find that systematic structural differences exist between COVID-19 patients and healthy controls in the expression data for T-bet, Eomes, and Ki-67. Further analysis shows that expression of T-bet and Eomes are significantly downregulated in COVID-19 patient non-na\"ive CD8+ T cells compared to healthy controls. This counter-intuitive finding may indicate that canonical effector CD8+ T cells are less prevalent in COVID-19 patients than healthy controls. This method is applicable to any cytometry dataset for discovering novel insights through topological data analysis which may be difficult to ascertain otherwise with a standard gating strategy or existing bioinformatic tools. |
2106.10081 | Mark Leake | Mark C Leake | Correlative approaches in single-molecule biophysics: a review of the
progress in methods and applications | null | null | null | null | q-bio.QM physics.bio-ph q-bio.BM q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here, we discuss a collection of cutting-edge techniques and applications in
use today by some of the leading experts in the field of correlative approaches
in single-molecule biophysics. A key difference, compared with
traditional single-molecule biophysics approaches detailed previously, is
the emphasis on the development and use of complex methods which explicitly
combine multiple approaches to increase biological insights at the
single-molecule level. These so-called correlative single-molecule biophysics
methods rely on multiple, orthogonal tools and analysis, as opposed to any one
single driving technique. Importantly, they span both in vivo and in vitro
biological systems as well as the interfaces between theory and experiment in
often highly integrated ways, very different to earlier traditional
non-integrative approaches. The first applications of correlative
single-molecule methods involved adaptation of a range of different experimental
technologies to the same biological sample whose measurements were
synchronised. However, now we find a greater flora of integrated methods
emerging that include approaches applied to different samples at different
times and yet still permit useful molecular-scale correlations to be performed.
The resultant findings often enable far greater precision of length and time
scales of measurements, and a deeper understanding of the interplay between
different processes in the same cell. Many new correlative single-molecule
biophysics techniques also include more complex, physiologically relevant
approaches as well as an increasing number that combine advanced computational
methods and mathematical analysis with experimental tools. Here we review the
motivation behind the development of correlative single-molecule microscopy
methods, its history and recent progress in the field.
| [
{
"created": "Fri, 18 Jun 2021 12:09:27 GMT",
"version": "v1"
}
] | 2021-06-21 | [
[
"Leake",
"Mark C",
""
]
] | Here, we discuss a collection of cutting-edge techniques and applications in use today by some of the leading experts in the field of correlative approaches in single-molecule biophysics. A key difference, compared with traditional single-molecule biophysics approaches detailed previously, is the emphasis on the development and use of complex methods which explicitly combine multiple approaches to increase biological insights at the single-molecule level. These so-called correlative single-molecule biophysics methods rely on multiple, orthogonal tools and analysis, as opposed to any one single driving technique. Importantly, they span both in vivo and in vitro biological systems as well as the interfaces between theory and experiment in often highly integrated ways, very different to earlier traditional non-integrative approaches. The first applications of correlative single-molecule methods involved adaptation of a range of different experimental technologies to the same biological sample whose measurements were synchronised. However, now we find a greater flora of integrated methods emerging that include approaches applied to different samples at different times and yet still permit useful molecular-scale correlations to be performed. The resultant findings often enable far greater precision of length and time scales of measurements, and a deeper understanding of the interplay between different processes in the same cell. Many new correlative single-molecule biophysics techniques also include more complex, physiologically relevant approaches as well as an increasing number that combine advanced computational methods and mathematical analysis with experimental tools. Here we review the motivation behind the development of correlative single-molecule microscopy methods, its history and recent progress in the field.
2102.05512 | Maria Elena Gonzalez Herrero | Maria Elena Gonzalez Herrero and Christian Kuehn | A qualitative mathematical model of the immune response under the effect
of stress | 8 pages, 2 figures | Chaos 31, 061104 (2021) | 10.1063/5.0055784 | null | q-bio.QM math.DS nlin.PS | http://creativecommons.org/licenses/by/4.0/ | In the last decades, the interest to understand the connection between brain
and body has grown notably. For example, in psychoneuroimmunology many studies
associate stress, arising from many different sources and situations, with
changes in the immune system from the medical or immunological point of view as
well as from the biochemical one. In this paper we identify important
behaviours of this interplay between the immune system and stress from medical
studies and seek to represent them qualitatively in a paradigmatic, yet simple,
mathematical model. To that end we develop a differential equation model with
two equations for infection level and immune system, which integrates the
effects of stress as an additional parameter. We are able to reproduce a stable
healthy state for little stress, an oscillatory state between healthy and
infected states for high stress, and a "burn-out" or stable sick state for
extremely high stress. The mechanism between the different dynamics is
controlled by two saddle-node in cycle (SNIC) bifurcations. Furthermore, our
model is able to capture an induced infection upon dropping from moderate to
low stress, and it predicts increasing infection periods upon increasing stress, before
eventually reaching a burn-out state.
| [
{
"created": "Wed, 10 Feb 2021 15:55:02 GMT",
"version": "v1"
}
] | 2022-07-27 | [
[
"Herrero",
"Maria Elena Gonzalez",
""
],
[
"Kuehn",
"Christian",
""
]
] | In the last decades, the interest to understand the connection between brain and body has grown notably. For example, in psychoneuroimmunology many studies associate stress, arising from many different sources and situations, with changes in the immune system from the medical or immunological point of view as well as from the biochemical one. In this paper we identify important behaviours of this interplay between the immune system and stress from medical studies and seek to represent them qualitatively in a paradigmatic, yet simple, mathematical model. To that end we develop a differential equation model with two equations for infection level and immune system, which integrates the effects of stress as an additional parameter. We are able to reproduce a stable healthy state for little stress, an oscillatory state between healthy and infected states for high stress, and a "burn-out" or stable sick state for extremely high stress. The mechanism between the different dynamics is controlled by two saddle-node in cycle (SNIC) bifurcations. Furthermore, our model is able to capture an induced infection upon dropping from moderate to low stress, and it predicts increasing infection periods upon increasing stress, before eventually reaching a burn-out state.
2204.09004 | Rodrigo Garc\'ia-Tejera | Rodrigo Garc\'ia-Tejera, Linus Schumacher and Ramon Grima | Regulation of stem cell dynamics through volume exclusion | 23 pages, 5 figures | Garc\'ia-Tejera R, Schumacher L, Grima R. 2022 Regulation of stem
cell dynamics through volume exclusion. Proc. R. Soc. A 478: 20220376 | 10.1098/rspa.2022.0376 | null | q-bio.PE q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | Maintenance and regeneration of adult tissues rely on the self-renewal of
stem cells. Regeneration without over-proliferation requires precise regulation
of the stem cell proliferation and differentiation rates. The nature of such
regulatory mechanisms in different tissues, and how to incorporate them in
models of stem cell population dynamics, is incompletely understood. The
critical birth-death (CBD) process is widely used to model stem cell
populations, capturing key phenomena, such as scaling laws in clone size
distributions. However, the CBD process neglects regulatory mechanisms. Here,
we propose the birth-death process with volume exclusion (vBD), a variation of
the birth-death process that considers crowding effects, such as may arise due
to limited space in a stem cell niche. While the deterministic rate equations
predict a single non-trivial attracting steady state, the master equation
predicts extinction and transient distributions of stem cell numbers with three
possible behaviours: long-lived quasi-steady state, and short-lived bimodal or
unimodal distributions. In all cases, we approximate solutions to the vBD
master equation using a renormalised system-size expansion, quasi-steady state
approximation and the WKB method. Our study suggests that the size distribution
of a stem cell population bears signatures that are useful to detect negative
feedback mediated via volume exclusion.
| [
{
"created": "Tue, 19 Apr 2022 17:02:03 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Dec 2022 02:27:47 GMT",
"version": "v2"
}
] | 2022-12-19 | [
[
"García-Tejera",
"Rodrigo",
""
],
[
"Schumacher",
"Linus",
""
],
[
"Grima",
"Ramon",
""
]
] | Maintenance and regeneration of adult tissues rely on the self-renewal of stem cells. Regeneration without over-proliferation requires precise regulation of the stem cell proliferation and differentiation rates. The nature of such regulatory mechanisms in different tissues, and how to incorporate them in models of stem cell population dynamics, is incompletely understood. The critical birth-death (CBD) process is widely used to model stem cell populations, capturing key phenomena, such as scaling laws in clone size distributions. However, the CBD process neglects regulatory mechanisms. Here, we propose the birth-death process with volume exclusion (vBD), a variation of the birth-death process that considers crowding effects, such as may arise due to limited space in a stem cell niche. While the deterministic rate equations predict a single non-trivial attracting steady state, the master equation predicts extinction and transient distributions of stem cell numbers with three possible behaviours: long-lived quasi-steady state, and short-lived bimodal or unimodal distributions. In all cases, we approximate solutions to the vBD master equation using a renormalised system-size expansion, quasi-steady state approximation and the WKB method. Our study suggests that the size distribution of a stem cell population bears signatures that are useful to detect negative feedback mediated via volume exclusion. |
2111.03867 | Axel G. Rossberg | Axel G. Rossberg, Jacob D. O'Sullivan, Svetlana Malysheva and Nadav M.
Shnerb | A metric for tradable biodiversity credits linked to the Living Planet
Index and global species conservation | 42 pages, 7 Figures, 1 Table | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Difficulties identifying appropriate biodiversity impact metrics remain a
major barrier to inclusion of biodiversity considerations in environmentally
responsible investment. We propose and analyse a simple science-based local
metric: the sum of proportional changes in local species abundances relative to
their global species abundances, with a correction for species close to
extinction. As we show, this metric quantifies changes in the mean long-term
global survival probability of species. It links mathematically to a widely
cited global biodiversity indicator, the Living Planet Index, for which we
propose an improved formula that directly addresses the known problem of
singularities caused by extinctions. We show that, in an ideal market, trade in
our metric would lead to near-optimal allocation of resources to species
conservation. We further show that the metric is closely related to several
other metrics and indices already in use. Barriers to adoption are therefore
low. Used in conjunction with metrics addressing ecosystem functioning and
services, potential areas of application include biodiversity related financial
disclosures and voluntary or legislated no net biodiversity loss policies.
| [
{
"created": "Sat, 6 Nov 2021 12:17:53 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Jan 2022 09:38:01 GMT",
"version": "v2"
},
{
"created": "Sat, 29 Oct 2022 15:54:13 GMT",
"version": "v3"
},
{
"created": "Mon, 12 Dec 2022 09:30:58 GMT",
"version": "v4"
},
{
"created": "Thu, 16 Mar 2023 21:35:37 GMT",
"version": "v5"
},
{
"created": "Mon, 20 Mar 2023 05:30:50 GMT",
"version": "v6"
}
] | 2023-03-21 | [
[
"Rossberg",
"Axel G.",
""
],
[
"O'Sullivan",
"Jacob D.",
""
],
[
"Malysheva",
"Svetlana",
""
],
[
"Shnerb",
"Nadav M.",
""
]
] | Difficulties identifying appropriate biodiversity impact metrics remain a major barrier to inclusion of biodiversity considerations in environmentally responsible investment. We propose and analyse a simple science-based local metric: the sum of proportional changes in local species abundances relative to their global species abundances, with a correction for species close to extinction. As we show, this metric quantifies changes in the mean long-term global survival probability of species. It links mathematically to a widely cited global biodiversity indicator, the Living Planet Index, for which we propose an improved formula that directly addresses the known problem of singularities caused by extinctions. We show that, in an ideal market, trade in our metric would lead to near-optimal allocation of resources to species conservation. We further show that the metric is closely related to several other metrics and indices already in use. Barriers to adoption are therefore low. Used in conjunction with metrics addressing ecosystem functioning and services, potential areas of application include biodiversity related financial disclosures and voluntary or legislated no net biodiversity loss policies. |
1202.1243 | Alexander Gorban | A. Zinovyev, N. Morozova, A. N. Gorban, A. Harel-Belan | Mathematical modeling of microRNA-mediated mechanisms of translation
repression | 40 pages, 9 figures, 4 tables, 91 cited reference. The analysis of
kinetic signatures is updated according to the new model of coupled
transcription, translation and degradation, and of miRNA-based regulation of
this process published recently (arXiv:1204.5941). arXiv admin note: text
overlap with arXiv:0911.1797 | In: U. Schmitz et al. (eds.), MicroRNA Cancer Regulation: Advanced
Concepts, Bioinformatics and Systems Biology Tools. Advances in Experimental
Medicine and Biology Vol. 774, Springer, 2013, pp. 189-224 | 10.1007/978-94-007-5590-1_11 | null | q-bio.MN | http://creativecommons.org/licenses/by/3.0/ | MicroRNAs can affect protein translation using nine mechanistically
different mechanisms, including repression of initiation and degradation of the
transcript. There is a hot debate in the current literature about which
mechanism plays the dominant role in living cells, and in which situations.
Worse, the same experimental systems dealing with the same pairs of mRNA and
miRNA can provide ambiguous evidence about which mechanism of translation
repression is actually observed in the experiment. We start by reviewing the
current knowledge of various mechanisms of miRNA action and suggest that
mathematical modeling can help resolve some of the controversial
interpretations. We describe three simple mathematical models of miRNA
translation that can be used as tools in interpreting the experimental data on
the dynamics of protein synthesis. The most complex model developed by us
includes all known mechanisms of miRNA action. It allowed us to study possible
dynamical patterns corresponding to different miRNA-mediated mechanisms of
translation repression and to suggest concrete recipes for determining the
dominant mechanism of miRNA action in the form of kinetic signatures. Using
computational experiments and systematizing existing evidence from the
literature, we justify a hypothesis about the co-existence of distinct
miRNA-mediated mechanisms of translation repression. The actually observed
mechanism will be the one acting on or changing the limiting "place" of the
translation process. The limiting place can vary from one experimental setting
to another. This model explains the majority of the controversies reported in
the literature.
| [
{
"created": "Mon, 6 Feb 2012 19:18:43 GMT",
"version": "v1"
},
{
"created": "Sat, 12 May 2012 15:32:56 GMT",
"version": "v2"
}
] | 2013-04-09 | [
[
"Zinovyev",
"A.",
""
],
[
"Morozova",
"N.",
""
],
[
"Gorban",
"A. N.",
""
],
[
"Harel-Belan",
"A.",
""
]
] | MicroRNAs can affect protein translation using nine mechanistically different mechanisms, including repression of initiation and degradation of the transcript. There is a hot debate in the current literature about which mechanism plays the dominant role in living cells, and in which situations. Worse, the same experimental systems dealing with the same pairs of mRNA and miRNA can provide ambiguous evidence about which mechanism of translation repression is actually observed in the experiment. We start by reviewing the current knowledge of various mechanisms of miRNA action and suggest that mathematical modeling can help resolve some of the controversial interpretations. We describe three simple mathematical models of miRNA translation that can be used as tools in interpreting the experimental data on the dynamics of protein synthesis. The most complex model developed by us includes all known mechanisms of miRNA action. It allowed us to study possible dynamical patterns corresponding to different miRNA-mediated mechanisms of translation repression and to suggest concrete recipes for determining the dominant mechanism of miRNA action in the form of kinetic signatures. Using computational experiments and systematizing existing evidence from the literature, we justify a hypothesis about the co-existence of distinct miRNA-mediated mechanisms of translation repression. The actually observed mechanism will be the one acting on or changing the limiting "place" of the translation process. The limiting place can vary from one experimental setting to another. This model explains the majority of the controversies reported in the literature. |
q-bio/0610046 | Piotr Szymczak | P. Szymczak and Marek Cieplak | Stretching of proteins in a uniform flow | J. Chem. Phys. (in press) | null | 10.1063/1.2358346 | null | q-bio.BM | null | Stretching of a protein by a fluid flow is compared to that in a force-clamp
apparatus. The comparison is made within a simple topology-based dynamical
model of a protein in which the effects of the flow are implemented using
Langevin dynamics. We demonstrate that unfolding induced by a uniform flow
shows a richer behavior than that in the force clamp. The dynamics of unfolding
is found to depend strongly on the selection of the amino acid, usually one of
the termini, which is anchored. These features offer potentially wider
diagnostic tools to investigate the structure of proteins compared to
experiments based on atomic force microscopy.
| [
{
"created": "Wed, 25 Oct 2006 09:39:07 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Szymczak",
"P.",
""
],
[
"Cieplak",
"Marek",
""
]
] | Stretching of a protein by a fluid flow is compared to that in a force-clamp apparatus. The comparison is made within a simple topology-based dynamical model of a protein in which the effects of the flow are implemented using Langevin dynamics. We demonstrate that unfolding induced by a uniform flow shows a richer behavior than that in the force clamp. The dynamics of unfolding is found to depend strongly on the selection of the amino acid, usually one of the termini, which is anchored. These features offer potentially wider diagnostic tools to investigate the structure of proteins compared to experiments based on atomic force microscopy. |
0808.2490 | Tamara Mihaljev | Tamara Mihaljev and Barbara Drossel | Evolution of a population of random Boolean networks | 10 pages, 17 pictures | null | 10.1140/epjb/e2009-00032-8 | null | q-bio.PE q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the evolution of populations of random Boolean networks under
selection for robustness of the dynamics with respect to the perturbation of
the state of a node. The fitness landscape contains a huge plateau of maximum
fitness that spans the entire network space. When selection is so strong that
it dominates over drift, the evolutionary process is accompanied by a slow
increase in the mean connectivity and a slow decrease in the mean fitness.
Populations evolved with higher mutation rates show a higher robustness under
mutations. This means that even though all the evolved populations exist close
to the plateau of maximum fitness, they end up in different regions of network
space.
| [
{
"created": "Mon, 18 Aug 2008 21:17:56 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Mihaljev",
"Tamara",
""
],
[
"Drossel",
"Barbara",
""
]
] | We investigate the evolution of populations of random Boolean networks under selection for robustness of the dynamics with respect to the perturbation of the state of a node. The fitness landscape contains a huge plateau of maximum fitness that spans the entire network space. When selection is so strong that it dominates over drift, the evolutionary process is accompanied by a slow increase in the mean connectivity and a slow decrease in the mean fitness. Populations evolved with higher mutation rates show a higher robustness under mutations. This means that even though all the evolved populations exist close to the plateau of maximum fitness, they end up in different regions of network space. |