id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0811.2838 | Tom Chou | Sarah A. Nowak, Tom Chou | Mechanisms of receptor/coreceptor-mediated entry of enveloped viruses | 10 Figures | Biophysical Journal, 96, 2624-2636, (2009) | 10.1016/j.bpj.2009.01.018 | null | q-bio.SC q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enveloped viruses enter host cells either through endocytosis, or by direct
fusion of the viral membrane envelope and the membrane of the host cell.
However, some viruses, such as HIV-1, HSV-1, and Epstein-Barr virus, can enter
a cell through either mechanism, with the choice of pathway often a function of
the ambient physicochemical conditions, such as temperature and pH. We develop a
stochastic model that describes the entry process at the level of binding of
viral glycoprotein spikes to cell membrane receptors and coreceptors. In our
model, receptors attach the cell membrane to the viral membrane, while
subsequent binding of coreceptors enables fusion. The model quantifies the
competition between fusion and endocytotic entry pathways. Relative
probabilities for each pathway are computed numerically, as well as
analytically in the high viral spike density limit. We delineate parameter
regimes in which fusion or endocytosis is dominant. These parameters are
related to measurable and potentially controllable quantities such as membrane
bending rigidity and receptor, coreceptor, and viral spike densities.
Experimental implications of our mechanistic hypotheses are proposed and
discussed.
| [
{
"created": "Tue, 18 Nov 2008 05:06:44 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Nowak",
"Sarah A.",
""
],
[
"Chou",
"Tom",
""
]
] | Enveloped viruses enter host cells either through endocytosis, or by direct fusion of the viral membrane envelope and the membrane of the host cell. However, some viruses, such as HIV-1, HSV-1, and Epstein-Barr virus, can enter a cell through either mechanism, with the choice of pathway often a function of the ambient physicochemical conditions, such as temperature and pH. We develop a stochastic model that describes the entry process at the level of binding of viral glycoprotein spikes to cell membrane receptors and coreceptors. In our model, receptors attach the cell membrane to the viral membrane, while subsequent binding of coreceptors enables fusion. The model quantifies the competition between fusion and endocytotic entry pathways. Relative probabilities for each pathway are computed numerically, as well as analytically in the high viral spike density limit. We delineate parameter regimes in which fusion or endocytosis is dominant. These parameters are related to measurable and potentially controllable quantities such as membrane bending rigidity and receptor, coreceptor, and viral spike densities. Experimental implications of our mechanistic hypotheses are proposed and discussed. |
1905.04138 | Pablo Sartori | Pablo Sartori | Effect of curvature and normal forces on motor regulation of cilia | null | null | null | null | q-bio.SC cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cilia are ubiquitous organelles involved in eukaryotic motility. They are
long, slender, and motile protrusions from the cell body. They undergo active
regular oscillatory beating patterns that can propel cells, such as the algae
Chlamydomonas, through fluids. When many cilia beat in synchrony they can also
propel fluid along the surfaces of cells, as is the case of nodal cilia.
The main structural elements inside the cilium are microtubules. There are
also molecular motors of the dynein family that actively power the motion of
the cilium. These motors transform chemical energy in the form of ATP into
mechanical forces that produce sliding displacement between the microtubules.
This sliding is converted to bending by constraints at the base and/or along
the length of the cilium. Forces and displacements within the cilium can
regulate dyneins and provide a feedback mechanism: the dyneins generate forces,
deforming the cilium; the deformations, in turn, regulate the dyneins. This
feedback is believed to be the origin of the coordination of dyneins in space
and time which underlies the regularity of the beat pattern.
Goals and approach. While the mechanism by which dyneins bend the cilium is
understood, the feedback mechanism is much less clear. The two key questions
are: which forces and displacements are the most relevant in regulating the
beat? and how exactly does this regulation occur?
In this thesis we develop a framework to describe the spatio-temporal
patterns of a cilium with different mechanisms of motor regulation.
Characterizing and comparing the predicted shapes and beat patterns of these
different mechanisms to those observed in experiments provides us with further
understanding of how dyneins are regulated. This comparison is done both with
a linear model that can be solved analytically and with a non-linear model that
we solve numerically.
| [
{
"created": "Fri, 10 May 2019 12:52:45 GMT",
"version": "v1"
}
] | 2019-05-13 | [
[
"Sartori",
"Pablo",
""
]
] | Cilia are ubiquitous organelles involved in eukaryotic motility. They are long, slender, and motile protrusions from the cell body. They undergo active regular oscillatory beating patterns that can propel cells, such as the algae Chlamydomonas, through fluids. When many cilia beat in synchrony they can also propel fluid along the surfaces of cells, as is the case of nodal cilia. The main structural elements inside the cilium are microtubules. There are also molecular motors of the dynein family that actively power the motion of the cilium. These motors transform chemical energy in the form of ATP into mechanical forces that produce sliding displacement between the microtubules. This sliding is converted to bending by constraints at the base and/or along the length of the cilium. Forces and displacements within the cilium can regulate dyneins and provide a feedback mechanism: the dyneins generate forces, deforming the cilium; the deformations, in turn, regulate the dyneins. This feedback is believed to be the origin of the coordination of dyneins in space and time which underlies the regularity of the beat pattern. Goals and approach. While the mechanism by which dyneins bend the cilium is understood, the feedback mechanism is much less clear. The two key questions are: which forces and displacements are the most relevant in regulating the beat? and how exactly does this regulation occur? In this thesis we develop a framework to describe the spatio-temporal patterns of a cilium with different mechanisms of motor regulation. Characterizing and comparing the predicted shapes and beat patterns of these different mechanisms to those observed in experiments provides us with further understanding of how dyneins are regulated. This comparison is done both with a linear model that can be solved analytically and with a non-linear model that we solve numerically. |
2003.06789 | Felix Schwietert | Felix Schwietert, Jan Kierfeld | Bistability and oscillations in cooperative microtubule and kinetochore
dynamics in the mitotic spindle | null | null | 10.1088/1367-2630/ab7ede | null | q-bio.SC physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | In the mitotic spindle microtubules attach to kinetochores via catch bonds
during metaphase, and microtubule depolymerization forces give rise to
stochastic chromosome oscillations. We investigate the cooperative stochastic
microtubule dynamics in spindle models consisting of ensembles of parallel
microtubules, which attach to a kinetochore via elastic linkers. We include the
dynamic instability of microtubules and forces on microtubules and kinetochores
from elastic linkers. A one-sided model, where an external force acts on the
kinetochore, is solved analytically employing a mean-field approach based on
Fokker-Planck equations. The solution establishes a bistable force-velocity
relation of the microtubule ensemble in agreement with stochastic simulations.
We derive constraints on linker stiffness and microtubule number for
bistability. The bistable force-velocity relation of the one-sided spindle
model gives rise to oscillations in the two-sided model, which can explain
stochastic chromosome oscillations in metaphase (directional instability). We
derive constraints on linker stiffness and microtubule number for metaphase
chromosome oscillations. Including poleward microtubule flux into the model we
can provide an explanation for the experimentally observed suppression of
chromosome oscillations in cells with high poleward flux velocities. Chromosome
oscillations persist in the presence of polar ejection forces, however, with a
reduced amplitude and a phase shift between sister kinetochores. Moreover,
polar ejection forces are necessary to align the chromosomes at the spindle
equator and stabilize an alternating oscillation pattern of the two
kinetochores. Finally, we modify the model such that microtubules can only
exert tensile forces on the kinetochore resulting in a tug-of-war between the
two microtubule ensembles. Then, induced microtubule catastrophes after
reaching the...
| [
{
"created": "Sun, 15 Mar 2020 10:37:11 GMT",
"version": "v1"
}
] | 2020-03-17 | [
[
"Schwietert",
"Felix",
""
],
[
"Kierfeld",
"Jan",
""
]
] | In the mitotic spindle microtubules attach to kinetochores via catch bonds during metaphase, and microtubule depolymerization forces give rise to stochastic chromosome oscillations. We investigate the cooperative stochastic microtubule dynamics in spindle models consisting of ensembles of parallel microtubules, which attach to a kinetochore via elastic linkers. We include the dynamic instability of microtubules and forces on microtubules and kinetochores from elastic linkers. A one-sided model, where an external force acts on the kinetochore is solved analytically employing a mean-field approach based on Fokker-Planck equations. The solution establishes a bistable force-velocity relation of the microtubule ensemble in agreement with stochastic simulations. We derive constraints on linker stiffness and microtubule number for bistability. The bistable force-velocity relation of the one-sided spindle model gives rise to oscillations in the two-sided model, which can explain stochastic chromosome oscillations in metaphase (directional instability). We derive constraints on linker stiffness and microtubule number for metaphase chromosome oscillations. Including poleward microtubule flux into the model we can provide an explanation for the experimentally observed suppression of chromosome oscillations in cells with high poleward flux velocities. Chromosome oscillations persist in the presence of polar ejection forces, however, with a reduced amplitude and a phase shift between sister kinetochores. Moreover, polar ejection forces are necessary to align the chromosomes at the spindle equator and stabilize an alternating oscillation pattern of the two kinetochores. Finally, we modify the model such that microtubules can only exert tensile forces on the kinetochore resulting in a tug-of-war between the two microtubule ensembles. Then, induced microtubule catastrophes after reaching the... |
2108.01974 | Thomas Oikonomou | O. Farzadian, T. Oikonomou, M. Moradkhani | Melting process of twisted DNA in a thermal bath | 10 pages; 9 figures | null | null | null | q-bio.BM cond-mat.soft cond-mat.stat-mech | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We investigate the melting transition of DNA sequences embedded in a Langevin
fluctuation-dissipation thermal bath. Torsional effects are accounted for by a
twist angle $\varphi$ between neighboring base pairs stacked along the molecule
backbone. Our simulation results show that increasing the twist angle shifts
the melting temperature linearly, with a positive slope. Beyond the so-called
equilibrium angle $\varphi_\mathrm{eq}$, the DNA chain becomes very rigid
against opening, and accordingly very high temperatures are required to
initiate the melting process. In such cases, however, the biofunctionality of
DNA is destroyed beforehand, so that the melting process observed in our model
becomes biologically irrelevant. We believe that the outcome of this survey
will provide a deeper understanding of the interplay between DNA twisting and
the melting transition, enabling precise control of DNA behavior.
| [
{
"created": "Wed, 4 Aug 2021 11:31:02 GMT",
"version": "v1"
}
] | 2021-08-05 | [
[
"Farzadian",
"O.",
""
],
[
"Oikonomou",
"T.",
""
],
[
"Moradkhani",
"M.",
""
]
] | We investigate the melting transition of DNA sequences embedded in a Langevin fluctuation-dissipation thermal bath. Torsional effects are accounted for by a twist angle $\varphi$ between neighboring base pairs stacked along the molecule backbone. Our simulation results show that increasing the twist angle shifts the melting temperature linearly, with a positive slope. Beyond the so-called equilibrium angle $\varphi_\mathrm{eq}$, the DNA chain becomes very rigid against opening, and accordingly very high temperatures are required to initiate the melting process. In such cases, however, the biofunctionality of DNA is destroyed beforehand, so that the melting process observed in our model becomes biologically irrelevant. We believe that the outcome of this survey will provide a deeper understanding of the interplay between DNA twisting and the melting transition, enabling precise control of DNA behavior. |
2205.11918 | Benjamin Mauroy | Riccardo Di Dio, Micha\"el Brunengo, Benjamin Mauroy | Influence of lung physical properties on its flow--volume curves using a
detailed multi-scale mathematical model of the lung | null | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | We develop a mathematical model of the lung that can estimate independently
the air flows and pressures in the upper bronchi. It accounts for the lung
multi-scale properties and for the air-tissue interactions. The model equations
are solved using the Discrete Fourier Transform, which allows
quasi-instantaneous solving within the limits of the model hypotheses. With
this model, we explore how the air flow--volume curves are affected by airway
obstruction or by changes in lung compliance. Our work suggests that a fine analysis of the
flow-volume curves might bring information about the inner phenomena occurring
in the lung.
| [
{
"created": "Tue, 24 May 2022 09:25:46 GMT",
"version": "v1"
}
] | 2022-05-25 | [
[
"Di Dio",
"Riccardo",
""
],
[
"Brunengo",
"Michaël",
""
],
[
"Mauroy",
"Benjamin",
""
]
] | We develop a mathematical model of the lung that can estimate independently the air flows and pressures in the upper bronchi. It accounts for the lung multi-scale properties and for the air-tissue interactions. The model equations are solved using the Discrete Fourier Transform, which allows quasi-instantaneous solving within the limits of the model hypotheses. With this model, we explore how the air flow--volume curves are affected by airway obstruction or by changes in lung compliance. Our work suggests that a fine analysis of the flow-volume curves might bring information about the inner phenomena occurring in the lung. |
0810.1606 | Hiroshi Nishiura | Hiroshi Nishiura | Backcalculation of the disease-age specific frequency of secondary
transmission of primary pneumonic plague | null | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aim: To assess the frequency of secondary transmissions of primary pneumonic
plague relative to the onset of fever. Methods: A simple backcalculation method
was employed to estimate the frequency of secondary transmissions relative to
disease-age. A likelihood-based procedure was taken using observed
distributions of the serial interval (n = 177) and incubation period (n = 126).
Furthermore, an extended model was developed to account for the survival
probability of cases. Results: The simple backcalculation suggested that 31.0%
(95% confidence interval (CI): 11.6, 50.4) and 28.0% (95% CI: 10.2, 45.8) of
the total number of secondary transmissions had occurred on the second and
third days of the disease, respectively, and that more than four-fifths of the
secondary transmissions occurred before the end of the third day of disease.
The survivorship-adjusted frequency of secondary transmissions was obtained,
demonstrating that the infectiousness in later stages of illness was not
insignificant, and indicating that the obtained frequencies were likely biased
by underlying factors including isolation measures. Conclusion: The simple
exercise suggests a need to implement countermeasures during the pre-clinical
stage or immediately after onset. Further information is needed to elucidate the
finer details of the disease-age specific infectiousness.
| [
{
"created": "Thu, 9 Oct 2008 09:14:24 GMT",
"version": "v1"
}
] | 2008-10-10 | [
[
"Nishiura",
"Hiroshi",
""
]
] | Aim: To assess the frequency of secondary transmissions of primary pneumonic plague relative to the onset of fever. Methods: A simple backcalculation method was employed to estimate the frequency of secondary transmissions relative to disease-age. A likelihood-based procedure was taken using observed distributions of the serial interval (n = 177) and incubation period (n = 126). Furthermore, an extended model was developed to account for the survival probability of cases. Results: The simple backcalculation suggested that 31.0% (95% confidence interval (CI): 11.6, 50.4) and 28.0% (95% CI: 10.2, 45.8) of the total number of secondary transmissions had occurred on the second and third days of the disease, respectively, and that more than four-fifths of the secondary transmissions occurred before the end of the third day of disease. The survivorship-adjusted frequency of secondary transmissions was obtained, demonstrating that the infectiousness in later stages of illness was not insignificant, and indicating that the obtained frequencies were likely biased by underlying factors including isolation measures. Conclusion: The simple exercise suggests a need to implement countermeasures during the pre-clinical stage or immediately after onset. Further information is needed to elucidate the finer details of the disease-age specific infectiousness. |
q-bio/0403036 | Apoorva Patel | Apoorva Patel | The Triplet Genetic Code had a Doublet Predecessor | 10 pages (v2) Expanded to include additional features, including
likely relation to the operational code of the tRNA-acceptor stem. Version to
be published in Journal of Theoretical Biology | Journal of Theoretical Biology 233 (2005) 527-532 | null | null | q-bio.GN cs.CE q-bio.BM quant-ph | null | Information theoretic analysis of genetic languages indicates that the
naturally occurring 20 amino acids and the triplet genetic code arose by
duplication of 10 amino acids of class-II and a doublet genetic code having
codons NNY and anticodons $\overleftarrow{\rm GNN}$. Evidence for this scenario
is presented based on the properties of aminoacyl-tRNA synthetases, amino acids
and nucleotide bases.
| [
{
"created": "Thu, 25 Mar 2004 12:30:03 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Oct 2004 14:42:36 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Patel",
"Apoorva",
""
]
] | Information theoretic analysis of genetic languages indicates that the naturally occurring 20 amino acids and the triplet genetic code arose by duplication of 10 amino acids of class-II and a doublet genetic code having codons NNY and anticodons $\overleftarrow{\rm GNN}$. Evidence for this scenario is presented based on the properties of aminoacyl-tRNA synthetases, amino acids and nucleotide bases. |
1003.2658 | Zhenyu Wang | Zhenyu Wang and Nigel Goldenfeld | Fixed points and limit cycles in the population dynamics of lysogenic
viruses and their hosts | 20 pages, 16 figures, 4 tables | null | 10.1103/PhysRevE.82.011918 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Starting with stochastic rate equations for the fundamental interactions
between microbes and their viruses, we derive a mean field theory for the
population dynamics of microbe-virus systems, including the effects of
lysogeny. In the absence of lysogeny, our model is a generalization of that
proposed phenomenologically by Weitz and Dushoff. In the presence of lysogeny,
we analyze the possible states of the system, identifying a novel limit cycle,
which we interpret physically. To test the robustness of our mean field
calculations to demographic fluctuations, we have compared our results with
stochastic simulations using the Gillespie algorithm. Finally, we estimate the
range of parameters that delineate the various steady states of our model.
| [
{
"created": "Sat, 13 Mar 2010 00:20:07 GMT",
"version": "v1"
}
] | 2015-05-18 | [
[
"Wang",
"Zhenyu",
""
],
[
"Goldenfeld",
"Nigel",
""
]
] | Starting with stochastic rate equations for the fundamental interactions between microbes and their viruses, we derive a mean field theory for the population dynamics of microbe-virus systems, including the effects of lysogeny. In the absence of lysogeny, our model is a generalization of that proposed phenomenologically by Weitz and Dushoff. In the presence of lysogeny, we analyze the possible states of the system, identifying a novel limit cycle, which we interpret physically. To test the robustness of our mean field calculations to demographic fluctuations, we have compared our results with stochastic simulations using the Gillespie algorithm. Finally, we estimate the range of parameters that delineate the various steady states of our model. |
1606.08281 | Jan Mikelson | Jan Mikelson, Mustafa Khammash | A parallelizable sampling method for parameter inference of large
biochemical reaction models | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of mechanistic models of biological systems is a central part
of Systems Biology. One major task in developing these models is the inference
of the correct model parameters. Due to the size of most realistic models and
their possibly complex dynamical behaviour one must usually rely on sample
based methods. In this paper we present a novel algorithm that reliably
estimates model parameters for deterministic as well as stochastic models from
trajectory data. Our algorithm iteratively samples independent particles from
the level sets of the likelihood and recovers the posterior from these level
sets. The presented approach is easily parallelizable and, by utilizing density
estimation through Dirichlet Process Gaussian Mixture Models, can deal with
high dimensional parameter spaces. We illustrate that our algorithm is
applicable to large, realistic deterministic and stochastic models and succeeds
in inferring the correct posterior from a given number of observed
trajectories. This algorithm presents a novel, computationally feasible
approach to identify parameters of large biochemical reaction models based on
sample path data.
| [
{
"created": "Mon, 27 Jun 2016 14:02:18 GMT",
"version": "v1"
}
] | 2016-06-28 | [
[
"Mikelson",
"Jan",
""
],
[
"Khammash",
"Mustafa",
""
]
] | The development of mechanistic models of biological systems is a central part of Systems Biology. One major task in developing these models is the inference of the correct model parameters. Due to the size of most realistic models and their possibly complex dynamical behaviour one must usually rely on sample based methods. In this paper we present a novel algorithm that reliably estimates model parameters for deterministic as well as stochastic models from trajectory data. Our algorithm iteratively samples independent particles from the level sets of the likelihood and recovers the posterior from these level sets. The presented approach is easily parallelizable and, by utilizing density estimation through Dirichlet Process Gaussian Mixture Models, can deal with high dimensional parameter spaces. We illustrate that our algorithm is applicable to large, realistic deterministic and stochastic models and succeeds in inferring the correct posterior from a given number of observed trajectories. This algorithm presents a novel, computationally feasible approach to identify parameters of large biochemical reaction models based on sample path data. |
2309.08765 | Clayton Kosonocky | Clayton W. Kosonocky, Claus O. Wilke, Edward M. Marcotte, and Andrew
D. Ellington | Mining Patents with Large Language Models Elucidates the Chemical
Function Landscape | Under review | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | The fundamental goal of small molecule discovery is to generate chemicals
with target functionality. While this often proceeds through structure-based
methods, we set out to investigate the practicality of orthogonal methods that
leverage the extensive corpus of chemical literature. We hypothesize that a
sufficiently large text-derived chemical function dataset would mirror the
actual landscape of chemical functionality. Such a landscape would implicitly
capture complex physical and biological interactions given that chemical
function arises from both a molecule's structure and its interacting partners.
To evaluate this hypothesis, we built a Chemical Function (CheF) dataset of
patent-derived functional labels. This dataset, comprising 631K
molecule-function pairs, was created using an LLM- and embedding-based method
to obtain functional labels for approximately 100K molecules from their
corresponding 188K unique patents. We carry out a series of analyses
demonstrating that the CheF dataset contains a semantically coherent textual
representation of the functional landscape congruent with chemical structural
relationships, thus approximating the actual chemical function landscape. We
then demonstrate that this text-based functional landscape can be leveraged to
identify drugs with target functionality using a model able to predict
functional profiles from structure alone. We believe that functional
label-guided molecular discovery may serve as an orthogonal approach to
traditional structure-based methods in the pursuit of designing novel
functional molecules.
| [
{
"created": "Fri, 15 Sep 2023 21:08:41 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Dec 2023 18:24:03 GMT",
"version": "v2"
}
] | 2023-12-20 | [
[
"Kosonocky",
"Clayton W.",
""
],
[
"Wilke",
"Claus O.",
""
],
[
"Marcotte",
"Edward M.",
""
],
[
"Ellington",
"Andrew D.",
""
]
] | The fundamental goal of small molecule discovery is to generate chemicals with target functionality. While this often proceeds through structure-based methods, we set out to investigate the practicality of orthogonal methods that leverage the extensive corpus of chemical literature. We hypothesize that a sufficiently large text-derived chemical function dataset would mirror the actual landscape of chemical functionality. Such a landscape would implicitly capture complex physical and biological interactions given that chemical function arises from both a molecule's structure and its interacting partners. To evaluate this hypothesis, we built a Chemical Function (CheF) dataset of patent-derived functional labels. This dataset, comprising 631K molecule-function pairs, was created using an LLM- and embedding-based method to obtain functional labels for approximately 100K molecules from their corresponding 188K unique patents. We carry out a series of analyses demonstrating that the CheF dataset contains a semantically coherent textual representation of the functional landscape congruent with chemical structural relationships, thus approximating the actual chemical function landscape. We then demonstrate that this text-based functional landscape can be leveraged to identify drugs with target functionality using a model able to predict functional profiles from structure alone. We believe that functional label-guided molecular discovery may serve as an orthogonal approach to traditional structure-based methods in the pursuit of designing novel functional molecules. |
q-bio/0603014 | John Collins | John Collins and Dezhe Z. Jin | Grandmother cells and the storage capacity of the human brain | Expanded treatment, with extra references. Quantitative treatment of
silent cell issues. 19 pages, 6 figures | null | null | null | q-bio.NC | null | Quian Quiroga et al. [Nature 435, 1102 (2005)] have recently discovered
neurons that appear to have the characteristics of grandmother (GM) cells. Here
we quantitatively assess the compatibility of their data with the GM-cell
hypothesis. We show that, contrary to the general impression, a GM-cell
representation can be information-theoretically efficient, but that it must be
accompanied by cells giving a distributed coding of the input. We present a
general method to deduce the sparsity distribution of the whole neuronal
population from a sample, and use it to show there are two populations of
cells: a distributed-code population of less than about 5% of the cells, and a
much more sparsely responding population of putative GM cells. With an
allowance for the number of undetected silent cells, we find that the putative
GM cells can code for 10^5 or more categories, sufficient for them to be
classic GM cells, or to be GM-like cells coding for memories. We quantify the
strong biases against detection of GM cells, and show consistency of our
results with previous measurements that find only distributed coding. We
discuss the consequences for the architecture of neural systems and synaptic
connectivity, and for the statistics of neural firing.
| [
{
"created": "Mon, 13 Mar 2006 20:33:18 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Feb 2007 18:55:15 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Collins",
"John",
""
],
[
"Jin",
"Dezhe Z.",
""
]
] | Quian Quiroga et al. [Nature 435, 1102 (2005)] have recently discovered neurons that appear to have the characteristics of grandmother (GM) cells. Here we quantitatively assess the compatibility of their data with the GM-cell hypothesis. We show that, contrary to the general impression, a GM-cell representation can be information-theoretically efficient, but that it must be accompanied by cells giving a distributed coding of the input. We present a general method to deduce the sparsity distribution of the whole neuronal population from a sample, and use it to show there are two populations of cells: a distributed-code population of less than about 5% of the cells, and a much more sparsely responding population of putative GM cells. With an allowance for the number of undetected silent cells, we find that the putative GM cells can code for 10^5 or more categories, sufficient for them to be classic GM cells, or to be GM-like cells coding for memories. We quantify the strong biases against detection of GM cells, and show consistency of our results with previous measurements that find only distributed coding. We discuss the consequences for the architecture of neural systems and synaptic connectivity, and for the statistics of neural firing. |
1312.4490 | Gunnar W. Klau | Kasper Dinkla, Mohammed El-Kebir, Cristina-Iulia Bucur, Marco
Siderius, Martine J. Smit, Michel A. Westenberg and Gunnar W. Klau | eXamine: a Cytoscape app for exploring annotated modules in networks | null | null | null | null | q-bio.MN cs.CE cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background. Biological networks have growing importance for the
interpretation of high-throughput "omics" data. Statistical and combinatorial
methods allow one to obtain mechanistic insights through the extraction of smaller
subnetwork modules. Further enrichment analyses provide set-based annotations
of these modules.
Results. We present eXamine, a set-oriented visual analysis approach for
annotated modules that displays set membership as contours on top of a
node-link layout. Our approach extends upon Self Organizing Maps to
simultaneously lay out nodes, links, and set contours.
Conclusions. We implemented eXamine as a freely available Cytoscape app.
Using eXamine we study a module that is activated by the virally-encoded
G-protein coupled receptor US28 and formulate a novel hypothesis about its
functioning.
| [
{
"created": "Mon, 16 Dec 2013 19:58:54 GMT",
"version": "v1"
}
] | 2013-12-17 | [
[
"Dinkla",
"Kasper",
""
],
[
"El-Kebir",
"Mohammed",
""
],
[
"Bucur",
"Cristina-Iulia",
""
],
[
"Siderius",
"Marco",
""
],
[
"Smit",
"Martine J.",
""
],
[
"Westenberg",
"Michel A.",
""
],
[
"Klau",
"Gunnar W.",
""
]
] | Background. Biological networks have growing importance for the interpretation of high-throughput "omics" data. Statistical and combinatorial methods allow one to obtain mechanistic insights through the extraction of smaller subnetwork modules. Further enrichment analyses provide set-based annotations of these modules. Results. We present eXamine, a set-oriented visual analysis approach for annotated modules that displays set membership as contours on top of a node-link layout. Our approach extends upon Self Organizing Maps to simultaneously lay out nodes, links, and set contours. Conclusions. We implemented eXamine as a freely available Cytoscape app. Using eXamine we study a module that is activated by the virally-encoded G-protein coupled receptor US28 and formulate a novel hypothesis about its functioning. |
1404.7108 | Philip Gerlee | Philip Gerlee and Eunjung Kim and Alexander R.A. Anderson | Bridging scales in cancer progression: Mapping genotype to phenotype
using neural networks | null | null | null | null | q-bio.TO q-bio.MN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this review we summarize our recent efforts in trying to understand the
role of heterogeneity in cancer progression by using neural networks to
characterise different aspects of the mapping from a cancer cell's genotype and
environment to its phenotype. Our central premise is that cancer is an evolving
system subject to mutation and selection, and the primary conduit for these
processes to occur is the cancer cell whose behaviour is regulated on multiple
biological scales. The selection pressure is mainly driven by the
microenvironment that the tumour is growing in and this acts directly upon the
cell phenotype. In turn, the phenotype is driven by the intracellular pathways
that are regulated by the genotype. Integrating all of these processes is a
massive undertaking and requires bridging many biological scales (i.e.
genotype, pathway, phenotype and environment) that we will only scratch the
surface of in this review. We will focus on models that use neural networks as
a means of connecting these different biological scales, since they allow us to
easily create heterogeneity for selection to act upon and importantly this
heterogeneity can be implemented at different biological scales. More
specifically, we consider three different neural networks that bridge different
aspects of these scales and the dialogue with the micro-environment, (i) the
impact of the micro-environment on evolutionary dynamics, (ii) the mapping from
genotype to phenotype under drug-induced perturbations and (iii) pathway
activity in both normal and cancer cells under different micro-environmental
conditions.
| [
{
"created": "Mon, 28 Apr 2014 19:37:21 GMT",
"version": "v1"
}
] | 2014-04-29 | [
[
"Gerlee",
"Philip",
""
],
[
"Kim",
"Eunjung",
""
],
[
"Anderson",
"Alexander R. A.",
""
]
] | In this review we summarize our recent efforts in trying to understand the role of heterogeneity in cancer progression by using neural networks to characterise different aspects of the mapping from a cancer cell's genotype and environment to its phenotype. Our central premise is that cancer is an evolving system subject to mutation and selection, and the primary conduit for these processes to occur is the cancer cell whose behaviour is regulated on multiple biological scales. The selection pressure is mainly driven by the microenvironment that the tumour is growing in and this acts directly upon the cell phenotype. In turn, the phenotype is driven by the intracellular pathways that are regulated by the genotype. Integrating all of these processes is a massive undertaking and requires bridging many biological scales (i.e. genotype, pathway, phenotype and environment) that we will only scratch the surface of in this review. We will focus on models that use neural networks as a means of connecting these different biological scales, since they allow us to easily create heterogeneity for selection to act upon and importantly this heterogeneity can be implemented at different biological scales. More specifically, we consider three different neural networks that bridge different aspects of these scales and the dialogue with the micro-environment, (i) the impact of the micro-environment on evolutionary dynamics, (ii) the mapping from genotype to phenotype under drug-induced perturbations and (iii) pathway activity in both normal and cancer cells under different micro-environmental conditions. |
0711.4512 | Marco Morelli | M.J. Morelli, R.J. Allen, S. Tanase-Nicola, P.R. ten Wolde | Eliminating fast reactions in stochastic simulations of biochemical
networks: a bistable genetic switch | 46 pages, 5 figures | null | 10.1063/1.2821957 | null | q-bio.QM q-bio.MN | null | In many stochastic simulations of biochemical reaction networks, it is
desirable to ``coarse-grain'' the reaction set, removing fast reactions while
retaining the correct system dynamics. Various coarse-graining methods have
been proposed, but it remains unclear which methods are reliable and which
reactions can safely be eliminated. We address these issues for a model gene
regulatory network that is particularly sensitive to dynamical fluctuations: a
bistable genetic switch. We remove protein-DNA and/or protein-protein
association-dissociation reactions from the reaction set, using various
coarse-graining strategies. We determine the effects on the steady-state
probability distribution function and on the rate of fluctuation-driven switch
flipping transitions. We find that protein-protein interactions may be safely
eliminated from the reaction set, but protein-DNA interactions may not. We also
find that it is important to use the chemical master equation rather than
macroscopic rate equations to compute effective propensity functions for the
coarse-grained reactions.
| [
{
"created": "Wed, 28 Nov 2007 14:36:40 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Morelli",
"M. J.",
""
],
[
"Allen",
"R. J.",
""
],
[
"Tanase-Nicola",
"S.",
""
],
[
"Wolde",
"P. R. ten",
""
]
] | In many stochastic simulations of biochemical reaction networks, it is desirable to ``coarse-grain'' the reaction set, removing fast reactions while retaining the correct system dynamics. Various coarse-graining methods have been proposed, but it remains unclear which methods are reliable and which reactions can safely be eliminated. We address these issues for a model gene regulatory network that is particularly sensitive to dynamical fluctuations: a bistable genetic switch. We remove protein-DNA and/or protein-protein association-dissociation reactions from the reaction set, using various coarse-graining strategies. We determine the effects on the steady-state probability distribution function and on the rate of fluctuation-driven switch flipping transitions. We find that protein-protein interactions may be safely eliminated from the reaction set, but protein-DNA interactions may not. We also find that it is important to use the chemical master equation rather than macroscopic rate equations to compute effective propensity functions for the coarse-grained reactions. |
2308.15025 | Suman Kumar Banik | Tuhin Subhra Roy, Mintu Nandi, Pinaki Chaudhury, Sudip Chattopadhyay,
and Suman K Banik | Interplay of degeneracy and non-degeneracy in fluctuations propagation
in coherent feed-forward loop motif | 20 pages, 4 figures | J. Stat. Mech. 2023 (2023) 093502 | 10.1088/1742-5468/acf8b9 | null | q-bio.MN physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | We present a stochastic framework to decipher fluctuations propagation in
classes of coherent feed-forward loops. The systematic contribution of the
direct (one-step) and indirect (two-step) pathways is considered to quantify
fluctuations of the output node. We also consider both additive and
multiplicative integration mechanisms of the two parallel pathways (one-step
and two-step). Analytical expression of the output node's coefficient of
variation shows contributions of intrinsic, one-step, two-step, and
cross-interaction in closed form. We observe a diverse range of degeneracy and
non-degeneracy in each of the decomposed fluctuations term and their
contribution to the overall output fluctuations of each coherent feed-forward
loop motif. Analysis of output fluctuations reveals a maximal level of
fluctuations of the coherent feed-forward loop motif of type 1.
| [
{
"created": "Tue, 29 Aug 2023 05:15:36 GMT",
"version": "v1"
}
] | 2023-10-17 | [
[
"Roy",
"Tuhin Subhra",
""
],
[
"Nandi",
"Mintu",
""
],
[
"Chaudhury",
"Pinaki",
""
],
[
"Chattopadhyay",
"Sudip",
""
],
[
"Banik",
"Suman K",
""
]
] | We present a stochastic framework to decipher fluctuations propagation in classes of coherent feed-forward loops. The systematic contribution of the direct (one-step) and indirect (two-step) pathways is considered to quantify fluctuations of the output node. We also consider both additive and multiplicative integration mechanisms of the two parallel pathways (one-step and two-step). Analytical expression of the output node's coefficient of variation shows contributions of intrinsic, one-step, two-step, and cross-interaction in closed form. We observe a diverse range of degeneracy and non-degeneracy in each of the decomposed fluctuations term and their contribution to the overall output fluctuations of each coherent feed-forward loop motif. Analysis of output fluctuations reveals a maximal level of fluctuations of the coherent feed-forward loop motif of type 1. |
2305.07369 | Shuhei Kitamura PhD | Shuhei Kitamura and Aya S. Ihara | Semantic Processing of Political Words in Naturalistic Information
Differs by Political Orientation | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Worldviews may differ significantly according to political orientation. Even
a single word can have a completely different meaning depending on political
orientation. However, no direct evidence has been obtained on differences in
the semantic processing of single words in naturalistic information between
individuals with different political orientations. The present study aimed to
fill this gap. We measured electroencephalographic signals while participants
with different political orientations listened to naturalistic content.
Responses for moral-, ideology-, and policy-related words between and within
the participant groups were then compared. Within-group comparisons showed that
right-leaning participants reacted more to moral-related words than to
policy-related words, while left-leaning participants reacted more to
policy-related words than to moral-related words. In addition, between-group
comparisons also showed that neural responses for moral-related words were
greater in right-leaning participants than in left-leaning participants and
those for policy-related words were lesser in right-leaning participants than
in neutral participants. There was a significant correlation between the
predicted and self-reported political orientations. In summary, the study found
that people with different political orientations differ in semantic processing
at the level of a single word. These findings have implications for
understanding the mechanisms of political polarization and for making policy
messages more effective.
| [
{
"created": "Fri, 12 May 2023 10:35:56 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Jun 2023 12:57:01 GMT",
"version": "v2"
}
] | 2023-06-19 | [
[
"Kitamura",
"Shuhei",
""
],
[
"Ihara",
"Aya S.",
""
]
] | Worldviews may differ significantly according to political orientation. Even a single word can have a completely different meaning depending on political orientation. However, no direct evidence has been obtained on differences in the semantic processing of single words in naturalistic information between individuals with different political orientations. The present study aimed to fill this gap. We measured electroencephalographic signals while participants with different political orientations listened to naturalistic content. Responses for moral-, ideology-, and policy-related words between and within the participant groups were then compared. Within-group comparisons showed that right-leaning participants reacted more to moral-related words than to policy-related words, while left-leaning participants reacted more to policy-related words than to moral-related words. In addition, between-group comparisons also showed that neural responses for moral-related words were greater in right-leaning participants than in left-leaning participants and those for policy-related words were lesser in right-leaning participants than in neutral participants. There was a significant correlation between the predicted and self-reported political orientations. In summary, the study found that people with different political orientations differ in semantic processing at the level of a single word. These findings have implications for understanding the mechanisms of political polarization and for making policy messages more effective. |
1107.1549 | Max Souza | Fabio A. C. C. Chalub and Max O. Souza | The frequency-dependent Wright-Fisher model: diffusive and non-diffusive
approximations | null | J. Math. Biol., 68 (5), 1089--1133 (2014) | 10.1007/s00285-013-0657-7 | null | q-bio.PE math.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a class of processes that are akin to the Wright-Fisher model, with
transition probabilities weighted in terms of the frequency-dependent fitness
of the population types. By considering an approximate weak formulation of the
discrete problem, we are able to derive a corresponding continuous weak
formulation for the probability density. Therefore, we obtain a family of
partial differential equations (PDE) for the evolution of the probability
density, which will be an approximation of the discrete process in the
joint large population, small time-steps and weak selection limit. If the
fitness functions are sufficiently regular, we can recast the weak formulation
in a more standard formulation, without any boundary conditions, but
supplemented by a number of conservation laws. The equations in this family can
be purely diffusive, purely hyperbolic or of convection-diffusion type, with
frequency dependent convection. The particular outcome will depend on the
assumed scalings. The diffusive equations are of the degenerate type; using a
duality approach, we also obtain a frequency dependent version of the Kimura
equation without any further assumptions. We also show that the convective
approximation is related to the replicator dynamics and provide an estimate
of how accurate the convective approximation is, relative to the
convection-diffusion approximation. In particular, we show that the mode, but
not the expected value, of the probability distribution is modelled by the
replicator dynamics. Some numerical simulations that illustrate the results are
also presented.
| [
{
"created": "Fri, 8 Jul 2011 03:14:52 GMT",
"version": "v1"
},
{
"created": "Thu, 3 May 2012 12:55:45 GMT",
"version": "v2"
},
{
"created": "Fri, 5 Oct 2012 01:15:52 GMT",
"version": "v3"
},
{
"created": "Tue, 12 Feb 2013 13:58:35 GMT",
"version": "v4"
}
] | 2014-08-28 | [
[
"Chalub",
"Fabio A. C. C.",
""
],
[
"Souza",
"Max O.",
""
]
] | We study a class of processes that are akin to the Wright-Fisher model, with transition probabilities weighted in terms of the frequency-dependent fitness of the population types. By considering an approximate weak formulation of the discrete problem, we are able to derive a corresponding continuous weak formulation for the probability density. Therefore, we obtain a family of partial differential equations (PDE) for the evolution of the probability density, which will be an approximation of the discrete process in the joint large population, small time-steps and weak selection limit. If the fitness functions are sufficiently regular, we can recast the weak formulation in a more standard formulation, without any boundary conditions, but supplemented by a number of conservation laws. The equations in this family can be purely diffusive, purely hyperbolic or of convection-diffusion type, with frequency dependent convection. The particular outcome will depend on the assumed scalings. The diffusive equations are of the degenerate type; using a duality approach, we also obtain a frequency dependent version of the Kimura equation without any further assumptions. We also show that the convective approximation is related to the replicator dynamics and provide an estimate of how accurate the convective approximation is, relative to the convection-diffusion approximation. In particular, we show that the mode, but not the expected value, of the probability distribution is modelled by the replicator dynamics. Some numerical simulations that illustrate the results are also presented. |
1403.4319 | Michel Pleimling | Shahir Mowlaei, Ahmed Roman, and Michel Pleimling | Spirals and coarsening patterns in the competition of many species: A
complex Ginzburg-Landau approach | 22 pages, 5 figures, accepted for publication in the Journal of
Physics A | J. Phys. A: Math. Theor. 47 (2014) 165001 | 10.1088/1751-8113/47/16/165001 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to model real ecological systems one has to consider many species
that interact in complex ways. However, most of the recent theoretical studies
have been restricted to few species systems with rather trivial interactions.
The few studies dealing with larger number of species and/or more complex
interaction schemes are mostly restricted to numerical explorations. In this
paper we determine, starting from the deterministic mean-field rate equations,
for large classes of systems the space of coexistence fixed points at which
biodiversity is maximal. For systems with a single coexistence fixed point we
derive complex Ginzburg-Landau equations that allow one to describe space-time
patterns realized in two space dimensions. For selected cases we compare the
theoretical predictions with the patterns observed in numerical simulations.
| [
{
"created": "Tue, 18 Mar 2014 02:27:54 GMT",
"version": "v1"
}
] | 2014-04-11 | [
[
"Mowlaei",
"Shahir",
""
],
[
"Roman",
"Ahmed",
""
],
[
"Pleimling",
"Michel",
""
]
] | In order to model real ecological systems one has to consider many species that interact in complex ways. However, most of the recent theoretical studies have been restricted to few species systems with rather trivial interactions. The few studies dealing with larger number of species and/or more complex interaction schemes are mostly restricted to numerical explorations. In this paper we determine, starting from the deterministic mean-field rate equations, for large classes of systems the space of coexistence fixed points at which biodiversity is maximal. For systems with a single coexistence fixed point we derive complex Ginzburg-Landau equations that allow one to describe space-time patterns realized in two space dimensions. For selected cases we compare the theoretical predictions with the patterns observed in numerical simulations. |
1307.2506 | Song Xu | Song Xu, Xinan Wang, Shuyun Jiao | Landscape construction in non-gradient dynamics: A case from evolution | arXiv admin note: text overlap with arXiv:q-bio/0605020 by other
authors | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adaptive landscape has been a fundamental concept in many branches of modern
biology since Wright's first proposition in 1932. Meanwhile, the general
existence of landscape remains controversial. The causes include the mixed uses
of different landscape definitions with their own different aims and
advantages. Sometimes the difficulty and the impossibility of the landscape
construction for complex models are also equated. To clarify these confusions,
based on a recent formulation of Wright's theory, the current authors construct
generalized adaptive landscape in a two-loci population model with non-gradient
dynamics, where the conventional gradient landscape does not exist. On the
generalized landscape, a population moves along an evolutionary trajectory
which always increases or conserves adaptiveness but does not necessarily
follow the steepest gradient direction. Comparisons of different aspects of
various landscapes lead to a conclusion that the generalized landscape is a
possible direction to continue the exploration of Wright's theory for complex
dynamics.
| [
{
"created": "Tue, 9 Jul 2013 16:10:01 GMT",
"version": "v1"
},
{
"created": "Mon, 21 Oct 2013 23:36:52 GMT",
"version": "v2"
},
{
"created": "Fri, 6 Jun 2014 19:56:25 GMT",
"version": "v3"
},
{
"created": "Wed, 9 Dec 2015 17:31:26 GMT",
"version": "v4"
}
] | 2015-12-10 | [
[
"Xu",
"Song",
""
],
[
"Wang",
"Xinan",
""
],
[
"Jiao",
"Shuyun",
""
]
] | Adaptive landscape has been a fundamental concept in many branches of modern biology since Wright's first proposition in 1932. Meanwhile, the general existence of landscape remains controversial. The causes include the mixed uses of different landscape definitions with their own different aims and advantages. Sometimes the difficulty and the impossibility of the landscape construction for complex models are also equated. To clarify these confusions, based on a recent formulation of Wright's theory, the current authors construct generalized adaptive landscape in a two-loci population model with non-gradient dynamics, where the conventional gradient landscape does not exist. On the generalized landscape, a population moves along an evolutionary trajectory which always increases or conserves adaptiveness but does not necessarily follow the steepest gradient direction. Comparisons of different aspects of various landscapes lead to a conclusion that the generalized landscape is a possible direction to continue the exploration of Wright's theory for complex dynamics. |
2105.15034 | Fei Tang | Fei Tang, Michael Kopp | A remark on a paper of Krotov and Hopfield [arXiv:2008.06996] | 1 page, 8 formulae | null | null | null | q-bio.NC cs.AI cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In their recent paper titled "Large Associative Memory Problem in
Neurobiology and Machine Learning" [arXiv:2008.06996] the authors gave a
biologically plausible microscopic theory from which one can recover many dense
associative memory models discussed in the literature. We show that the layers
of the recent "MLP-mixer" [arXiv:2105.01601] as well as the essentially
equivalent model in [arXiv:2105.02723] are amongst them.
| [
{
"created": "Mon, 31 May 2021 15:13:00 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Jun 2021 07:14:13 GMT",
"version": "v2"
}
] | 2021-06-04 | [
[
"Tang",
"Fei",
""
],
[
"Kopp",
"Michael",
""
]
] | In their recent paper titled "Large Associative Memory Problem in Neurobiology and Machine Learning" [arXiv:2008.06996] the authors gave a biologically plausible microscopic theory from which one can recover many dense associative memory models discussed in the literature. We show that the layers of the recent "MLP-mixer" [arXiv:2105.01601] as well as the essentially equivalent model in [arXiv:2105.02723] are amongst them. |
1706.08774 | Jean-Ren\'e Chazottes | S. Billiard, V. Bansaye, J.-R. Chazottes | Rejuvenating functional responses with renewal theory | 37 pages, 7 figures, to appear in the Journal of the Royal Society
Interface | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Functional responses are widely used to describe interactions and resources
exchange between individuals in ecology. The form given to functional responses
dramatically affects the dynamics and stability of populations and communities.
Despite their importance, functional responses are generally considered with a
phenomenological approach, without clear mechanistic justifications from
individual traits and behaviors. Here, we develop a bottom-up stochastic
framework grounded in Renewal Theory showing how functional responses emerge
from the level of the individuals through the decomposition of interactions
into different activities. Our framework has many applications for conceptual,
theoretical and empirical purposes. First, we show how the mean and variance of
classical functional responses are obtained with explicit ecological
assumptions, for instance regarding foraging behaviors. Second, we give
examples in specific ecological contexts, such as in nuptial-feeding species or
size dependent handling times. Finally, we demonstrate how to analyze data with
our framework, especially highlighting that observed variability in the number
of interactions can be used to infer parameters and compare functional response
models.
| [
{
"created": "Tue, 27 Jun 2017 11:02:57 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Aug 2018 03:31:32 GMT",
"version": "v2"
}
] | 2018-08-16 | [
[
"Billiard",
"S.",
""
],
[
"Bansaye",
"V.",
""
],
[
"Chazottes",
"J. -R.",
""
]
] | Functional responses are widely used to describe interactions and resources exchange between individuals in ecology. The form given to functional responses dramatically affects the dynamics and stability of populations and communities. Despite their importance, functional responses are generally considered with a phenomenological approach, without clear mechanistic justifications from individual traits and behaviors. Here, we develop a bottom-up stochastic framework grounded in Renewal Theory showing how functional responses emerge from the level of the individuals through the decomposition of interactions into different activities. Our framework has many applications for conceptual, theoretical and empirical purposes. First, we show how the mean and variance of classical functional responses are obtained with explicit ecological assumptions, for instance regarding foraging behaviors. Second, we give examples in specific ecological contexts, such as in nuptial-feeding species or size dependent handling times. Finally, we demonstrate how to analyze data with our framework, especially highlighting that observed variability in the number of interactions can be used to infer parameters and compare functional response models. |
1609.08855 | Maurizio Mattia | Maurizio Mattia | Low-dimensional firing rate dynamics of spiking neuron networks | 8 pages, 4 figures | null | null | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Starting from a spectral expansion of the Fokker-Planck equation for the
membrane potential density in a network of spiking neurons, a low-dimensional
dynamics of the collective firing rate is derived. As a result a $n$-order
ordinary differential equation for the network activity can be worked out by
taking into account the slowest $n$ modes of the expansion. The resulting
low-dimensional dynamics naturally takes into account the strength of the
synaptic couplings under the hypothesis of a not too fast changing membrane
potential density. By considering only the two slowest modes, the firing rate
dynamics is equivalent to the one of a damped oscillator in which the angular
frequency and the relaxation time are state-dependent. The presented results
apply to a wide class of networks of one-compartment neuron models.
| [
{
"created": "Wed, 28 Sep 2016 10:43:41 GMT",
"version": "v1"
}
] | 2016-09-29 | [
[
"Mattia",
"Maurizio",
""
]
] | Starting from a spectral expansion of the Fokker-Planck equation for the membrane potential density in a network of spiking neurons, a low-dimensional dynamics of the collective firing rate is derived. As a result a $n$-order ordinary differential equation for the network activity can be worked out by taking into account the slowest $n$ modes of the expansion. The resulting low-dimensional dynamics naturally takes into account the strength of the synaptic couplings under the hypothesis of a not too fast changing membrane potential density. By considering only the two slowest modes, the firing rate dynamics is equivalent to the one of a damped oscillator in which the angular frequency and the relaxation time are state-dependent. The presented results apply to a wide class of networks of one-compartment neuron models. |
1502.05328 | Jean Petitot | Jean Petitot | Complexity and self-organization in Turing | 33 pages, 19 figures | The Legacy of A.M. Turing, (E. Agazzi, ed.), Franco Angeli,
Milano, 149-182, 2013 | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We comment on Alan Turing's celebrated paper "The Chemical Basis of
Morphogenesis" published in 1952 in the "Philosophical Transactions of the
Royal Society of London". It is a typical example of a pioneering and inspired
work in the domain of mathematical modelling.
| [
{
"created": "Wed, 18 Feb 2015 18:17:10 GMT",
"version": "v1"
}
] | 2015-02-19 | [
[
"Petitot",
"Jean",
""
]
] | We comment on Alan Turing's celebrated paper "The Chemical Basis of Morphogenesis" published in 1952 in the "Philosophical Transactions of the Royal Society of London". It is a typical example of a pioneering and inspired work in the domain of mathematical modelling. |
q-bio/0412013 | Taekjip Ha | Ivan Rasnik, Sean A. McKinney and Taekjip Ha | Surfaces and orientations: much to fret about? | 9 pages, 7 figures | null | null | null | q-bio.BM | null | Single molecule FRET (fluorescence resonance energy transfer) is a powerful
technique for detecting real-time conformational changes and molecular
interactions during biological reactions. In this review, we examine different
techniques of extending observation times via immobilization and illustrate how
useful biological information can be obtained from single molecule FRET time
trajectories with or without absolute distance information.
| [
{
"created": "Tue, 7 Dec 2004 02:39:38 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Rasnik",
"Ivan",
""
],
[
"McKinney",
"Sean A.",
""
],
[
"Ha",
"Taekjip",
""
]
] | Single molecule FRET (fluorescence resonance energy transfer) is a powerful technique for detecting real-time conformational changes and molecular interactions during biological reactions. In this review, we examine different techniques of extending observation times via immobilization and illustrate how useful biological information can be obtained from single molecule FRET time trajectories with or without absolute distance information. |
2304.03889 | Mohamed Amine Ketata | Mohamed Amine Ketata, Cedrik Laue, Ruslan Mammadov, Hannes St\"ark,
Menghua Wu, Gabriele Corso, C\'eline Marquet, Regina Barzilay, Tommi S.
Jaakkola | DiffDock-PP: Rigid Protein-Protein Docking with Diffusion Models | ICLR Machine Learning for Drug Discovery (MLDD) Workshop 2023 | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Understanding how proteins structurally interact is crucial to modern
biology, with applications in drug discovery and protein design. Recent machine
learning methods have formulated protein-small molecule docking as a generative
problem with significant performance boosts over both traditional and deep
learning baselines. In this work, we propose a similar approach for rigid
protein-protein docking: DiffDock-PP is a diffusion generative model that
learns to translate and rotate unbound protein structures into their bound
conformations. We achieve state-of-the-art performance on DIPS with a median
C-RMSD of 4.85, outperforming all considered baselines. Additionally,
DiffDock-PP is faster than all search-based methods and generates reliable
confidence estimates for its predictions. Our code is publicly available at
$\texttt{https://github.com/ketatam/DiffDock-PP}$
| [
{
"created": "Sat, 8 Apr 2023 02:10:44 GMT",
"version": "v1"
}
] | 2023-04-11 | [
[
"Ketata",
"Mohamed Amine",
""
],
[
"Laue",
"Cedrik",
""
],
[
"Mammadov",
"Ruslan",
""
],
[
"Stärk",
"Hannes",
""
],
[
"Wu",
"Menghua",
""
],
[
"Corso",
"Gabriele",
""
],
[
"Marquet",
"Céline",
""
],
[
"Barzilay",
"Regina",
""
],
[
"Jaakkola",
"Tommi S.",
""
]
] | Understanding how proteins structurally interact is crucial to modern biology, with applications in drug discovery and protein design. Recent machine learning methods have formulated protein-small molecule docking as a generative problem with significant performance boosts over both traditional and deep learning baselines. In this work, we propose a similar approach for rigid protein-protein docking: DiffDock-PP is a diffusion generative model that learns to translate and rotate unbound protein structures into their bound conformations. We achieve state-of-the-art performance on DIPS with a median C-RMSD of 4.85, outperforming all considered baselines. Additionally, DiffDock-PP is faster than all search-based methods and generates reliable confidence estimates for its predictions. Our code is publicly available at $\texttt{https://github.com/ketatam/DiffDock-PP}$ |
1605.02028 | Michel Pleimling | Ahmed Roman, Debanjan Dasgupta, and Michel Pleimling | A theoretical approach to understand spatial organization in complex
ecologies | 14 pages, 3 figures, accepted for publication in the Journal of
Theoretical Biology | J. Theor. Biol. 403, 10 (2016) | 10.1016/j.jtbi.2016.05.009 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the fate of ecologies is a daunting, albeit extremely important,
task. As part of this task one needs to develop an understanding of the
organization, hierarchies, and correlations among the species forming the
ecology. Focusing on complex food networks we present a theoretical method that
allows one to achieve this understanding. Starting from the adjacency matrix, the
method derives specific matrices that encode the various inter-species
relationships. The full potential of the method is achieved in a spatial
setting where one obtains detailed predictions for the emerging space-time
patterns. For a variety of cases these theoretical predictions are verified
through numerical simulations.
| [
{
"created": "Fri, 6 May 2016 18:43:09 GMT",
"version": "v1"
}
] | 2016-08-31 | [
[
"Roman",
"Ahmed",
""
],
[
"Dasgupta",
"Debanjan",
""
],
[
"Pleimling",
"Michel",
""
]
] | Predicting the fate of ecologies is a daunting, albeit extremely important, task. As part of this task one needs to develop an understanding of the organization, hierarchies, and correlations among the species forming the ecology. Focusing on complex food networks we present a theoretical method that allows one to achieve this understanding. Starting from the adjacency matrix, the method derives specific matrices that encode the various inter-species relationships. The full potential of the method is achieved in a spatial setting where one obtains detailed predictions for the emerging space-time patterns. For a variety of cases these theoretical predictions are verified through numerical simulations.
1903.05984 | Sharmistha Mishra | Joshua Feldman, Sharmistha Mishra | What could re-infection tell us about R0? a modeling case-study of
syphilis transmission | 1 table, 4 figures | null | null | null | q-bio.PE math.DS physics.soc-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Many infectious diseases can lead to re-infection. We examined the
relationship between the prevalence of repeat infection and the basic
reproductive number (R0). First we solved a generic, deterministic
compartmental model of re-infection to derive an analytic solution for the
relationship. We then numerically solved a disease specific model of syphilis
transmission that explicitly tracked re-infection. We derived a generic
expression that reflects a non-linear and monotonically increasing relationship
between the proportion of re-infection and R0 and which is attenuated by entry/exit
rates and recovery (i.e. treatment). Numerical simulations from the syphilis
model aligned with the analytic relationship. Re-infection proportions could be
used to understand how far regions are from epidemic control, and should be
included as a routine indicator in infectious disease surveillance.
| [
{
"created": "Thu, 14 Mar 2019 13:29:54 GMT",
"version": "v1"
}
] | 2019-03-15 | [
[
"Feldman",
"Joshua",
""
],
[
"Mishra",
"Sharmistha",
""
]
] | Many infectious diseases can lead to re-infection. We examined the relationship between the prevalence of repeat infection and the basic reproductive number (R0). First we solved a generic, deterministic compartmental model of re-infection to derive an analytic solution for the relationship. We then numerically solved a disease specific model of syphilis transmission that explicitly tracked re-infection. We derived a generic expression that reflects a non-linear and monotonically increasing relationship between the proportion of re-infection and R0 and which is attenuated by entry/exit rates and recovery (i.e. treatment). Numerical simulations from the syphilis model aligned with the analytic relationship. Re-infection proportions could be used to understand how far regions are from epidemic control, and should be included as a routine indicator in infectious disease surveillance.
2111.06593 | Yanyi Ding | Yanyi Ding, Zhiyi Kuang, Yuxin Pei, Jeff Tan, Ziyu Zhang, Joseph Konan | Using Deep Learning Sequence Models to Identify SARS-CoV-2 Divergence | null | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | SARS-CoV-2 is an upper respiratory system RNA virus that has caused over 3
million deaths and infected over 150 million worldwide as of May 2021. With
thousands of strains sequenced to date, SARS-CoV-2 mutations pose significant
challenges to scientists in keeping pace with vaccine development and public
health measures. Therefore, an efficient method of identifying the divergence
of lab samples from patients would greatly aid the documentation of SARS-CoV-2
genomics. In this study, we propose a neural network model that leverages
recurrent and convolutional units to directly take in amino acid sequences of
spike proteins and classify corresponding clades. We also compared our model's
performance with Bidirectional Encoder Representations from Transformers (BERT)
pre-trained on a protein database. Our approach has the potential of providing a
more computationally efficient alternative to current homology based
intra-species differentiation.
| [
{
"created": "Fri, 12 Nov 2021 07:52:11 GMT",
"version": "v1"
}
] | 2021-11-15 | [
[
"Ding",
"Yanyi",
""
],
[
"Kuang",
"Zhiyi",
""
],
[
"Pei",
"Yuxin",
""
],
[
"Tan",
"Jeff",
""
],
[
"Zhang",
"Ziyu",
""
],
[
"Konan",
"Joseph",
""
]
] | SARS-CoV-2 is an upper respiratory system RNA virus that has caused over 3 million deaths and infected over 150 million worldwide as of May 2021. With thousands of strains sequenced to date, SARS-CoV-2 mutations pose significant challenges to scientists in keeping pace with vaccine development and public health measures. Therefore, an efficient method of identifying the divergence of lab samples from patients would greatly aid the documentation of SARS-CoV-2 genomics. In this study, we propose a neural network model that leverages recurrent and convolutional units to directly take in amino acid sequences of spike proteins and classify corresponding clades. We also compared our model's performance with Bidirectional Encoder Representations from Transformers (BERT) pre-trained on a protein database. Our approach has the potential of providing a more computationally efficient alternative to current homology based intra-species differentiation.
0807.1943 | Patrick De Leenheer | Patrick De Leenheer and Nick Cogan | Failure of antibiotic treatment in microbial populations | 11 pages, 6 figures | null | null | null | q-bio.PE q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The tolerance of bacterial populations to biocidal or antibiotic treatment
has been well documented in both biofilm and planktonic settings. However,
there is still very little known about the mechanisms that produce this
tolerance. Evidence that small, non-mutant subpopulations of bacteria are not
affected by antibiotic challenge has been accumulating and provides an
attractive explanation for the failure of typical dosing protocols. Although a
dosing challenge can kill all the susceptible bacteria, the remaining persister
cells can serve as a source of population regrowth. We give a robust condition
for the failure of a periodic dosing protocol for a general chemostat model,
which supports the mathematical conclusions and simulations of an earlier, more
specialized batch model. Our condition implies that the treatment protocol
fails globally, in the sense that a mixed bacterial population will ultimately
persist above a level that is independent of the initial composition of the
population. We also give a sufficient condition for treatment success, at least
for initial population compositions near the steady state of interest,
corresponding to bacterial washout. Finally, we investigate how the speed at
which the bacteria are wiped out depends on the duration of administration of
the antibiotic. We find that this dependence is not necessarily monotone,
implying that optimal dosing does not necessarily correspond to continuous
administration of the antibiotic. Thus, genuine periodic protocols can be more
advantageous in treating a wide variety of bacterial infections.
| [
{
"created": "Sat, 12 Jul 2008 00:14:45 GMT",
"version": "v1"
}
] | 2008-07-15 | [
[
"De Leenheer",
"Patrick",
""
],
[
"Cogan",
"Nick",
""
]
] | The tolerance of bacterial populations to biocidal or antibiotic treatment has been well documented in both biofilm and planktonic settings. However, there is still very little known about the mechanisms that produce this tolerance. Evidence that small, non-mutant subpopulations of bacteria are not affected by antibiotic challenge has been accumulating and provides an attractive explanation for the failure of typical dosing protocols. Although a dosing challenge can kill all the susceptible bacteria, the remaining persister cells can serve as a source of population regrowth. We give a robust condition for the failure of a periodic dosing protocol for a general chemostat model, which supports the mathematical conclusions and simulations of an earlier, more specialized batch model. Our condition implies that the treatment protocol fails globally, in the sense that a mixed bacterial population will ultimately persist above a level that is independent of the initial composition of the population. We also give a sufficient condition for treatment success, at least for initial population compositions near the steady state of interest, corresponding to bacterial washout. Finally, we investigate how the speed at which the bacteria are wiped out depends on the duration of administration of the antibiotic. We find that this dependence is not necessarily monotone, implying that optimal dosing does not necessarily correspond to continuous administration of the antibiotic. Thus, genuine periodic protocols can be more advantageous in treating a wide variety of bacterial infections. |
1810.12898 | Anna Song | Anna Song, Olivier Faugeras and Romain Veltz | A neural field model for color perception unifying assimilation and
contrast | 28 pages, 14 figures, 6 supplementary files (to be found on PLOS'
website) | null | 10.1371/journal.pcbi.1007050 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We address the question of color-space interactions in the brain, by
proposing a neural field model of color perception with spatial context for the
visual area V1 of the cortex. Our framework reconciles two opposing perceptual
phenomena, known as simultaneous contrast and chromatic assimilation. They have
been previously shown to act synergistically, so that at some point in an
image, the color seems perceptually more similar to that of adjacent neighbors,
while being more dissimilar from that of remote ones. Thus, their combined
effects are enhanced in the presence of a spatial pattern, and can be measured
as larger shifts in color matching experiments. Our model supposes a
hypercolumnar structure coding for colors in V1, and relies on the notion of
color opponency introduced by Hering. The connectivity kernel of the neural
field exploits the balance between attraction and repulsion in color and
physical spaces, so as to reproduce the sign reversal in the influence of
neighboring points. The color sensation at a point, defined from a steady state
of the neural activities, is then extracted as a nonlinear percept conveyed by
an assembly of neurons. It connects the cortical and perceptual levels, because
we describe the search for a color match in asymmetric matching experiments as
a mathematical projection on color sensations. We validate our color neural
field alongside this color matching framework, by performing a multi-parameter
regression to data produced by psychophysicists and ourselves. All the results
show that we are able to explain the nonlinear behavior of shifts observed
along one or two dimensions in color space, which cannot be done using a simple
linear model.
| [
{
"created": "Tue, 30 Oct 2018 17:50:29 GMT",
"version": "v1"
},
{
"created": "Fri, 28 Jun 2019 12:29:41 GMT",
"version": "v2"
}
] | 2019-07-01 | [
[
"Song",
"Anna",
""
],
[
"Faugeras",
"Olivier",
""
],
[
"Veltz",
"Romain",
""
]
] | We address the question of color-space interactions in the brain, by proposing a neural field model of color perception with spatial context for the visual area V1 of the cortex. Our framework reconciles two opposing perceptual phenomena, known as simultaneous contrast and chromatic assimilation. They have been previously shown to act synergistically, so that at some point in an image, the color seems perceptually more similar to that of adjacent neighbors, while being more dissimilar from that of remote ones. Thus, their combined effects are enhanced in the presence of a spatial pattern, and can be measured as larger shifts in color matching experiments. Our model supposes a hypercolumnar structure coding for colors in V1, and relies on the notion of color opponency introduced by Hering. The connectivity kernel of the neural field exploits the balance between attraction and repulsion in color and physical spaces, so as to reproduce the sign reversal in the influence of neighboring points. The color sensation at a point, defined from a steady state of the neural activities, is then extracted as a nonlinear percept conveyed by an assembly of neurons. It connects the cortical and perceptual levels, because we describe the search for a color match in asymmetric matching experiments as a mathematical projection on color sensations. We validate our color neural field alongside this color matching framework, by performing a multi-parameter regression to data produced by psychophysicists and ourselves. All the results show that we are able to explain the nonlinear behavior of shifts observed along one or two dimensions in color space, which cannot be done using a simple linear model. |
1712.09332 | Sobhan Moosavi | Samaneh Aghajanbaglo, Sobhan Moosavi, Maseud Rahgozar, Amir Rahimi | Predicting protein-protein interactions based on rotation of proteins in
3D-space | 6 pages, accepted in The Second International Workshop on Parallelism
in Bioinformatics (PBio 2014), as part of IEEE Cluster 2014 | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein-Protein Interactions (PPIs) perform essential roles in biological
functions. Although some experimental techniques have been developed to detect
PPIs, they suffer from high false positive and high false negative rates.
Consequently, efforts have been devoted during recent years to develop
computational approaches to predict the interactions utilizing various sources
of information. Therefore, a unique category of prediction approaches has been
devised which is based on the protein sequence information. However, finding an
appropriate feature encoding to characterize the sequence of proteins is a
major challenge in such methods. In the present work, a sequence-based method is
proposed to predict protein-protein interactions using N-Gram encoding
approaches to describe amino acids and a Relaxed Variable Kernel Density
Estimator (RVKDE) as a machine learning tool. Moreover, since proteins can
rotate in 3D-space, amino acid compositions have been considered with
an "undirected" property, which reduces the dimensionality of the vector space. The
results show that our proposed method achieves superior prediction
performance, improving the F-measure by 2.5% on the Human Protein Reference
Dataset (HPRD).
| [
{
"created": "Fri, 22 Dec 2017 19:33:19 GMT",
"version": "v1"
}
] | 2017-12-29 | [
[
"Aghajanbaglo",
"Samaneh",
""
],
[
"Moosavi",
"Sobhan",
""
],
[
"Rahgozar",
"Maseud",
""
],
[
"Rahimi",
"Amir",
""
]
] | Protein-Protein Interactions (PPIs) perform essential roles in biological functions. Although some experimental techniques have been developed to detect PPIs, they suffer from high false positive and high false negative rates. Consequently, efforts have been devoted during recent years to develop computational approaches to predict the interactions utilizing various sources of information. Therefore, a unique category of prediction approaches has been devised which is based on the protein sequence information. However, finding an appropriate feature encoding to characterize the sequence of proteins is a major challenge in such methods. In the present work, a sequence-based method is proposed to predict protein-protein interactions using N-Gram encoding approaches to describe amino acids and a Relaxed Variable Kernel Density Estimator (RVKDE) as a machine learning tool. Moreover, since proteins can rotate in 3D-space, amino acid compositions have been considered with an "undirected" property, which reduces the dimensionality of the vector space. The results show that our proposed method achieves superior prediction performance, improving the F-measure by 2.5% on the Human Protein Reference Dataset (HPRD).
2210.01020 | Laurent Gatto | Christophe Vanderaa and Laurent Gatto | The current state of single-cell proteomics data analysis | All data used to create the figures in this article are available
from the scpdata package scpdata. The R code to reproduce the figures is
available at: https://github.com/UCLouvain-CBIO/2022-scp-data-analysis | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | Sound data analysis is essential to retrieve meaningful biological
information from single-cell proteomics experiments. This analysis is carried
out by computational methods that are assembled into workflows, and their
implementations influence the conclusions that can be drawn from the data. In
this work, we explore and compare the computational workflows that have been
used over the last four years and identify a profound lack of consensus on how
to analyze single-cell proteomics data. We highlight the need for benchmarking
of computational workflows, standardization of computational tools and data, as
well as carefully designed experiments. Finally, we cover the current
standardization efforts that aim to fill the gap and list the remaining missing
pieces, and conclude with lessons learned from the replication of published
single-cell proteomics analyses.
| [
{
"created": "Mon, 3 Oct 2022 15:33:45 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Dec 2022 16:39:25 GMT",
"version": "v2"
}
] | 2022-12-02 | [
[
"Vanderaa",
"Christophe",
""
],
[
"Gatto",
"Laurent",
""
]
] | Sound data analysis is essential to retrieve meaningful biological information from single-cell proteomics experiments. This analysis is carried out by computational methods that are assembled into workflows, and their implementations influence the conclusions that can be drawn from the data. In this work, we explore and compare the computational workflows that have been used over the last four years and identify a profound lack of consensus on how to analyze single-cell proteomics data. We highlight the need for benchmarking of computational workflows, standardization of computational tools and data, as well as carefully designed experiments. Finally, we cover the current standardization efforts that aim to fill the gap and list the remaining missing pieces, and conclude with lessons learned from the replication of published single-cell proteomics analyses. |
2105.00643 | Siddhartha Chakrabarty | Suryadeepto Nag, Siddhartha P. Chakrabarty | Modeling the dynamics of COVID-19 transmission in India: Social
Distancing, Regional Spread and Healthcare Capacity | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | In the new paradigm of health-centric governance, policy makers are in a
constant need for appropriate metrics and estimates in order to determine the
best policies in a non-arbitrary fashion. Thus, in this paper, a
compartmentalized model for the transmission of COVID-19 is developed to
facilitate policy making. A socially distanced compartment is added to the
model and its utility in quantifying the magnitude of voluntary social
distancing is illustrated. Modifications are made to incorporate inter-region
migration, and suitable metrics are proposed to quantify the impact of
migration on the rise of cases. The healthcare capacity is modeled and a method
is developed to study the consequences of the saturation of the healthcare
system. The model and related measures are used to study the nature of the
transmission and spread of COVID-19 in India, and appropriate insights are
drawn.
| [
{
"created": "Mon, 3 May 2021 06:23:41 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Jul 2021 09:23:21 GMT",
"version": "v2"
},
{
"created": "Tue, 19 Apr 2022 11:18:36 GMT",
"version": "v3"
}
] | 2022-04-20 | [
[
"Nag",
"Suryadeepto",
""
],
[
"Chakrabarty",
"Siddhartha P.",
""
]
] | In the new paradigm of health-centric governance, policy makers are in a constant need for appropriate metrics and estimates in order to determine the best policies in a non-arbitrary fashion. Thus, in this paper, a compartmentalized model for the transmission of COVID-19 is developed to facilitate policy making. A socially distanced compartment is added to the model and its utility in quantifying the magnitude of voluntary social distancing is illustrated. Modifications are made to incorporate inter-region migration, and suitable metrics are proposed to quantify the impact of migration on the rise of cases. The healthcare capacity is modeled and a method is developed to study the consequences of the saturation of the healthcare system. The model and related measures are used to study the nature of the transmission and spread of COVID-19 in India, and appropriate insights are drawn. |
1804.00404 | Ryuta Mizutani | Ryuta Mizutani, Rino Saiga, Akihisa Takeuchi, Kentaro Uesugi, Yasuko
Terada, Yoshio Suzuki, Vincent De Andrade, Francesco De Carlo, Susumu
Takekoshi, Chie Inomoto, Naoya Nakamura, Itaru Kushima, Shuji Iritani, Norio
Ozaki, Soichiro Ide, Kazutaka Ikeda, Kenichi Oshima, Masanari Itokawa, and
Makoto Arai | Three-dimensional alteration of neurites in schizophrenia | 24 pages, 4 figures, and 1 table. Supplementary materials are
available from DOI link | Translational Psychiatry 9, 85 (2019) | 10.1038/s41398-019-0427-4 | null | q-bio.NC physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | This paper reports nano-CT analysis of brain tissues of schizophrenia and
control cases. The analysis revealed that: (1) neuronal structures vary between
individuals, (2) the mean curvature of distal neurites of the schizophrenia
cases was 1.5 times higher than that of the controls, and (3) dendritic spines
were categorized into two geometrically distinctive groups, though no
structural differences were observed between the disease and control cases. The
differences in the neurite curvature result in differences in the spatial
trajectory and hence alter neuronal circuits. We suggest that the structural
alteration of neurons in the schizophrenia cases should reflect psychiatric
symptoms of schizophrenia.
| [
{
"created": "Mon, 2 Apr 2018 05:55:03 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Jul 2018 02:37:46 GMT",
"version": "v2"
},
{
"created": "Sat, 16 Feb 2019 06:33:45 GMT",
"version": "v3"
}
] | 2019-02-19 | [
[
"Mizutani",
"Ryuta",
""
],
[
"Saiga",
"Rino",
""
],
[
"Takeuchi",
"Akihisa",
""
],
[
"Uesugi",
"Kentaro",
""
],
[
"Terada",
"Yasuko",
""
],
[
"Suzuki",
"Yoshio",
""
],
[
"De Andrade",
"Vincent",
""
],
[
"De Carlo",
"Francesco",
""
],
[
"Takekoshi",
"Susumu",
""
],
[
"Inomoto",
"Chie",
""
],
[
"Nakamura",
"Naoya",
""
],
[
"Kushima",
"Itaru",
""
],
[
"Iritani",
"Shuji",
""
],
[
"Ozaki",
"Norio",
""
],
[
"Ide",
"Soichiro",
""
],
[
"Ikeda",
"Kazutaka",
""
],
[
"Oshima",
"Kenichi",
""
],
[
"Itokawa",
"Masanari",
""
],
[
"Arai",
"Makoto",
""
]
] | This paper reports nano-CT analysis of brain tissues of schizophrenia and control cases. The analysis revealed that: (1) neuronal structures vary between individuals, (2) the mean curvature of distal neurites of the schizophrenia cases was 1.5 times higher than that of the controls, and (3) dendritic spines were categorized into two geometrically distinctive groups, though no structural differences were observed between the disease and control cases. The differences in the neurite curvature result in differences in the spatial trajectory and hence alter neuronal circuits. We suggest that the structural alteration of neurons in the schizophrenia cases should reflect psychiatric symptoms of schizophrenia. |
1303.1432 | Krzysztof Argasinski | Krzysztof Argasinski, Mark Broom | Towards a replicator dynamics model of age structured populations | 52 pages, 8 figures | Journal of Mathematical Biology 2021 | 10.1007/s00285-021-01592-4 | null | q-bio.PE math.CA nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we present a new modelling framework combining replicator
dynamics (which is the standard model of frequency dependent selection) with
the model of an age-structured population. The new framework allows for the
modelling of populations consisting of competing strategies carried by
individuals who change across their life cycle. Firstly the discretization of
the McKendrick von Foerster model is derived. It is shown that the Euler--Lotka
equation is satisfied when the new model reaches a steady state (i.e. stable
frequencies between the age classes). This discretization consists of the unit
age classes, and the timescale is chosen so that only a fraction of individuals
play a single game round. This implies linear dynamics within a single time unit
when individuals not killed during a game round are moved from one age class to
another. Owing to its locally linear behaviour, the system is equivalent to a
large Bernadelli-Lewis-Leslie matrix. Then the methodology of multipopulation games
is used for the derivation of two, mutually equivalent systems of equations.
The first contains equations describing the evolution of the strategy
frequencies in the whole population completed by subsystems of equations
describing the evolution of the age structure for each strategy. The second
system contains equations describing the changes of the general population's
age structure, completed with subsystems of equations describing the selection
of the strategies within each age class. Then the obtained system of replicator
dynamics is presented in the form of the mixed ODE-PDE system. The obtained
results are illustrated by the example of the sex ratio model, which shows that when
different mortalities of both sexes are assumed, the sex ratio of 0.5 is
obtained but that Fisher's mechanism driven by the reproductive value of the
different sexes is not in equilibrium.
| [
{
"created": "Wed, 6 Mar 2013 19:23:40 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Apr 2020 21:02:26 GMT",
"version": "v2"
},
{
"created": "Mon, 31 Aug 2020 16:46:46 GMT",
"version": "v3"
},
{
"created": "Wed, 31 Mar 2021 16:17:37 GMT",
"version": "v4"
}
] | 2021-04-01 | [
[
"Argasinski",
"Krzysztof",
""
],
[
"Broom",
"Mark",
""
]
] | In this paper we present a new modelling framework combining replicator dynamics (which is the standard model of frequency dependent selection) with the model of an age-structured population. The new framework allows for the modelling of populations consisting of competing strategies carried by individuals who change across their life cycle. Firstly the discretization of the McKendrick von Foerster model is derived. It is shown that the Euler--Lotka equation is satisfied when the new model reaches a steady state (i.e. stable frequencies between the age classes). This discretization consists of the unit age classes, and the timescale is chosen so that only a fraction of individuals play a single game round. This implies linear dynamics within a single time unit when individuals not killed during a game round are moved from one age class to another. Owing to its locally linear behaviour, the system is equivalent to a large Bernadelli-Lewis-Leslie matrix. Then the methodology of multipopulation games is used for the derivation of two, mutually equivalent systems of equations. The first contains equations describing the evolution of the strategy frequencies in the whole population completed by subsystems of equations describing the evolution of the age structure for each strategy. The second system contains equations describing the changes of the general population's age structure, completed with subsystems of equations describing the selection of the strategies within each age class. Then the obtained system of replicator dynamics is presented in the form of the mixed ODE-PDE system. The obtained results are illustrated by the example of the sex ratio model, which shows that when different mortalities of both sexes are assumed, the sex ratio of 0.5 is obtained but that Fisher's mechanism driven by the reproductive value of the different sexes is not in equilibrium.
1508.02601 | Xiao-Jun Tian | Xiao-Jun Tian, Hang Zhang, Jingyu Zhang, Jianhua Xing | mRNA-miRNA Reciprocal Regulation Enabled Bistable Switch Directs Cell
Fate Decision | 32 pages, 5 figures and 7 supporting figures in FEBS Letters, 2016 | null | 10.1002/1873-3468.12379 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | miRNAs serve as crucial post-transcriptional regulators in various essential
cell fate decisions. However, the contribution of the mRNA-miRNA mutual
regulation to bistability is not fully understood. Here, we built a set of
mathematical models of mRNA-miRNA interactions and systematically analyzed the
sensitivity of response curves under various conditions. First, we found that
mRNA-miRNA reciprocal regulation could manifest ultrasensitivity to subserve
the generation of bistability when equipped with a positive feedback loop.
Second, the region of bistability is expanded by a stronger competing mRNA
(ceRNA). Interestingly, bistability can emerge without a feedback loop if
multiple miRNA binding sites exist on a target mRNA. Thus, we demonstrated the
importance of simple mRNA-miRNA reciprocal regulation in cell fate decisions.
| [
{
"created": "Tue, 11 Aug 2015 14:05:36 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Sep 2016 13:19:03 GMT",
"version": "v2"
}
] | 2016-09-05 | [
[
"Tian",
"Xiao-Jun",
""
],
[
"Zhang",
"Hang",
""
],
[
"Zhang",
"Jingyu",
""
],
[
"Xing",
"Jianhua",
""
]
] | miRNAs serve as crucial post-transcriptional regulators in various essential cell fate decisions. However, the contribution of the mRNA-miRNA mutual regulation to bistability is not fully understood. Here, we built a set of mathematical models of mRNA-miRNA interactions and systematically analyzed the sensitivity of response curves under various conditions. First, we found that mRNA-miRNA reciprocal regulation could manifest ultrasensitivity to subserve the generation of bistability when equipped with a positive feedback loop. Second, the region of bistability is expanded by a stronger competing mRNA (ceRNA). Interestingly, bistability can emerge without a feedback loop if multiple miRNA binding sites exist on a target mRNA. Thus, we demonstrated the importance of simple mRNA-miRNA reciprocal regulation in cell fate decisions. |
1804.02115 | Elham Bayat Mokhtari | Elham Bayat Mokhtari, J. Josh Lawrence, Emily F Stone | Effect of Neuromodulation of Short-Term Plasticity on Information
Processing in Hippocampal Interneuron Synapses | 29 pages, 14 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neurons in a micro-circuit connected by chemical synapses can have their
connectivity affected by the prior activity of the cells. The number of
synapses available for releasing neurotransmitter can be decreased by
repetitive activation through depletion of readily releasable neurotransmitter
(NT), or increased through facilitation, where the probability of release of NT
is increased by prior activation. These competing effects can create a
complicated and subtle range of time dependent connectivity. Here we
investigate the probabilistic properties of facilitation and depression (FD)
for a presynaptic neuron that is receiving a Poisson spike train of input. We
use a model of FD that is parameterized with experimental data from a
hippocampal basket cell and pyramidal cell connection, for fixed frequency
input spikes at frequencies in the range of theta and gamma oscillations. Hence
our results will apply to micro-circuits in the hippocampus that are
responsible for the interaction of theta and gamma rhythms associated with
learning and memory. A control situation is compared with one in which a
pharmaceutical neuromodulator (muscarine) is employed. We apply standard
information theoretic measures such as entropy and mutual information, and find
a closed form approximate expression for the probability distribution of
release probability. We also use techniques that measure the dependence of the
response on the exact history of stimulation the synapse has received, which
uncovers some unexpected differences between control and muscarine-added cases.
| [
{
"created": "Fri, 6 Apr 2018 02:37:11 GMT",
"version": "v1"
}
] | 2018-04-09 | [
[
"Mokhtari",
"Elham Bayat",
""
],
[
"Lawrence",
"J. Josh",
""
],
[
"Stone",
"Emily F",
""
]
] | Neurons in a micro-circuit connected by chemical synapses can have their connectivity affected by the prior activity of the cells. The number of synapses available for releasing neurotransmitter can be decreased by repetitive activation through depletion of readily releasable neurotransmitter (NT), or increased through facilitation, where the probability of release of NT is increased by prior activation. These competing effects can create a complicated and subtle range of time dependent connectivity. Here we investigate the probabilistic properties of facilitation and depression (FD) for a presynaptic neuron that is receiving a Poisson spike train of input. We use a model of FD that is parameterized with experimental data from a hippocampal basket cell and pyramidal cell connection, for fixed frequency input spikes at frequencies in the range of theta and gamma oscillations. Hence our results will apply to micro-circuits in the hippocampus that are responsible for the interaction of theta and gamma rhythms associated with learning and memory. A control situation is compared with one in which a pharmaceutical neuromodulator (muscarine) is employed. We apply standard information theoretic measures such as entropy and mutual information, and find a closed form approximate expression for the probability distribution of release probability. We also use techniques that measure the dependence of the response on the exact history of stimulation the synapse has received, which uncovers some unexpected differences between control and muscarine-added cases. |
1512.00544 | Brian Camley | Brian A. Camley and Juliane Zimmermann and Herbert Levine and
Wouter-Jan Rappel | Collective signal processing in cluster chemotaxis: roles of adaptation,
amplification, and co-attraction in collective guidance | This article extends some results previously presented in
arXiv:1506.06698 | null | 10.1371/journal.pcbi.1005008 | null | q-bio.CB cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single eukaryotic cells commonly sense and follow chemical gradients,
performing chemotaxis. Recent experiments and theories, however, show that even
when single cells do not chemotax, clusters of cells may, if their interactions
are regulated by the chemoattractant. We study this general mechanism of
"collective guidance" computationally with models that integrate stochastic
dynamics for individual cells with biochemical reactions within the cells, and
diffusion of chemical signals between the cells. We show that if clusters of
cells use the well-known local excitation, global inhibition (LEGI) mechanism
to sense chemoattractant gradients, the speed of the cell cluster becomes
non-monotonic in the cluster's size - clusters either larger or smaller than an
optimal size will have lower speed. We argue that the cell cluster speed is a
crucial readout of how the cluster processes chemotactic signal; both
amplification and adaptation will alter the behavior of cluster speed as a
function of size. We also show that, contrary to the assumptions of earlier
theories, collective guidance does not require persistent cell-cell contacts
and strong short range adhesion to function. If cell-cell adhesion is absent,
and the cluster cohesion is instead provided by a co-attraction mechanism, e.g.
chemotaxis toward a secreted molecule, collective guidance may still function.
However, new behaviors, such as cluster rotation, may also appear in this case.
Together, the combination of co-attraction and adaptation allows for collective
guidance that is robust to varying chemoattractant concentrations while not
requiring strong cell-cell adhesion.
| [
{
"created": "Wed, 2 Dec 2015 02:24:22 GMT",
"version": "v1"
}
] | 2016-07-04 | [
[
"Camley",
"Brian A.",
""
],
[
"Zimmermann",
"Juliane",
""
],
[
"Levine",
"Herbert",
""
],
[
"Rappel",
"Wouter-Jan",
""
]
] | Single eukaryotic cells commonly sense and follow chemical gradients, performing chemotaxis. Recent experiments and theories, however, show that even when single cells do not chemotax, clusters of cells may, if their interactions are regulated by the chemoattractant. We study this general mechanism of "collective guidance" computationally with models that integrate stochastic dynamics for individual cells with biochemical reactions within the cells, and diffusion of chemical signals between the cells. We show that if clusters of cells use the well-known local excitation, global inhibition (LEGI) mechanism to sense chemoattractant gradients, the speed of the cell cluster becomes non-monotonic in the cluster's size - clusters either larger or smaller than an optimal size will have lower speed. We argue that the cell cluster speed is a crucial readout of how the cluster processes chemotactic signal; both amplification and adaptation will alter the behavior of cluster speed as a function of size. We also show that, contrary to the assumptions of earlier theories, collective guidance does not require persistent cell-cell contacts and strong short range adhesion to function. If cell-cell adhesion is absent, and the cluster cohesion is instead provided by a co-attraction mechanism, e.g. chemotaxis toward a secreted molecule, collective guidance may still function. However, new behaviors, such as cluster rotation, may also appear in this case. Together, the combination of co-attraction and adaptation allows for collective guidance that is robust to varying chemoattractant concentrations while not requiring strong cell-cell adhesion. |
2008.04347 | Fakhar Mustafa | Fakhar Mustafa, Rehan Ahmed Khan Sherwani, Syed Salman Saqlain,
Muhammad Asad Meraj, Haseeb ur Rehman, Rida Ayyaz | COVID-19 in South Asia: Real-time monitoring of reproduction and case
fatality rate | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | As the ravages caused by the COVID-19 pandemic intensify with every
moment, monitoring and understanding of transmission and fatality rates has
become even more paramount for containing its spread. The key purpose of this
analysis is to report the real-time effective reproduction rate ($R_t$) and
case fatality rates (CFR) of COVID-19 in the South Asia region. Data for this study
are extracted from the JHU CSSE COVID-19 data source up to July 31, 2020. $R_t$ is
estimated using exponential-growth and time-dependent methods. The R0 package in
the R language is employed to estimate $R_t$ by fitting the existing epidemic
curve. Case fatality rates are estimated using the naive and Kaplan-Meier methods.
Owing to the exponential increase in COVID-19 cases, the pandemic will persist in
India, the Maldives, and Nepal, as $R_t$ was estimated to be greater than 1 for these
countries. Although case fatality rates are found to be lower than in other
highly affected regions of the world, strict monitoring of deaths for better
health facilities and care of patients is emphasized. More regional-level
cooperation and effort are needed to minimize the detrimental
effects of the virus.
| [
{
"created": "Mon, 10 Aug 2020 18:19:39 GMT",
"version": "v1"
}
] | 2020-08-12 | [
[
"Mustafa",
"Fakhar",
""
],
[
"Sherwani",
"Rehan Ahmed Khan",
""
],
[
"Saqlain",
"Syed Salman",
""
],
[
"Meraj",
"Muhammad Asad",
""
],
[
"Rehman",
"Haseeb ur",
""
],
[
"Ayyaz",
"Rida",
""
]
] | As the ravages caused by the COVID-19 pandemic intensify with every moment, monitoring and understanding of transmission and fatality rates has become even more paramount for containing its spread. The key purpose of this analysis is to report the real-time effective reproduction rate ($R_t$) and case fatality rates (CFR) of COVID-19 in the South Asia region. Data for this study are extracted from the JHU CSSE COVID-19 data source up to July 31, 2020. $R_t$ is estimated using exponential-growth and time-dependent methods. The R0 package in the R language is employed to estimate $R_t$ by fitting the existing epidemic curve. Case fatality rates are estimated using the naive and Kaplan-Meier methods. Owing to the exponential increase in COVID-19 cases, the pandemic will persist in India, the Maldives, and Nepal, as $R_t$ was estimated to be greater than 1 for these countries. Although case fatality rates are found to be lower than in other highly affected regions of the world, strict monitoring of deaths for better health facilities and care of patients is emphasized. More regional-level cooperation and effort are needed to minimize the detrimental effects of the virus. |
1609.01566 | Arli Aditya Parikesit | Arli Aditya Parikesit, Harry Noviardi, Djati Kerami, Usman Sumo Friend
Tambunan | The Complexity of Molecular Interactions and Bindings between Cyclic
Peptide and Inhibit Polymerase A and B1 (PAC-PB1N) H1N1 | 6 pages, 9th Joint Conference on Chemistry, Semarang, Indonesia,
12-13 November 2014 | null | 10.13140/RG.2.1.1439.6969 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The influenza/H1N1 virus has posed a public health hazard in many
countries. Existing influenza drugs cannot cope with H1N1 infection
due to the high mutation rate of the virus. In this respect, a new method to
block the virus was devised. The polymerase PAC-PB1N enzyme is responsible for
the replication of the H1N1 virus; thus, novel inhibitors were developed to ward
off the functionality of the enzyme. In this research, cyclic peptides were
chosen to inhibit PAC-PB1N due to their proven stability in reaching the drug
target. A computational method for elucidating the molecular interactions
between cyclic peptides and PAC-PB1N was developed using the LigX tools
from the MOE 2008.10 software. The tools could render the bindings involved in
the interactions, and the interactions between individual amino acids in the
inhibitor and the enzyme could be seen as well. The peptide sequences
CKTTC and CKKTC were chosen as the lead compounds. In the end, the feasibility
of cyclic peptides acting as drug candidates for H1N1 could be exposed by 2D
and 3D modeling of the molecular interactions.
| [
{
"created": "Tue, 27 Oct 2015 03:58:35 GMT",
"version": "v1"
}
] | 2016-09-07 | [
[
"Parikesit",
"Arli Aditya",
""
],
[
"Noviardi",
"Harry",
""
],
[
"Kerami",
"Djati",
""
],
[
"Tambunan",
"Usman Sumo Friend",
""
]
] | The influenza/H1N1 virus has posed a public health hazard in many countries. Existing influenza drugs cannot cope with H1N1 infection due to the high mutation rate of the virus. In this respect, a new method to block the virus was devised. The polymerase PAC-PB1N enzyme is responsible for the replication of the H1N1 virus; thus, novel inhibitors were developed to ward off the functionality of the enzyme. In this research, cyclic peptides were chosen to inhibit PAC-PB1N due to their proven stability in reaching the drug target. A computational method for elucidating the molecular interactions between cyclic peptides and PAC-PB1N was developed using the LigX tools from the MOE 2008.10 software. The tools could render the bindings involved in the interactions, and the interactions between individual amino acids in the inhibitor and the enzyme could be seen as well. The peptide sequences CKTTC and CKKTC were chosen as the lead compounds. In the end, the feasibility of cyclic peptides acting as drug candidates for H1N1 could be exposed by 2D and 3D modeling of the molecular interactions. |
1608.07058 | Justin Feigelman | Justin Feigelman, Stefan Ganscha and Manfred Claassen | matLeap: A fast adaptive Matlab-ready tau-leaping implementation
suitable for Bayesian inference | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Species abundance distributions in chemical reaction network
models cannot usually be computed analytically. Instead, stochastic
simulation algorithms allow sampling from the system configuration. Although
many algorithms have been described, no fast implementation has been provided
for {\tau}-leaping which i) is Matlab-compatible, ii) adaptively alternates
between SSA, implicit and explicit {\tau}-leaping, and iii) provides summary
statistics necessary for Bayesian inference. Results: We provide a
Matlab-compatible implementation of the adaptive explicit-implicit
{\tau}-leaping algorithm to address the above-mentioned deficits. matLeap
provides equal or substantially faster results compared to two widely used
simulation packages while maintaining accuracy. Lastly, matLeap yields summary
statistics of the stochastic process unavailable with other methods, which are
indispensable for Bayesian inference. Conclusions: matLeap addresses
shortcomings in existing Matlab-compatible stochastic simulation software,
providing significant speedups and summary statistics that are especially
useful for researchers utilizing particle-filter based methods for Bayesian
inference. Code is available for download at
https://github.com/claassengroup/matLeap. Contact:
justin.feigelman@imsb.biol.ethz.ch
| [
{
"created": "Thu, 25 Aug 2016 09:14:56 GMT",
"version": "v1"
}
] | 2016-08-26 | [
[
"Feigelman",
"Justin",
""
],
[
"Ganscha",
"Stefan",
""
],
[
"Claassen",
"Manfred",
""
]
] | Background: Species abundance distributions in chemical reaction network models cannot usually be computed analytically. Instead, stochastic simulation algorithms allow sampling from the system configuration. Although many algorithms have been described, no fast implementation has been provided for {\tau}-leaping which i) is Matlab-compatible, ii) adaptively alternates between SSA, implicit and explicit {\tau}-leaping, and iii) provides summary statistics necessary for Bayesian inference. Results: We provide a Matlab-compatible implementation of the adaptive explicit-implicit {\tau}-leaping algorithm to address the above-mentioned deficits. matLeap provides equal or substantially faster results compared to two widely used simulation packages while maintaining accuracy. Lastly, matLeap yields summary statistics of the stochastic process unavailable with other methods, which are indispensable for Bayesian inference. Conclusions: matLeap addresses shortcomings in existing Matlab-compatible stochastic simulation software, providing significant speedups and summary statistics that are especially useful for researchers utilizing particle-filter based methods for Bayesian inference. Code is available for download at https://github.com/claassengroup/matLeap. Contact: justin.feigelman@imsb.biol.ethz.ch |
q-bio/0402025 | Konstantin Tetenev F. | Konstantin F. Tetenev | To a Problem of Not Increasing Dynamic Compliance at Asthma Patients
with Ventilating Disorders after Berotec Inhalation | 9 pages, 1 figure, 2 tables. Submitted ERJ-00766-2002 | null | null | null | q-bio.TO | null | The purpose of this research was to check the influence of a decrease in
ventilation inequality (after bronchodilator (berotec) inhalation (BI)) on
the magnitude of the dynamic compliance of the lungs (Cdyn) in asthma patients
with ventilation impairments. Methods and materials: 20 patients with degree 2
and 3 ventilation impairments (VC<73%, FEV1<51%, MVV<56%), without
restrictive lung disease, suffering from bronchial asthma were studied
before and after BI by plotting volume and flow rate against transpulmonary
pressure. The change in ventilation inequality was assessed from the post-BI
changes in Cdyn, Cdyn immediately after flow interruption (Cdyn1), tissue
resistance at inhalation (Rti in) and exhalation (Rti ex), ventilation
parameters, and general parameters of respiratory mechanics. Results: the
ventilation parameters improved (P < 0.05). General parameters of
respiratory mechanics also improved. Rti in and Rti ex were 0.48±0.16 and
1.05±0.25 kPa/l/s before BI and decreased to 0.09±0.04 and 0.28±0.09 kPa/l/s
after BI (P < 0.05; P < 0.05). However, Cdyn and Cdyn1 did not change after BI.
Conclusions: 1. The decrease in ventilation inequality and tissue friction
after BI does not influence the initially reduced dynamic compliance of the
lungs in asthma patients without any restrictive lung disease. 2. The failure
of dynamic compliance to increase after BI is probably due to changes in the
elastic component of the lung parenchyma that are insensitive to berotec.
| [
{
"created": "Wed, 11 Feb 2004 16:36:24 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Tetenev",
"Konstantin F.",
""
]
] | The purpose of this research was to check the influence of a decrease in ventilation inequality (after bronchodilator (berotec) inhalation (BI)) on the magnitude of the dynamic compliance of the lungs (Cdyn) in asthma patients with ventilation impairments. Methods and materials: 20 patients with degree 2 and 3 ventilation impairments (VC<73%, FEV1<51%, MVV<56%), without restrictive lung disease, suffering from bronchial asthma were studied before and after BI by plotting volume and flow rate against transpulmonary pressure. The change in ventilation inequality was assessed from the post-BI changes in Cdyn, Cdyn immediately after flow interruption (Cdyn1), tissue resistance at inhalation (Rti in) and exhalation (Rti ex), ventilation parameters, and general parameters of respiratory mechanics. Results: the ventilation parameters improved (P < 0.05). General parameters of respiratory mechanics also improved. Rti in and Rti ex were 0.48±0.16 and 1.05±0.25 kPa/l/s before BI and decreased to 0.09±0.04 and 0.28±0.09 kPa/l/s after BI (P < 0.05; P < 0.05). However, Cdyn and Cdyn1 did not change after BI. Conclusions: 1. The decrease in ventilation inequality and tissue friction after BI does not influence the initially reduced dynamic compliance of the lungs in asthma patients without any restrictive lung disease. 2. The failure of dynamic compliance to increase after BI is probably due to changes in the elastic component of the lung parenchyma that are insensitive to berotec. |
2405.14810 | Simon Brandt | Simon Brandt, Mihai Alexandru Petrovici, Walter Senn, Katharina Anna
Wilmes, Federico Benitez | Prospective and retrospective coding in cortical neurons | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-sa/4.0/ | Brains can process sensory information from different modalities at
astonishing speed; this is surprising, as the integration of inputs
through the membrane alone causes a delayed response. Neuronal recordings in vitro
reveal a possible explanation for this fast processing: an advancement of
the output firing rates of individual neurons with respect to the input, a
concept which we refer to as prospective coding. The underlying mechanism of
prospective coding, however, is not completely understood. We propose a
mechanistic explanation for individual neurons advancing their output on the
level of single action potentials and instantaneous firing rates. Using the
Hodgkin-Huxley model, we show that the spike generation mechanism can be the
source of prospective (advanced) or retrospective (delayed) responses with
respect to the underlying somatic voltage. A simplified Hodgkin-Huxley model
identifies sodium inactivation as a source of the prospective firing,
controlling the timing of the neuron's output as a function of the voltage and
its derivative. We also consider slower spike-frequency adaptation as a
mechanism that generates prospective firing in response to inputs that undergo
slow temporal modulations. In general, we show that adaptation processes at
different time scales can cause advanced neuronal responses to time-varying
inputs that are modulated on the corresponding time scales.
| [
{
"created": "Thu, 23 May 2024 17:19:21 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Brandt",
"Simon",
""
],
[
"Petrovici",
"Mihai Alexandru",
""
],
[
"Senn",
"Walter",
""
],
[
"Wilmes",
"Katharina Anna",
""
],
[
"Benitez",
"Federico",
""
]
] | Brains can process sensory information from different modalities at astonishing speed; this is surprising, as the integration of inputs through the membrane alone causes a delayed response. Neuronal recordings in vitro reveal a possible explanation for this fast processing: an advancement of the output firing rates of individual neurons with respect to the input, a concept which we refer to as prospective coding. The underlying mechanism of prospective coding, however, is not completely understood. We propose a mechanistic explanation for individual neurons advancing their output on the level of single action potentials and instantaneous firing rates. Using the Hodgkin-Huxley model, we show that the spike generation mechanism can be the source of prospective (advanced) or retrospective (delayed) responses with respect to the underlying somatic voltage. A simplified Hodgkin-Huxley model identifies sodium inactivation as a source of the prospective firing, controlling the timing of the neuron's output as a function of the voltage and its derivative. We also consider slower spike-frequency adaptation as a mechanism that generates prospective firing in response to inputs that undergo slow temporal modulations. In general, we show that adaptation processes at different time scales can cause advanced neuronal responses to time-varying inputs that are modulated on the corresponding time scales. |
1410.6425 | Ulrich S. Schwarz | Anna Battista, Friedrich Frischknecht and Ulrich S. Schwarz
(Heidelberg University) | Geometrical model for malaria parasite migration in structured
environments | latex, 14 pages, 12 figures | Phys. Rev. E 90:042720, Oct 2014 | 10.1103/PhysRevE.90.042720 | null | q-bio.CB cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Malaria is transmitted to vertebrates via a mosquito bite, during which
rod-like and crescent-shaped parasites, called sporozoites, are injected into
the skin of the host. Searching for a blood capillary to penetrate, sporozoites
move quickly in locally helical trajectories that are frequently perturbed by
interactions with the extracellular environment. Here we present a theoretical
analysis of the active motility of sporozoites in a structured environment. The
sporozoite is modelled as a self-propelled rod with spontaneous curvature and
bending rigidity. It interacts with hard obstacles through collision rules
inferred from experimental observation of two-dimensional sporozoite movement
in pillar arrays. Our model shows that complex motion patterns arise from the
geometrical shape of the parasite and that its mechanical flexibility is
crucial for stable migration patterns. Extending the model to three dimensions
reveals that a bent and twisted rod can associate to cylindrical obstacles in a
manner reminiscent of the association of sporozoites to blood capillaries,
supporting the notion of a prominent role of cell shape during malaria
transmission.
| [
{
"created": "Thu, 23 Oct 2014 17:31:40 GMT",
"version": "v1"
}
] | 2014-10-24 | [
[
"Battista",
"Anna",
"",
"Heidelberg University"
],
[
"Frischknecht",
"Friedrich",
"",
"Heidelberg University"
],
[
"Schwarz",
"Ulrich S.",
"",
"Heidelberg University"
]
] | Malaria is transmitted to vertebrates via a mosquito bite, during which rod-like and crescent-shaped parasites, called sporozoites, are injected into the skin of the host. Searching for a blood capillary to penetrate, sporozoites move quickly in locally helical trajectories that are frequently perturbed by interactions with the extracellular environment. Here we present a theoretical analysis of the active motility of sporozoites in a structured environment. The sporozoite is modelled as a self-propelled rod with spontaneous curvature and bending rigidity. It interacts with hard obstacles through collision rules inferred from experimental observation of two-dimensional sporozoite movement in pillar arrays. Our model shows that complex motion patterns arise from the geometrical shape of the parasite and that its mechanical flexibility is crucial for stable migration patterns. Extending the model to three dimensions reveals that a bent and twisted rod can associate to cylindrical obstacles in a manner reminiscent of the association of sporozoites to blood capillaries, supporting the notion of a prominent role of cell shape during malaria transmission. |
1902.10073 | Biwei Huang | Biwei Huang, Kun Zhang, Ruben Sanchez-Romero, Joseph Ramsey, Madelyn
Glymour, Clark Glymour | Diagnosis of Autism Spectrum Disorder by Causal Influence Strength
Learned from Resting-State fMRI Data | null | null | null | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autism spectrum disorder (ASD) is one of the major developmental disorders
affecting children. Recently, it has been hypothesized that ASD is associated
with atypical brain connectivities. A substantial body of research uses
Pearson's correlation coefficients, mutual information, or partial correlation
to investigate the differences in brain connectivities between ASD and typical
controls from functional Magnetic Resonance Imaging (fMRI). However,
correlation or partial correlation does not directly reveal causal influences -
the information flow - between brain regions. Comparing to correlation,
causality pinpoints the key connectivity characteristics and removes redundant
features for diagnosis.
In this paper, we propose a two-step method for large-scale and cyclic causal
discovery from fMRI. It can identify brain causal structures without doing
interventional experiments. The learned causal structure, as well as the causal
influence strength, provides us the path and effectiveness of information flow.
With the recovered causal influence strength as candidate features, we then
perform ASD diagnosis by further doing feature selection and classification. We
apply our methods to three datasets from Autism Brain Imaging Data Exchange
(ABIDE).
The experimental results show that with causal connectivities, the
diagnostic accuracy improves substantially. A closer examination shows that
information flows starting from the superior frontal gyrus to the default mode
network and posterior areas are largely reduced. Moreover, all enhanced
information flows are from posterior to anterior or in local areas. Overall,
long-range influences have a larger proportion of reductions than
local ones, while local influences have a larger proportion of increases than
long-range ones. An examination of the graph properties of the brain's causal
structure shows that the ASD group has reduced small-worldness.
| [
{
"created": "Sun, 27 Jan 2019 05:09:55 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Mar 2019 16:28:50 GMT",
"version": "v2"
}
] | 2019-03-06 | [
[
"Huang",
"Biwei",
""
],
[
"Zhang",
"Kun",
""
],
[
"Sanchez-Romero",
"Ruben",
""
],
[
"Ramsey",
"Joseph",
""
],
[
"Glymour",
"Madelyn",
""
],
[
"Glymour",
"Clark",
""
]
] | Autism spectrum disorder (ASD) is one of the major developmental disorders affecting children. Recently, it has been hypothesized that ASD is associated with atypical brain connectivities. A substantial body of research uses Pearson's correlation coefficients, mutual information, or partial correlation to investigate the differences in brain connectivities between ASD and typical controls from functional Magnetic Resonance Imaging (fMRI). However, correlation or partial correlation does not directly reveal causal influences - the information flow - between brain regions. Compared to correlation, causality pinpoints the key connectivity characteristics and removes redundant features for diagnosis. In this paper, we propose a two-step method for large-scale and cyclic causal discovery from fMRI. It can identify brain causal structures without doing interventional experiments. The learned causal structure, as well as the causal influence strength, provides us the path and effectiveness of information flow. With the recovered causal influence strength as candidate features, we then perform ASD diagnosis by further doing feature selection and classification. We apply our methods to three datasets from the Autism Brain Imaging Data Exchange (ABIDE). The experimental results show that with causal connectivities, the diagnostic accuracy improves substantially. A closer examination shows that information flows starting from the superior frontal gyrus to the default mode network and posterior areas are largely reduced. Moreover, all enhanced information flows are from posterior to anterior or in local areas. Overall, long-range influences have a larger proportion of reductions than local ones, while local influences have a larger proportion of increases than long-range ones. An examination of the graph properties of the brain's causal structure shows that the ASD group has reduced small-worldness. |
1801.03268 | Ming Song | Ming Song, Yi Yang, Jianghong He, Zhengyi Yang, Shan Yu, Qiuyou Xie,
Xiaoyu Xia, Yuanyuan Dang, Qiang Zhang, Xinhuai Wu, Yue Cui, Bing Hou,
Ronghao Yu, Ruxiang Xu, Tianzi Jiang | Prognostication of chronic disorders of consciousness using brain
functional networks and clinical characteristics | Although some prognostic indicators and models have been proposed for
disorders of consciousness, each single method when used alone carries risks
of false prediction. Song et al. report that a model combining resting state
functional MRI with clinical characteristics provided accurate, robust, and
interpretable prognostications. 52 pages, 1 table, 7 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Disorders of consciousness are a heterogeneous mixture of different diseases
or injuries. Although some indicators and models have been proposed for
prognostication, any single method when used alone carries a high risk of false
prediction. This study aimed to develop a multidomain prognostic model that
combines resting state functional MRI with three clinical characteristics to
predict one year outcomes at the single-subject level. The model discriminated
between patients who would later recover consciousness and those who would not
with an accuracy of around 90% on three datasets from two medical centers. It
was also able to identify the prognostic importance of different predictors,
including brain functions and clinical characteristics. To our knowledge, this
is the first implementation reported of a multidomain prognostic model based on
resting state functional MRI and clinical characteristics in chronic disorders
of consciousness. We therefore suggest that this novel prognostic model is
accurate, robust, and interpretable.
| [
{
"created": "Wed, 10 Jan 2018 08:35:48 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Feb 2018 01:46:52 GMT",
"version": "v2"
},
{
"created": "Fri, 7 Sep 2018 01:29:06 GMT",
"version": "v3"
}
] | 2018-09-10 | [
[
"Song",
"Ming",
""
],
[
"Yang",
"Yi",
""
],
[
"He",
"Jianghong",
""
],
[
"Yang",
"Zhengyi",
""
],
[
"Yu",
"Shan",
""
],
[
"Xie",
"Qiuyou",
""
],
[
"Xia",
"Xiaoyu",
""
],
[
"Dang",
"Yuanyuan",
""
],
[
"Zhang",
"Qiang",
""
],
[
"Wu",
"Xinhuai",
""
],
[
"Cui",
"Yue",
""
],
[
"Hou",
"Bing",
""
],
[
"Yu",
"Ronghao",
""
],
[
"Xu",
"Ruxiang",
""
],
[
"Jiang",
"Tianzi",
""
]
] | Disorders of consciousness are a heterogeneous mixture of different diseases or injuries. Although some indicators and models have been proposed for prognostication, any single method when used alone carries a high risk of false prediction. This study aimed to develop a multidomain prognostic model that combines resting state functional MRI with three clinical characteristics to predict one year outcomes at the single-subject level. The model discriminated between patients who would later recover consciousness and those who would not with an accuracy of around 90% on three datasets from two medical centers. It was also able to identify the prognostic importance of different predictors, including brain functions and clinical characteristics. To our knowledge, this is the first implementation reported of a multidomain prognostic model based on resting state functional MRI and clinical characteristics in chronic disorders of consciousness. We therefore suggest that this novel prognostic model is accurate, robust, and interpretable. |
1703.02564 | Pablo Piedrahita | P. Piedrahita, J.J. Mazo, L.M. Flor\'ia, Y. Moreno | Pulse-coupled model of excitable elements on heterogeneous sparse
networks | Working paper, 13 pages, 13 figures | null | null | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a pulse-coupled dynamics of excitable elements in uncorrelated
scale-free networks. Regimes of self-sustained activity are found for
homogeneous and inhomogeneous couplings, in which the system displays a wide
variety of behaviors, including periodic and irregular global spiking signals,
as well as coherent oscillations, an unexpected form of synchronization. Our
numerical results also show that the properties of the population firing rate
depend on the size of the system, particularly its structure and average value
over time. However, a few straightforward dynamical and topological strategies
can be introduced to enhance or hinder these global behaviors, rendering a
scenario where signal control is attainable, which incorporates a basic
mechanism to turn off the dynamics permanently. As our main result, here we
present a framework to estimate, in the stationary state, the mean firing rate
over a long time window and to decompose the global dynamics into average
values of the inter-spike-interval of each connectivity group. Our approach
provides accurate predictions of these average quantities when the network
exhibits high heterogeneity, a remarkable finding that is not restricted
exclusively to the scale-free topology.
| [
{
"created": "Tue, 7 Mar 2017 19:30:05 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Mar 2017 17:29:23 GMT",
"version": "v2"
}
] | 2017-03-10 | [
[
"Piedrahita",
"P.",
""
],
[
"Mazo",
"J. J.",
""
],
[
"Floría",
"L. M.",
""
],
[
"Moreno",
"Y.",
""
]
] | We study a pulse-coupled dynamics of excitable elements in uncorrelated scale-free networks. Regimes of self-sustained activity are found for homogeneous and inhomogeneous couplings, in which the system displays a wide variety of behaviors, including periodic and irregular global spiking signals, as well as coherent oscillations, an unexpected form of synchronization. Our numerical results also show that the properties of the population firing rate depend on the size of the system, particularly its structure and average value over time. However, a few straightforward dynamical and topological strategies can be introduced to enhance or hinder these global behaviors, rendering a scenario where signal control is attainable, which incorporates a basic mechanism to turn off the dynamics permanently. As our main result, here we present a framework to estimate, in the stationary state, the mean firing rate over a long time window and to decompose the global dynamics into average values of the inter-spike-interval of each connectivity group. Our approach provides accurate predictions of these average quantities when the network exhibits high heterogeneity, a remarkable finding that is not restricted exclusively to the scale-free topology. |
1811.02766 | Tomislav Plesa Dr | Tomislav Plesa | Stochastic approximations of higher-molecular by bi-molecular reactions | null | null | null | null | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biochemical reactions involving three or more reactants, called
higher-molecular reactions, play an important role in theoretical systems and
synthetic biology. In particular, such reactions underpin a variety of
important bio-dynamical phenomena, such as multi-stability/multi-modality,
oscillations, bifurcations, and noise-induced effects. However, only reactions
with at most two reactants, called bi-molecular reactions, are experimentally
feasible. To bridge the gap, in this paper we put forward an algorithm for
systematically approximating arbitrary higher-molecular reactions with
bi-molecular ones, while preserving the underlying stochastic dynamics.
Properties of the algorithm and convergence are established via singular
perturbation theory. The algorithm is applied to a variety of higher-molecular
biochemical networks, and is shown to play an important role in
nucleic-acid-based synthetic biology.
| [
{
"created": "Wed, 7 Nov 2018 05:34:59 GMT",
"version": "v1"
},
{
"created": "Sat, 2 Jan 2021 09:04:20 GMT",
"version": "v2"
}
] | 2021-01-05 | [
[
"Plesa",
"Tomislav",
""
]
] | Biochemical reactions involving three or more reactants, called higher-molecular reactions, play an important role in theoretical systems and synthetic biology. In particular, such reactions underpin a variety of important bio-dynamical phenomena, such as multi-stability/multi-modality, oscillations, bifurcations, and noise-induced effects. However, only reactions with at most two reactants, called bi-molecular reactions, are experimentally feasible. To bridge the gap, in this paper we put forward an algorithm for systematically approximating arbitrary higher-molecular reactions with bi-molecular ones, while preserving the underlying stochastic dynamics. Properties of the algorithm and convergence are established via singular perturbation theory. The algorithm is applied to a variety of higher-molecular biochemical networks, and is shown to play an important role in nucleic-acid-based synthetic biology. |
1811.03943 | Philippe Arnaud | Ana Xavier-Magalh\~aes, C\'eline Gon\c{c}alves, Anne Fogli, Tatiana
Louren\c{c}o, Marta Pojo, Bruno Pereira, Miguel Rocha, Maria Lopes, In\^es
Crespo, Olinda Rebelo, Herminio T\~ao, Jo\~ao Lima, Ricardo Moreira, Afonso
Pinto, Chris Jones, Rui Reis, Joseph Costello, Philippe Arnaud (GReD), Nuno
Sousa, Bruno Costa | The long non-coding RNA HOTAIR is transcriptionally activated by HOXA9
and is an independent prognostic marker in patients with malignant glioma | null | Oncotarget, Impact journals, 2018, 9, pp.15740 - 15756 | null | null | q-bio.GN q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The lncRNA HOTAIR has been implicated in several human cancers. Here, we
evaluated the molecular alterations and upstream regulatory mechanisms of
HOTAIR in glioma, the most common primary brain tumors, and its clinical
relevance. HOTAIR gene expression, methylation, copy-number and prognostic
value were investigated in human gliomas integrating data from online datasets
and our cohorts. High levels of HOTAIR were associated with higher grades of
glioma, particularly IDH wild-type cases. Mechanistically, HOTAIR was
overexpressed in a gene dosage-independent manner, while DNA methylation levels
of particular CpGs in HOTAIR locus were associated with HOTAIR expression
levels in GBM clinical specimens and cell lines. Concordantly, the
demethylating agent 5-Aza-2'-deoxycytidine affected HOTAIR transcriptional
levels in a cell line-dependent manner. Importantly, HOTAIR was frequently
co-expressed with HOXA9 in high-grade gliomas from TCGA, Oncomine, and our
Portuguese and French datasets. Integrated in silico analyses, chromatin
immunoprecipitation, and qPCR data showed that HOXA9 binds directly to the
promoter of HOTAIR. Clinically, GBM patients with high HOTAIR expression had a
significantly reduced overall survival, independently of other prognostic
variables. In summary, this work reveals HOXA9 as a novel direct regulator of
HOTAIR, and establishes HOTAIR as an independent prognostic marker, providing
new therapeutic opportunities to treat this highly aggressive cancer.
| [
{
"created": "Fri, 9 Nov 2018 15:03:59 GMT",
"version": "v1"
}
] | 2020-09-15 | [
[
"Xavier-Magalhães",
"Ana",
"",
"GReD"
],
[
"Gonçalves",
"Céline",
"",
"GReD"
],
[
"Fogli",
"Anne",
"",
"GReD"
],
[
"Lourenço",
"Tatiana",
"",
"GReD"
],
[
"Pojo",
"Marta",
"",
"GReD"
],
[
"Pereira",
"Bruno",
"",
"GReD"
],
[
"Rocha",
"Miguel",
"",
"GReD"
],
[
"Lopes",
"Maria",
"",
"GReD"
],
[
"Crespo",
"Inês",
"",
"GReD"
],
[
"Rebelo",
"Olinda",
"",
"GReD"
],
[
"Tão",
"Herminio",
"",
"GReD"
],
[
"Lima",
"João",
"",
"GReD"
],
[
"Moreira",
"Ricardo",
"",
"GReD"
],
[
"Pinto",
"Afonso",
"",
"GReD"
],
[
"Jones",
"Chris",
"",
"GReD"
],
[
"Reis",
"Rui",
"",
"GReD"
],
[
"Costello",
"Joseph",
"",
"GReD"
],
[
"Arnaud",
"Philippe",
"",
"GReD"
],
[
"Sousa",
"Nuno",
""
],
[
"Costa",
"Bruno",
""
]
] | The lncRNA HOTAIR has been implicated in several human cancers. Here, we evaluated the molecular alterations and upstream regulatory mechanisms of HOTAIR in glioma, the most common primary brain tumors, and its clinical relevance. HOTAIR gene expression, methylation, copy-number and prognostic value were investigated in human gliomas integrating data from online datasets and our cohorts. High levels of HOTAIR were associated with higher grades of glioma, particularly IDH wild-type cases. Mechanistically, HOTAIR was overexpressed in a gene dosage-independent manner, while DNA methylation levels of particular CpGs in HOTAIR locus were associated with HOTAIR expression levels in GBM clinical specimens and cell lines. Concordantly, the demethylating agent 5-Aza-2'-deoxycytidine affected HOTAIR transcriptional levels in a cell line-dependent manner. Importantly, HOTAIR was frequently co-expressed with HOXA9 in high-grade gliomas from TCGA, Oncomine, and our Portuguese and French datasets. Integrated in silico analyses, chromatin immunoprecipitation, and qPCR data showed that HOXA9 binds directly to the promoter of HOTAIR. Clinically, GBM patients with high HOTAIR expression had a significantly reduced overall survival, independently of other prognostic variables. In summary, this work reveals HOXA9 as a novel direct regulator of HOTAIR, and establishes HOTAIR as an independent prognostic marker, providing new therapeutic opportunities to treat this highly aggressive cancer. |
2305.04931 | Shao Li | Boyang Wang, DingFan Zhang, Tingyu Zhang, Chayanis Sutcharitchan,
Jianlin Hua, Dongfang Hua, Bo Zhang, Shao Li | Network pharmacology on the mechanism of Yi Qi Tong Qiao Pill inhibiting
allergic rhinitis | 25 pages, 6 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: The purpose of this study is to reveal the mechanism of action of
Yi Qi Tong Qiao Pill (YQTQP) in the treatment of allergic rhinitis (AR), as
well as to establish a paradigm for research on traditional Chinese medicine
(TCM) from a systematic perspective. Methods: Based on data collected from
TCM-related and disease-related databases, target profiles of the compounds in
YQTQP were calculated through network-based algorithms, and the holistic target
profile of YQTQP was constructed. Network target analysis was performed to
explore the potential mechanisms of YQTQP in the treatment of AR, and the
mechanisms were classified into different modules according to their biological
functions. In addition, animal and clinical experiments were conducted to
validate the findings inferred from network target analysis. Results: Network
target analysis showed that YQTQP targeted 12 main pathways or biological
processes related to AR, represented by those related to IL-4, IFN-{\gamma},
TNF-{\alpha} and IL-13. These results could be classified into 3 biological
modules: regulation of immune response and inflammation, epithelial barrier
disorder, and cell adhesion. Finally, a series of animal and clinical
experiments confirmed our findings and showed that YQTQP could improve
AR-related symptoms, such as the permeability of the nasal mucosa epithelium.
Conclusion: The combination of network target analysis and experimental
validation indicated that YQTQP is effective in the treatment of AR and may
provide new insight into revealing the mechanisms of TCM against diseases.
| [
{
"created": "Sat, 6 May 2023 14:53:02 GMT",
"version": "v1"
},
{
"created": "Sun, 21 May 2023 13:26:04 GMT",
"version": "v2"
}
] | 2023-05-23 | [
[
"Wang",
"Boyang",
""
],
[
"Zhang",
"DingFan",
""
],
[
"Zhang",
"Tingyu",
""
],
[
"Sutcharitchan",
"Chayanis",
""
],
[
"Hua",
"Jianlin",
""
],
[
"Hua",
"Dongfang",
""
],
[
"Zhang",
"Bo",
""
],
[
"Li",
"Shao",
""
]
] | Objective: The purpose of this study is to reveal the mechanism of action of Yi Qi Tong Qiao Pill (YQTQP) in the treatment of allergic rhinitis (AR), as well as to establish a paradigm for research on traditional Chinese medicine (TCM) from a systematic perspective. Methods: Based on data collected from TCM-related and disease-related databases, target profiles of the compounds in YQTQP were calculated through network-based algorithms, and the holistic target profile of YQTQP was constructed. Network target analysis was performed to explore the potential mechanisms of YQTQP in the treatment of AR, and the mechanisms were classified into different modules according to their biological functions. In addition, animal and clinical experiments were conducted to validate the findings inferred from network target analysis. Results: Network target analysis showed that YQTQP targeted 12 main pathways or biological processes related to AR, represented by those related to IL-4, IFN-{\gamma}, TNF-{\alpha} and IL-13. These results could be classified into 3 biological modules: regulation of immune response and inflammation, epithelial barrier disorder, and cell adhesion. Finally, a series of animal and clinical experiments confirmed our findings and showed that YQTQP could improve AR-related symptoms, such as the permeability of the nasal mucosa epithelium. Conclusion: The combination of network target analysis and experimental validation indicated that YQTQP is effective in the treatment of AR and may provide new insight into revealing the mechanisms of TCM against diseases.
1501.02254 | Ognjen Arandjelovi\'c PhD | Ognjen Arandjelovic | A note on the capability profile - localized increase in force
production and its effect on overall resistance training performance | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this article I resolve formally the apparent paradox that arises in the
context of the computational model of resistance exercise introduced in my
previous work. Contrary to intuition, the model seems to allow for a localized
increase in force production to degrade the ultimate exercise performance. I
show this not to be the case.
| [
{
"created": "Sat, 29 Nov 2014 11:19:25 GMT",
"version": "v1"
}
] | 2015-01-12 | [
[
"Arandjelovic",
"Ognjen",
""
]
] | In this article I resolve formally the apparent paradox that arises in the context of the computational model of resistance exercise introduced in my previous work. Contrary to intuition, the model seems to allow for a localized increase in force production to degrade the ultimate exercise performance. I show this not to be the case. |
2307.11675 | Swarag. Thaikkandi | Swarag Thaikkandi and K. M. Sharika | Analyzing time series of unequal durations using Multidimensional
Recurrence Quantification Analysis (MdRQA): validation and implementation
using Python | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In recent years, recurrence quantification analysis (RQA) and its
multi-dimensional version (MdRQA) have emerged as popular tools for assessing
interpersonal behavioral or physiological synchrony in groups of two or more
individuals. While experimental data in such studies are typically collected
for a fixed, pre-determined duration, naturally occurring phenomena may often
reach a state of transition after an unpredictable or varying duration of time.
The resulting recurrence plots (RPs) across samples cannot be compared directly
via linear scaling because the sensitivity of RQA variables to local dynamics
would vary. We propose to address this by using the sliding window technique on
individual RPs and using the summary statistics of the different RQA variable
distributions computed across the sliding windows to differentiate the dynamics
of the original time series of unequal durations. We first tested our approach
in two simulated models: 1) the Rossler attractor and 2) the Kuramoto model. We
compared the ability of different summary statistics of RQA variable
distributions to accurately predict the dynamic states of the system across
varying levels of noise, unequal lengths of time series, and, in the case of
the Kuramoto model, different numbers of oscillators across samples. We found
that the mode was the most robust to the degree of noise in the signals. We
further tested and validated it on open access data from a recent study
comparing spontaneously generated interpersonal movement synchrony in a dyad.
To our knowledge, this is the first systematic attempt to validate the use of
MdRQA in computing and comparing synchrony between systems of non-uniform
composition and unequal time series data, paving the way for future work that
examines interpersonal synchrony in more naturalistic, ecologically valid
contexts.
| [
{
"created": "Fri, 21 Jul 2023 16:28:04 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Jul 2023 09:43:53 GMT",
"version": "v2"
},
{
"created": "Thu, 10 Aug 2023 17:06:34 GMT",
"version": "v3"
},
{
"created": "Fri, 11 Aug 2023 04:55:13 GMT",
"version": "v4"
},
{
"created": "Tue, 15 Aug 2023 13:39:02 GMT",
"version": "v5"
}
] | 2023-08-16 | [
[
"Thaikkandi",
"Swarag",
""
],
[
"Sharika",
"K. M.",
""
]
] | In recent years, recurrence quantification analysis (RQA) and its multi-dimensional version (MdRQA) have emerged as popular tools for assessing interpersonal behavioral or physiological synchrony in groups of two or more individuals. While experimental data in such studies are typically collected for a fixed, pre-determined duration, naturally occurring phenomena may often reach a state of transition after an unpredictable or varying duration of time. The resulting recurrence plots (RPs) across samples cannot be compared directly via linear scaling because the sensitivity of RQA variables to local dynamics would vary. We propose to address this by using the sliding window technique on individual RPs and using the summary statistics of the different RQA variable distributions computed across the sliding windows to differentiate the dynamics of the original time series of unequal durations. We first tested our approach in two simulated models: 1) the Rossler attractor and 2) the Kuramoto model. We compared the ability of different summary statistics of RQA variable distributions to accurately predict the dynamic states of the system across varying levels of noise, unequal lengths of time series, and, in the case of the Kuramoto model, different numbers of oscillators across samples. We found that the mode was the most robust to the degree of noise in the signals. We further tested and validated it on open access data from a recent study comparing spontaneously generated interpersonal movement synchrony in a dyad. To our knowledge, this is the first systematic attempt to validate the use of MdRQA in computing and comparing synchrony between systems of non-uniform composition and unequal time series data, paving the way for future work that examines interpersonal synchrony in more naturalistic, ecologically valid contexts.
1007.0333 | Yupeng Cun | Yupeng Cun | On the Evolutionary Fate of Mutant Allele at Duplicate Loci: a
Theoretical and Simulation Study | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Gene duplications are one of the major driving forces for evolutionary
novelty. We used population genetics models of duplicate genes to study how
evolutionary forces act during the fixation of a mutant allele at duplicate
loci. We study the fixation time of a mutant allele at duplicate loci under the
double null recessive (DNR) and haploinsufficient (HI) models, and we also
investigate how selection coefficients, together with other evolutionary
forces, influence the fixation frequency of a mutant allele at duplicate loci.
Our results suggest that selection plays a role in the evolutionary fate of
duplicate genes, and that tight linkage helps preserve the mutant allele at
duplicate loci. Our theoretical simulations agree well with the results of
genomic data analyses: selection, rather than drift, plays an important role in
the establishment of duplicate loci, and recombination offers a great
opportunity for selection to act.
| [
{
"created": "Fri, 2 Jul 2010 10:39:08 GMT",
"version": "v1"
},
{
"created": "Sun, 17 Oct 2010 10:46:03 GMT",
"version": "v2"
},
{
"created": "Thu, 10 Mar 2011 21:41:05 GMT",
"version": "v3"
},
{
"created": "Fri, 12 Aug 2011 16:17:31 GMT",
"version": "v4"
},
{
"created": "Thu, 15 Mar 2012 23:09:46 GMT",
"version": "v5"
}
] | 2012-03-19 | [
[
"Cun",
"Yupeng",
""
]
] | Gene duplications are one of the major driving forces for evolutionary novelty. We used population genetics models of duplicate genes to study how evolutionary forces act during the fixation of a mutant allele at duplicate loci. We study the fixation time of a mutant allele at duplicate loci under the double null recessive (DNR) and haploinsufficient (HI) models, and we also investigate how selection coefficients, together with other evolutionary forces, influence the fixation frequency of a mutant allele at duplicate loci. Our results suggest that selection plays a role in the evolutionary fate of duplicate genes, and that tight linkage helps preserve the mutant allele at duplicate loci. Our theoretical simulations agree well with the results of genomic data analyses: selection, rather than drift, plays an important role in the establishment of duplicate loci, and recombination offers a great opportunity for selection to act.
1202.4939 | Benoit Roux | Benoit Roux | Ion Binding Sites and their Representations by Quasichemical Reduced
Models | null | null | null | null | q-bio.BM cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The binding of small metal ions to complex macromolecular structures is
typically dominated by strong local interactions of the ion with its nearest
ligands. For this reason, it is often possible to understand the microscopic
origin of ion binding selectivity by considering simplified reduced models
comprised of only the nearest ion-coordinating ligands. Although the main
ingredients underlying simplified reduced models are intuitively clear, a
formal statistical mechanical treatment is nonetheless necessary in order to
draw meaningful conclusions about complex macromolecular systems. By
construction, reduced models only treat the ion and the nearest coordinating
ligands explicitly. The influence of the missing atoms from the protein or the
solvent is incorporated indirectly. Quasi-chemical theory offers one example of
how to carry out such a separation in the case of ion solvation in bulk
liquids, and in several ways, a statistical mechanical formulation of reduced
binding site models for macromolecules is expected to follow a similar route.
Here, some critical issues with recent theories of reduced binding site models
are examined.
| [
{
"created": "Wed, 22 Feb 2012 15:30:54 GMT",
"version": "v1"
}
] | 2012-02-23 | [
[
"Roux",
"Benoit",
""
]
] | The binding of small metal ions to complex macromolecular structures is typically dominated by strong local interactions of the ion with its nearest ligands. For this reason, it is often possible to understand the microscopic origin of ion binding selectivity by considering simplified reduced models comprised of only the nearest ion-coordinating ligands. Although the main ingredients underlying simplified reduced models are intuitively clear, a formal statistical mechanical treatment is nonetheless necessary in order to draw meaningful conclusions about complex macromolecular systems. By construction, reduced models only treat the ion and the nearest coordinating ligands explicitly. The influence of the missing atoms from the protein or the solvent is incorporated indirectly. Quasi-chemical theory offers one example of how to carry out such a separation in the case of ion solvation in bulk liquids, and in several ways, a statistical mechanical formulation of reduced binding site models for macromolecules is expected to follow a similar route. Here, some critical issues with recent theories of reduced binding site models are examined. |
1006.3122 | Mike Steel Prof. | Leo van Iersel, Charles Semple, Mike Steel | Locating a tree in a phylogenetic network | 9 pages, 4 figures | null | null | null | q-bio.PE cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic trees and networks are leaf-labelled graphs that are used to
describe evolutionary histories of species. The Tree Containment problem asks
whether a given phylogenetic tree is embedded in a given phylogenetic network.
Given a phylogenetic network and a cluster of species, the Cluster Containment
problem asks whether the given cluster is a cluster of some phylogenetic tree
embedded in the network. Both problems are known to be NP-complete in general.
In this article, we consider the restriction of these problems to several
well-studied classes of phylogenetic networks. We show that Tree Containment is
polynomial-time solvable for normal networks, for binary tree-child networks,
and for level-$k$ networks. On the other hand, we show that, even for
tree-sibling, time-consistent, regular networks, both Tree Containment and
Cluster Containment remain NP-complete.
| [
{
"created": "Wed, 16 Jun 2010 01:44:33 GMT",
"version": "v1"
}
] | 2010-06-17 | [
[
"van Iersel",
"Leo",
""
],
[
"Semple",
"Charles",
""
],
[
"Steel",
"Mike",
""
]
] | Phylogenetic trees and networks are leaf-labelled graphs that are used to describe evolutionary histories of species. The Tree Containment problem asks whether a given phylogenetic tree is embedded in a given phylogenetic network. Given a phylogenetic network and a cluster of species, the Cluster Containment problem asks whether the given cluster is a cluster of some phylogenetic tree embedded in the network. Both problems are known to be NP-complete in general. In this article, we consider the restriction of these problems to several well-studied classes of phylogenetic networks. We show that Tree Containment is polynomial-time solvable for normal networks, for binary tree-child networks, and for level-$k$ networks. On the other hand, we show that, even for tree-sibling, time-consistent, regular networks, both Tree Containment and Cluster Containment remain NP-complete. |
1809.02503 | Mohammad Golbabaee | Mohammad Golbabaee, Zhouye Chen, Yves Wiaux, Mike E. Davies | CoverBLIP: scalable iterative matched filtering for MR Fingerprint
recovery | In Proceedings of Joint Annual Meeting ISMRM-ESMRMB 2018 - Paris | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current proposed solutions for the high dimensionality of the MRF
reconstruction problem rely on a linear compression step to reduce the matching
computations and boost the efficiency of fast but non-scalable searching
schemes such as the KD-trees. However such methodologies often introduce an
unfavourable compromise in the estimation accuracy when applied to nonlinear
data structures such as the manifold of Bloch responses with possible increased
dynamic complexity and growth in data population. To address this shortcoming
we propose an inexact iterative reconstruction method, dubbed the Cover
BLoch response Iterative Projection (CoverBLIP). Iterative methods improve the
accuracy of their non-iterative counterparts and are additionally robust
against certain accelerated approximate updates, without compromising their
final accuracy. Leveraging on these results, we accelerate matched-filtering
using an ANNS algorithm based on Cover trees with a robustness feature against
the curse of dimensionality.
| [
{
"created": "Thu, 6 Sep 2018 08:58:09 GMT",
"version": "v1"
}
] | 2018-09-10 | [
[
"Golbabaee",
"Mohammad",
""
],
[
"Chen",
"Zhouye",
""
],
[
"Wiaux",
"Yves",
""
],
[
"Davies",
"Mike E.",
""
]
] | Current proposed solutions for the high dimensionality of the MRF reconstruction problem rely on a linear compression step to reduce the matching computations and boost the efficiency of fast but non-scalable searching schemes such as the KD-trees. However such methodologies often introduce an unfavourable compromise in the estimation accuracy when applied to nonlinear data structures such as the manifold of Bloch responses with possible increased dynamic complexity and growth in data population. To address this shortcoming we propose an inexact iterative reconstruction method, dubbed as the Cover BLoch response Iterative Projection (CoverBLIP). Iterative methods improve the accuracy of their non-iterative counterparts and are additionally robust against certain accelerated approximate updates, without compromising their final accuracy. Leveraging on these results, we accelerate matched-filtering using an ANNS algorithm based on Cover trees with a robustness feature against the curse of dimensionality. |
0801.4812 | Christopher Wylie | C. Scott Wylie, Cheol-Min Ghim, David A. Kessler, Herbert Levine | The fixation probability of rare mutators in finite asexual populations | 46 pages, 8 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A mutator is an allele that increases the mutation rate throughout the genome
by disrupting some aspect of DNA replication or repair. Mutators that increase
the mutation rate by the order of 100 fold have been observed to spontaneously
emerge and achieve high frequencies in natural populations and in long-term
laboratory evolution experiments with \textit{E. coli}. In principle, the
fixation of mutator alleles is limited by (i) competition with mutations in
wild-type backgrounds, (ii) additional deleterious mutational load, and (iii)
random genetic drift. Using a multiple locus model and employing both
simulation and analytic methods, we investigate the effects of these three
factors on the fixation probability $P_{fix}$ of an initially rare mutator as a
function of population size $N$, beneficial and deleterious mutation rates, and
the strength of mutations $s$. Our diffusion based approximation for $P_{fix}$
successfully captures effects (ii) and (iii) when selection is fast compared to
mutation ($\mu/s \ll 1$). This enables us to predict the conditions under which
mutators will be evolutionarily favored. Surprisingly, our simulations show
that effect (i) is typically small for strong-effect mutators. Our results
agree semi-quantitatively with existing laboratory evolution experiments and
suggest future experimental directions.
| [
{
"created": "Thu, 31 Jan 2008 03:08:43 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Feb 2008 02:14:07 GMT",
"version": "v2"
},
{
"created": "Wed, 31 Dec 2008 04:10:57 GMT",
"version": "v3"
}
] | 2008-12-31 | [
[
"Wylie",
"C. Scott",
""
],
[
"Ghim",
"Cheol-Min",
""
],
[
"Kessler",
"David A.",
""
],
[
"Levine",
"Herbert",
""
]
] | A mutator is an allele that increases the mutation rate throughout the genome by disrupting some aspect of DNA replication or repair. Mutators that increase the mutation rate by the order of 100 fold have been observed to spontaneously emerge and achieve high frequencies in natural populations and in long-term laboratory evolution experiments with \textit{E. coli}. In principle, the fixation of mutator alleles is limited by (i) competition with mutations in wild-type backgrounds, (ii) additional deleterious mutational load, and (iii) random genetic drift. Using a multiple locus model and employing both simulation and analytic methods, we investigate the effects of these three factors on the fixation probability $P_{fix}$ of an initially rare mutator as a function of population size $N$, beneficial and deleterious mutation rates, and the strength of mutations $s$. Our diffusion based approximation for $P_{fix}$ successfully captures effects (ii) and (iii) when selection is fast compared to mutation ($\mu/s \ll 1$). This enables us to predict the conditions under which mutators will be evolutionarily favored. Surprisingly, our simulations show that effect (i) is typically small for strong-effect mutators. Our results agree semi-quantitatively with existing laboratory evolution experiments and suggest future experimental directions. |
q-bio/0508038 | Cheng Shao | Gongxian Xu, Cheng Shao, Zhilong Xiu | A Modified Iterative IOM Approach for Optimization of Biochemical
Systems | 27 pages, 43 figures | null | null | null | q-bio.QM | null | The previously presented indirect optimization method (IOM), developed within
biochemical systems theory (BST), provides a versatile and mathematically
tractable optimization strategy for biochemical systems. However, due to the
local-approximation nature of the BST formalism, the iterative version of this
technique may not yield the true optimum solution. In this work, an
algorithm is proposed to obtain the correct and consistent optimum steady-state
operating point of biochemical systems. The existing linear optimization
problem of the direct IOM approach is modified by adding an equality constraint
describing the consistency of solutions between the S-system and the
original model. Lagrangian analysis is employed to derive the first-order
necessary optimality conditions for the above modified optimization problem.
This leads to a procedure that may be regarded as a modified iterative IOM
approach in which the optimization objective function includes an extra linear
term. The extra term contains a comparison of metabolite concentration
derivatives with respect to the enzyme activities between the S-system and the
original model and ensures that the new algorithm is still carried out within
linear programming techniques. The presented framework is applied to several
biochemical systems, demonstrating the tractability and effectiveness of the
method. Simulations are also performed to investigate the convergence
properties of the algorithm and to compare the performance of the standard
and modified iterative IOM approaches.
| [
{
"created": "Sat, 27 Aug 2005 14:17:59 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Jun 2007 13:54:31 GMT",
"version": "v2"
}
] | 2007-06-13 | [
[
"Xu",
"Gongxian",
""
],
[
"Shao",
"Cheng",
""
],
[
"Xiu",
"Zhilong",
""
]
] | The previously presented indirect optimization method (IOM), developed within biochemical systems theory (BST), provides a versatile and mathematically tractable optimization strategy for biochemical systems. However, due to the local-approximation nature of the BST formalism, the iterative version of this technique may not yield the true optimum solution. In this work, an algorithm is proposed to obtain the correct and consistent optimum steady-state operating point of biochemical systems. The existing linear optimization problem of the direct IOM approach is modified by adding an equality constraint describing the consistency of solutions between the S-system and the original model. Lagrangian analysis is employed to derive the first-order necessary optimality conditions for the above modified optimization problem. This leads to a procedure that may be regarded as a modified iterative IOM approach in which the optimization objective function includes an extra linear term. The extra term contains a comparison of metabolite concentration derivatives with respect to the enzyme activities between the S-system and the original model and ensures that the new algorithm is still carried out within linear programming techniques. The presented framework is applied to several biochemical systems, demonstrating the tractability and effectiveness of the method. Simulations are also performed to investigate the convergence properties of the algorithm and to compare the performance of the standard and modified iterative IOM approaches. |
1402.4648 | Wiktor Mlynarski | Wiktor M{\l}ynarski and J\"urgen Jost | Natural statistics of binaural sounds | 29 pages, 13 figures | null | null | null | q-bio.NC cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Binaural sound localization is usually considered a discrimination task,
where interaural time (ITD) and level (ILD) disparities at pure frequency
channels are utilized to identify a position of a sound source. In natural
conditions binaural circuits are exposed to a stimulation by sound waves
originating from multiple, often moving and overlapping sources. Therefore
statistics of binaural cues depend on acoustic properties and the spatial
configuration of the environment. In order to process binaural sounds
efficiently, the auditory system should be adapted to naturally encountered cue
distributions. Statistics of cues encountered naturally and their dependence on
the physical properties of an auditory scene have not been studied before.
Here, we performed binaural recordings of three auditory scenes with varying
spatial properties. We have analyzed empirical cue distributions from each
scene by fitting them with parametric probability density functions which
allowed for an easy comparison of different scenes. Higher order statistics of
binaural waveforms were analyzed by performing Independent Component Analysis
(ICA) and studying properties of learned basis functions. Obtained results can
be related to known neuronal mechanisms and suggest how binaural hearing can be
understood in terms of adaptation to the natural signal statistics.
| [
{
"created": "Wed, 19 Feb 2014 12:47:05 GMT",
"version": "v1"
},
{
"created": "Sat, 1 Mar 2014 01:32:16 GMT",
"version": "v2"
}
] | 2014-03-04 | [
[
"Młynarski",
"Wiktor",
""
],
[
"Jost",
"Jürgen",
""
]
] | Binaural sound localization is usually considered a discrimination task, where interaural time (ITD) and level (ILD) disparities at pure frequency channels are utilized to identify a position of a sound source. In natural conditions binaural circuits are exposed to a stimulation by sound waves originating from multiple, often moving and overlapping sources. Therefore statistics of binaural cues depend on acoustic properties and the spatial configuration of the environment. In order to process binaural sounds efficiently, the auditory system should be adapted to naturally encountered cue distributions. Statistics of cues encountered naturally and their dependence on the physical properties of an auditory scene have not been studied before. Here, we performed binaural recordings of three auditory scenes with varying spatial properties. We have analyzed empirical cue distributions from each scene by fitting them with parametric probability density functions which allowed for an easy comparison of different scenes. Higher order statistics of binaural waveforms were analyzed by performing Independent Component Analysis (ICA) and studying properties of learned basis functions. Obtained results can be related to known neuronal mechanisms and suggest how binaural hearing can be understood in terms of adaptation to the natural signal statistics. |
q-bio/0703045 | Dietrich Stauffer | Dietrich Stauffer, Ana Proykova and Karl-Heinz Lampe | Monte Carlo Simulation of Age-Dependent Host-Parasite Relations | 8 pages including 6 figures | null | 10.1016/j.physa.2007.05.028 | null | q-bio.PE | null | The death of a biological population is an extreme event which we investigate
here for a host-parasitoid system. Our simulations using the Penna ageing model
show how biological evolution can ``teach'' the parasitoids to avoid extinction
by waiting for the right age of the host. We also show the dependence of
extinction time on the population size.
| [
{
"created": "Wed, 21 Mar 2007 11:26:58 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Stauffer",
"Dietrich",
""
],
[
"Proykova",
"Ana",
""
],
[
"Lampe",
"Karl-Heinz",
""
]
] | The death of a biological population is an extreme event which we investigate here for a host-parasitoid system. Our simulations using the Penna ageing model show how biological evolution can ``teach'' the parasitoids to avoid extinction by waiting for the right age of the host. We also show the dependence of extinction time on the population size. |
2402.01829 | Shreyas V | Shreyas V, Swati Agarwal | Predicting ATP binding sites in protein sequences using Deep Learning
and Natural Language Processing | Published at 3rd Annual AAAI Workshop on AI to Accelerate Science and
Engineering (AI2ASE) | null | null | null | q-bio.BM cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Predicting ATP-Protein Binding sites in genes is of great significance in the
field of Biology and Medicine. The majority of research in this field has been
conducted through time- and resource-intensive 'wet experiments' in
laboratories. Over the years, researchers have been investigating computational
methods to accomplish the same goals, utilising the
strength of advanced Deep Learning and NLP algorithms. In this paper, we
propose to develop methods to classify ATP-Protein binding sites. We conducted
various experiments mainly using PSSMs and several word embeddings as features.
We used 2D CNNs and LightGBM classifiers as our chief Deep Learning Algorithms.
The MP3Vec and BERT models have also been subjected to testing in our study.
The outcomes of our experiments demonstrated improvement over the
state-of-the-art benchmarks.
| [
{
"created": "Fri, 2 Feb 2024 18:42:39 GMT",
"version": "v1"
}
] | 2024-02-06 | [
[
"V",
"Shreyas",
""
],
[
"Agarwal",
"Swati",
""
]
] | Predicting ATP-Protein Binding sites in genes is of great significance in the field of Biology and Medicine. The majority of research in this field has been conducted through time- and resource-intensive 'wet experiments' in laboratories. Over the years, researchers have been investigating computational methods to accomplish the same goals, utilising the strength of advanced Deep Learning and NLP algorithms. In this paper, we propose to develop methods to classify ATP-Protein binding sites. We conducted various experiments mainly using PSSMs and several word embeddings as features. We used 2D CNNs and LightGBM classifiers as our chief Deep Learning Algorithms. The MP3Vec and BERT models have also been subjected to testing in our study. The outcomes of our experiments demonstrated improvement over the state-of-the-art benchmarks. |
1812.05759 | David Warne | David J. Warne (1), Ruth E. Baker (2), Matthew J. Simpson (1) ((1)
Queensland University of Technology, (2) University of Oxford) | Simulation and inference algorithms for stochastic biochemical reaction
networks: from basic concepts to state-of-the-art | null | null | 10.1098/rsif.2018.0943 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochasticity is a key characteristic of intracellular processes such as gene
regulation and chemical signalling. Therefore, characterising stochastic
effects in biochemical systems is essential to understand the complex dynamics
of living things. Mathematical idealisations of biochemically reacting systems
must be able to capture stochastic phenomena. While robust theory exists to
describe such stochastic models, the computational challenges in exploring
these models can be a significant burden in practice since realistic models are
analytically intractable. Determining the expected behaviour and variability of
a stochastic biochemical reaction network requires many probabilistic
simulations of its evolution. Using a biochemical reaction network model to
assist in the interpretation of time course data from a biological experiment
is an even greater challenge due to the intractability of the likelihood
function for determining observation probabilities. These computational
challenges have been subjects of active research for over four decades. In this
review, we present an accessible discussion of the major historical
developments and state-of-the-art computational techniques relevant to
simulation and inference problems for stochastic biochemical reaction network
models. Detailed algorithms for particularly important methods are described
and complemented with MATLAB implementations. As a result, this review provides
a practical and accessible introduction to computational methods for stochastic
models within the life sciences community.
| [
{
"created": "Fri, 14 Dec 2018 02:09:30 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Jan 2019 04:41:47 GMT",
"version": "v2"
}
] | 2019-03-04 | [
[
"Warne",
"David J.",
""
],
[
"Baker",
"Ruth E.",
""
],
[
"Simpson",
"Matthew J.",
""
]
] | Stochasticity is a key characteristic of intracellular processes such as gene regulation and chemical signalling. Therefore, characterising stochastic effects in biochemical systems is essential to understand the complex dynamics of living things. Mathematical idealisations of biochemically reacting systems must be able to capture stochastic phenomena. While robust theory exists to describe such stochastic models, the computational challenges in exploring these models can be a significant burden in practice since realistic models are analytically intractable. Determining the expected behaviour and variability of a stochastic biochemical reaction network requires many probabilistic simulations of its evolution. Using a biochemical reaction network model to assist in the interpretation of time course data from a biological experiment is an even greater challenge due to the intractability of the likelihood function for determining observation probabilities. These computational challenges have been subjects of active research for over four decades. In this review, we present an accessible discussion of the major historical developments and state-of-the-art computational techniques relevant to simulation and inference problems for stochastic biochemical reaction network models. Detailed algorithms for particularly important methods are described and complemented with MATLAB implementations. As a result, this review provides a practical and accessible introduction to computational methods for stochastic models within the life sciences community. |
1310.7682 | Liane Gabora | Liane Gabora and Diederik Aerts | Contextualizing concepts using a mathematical generalization of the
quantum formalism | 31 pages. arXiv admin note: substantial text overlap with
arXiv:quant-ph/0205161 | Journal of Experimental and Theoretical Artificial Intelligence,
14(4), 327-358 | null | null | q-bio.NC cs.AI quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We outline the rationale and preliminary results of using the state context
property (SCOP) formalism, originally developed as a generalization of quantum
mechanics, to describe the contextual manner in which concepts are evoked, used
and combined to generate meaning. The quantum formalism was developed to cope
with problems arising in the description of (i) the measurement process, and
(ii) the generation of new states with new properties when particles become
entangled. Similar problems arising with concepts motivated the formal
treatment introduced here. Concepts are viewed not as fixed representations,
but as entities existing in states of potentiality that require interaction with
a context (a stimulus or another concept) to 'collapse' to an instantiated form
(e.g. exemplar, prototype, or other possibly imaginary instance). The stimulus
situation plays the role of the measurement in physics, acting as context that
induces a change of the cognitive state from superposition state to collapsed
state. The collapsed state is more likely to consist of a conjunction of
concepts for associative than analytic thought because more stimulus or concept
properties take part in the collapse. We provide two contextual measures of
conceptual distance (one using collapse probabilities and the other weighted
properties) and show how they can be applied to conjunctions using the pet fish
problem.
| [
{
"created": "Tue, 29 Oct 2013 04:40:53 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Nov 2013 22:17:52 GMT",
"version": "v2"
}
] | 2013-11-05 | [
[
"Gabora",
"Liane",
""
],
[
"Aerts",
"Diederik",
""
]
] | We outline the rationale and preliminary results of using the state context property (SCOP) formalism, originally developed as a generalization of quantum mechanics, to describe the contextual manner in which concepts are evoked, used and combined to generate meaning. The quantum formalism was developed to cope with problems arising in the description of (i) the measurement process, and (ii) the generation of new states with new properties when particles become entangled. Similar problems arising with concepts motivated the formal treatment introduced here. Concepts are viewed not as fixed representations, but as entities existing in states of potentiality that require interaction with a context (a stimulus or another concept) to 'collapse' to an instantiated form (e.g. exemplar, prototype, or other possibly imaginary instance). The stimulus situation plays the role of the measurement in physics, acting as context that induces a change of the cognitive state from superposition state to collapsed state. The collapsed state is more likely to consist of a conjunction of concepts for associative than analytic thought because more stimulus or concept properties take part in the collapse. We provide two contextual measures of conceptual distance (one using collapse probabilities and the other weighted properties) and show how they can be applied to conjunctions using the pet fish problem. |
2010.03307 | Michal Michalak | Micha{\l} Pawe{\l} Michalak, Jack Cordes, Agnieszka Kulawik,
S{\l}awomir Sitek, S{\l}awomir Pytel, El\.zbieta Zuza\'nska-\.Zy\'sko,
Rados{\l}aw Wieczorek | An unbiased spatiotemporal risk model for COVID-19 with
epidemiologically meaningful dynamics (A systematic framework for
spatiotemporal modelling of COVID-19 disease) | null | null | null | null | q-bio.QM q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Spatiotemporal modelling of infectious diseases such as COVID-19 involves
using a variety of epidemiological metrics such as regional proportion of cases
or regional positivity rates. Although observing their changes over time is
critical to estimate the regional disease burden, the dynamical properties of
these measures as well as cross-relationships are not systematically explained.
Here we provide a spatiotemporal framework composed of six commonly used and
newly constructed epidemiological metrics and conduct a case study evaluation.
We introduce a refined risk model that is biased neither by the differences in
population sizes nor by the spatial heterogeneity of testing. In particular,
the proposed methodology is useful for the unbiased identification of time
periods with elevated COVID-19 risk, without sensitivity to the spatial
heterogeneity of either population or testing. Our results also provide
insights regarding regional prioritization of testing and the consequences of
potential synchronization of epidemics between regions.
| [
{
"created": "Wed, 7 Oct 2020 09:53:37 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Dec 2020 09:32:01 GMT",
"version": "v2"
}
] | 2020-12-14 | [
[
"Michalak",
"Michał Paweł",
""
],
[
"Cordes",
"Jack",
""
],
[
"Kulawik",
"Agnieszka",
""
],
[
"Sitek",
"Sławomir",
""
],
[
"Pytel",
"Sławomir",
""
],
[
"Zuzańska-Żyśko",
"Elżbieta",
""
],
[
"Wieczorek",
"Radosław",
""
]
] | Spatiotemporal modelling of infectious diseases such as COVID-19 involves using a variety of epidemiological metrics such as regional proportion of cases or regional positivity rates. Although observing their changes over time is critical to estimate the regional disease burden, the dynamical properties of these measures as well as cross-relationships are not systematically explained. Here we provide a spatiotemporal framework composed of six commonly used and newly constructed epidemiological metrics and conduct a case study evaluation. We introduce a refined risk model that is biased neither by the differences in population sizes nor by the spatial heterogeneity of testing. In particular, the proposed methodology is useful for the unbiased identification of time periods with elevated COVID-19 risk, without sensitivity to the spatial heterogeneity of either population or testing. Our results also provide insights regarding regional prioritization of testing and the consequences of potential synchronization of epidemics between regions. |
1501.00727 | Vince Grolmusz | Balazs Szalkai, Balint Varga, Vince Grolmusz | Graph Theoretical Analysis Reveals: Women's Brains are Better Connected
than Men's | null | null | 10.1371/journal.pone.0130045 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep graph-theoretic ideas in the context of the graph of the World Wide
Web led to the definition of Google's PageRank and the subsequent rise of the
most-popular search engine to date. Brain graphs, or connectomes, are being
widely explored today. We believe that non-trivial graph theoretic concepts,
just as happened in the case of the World Wide Web, will lead to
discoveries enlightening the structural and also the functional details of the
animal and human brains. When scientists examine large networks of tens or
hundreds of millions of vertices, only fast algorithms can be applied because
of the size constraints. In the case of diffusion MRI-based structural human
brain imaging, the effective vertex number of the connectomes, or brain graphs
derived from the data is on the scale of several hundred today. That size
facilitates applying strict mathematical graph algorithms even for some
hard-to-compute (or NP-hard) quantities like vertex cover or balanced minimum
cut. In the present work we have examined brain graphs, computed from the data
of the Human Connectome Project, recorded from male and female subjects between
ages 22 and 35. Significant differences were found between the male and female
structural brain graphs: we show that the average female connectome has more
edges, is a better expander graph, has larger minimal bisection width, and has
more spanning trees than the average male connectome. Since the average female
brain weighs less than the brain of males, these properties show that the
female brain is more "well-connected" or perhaps, more "efficient" in a sense
than the brain of males.
| [
{
"created": "Sun, 4 Jan 2015 22:05:21 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Jan 2015 22:35:41 GMT",
"version": "v2"
},
{
"created": "Mon, 12 Jan 2015 10:07:48 GMT",
"version": "v3"
}
] | 2016-02-17 | [
[
"Szalkai",
"Balazs",
""
],
[
"Varga",
"Balint",
""
],
[
"Grolmusz",
"Vince",
""
]
] | Deep graph-theoretic ideas in the context of the graph of the World Wide Web led to the definition of Google's PageRank and the subsequent rise of the most-popular search engine to date. Brain graphs, or connectomes, are being widely explored today. We believe that non-trivial graph theoretic concepts, just as happened in the case of the World Wide Web, will lead to discoveries enlightening the structural and also the functional details of the animal and human brains. When scientists examine large networks of tens or hundreds of millions of vertices, only fast algorithms can be applied because of the size constraints. In the case of diffusion MRI-based structural human brain imaging, the effective vertex number of the connectomes, or brain graphs derived from the data is on the scale of several hundred today. That size facilitates applying strict mathematical graph algorithms even for some hard-to-compute (or NP-hard) quantities like vertex cover or balanced minimum cut. In the present work we have examined brain graphs, computed from the data of the Human Connectome Project, recorded from male and female subjects between ages 22 and 35. Significant differences were found between the male and female structural brain graphs: we show that the average female connectome has more edges, is a better expander graph, has larger minimal bisection width, and has more spanning trees than the average male connectome. Since the average female brain weighs less than the brain of males, these properties show that the female brain is more "well-connected" or perhaps, more "efficient" in a sense than the brain of males. |
1302.1758 | Andrey Dovzhenok | Andrey Dovzhenok, Choongseok Park, Robert M. Worth and Leonid L.
Rubchinsky | Failure of Delayed Feedback Deep Brain Stimulation for Intermittent
Pathological Synchronization in Parkinson's Disease | 19 pages, 8 figures | PLoS ONE 8(3): e58264, 2013 | 10.1371/journal.pone.0058264 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Suppression of excessively synchronous beta-band oscillatory activity in the
brain is believed to suppress hypokinetic motor symptoms of Parkinson's
disease. Recently, a lot of interest has been devoted to desynchronizing
delayed feedback deep brain stimulation (DBS). This type of synchrony control
was shown to destabilize the synchronized state in networks of simple model
oscillators as well as in networks of coupled model neurons. However, the
dynamics of the neural activity in Parkinson's disease exhibits complex
intermittent synchronous patterns, far from the idealized synchronous dynamics
used to study the delayed feedback stimulation. This study explores the action
of delayed feedback stimulation on partially synchronized oscillatory dynamics,
similar to what one observes experimentally in parkinsonian patients. We employ
a model of the basal ganglia networks which reproduces experimentally observed
fine temporal structure of the synchronous dynamics. When the parameters of our
model are such that the synchrony is unphysiologically strong, the feedback
exerts a desynchronizing action. However, when the network is tuned to
reproduce the highly variable temporal patterns observed experimentally, the
same kind of delayed feedback may actually increase the synchrony. As network
parameters are changed from the range which produces complete synchrony to
those favoring less synchronous dynamics, desynchronizing delayed feedback may
gradually turn into synchronizing stimulation. This suggests that delayed
feedback DBS in Parkinson's disease may boost rather than suppress
synchronization and is unlikely to be clinically successful. The study also
indicates that delayed feedback stimulation may not necessarily exhibit a
desynchronization effect when acting on a physiologically realistic partially
synchronous dynamics, and provides an example of how to estimate the
stimulation effect.
| [
{
"created": "Thu, 7 Feb 2013 14:30:52 GMT",
"version": "v1"
}
] | 2013-03-05 | [
[
"Dovzhenok",
"Andrey",
""
],
[
"Park",
"Choongseok",
""
],
[
"Worth",
"Robert M.",
""
],
[
"Rubchinsky",
"Leonid L.",
""
]
] | Suppression of excessively synchronous beta-band oscillatory activity in the brain is believed to suppress hypokinetic motor symptoms of Parkinson's disease. Recently, a lot of interest has been devoted to desynchronizing delayed feedback deep brain stimulation (DBS). This type of synchrony control was shown to destabilize the synchronized state in networks of simple model oscillators as well as in networks of coupled model neurons. However, the dynamics of the neural activity in Parkinson's disease exhibits complex intermittent synchronous patterns, far from the idealized synchronous dynamics used to study the delayed feedback stimulation. This study explores the action of delayed feedback stimulation on partially synchronized oscillatory dynamics, similar to what one observes experimentally in parkinsonian patients. We employ a model of the basal ganglia networks which reproduces experimentally observed fine temporal structure of the synchronous dynamics. When the parameters of our model are such that the synchrony is unphysiologically strong, the feedback exerts a desynchronizing action. However, when the network is tuned to reproduce the highly variable temporal patterns observed experimentally, the same kind of delayed feedback may actually increase the synchrony. As network parameters are changed from the range which produces complete synchrony to those favoring less synchronous dynamics, desynchronizing delayed feedback may gradually turn into synchronizing stimulation. This suggests that delayed feedback DBS in Parkinson's disease may boost rather than suppress synchronization and is unlikely to be clinically successful. The study also indicates that delayed feedback stimulation may not necessarily exhibit a desynchronization effect when acting on a physiologically realistic partially synchronous dynamics, and provides an example of how to estimate the stimulation effect. |
1612.05955 | Loizos Kounios | Loizos Kounios, Jeff Clune, Kostas Kouvaris, G\"unter P. Wagner,
Mihaela Pavlicev, Daniel M. Weinreich, Richard A. Watson | Resolving the paradox of evolvability with learning theory: How
evolution learns to improve evolvability on rugged fitness landscapes | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been hypothesized that one of the main reasons evolution has been able
to produce such impressive adaptations is because it has improved its own
ability to evolve -- "the evolution of evolvability". Rupert Riedl, for
example, an early pioneer of evolutionary developmental biology, suggested that
the evolution of complex adaptations is facilitated by a developmental
organization that is itself shaped by past selection to facilitate evolutionary
innovation. However, selection for characteristics that enable future
innovation seems paradoxical: natural selection cannot favor structures for
benefits they have not yet produced, and favoring characteristics for benefits
that have already been produced does not constitute future innovation. Here we
resolve this paradox by exploiting a formal equivalence between the evolution
of evolvability and learning systems. We use the conditions that enable simple
learning systems to generalize, i.e., to use past experience to improve
performance on previously unseen, future test cases, to demonstrate conditions
where natural selection can systematically favor developmental organizations
that benefit future evolvability. Using numerical simulations of evolution on
highly epistatic fitness landscapes, we illustrate how the structure of evolved
gene regulation networks can result in increased evolvability capable of
avoiding local fitness peaks and discovering higher fitness phenotypes. Our
findings support Riedl's intuition: Developmental organizations that "mimic"
the organization of constraints on phenotypes can be favored by short-term
selection and also facilitate future innovation. Importantly, the conditions
that enable the evolution of such surprising evolvability follow from the same
non-mysterious conditions that permit generalization in learning systems.
| [
{
"created": "Sun, 18 Dec 2016 17:24:11 GMT",
"version": "v1"
}
] | 2016-12-20 | [
[
"Kounios",
"Loizos",
""
],
[
"Clune",
"Jeff",
""
],
[
"Kouvaris",
"Kostas",
""
],
[
"Wagner",
"Günter P.",
""
],
[
"Pavlicev",
"Mihaela",
""
],
[
"Weinreich",
"Daniel M.",
""
],
[
"Watson",
"Richard A.",
""
]
] | It has been hypothesized that one of the main reasons evolution has been able to produce such impressive adaptations is because it has improved its own ability to evolve -- "the evolution of evolvability". Rupert Riedl, for example, an early pioneer of evolutionary developmental biology, suggested that the evolution of complex adaptations is facilitated by a developmental organization that is itself shaped by past selection to facilitate evolutionary innovation. However, selection for characteristics that enable future innovation seems paradoxical: natural selection cannot favor structures for benefits they have not yet produced, and favoring characteristics for benefits that have already been produced does not constitute future innovation. Here we resolve this paradox by exploiting a formal equivalence between the evolution of evolvability and learning systems. We use the conditions that enable simple learning systems to generalize, i.e., to use past experience to improve performance on previously unseen, future test cases, to demonstrate conditions where natural selection can systematically favor developmental organizations that benefit future evolvability. Using numerical simulations of evolution on highly epistatic fitness landscapes, we illustrate how the structure of evolved gene regulation networks can result in increased evolvability capable of avoiding local fitness peaks and discovering higher fitness phenotypes. Our findings support Riedl's intuition: Developmental organizations that "mimic" the organization of constraints on phenotypes can be favored by short-term selection and also facilitate future innovation. Importantly, the conditions that enable the evolution of such surprising evolvability follow from the same non-mysterious conditions that permit generalization in learning systems. |
2012.14559 | Bhav Jain | Bhav Jain, Sean Elliott | Correlation Across Environments Encoded by Hippocampal Place Cells | null | null | null | null | q-bio.NC stat.AP | http://creativecommons.org/licenses/by/4.0/ | The hippocampus is often attributed to episodic memory formation and storage
in the mammalian brain; in particular, Alme et al. showed that hippocampal area
CA3 forms statistically independent representations across a large number of
environments, even if the environments share highly similar features. This lack
of overlap between spatial maps indicates the large capacity of the CA3
circuitry. In this paper, we support the argument for the large capacity of the
CA3 network. To do so, we replicate the key findings of Alme et al. and extend
the results by perturbing the neural activity encodings with noise and
conducting representation similarity analysis (RSA). We find that the
correlations between firing rates are partially resistant to noise, and that
the spatial representations across cells show similar patterns, even across
different environments. Finally, we discuss some theoretical and practical
implications of our results.
| [
{
"created": "Tue, 29 Dec 2020 01:41:31 GMT",
"version": "v1"
}
] | 2021-01-01 | [
[
"Jain",
"Bhav",
""
],
[
"Elliott",
"Sean",
""
]
] | The hippocampus is often attributed to episodic memory formation and storage in the mammalian brain; in particular, Alme et al. showed that hippocampal area CA3 forms statistically independent representations across a large number of environments, even if the environments share highly similar features. This lack of overlap between spatial maps indicates the large capacity of the CA3 circuitry. In this paper, we support the argument for the large capacity of the CA3 network. To do so, we replicate the key findings of Alme et al. and extend the results by perturbing the neural activity encodings with noise and conducting representation similarity analysis (RSA). We find that the correlations between firing rates are partially resistant to noise, and that the spatial representations across cells show similar patterns, even across different environments. Finally, we discuss some theoretical and practical implications of our results. |
2103.12638 | Kevin Schmidt | K. Schmidt, J. Culbertson, C. Cox, H.S. Clouse, O. Larue, M.
Molineaux, S. Rogers | What is it Like to Be a Bot: Simulated, Situated, Structurally Coherent
Qualia (S3Q) Theory of Consciousness | null | null | null | null | q-bio.NC | http://creativecommons.org/publicdomain/zero/1.0/ | A novel representationalist theory of consciousness is presented that is
grounded in neuroscience and provides a path to artificially conscious
computing. Central to the theory are representational affordances of the
conscious experience based on the generation of qualia, the fundamental unit of
the conscious representation. The current approach is focused on understanding
the balance of simulation, situatedness, and structural coherence of artificial
conscious representations through converging evidence from neuroscientific and
modeling experiments. Representations instantiating a suitable balance of
situated and structurally coherent simulation-based qualia are hypothesized to
afford the agent the flexibilities required to succeed in rapidly changing
environments.
| [
{
"created": "Sat, 13 Mar 2021 02:07:13 GMT",
"version": "v1"
}
] | 2021-03-24 | [
[
"Schmidt",
"K.",
""
],
[
"Culbertson",
"J.",
""
],
[
"Cox",
"C.",
""
],
[
"Clouse",
"H. S.",
""
],
[
"Larue",
"O.",
""
],
[
"Molineaux",
"M.",
""
],
[
"Rogers",
"S.",
""
]
] | A novel representationalist theory of consciousness is presented that is grounded in neuroscience and provides a path to artificially conscious computing. Central to the theory are representational affordances of the conscious experience based on the generation of qualia, the fundamental unit of the conscious representation. The current approach is focused on understanding the balance of simulation, situatedness, and structural coherence of artificial conscious representations through converging evidence from neuroscientific and modeling experiments. Representations instantiating a suitable balance of situated and structurally coherent simulation-based qualia are hypothesized to afford the agent the flexibilities required to succeed in rapidly changing environments. |
2405.09647 | Zhenying Chen | Zhenying Chen, Hasan Ahmed, Cora Hirst, Rustom Antia | Dynamics of antibody binding and neutralization during viral infection | null | null | null | null | q-bio.PE q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In vivo in infection, virions are constantly produced and die rapidly. In
contrast, most antibody binding assays do not include such features. Motivated
by this, we considered virions with n=100 binding sites in simple mathematical
models with and without the production of virions. In the absence of viral
production, at steady state, the distribution of virions by the number of sites
bound is given by a binomial distribution, with the proportion being a simple
function of antibody affinity (Kon/Koff) and concentration; this generalizes to
a multinomial distribution in the case of two or more kinds of antibodies. In
the presence of viral production, the role of affinity is replaced by an
infection analog of affinity (IAA), with IAA=Kon/(Koff+dv+r), where dv is the
virus decaying rate and r is the infection growth rate. Because in vivo dv can
be large, the amount of binding as well as the effect of Koff on binding are
substantially reduced. When neutralization is added, the effect of Koff is
similarly small which may help explain the relatively high Koff reported for
many antibodies. We next show that the n+2-dimensional model used for
neutralization can be simplified to a 2-dimensional model. This provides some
justification for the simple models that have been used in practice. A
corollary of our results is that an unexpectedly large effect of Koff in vivo
may point to mechanisms of neutralization beyond stoichiometry. Our results
suggest reporting Kon and Koff separately, rather than focusing on affinity,
until the situation is better resolved both experimentally and theoretically.
| [
{
"created": "Wed, 15 May 2024 18:26:37 GMT",
"version": "v1"
}
] | 2024-05-17 | [
[
"Chen",
"Zhenying",
""
],
[
"Ahmed",
"Hasan",
""
],
[
"Hirst",
"Cora",
""
],
[
"Antia",
"Rustom",
""
]
] | In vivo in infection, virions are constantly produced and die rapidly. In contrast, most antibody binding assays do not include such features. Motivated by this, we considered virions with n=100 binding sites in simple mathematical models with and without the production of virions. In the absence of viral production, at steady state, the distribution of virions by the number of sites bound is given by a binomial distribution, with the proportion being a simple function of antibody affinity (Kon/Koff) and concentration; this generalizes to a multinomial distribution in the case of two or more kinds of antibodies. In the presence of viral production, the role of affinity is replaced by an infection analog of affinity (IAA), with IAA=Kon/(Koff+dv+r), where dv is the virus decaying rate and r is the infection growth rate. Because in vivo dv can be large, the amount of binding as well as the effect of Koff on binding are substantially reduced. When neutralization is added, the effect of Koff is similarly small which may help explain the relatively high Koff reported for many antibodies. We next show that the n+2-dimensional model used for neutralization can be simplified to a 2-dimensional model. This provides some justification for the simple models that have been used in practice. A corollary of our results is that an unexpectedly large effect of Koff in vivo may point to mechanisms of neutralization beyond stoichiometry. Our results suggest reporting Kon and Koff separately, rather than focusing on affinity, until the situation is better resolved both experimentally and theoretically. |
1203.5673 | Yasser Roudi | Joanna Tyrcha, Yasser Roudi, Matteo Marsili, John Hertz | The Effect of Nonstationarity on Models Inferred from Neural Data | version in press in J Stat Mech | J. Stat. Mech. (2013) P03005 | 10.1088/1742-5468/2013/03/P03005 | null | q-bio.QM q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neurons subject to a common non-stationary input may exhibit a correlated
firing behavior. Correlations in the statistics of neural spike trains also
arise as the effect of interaction between neurons. Here we show that these two
situations can be distinguished, with machine learning techniques, provided the
data are rich enough. In order to do this, we study the problem of inferring a
kinetic Ising model, stationary or nonstationary, from the available data. We
apply the inference procedure to two data sets: one from salamander retinal
ganglion cells and the other from a realistic computational cortical network
model. We show that many aspects of the concerted activity of the salamander
retinal neurons can be traced simply to the external input. A model of
non-interacting neurons subject to a non-stationary external field outperforms
a model with stationary input with couplings between neurons, even accounting
for the differences in the number of model parameters. When couplings are added
to the non-stationary model, for the retinal data, little is gained: the
inferred couplings are generally not significant. Likewise, the distribution of
the sizes of sets of neurons that spike simultaneously and the frequency of
spike patterns as function of their rank (Zipf plots) are well-explained by an
independent-neuron model with time-dependent external input, and adding
connections to such a model does not offer significant improvement. For the
cortical model data, robust couplings, well correlated with the real
connections, can be inferred using the non-stationary model. Adding connections
to this model slightly improves the agreement with the data for the probability
of synchronous spikes but hardly affects the Zipf plot.
| [
{
"created": "Mon, 26 Mar 2012 14:08:44 GMT",
"version": "v1"
},
{
"created": "Thu, 31 Jan 2013 13:19:52 GMT",
"version": "v2"
}
] | 2021-04-13 | [
[
"Tyrcha",
"Joanna",
""
],
[
"Roudi",
"Yasser",
""
],
[
"Marsili",
"Matteo",
""
],
[
"Hertz",
"John",
""
]
] | Neurons subject to a common non-stationary input may exhibit a correlated firing behavior. Correlations in the statistics of neural spike trains also arise as the effect of interaction between neurons. Here we show that these two situations can be distinguished, with machine learning techniques, provided the data are rich enough. In order to do this, we study the problem of inferring a kinetic Ising model, stationary or nonstationary, from the available data. We apply the inference procedure to two data sets: one from salamander retinal ganglion cells and the other from a realistic computational cortical network model. We show that many aspects of the concerted activity of the salamander retinal neurons can be traced simply to the external input. A model of non-interacting neurons subject to a non-stationary external field outperforms a model with stationary input with couplings between neurons, even accounting for the differences in the number of model parameters. When couplings are added to the non-stationary model, for the retinal data, little is gained: the inferred couplings are generally not significant. Likewise, the distribution of the sizes of sets of neurons that spike simultaneously and the frequency of spike patterns as function of their rank (Zipf plots) are well-explained by an independent-neuron model with time-dependent external input, and adding connections to such a model does not offer significant improvement. For the cortical model data, robust couplings, well correlated with the real connections, can be inferred using the non-stationary model. Adding connections to this model slightly improves the agreement with the data for the probability of synchronous spikes but hardly affects the Zipf plot. |
1601.06764 | Simon Mitternacht | Simon Mitternacht | FreeSASA: An open source C library for solvent accessible surface area
calculations | 12 pages, 2 figures, 1 appendix | F1000 Research 2016, 5:189 | 10.12688/f1000research.7931.1 | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Calculating solvent accessible surface areas (SASA) is a run-of-the-mill
calculation in structural biology. Although there are many programs available
for this calculation, there are no free-standing, open-source tools designed
for easy tool-chain integration. FreeSASA is an open source C library for SASA
calculations that provides both command-line and Python interfaces in addition
to its C API. The library implements both Lee and Richards' and Shrake and
Rupley's approximations, and is highly configurable to allow the user to
control molecular parameters, accuracy and output granularity. It only depends
on standard C libraries and should therefore be easy to compile and install on
any platform. The source code is freely available from
http://freesasa.github.io/. The library is well-documented, stable and
efficient. The command-line interface can easily replace closed source legacy
programs, with comparable or better accuracy and speed, and with some added
functionality.
| [
{
"created": "Mon, 25 Jan 2016 20:42:23 GMT",
"version": "v1"
}
] | 2016-03-01 | [
[
"Mitternacht",
"Simon",
""
]
] | Calculating solvent accessible surface areas (SASA) is a run-of-the-mill calculation in structural biology. Although there are many programs available for this calculation, there are no free-standing, open-source tools designed for easy tool-chain integration. FreeSASA is an open source C library for SASA calculations that provides both command-line and Python interfaces in addition to its C API. The library implements both Lee and Richards' and Shrake and Rupley's approximations, and is highly configurable to allow the user to control molecular parameters, accuracy and output granularity. It only depends on standard C libraries and should therefore be easy to compile and install on any platform. The source code is freely available from http://freesasa.github.io/. The library is well-documented, stable and efficient. The command-line interface can easily replace closed source legacy programs, with comparable or better accuracy and speed, and with some added functionality. |
1710.05783 | Dennis C. Rapaport | D.C. Rapaport | Molecular dynamics study of T=3 capsid assembly | 18 pages, 10 figures (minor changes) | J. Biol. Phys. 44, 147 (2018) | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular dynamics simulation is used to model the self-assembly of
polyhedral shells containing 180 trapezoidal particles that correspond to the
T=3 virus capsid. Three kinds of particle, differing only slightly in shape,
are used to account for the effect of quasi-equivalence. Bond formation between
particles is reversible and an explicit atomistic solvent is included. Under
suitable conditions the simulations are able to produce complete shells, with
the majority of unused particles remaining as monomers, and practically no
other clusters. There are also no incorrectly assembled clusters. The
simulations reveal details of intermediate structures along the growth pathway,
information that is relevant for interpreting experiment.
| [
{
"created": "Mon, 16 Oct 2017 15:33:00 GMT",
"version": "v1"
},
{
"created": "Sun, 18 Mar 2018 08:12:32 GMT",
"version": "v2"
}
] | 2020-04-01 | [
[
"Rapaport",
"D. C.",
""
]
] | Molecular dynamics simulation is used to model the self-assembly of polyhedral shells containing 180 trapezoidal particles that correspond to the T=3 virus capsid. Three kinds of particle, differing only slightly in shape, are used to account for the effect of quasi-equivalence. Bond formation between particles is reversible and an explicit atomistic solvent is included. Under suitable conditions the simulations are able to produce complete shells, with the majority of unused particles remaining as monomers, and practically no other clusters. There are also no incorrectly assembled clusters. The simulations reveal details of intermediate structures along the growth pathway, information that is relevant for interpreting experiment. |
2311.13417 | Gregory Way | Erik Serrano, Srinivas Niranj Chandrasekaran, Dave Bunten, Kenneth I.
Brewer, Jenna Tomkinson, Roshan Kern, Michael Bornholdt, Stephen Fleming,
Ruifan Pei, John Arevalo, Hillary Tsang, Vincent Rubinetti, Callum
Tromans-Coia, Tim Becker, Erin Weisbart, Charlotte Bunne, Alexandr A.
Kalinin, Rebecca Senft, Stephen J. Taylor, Nasim Jamali, Adeniyi Adeboye,
Hamdah Shafqat Abbasi, Allen Goodman, Juan C. Caicedo, Anne E. Carpenter,
Beth A. Cimini, Shantanu Singh, Gregory P. Way | Reproducible image-based profiling with Pycytominer | We updated: Figures (e.g., remove panel from Figure 1) to increase
clarity. Consolidated the introduction, results, and discussion into a single
section. Added a new analysis to predict compounds that cause undesirable
cell injuries. Added three tables including one to highlight image-based
profiling software limitations. 14 pages, 2 main figures, 5 supplementary
figures, 3 tables | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Advances in high-throughput microscopy have enabled the rapid acquisition of
large numbers of high-content microscopy images. Whether by deep learning or
classical algorithms, image analysis pipelines then produce single-cell
features. To process these single-cells for downstream applications, we present
Pycytominer, a user-friendly, open-source python package that implements the
bioinformatics steps, known as image-based profiling. We demonstrate
Pycytominer's usefulness in a machine learning project to predict nuisance
compounds that cause undesirable cell injuries.
| [
{
"created": "Wed, 22 Nov 2023 14:26:48 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Jul 2024 21:01:54 GMT",
"version": "v2"
}
] | 2024-07-04 | [
[
"Serrano",
"Erik",
""
],
[
"Chandrasekaran",
"Srinivas Niranj",
""
],
[
"Bunten",
"Dave",
""
],
[
"Brewer",
"Kenneth I.",
""
],
[
"Tomkinson",
"Jenna",
""
],
[
"Kern",
"Roshan",
""
],
[
"Bornholdt",
"Michael",
""
],
[
"Fleming",
"Stephen",
""
],
[
"Pei",
"Ruifan",
""
],
[
"Arevalo",
"John",
""
],
[
"Tsang",
"Hillary",
""
],
[
"Rubinetti",
"Vincent",
""
],
[
"Tromans-Coia",
"Callum",
""
],
[
"Becker",
"Tim",
""
],
[
"Weisbart",
"Erin",
""
],
[
"Bunne",
"Charlotte",
""
],
[
"Kalinin",
"Alexandr A.",
""
],
[
"Senft",
"Rebecca",
""
],
[
"Taylor",
"Stephen J.",
""
],
[
"Jamali",
"Nasim",
""
],
[
"Adeboye",
"Adeniyi",
""
],
[
"Abbasi",
"Hamdah Shafqat",
""
],
[
"Goodman",
"Allen",
""
],
[
"Caicedo",
"Juan C.",
""
],
[
"Carpenter",
"Anne E.",
""
],
[
"Cimini",
"Beth A.",
""
],
[
"Singh",
"Shantanu",
""
],
[
"Way",
"Gregory P.",
""
]
] | Advances in high-throughput microscopy have enabled the rapid acquisition of large numbers of high-content microscopy images. Whether by deep learning or classical algorithms, image analysis pipelines then produce single-cell features. To process these single-cells for downstream applications, we present Pycytominer, a user-friendly, open-source python package that implements the bioinformatics steps, known as image-based profiling. We demonstrate Pycytominer's usefulness in a machine learning project to predict nuisance compounds that cause undesirable cell injuries. |
1305.1882 | Michael Courtney | Joshua M. Courtney, Amy C. Courtney, Michael W. Courtney | Do Rainbow Trout and Their Hybrids Outcompete Cutthroat Trout in a
Lentic Ecosystem? | null | Fisheries and Aquaculture Journal, Vol. 2013: Pages 7, Article ID:
FAJ-78 | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Much has been written about introduced rainbow trout interbreeding and
outcompeting native cutthroat trout. However, specific mechanisms have not been
thoroughly explored, and most data is limited to lotic ecosystems. Samples of
Snake River cutthroat trout (Oncorhynchus clarkii bouvieri), the
rainbow-cutthroat hybrid, the cutbow trout (Oncorhynchus mykiss x clarkii),
and rainbow trout (Oncorhynchus mykiss), were obtained from a lentic ecosystem
(Eleven Mile Reservoir, Colorado) by creel surveys conducted from May to
October, 2012. The total length and weight of each fish was measured and the
relative condition factor of each fish was computed using expected weight from
weight-length relationships from the Colorado Division of Parks and Wildlife
(CDPW). Data from the CDPW collected from 2003-2010 in the same lentic
ecosystem were used to compute relative condition factors for additional
comparison, as was independent creel survey data from 2011. Cutthroat trout
were plump: the mean relative condition factor of the cutthroat trout was
112.0% (+/- 1.0%). Cutbow hybrid trout were close to the expected weights with
a mean relative condition factor of 99.8% (+/- 0.6%). Rainbow trout were
thinner with a mean relative condition factor of 96.4% (+/- 1.4%). Comparing
mean relative condition factors of CDPW data from earlier years and plotting
the 2012 data relative to percentile curves also shows the same trend of
cutthroat trout being plumper than expected and rainbow trout being thinner
than the cutthroat trout, with the hybrid cutbow trout in between. This data
supports the hypothesis that rainbow trout do not outcompete cutthroat trout in
lentic ecosystems. Comparison with data from three other Colorado reservoirs
also shows that cutthroat trout tend to be more plump than rainbow trout and
their hybrids in sympatric lentic ecosystems.
| [
{
"created": "Wed, 8 May 2013 16:53:17 GMT",
"version": "v1"
}
] | 2013-05-09 | [
[
"Courtney",
"Joshua M.",
""
],
[
"Courtney",
"Amy C.",
""
],
[
"Courtney",
"Michael W.",
""
]
]
] | Much has been written about introduced rainbow trout interbreeding and outcompeting native cutthroat trout. However, specific mechanisms have not been thoroughly explored, and most data is limited to lotic ecosystems. Samples of Snake River cutthroat trout (Oncorhynchus clarkii bouvieri), the rainbow-cutthroat hybrid, the cutbow trout (Oncorhynchus mykiss x clarkii), and rainbow trout (Oncorhynchus mykiss), were obtained from a lentic ecosystem (Eleven Mile Reservoir, Colorado) by creel surveys conducted from May to October, 2012. The total length and weight of each fish was measured and the relative condition factor of each fish was computed using expected weight from weight-length relationships from the Colorado Division of Parks and Wildlife (CDPW). Data from the CDPW collected from 2003-2010 in the same lentic ecosystem were used to compute relative condition factors for additional comparison, as was independent creel survey data from 2011. Cutthroat trout were plump: the mean relative condition factor of the cutthroat trout was 112.0% (+/- 1.0%). Cutbow hybrid trout were close to the expected weights with a mean relative condition factor of 99.8% (+/- 0.6%). Rainbow trout were thinner with a mean relative condition factor of 96.4% (+/- 1.4%). Comparing mean relative condition factors of CDPW data from earlier years and plotting the 2012 data relative to percentile curves also shows the same trend of cutthroat trout being plumper than expected and rainbow trout being thinner than the cutthroat trout, with the hybrid cutbow trout in between. This data supports the hypothesis that rainbow trout do not outcompete cutthroat trout in lentic ecosystems. Comparison with data from three other Colorado reservoirs also shows that cutthroat trout tend to be more plump than rainbow trout and their hybrids in sympatric lentic ecosystems. |
1906.03931 | Martin Helmer | Michael F. Adamer and Martin Helmer | Families of Toric Chemical Reaction Networks | null | null | null | null | q-bio.MN math.AG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study families of chemical reaction networks whose positive steady states
are toric, and therefore can be parameterized by monomials. Families are
constructed algorithmically from a core network; we show that if a family
member is multistationary, then so are all subsequent networks in the family.
Further, we address the questions of model selection and experimental design
for families by investigating the algebraic dependencies of the chemical
concentrations using matroids. Given a family with toric steady states and a
constant number of conservation relations, we construct a matroid that encodes
important information regarding the steady state behaviour of the entire
family. Among other things, this gives necessary conditions for the
distinguishability of families of reaction networks with respect to a data set
of measured chemical concentrations. We illustrate our results using multi-site
phosphorylation networks.
| [
{
"created": "Mon, 10 Jun 2019 12:22:11 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Dec 2019 04:06:22 GMT",
"version": "v2"
}
] | 2019-12-17 | [
[
"Adamer",
"Michael F.",
""
],
[
"Helmer",
"Martin",
""
]
] | We study families of chemical reaction networks whose positive steady states are toric, and therefore can be parameterized by monomials. Families are constructed algorithmically from a core network; we show that if a family member is multistationary, then so are all subsequent networks in the family. Further, we address the questions of model selection and experimental design for families by investigating the algebraic dependencies of the chemical concentrations using matroids. Given a family with toric steady states and a constant number of conservation relations, we construct a matroid that encodes important information regarding the steady state behaviour of the entire family. Among other things, this gives necessary conditions for the distinguishability of families of reaction networks with respect to a data set of measured chemical concentrations. We illustrate our results using multi-site phosphorylation networks. |
2201.12041 | Peter G\"untert | Piotr Klukowski, Roland Riek, Peter G\"untert | Rapid protein assignments and structures from raw NMR spectra with the
deep learning technique ARTINA | null | null | 10.1038/s41467-022-33879-5 | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Nuclear Magnetic Resonance (NMR) spectroscopy is one of the major techniques
in structural biology with over 11,800 protein structures deposited in the
Protein Data Bank. NMR can elucidate structures and dynamics of small and
medium size proteins in solution, living cells, and solids, but has been
limited by the tedious data analysis process. It typically requires weeks or
months of manual work of a trained expert to turn NMR measurements into a
protein structure. Automation of this process is an open problem, formulated in
the field over 30 years ago. Here, we present a solution to this challenge that
enables the completely automated analysis of protein NMR data within hours
after completing the measurements. Using only NMR spectra and the protein
sequence as input, our machine learning-based method, ARTINA, delivers signal
positions, resonance assignments, and structures strictly without any human
intervention. Tested on a 100-protein benchmark comprising 1329
multidimensional NMR spectra, ARTINA demonstrated its ability to solve
structures with 1.44 {\AA} median RMSD to the PDB reference and to identify
91.36% correct NMR resonance assignments. ARTINA can be used by non-experts,
reducing the effort for a protein assignment or structure determination by NMR
essentially to the preparation of the sample and the spectra measurements.
| [
{
"created": "Fri, 28 Jan 2022 11:08:50 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Feb 2022 15:25:50 GMT",
"version": "v2"
},
{
"created": "Mon, 21 Mar 2022 15:24:01 GMT",
"version": "v3"
},
{
"created": "Fri, 22 Jul 2022 14:17:47 GMT",
"version": "v4"
}
] | 2023-01-11 | [
[
"Klukowski",
"Piotr",
""
],
[
"Riek",
"Roland",
""
],
[
"Güntert",
"Peter",
""
]
] | Nuclear Magnetic Resonance (NMR) spectroscopy is one of the major techniques in structural biology with over 11,800 protein structures deposited in the Protein Data Bank. NMR can elucidate structures and dynamics of small and medium size proteins in solution, living cells, and solids, but has been limited by the tedious data analysis process. It typically requires weeks or months of manual work of a trained expert to turn NMR measurements into a protein structure. Automation of this process is an open problem, formulated in the field over 30 years ago. Here, we present a solution to this challenge that enables the completely automated analysis of protein NMR data within hours after completing the measurements. Using only NMR spectra and the protein sequence as input, our machine learning-based method, ARTINA, delivers signal positions, resonance assignments, and structures strictly without any human intervention. Tested on a 100-protein benchmark comprising 1329 multidimensional NMR spectra, ARTINA demonstrated its ability to solve structures with 1.44 {\AA} median RMSD to the PDB reference and to identify 91.36% correct NMR resonance assignments. ARTINA can be used by non-experts, reducing the effort for a protein assignment or structure determination by NMR essentially to the preparation of the sample and the spectra measurements. |
1507.08620 | Mareike Fischer | Kristina Wicke and Mareike Fischer | Comparing the rankings obtained from two biodiversity indices: the Fair
Proportion Index and the Shapley Value | null | null | null | null | q-bio.PE math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Shapley Value and the Fair Proportion Index of phylogenetic trees have
been frequently discussed as prioritization tools in conservation biology. Both
indices rank species according to their contribution to total phylogenetic
diversity, allowing for a simple conservation criterion. While both indices
have their specific advantages and drawbacks, it has recently been shown that
both values are closely related. However, as different authors use different
definitions of the Shapley Value, the specific degree of relatedness depends on
the specific version of the Shapley Value - it ranges from a high correlation
index to equality of the indices. In this note, we first give an overview of
the different indices. Then we turn our attention to the mere ranking order
provided by either of the indices. We compare the rankings obtained from
different versions of the Shapley Value for a phylogenetic tree of European
amphibians and illustrate their differences. We then undertake further analyses
on simulated data and show that even though the chance of two rankings being
exactly identical (when obtained from different versions of the Shapley Value)
decreases with an increasing number of taxa, the distance between the two
rankings converges to zero, i.e., the rankings are becoming more and more
alike. Moreover, we introduce our freely available software package
FairShapley, which was implemented in Perl and with which all calculations have
been performed.
| [
{
"created": "Thu, 30 Jul 2015 18:35:43 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Mar 2017 17:27:35 GMT",
"version": "v2"
},
{
"created": "Fri, 16 Jun 2017 13:54:45 GMT",
"version": "v3"
},
{
"created": "Thu, 27 Jul 2017 11:56:51 GMT",
"version": "v4"
}
] | 2017-07-28 | [
[
"Wicke",
"Kristina",
""
],
[
"Fischer",
"Mareike",
""
]
] | The Shapley Value and the Fair Proportion Index of phylogenetic trees have been frequently discussed as prioritization tools in conservation biology. Both indices rank species according to their contribution to total phylogenetic diversity, allowing for a simple conservation criterion. While both indices have their specific advantages and drawbacks, it has recently been shown that both values are closely related. However, as different authors use different definitions of the Shapley Value, the specific degree of relatedness depends on the specific version of the Shapley Value - it ranges from a high correlation index to equality of the indices. In this note, we first give an overview of the different indices. Then we turn our attention to the mere ranking order provided by either of the indices. We compare the rankings obtained from different versions of the Shapley Value for a phylogenetic tree of European amphibians and illustrate their differences. We then undertake further analyses on simulated data and show that even though the chance of two rankings being exactly identical (when obtained from different versions of the Shapley Value) decreases with an increasing number of taxa, the distance between the two rankings converges to zero, i.e., the rankings are becoming more and more alike. Moreover, we introduce our freely available software package FairShapley, which was implemented in Perl and with which all calculations have been performed. |
2009.04409 | Maxime De Bois | Maxime De Bois, Moun\^im A. El Yacoubi, Mehdi Ammi | Study of Short-Term Personalized Glucose Predictive Models on Type-1
Diabetic Children | null | 2019 International Joint Conference on Neural Networks (IJCNN) | 10.1109/IJCNN.2019.8852399 | null | q-bio.QM eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Research in diabetes, especially when it comes to building data-driven models
to forecast future glucose values, is hindered by the sensitive nature of the
data. Because researchers do not share the same data between studies, progress
is hard to assess. This paper aims at comparing the most promising algorithms
in the field, namely Feedforward Neural Networks (FFNN), Long Short-Term Memory
(LSTM) Recurrent Neural Networks, Extreme Learning Machines (ELM), Support
Vector Regression (SVR) and Gaussian Processes (GP). They are personalized and
trained on a population of 10 virtual children from the Type 1 Diabetes
Metabolic Simulator software to predict future glucose values at a prediction
horizon of 30 minutes. The performances of the models are evaluated using the
Root Mean Squared Error (RMSE) and the Continuous Glucose-Error Grid Analysis
(CG-EGA). While most of the models end up having low RMSE, the GP model with a
Dot-Product kernel (GP-DP), a novel usage in the context of glucose prediction,
has the lowest. Despite having good RMSE values, we show that the models do not
necessarily exhibit a good clinical acceptability, measured by the CG-EGA. Only
the LSTM, SVR and GP-DP models have overall acceptable results, each of them
performing best in one of the glycemia regions.
| [
{
"created": "Tue, 8 Sep 2020 12:58:12 GMT",
"version": "v1"
}
] | 2020-09-10 | [
[
"De Bois",
"Maxime",
""
],
[
"Yacoubi",
"Mounîm A. El",
""
],
[
"Ammi",
"Mehdi",
""
]
] | Research in diabetes, especially when it comes to building data-driven models to forecast future glucose values, is hindered by the sensitive nature of the data. Because researchers do not share the same data between studies, progress is hard to assess. This paper aims at comparing the most promising algorithms in the field, namely Feedforward Neural Networks (FFNN), Long Short-Term Memory (LSTM) Recurrent Neural Networks, Extreme Learning Machines (ELM), Support Vector Regression (SVR) and Gaussian Processes (GP). They are personalized and trained on a population of 10 virtual children from the Type 1 Diabetes Metabolic Simulator software to predict future glucose values at a prediction horizon of 30 minutes. The performances of the models are evaluated using the Root Mean Squared Error (RMSE) and the Continuous Glucose-Error Grid Analysis (CG-EGA). While most of the models end up having low RMSE, the GP model with a Dot-Product kernel (GP-DP), a novel usage in the context of glucose prediction, has the lowest. Despite having good RMSE values, we show that the models do not necessarily exhibit a good clinical acceptability, measured by the CG-EGA. Only the LSTM, SVR and GP-DP models have overall acceptable results, each of them performing best in one of the glycemia regions. |
2403.01282 | Mareike Fischer | Mareike Fischer | On the uniqueness of the maximum parsimony tree for data with few
substitutions within the NNI neighborhood | 17 pages, 4 figures | null | null | null | q-bio.PE math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Estimating species relationship trees, so-called phylogenetic trees, from
aligned sequence data (such as DNA, RNA, or proteins) is one of the main aims
of evolutionary biology. However, tree reconstruction criteria like maximum
parsimony do not necessarily lead to unique trees and in some cases even fail
to recognize the \enquote{correct} tree (i.e., the tree on which the data was
generated).
On the other hand, a recent study has shown that for an alignment containing
precisely those characters (sites) which require up to two substitutions on a
given tree, this tree will be the unique maximum parsimony tree.
It is the aim of the present manuscript to generalize this recent result in
the following sense: We show that for a tree with $n$ leaves, as long as $k<
\frac{n}{8}+\frac{6}{5}-\frac{1}{10} \sqrt{\frac{5}{16} n^2+4}$ (or,
equivalently, $n>8 k-\frac{46}{5}+\frac{2}{5} \sqrt{40 k-31} $), the maximum
parsimony tree for the alignment containing all characters which require (up to
or precisely) $k$ substitutions on a given tree $T$ will be unique in the NNI
neighborhood of $T$ and it will coincide with $T$, too. In other words, within
the NNI neighborhood of $T$, $T$ is the unique most parsimonious tree for said
alignment. This partially answers a recently published conjecture
affirmatively.
| [
{
"created": "Sat, 2 Mar 2024 18:30:18 GMT",
"version": "v1"
},
{
"created": "Tue, 19 Mar 2024 13:55:04 GMT",
"version": "v2"
},
{
"created": "Wed, 20 Mar 2024 13:01:13 GMT",
"version": "v3"
}
] | 2024-03-21 | [
[
"Fischer",
"Mareike",
""
]
] | Estimating species relationship trees, so-called phylogenetic trees, from aligned sequence data (such as DNA, RNA, or proteins) is one of the main aims of evolutionary biology. However, tree reconstruction criteria like maximum parsimony do not necessarily lead to unique trees and in some cases even fail to recognize the \enquote{correct} tree (i.e., the tree on which the data was generated). On the other hand, a recent study has shown that for an alignment containing precisely those characters (sites) which require up to two substitutions on a given tree, this tree will be the unique maximum parsimony tree. It is the aim of the present manuscript to generalize this recent result in the following sense: We show that for a tree with $n$ leaves, as long as $k< \frac{n}{8}+\frac{6}{5}-\frac{1}{10} \sqrt{\frac{5}{16} n^2+4}$ (or, equivalently, $n>8 k-\frac{46}{5}+\frac{2}{5} \sqrt{40 k-31} $), the maximum parsimony tree for the alignment containing all characters which require (up to or precisely) $k$ substitutions on a given tree $T$ will be unique in the NNI neighborhood of $T$ and it will coincide with $T$, too. In other words, within the NNI neighborhood of $T$, $T$ is the unique most parsimonious tree for said alignment. This partially answers a recently published conjecture affirmatively. |
1404.4405 | Tahir Yusufaly | Tahir I. Yusufaly, Yun Li, Gautam Singh and Wilma K. Olson | Arginine-Phosphate Salt Bridges Between Histones and DNA: Intermolecular
Actuators that Control Nucleosome Architecture | Revised version - Accepted for publication in J. Chem. Phys | null | 10.1063/1.4897978 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structural bioinformatics and van der Waals density functional theory are
combined to investigate the mechanochemical impact of a major class of
histone-DNA interactions, namely the formation of salt bridges between arginine
residues in histones and phosphate groups on the DNA backbone. Principal
component analysis reveals that the configurational fluctuations of the
sugar-phosphate backbone display sequence-specific variability, and clustering
of nucleosomal crystal structures identifies two major salt bridge
configurations: a monodentate form in which the arginine end-group guanidinium
only forms one hydrogen bond with the phosphate, and a bidentate form in which
it forms two. Density functional theory calculations highlight that the
combination of sequence, denticity and salt bridge positioning enable the
histones to tunably activate specific backbone deformations via mechanochemical
stress. The results suggest that selection for specific placements of van der
Waals contacts, with high-precision control of the spatial distribution of
intermolecular forces, may serve as an underlying evolutionary design principle
for the structure and function of nucleosomes, a conjecture that is
corroborated by previous experimental studies.
| [
{
"created": "Thu, 17 Apr 2014 00:51:47 GMT",
"version": "v1"
},
{
"created": "Wed, 1 Oct 2014 00:41:08 GMT",
"version": "v2"
}
] | 2015-06-19 | [
[
"Yusufaly",
"Tahir I.",
""
],
[
"Li",
"Yun",
""
],
[
"Singh",
"Gautam",
""
],
[
"Olson",
"Wilma K.",
""
]
] | Structural bioinformatics and van der Waals density functional theory are combined to investigate the mechanochemical impact of a major class of histone-DNA interactions, namely the formation of salt bridges between arginine residues in histones and phosphate groups on the DNA backbone. Principal component analysis reveals that the configurational fluctuations of the sugar-phosphate backbone display sequence-specific variability, and clustering of nucleosomal crystal structures identifies two major salt bridge configurations: a monodentate form in which the arginine end-group guanidinium only forms one hydrogen bond with the phosphate, and a bidentate form in which it forms two. Density functional theory calculations highlight that the combination of sequence, denticity and salt bridge positioning enable the histones to tunably activate specific backbone deformations via mechanochemical stress. The results suggest that selection for specific placements of van der Waals contacts, with high-precision control of the spatial distribution of intermolecular forces, may serve as an underlying evolutionary design principle for the structure and function of nucleosomes, a conjecture that is corroborated by previous experimental studies. |
2310.19744 | Angad Yuvraj Singh | Angad Yuvraj Singh and Sanjay Jain | Multistable protocells can aid the evolution of prebiotic autocatalytic
sets | 28 pages, 12 figures, includes Supplementary Material | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | We present a simple mathematical model that captures the evolutionary
capabilities of a prebiotic compartment or protocell. In the model the
protocell contains an autocatalytic set whose chemical dynamics is coupled to
the growth-division dynamics of the compartment. Bistability in the dynamics of
the autocatalytic set results in a protocell that can exist with two distinct
growth rates. Stochasticity in chemical reactions plays the role of mutations
and causes transitions from one growth regime to another. We show that the
system exhibits `natural selection', where a `mutant' protocell in which the
autocatalytic set is active arises by chance in a population of inactive
protocells, and then takes over the population because of its higher growth
rate or `fitness'. The work integrates three levels of dynamics: intracellular
chemical, single protocell, and population (or ecosystem) of protocells.
| [
{
"created": "Mon, 30 Oct 2023 17:08:52 GMT",
"version": "v1"
}
] | 2023-10-31 | [
[
"Singh",
"Angad Yuvraj",
""
],
[
"Jain",
"Sanjay",
""
]
] | We present a simple mathematical model that captures the evolutionary capabilities of a prebiotic compartment or protocell. In the model the protocell contains an autocatalytic set whose chemical dynamics is coupled to the growth-division dynamics of the compartment. Bistability in the dynamics of the autocatalytic set results in a protocell that can exist with two distinct growth rates. Stochasticity in chemical reactions plays the role of mutations and causes transitions from one growth regime to another. We show that the system exhibits `natural selection', where a `mutant' protocell in which the autocatalytic set is active arises by chance in a population of inactive protocells, and then takes over the population because of its higher growth rate or `fitness'. The work integrates three levels of dynamics: intracellular chemical, single protocell, and population (or ecosystem) of protocells. |
1306.0558 | Andres Moreno-Estrada MD PhD | Andres Moreno-Estrada, Simon Gravel, Fouad Zakharia, Jacob L.
McCauley, Jake K. Byrnes, Christopher R. Gignoux, Patricia A. Ortiz-Tello,
Ricardo J. Martinez, Dale J. Hedges, Richard W. Morris, Celeste Eng, Karla
Sandoval, Suehelay Acevedo-Acevedo, Juan Carlos Martinez-Cruzado, Paul J.
Norman, Zulay Layrisse, Peter Parham, Esteban Gonzalez Burchard, Michael L.
Cuccaro, Eden R. Martin and Carlos D. Bustamante | Reconstructing the Population Genetic History of the Caribbean | 26 pages, 6 figures, and supporting information | null | null | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Caribbean basin is home to some of the most complex interactions in
recent history among previously diverged human populations. Here, by making use
of genome-wide SNP array data, we characterize ancestral components of
Caribbean populations on a sub-continental level and unveil fine-scale patterns
of population structure distinguishing insular from mainland Caribbean
populations as well as from other Hispanic/Latino groups. We provide genetic
evidence for an inland South American origin of the Native American component
in island populations and for extensive pre-Columbian gene flow across the
Caribbean basin. The Caribbean-derived European component shows significant
differentiation from parental Iberian populations, presumably as a result of
founder effects during the colonization of the New World. Based on demographic
models, we reconstruct the complex population history of the Caribbean since
the onset of continental admixture. We find that insular populations are best
modeled as mixtures absorbing two pulses of African migrants, coinciding with
early and maximum activity stages of the transatlantic slave trade. These two
pulses appear to have originated in different regions within West Africa,
imprinting two distinguishable signatures in present day Afro-Caribbean genomes
and shedding light on the genetic impact of the dynamics occurring during the
slave trade in the Caribbean.
| [
{
"created": "Mon, 3 Jun 2013 19:43:39 GMT",
"version": "v1"
}
] | 2013-06-05 | [
[
"Moreno-Estrada",
"Andres",
""
],
[
"Gravel",
"Simon",
""
],
[
"Zakharia",
"Fouad",
""
],
[
"McCauley",
"Jacob L.",
""
],
[
"Byrnes",
"Jake K.",
""
],
[
"Gignoux",
"Christopher R.",
""
],
[
"Ortiz-Tello",
"Patricia A.",
""
],
[
"Martinez",
"Ricardo J.",
""
],
[
"Hedges",
"Dale J.",
""
],
[
"Morris",
"Richard W.",
""
],
[
"Eng",
"Celeste",
""
],
[
"Sandoval",
"Karla",
""
],
[
"Acevedo-Acevedo",
"Suehelay",
""
],
[
"Martinez-Cruzado",
"Juan Carlos",
""
],
[
"Norman",
"Paul J.",
""
],
[
"Layrisse",
"Zulay",
""
],
[
"Parham",
"Peter",
""
],
[
"Burchard",
"Esteban Gonzalez",
""
],
[
"Cuccaro",
"Michael L.",
""
],
[
"Martin",
"Eden R.",
""
],
[
"Bustamante",
"Carlos D.",
""
]
] | The Caribbean basin is home to some of the most complex interactions in recent history among previously diverged human populations. Here, by making use of genome-wide SNP array data, we characterize ancestral components of Caribbean populations on a sub-continental level and unveil fine-scale patterns of population structure distinguishing insular from mainland Caribbean populations as well as from other Hispanic/Latino groups. We provide genetic evidence for an inland South American origin of the Native American component in island populations and for extensive pre-Columbian gene flow across the Caribbean basin. The Caribbean-derived European component shows significant differentiation from parental Iberian populations, presumably as a result of founder effects during the colonization of the New World. Based on demographic models, we reconstruct the complex population history of the Caribbean since the onset of continental admixture. We find that insular populations are best modeled as mixtures absorbing two pulses of African migrants, coinciding with early and maximum activity stages of the transatlantic slave trade. These two pulses appear to have originated in different regions within West Africa, imprinting two distinguishable signatures in present day Afro-Caribbean genomes and shedding light on the genetic impact of the dynamics occurring during the slave trade in the Caribbean. |
1705.09614 | Gennadi Glinsky | Gennadi Glinsky | Role of distal enhancers in shaping 3D-folding patterns and defining
human-specific features of interphase chromatin architecture in embryonic
stem cells | arXiv admin note: substantial text overlap with arXiv:1507.05368 | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Molecular and genetic definitions of human-specific changes to genomic
regulatory networks (GRNs) contributing to development of unique to human
phenotypes remain a highly significant challenge. Genome-wide proximity
placement analysis of diverse families of human-specific genomic regulatory
loci (HSGRL) identified topologically-associating domains (TADs) that are
significantly enriched for HSGRL and designated rapidly-evolving in humans TADs
(Genome Biol Evol. 2016 8; 2774-88). Here, the analysis of HSGRL, hESC-enriched
enhancers, super-enhancers (SEs), and specific sub-TAD structures termed
super-enhancer domains (SEDs) has been performed. Markedly distinct features of
the principal regulatory structures of interphase chromatin evolved in the hESC
genome compared to mouse: the SED quantity is 3-fold higher and the median SED
size is significantly larger. Concomitantly, the overall TAD quantity is
increased by 42% while the median TAD size is significantly decreased (p =
9.11E-37) in the hESC genome. Present analyses illustrate a putative global
role for HSGRL in shaping the human-specific features of the interphase
chromatin organization and functions, which are facilitated by accelerated
creation of new enhancers associated with targeted placement of HSGRL at
defined genomic coordinates. A trend toward the convergence of TAD and SED
architectures of interphase chromatin in the hESC genome may reflect changes of
3D-folding patterns of linear chromatin fibers designed to enhance both
regulatory complexity and functional precision of GRNs by creating
predominantly a single gene per regulatory domain structures.
| [
{
"created": "Thu, 25 May 2017 03:45:36 GMT",
"version": "v1"
}
] | 2017-05-29 | [
[
"Glinsky",
"Gennadi",
""
]
] | Molecular and genetic definitions of human-specific changes to genomic regulatory networks (GRNs) contributing to development of unique to human phenotypes remain a highly significant challenge. Genome-wide proximity placement analysis of diverse families of human-specific genomic regulatory loci (HSGRL) identified topologically-associating domains (TADs) that are significantly enriched for HSGRL and designated rapidly-evolving in humans TADs (Genome Biol Evol. 2016 8; 2774-88). Here, the analysis of HSGRL, hESC-enriched enhancers, super-enhancers (SEs), and specific sub-TAD structures termed super-enhancer domains (SEDs) has been performed. Markedly distinct features of the principal regulatory structures of interphase chromatin evolved in the hESC genome compared to mouse: the SED quantity is 3-fold higher and the median SED size is significantly larger. Concomitantly, the overall TAD quantity is increased by 42% while the median TAD size is significantly decreased (p = 9.11E-37) in the hESC genome. Present analyses illustrate a putative global role for HSGRL in shaping the human-specific features of the interphase chromatin organization and functions, which are facilitated by accelerated creation of new enhancers associated with targeted placement of HSGRL at defined genomic coordinates. A trend toward the convergence of TAD and SED architectures of interphase chromatin in the hESC genome may reflect changes of 3D-folding patterns of linear chromatin fibers designed to enhance both regulatory complexity and functional precision of GRNs by creating predominantly a single gene per regulatory domain structures. |
2202.09482 | Thi Kim Thoa Thieu | Thi Kim Thoa Thieu and Roderick Melnik | Effects of noise on leaky integrate-and-fire neuron models for
neuromorphic computing applications | 16 pages, 11 figures. arXiv admin note: text overlap with
arXiv:2112.12932 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial neural networks (ANNs) have been extensively used for the
description of problems arising from biological systems and for constructing
neuromorphic computing models. The third generation of ANNs, namely, spiking
neural networks (SNNs), inspired by biological neurons enable a more realistic
mimicry of the human brain. A large class of the problems from these domains is
characterized by the necessity to deal with the combination of neurons, spikes
and synapses via integrate-and-fire neuron models. Motivated by important
applications of the integrate-and-fire of neurons in neuromorphic computing for
bio-medical studies, the main focus of the present work is on the analysis of
the effects of additive and multiplicative types of random input currents
together with a random refractory period on a leaky integrate-and-fire (LIF)
synaptic conductance neuron model. Our analysis is carried out via Langevin
stochastic dynamics in a numerical setting describing a cell membrane
potential. We provide the details of the model, as well as representative
numerical examples, and discuss the effects of noise on the time evolution of
the membrane potential as well as the spiking activities of neurons in the LIF
synaptic conductance model scrutinized here. Furthermore, our numerical results
demonstrate that the presence of a random refractory period in the LIF synaptic
conductance system may substantially influence an increased irregularity of
spike trains of the output neuron.
| [
{
"created": "Sat, 19 Feb 2022 00:39:37 GMT",
"version": "v1"
},
{
"created": "Wed, 18 May 2022 19:06:47 GMT",
"version": "v2"
}
] | 2022-06-18 | [
[
"Thieu",
"Thi Kim Thoa",
""
],
[
"Melnik",
"Roderick",
""
]
] | Artificial neural networks (ANNs) have been extensively used for the description of problems arising from biological systems and for constructing neuromorphic computing models. The third generation of ANNs, namely, spiking neural networks (SNNs), inspired by biological neurons enable a more realistic mimicry of the human brain. A large class of the problems from these domains is characterized by the necessity to deal with the combination of neurons, spikes and synapses via integrate-and-fire neuron models. Motivated by important applications of the integrate-and-fire of neurons in neuromorphic computing for bio-medical studies, the main focus of the present work is on the analysis of the effects of additive and multiplicative types of random input currents together with a random refractory period on a leaky integrate-and-fire (LIF) synaptic conductance neuron model. Our analysis is carried out via Langevin stochastic dynamics in a numerical setting describing a cell membrane potential. We provide the details of the model, as well as representative numerical examples, and discuss the effects of noise on the time evolution of the membrane potential as well as the spiking activities of neurons in the LIF synaptic conductance model scrutinized here. Furthermore, our numerical results demonstrate that the presence of a random refractory period in the LIF synaptic conductance system may substantially influence an increased irregularity of spike trains of the output neuron. |
2405.15158 | Mingqing Wang | Mingqing Wang, Zhiwei Nie, Yonghong He, Zhixiang Ren | ProtFAD: Introducing function-aware domains as implicit modality towards
protein function perception | 16 pages, 6 figures, 5 tables | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein function prediction is currently achieved by encoding its sequence or
structure, where the sequence-to-function transcendence and high-quality
structural data scarcity lead to obvious performance bottlenecks. Protein
domains are "building blocks" of proteins that are functionally independent,
and their combinations determine the diverse biological functions. However,
most existing studies have yet to thoroughly explore the intricate functional
information contained in the protein domains. To fill this gap, we propose a
synergistic integration approach for a function-aware domain representation,
and a domain-joint contrastive learning strategy to distinguish different
protein functions while aligning the modalities. Specifically, we associate
domains with the GO terms as function priors to pre-train domain embeddings.
Furthermore, we partition proteins into multiple sub-views based on continuous
joint domains for contrastive training under the supervision of a novel triplet
InfoNCE loss. Our approach significantly and comprehensively outperforms the
state-of-the-art methods on various benchmarks, and clearly differentiates
proteins carrying distinct functions compared to the competitor.
| [
{
"created": "Fri, 24 May 2024 02:26:45 GMT",
"version": "v1"
}
] | 2024-05-27 | [
[
"Wang",
"Mingqing",
""
],
[
"Nie",
"Zhiwei",
""
],
[
"He",
"Yonghong",
""
],
[
"Ren",
"Zhixiang",
""
]
] | Protein function prediction is currently achieved by encoding its sequence or structure, where the sequence-to-function transcendence and high-quality structural data scarcity lead to obvious performance bottlenecks. Protein domains are "building blocks" of proteins that are functionally independent, and their combinations determine the diverse biological functions. However, most existing studies have yet to thoroughly explore the intricate functional information contained in the protein domains. To fill this gap, we propose a synergistic integration approach for a function-aware domain representation, and a domain-joint contrastive learning strategy to distinguish different protein functions while aligning the modalities. Specifically, we associate domains with the GO terms as function priors to pre-train domain embeddings. Furthermore, we partition proteins into multiple sub-views based on continuous joint domains for contrastive training under the supervision of a novel triplet InfoNCE loss. Our approach significantly and comprehensively outperforms the state-of-the-art methods on various benchmarks, and clearly differentiates proteins carrying distinct functions compared to the competitor. |
2309.07088 | Varun Kotian | Varun Kotian, Daan M. Pool and Riender Happee | Modelling individual motion sickness accumulation in vehicles and
driving simulators | 8 pages, 9 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Users of automated vehicles will move away from being drivers to passengers,
preferably engaged in other activities such as reading or using laptops and
smartphones, which will strongly increase susceptibility to motion sickness.
Similarly, in driving simulators, the presented visual motion with scaled or
even without any physical motion causes an illusion of passive motion, creating
a conflict between perceived and expected motion, and eliciting motion
sickness. Given the very large differences in sickness susceptibility between
individuals, we need to consider sickness at an individual level. This paper
combines a group-averaged sensory conflict model with an individualized
accumulation model to capture individual differences in motion sickness
susceptibility across various vision conditions. The model framework can be
used to develop personalized models for users of automated vehicles and improve
the design of new motion cueing algorithms for simulators. The feasibility and
accuracy of this model framework are verified using two existing datasets with
sickening. Both datasets involve passive motion, representative of being driven
by an automated vehicle. The model is able to fit an individual's motion
sickness responses using only 2 parameters (gain K1 and time constant T1), as
opposed to the 5 parameters in the original model. This ensures unique
parameters for each individual. Better fits of an individual's motion sickness
levels, on average by a factor of 1.7, are achieved as compared to using only
the group-averaged model. Thus, we find that models predicting group-averaged
sickness incidence cannot be used to predict sickness at an individual level.
On the other hand, the proposed combined model approach predicts individual
motion sickness levels and thus can be used to control sickness.
| [
{
"created": "Wed, 13 Sep 2023 17:03:56 GMT",
"version": "v1"
}
] | 2023-09-14 | [
[
"Kotian",
"Varun",
""
],
[
"Pool",
"Daan M.",
""
],
[
"Happee",
"Riender",
""
]
] | Users of automated vehicles will move away from being drivers to passengers, preferably engaged in other activities such as reading or using laptops and smartphones, which will strongly increase susceptibility to motion sickness. Similarly, in driving simulators, the presented visual motion with scaled or even without any physical motion causes an illusion of passive motion, creating a conflict between perceived and expected motion, and eliciting motion sickness. Given the very large differences in sickness susceptibility between individuals, we need to consider sickness at an individual level. This paper combines a group-averaged sensory conflict model with an individualized accumulation model to capture individual differences in motion sickness susceptibility across various vision conditions. The model framework can be used to develop personalized models for users of automated vehicles and improve the design of new motion cueing algorithms for simulators. The feasibility and accuracy of this model framework are verified using two existing datasets with sickening. Both datasets involve passive motion, representative of being driven by an automated vehicle. The model is able to fit an individual's motion sickness responses using only 2 parameters (gain K1 and time constant T1), as opposed to the 5 parameters in the original model. This ensures unique parameters for each individual. Better fits of an individual's motion sickness levels, on average by a factor of 1.7, are achieved as compared to using only the group-averaged model. Thus, we find that models predicting group-averaged sickness incidence cannot be used to predict sickness at an individual level. On the other hand, the proposed combined model approach predicts individual motion sickness levels and thus can be used to control sickness.
1902.05919 | Naoto Hori | Naoto Hori, Natalia A. Denesyuk, D. Thirumalai | Ion Condensation onto Ribozyme is Site-Specific and Fold-Dependent | 22 pages including 9 figures, 5 SI figures, and 1 SI table | null | 10.1016/j.bpj.2019.04.037 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The highly charged RNA molecules, with each phosphate carrying a single
negative charge, cannot fold into well-defined architectures with tertiary
interactions, in the absence of ions. For ribozymes, divalent cations are known
to be more efficient than monovalent ions in driving them to a compact state
although Mg$^{2+}$ ions are needed for catalytic activities. Therefore, how
ions interact with RNA is relevant in understanding RNA folding. It is often
thought that most of the ions are territorially and non-specifically bound to
the RNA, as predicted by the counterion condensation (CIC) theory. Here, we
show using simulations of ${\it Azoarcus}$ ribozyme, based on an accurate
coarse-grained Three Site Interaction (TIS) model, with explicit divalent and
monovalent cations, that ion condensation is highly specific and depends on the
nucleotide position. The regions with high coordination between the phosphate
groups and the divalent cations are discernible even at very low Mg$^{2+}$
concentrations when the ribozyme does not form tertiary interactions.
Surprisingly, these regions also contain the secondary structural elements that
nucleate subsequently in the self-assembly of RNA, implying that ion
condensation is determined by the architecture of the folded state. These
results are in sharp contrast to interactions of ions (monovalent and divalent)
with rigid charged rods in which ion condensation is uniform and position
independent. The differences are explained in terms of the dramatic
non-monotonic shape fluctuations in the ribozyme as it folds with increasing
Mg$^{2+}$ or Ca$^{2+}$ concentration.
| [
{
"created": "Fri, 15 Feb 2019 18:02:22 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Apr 2019 10:14:08 GMT",
"version": "v2"
}
] | 2019-07-24 | [
[
"Hori",
"Naoto",
""
],
[
"Denesyuk",
"Natalia A.",
""
],
[
"Thirumalai",
"D.",
""
]
] | The highly charged RNA molecules, with each phosphate carrying a single negative charge, cannot fold into well-defined architectures with tertiary interactions, in the absence of ions. For ribozymes, divalent cations are known to be more efficient than monovalent ions in driving them to a compact state although Mg$^{2+}$ ions are needed for catalytic activities. Therefore, how ions interact with RNA is relevant in understanding RNA folding. It is often thought that most of the ions are territorially and non-specifically bound to the RNA, as predicted by the counterion condensation (CIC) theory. Here, we show using simulations of ${\it Azoarcus}$ ribozyme, based on an accurate coarse-grained Three Site Interaction (TIS) model, with explicit divalent and monovalent cations, that ion condensation is highly specific and depends on the nucleotide position. The regions with high coordination between the phosphate groups and the divalent cations are discernible even at very low Mg$^{2+}$ concentrations when the ribozyme does not form tertiary interactions. Surprisingly, these regions also contain the secondary structural elements that nucleate subsequently in the self-assembly of RNA, implying that ion condensation is determined by the architecture of the folded state. These results are in sharp contrast to interactions of ions (monovalent and divalent) with rigid charged rods in which ion condensation is uniform and position independent. The differences are explained in terms of the dramatic non-monotonic shape fluctuations in the ribozyme as it folds with increasing Mg$^{2+}$ or Ca$^{2+}$ concentration. |
2103.07660 | Li Duo | Li Duo and Zhao Yingren and Chen Hongmei | Light chain systemic amyloidosis manifested as liver failure complicated
with fatal spontaneous splenic rupture: A case report | null | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | For a patient with manifestations of nausea, abdominal distension,
spontaneous splenic rupture, obvious liver enlargement, low red blood cells and
platelets, yellow sclera, and spider angioma, Congo red staining of liver and
spleen tissues indicated amyloidosis. After secondary factors were excluded,
the patient was finally diagnosed with chronic liver failure, light chain
amyloidosis, and spontaneous bacterial peritonitis after cystic resection and
splenectomy. This case suggests that for patients with chronic liver failure
accompanied by spontaneous splenic rupture and hepatomegaly for unknown
reasons, the possibility of amyloidosis should be considered after excluding
other factors, such as viral liver disease, autoimmune disease, alcoholic liver
disease, genetic metabolic liver disease, and liver tumor. Considering the low
clinical incidence rate and poor prognosis, diagnosis depends on biopsy
results, and many cases are not confirmed until autopsy. Therefore, once
amyloidosis is suspected, it is necessary to communicate with patients and
their families as soon as possible about the risks of examination, the
treatment options, and the prognosis.
| [
{
"created": "Sat, 13 Mar 2021 09:13:40 GMT",
"version": "v1"
}
] | 2021-03-16 | [
[
"Duo",
"Li",
""
],
[
"Yingren",
"Zhao",
""
],
[
"Hongmei",
"Chen",
""
]
] | For a patient with manifestations of nausea, abdominal distension, spontaneous splenic rupture, obvious liver enlargement, low red blood cells and platelets, yellow sclera, and spider angioma, Congo red staining of liver and spleen tissues indicated amyloidosis. After secondary factors were excluded, the patient was finally diagnosed with chronic liver failure, light chain amyloidosis, and spontaneous bacterial peritonitis after cystic resection and splenectomy. This case suggests that for patients with chronic liver failure accompanied by spontaneous splenic rupture and hepatomegaly for unknown reasons, the possibility of amyloidosis should be considered after excluding other factors, such as viral liver disease, autoimmune disease, alcoholic liver disease, genetic metabolic liver disease, and liver tumor. Considering the low clinical incidence rate and poor prognosis, diagnosis depends on biopsy results, and many cases are not confirmed until autopsy. Therefore, once amyloidosis is suspected, it is necessary to communicate with patients and their families as soon as possible about the risks of examination, the treatment options, and the prognosis.
2207.14776 | Farzad Khalvati | Khashayar Namdar, Matthias W. Wagner, Birgit B. Ertl-Wagner, Farzad
Khalvati | Open-radiomics: A Collection of Standardized Datasets and a Technical
Protocol for Reproducible Radiomics Machine Learning Pipelines | null | null | null | null | q-bio.QM cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Purpose: As an important branch of machine learning pipelines in medical
imaging, radiomics faces two major challenges, namely reproducibility and
accessibility. In this work, we introduce open-radiomics, a set of radiomics
datasets along with a comprehensive radiomics pipeline based on our proposed
technical protocol to investigate the effects of radiomics feature extraction
on the reproducibility of the results.
Materials and Methods: Experiments are conducted on the BraTS 2020 open-source
Magnetic Resonance Imaging (MRI) dataset that includes 369 adult patients with
brain tumors (76 low-grade glioma (LGG), and 293 high-grade glioma (HGG)).
Using the PyRadiomics library for LGG vs. HGG classification, 288 radiomics
datasets are formed: the combinations of 4 MRI sequences, 3 binWidths, 6 image
normalization methods, and 4 tumor subregions.
Random Forest classifiers were used, and for each radiomics dataset the
training-validation-test (60%/20%/20%) experiment with different data splits
and model random states was repeated 100 times (28,800 test results) and Area
Under Receiver Operating Characteristic Curve (AUC) was calculated.
Results: Unlike binWidth and image normalization, tumor subregion and imaging
sequence significantly affected the performance of the models. The T1
contrast-enhanced sequence and the union of the necrotic and non-enhancing
tumor core subregions
resulted in the highest AUCs (average test AUC 0.951, 95% confidence interval
of (0.949, 0.952)). Although 28 settings and data splits yielded test AUC of 1,
they were irreproducible.
Conclusion: Our experiments demonstrate that sources of variability in
radiomics pipelines (e.g., tumor subregion) can have a significant impact on
the results, which may lead to superficial perfect performances that are
irreproducible.
| [
{
"created": "Fri, 29 Jul 2022 16:37:46 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Oct 2023 18:41:44 GMT",
"version": "v2"
}
] | 2023-10-26 | [
[
"Namdar",
"Khashayar",
""
],
[
"Wagner",
"Matthias W.",
""
],
[
"Ertl-Wagner",
"Birgit B.",
""
],
[
"Khalvati",
"Farzad",
""
]
] | Purpose: As an important branch of machine learning pipelines in medical imaging, radiomics faces two major challenges, namely reproducibility and accessibility. In this work, we introduce open-radiomics, a set of radiomics datasets along with a comprehensive radiomics pipeline based on our proposed technical protocol to investigate the effects of radiomics feature extraction on the reproducibility of the results. Materials and Methods: Experiments are conducted on the BraTS 2020 open-source Magnetic Resonance Imaging (MRI) dataset that includes 369 adult patients with brain tumors (76 low-grade glioma (LGG), and 293 high-grade glioma (HGG)). Using the PyRadiomics library for LGG vs. HGG classification, 288 radiomics datasets are formed: the combinations of 4 MRI sequences, 3 binWidths, 6 image normalization methods, and 4 tumor subregions. Random Forest classifiers were used, and for each radiomics dataset the training-validation-test (60%/20%/20%) experiment with different data splits and model random states was repeated 100 times (28,800 test results) and Area Under Receiver Operating Characteristic Curve (AUC) was calculated. Results: Unlike binWidth and image normalization, tumor subregion and imaging sequence significantly affected the performance of the models. The T1 contrast-enhanced sequence and the union of the necrotic and non-enhancing tumor core subregions resulted in the highest AUCs (average test AUC 0.951, 95% confidence interval of (0.949, 0.952)). Although 28 settings and data splits yielded test AUC of 1, they were irreproducible. Conclusion: Our experiments demonstrate that sources of variability in radiomics pipelines (e.g., tumor subregion) can have a significant impact on the results, which may lead to superficial perfect performances that are irreproducible.
0904.3253 | Joel Miller | Joel C Miller | Percolation in clustered networks | The first version is being separated into multiple papers. This is
the first of these to be submitted | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The social networks that infectious diseases spread along are typically
clustered. Because of the close relation between percolation and epidemic
spread, the behavior of percolation in such networks gives insight into
infectious disease dynamics. A number of authors have studied clustered
networks, but the networks often contain preferential mixing between high
degree nodes. We introduce a class of random clustered networks and another
class of random unclustered networks with the same preferential mixing. We
analytically show that percolation in the clustered networks reduces the
component sizes and increases the epidemic threshold compared to the
unclustered networks.
| [
{
"created": "Tue, 21 Apr 2009 19:09:51 GMT",
"version": "v1"
},
{
"created": "Thu, 14 May 2009 16:59:40 GMT",
"version": "v2"
}
] | 2009-05-14 | [
[
"Miller",
"Joel C",
""
]
] | The social networks that infectious diseases spread along are typically clustered. Because of the close relation between percolation and epidemic spread, the behavior of percolation in such networks gives insight into infectious disease dynamics. A number of authors have studied clustered networks, but the networks often contain preferential mixing between high degree nodes. We introduce a class of random clustered networks and another class of random unclustered networks with the same preferential mixing. We analytically show that percolation in the clustered networks reduces the component sizes and increases the epidemic threshold compared to the unclustered networks. |
1511.04836 | Byunghan Lee | Byunghan Lee, Taesup Moon, Sungroh Yoon, and Tsachy Weissman | DUDE-Seq: Fast, Flexible, and Robust Denoising for Targeted Amplicon
Sequencing | null | null | null | null | q-bio.GN cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the correction of errors from nucleotide sequences produced by
next-generation targeted amplicon sequencing. The next-generation sequencing
(NGS) platforms can provide a great deal of sequencing data thanks to their
high throughput, but the associated error rates often tend to be high.
Denoising in high-throughput sequencing has thus become a crucial process for
boosting the reliability of downstream analyses. Our methodology, named
DUDE-Seq, is derived from a general setting of reconstructing finite-valued
source data corrupted by a discrete memoryless channel and effectively corrects
substitution and homopolymer indel errors, the two major types of sequencing
errors in most high-throughput targeted amplicon sequencing platforms. Our
experimental studies with real and simulated datasets suggest that the proposed
DUDE-Seq not only outperforms existing alternatives in terms of
error-correction capability and time efficiency, but also boosts the
reliability of downstream analyses. Further, the flexibility of DUDE-Seq
enables its robust application to different sequencing platforms and analysis
pipelines by simple updates of the noise model. DUDE-Seq is available at
http://data.snu.ac.kr/pub/dude-seq.
| [
{
"created": "Mon, 16 Nov 2015 06:19:52 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Jan 2016 07:46:49 GMT",
"version": "v2"
},
{
"created": "Tue, 4 Jul 2017 04:12:39 GMT",
"version": "v3"
}
] | 2017-07-05 | [
[
"Lee",
"Byunghan",
""
],
[
"Moon",
"Taesup",
""
],
[
"Yoon",
"Sungroh",
""
],
[
"Weissman",
"Tsachy",
""
]
] | We consider the correction of errors from nucleotide sequences produced by next-generation targeted amplicon sequencing. The next-generation sequencing (NGS) platforms can provide a great deal of sequencing data thanks to their high throughput, but the associated error rates often tend to be high. Denoising in high-throughput sequencing has thus become a crucial process for boosting the reliability of downstream analyses. Our methodology, named DUDE-Seq, is derived from a general setting of reconstructing finite-valued source data corrupted by a discrete memoryless channel and effectively corrects substitution and homopolymer indel errors, the two major types of sequencing errors in most high-throughput targeted amplicon sequencing platforms. Our experimental studies with real and simulated datasets suggest that the proposed DUDE-Seq not only outperforms existing alternatives in terms of error-correction capability and time efficiency, but also boosts the reliability of downstream analyses. Further, the flexibility of DUDE-Seq enables its robust application to different sequencing platforms and analysis pipelines by simple updates of the noise model. DUDE-Seq is available at http://data.snu.ac.kr/pub/dude-seq. |
2106.08327 | Tamilalagan P | P. Tamilalagan, B. Krithika, P. Manivannan | A SEIRUC mathematical model for transmission dynamics of COVID-19 | 19 pages, 5 figures | null | null | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The world is still fighting against COVID-19, which has been lasting for more
than a year. To date, COVID-19 has been one of the greatest challenges to
human beings, since the pathogen SARS-CoV-2 that causes it has significantly
different biological and transmission characteristics compared to the SARS-CoV
and MERS-CoV pathogens. Despite the many control strategies implemented to
reduce disease spread, there is a rise in the number of
infected cases around the world. Hence, a mathematical model which can describe
the real nature and impact of COVID-19 is necessary for the better
understanding of disease transmission dynamics of COVID-19. This article
proposes a new compartmental SEIRUC mathematical model, which includes the new
state called convalesce (C). The basic reproduction number $\mathcal{R}_0$ is
identified for the proposed model. Stability analyses are performed for the
disease-free equilibrium ($\mathcal{E}_0$) as well as for the endemic equilibrium
($\mathcal{E}_*$) by using the Routh-Hurwitz criterion. The graphical
illustrations of the proposed mathematical results are provided to validate the
theoretical results.
| [
{
"created": "Tue, 15 Jun 2021 10:56:26 GMT",
"version": "v1"
}
] | 2021-06-17 | [
[
"Tamilalagan",
"P.",
""
],
[
"Krithika",
"B.",
""
],
[
"Manivannan",
"P.",
""
]
] | The world is still fighting against COVID-19, which has been lasting for more than a year. To date, COVID-19 has been one of the greatest challenges to human beings, since the pathogen SARS-CoV-2 that causes it has significantly different biological and transmission characteristics compared to the SARS-CoV and MERS-CoV pathogens. Despite the many control strategies implemented to reduce disease spread, there is a rise in the number of infected cases around the world. Hence, a mathematical model which can describe the real nature and impact of COVID-19 is necessary for the better understanding of disease transmission dynamics of COVID-19. This article proposes a new compartmental SEIRUC mathematical model, which includes the new state called convalesce (C). The basic reproduction number $\mathcal{R}_0$ is identified for the proposed model. Stability analyses are performed for the disease-free equilibrium ($\mathcal{E}_0$) as well as for the endemic equilibrium ($\mathcal{E}_*$) by using the Routh-Hurwitz criterion. The graphical illustrations of the proposed mathematical results are provided to validate the theoretical results.
1610.02281 | Jason T. L. Wang | Ling Zhong and Jason T. L. Wang | Effective Classification of MicroRNA Precursors Using Combinatorial
Feature Mining and AdaBoost Algorithms | 26 pages, 3 figures | null | null | null | q-bio.GN cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | MicroRNAs (miRNAs) are non-coding RNAs with approximately 22 nucleotides (nt)
that are derived from precursor molecules. These precursor molecules or
pre-miRNAs often fold into stem-loop hairpin structures. However, a large
number of sequences with pre-miRNA-like hairpins can be found in genomes. It is
a challenge to distinguish the real pre-miRNAs from other hairpin sequences
with similar stem-loops (referred to as pseudo pre-miRNAs). Several
computational methods have been developed to tackle this challenge. In this
paper we propose a new method, called MirID, for identifying and classifying
microRNA precursors. We collect 74 features from the sequences and secondary
structures of pre-miRNAs; some of these features are taken from our previous
studies on non-coding RNA prediction while others were suggested in the
literature. We develop a combinatorial feature mining algorithm to identify
suitable feature sets. These feature sets are then used to train support vector
machines to obtain classification models, based on which a classifier ensemble is
constructed. Finally we use an AdaBoost algorithm to further enhance the
accuracy of the classifier ensemble. Experimental results on a variety of
species demonstrate the good performance of the proposed method, and its
superiority over existing tools.
| [
{
"created": "Thu, 6 Oct 2016 04:35:37 GMT",
"version": "v1"
}
] | 2016-10-10 | [
[
"Zhong",
"Ling",
""
],
[
"Wang",
"Jason T. L.",
""
]
] | MicroRNAs (miRNAs) are non-coding RNAs with approximately 22 nucleotides (nt) that are derived from precursor molecules. These precursor molecules or pre-miRNAs often fold into stem-loop hairpin structures. However, a large number of sequences with pre-miRNA-like hairpins can be found in genomes. It is a challenge to distinguish the real pre-miRNAs from other hairpin sequences with similar stem-loops (referred to as pseudo pre-miRNAs). Several computational methods have been developed to tackle this challenge. In this paper we propose a new method, called MirID, for identifying and classifying microRNA precursors. We collect 74 features from the sequences and secondary structures of pre-miRNAs; some of these features are taken from our previous studies on non-coding RNA prediction while others were suggested in the literature. We develop a combinatorial feature mining algorithm to identify suitable feature sets. These feature sets are then used to train support vector machines to obtain classification models, based on which classifier ensemble is constructed. Finally we use an AdaBoost algorithm to further enhance the accuracy of the classifier ensemble. Experimental results on a variety of species demonstrate the good performance of the proposed method, and its superiority over existing tools. |
1711.00045 | Chunwei Ma | Chunwei Ma, Zhiyong Zhu, Jun Ye, Jiarui Yang, Jianguo Pei, Shaohang
Xu, Chang Yu, Fan Mo, Bo Wen, Siqi Liu | Retention Time of Peptides in Liquid Chromatography Is Well Estimated
upon Deep Transfer Learning | 13-page research article | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A fully automatic prediction for peptide retention time (RT) in liquid
chromatography (LC), termed DeepRT, was developed using a deep learning
approach: an ensemble of Residual Network (ResNet) and Long Short-Term Memory
(LSTM) networks. In contrast to traditional predictors based on hand-crafted
peptide features, DeepRT learns features from raw amino acid sequences and
makes relatively accurate predictions of peptide RTs, with an R2 of 0.987 for
unmodified peptides. Furthermore, by virtue of transfer learning, DeepRT
enables utilization of peptide datasets generated under different LC
conditions and with different modification statuses, resulting in RT predictions
with an R2 of 0.992 for unmodified peptides and 0.978 for post-translationally
modified peptides. Even though chromatographic behaviors of peptides are quite
complicated, the study here demonstrated that peptide RT prediction could be
largely improved by deep transfer learning. The DeepRT software is freely
available at https://github.com/horsepurve/DeepRT, under the Apache 2.0 open
source license.
| [
{
"created": "Tue, 31 Oct 2017 18:33:59 GMT",
"version": "v1"
}
] | 2017-11-02 | [
[
"Ma",
"Chunwei",
""
],
[
"Zhu",
"Zhiyong",
""
],
[
"Ye",
"Jun",
""
],
[
"Yang",
"Jiarui",
""
],
[
"Pei",
"Jianguo",
""
],
[
"Xu",
"Shaohang",
""
],
[
"Yu",
"Chang",
""
],
[
"Mo",
"Fan",
""
],
[
"Wen",
"Bo",
""
],
[
"Liu",
"Siqi",
""
]
] | A fully automatic prediction for peptide retention time (RT) in liquid chromatography (LC), termed DeepRT, was developed using a deep learning approach: an ensemble of Residual Network (ResNet) and Long Short-Term Memory (LSTM) networks. In contrast to traditional predictors based on hand-crafted peptide features, DeepRT learns features from raw amino acid sequences and makes relatively accurate predictions of peptide RTs, with an R2 of 0.987 for unmodified peptides. Furthermore, by virtue of transfer learning, DeepRT enables utilization of peptide datasets generated under different LC conditions and with different modification statuses, resulting in RT predictions with an R2 of 0.992 for unmodified peptides and 0.978 for post-translationally modified peptides. Even though chromatographic behaviors of peptides are quite complicated, the study here demonstrated that peptide RT prediction could be largely improved by deep transfer learning. The DeepRT software is freely available at https://github.com/horsepurve/DeepRT, under the Apache 2.0 open source license.
2305.08057 | Colin Grambow | Colin A. Grambow, Hayley Weir, Christian N. Cunningham, Tommaso
Biancalani, Kangway V. Chuang | CREMP: Conformer-rotamer ensembles of macrocyclic peptides for machine
learning | null | Sci. Data 11, 859 (2024) | 10.1038/s41597-024-03698-y | null | q-bio.BM cs.LG physics.chem-ph | http://creativecommons.org/licenses/by/4.0/ | Computational and machine learning approaches to model the conformational
landscape of macrocyclic peptides have the potential to enable rational design
and optimization. However, accurate, fast, and scalable methods for modeling
macrocycle geometries remain elusive. Recent deep learning approaches have
significantly accelerated protein structure prediction and the generation of
small-molecule conformational ensembles, yet similar progress has not been made
for macrocyclic peptides due to their unique properties. Here, we introduce
CREMP, a resource generated for the rapid development and evaluation of machine
learning models for macrocyclic peptides. CREMP contains 36,198 unique
macrocyclic peptides and their high-quality structural ensembles generated
using the Conformer-Rotamer Ensemble Sampling Tool (CREST). Altogether, this
new dataset contains nearly 31.3 million unique macrocycle geometries, each
annotated with energies derived from semi-empirical extended tight-binding
(xTB) DFT calculations. Additionally, we include 3,258 macrocycles with
reported passive permeability data to couple conformational ensembles to
experiment. We anticipate that this dataset will enable the development of
machine learning models that can improve peptide design and optimization for
novel therapeutics.
| [
{
"created": "Sun, 14 May 2023 03:50:46 GMT",
"version": "v1"
},
{
"created": "Fri, 9 Aug 2024 17:16:32 GMT",
"version": "v2"
}
] | 2024-08-12 | [
[
"Grambow",
"Colin A.",
""
],
[
"Weir",
"Hayley",
""
],
[
"Cunningham",
"Christian N.",
""
],
[
"Biancalani",
"Tommaso",
""
],
[
"Chuang",
"Kangway V.",
""
]
] | Computational and machine learning approaches to model the conformational landscape of macrocyclic peptides have the potential to enable rational design and optimization. However, accurate, fast, and scalable methods for modeling macrocycle geometries remain elusive. Recent deep learning approaches have significantly accelerated protein structure prediction and the generation of small-molecule conformational ensembles, yet similar progress has not been made for macrocyclic peptides due to their unique properties. Here, we introduce CREMP, a resource generated for the rapid development and evaluation of machine learning models for macrocyclic peptides. CREMP contains 36,198 unique macrocyclic peptides and their high-quality structural ensembles generated using the Conformer-Rotamer Ensemble Sampling Tool (CREST). Altogether, this new dataset contains nearly 31.3 million unique macrocycle geometries, each annotated with energies derived from semi-empirical extended tight-binding (xTB) DFT calculations. Additionally, we include 3,258 macrocycles with reported passive permeability data to couple conformational ensembles to experiment. We anticipate that this dataset will enable the development of machine learning models that can improve peptide design and optimization for novel therapeutics. |
2110.01892 | Gianpietro Basei | Gianpietro Basei, Alex Zabeo, Kirsten Rasmussen, Georgia Tsiliki,
Danail Hristozov | A Weight of Evidence approach to classify nanomaterials according to the
EU Classification, Labelling and Packaging regulation criteria | Preprint version | NanoImpact, Volume 24, October 2021 | 10.1016/j.impact.2021.100359 | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In the context of the EU Horizon 2020 GRACIOUS project, we proposed a
quantitative Weight of Evidence (WoE) approach for hazard classification of
nanomaterials (NMs). This approach is based on the requirements of the European
Regulation on Classification, Labelling and Packaging of Substances and
Mixtures (the CLP Regulation), which implements the United Nations' Globally
Harmonized System of Classification and Labelling of Chemicals (UN GHS) in the
European Union. The goal of this WoE methodology is to facilitate
classification of NMs according to CLP criteria, following the decision trees
defined in ECHA's CLP regulatory guidance. The proposed methodology involves
the following stages: (1) collection of data for different NMs related to the
endpoint of interest: each study related to each NM is referred to as a Line of
Evidence (LoE); (2) computation of weighted scores for each LoE: each LoE is
weighted by a score calculated based on agreed data quality and completeness
criteria defined in the GRACIOUS project; (3) comparison and integration of the
weighted LoEs for each NM: a Monte Carlo resampling approach is adopted to
quantitatively and probabilistically integrate the weighted evidence; and (4)
assignment of each NM to a hazard class: according to the results, each NM is
assigned to one of the classes defined by the CLP regulation. Furthermore, to
facilitate the integration and the classification of the weighted LoEs, an R
tool was developed. Finally, the approach was tested against an endpoint
relevant to CLP (Aquatic Toxicity) using data retrieved from the eNanoMapper
database; the results obtained were consistent with those in ECHA registration
dossiers and in recent literature.
| [
{
"created": "Tue, 5 Oct 2021 09:25:21 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Nov 2021 10:50:01 GMT",
"version": "v2"
}
] | 2021-11-25 | [
[
"Basei",
"Gianpietro",
""
],
[
"Zabeo",
"Alex",
""
],
[
"Rasmussen",
"Kirsten",
""
],
[
"Tsiliki",
"Georgia",
""
],
[
"Hristozov",
"Danail",
""
]
] | In the context of the EU Horizon 2020 GRACIOUS project, we proposed a quantitative Weight of Evidence (WoE) approach for hazard classification of nanomaterials (NMs). This approach is based on the requirements of the European Regulation on Classification, Labelling and Packaging of Substances and Mixtures (the CLP Regulation), which implements the United Nations' Globally Harmonized System of Classification and Labelling of Chemicals (UN GHS) in the European Union. The goal of this WoE methodology is to facilitate classification of NMs according to CLP criteria, following the decision trees defined in ECHA's CLP regulatory guidance. The proposed methodology involves the following stages: (1) collection of data for different NMs related to the endpoint of interest: each study related to each NM is referred as a Line of Evidence (LoE); (2) computation of weighted scores for each LoE: each LoE is weighted by a score calculated based on agreed data quality and completeness criteria defined in the GRACIOUS project; (3) comparison and integration of the weighed LoEs for each NM: A Monte Carlo resampling approach is adopted to quantitatively and probabilistically integrate the weighted evidence; and (4) assignment of each NM to a hazard class: according to the results, each NM is assigned to one of the classes defined by the CLP regulation. Furthermore, to facilitate the integration and the classification of the weighted LoEs, an R tool was developed. Finally, the approach was tested against an endpoint relevant to CLP (Aquatic Toxicity) using data retrieved from the eNanoMapper database, results obtained were consistent to results in ECHA registration dossiers and in recent literature |
1502.06089 | Da Zhou Dr. | Yuanling Niu, Yue Wang, Da Zhou | The phenotypic equilibrium of cancer cells: From average-level stability
to path-wise convergence | 27 pages, 5 figures in Journal of Theoretical Biology, 2015 | null | 10.1016/j.jtbi.2015.09.001 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The phenotypic equilibrium, i.e. heterogeneous population of cancer cells
tending to a fixed equilibrium of phenotypic proportions, has received much
attention in cancer biology very recently. In previous literature, some
theoretical models were used to predict the experimental phenomena of the
phenotypic equilibrium, which were often explained by different concepts of
stabilities of the models. Here we present a stochastic multi-phenotype
branching model by integrating conventional cellular hierarchy with phenotypic
plasticity mechanisms of cancer cells. Based on our model, it is shown that:
(i) our model can serve as a framework to unify the previous models for the
phenotypic equilibrium, and then harmonizes the different kinds of
average-level stabilities proposed in these models; and (ii) path-wise
convergence of our model provides a deeper understanding of the phenotypic
equilibrium from a stochastic point of view. That is, the emergence of the
phenotypic equilibrium is rooted in the stochastic nature of (almost) every
sample path; the average-level stability just follows from it by averaging
stochastic samples.
| [
{
"created": "Sat, 21 Feb 2015 09:25:48 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Sep 2015 03:16:09 GMT",
"version": "v2"
}
] | 2015-09-24 | [
[
"Niu",
"Yuanling",
""
],
[
"Wang",
"Yue",
""
],
[
"Zhou",
"Da",
""
]
] | The phenotypic equilibrium, i.e. heterogeneous population of cancer cells tending to a fixed equilibrium of phenotypic proportions, has received much attention in cancer biology very recently. In previous literature, some theoretical models were used to predict the experimental phenomena of the phenotypic equilibrium, which were often explained by different concepts of stabilities of the models. Here we present a stochastic multi-phenotype branching model by integrating conventional cellular hierarchy with phenotypic plasticity mechanisms of cancer cells. Based on our model, it is shown that: (i) our model can serve as a framework to unify the previous models for the phenotypic equilibrium, and then harmonizes the different kinds of average-level stabilities proposed in these models; and (ii) path-wise convergence of our model provides a deeper understanding to the phenotypic equilibrium from stochastic point of view. That is, the emergence of the phenotypic equilibrium is rooted in the stochastic nature of (almost) every sample path, the average-level stability just follows from it by averaging stochastic samples. |
1106.3166 | Jan Buytaert | J.A.N. Buytaert, W.H.M. Salih, M. Dierick, P. Jacobs and J.J.J. Dirckx | Realistic 3D computer model of the gerbil middle ear, featuring accurate
morphology of bone and soft tissue structures | 41 pages, 14 figures, to be published in JARO - Journal of the
Association for Research in Otolaryngology | null | null | null | q-bio.TO physics.bio-ph | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In order to improve realism in middle ear (ME) finite element modeling (FEM),
comprehensive and precise morphological data are needed. To date, micro-scale
X-ray computed tomography (\mu CT) recordings have been used as geometric input
data for FEM models of the ME ossicles. Previously, attempts were made to
obtain this data on ME soft tissue structures as well. However, due to low
X-ray absorption of soft tissue, the quality of these images is limited. Another
popular approach is using histological sections as data for 3D models,
delivering high in-plane resolution for the sections, but the technique is
destructive in nature and registration of the sections is difficult. We combine
data from high-resolution \mu CT recordings with data from high-resolution
orthogonal-plane fluorescence optical-sectioning microscopy (OPFOS), both
obtained on the same gerbil specimen. State-of-the-art \mu CT delivers
high-resolution data on the three-dimensional shape of ossicles and other ME
bony structures, while the OPFOS setup generates data of unprecedented quality
both on bone and soft tissue ME structures. Each of these techniques is
tomographic and non-destructive, and delivers sets of automatically aligned
virtual sections. The datasets coming from different techniques need to be
registered with respect to each other. By combining both datasets, we obtain a
complete high-resolution morphological model of all functional components in
the gerbil ME. The resulting three-dimensional model can be readily imported in
FEM software and is made freely available to the research community. In this
paper, we discuss the methods used, present the resulting merged model and
discuss morphological properties of the soft tissue structures, such as muscles
and ligaments.
| [
{
"created": "Thu, 16 Jun 2011 08:26:53 GMT",
"version": "v1"
}
] | 2011-06-17 | [
[
"Buytaert",
"J. A. N.",
""
],
[
"Salih",
"W. H. M.",
""
],
[
"Dierick",
"M.",
""
],
[
"Jacobs",
"P.",
""
],
[
"Dirckx",
"J. J. J.",
""
]
] | In order to improve realism in middle ear (ME) finite element modeling (FEM), comprehensive and precise morphological data are needed. To date, micro-scale X-ray computed tomography (\mu CT) recordings have been used as geometric input data for FEM models of the ME ossicles. Previously, attempts were made to obtain this data on ME soft tissue structures as well. However, due to low X-ray absorption of soft tissue, quality of these images is limited. Another popular approach is using histological sections as data for 3D models, delivering high in-plane resolution for the sections, but the technique is destructive in nature and registration of the sections is difficult. We combine data from high-resolution \mu CT recordings with data from high-resolution orthogonal-plane fluorescence optical-sectioning microscopy (OPFOS), both obtained on the same gerbil specimen. State-of-the-art \mu CT delivers high-resolution data on the three-dimensional shape of ossicles and other ME bony structures, while the OPFOS setup generates data of unprecedented quality both on bone and soft tissue ME structures. Each of these techniques is tomographic and non-destructive, and delivers sets of automatically aligned virtual sections. The datasets coming from different techniques need to be registered with respect to each other. By combining both datasets, we obtain a complete high-resolution morphological model of all functional components in the gerbil ME. The resulting three-dimensional model can be readily imported in FEM software and is made freely available to the research community. In this paper, we discuss the methods used, present the resulting merged model and discuss morphological properties of the soft tissue structures, such as muscles and ligaments. |
2401.12462 | Chuanqing Xu | Chuanqing Xu, Kedeng Cheng, Songbai Guo, Dehui Yuan, Xiaoyu Zhao | A dynamic model to study the potential TB infections and assessment of
control strategies in China | 20 pages, 10 figures, 33 conference | null | null | null | q-bio.PE math.DS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | China is one of the countries with a high burden of tuberculosis, and
although the number of new cases of tuberculosis has been decreasing year by
year, the number of new infections per year has remained high and the diagnosis
rate of tuberculosis-infected patients has remained low. Based on the analysis
of TB infection data, we develop a model of TB transmission dynamics that
includes potentially infected individuals and BCG vaccination, fit the model
parameters to the data on new TB cases, and calculate the basic reproduction
number \mathcal{R}_v = 0.4442. A parametric sensitivity analysis of \mathcal{R}_v is
performed, and we obtained the correlation coefficients of BCG vaccination rate
and effectiveness rate with \mathcal{R}_v as -0.810 and -0.825, respectively. According to the
model, we estimate that there are 614,186 (95% CI: 562,631 to 665,741)
potentially infected TB cases in China, accounting for about 39.5% of the total
number of TB cases. We assess the feasibility of achieving the goals of the WHO
strategy to end tuberculosis in China and find that reducing the number of new
cases by 90 per cent by 2035 is very difficult with the current tuberculosis
control measures. However, with an effective combination of control measures
such as increased detection of potentially infected persons, improved drug
treatment, and reduction of overall exposure to tuberculosis patients, it is
feasible to reach the WHO strategic goal of ending tuberculosis by 2035.
| [
{
"created": "Tue, 23 Jan 2024 03:21:49 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jan 2024 07:12:47 GMT",
"version": "v2"
}
] | 2024-01-26 | [
[
"Xu",
"Chuanqing",
""
],
[
"Cheng",
"Kedeng",
""
],
[
"Guo",
"Songbai",
""
],
[
"Yuan",
"Dehui",
""
],
[
"Zhao",
"Xiaoyu",
""
]
] | China is one of the countries with a high burden of tuberculosis, and although the number of new cases of tuberculosis has been decreasing year by year, the number of new infections per year has remained high and the diagnosis rate of tuberculosis-infected patients has remained low. Based on the analysis of TB infection data, we develop a model of TB transmission dynamics that include potentially infected individuals and BCG vaccination, fit the model parameters to the data on new TB cases, calculate the basic reproduction number \mathcal{R}_v= 0.4442. A parametric sensitivity analysis of \mathcal{R}_v is performed, and we obtained the correlation coefficients of BCG vaccination rate and effectiveness rate with \mathcal{R}_v as -0.810, -0.825. According to the model, we estimate that there are 614,186 (95% CI [562,631,665,741]) potentially infected TB cases in China, accounting for about 39.5% of the total number of TB cases. We assess the feasibility of achieving the goals of the WHO strategy to end tuberculosis in China and find that reducing the number of new cases by 90 per cent by 2035 is very difficult with the current tuberculosis control measures. However, with an effective combination of control measures such as increased detection of potentially infected persons, improved drug treatment, and reduction of overall exposure to tuberculosis patients, it is feasible to reach the WHO strategic goal of ending tuberculosis by 2035. |