id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1308.3552 | Markus Dahlem | Markus A. Dahlem, Sebastian Rode, Arne May, Naoya Fujiwara, Yoshito
Hirata, Kazuyuki Aihara, and J\"urgen Kurths | Towards dynamical network biomarkers in neuromodulation of episodic
migraine | 13 pages, 5 figures | null | 10.2478/s13380-013-0127-0 | null | q-bio.NC nlin.PS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational methods have complemented experimental and clinical
neurosciences and led to improvements in our understanding of the nervous
system in health and disease. In parallel, neuromodulation in the form of
electric and magnetic stimulation is gaining increasing acceptance for chronic
and intractable diseases. In this paper, we first explore the relevant state of
the art in the fusion of both developments towards translational computational
neuroscience. Then, we propose a strategy to employ the new theoretical concept
of dynamical network biomarkers (DNB) in episodic manifestations of chronic
disorders. In particular, as a first example, we introduce the use of
computational models in migraine and illustrate on the basis of this example
the potential of DNB as early-warning signals for neuromodulation in episodic
migraine.
| [
{
"created": "Fri, 16 Aug 2013 04:48:56 GMT",
"version": "v1"
}
] | 2014-08-13 | [
[
"Dahlem",
"Markus A.",
""
],
[
"Rode",
"Sebastian",
""
],
[
"May",
"Arne",
""
],
[
"Fujiwara",
"Naoya",
""
],
[
"Hirata",
"Yoshito",
""
],
[
"Aihara",
"Kazuyuki",
""
],
[
"Kurths",
"Jürgen",
""
]
] | Computational methods have complemented experimental and clinical neurosciences and led to improvements in our understanding of the nervous system in health and disease. In parallel, neuromodulation in the form of electric and magnetic stimulation is gaining increasing acceptance for chronic and intractable diseases. In this paper, we first explore the relevant state of the art in the fusion of both developments towards translational computational neuroscience. Then, we propose a strategy to employ the new theoretical concept of dynamical network biomarkers (DNB) in episodic manifestations of chronic disorders. In particular, as a first example, we introduce the use of computational models in migraine and illustrate on the basis of this example the potential of DNB as early-warning signals for neuromodulation in episodic migraine. |
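The dynamical-network-biomarker idea in the abstract above rests on generic early-warning signals near a tipping point: variance and temporal autocorrelation rise as a system approaches a transition. A minimal, hypothetical one-variable surrogate (not the authors' migraine model; all parameters are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical surrogate for a system drifting toward an episodic transition:
# an AR(1) process whose feedback parameter a_t creeps toward the critical
# value 1 (critical slowing down).
n = 4000
a = np.linspace(0.2, 0.99, n)
x = np.zeros(n)
for t in range(1, n):
    x[t] = a[t] * x[t - 1] + rng.normal(scale=0.1)

def lag1_autocorr(w):
    # lag-1 autocorrelation of a window of the time series
    return np.corrcoef(w[:-1], w[1:])[0, 1]

win = 500
early, late = x[:win], x[-win:]

# Classic early-warning indicators: both should rise before the transition
rising_variance = late.var() > early.var()
rising_autocorr = lag1_autocorr(late) > lag1_autocorr(early)
```

Monitoring such indicators in a sliding window is one way an early-warning signal could trigger neuromodulation before an attack.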
1306.3850 | Changbong Hyeon | Changbong Hyeon and D. Thirumalai | Generalized Iterative Annealing Model for the action of RNA chaperones | 19 pages, 3 figures | J. Chem. Phys. (2013) vol. 139, 121924 | 10.1063/1.4818594 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a consequence of the rugged landscape of RNA molecules, their folding is
described by the kinetic partitioning mechanism according to which only a small
fraction ($\phi_F$) reaches the folded state while the remaining fraction of
molecules is kinetically trapped in misfolded intermediates. The transition
from the misfolded states to the native state can far exceed biologically
relevant timescales. Thus, RNA folding in vivo is often aided by protein
cofactors, called RNA chaperones, that can rescue RNAs from a multitude of
misfolded structures. We consider two models, based on chemical kinetics and
the chemical master equation, for describing assisted folding. In the passive
model, applicable to class I substrates, transient interactions of misfolded
structures with RNA chaperones alone are sufficient to destabilize the
misfolded structures, thus entropically lowering the barrier to folding. For
this mechanism to be efficient, the intermediate ribonucleoprotein (RNP)
complex between collapsed RNA and protein cofactor should have optimal
stability. We also introduce an active model (suitable for stringent
substrates with small $\phi_F$), which accounts for the recent experimental
findings on the action of CYT-19 on the group I intron ribozyme, showing that
RNA chaperones do not discriminate between the misfolded and the native
states. In the active model, the RNA chaperone system utilizes the chemical
energy of ATP hydrolysis to repeatedly bind and release misfolded and folded
RNAs, resulting in a substantial increase in the yield of the native state.
The theory outlined here shows, in
accord with experiments, that in the steady state the native state does not
form with unit probability.
| [
{
"created": "Mon, 17 Jun 2013 13:24:50 GMT",
"version": "v1"
}
] | 2017-01-24 | [
[
"Hyeon",
"Changbong",
""
],
[
"Thirumalai",
"D.",
""
]
] | As a consequence of the rugged landscape of RNA molecules, their folding is described by the kinetic partitioning mechanism according to which only a small fraction ($\phi_F$) reaches the folded state while the remaining fraction of molecules is kinetically trapped in misfolded intermediates. The transition from the misfolded states to the native state can far exceed biologically relevant timescales. Thus, RNA folding in vivo is often aided by protein cofactors, called RNA chaperones, that can rescue RNAs from a multitude of misfolded structures. We consider two models, based on chemical kinetics and the chemical master equation, for describing assisted folding. In the passive model, applicable to class I substrates, transient interactions of misfolded structures with RNA chaperones alone are sufficient to destabilize the misfolded structures, thus entropically lowering the barrier to folding. For this mechanism to be efficient, the intermediate ribonucleoprotein (RNP) complex between collapsed RNA and protein cofactor should have optimal stability. We also introduce an active model (suitable for stringent substrates with small $\phi_F$), which accounts for the recent experimental findings on the action of CYT-19 on the group I intron ribozyme, showing that RNA chaperones do not discriminate between the misfolded and the native states. In the active model, the RNA chaperone system utilizes the chemical energy of ATP hydrolysis to repeatedly bind and release misfolded and folded RNAs, resulting in a substantial increase in the yield of the native state. The theory outlined here shows, in accord with experiments, that in the steady state the native state does not form with unit probability. |
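The active (indiscriminate) annealing kinetics in the abstract above can be caricatured with three coupled rate equations: unfolded RNA (U) partitions into native (F) with probability phi_F and misfolded (M) otherwise, while an ATP-driven chaperone unfolds M and F alike. The rate constants below are illustrative, not fitted to the CYT-19 experiments:

```python
import numpy as np

# Three-state sketch of the active iterative-annealing model
phi_F = 0.1   # kinetic partition factor: fraction folding natively per round
k = 1.0       # folding rate out of the unfolded pool (illustrative)
kappa = 0.5   # chaperone-driven unfolding rate, acting on M and F alike

U, M, F = 1.0, 0.0, 0.0
dt = 0.001
for _ in range(200_000):   # forward-Euler integration to steady state
    dU = kappa * (M + F) - k * U
    dM = k * (1 - phi_F) * U - kappa * M
    dF = k * phi_F * U - kappa * F
    U, M, F = U + dt * dU, M + dt * dM, F + dt * dF
```

Because the chaperone also unfolds correctly folded RNA, the steady state satisfies F/M = phi_F/(1 - phi_F) and the native fraction stays strictly below 1, in line with the abstract's conclusion.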
1308.0101 | Juli\'an Candia | Juli\'an Candia, Ryan Maunu, Meghan Driscoll, Ang\'elique Biancotto,
Pradeep Dagur, J. Philip McCoy Jr, H. Nida Sen, Lai Wei, Amos Maritan, Kan
Cao, Robert B. Nussenblatt, Jayanth R. Banavar, Wolfgang Losert | From Cellular Characteristics to Disease Diagnosis: Uncovering
Phenotypes with Supercells | 22 pages, 4 figures. To appear in PLOS Computational Biology | PLOS Comp. Biol. 9(9):e1003215 (2013) | 10.1371/journal.pcbi.1003215 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cell heterogeneity and the inherent complexity due to the interplay of
multiple molecular processes within the cell pose difficult challenges for
current single-cell biology. We introduce an approach that identifies a disease
phenotype from multiparameter single-cell measurements, which is based on the
concept of "supercell statistics", a single-cell-based averaging procedure
followed by a machine learning classification scheme. We are able to assess the
optimal tradeoff between the number of single cells averaged and the number of
measurements needed to capture phenotypic differences between healthy and
diseased patients, as well as between different diseases that are difficult to
diagnose otherwise. We apply our approach to two kinds of single-cell datasets,
addressing the diagnosis of a premature aging disorder using images of cell
nuclei, as well as the phenotypes of two non-infectious uveitides (the ocular
manifestations of Beh\c{c}et's disease and sarcoidosis) based on multicolor
flow cytometry. In the former case, one nuclear shape measurement taken over a
group of 30 cells is sufficient to classify samples as healthy or diseased, in
agreement with usual laboratory practice. In the latter, our method is able to
identify a minimal set of 5 markers that accurately predict Beh\c{c}et's
disease and sarcoidosis. This is the first time that a quantitative phenotypic
distinction between these two diseases has been achieved. To obtain this clear
phenotypic signature, about one hundred CD8+ T cells need to be measured.
Beyond these specific cases, the approach proposed here is applicable to
datasets generated by other kinds of state-of-the-art and forthcoming
single-cell technologies, such as multidimensional mass cytometry, single-cell
gene expression, and single-cell full genome sequencing techniques.
| [
{
"created": "Thu, 1 Aug 2013 05:57:03 GMT",
"version": "v1"
}
] | 2013-09-10 | [
[
"Candia",
"Julián",
""
],
[
"Maunu",
"Ryan",
""
],
[
"Driscoll",
"Meghan",
""
],
[
"Biancotto",
"Angélique",
""
],
[
"Dagur",
"Pradeep",
""
],
[
"McCoy",
"J. Philip",
"Jr"
],
[
"Sen",
"H. Nida",
""
],
[
"Wei",
"Lai",
""
],
[
"Maritan",
"Amos",
""
],
[
"Cao",
"Kan",
""
],
[
"Nussenblatt",
"Robert B.",
""
],
[
"Banavar",
"Jayanth R.",
""
],
[
"Losert",
"Wolfgang",
""
]
] | Cell heterogeneity and the inherent complexity due to the interplay of multiple molecular processes within the cell pose difficult challenges for current single-cell biology. We introduce an approach that identifies a disease phenotype from multiparameter single-cell measurements, which is based on the concept of "supercell statistics", a single-cell-based averaging procedure followed by a machine learning classification scheme. We are able to assess the optimal tradeoff between the number of single cells averaged and the number of measurements needed to capture phenotypic differences between healthy and diseased patients, as well as between different diseases that are difficult to diagnose otherwise. We apply our approach to two kinds of single-cell datasets, addressing the diagnosis of a premature aging disorder using images of cell nuclei, as well as the phenotypes of two non-infectious uveitides (the ocular manifestations of Beh\c{c}et's disease and sarcoidosis) based on multicolor flow cytometry. In the former case, one nuclear shape measurement taken over a group of 30 cells is sufficient to classify samples as healthy or diseased, in agreement with usual laboratory practice. In the latter, our method is able to identify a minimal set of 5 markers that accurately predict Beh\c{c}et's disease and sarcoidosis. This is the first time that a quantitative phenotypic distinction between these two diseases has been achieved. To obtain this clear phenotypic signature, about one hundred CD8+ T cells need to be measured. Beyond these specific cases, the approach proposed here is applicable to datasets generated by other kinds of state-of-the-art and forthcoming single-cell technologies, such as multidimensional mass cytometry, single-cell gene expression, and single-cell full genome sequencing techniques. |
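The core of the "supercell statistics" idea above is that averaging groups of single cells shrinks measurement noise while preserving the phenotypic shift, making classification easier. A minimal synthetic sketch with one hypothetical feature and a simple threshold classifier (the data and group size are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical single-cell feature: small mean shift between populations,
# large cell-to-cell noise
n_cells, shift = 3000, 0.3
healthy = rng.normal(0.0, 1.0, n_cells)
diseased = rng.normal(shift, 1.0, n_cells)

def accuracy(h, d, threshold):
    # classify as "diseased" when the feature exceeds the midpoint threshold
    return 0.5 * (np.mean(h < threshold) + np.mean(d >= threshold))

thr = shift / 2
acc_single = accuracy(healthy, diseased, thr)

# "Supercell" step: average non-overlapping groups of 30 cells before
# classifying; noise shrinks by sqrt(30) while the mean shift is preserved
n_super = 30
h_super = healthy.reshape(-1, n_super).mean(axis=1)
d_super = diseased.reshape(-1, n_super).mean(axis=1)
acc_super = accuracy(h_super, d_super, thr)
```

The tradeoff studied in the paper is visible here: larger groups raise accuracy per supercell but reduce the number of independent supercells available.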
1306.4374 | Juan Carlos del Alamo | Juan C del Alamo, Ruedi Meili, Bego\~na Alvarez-Gonzalez, Baldomero
Alonso-Latorre, Effie Bastounis, Richard Firtel, Juan C Lasheras | Three-Dimensional Quantification of Cellular Traction Forces and
Mechanosensing of Thin Substrata by Fourier Traction Force Microscopy | null | PLoS ONE 8(9): e69850, 2013 | 10.1371/journal.pone.0069850 | null | q-bio.QM cond-mat.soft q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel three-dimensional (3D) traction force microscopy (TFM)
method motivated by the recent discovery that cells adhering on plane surfaces
exert both in-plane and out-of-plane traction stresses. We measure the 3D
deformation of the substratum on a thin layer near its surface, and input this
information into an exact analytical solution of the elastic equilibrium
equation. These operations are performed in the Fourier domain with high
computational efficiency, allowing us to obtain the 3D traction stresses from raw
microscopy images virtually in real time. We also characterize the error of
previous two-dimensional (2D) TFM methods that neglect the out-of-plane
component of the traction stresses. This analysis reveals that, under certain
combinations of experimental parameters (i.e., cell size, substratum thickness
and Poisson's ratio), the accuracy of 2D TFM methods is minimally affected by
neglecting the out-of-plane component of the traction stresses. Finally, we
consider the cell's mechanosensing of substratum thickness by 3D traction
stresses, finding that, when cells adhere on thin substrata, their out-of-plane
traction stresses can reach four times deeper into the substratum than their
in-plane traction stresses. It is also found that the substratum stiffness
sensed by applying out-of-plane traction stresses may be up to 10 times larger
than the stiffness sensed by applying in-plane traction stresses.
| [
{
"created": "Tue, 18 Jun 2013 22:04:59 GMT",
"version": "v1"
}
] | 2013-10-07 | [
[
"del Alamo",
"Juan C",
""
],
[
"Meili",
"Ruedi",
""
],
[
"Alvarez-Gonzalez",
"Begoña",
""
],
[
"Alonso-Latorre",
"Baldomero",
""
],
[
"Bastounis",
"Effie",
""
],
[
"Firtel",
"Richard",
""
],
[
"Lasheras",
"Juan C",
""
]
] | We introduce a novel three-dimensional (3D) traction force microscopy (TFM) method motivated by the recent discovery that cells adhering on plane surfaces exert both in-plane and out-of-plane traction stresses. We measure the 3D deformation of the substratum on a thin layer near its surface, and input this information into an exact analytical solution of the elastic equilibrium equation. These operations are performed in the Fourier domain with high computational efficiency, allowing us to obtain the 3D traction stresses from raw microscopy images virtually in real time. We also characterize the error of previous two-dimensional (2D) TFM methods that neglect the out-of-plane component of the traction stresses. This analysis reveals that, under certain combinations of experimental parameters (i.e., cell size, substratum thickness and Poisson's ratio), the accuracy of 2D TFM methods is minimally affected by neglecting the out-of-plane component of the traction stresses. Finally, we consider the cell's mechanosensing of substratum thickness by 3D traction stresses, finding that, when cells adhere on thin substrata, their out-of-plane traction stresses can reach four times deeper into the substratum than their in-plane traction stresses. It is also found that the substratum stiffness sensed by applying out-of-plane traction stresses may be up to 10 times larger than the stiffness sensed by applying in-plane traction stresses. |
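Why the Fourier-domain formulation above is "virtually real time": each Fourier mode of the displacement is related to the corresponding traction mode by an algebraic kernel, so inversion is a per-mode division rather than a large linear solve. The scalar kernel u_hat = t_hat / (mu |k|) below is a deliberate one-component caricature of the Boussinesq-type solution, not the paper's full 3D tensor method:

```python
import numpy as np

rng = np.random.default_rng(2)
n, mu = 64, 1.0

# Wavevector magnitudes on the FFT grid
k = 2 * np.pi * np.fft.fftfreq(n)
kx, ky = np.meshgrid(k, k, indexing="ij")
kmag = np.hypot(kx, ky)
kmag[0, 0] = 1.0          # avoid division by zero; zero mode handled below

# Synthetic zero-mean traction field (cell tractions are force-balanced)
traction = rng.normal(size=(n, n))
traction -= traction.mean()
t_hat = np.fft.fft2(traction)

# Forward problem: traction -> substrate surface displacement
u_hat = t_hat / (mu * kmag)
u_hat[0, 0] = 0.0

# Inverse problem (the TFM step): displacement -> traction, per Fourier mode
t_rec = np.fft.ifft2(mu * kmag * u_hat).real
err = np.abs(t_rec - traction).max()
```

With noisy measured displacements the high-|k| amplification would require regularization; the round trip here is exact only because the data are synthetic.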
1301.3095 | Jie Chen | Jie Chen and Devarajan Thirumalai | Helices 2 and 3 are the initiation sites in the PrPc -> PrPsc transition | null | null | 10.1021/bi3005472 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is established that prion protein is the sole causative agent in a number
of diseases in humans and animals. However, the nature of conformational
changes that the normal cellular form PrPC undergoes in the conversion process
to a self-replicating state is still not fully understood. The ordered
C-terminus of PrPC proteins has three helices (H1, H2, and H3). Here, we use
the Statistical Coupling Analysis (SCA) to infer co-variations at various
locations using a family of evolutionarily related sequences, and the response
of mouse and human PrPCs to mechanical force to decipher the initiation sites
for the transition from PrPC to an aggregation-prone PrP* state. The sequence-based
SCA predicts that the clustered residues in non-mammals are localized in the
stable core (near H1) of PrPC whereas in mammalian PrPC they are localized in
the frustrated helices H2 and H3 where most of the pathogenic mutations are
found. Force-extension curves and free energy profiles as a function of
extension of mouse and human PrPC in the absence of the disulfide (SS) bond between
residues Cys179 and Cys214, generated by applying mechanical force to the ends
of the molecule, show a sequence of unfolding events starting first with
rupture of H2 and H3. This is followed by disruption of structure in two
strands. Helix H1, stabilized by three salt-bridges, resists substantial force
before unfolding. Force extension profiles and the dynamics of rupture of
tertiary contacts also show that even in the presence of the SS bond the
instabilities in most of H3 and parts of H2 still determine the propensity to
form the PrP* state. In mouse PrPC with the SS bond there are about ten residues
that retain their order even at high forces.
| [
{
"created": "Mon, 14 Jan 2013 18:56:50 GMT",
"version": "v1"
}
] | 2016-11-04 | [
[
"Chen",
"Jie",
""
],
[
"Thirumalai",
"Devarajan",
""
]
] | It is established that prion protein is the sole causative agent in a number of diseases in humans and animals. However, the nature of conformational changes that the normal cellular form PrPC undergoes in the conversion process to a self-replicating state is still not fully understood. The ordered C-terminus of PrPC proteins has three helices (H1, H2, and H3). Here, we use the Statistical Coupling Analysis (SCA) to infer co-variations at various locations using a family of evolutionarily related sequences, and the response of mouse and human PrPCs to mechanical force to decipher the initiation sites for the transition from PrPC to an aggregation-prone PrP* state. The sequence-based SCA predicts that the clustered residues in non-mammals are localized in the stable core (near H1) of PrPC whereas in mammalian PrPC they are localized in the frustrated helices H2 and H3 where most of the pathogenic mutations are found. Force-extension curves and free energy profiles as a function of extension of mouse and human PrPC in the absence of the disulfide (SS) bond between residues Cys179 and Cys214, generated by applying mechanical force to the ends of the molecule, show a sequence of unfolding events starting first with rupture of H2 and H3. This is followed by disruption of structure in two strands. Helix H1, stabilized by three salt-bridges, resists substantial force before unfolding. Force extension profiles and the dynamics of rupture of tertiary contacts also show that even in the presence of the SS bond the instabilities in most of H3 and parts of H2 still determine the propensity to form the PrP* state. In mouse PrPC with the SS bond there are about ten residues that retain their order even at high forces. |
2309.15831 | Farzad Fatehi | Farzad Fatehi and Reidun Twarock | An interaction network approach predicts protein cage architectures in
bionanotechnology | 22 pages, 13 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein nanoparticles play pivotal roles in many areas of bionanotechnology,
including drug delivery, vaccination and diagnostics. These technologies
require control over the distinct particle morphologies that protein
nanocontainers can adopt during self-assembly from their constituent protein
components. The geometric construction principle of virus-derived protein cages
is by now fairly well understood by analogy to viral protein shells in terms of
Caspar and Klug's quasi-equivalence principle. However, many artificial, or
genetically modified, protein containers violate this principle, because
identical protein subunits do not interact in quasi-equivalent ways, leading to
gaps in the particle surface. We introduce a method that exploits information
on the local interactions between the assembly units, called capsomers, to
infer the geometric construction principle of these nanoparticle architectures.
The predictive power of this approach is demonstrated here for a prominent
system in nanotechnology, the AaLS pentamer. Our method not only rationalises
hitherto discovered cage structures, but also predicts geometrically viable
options that have not yet been observed. The classification of nanoparticle
architecture based on the geometric properties of the interaction network
closes a gap in our current understanding of protein container structure and
can be widely applied in protein nanotechnology, paving the way to programmable
control over particle polymorphism.
| [
{
"created": "Tue, 26 Sep 2023 11:24:22 GMT",
"version": "v1"
}
] | 2023-09-28 | [
[
"Fatehi",
"Farzad",
""
],
[
"Twarock",
"Reidun",
""
]
] | Protein nanoparticles play pivotal roles in many areas of bionanotechnology, including drug delivery, vaccination and diagnostics. These technologies require control over the distinct particle morphologies that protein nanocontainers can adopt during self-assembly from their constituent protein components. The geometric construction principle of virus-derived protein cages is by now fairly well understood by analogy to viral protein shells in terms of Caspar and Klug's quasi-equivalence principle. However, many artificial, or genetically modified, protein containers violate this principle, because identical protein subunits do not interact in quasi-equivalent ways, leading to gaps in the particle surface. We introduce a method that exploits information on the local interactions between the assembly units, called capsomers, to infer the geometric construction principle of these nanoparticle architectures. The predictive power of this approach is demonstrated here for a prominent system in nanotechnology, the AaLS pentamer. Our method not only rationalises hitherto discovered cage structures, but also predicts geometrically viable options that have not yet been observed. The classification of nanoparticle architecture based on the geometric properties of the interaction network closes a gap in our current understanding of protein container structure and can be widely applied in protein nanotechnology, paving the way to programmable control over particle polymorphism. |
1903.07257 | J. F. Rojas | Rub\'en Mart\'inez D, Andrea Montiel P., and J. F. Rojas | Vegetation pattern formation in a sinuous free-scale landscape | 14 pages | null | null | null | q-bio.PE nlin.PS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hardenberg's original model of biomass patterns in arid and semi-arid
regions is revisited and extended to more general, non-flat regions. We
propose a technique to study these generalized (non-flat) regions using both
a conservation criterion and an explicit spatially dependent function
$\nu(x)$. In this paper, a study of the dynamical stability around the
system's fixed points is made. Under the idea of predictability via aerial
images, a fitted relationship among the dynamical variables at the stable
fixed points is established. We also present a discrete version of the
model, in the form of cellular automata techniques, that allows the spatial
scale to be neglected and reproduces realistic stable spatial patterns.
| [
{
"created": "Mon, 18 Mar 2019 05:08:26 GMT",
"version": "v1"
}
] | 2019-03-19 | [
[
"D",
"Rubén Martínez",
""
],
[
"P.",
"Andrea Montiel",
""
],
[
"Rojas",
"J. F.",
""
]
] | Hardenberg's original model of biomass patterns in arid and semi-arid regions is revisited and extended to more general, non-flat regions. We propose a technique to study these generalized (non-flat) regions using both a conservation criterion and an explicit spatially dependent function $\nu(x)$. In this paper, a study of the dynamical stability around the system's fixed points is made. Under the idea of predictability via aerial images, a fitted relationship among the dynamical variables at the stable fixed points is established. We also present a discrete version of the model, in the form of cellular automata techniques, that allows the spatial scale to be neglected and reproduces realistic stable spatial patterns. |
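A cellular-automaton discretization of vegetation dynamics, as described above, replaces continuous biomass fields with a binary grid updated by a local rule. The toy rule below (a cell is vegetated when its neighbourhood density is moderate: enough neighbours for facilitation, not so many that water competition kills it) and its thresholds are illustrative, not the paper's calibrated model:

```python
import numpy as np

rng = np.random.default_rng(4)

n_side, steps = 64, 20
grid = (rng.random((n_side, n_side)) < 0.5).astype(int)

def neighbours(g):
    # count vegetated cells among the 8 neighbours (periodic boundaries)
    s = np.zeros_like(g)
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            if dx or dy:
                s += np.roll(np.roll(g, dx, axis=0), dy, axis=1)
    return s

for _ in range(steps):
    nb = neighbours(grid)
    # facilitation (>= 2 neighbours) versus competition (<= 5 neighbours)
    grid = ((nb >= 2) & (nb <= 5)).astype(int)
```

Because the rule is purely local and dimensionless, the spatial scale drops out, which is the property the abstract highlights for its CA version.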
2011.10575 | Ruby Sedgwick | Ruby Sedgwick, John Goertz, Molly Stevens, Ruth Misener, Mark van der
Wilk | Design of Experiments for Verifying Biomolecular Networks | Comment: Updated to correct typo "that that" => "that" | null | null | null | q-bio.QM cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | There is a growing trend in molecular and synthetic biology of using
mechanistic (non-machine-learning) models to design biomolecular networks. Once
designed, these networks need to be validated by experimental results to ensure
the theoretical network correctly models the true system. However, these
experiments can be expensive and time-consuming. We propose a
design-of-experiments approach for validating these networks efficiently. Gaussian
processes are used to construct a probabilistic model of the discrepancy
between experimental results and the designed response; a Bayesian
optimization strategy is then used to select the next sample points. We compare
different design criteria and develop a stopping criterion based on a metric
that quantifies this discrepancy over the whole surface, and its uncertainty.
We test our strategy on simulated data from computer models of biochemical
processes.
| [
{
"created": "Fri, 20 Nov 2020 13:39:45 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Nov 2020 10:51:24 GMT",
"version": "v2"
}
] | 2020-11-26 | [
[
"Sedgwick",
"Ruby",
""
],
[
"Goertz",
"John",
""
],
[
"Stevens",
"Molly",
""
],
[
"Misener",
"Ruth",
""
],
[
"van der Wilk",
"Mark",
""
]
] | There is a growing trend in molecular and synthetic biology of using mechanistic (non machine learning) models to design biomolecular networks. Once designed, these networks need to be validated by experimental results to ensure the theoretical network correctly models the true system. However, these experiments can be expensive and time consuming. We propose a design of experiments approach for validating these networks efficiently. Gaussian processes are used to construct a probabilistic model of the discrepancy between experimental results and the designed response, then a Bayesian optimization strategy used to select the next sample points. We compare different design criteria and develop a stopping criterion based on a metric that quantifies this discrepancy over the whole surface, and its uncertainty. We test our strategy on simulated data from computer models of biochemical processes. |
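The loop described above (Gaussian-process model of the discrepancy, then pick the next experiment) can be sketched in a few lines. The discrepancy function, kernel lengthscale, and design points are all hypothetical, and the criterion shown is the simplest one (pure exploration at the most-uncertain point); the paper compares several criteria:

```python
import numpy as np

def rbf(a, b, ls=0.2):
    # squared-exponential kernel on 1-D inputs
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ls**2)

f = lambda x: 0.3 * np.sin(6 * x)      # hypothetical discrepancy surface
X = np.array([0.1, 0.5, 0.9])          # experiments already run
y = f(X)
grid = np.linspace(0.0, 1.0, 101)      # candidate next experiments

K = rbf(X, X) + 1e-6 * np.eye(len(X))  # jitter for numerical stability
Ks = rbf(grid, X)

# GP posterior mean and variance over the candidate grid
mean = Ks @ np.linalg.solve(K, y)
var = np.diag(rbf(grid, grid) - Ks @ np.linalg.solve(K, Ks.T))

x_next = grid[np.argmax(var)]          # place the next experiment here
```

A stopping criterion like the paper's would integrate this posterior variance (and the discrepancy magnitude) over the whole surface rather than looking at the maximum alone.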
1901.00945 | Stephane Deny | Jack Lindsey, Samuel A. Ocko, Surya Ganguli, Stephane Deny | A Unified Theory of Early Visual Representations from Retina to Cortex
through Anatomically Constrained Deep CNNs | null | International Conference on Learning Representations, 2019
https://openreview.net/forum?id=S1xq3oR5tQ | null | null | q-bio.NC cs.LG cs.NE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The visual system is hierarchically organized to process visual information
in successive stages. Neural representations vary drastically across the first
stages of visual processing: at the output of the retina, ganglion cell
receptive fields (RFs) exhibit a clear antagonistic center-surround structure,
whereas in the primary visual cortex, typical RFs are sharply tuned to a
precise orientation. There is currently no unified theory explaining these
differences in representations across layers. Here, using a deep convolutional
neural network trained on image recognition as a model of the visual system, we
show that such differences in representation can emerge as a direct consequence
of different neural resource constraints on the retinal and cortical networks,
and we find a single model from which both geometries spontaneously emerge at
the appropriate stages of visual processing. The key constraint is a reduced
number of neurons at the retinal output, consistent with the anatomy of the
optic nerve as a stringent bottleneck. Second, we find that, for simple
cortical networks, visual representations at the retinal output emerge as
nonlinear and lossy feature detectors, whereas they emerge as linear and
faithful encoders of the visual scene for more complex cortices. This result
predicts that the retinas of small vertebrates should perform sophisticated
nonlinear computations, extracting features directly relevant to behavior,
whereas retinas of large animals such as primates should mostly encode the
visual scene linearly and respond to a much broader range of stimuli. These
predictions could reconcile the two seemingly incompatible views of the retina
as either performing feature extraction or efficient coding of natural scenes,
by suggesting that all vertebrates lie on a spectrum between these two
objectives, depending on the degree of neural resources allocated to their
visual system.
| [
{
"created": "Thu, 3 Jan 2019 23:51:38 GMT",
"version": "v1"
}
] | 2019-01-07 | [
[
"Lindsey",
"Jack",
""
],
[
"Ocko",
"Samuel A.",
""
],
[
"Ganguli",
"Surya",
""
],
[
"Deny",
"Stephane",
""
]
] | The visual system is hierarchically organized to process visual information in successive stages. Neural representations vary drastically across the first stages of visual processing: at the output of the retina, ganglion cell receptive fields (RFs) exhibit a clear antagonistic center-surround structure, whereas in the primary visual cortex, typical RFs are sharply tuned to a precise orientation. There is currently no unified theory explaining these differences in representations across layers. Here, using a deep convolutional neural network trained on image recognition as a model of the visual system, we show that such differences in representation can emerge as a direct consequence of different neural resource constraints on the retinal and cortical networks, and we find a single model from which both geometries spontaneously emerge at the appropriate stages of visual processing. The key constraint is a reduced number of neurons at the retinal output, consistent with the anatomy of the optic nerve as a stringent bottleneck. Second, we find that, for simple cortical networks, visual representations at the retinal output emerge as nonlinear and lossy feature detectors, whereas they emerge as linear and faithful encoders of the visual scene for more complex cortices. This result predicts that the retinas of small vertebrates should perform sophisticated nonlinear computations, extracting features directly relevant to behavior, whereas retinas of large animals such as primates should mostly encode the visual scene linearly and respond to a much broader range of stimuli. These predictions could reconcile the two seemingly incompatible views of the retina as either performing feature extraction or efficient coding of natural scenes, by suggesting that all vertebrates lie on a spectrum between these two objectives, depending on the degree of neural resources allocated to their visual system. |
2305.15453 | Andreas Maier | Andreas Maier, Michael Hartung, Mark Abovsky, Klaudia Adamowicz, Gary
D. Bader, Sylvie Baier, David B. Blumenthal, Jing Chen, Maria L. Elkjaer,
Carlos Garcia-Hernandez, Mohamed Helmy, Markus Hoffmann, Igor Jurisica, Max
Kotlyar, Olga Lazareva, Hagai Levi, Markus List, Sebastian Lobentanzer,
Joseph Loscalzo, Noel Malod-Dognin, Quirin Manz, Julian Matschinske, Miles
Mee, Mhaned Oubounyt, Alexander R. Pico, Rudolf T. Pillich, Julian M.
Poschenrieder, Dexter Pratt, Nata\v{s}a Pr\v{z}ulj, Sepideh Sadegh, Julio
Saez-Rodriguez, Suryadipto Sarkar, Gideon Shaked, Ron Shamir, Nico Trummer,
Ugur Turhan, Ruisheng Wang, Olga Zolotareva, Jan Baumbach | Drugst.One -- A plug-and-play solution for online systems medicine and
network-based drug repurposing | 45 pages, 6 figures, 7 tables | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | In recent decades, the development of new drugs has become increasingly
expensive and inefficient, and the molecular mechanisms of most pharmaceuticals
remain poorly understood. In response, computational systems and network
medicine tools have emerged to identify potential drug repurposing candidates.
However, these tools often require complex installation and lack intuitive
visual network mining capabilities. To tackle these challenges, we introduce
Drugst.One, a platform that assists specialized computational medicine tools in
becoming user-friendly, web-based utilities for drug repurposing. With just
three lines of code, Drugst.One turns any systems biology software into an
interactive web tool for modeling and analyzing complex protein-drug-disease
networks. Demonstrating its broad adaptability, Drugst.One has been
successfully integrated with 21 computational systems medicine tools. Available
at https://drugst.one, Drugst.One has significant potential for streamlining
the drug discovery process, allowing researchers to focus on essential aspects
of pharmaceutical treatment research.
| [
{
"created": "Wed, 24 May 2023 16:30:36 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Jul 2023 17:40:11 GMT",
"version": "v2"
}
] | 2023-07-06 | [
[
"Maier",
"Andreas",
""
],
[
"Hartung",
"Michael",
""
],
[
"Abovsky",
"Mark",
""
],
[
"Adamowicz",
"Klaudia",
""
],
[
"Bader",
"Gary D.",
""
],
[
"Baier",
"Sylvie",
""
],
[
"Blumenthal",
"David B.",
""
],
[
"Chen",
"Jing",
""
],
[
"Elkjaer",
"Maria L.",
""
],
[
"Garcia-Hernandez",
"Carlos",
""
],
[
"Helmy",
"Mohamed",
""
],
[
"Hoffmann",
"Markus",
""
],
[
"Jurisica",
"Igor",
""
],
[
"Kotlyar",
"Max",
""
],
[
"Lazareva",
"Olga",
""
],
[
"Levi",
"Hagai",
""
],
[
"List",
"Markus",
""
],
[
"Lobentanzer",
"Sebastian",
""
],
[
"Loscalzo",
"Joseph",
""
],
[
"Malod-Dognin",
"Noel",
""
],
[
"Manz",
"Quirin",
""
],
[
"Matschinske",
"Julian",
""
],
[
"Mee",
"Miles",
""
],
[
"Oubounyt",
"Mhaned",
""
],
[
"Pico",
"Alexander R.",
""
],
[
"Pillich",
"Rudolf T.",
""
],
[
"Poschenrieder",
"Julian M.",
""
],
[
"Pratt",
"Dexter",
""
],
[
"Pržulj",
"Nataša",
""
],
[
"Sadegh",
"Sepideh",
""
],
[
"Saez-Rodriguez",
"Julio",
""
],
[
"Sarkar",
"Suryadipto",
""
],
[
"Shaked",
"Gideon",
""
],
[
"Shamir",
"Ron",
""
],
[
"Trummer",
"Nico",
""
],
[
"Turhan",
"Ugur",
""
],
[
"Wang",
"Ruisheng",
""
],
[
"Zolotareva",
"Olga",
""
],
[
"Baumbach",
"Jan",
""
]
] | In recent decades, the development of new drugs has become increasingly expensive and inefficient, and the molecular mechanisms of most pharmaceuticals remain poorly understood. In response, computational systems and network medicine tools have emerged to identify potential drug repurposing candidates. However, these tools often require complex installation and lack intuitive visual network mining capabilities. To tackle these challenges, we introduce Drugst.One, a platform that assists specialized computational medicine tools in becoming user-friendly, web-based utilities for drug repurposing. With just three lines of code, Drugst.One turns any systems biology software into an interactive web tool for modeling and analyzing complex protein-drug-disease networks. Demonstrating its broad adaptability, Drugst.One has been successfully integrated with 21 computational systems medicine tools. Available at https://drugst.one, Drugst.One has significant potential for streamlining the drug discovery process, allowing researchers to focus on essential aspects of pharmaceutical treatment research. |
1507.08911 | Frederico Arnoldi PhD | Frederico G. C. Arnoldi, Rodrigo F. Rodrigues, Celio L. Silva | pyBioSig: optimizing group discrimination using genetic algorithms for
biosignature discovery | 11 pages, 3 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In medical sciences, a biomarker is "a characteristic that is objectively
measured and evaluated as an indicator of normal biological processes,
pathogenic processes, or pharmacologic responses to a therapeutic
intervention". Molecular experiments are providing rapid and systematic
approaches to search for biomarkers, but because single-molecule biomarkers
have shown a disappointing lack of robustness for clinical diagnosis,
researchers have begun searching for distinctive sets of molecules, called
"biosignatures". However, the most popular statistics are not appropriate for
their identification, and the number of possible biosignatures to be tested is
frequently intractable. In the present work, we developed a "multivariate
filter" using genetic algorithms (GA) as a feature (gene) selector to optimize
a measure of intra-group cohesion and inter-group dispersion. This method was
implemented using Python and R (pyBioSig, available at
https://github.com/fredgca/pybiosig under LGPL) and can be manipulated via
graphical interface or Python scripts. Using it, we were able to identify
putative biosignatures composed by just a few genes and capable of recovering
multiple groups simultaneously in a hierarchical clustering, even ones that
were not recovered using the whole transcriptome, within a feasible length of
time using a personal computer. Our results allowed us to conclude that using
GA to optimize our new intra-group cohesion and inter-group dispersion measure
is a clear, effective, and computationally feasible strategy for the
identification of putative "omical" biosignatures that could support
discrimination among multiple groups simultaneously.
| [
{
"created": "Thu, 30 Jul 2015 14:23:54 GMT",
"version": "v1"
}
] | 2015-08-03 | [
[
"Arnoldi",
"Frederico G. C.",
""
],
[
"Rodrigues",
"Rodrigo F.",
""
],
[
"Silva",
"Celio L.",
""
]
] | In medical sciences, a biomarker is "a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or pharmacologic responses to a therapeutic intervention". Molecular experiments are providing rapid and systematic approaches to search for biomarkers, but because single-molecule biomarkers have shown a disappointing lack of robustness for clinical diagnosis, researchers have begun searching for distinctive sets of molecules, called "biosignatures". However, the most popular statistics are not appropriate for their identification, and the number of possible biosignatures to be tested is frequently intractable. In the present work, we developed a "multivariate filter" using genetic algorithms (GA) as a feature (gene) selector to optimize a measure of intra-group cohesion and inter-group dispersion. This method was implemented using Python and R (pyBioSig, available at https://github.com/fredgca/pybiosig under LGPL) and can be manipulated via graphical interface or Python scripts. Using it, we were able to identify putative biosignatures composed by just a few genes and capable of recovering multiple groups simultaneously in a hierarchical clustering, even ones that were not recovered using the whole transcriptome, within a feasible length of time using a personal computer. Our results allowed us to conclude that using GA to optimize our new intra-group cohesion and inter-group dispersion measure is a clear, effective, and computationally feasible strategy for the identification of putative "omical" biosignatures that could support discrimination among multiple groups simultaneously. |
2102.01522 | Robin D. Hanson | Robin Hanson, Daniel Martin, Calvin McCarter, Jonathan Paulson | If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare | To appear in Astrophysical Journal | null | 10.3847/1538-4357/ac2369 | null | q-bio.OT physics.pop-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | If life on Earth had to achieve n 'hard steps' to reach humanity's level,
then the chance of this event rose as time to the n-th power. Integrating this
over habitable star formation and planet lifetime distributions predicts >99%
of advanced life appears after today, unless n<3 and max planet duration
<50Gyr. That is, we seem early. We offer this explanation: a deadline is set by
'loud' aliens who are born according to a hard steps power law, expand at a
common rate, change their volumes' appearances, and prevent advanced life like
us from appearing in their volumes. 'Quiet' aliens, in contrast, are much
harder to see. We fit this three-parameter model of loud aliens to data: 1)
birth power from the number of hard steps seen in Earth history, 2) birth
constant by assuming a uniform distribution over our rank among loud alien birth
dates, and 3) expansion speed from our not seeing alien volumes in our sky. We
estimate that loud alien civilizations now control 40-50% of universe volume,
each will later control ~10^5 - 3x10^7 galaxies, and we could meet them in
~200Myr - 2Gyr. If loud aliens arise from quiet ones, a depressingly low
transition chance (~10^-4) is required to expect that even one other quiet
alien civilization has ever been active in our galaxy. Which seems bad news for
SETI. But perhaps alien volume appearances are subtle, and their expansion
speed lower, in which case we predict many long circular arcs to find in our
sky.
| [
{
"created": "Mon, 1 Feb 2021 18:27:12 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Mar 2021 15:28:07 GMT",
"version": "v2"
},
{
"created": "Mon, 6 Sep 2021 14:18:23 GMT",
"version": "v3"
}
] | 2021-12-08 | [
[
"Hanson",
"Robin",
""
],
[
"Martin",
"Daniel",
""
],
[
"McCarter",
"Calvin",
""
],
[
"Paulson",
"Jonathan",
""
]
] | If life on Earth had to achieve n 'hard steps' to reach humanity's level, then the chance of this event rose as time to the n-th power. Integrating this over habitable star formation and planet lifetime distributions predicts >99% of advanced life appears after today, unless n<3 and max planet duration <50Gyr. That is, we seem early. We offer this explanation: a deadline is set by 'loud' aliens who are born according to a hard steps power law, expand at a common rate, change their volumes' appearances, and prevent advanced life like us from appearing in their volumes. 'Quiet' aliens, in contrast, are much harder to see. We fit this three-parameter model of loud aliens to data: 1) birth power from the number of hard steps seen in Earth history, 2) birth constant by assuming a uniform distribution over our rank among loud alien birth dates, and 3) expansion speed from our not seeing alien volumes in our sky. We estimate that loud alien civilizations now control 40-50% of universe volume, each will later control ~10^5 - 3x10^7 galaxies, and we could meet them in ~200Myr - 2Gyr. If loud aliens arise from quiet ones, a depressingly low transition chance (~10^-4) is required to expect that even one other quiet alien civilization has ever been active in our galaxy. Which seems bad news for SETI. But perhaps alien volume appearances are subtle, and their expansion speed lower, in which case we predict many long circular arcs to find in our sky. |
1604.07637 | Pedro Manrique | Pedro D. Manrique, Chen Xu, Pak Ming Hui and Neil F. Johnson | Atypical viral dynamics from transport through popular places | 14 pages, 16 figures | null | 10.1103/PhysRevE.94.022304 | null | q-bio.PE cond-mat.stat-mech physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The flux of visitors through popular places undoubtedly influences viral
spreading -- from H1N1 and Zika viruses spreading through physical spaces such
as airports, to rumors and ideas spreading through online spaces such as
chatrooms and social media. However, there is a lack of understanding of the
types of viral dynamics that can result. Here we present a minimal dynamical
model which focuses on the time-dependent interplay between the {\em mobility
through} and the {\em occupancy of} such spaces. Our generic model permits
analytic analysis while producing a rich diversity of infection profiles in
terms of their shapes, durations, and intensities. The general features of
these theoretical profiles compare well to real-world data of recent social
contagion phenomena.
| [
{
"created": "Tue, 26 Apr 2016 12:01:30 GMT",
"version": "v1"
},
{
"created": "Sat, 6 Aug 2016 12:47:24 GMT",
"version": "v2"
}
] | 2017-06-29 | [
[
"Manrique",
"Pedro D.",
""
],
[
"Xu",
"Chen",
""
],
[
"Hui",
"Pak Ming",
""
],
[
"Johnson",
"Neil F.",
""
]
] | The flux of visitors through popular places undoubtedly influences viral spreading -- from H1N1 and Zika viruses spreading through physical spaces such as airports, to rumors and ideas spreading through online spaces such as chatrooms and social media. However, there is a lack of understanding of the types of viral dynamics that can result. Here we present a minimal dynamical model which focuses on the time-dependent interplay between the {\em mobility through} and the {\em occupancy of} such spaces. Our generic model permits analytic analysis while producing a rich diversity of infection profiles in terms of their shapes, durations, and intensities. The general features of these theoretical profiles compare well to real-world data of recent social contagion phenomena. |
1009.2294 | Subhadip Raychaudhuri | Subhadip Raychaudhuri | A Minimal Model of Signaling Network Elucidates Cell-to-Cell Stochastic
Variability in Apoptosis | 9 pages, 6 figures | PLoS ONE 5(8): e11930 (2010) | 10.1371/journal.pone.0011930 | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Signaling networks are designed to sense an environmental stimulus and adapt
to it. We propose and study a minimal model of signaling network that can sense
and respond to external stimuli of varying strength in an adaptive manner. The
structure of this minimal network is derived based on some simple assumptions
on its differential response to external stimuli. We employ stochastic
differential equations and probability distributions obtained from stochastic
simulations to characterize differential signaling response in our minimal
network model. We show that the proposed minimal signaling network displays two
distinct types of response as the strength of the stimulus is decreased. The
signaling network has a deterministic part that undergoes rapid activation by a
strong stimulus in which case cell-to-cell fluctuations can be ignored. As the
strength of the stimulus decreases, the stochastic part of the network begins
dominating the signaling response where slow activation is observed with
characteristic large cell-to-cell stochastic variability. Interestingly, this
proposed stochastic signaling network can capture some of the essential
signaling behaviors of a complex apoptotic cell death signaling network that
has been studied through experiments and large-scale computer simulations. Thus
we claim that the proposed signaling network is an appropriate minimal model of
apoptosis signaling. Elucidating the fundamental design principles of complex
cellular signaling pathways such as apoptosis signaling remains a challenging
task. We demonstrate how our proposed minimal model can help elucidate the
effect of a specific apoptotic inhibitor Bcl-2 on apoptotic signaling in a
cell-type independent manner. We also discuss the implications of our study in
elucidating the adaptive strategy of cell death signaling pathways.
| [
{
"created": "Mon, 13 Sep 2010 04:27:11 GMT",
"version": "v1"
}
] | 2010-09-14 | [
[
"Raychaudhuri",
"Subhadip",
""
]
] | Signaling networks are designed to sense an environmental stimulus and adapt to it. We propose and study a minimal model of signaling network that can sense and respond to external stimuli of varying strength in an adaptive manner. The structure of this minimal network is derived based on some simple assumptions on its differential response to external stimuli. We employ stochastic differential equations and probability distributions obtained from stochastic simulations to characterize differential signaling response in our minimal network model. We show that the proposed minimal signaling network displays two distinct types of response as the strength of the stimulus is decreased. The signaling network has a deterministic part that undergoes rapid activation by a strong stimulus in which case cell-to-cell fluctuations can be ignored. As the strength of the stimulus decreases, the stochastic part of the network begins dominating the signaling response where slow activation is observed with characteristic large cell-to-cell stochastic variability. Interestingly, this proposed stochastic signaling network can capture some of the essential signaling behaviors of a complex apoptotic cell death signaling network that has been studied through experiments and large-scale computer simulations. Thus we claim that the proposed signaling network is an appropriate minimal model of apoptosis signaling. Elucidating the fundamental design principles of complex cellular signaling pathways such as apoptosis signaling remains a challenging task. We demonstrate how our proposed minimal model can help elucidate the effect of a specific apoptotic inhibitor Bcl-2 on apoptotic signaling in a cell-type independent manner. We also discuss the implications of our study in elucidating the adaptive strategy of cell death signaling pathways. |
2107.10115 | Giacomo Cacciapaglia | Adele de Hoffer, Shahram Vatani, Corentin Cot, Giacomo Cacciapaglia,
Maria Luisa Chiusano, Andrea Cimarelli, Francesco Conventi, Antonio Giannini,
Stefan Hohenegger and Francesco Sannino | Variant-driven multi-wave pattern of COVID-19 via a Machine Learning
analysis of spike protein mutations | 16 pages, 6 figures, supplementary material in a separate file.
Analysis extended with early warning performance and spike protein
diversification | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Applying an ML approach to the temporal variability of the Spike protein
sequence enables us to identify, classify and track emerging virus variants.
Our analysis is unbiased, in the sense that it does not require any prior
knowledge of the variant characteristics, and our results are validated by
other informed methods that define variants based on the complete genome.
Furthermore, correlating persistent variants of our approach to epidemiological
data, we discover that each new wave of the COVID-19 pandemic is driven and
dominated by a new emerging variant. Our results are therefore indispensable
for further studies on the evolution of SARS-CoV-2 and the prediction of
evolutionary patterns that determine current and future mutations of the Spike
proteins, as well as their diversification and persistence during the viral
spread. Moreover, our ML algorithm works as an efficient early warning system
for the emergence of new persistent variants that may pose a threat of
triggering a new wave of COVID-19. Capable of a timely identification of
potential new epidemiological threats when the variant only represents 1% of
the new sequences, our ML strategy is a crucial tool for decision makers to
define short and long term strategies to curb future outbreaks. The same
methodology can be applied to other viral diseases, influenza included, if
sufficient sequencing data is available.
| [
{
"created": "Wed, 21 Jul 2021 14:42:46 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Oct 2021 07:58:37 GMT",
"version": "v2"
}
] | 2021-10-25 | [
[
"de Hoffer",
"Adele",
""
],
[
"Vatani",
"Shahram",
""
],
[
"Cot",
"Corentin",
""
],
[
"Cacciapaglia",
"Giacomo",
""
],
[
"Chiusano",
"Maria Luisa",
""
],
[
"Cimarelli",
"Andrea",
""
],
[
"Conventi",
"Francesco",
""
],
[
"Giannini",
"Antonio",
""
],
[
"Hohenegger",
"Stefan",
""
],
[
"Sannino",
"Francesco",
""
]
] | Applying an ML approach to the temporal variability of the Spike protein sequence enables us to identify, classify and track emerging virus variants. Our analysis is unbiased, in the sense that it does not require any prior knowledge of the variant characteristics, and our results are validated by other informed methods that define variants based on the complete genome. Furthermore, correlating persistent variants of our approach to epidemiological data, we discover that each new wave of the COVID-19 pandemic is driven and dominated by a new emerging variant. Our results are therefore indispensable for further studies on the evolution of SARS-CoV-2 and the prediction of evolutionary patterns that determine current and future mutations of the Spike proteins, as well as their diversification and persistence during the viral spread. Moreover, our ML algorithm works as an efficient early warning system for the emergence of new persistent variants that may pose a threat of triggering a new wave of COVID-19. Capable of a timely identification of potential new epidemiological threats when the variant only represents 1% of the new sequences, our ML strategy is a crucial tool for decision makers to define short and long term strategies to curb future outbreaks. The same methodology can be applied to other viral diseases, influenza included, if sufficient sequencing data is available. |
1004.1234 | Max Little | Max A. Little, Bradley C. Steel, Fan Bai, Yoshiyuki Sowa, Thomas
Bilyard, David M. Mueller, Richard M. Berry, Nick S. Jones | Steps and bumps: precision extraction of discrete states of molecular
machines using physically-based, high-throughput time series analysis | null | null | 10.1016/j.bpj.2011.05.070 | null | q-bio.QM physics.data-an stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We report new statistical time-series analysis tools providing significant
improvements in the rapid, precision extraction of discrete state dynamics from
large databases of experimental observations of molecular machines. By building
physical knowledge and statistical innovations into analysis tools, we
demonstrate new techniques for recovering discrete state transitions buried in
highly correlated molecular noise. We demonstrate the effectiveness of our
approach on simulated and real examples of step-like rotation of the bacterial
flagellar motor and the F1-ATPase enzyme. We show that our method can clearly
identify molecular steps, symmetries and cascaded processes that are too weak
for existing algorithms to detect, and can do so much faster than existing
algorithms. Our techniques represent a major advance in the drive towards
automated, precision, high-throughput studies of molecular machine dynamics.
Modular, open-source software that implements these techniques is provided at
http://www.eng.ox.ac.uk/samp/members/max/software/
| [
{
"created": "Thu, 8 Apr 2010 03:28:15 GMT",
"version": "v1"
}
] | 2019-10-23 | [
[
"Little",
"Max A.",
""
],
[
"Steel",
"Bradley C.",
""
],
[
"Bai",
"Fan",
""
],
[
"Sowa",
"Yoshiyuki",
""
],
[
"Bilyard",
"Thomas",
""
],
[
"Mueller",
"David M.",
""
],
[
"Berry",
"Richard M.",
""
],
[
"Jones",
"Nick S.",
""
]
] | We report new statistical time-series analysis tools providing significant improvements in the rapid, precision extraction of discrete state dynamics from large databases of experimental observations of molecular machines. By building physical knowledge and statistical innovations into analysis tools, we demonstrate new techniques for recovering discrete state transitions buried in highly correlated molecular noise. We demonstrate the effectiveness of our approach on simulated and real examples of step-like rotation of the bacterial flagellar motor and the F1-ATPase enzyme. We show that our method can clearly identify molecular steps, symmetries and cascaded processes that are too weak for existing algorithms to detect, and can do so much faster than existing algorithms. Our techniques represent a major advance in the drive towards automated, precision, high-throughput studies of molecular machine dynamics. Modular, open-source software that implements these techniques is provided at http://www.eng.ox.ac.uk/samp/members/max/software/ |
1006.3171 | Noa Sela | Noa Sela, Eddo Kim, Gil Ast | The role of transposable elements in the evolution of non-mammalian
vertebrates and invertebrates | null | Genome Biology 2010, 11:R59 | 10.1186/gb-2010-11-6-r59 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Transposable elements (TEs) have played an important role in the
diversification and enrichment of mammalian transcriptomes through various
mechanisms such as exonization and intronization (the birth of new
exons/introns from previously intronic/exonic sequences, respectively), and
insertion into first and last exons. However, no extensive analysis has
compared the effects of TEs on the transcriptomes of mammalian, non-mammalian
vertebrates and invertebrates. Results: We analyzed the influence of TEs on the
transcriptomes of five species, three invertebrates and two non-mammalian
vertebrates. Compared to previously analyzed mammals, there were lower levels
of TE introduction into introns, significantly lower numbers of exonizations
originating from TEs and a lower percentage of TE insertion within the first
and last exons. Although the transcriptomes of vertebrates exhibit a
significant level of exonizations of TEs, only anecdotal cases were found in
invertebrates. In vertebrates, as in mammals, the exonized TEs are mostly
alternatively spliced, indicating selective pressure maintains the original
mRNA product generated from such genes. Conclusions: Exonization of TEs is
wide-spread in mammals, less so in non- mammalian vertebrates, and very low in
invertebrates. We assume that the exonization process depends on the length of
introns. Vertebrates, unlike invertebrates, are characterized by long introns
and short internal exons. Our results suggest that there is a direct link
between the length of introns and exonization of TEs and that this process
became more prevalent following the appearance of mammals.
| [
{
"created": "Wed, 16 Jun 2010 09:51:19 GMT",
"version": "v1"
}
] | 2010-06-17 | [
[
"Sela",
"Noa",
""
],
[
"Kim",
"Eddo",
""
],
[
"Ast",
"Gil",
""
]
] | Background: Transposable elements (TEs) have played an important role in the diversification and enrichment of mammalian transcriptomes through various mechanisms such as exonization and intronization (the birth of new exons/introns from previously intronic/exonic sequences, respectively), and insertion into first and last exons. However, no extensive analysis has compared the effects of TEs on the transcriptomes of mammalian, non-mammalian vertebrates and invertebrates. Results: We analyzed the influence of TEs on the transcriptomes of five species, three invertebrates and two non-mammalian vertebrates. Compared to previously analyzed mammals, there were lower levels of TE introduction into introns, significantly lower numbers of exonizations originating from TEs and a lower percentage of TE insertion within the first and last exons. Although the transcriptomes of vertebrates exhibit a significant level of exonizations of TEs, only anecdotal cases were found in invertebrates. In vertebrates, as in mammals, the exonized TEs are mostly alternatively spliced, indicating selective pressure maintains the original mRNA product generated from such genes. Conclusions: Exonization of TEs is widespread in mammals, less so in non-mammalian vertebrates, and very low in invertebrates. We assume that the exonization process depends on the length of introns. Vertebrates, unlike invertebrates, are characterized by long introns and short internal exons. Our results suggest that there is a direct link between the length of introns and exonization of TEs and that this process became more prevalent following the appearance of mammals. |
1810.07743 | Kahini Wadhawan | Payel Das, Kahini Wadhawan, Oscar Chang, Tom Sercu, Cicero Dos Santos,
Matthew Riemer, Vijil Chenthamarakshan, Inkit Padhi, Aleksandra Mojsilovic | PepCVAE: Semi-Supervised Targeted Design of Antimicrobial Peptide
Sequences | null | null | null | null | q-bio.QM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given the emerging global threat of antimicrobial resistance, new methods for
next-generation antimicrobial design are urgently needed. We report a peptide
generation framework PepCVAE, based on a semi-supervised variational
autoencoder (VAE) model, for designing novel antimicrobial peptide (AMP)
sequences. Our model learns a rich latent space of the biological peptide
context by taking advantage of abundant, unlabeled peptide sequences. The model
further learns a disentangled antimicrobial attribute space by using the
feedback from a jointly trained AMP classifier that uses limited labeled
instances. The disentangled representation allows for controllable generation
of AMPs. Extensive analysis of the PepCVAE-generated sequences reveals superior
performance of our model in comparison to a plain VAE, as PepCVAE generates
novel AMP sequences with higher long-range diversity, while being closer to the
training distribution of biological peptides. These features are highly desired
in next-generation antimicrobial design.
| [
{
"created": "Wed, 17 Oct 2018 19:19:36 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Oct 2018 18:50:04 GMT",
"version": "v2"
},
{
"created": "Tue, 13 Nov 2018 16:24:09 GMT",
"version": "v3"
}
] | 2018-11-14 | [
[
"Das",
"Payel",
""
],
[
"Wadhawan",
"Kahini",
""
],
[
"Chang",
"Oscar",
""
],
[
"Sercu",
"Tom",
""
],
[
"Santos",
"Cicero Dos",
""
],
[
"Riemer",
"Matthew",
""
],
[
"Chenthamarakshan",
"Vijil",
""
],
[
"Padhi",
"Inkit",
""
],
[
"Mojsilovic",
"Aleksandra",
""
]
] | Given the emerging global threat of antimicrobial resistance, new methods for next-generation antimicrobial design are urgently needed. We report a peptide generation framework PepCVAE, based on a semi-supervised variational autoencoder (VAE) model, for designing novel antimicrobial peptide (AMP) sequences. Our model learns a rich latent space of the biological peptide context by taking advantage of abundant, unlabeled peptide sequences. The model further learns a disentangled antimicrobial attribute space by using the feedback from a jointly trained AMP classifier that uses limited labeled instances. The disentangled representation allows for controllable generation of AMPs. Extensive analysis of the PepCVAE-generated sequences reveals superior performance of our model in comparison to a plain VAE, as PepCVAE generates novel AMP sequences with higher long-range diversity, while being closer to the training distribution of biological peptides. These features are highly desired in next-generation antimicrobial design. |
2009.00058 | Alexander Moffett | Alexander S. Moffett, Nigel Wallbridge, Carrol Plummer, Andrew W.
Eckford | The Fitness Value of Information with Delayed Phenotype Switching:
Optimal Performance with Imperfect Sensing | Accepted for publication in Physical Review E | Phys. Rev. E 102, 052403 (2020) | 10.1103/PhysRevE.102.052403 | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The ability of organisms to accurately sense their environment and respond
accordingly is critical for evolutionary success. However, exactly how the
sensory ability influences fitness is a topic of active research, while the
necessity of a time delay between when unreliable environmental cues are sensed
and when organisms can mount a response has yet to be explored at any length.
Accounting for this delay in phenotype response in models of population growth,
we find that a critical error probability can exist under certain environmental
conditions: an organism with a sensory system with any error probability less
than the critical value can achieve the same long-term growth rate as an
organism with a perfect sensing system. We also observe a trade off between the
evolutionary value of sensory information and robustness to error, mediated by
the rate at which the phenotype distribution relaxes to steady-state. The
existence of the critical error probability could have several important
evolutionary consequences, primarily that sensory systems operating at the
non-zero critical error probability may be evolutionarily optimal.
| [
{
"created": "Mon, 31 Aug 2020 18:52:59 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Oct 2020 13:59:20 GMT",
"version": "v2"
}
] | 2020-11-11 | [
[
"Moffett",
"Alexander S.",
""
],
[
"Wallbridge",
"Nigel",
""
],
[
"Plummer",
"Carrol",
""
],
[
"Eckford",
"Andrew W.",
""
]
] | The ability of organisms to accurately sense their environment and respond accordingly is critical for evolutionary success. However, exactly how the sensory ability influences fitness is a topic of active research, while the necessity of a time delay between when unreliable environmental cues are sensed and when organisms can mount a response has yet to be explored at any length. Accounting for this delay in phenotype response in models of population growth, we find that a critical error probability can exist under certain environmental conditions: an organism with a sensory system with any error probability less than the critical value can achieve the same long-term growth rate as an organism with a perfect sensing system. We also observe a trade off between the evolutionary value of sensory information and robustness to error, mediated by the rate at which the phenotype distribution relaxes to steady-state. The existence of the critical error probability could have several important evolutionary consequences, primarily that sensory systems operating at the non-zero critical error probability may be evolutionarily optimal. |
1503.03350 | Ronald Fox | Ronald F. Fox | Contributions to the Theory of Thermostated Systems II: Least
Dissipation of Helmholtz Free Energy in Nano-Biology | 18 pages, follow-up to arXiv:1409.5712 | null | null | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we develop further the theory of thermostated systems along
the lines of our earlier paper. Two results are highlighted: 1) in the Markov
limit of the contracted description, a least dissipation of Helmholtz free
energy principle is established; and 2) a detailed account of the
appropriateness of this principle for nano-biology, including the evolution of
life, is presented.
| [
{
"created": "Wed, 11 Mar 2015 14:30:31 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Mar 2015 17:23:08 GMT",
"version": "v2"
}
] | 2015-03-13 | [
[
"Fox",
"Ronald F.",
""
]
] | In this paper, we develop further the theory of thermostated systems along the lines of our earlier paper. Two results are highlighted: 1) in the Markov limit of the contracted description, a least dissipation of Helmholtz free energy principle is established; and 2) a detailed account of the appropriateness of this principle for nano-biology, including the evolution of life, is presented. |
1710.04687 | Susan Cheng | Mir Henglin, Teemu Niiranen, Jeramie D. Watrous, Kim A. Lehmann,
Joseph Antonelli, Brian L. Claggett, Emmanuella J. Demosthenes, Beatrice von
Jeinsen, Olga Demler, Ramachandran S. Vasan, Martin G. Larson, Mohit Jain,
Susan Cheng | A Single Visualization Technique for Displaying Multiple
Metabolite-Phenotype Associations | null | null | null | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | More advanced visualization tools are needed to assist with the analyses and
interpretation of human metabolomics data, which are rapidly increasing in
quantity and complexity. Using a dataset of several hundred bioactive lipid
metabolites profiled in a cohort of over 1400 individuals sampled from a
population-based community study, we performed a comprehensive set of
association analyses relating all metabolites with eight demographic and
cardiometabolic traits and outcomes. We then compared existing graphical
approaches with an adapted rain plot approach to display the results of these
analyses. The rain plot combines the features of a raindrop plot and a parallel
heatmap approach to succinctly convey, in a single visualization, the results
of relating complex metabolomics data with multiple phenotypes. This approach
complements existing tools, particularly by facilitating comparisons between
individual metabolites and across a range of pre-specified clinical outcomes.
We anticipate that this single visualization technique may be further extended
and applied to alternate study designs using different types of molecular
phenotyping data.
| [
{
"created": "Thu, 12 Oct 2017 18:54:18 GMT",
"version": "v1"
}
] | 2017-10-16 | [
[
"Henglin",
"Mir",
""
],
[
"Niiranen",
"Teemu",
""
],
[
"Watrous",
"Jeramie D.",
""
],
[
"Lehmann",
"Kim A.",
""
],
[
"Antonelli",
"Joseph",
""
],
[
"Claggett",
"Brian L.",
""
],
[
"Demosthenes",
"Emmanuella J.",
""
],
[
"von Jeinsen",
"Beatrice",
""
],
[
"Demler",
"Olga",
""
],
[
"Vasan",
"Ramachandran S.",
""
],
[
"Larson",
"Martin G.",
""
],
[
"Jain",
"Mohit",
""
],
[
"Cheng",
"Susan",
""
]
] | More advanced visualization tools are needed to assist with the analyses and interpretation of human metabolomics data, which are rapidly increasing in quantity and complexity. Using a dataset of several hundred bioactive lipid metabolites profiled in a cohort of over 1400 individuals sampled from a population-based community study, we performed a comprehensive set of association analyses relating all metabolites with eight demographic and cardiometabolic traits and outcomes. We then compared existing graphical approaches with an adapted rain plot approach to display the results of these analyses. The rain plot combines the features of a raindrop plot and a parallel heatmap approach to succinctly convey, in a single visualization, the results of relating complex metabolomics data with multiple phenotypes. This approach complements existing tools, particularly by facilitating comparisons between individual metabolites and across a range of pre-specified clinical outcomes. We anticipate that this single visualization technique may be further extended and applied to alternate study designs using different types of molecular phenotyping data. |
1405.2548 | Sergei Maslov | Purushottam Dixit, Tin Yau Pang, F. William Studier, and Sergei Maslov | Quantifying evolutionary dynamics of the basic genome of E. coli | 21 pages, 5 figures, 1 table | null | null | null | q-bio.PE physics.bio-ph q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ~4-Mbp basic genome shared by 32 independent isolates of E. coli
representing considerable population diversity has been approximated by
whole-genome multiple-alignment and computational filtering designed to remove
mobile elements and highly variable regions. Single nucleotide polymorphisms
(SNPs) in the 496 basic-genome pairs are identified and clonally inherited
stretches are distinguished from those acquired by horizontal transfer (HT) by
sharp discontinuities in SNP density. The six least diverged genome-pairs each
have only one or two HT stretches, each occupying 42-115-kbp of basic genome
and containing at least one gene cluster known to confer selective advantage.
At higher divergences, the typical mosaic pattern of interspersed clonal and HT
stretches across the entire basic genome are observed, including likely
fragmented integrations across a restriction barrier. A simple model suggests
that individual HT events are of the order of 10-kbp and are the chief
contributor to genome divergence, bringing in almost 12 times more SNPs than
point mutations. As a result of continuing horizontal transfer of such large
segments, 400 out of the 496 strain-pairs beyond genomic divergence of share
virtually no genomic material with their common ancestor. We conclude that the
active and continuing horizontal transfer of moderately large genomic fragments
is likely to be mediated primarily by a co-evolving population of phages that
distribute random genome fragments throughout the population by generalized
transduction, allowing efficient adaptation to environmental changes.
| [
{
"created": "Sun, 11 May 2014 16:43:44 GMT",
"version": "v1"
}
] | 2014-05-13 | [
[
"Dixit",
"Purushottam",
""
],
[
"Pang",
"Tin Yau",
""
],
[
"Studier",
"F. William",
""
],
[
"Maslov",
"Sergei",
""
]
] | The ~4-Mbp basic genome shared by 32 independent isolates of E. coli representing considerable population diversity has been approximated by whole-genome multiple-alignment and computational filtering designed to remove mobile elements and highly variable regions. Single nucleotide polymorphisms (SNPs) in the 496 basic-genome pairs are identified and clonally inherited stretches are distinguished from those acquired by horizontal transfer (HT) by sharp discontinuities in SNP density. The six least diverged genome-pairs each have only one or two HT stretches, each occupying 42-115-kbp of basic genome and containing at least one gene cluster known to confer selective advantage. At higher divergences, the typical mosaic pattern of interspersed clonal and HT stretches across the entire basic genome are observed, including likely fragmented integrations across a restriction barrier. A simple model suggests that individual HT events are of the order of 10-kbp and are the chief contributor to genome divergence, bringing in almost 12 times more SNPs than point mutations. As a result of continuing horizontal transfer of such large segments, 400 out of the 496 strain-pairs beyond genomic divergence of share virtually no genomic material with their common ancestor. We conclude that the active and continuing horizontal transfer of moderately large genomic fragments is likely to be mediated primarily by a co-evolving population of phages that distribute random genome fragments throughout the population by generalized transduction, allowing efficient adaptation to environmental changes. |
1710.06686 | Andrea De Martino | Andrea Crisanti, Andrea De Martino, Jonathan Fiorentino | Statistics of optimal information flow in ensembles of regulatory motifs | 14 pages, 6 figures | Phys. Rev. E 97, 022407 (2018) | 10.1103/PhysRevE.97.022407 | null | q-bio.MN cond-mat.dis-nn cond-mat.stat-mech q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetic regulatory circuits universally cope with different sources of noise
that limit their ability to coordinate input and output signals. In many cases,
optimal regulatory performance can be thought to correspond to configurations
of variables and parameters that maximize the mutual information between inputs
and outputs. Such optima have been well characterized in several biologically
relevant cases over the past decade. Here we use methods of statistical field
theory to calculate the statistics of the maximal mutual information (the
`capacity') achievable by tuning the input variable only in an ensemble of
regulatory motifs, such that a single controller regulates N targets. Assuming
(i) sufficiently large N, (ii) quenched random kinetic parameters, and (iii)
small noise affecting the input-output channels, we can accurately reproduce
numerical simulations both for the mean capacity and for the whole
distribution. Our results provide insight into the inherent variability in
effectiveness occurring in regulatory systems with heterogeneous kinetic
parameters.
| [
{
"created": "Wed, 18 Oct 2017 11:44:27 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Feb 2018 15:20:16 GMT",
"version": "v2"
}
] | 2018-02-19 | [
[
"Crisanti",
"Andrea",
""
],
[
"De Martino",
"Andrea",
""
],
[
"Fiorentino",
"Jonathan",
""
]
] | Genetic regulatory circuits universally cope with different sources of noise that limit their ability to coordinate input and output signals. In many cases, optimal regulatory performance can be thought to correspond to configurations of variables and parameters that maximize the mutual information between inputs and outputs. Such optima have been well characterized in several biologically relevant cases over the past decade. Here we use methods of statistical field theory to calculate the statistics of the maximal mutual information (the `capacity') achievable by tuning the input variable only in an ensemble of regulatory motifs, such that a single controller regulates N targets. Assuming (i) sufficiently large N, (ii) quenched random kinetic parameters, and (iii) small noise affecting the input-output channels, we can accurately reproduce numerical simulations both for the mean capacity and for the whole distribution. Our results provide insight into the inherent variability in effectiveness occurring in regulatory systems with heterogeneous kinetic parameters. |
0710.3420 | Babak Pourbohloul | Pierre-Andre Noel, Bahman Davoudi, Louis J. Dube, Robert C. Brunham,
Babak Pourbohloul | Time Evolution of Disease Spread on Networks with Degree Heterogeneity | 20 pages, 6 figures | null | 10.1103/PhysRevE.79.026101 | null | q-bio.PE | null | Two crucial elements facilitate the understanding and control of communicable
disease spread within a social setting. These components are, the underlying
contact structure among individuals that determines the pattern of disease
transmission; and the evolution of this pattern over time. Mathematical models
of infectious diseases, which are in principle analytically tractable, use two
general approaches to incorporate these elements. The first approach, generally
known as compartmental modeling, addresses the time evolution of disease spread
at the expense of simplifying the pattern of transmission. On the other hand,
the second approach uses network theory to incorporate detailed information
pertaining to the underlying contact structure among individuals. However,
while providing accurate estimates on the final size of outbreaks/epidemics,
this approach, in its current formalism, disregards the progression of time
during outbreaks. So far, the only alternative that enables the integration of
both aspects of disease spread simultaneously has been to abandon the
analytical approach and rely on computer simulations. We offer a new analytical
framework based on percolation theory, which incorporates both the complexity
of contact network structure and the time progression of disease spread.
Furthermore, we demonstrate that this framework is equally effective on finite-
and "infinite"-size networks. Application of this formalism is not limited to
disease spread; it can be equally applied to similar percolation phenomena on
networks in other areas in science and technology.
| [
{
"created": "Thu, 18 Oct 2007 00:13:35 GMT",
"version": "v1"
}
] | 2013-05-29 | [
[
"Noel",
"Pierre-Andre",
""
],
[
"Davoudi",
"Bahman",
""
],
[
"Dube",
"Louis J.",
""
],
[
"Brunham",
"Robert C.",
""
],
[
"Pourbohloul",
"Babak",
""
]
] | Two crucial elements facilitate the understanding and control of communicable disease spread within a social setting. These components are, the underlying contact structure among individuals that determines the pattern of disease transmission; and the evolution of this pattern over time. Mathematical models of infectious diseases, which are in principle analytically tractable, use two general approaches to incorporate these elements. The first approach, generally known as compartmental modeling, addresses the time evolution of disease spread at the expense of simplifying the pattern of transmission. On the other hand, the second approach uses network theory to incorporate detailed information pertaining to the underlying contact structure among individuals. However, while providing accurate estimates on the final size of outbreaks/epidemics, this approach, in its current formalism, disregards the progression of time during outbreaks. So far, the only alternative that enables the integration of both aspects of disease spread simultaneously has been to abandon the analytical approach and rely on computer simulations. We offer a new analytical framework based on percolation theory, which incorporates both the complexity of contact network structure and the time progression of disease spread. Furthermore, we demonstrate that this framework is equally effective on finite- and "infinite"-size networks. Application of this formalism is not limited to disease spread; it can be equally applied to similar percolation phenomena on networks in other areas in science and technology. |
1002.2506 | Michel Yamagishi | Michel E. Beleza Yamagishi | BAK1 Gene Variation: the doubts remain | 6 pages | null | null | null | q-bio.GN q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dr. Hatchwell [2010] has proposed that the BAK1 gene variants were likely due
to sequencing of a processed gene on chromosome 20. However, in response, Dr.
Gottlieb and co-authors [2010] have argued that "some but not all of the
sequence changes present in the BAK1 sequence of our abdominal aorta samples
are also present in the chromosome 20 BAK1 sequence. However, all the AAA and
AA cDNA samples are identical to each other and different from chromosome 20
BAK1 sequence at amino acids 2 and 145". I have been following this discussion
because I have independently reached almost the same conclusion as Dr.
Hatchwell did [Yamagishi, 2009], and, unfortunately, the response from Dr.
Gottlieb and his co-authors seems to me to be unsatisfactory for the reasons
listed below
| [
{
"created": "Fri, 12 Feb 2010 09:41:26 GMT",
"version": "v1"
}
] | 2010-02-15 | [
[
"Yamagishi",
"Michel E. Beleza",
""
]
] | Dr. Hatchwell [2010] has proposed that the BAK1 gene variants were likely due to sequencing of a processed gene on chromosome 20. However, in response, Dr. Gottlieb and co-authors [2010] have argued that "some but not all of the sequence changes present in the BAK1 sequence of our abdominal aorta samples are also present in the chromosome 20 BAK1 sequence. However, all the AAA and AA cDNA samples are identical to each other and different from chromosome 20 BAK1 sequence at amino acids 2 and 145". I have been following this discussion because I have independently reached almost the same conclusion as Dr. Hatchwell did [Yamagishi, 2009], and, unfortunately, the response from Dr. Gottlieb and his co-authors seems to me to be unsatisfactory for the reasons listed below |
1003.5380 | Kavita Jain | Kavita Jain, Joachim Krug and Su-Chan Park | Evolutionary advantage of small populations on complex fitness
landscapes | Version to appear in Evolution | Evolution 65-7, 1945-1955 | null | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Recent experimental and theoretical studies have shown that small
asexual populations evolving on complex fitness landscapes may achieve a higher
fitness than large ones due to the increased heterogeneity of adaptive
trajectories. Here we introduce a class of haploid three-locus fitness
landscapes that allows us to investigate this scenario in a precise and
quantitative way.
Results: Our main result derived analytically shows how the probability of
choosing the path of largest initial fitness increase grows with the population
size. This makes large populations more likely to get trapped at local fitness
peaks and implies an advantage of small populations at intermediate time
scales. The range of population sizes where this effect is operative coincides
with the onset of clonal interference. Additional studies using ensembles of
random fitness landscapes show that the results achieved for a particular
choice of three-locus landscape parameters are robust and also persist as the
number of loci increases.
Conclusions: Our study indicates that an advantage for small populations is
likely whenever the fitness landscape contains local maxima. The advantage
appears at intermediate time scales, which are long enough for trapping at
local fitness maxima to have occurred but too short for peak escape by the
creation of multiple mutants.
| [
{
"created": "Sun, 28 Mar 2010 17:07:24 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Feb 2011 04:45:10 GMT",
"version": "v2"
}
] | 2011-07-12 | [
[
"Jain",
"Kavita",
""
],
[
"Krug",
"Joachim",
""
],
[
"Park",
"Su-Chan",
""
]
] | Background: Recent experimental and theoretical studies have shown that small asexual populations evolving on complex fitness landscapes may achieve a higher fitness than large ones due to the increased heterogeneity of adaptive trajectories. Here we introduce a class of haploid three-locus fitness landscapes that allows us to investigate this scenario in a precise and quantitative way. Results: Our main result derived analytically shows how the probability of choosing the path of largest initial fitness increase grows with the population size. This makes large populations more likely to get trapped at local fitness peaks and implies an advantage of small populations at intermediate time scales. The range of population sizes where this effect is operative coincides with the onset of clonal interference. Additional studies using ensembles of random fitness landscapes show that the results achieved for a particular choice of three-locus landscape parameters are robust and also persist as the number of loci increases. Conclusions: Our study indicates that an advantage for small populations is likely whenever the fitness landscape contains local maxima. The advantage appears at intermediate time scales, which are long enough for trapping at local fitness maxima to have occurred but too short for peak escape by the creation of multiple mutants. |
2401.16255 | Joydeb Bhattacharyya | Joydeb Bhattacharyya, Arnab Chattopadhyay, Anurag Sau, Sabyasachi
Bhattacharya | Evaluating the consequences: Impact of sex-selective harvesting on fish
population and identifying tipping points via life-history parameters | null | null | null | null | q-bio.PE math.DS | http://creativecommons.org/licenses/by/4.0/ | Fish harvesting often targets larger individuals, which can be sex-specific
due to size dimorphism or differences in behaviors like migration and spawning.
Sex-selective harvesting can have dire consequences in the long run,
potentially pushing fish populations towards collapse much earlier due to
skewed sex ratios and reduced reproduction. To investigate this pressing issue,
we used a single-species sex-structured mathematical model with a weak Allee
effect on the fish population. Additionally, we incorporate a realistic
harvesting mechanism resembling the Michaelis-Menten function. Our analysis
illuminates the intricate interplay between life history traits, harvesting
intensity, and population stability. The results demonstrate that fish life
history traits, such as a higher reproductive rate, early maturation of
juveniles, and increased longevity, confer advantages under intensive
harvesting. To anticipate potential population collapse, we employ a novel
early warning tool (EWT) based on the concept of basin stability to pinpoint
tipping points before they occur. Harvesting yield at our proposed early
indicator can act as a potential pathway to achieve optimal yield while keeping
the population safely away from the brink of collapse, rather than relying
solely on the established maximum sustainable yield (MSY), where the population
dangerously approaches the point of no return. Furthermore, we show that
density-dependent female stocking upon receiving an EWT signal significantly
shifts the tipping point, allowing safe harvesting even at MSY levels, thus can
act as a potential intervention strategy.
| [
{
"created": "Mon, 29 Jan 2024 16:06:01 GMT",
"version": "v1"
}
] | 2024-01-30 | [
[
"Bhattacharyya",
"Joydeb",
""
],
[
"Chattopadhyay",
"Arnab",
""
],
[
"Sau",
"Anurag",
""
],
[
"Bhattacharya",
"Sabyasachi",
""
]
] | Fish harvesting often targets larger individuals, which can be sex-specific due to size dimorphism or differences in behaviors like migration and spawning. Sex-selective harvesting can have dire consequences in the long run, potentially pushing fish populations towards collapse much earlier due to skewed sex ratios and reduced reproduction. To investigate this pressing issue, we used a single-species sex-structured mathematical model with a weak Allee effect on the fish population. Additionally, we incorporate a realistic harvesting mechanism resembling the Michaelis-Menten function. Our analysis illuminates the intricate interplay between life history traits, harvesting intensity, and population stability. The results demonstrate that fish life history traits, such as a higher reproductive rate, early maturation of juveniles, and increased longevity, confer advantages under intensive harvesting. To anticipate potential population collapse, we employ a novel early warning tool (EWT) based on the concept of basin stability to pinpoint tipping points before they occur. Harvesting yield at our proposed early indicator can act as a potential pathway to achieve optimal yield while keeping the population safely away from the brink of collapse, rather than relying solely on the established maximum sustainable yield (MSY), where the population dangerously approaches the point of no return. Furthermore, we show that density-dependent female stocking upon receiving an EWT signal significantly shifts the tipping point, allowing safe harvesting even at MSY levels, thus can act as a potential intervention strategy. |
0711.1522 | Volkan Sevim | Volkan Sevim, Per Arne Rikvold | Chaotic Gene Regulatory Networks Can Be Robust Against Mutations and
Noise | JTB accepted | J. Theor. Biol. 253, 323-332 (2008) | 10.1016/j.jtbi.2008.03.003 | null | q-bio.MN | null | Robustness to mutations and noise has been shown to evolve through
stabilizing selection for optimal phenotypes in model gene regulatory networks.
The ability to evolve robust mutants is known to depend on the network
architecture. How do the dynamical properties and state-space structures of
networks with high and low robustness differ? Does selection operate on the
global dynamical behavior of the networks? What kind of state-space structures
are favored by selection? We provide damage propagation analysis and an
extensive statistical analysis of state spaces of these model networks to show
that the change in their dynamical properties due to stabilizing selection for
optimal phenotypes is minor. Most notably, the networks that are most robust to
both mutations and noise are highly chaotic. Certain properties of chaotic
networks, such as being able to produce large attractor basins, can be useful
for maintaining a stable gene-expression pattern. Our findings indicate that
conventional measures of stability, such as the damage-propagation rate, do not
provide much information about robustness to mutations or noise in model gene
regulatory networks.
| [
{
"created": "Fri, 9 Nov 2007 18:44:46 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Mar 2008 16:42:53 GMT",
"version": "v2"
}
] | 2008-07-07 | [
[
"Sevim",
"Volkan",
""
],
[
"Rikvold",
"Per Arne",
""
]
] | Robustness to mutations and noise has been shown to evolve through stabilizing selection for optimal phenotypes in model gene regulatory networks. The ability to evolve robust mutants is known to depend on the network architecture. How do the dynamical properties and state-space structures of networks with high and low robustness differ? Does selection operate on the global dynamical behavior of the networks? What kind of state-space structures are favored by selection? We provide damage propagation analysis and an extensive statistical analysis of state spaces of these model networks to show that the change in their dynamical properties due to stabilizing selection for optimal phenotypes is minor. Most notably, the networks that are most robust to both mutations and noise are highly chaotic. Certain properties of chaotic networks, such as being able to produce large attractor basins, can be useful for maintaining a stable gene-expression pattern. Our findings indicate that conventional measures of stability, such as the damage-propagation rate, do not provide much information about robustness to mutations or noise in model gene regulatory networks. |
2105.01723 | Lam Ho | Lam Si Tung Ho and Vu Dinh | Convergence of maximum likelihood supertree reconstruction | null | null | null | null | q-bio.PE math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Supertree methods are tree reconstruction techniques that combine several
smaller gene trees (possibly on different sets of species) to build a larger
species tree. The question of interest is whether the reconstructed supertree
converges to the true species tree as the number of gene trees increases (that
is, the consistency of supertree methods). In this paper, we are particularly
interested in the convergence rate of the maximum likelihood supertree.
Previous studies on the maximum likelihood supertree approach often formulate
the question of interest as a discrete problem and focus on reconstructing the
correct topology of the species tree. Aiming to reconstruct both the topology
and the branch lengths of the species tree, we propose an analytic approach for
analyzing the convergence of the maximum likelihood supertree method.
Specifically, we consider each tree as one point of a metric space and prove
that the distance between the maximum likelihood supertree and the species tree
converges to zero at a polynomial rate under some mild conditions. We further
verify these conditions for the popular exponential error model of gene trees.
| [
{
"created": "Tue, 4 May 2021 19:44:17 GMT",
"version": "v1"
}
] | 2021-05-06 | [
[
"Ho",
"Lam Si Tung",
""
],
[
"Dinh",
"Vu",
""
]
] | Supertree methods are tree reconstruction techniques that combine several smaller gene trees (possibly on different sets of species) to build a larger species tree. The question of interest is whether the reconstructed supertree converges to the true species tree as the number of gene trees increases (that is, the consistency of supertree methods). In this paper, we are particularly interested in the convergence rate of the maximum likelihood supertree. Previous studies on the maximum likelihood supertree approach often formulate the question of interest as a discrete problem and focus on reconstructing the correct topology of the species tree. Aiming to reconstruct both the topology and the branch lengths of the species tree, we propose an analytic approach for analyzing the convergence of the maximum likelihood supertree method. Specifically, we consider each tree as one point of a metric space and prove that the distance between the maximum likelihood supertree and the species tree converges to zero at a polynomial rate under some mild conditions. We further verify these conditions for the popular exponential error model of gene trees. |
2205.08720 | Yong Ye | Yong Ye, Yi Zhao, Jiaying Zhou | Pattern formation of parasite-host model induced by fear effect | 28 pages, 11 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | In this paper, based on the epidemiological microparasite model, a
parasite-host model is established by considering the fear effect of
susceptible individuals on infectors. We explored the pattern formation with
the help of numerical simulation, and analyzed the effects of fear effect,
infected host mortality, population diffusion rate and reducing reproduction
ability of infected hosts on population activities in different degrees.
Theoretically, we give the general conditions for the stability of the model
under non-diffusion and considering the Turing instability caused by diffusion.
Our results indicate how fear affects the distribution of the uninfected and
infected hosts in the habitat and quantify the influence of the fear factor on
the spatiotemporal pattern of the population. In addition, we analyze the
influence of natural death rate, reproduction ability of infected hosts, and
diffusion level of uninfected (infected) hosts on the spatiotemporal pattern,
respectively. The results present that the growth of pattern induced by
intensified fear effect follows the certain rule: cold spots $\rightarrow$ cold
spots-stripes $\rightarrow$ cold stripes $\rightarrow$ hot stripes
$\rightarrow$ hot spots-stripes $\rightarrow$ hot spots. Interestingly, the
natural mortality and the fear effect have opposite effects on the growth order
of the pattern. From the perspective of biological significance, we find that
the degree of the fear effect can reshape the distribution of the population in
accordance with the above rule.
| [
{
"created": "Wed, 18 May 2022 04:42:23 GMT",
"version": "v1"
}
] | 2022-05-19 | [
[
"Ye",
"Yong",
""
],
[
"Zhao",
"Yi",
""
],
[
"Zhou",
"Jiaying",
""
]
] | In this paper, based on the epidemiological microparasite model, a parasite-host model is established by considering the fear effect of susceptible individuals on infectors. We explored pattern formation with the help of numerical simulation, and analyzed the effects of the fear effect, infected host mortality, population diffusion rate and the reduced reproduction ability of infected hosts on population activities to different degrees. Theoretically, we give the general conditions for the stability of the model in the absence of diffusion and analyze the Turing instability caused by diffusion. Our results indicate how fear affects the distribution of the uninfected and infected hosts in the habitat and quantify the influence of the fear factor on the spatiotemporal pattern of the population. In addition, we analyze the influence of natural death rate, reproduction ability of infected hosts, and diffusion level of uninfected (infected) hosts on the spatiotemporal pattern, respectively. The results show that the growth of the pattern induced by an intensified fear effect follows a certain rule: cold spots $\rightarrow$ cold spots-stripes $\rightarrow$ cold stripes $\rightarrow$ hot stripes $\rightarrow$ hot spots-stripes $\rightarrow$ hot spots. Interestingly, natural mortality and the fear effect have opposite effects on the growth order of the pattern. From the perspective of biological significance, we find that the degree of the fear effect can reshape the distribution of the population in accordance with the above rule. |
1504.05138 | Ralph Stern | Ralph H. Stern, Dean E. Smith, Hitinder S. Gurm | Improving the Presentation and Understanding of Risk Models | 14 pages, 7 figures | null | null | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The key concepts (calibration, discrimination, and discordance) important in
understanding and comparing risk models are best conveyed graphically. To
illustrate this, models predicting death and acute kidney injury in a large
cohort of PCI patients differing in the number of predictors included are
presented. Calibration plots, often presented in the current literature,
present the agreement between predicted and observed risk for deciles of risk.
Risk distribution curves present the frequency of different levels of risk.
Scatterplots of the risks assigned to individuals by different models show the
discordance of the individual risk estimates. Increasing the number of
predictors in these models produces increasingly dispersed and progressively
skewed risk distribution curves. These resemble the lognormal distributions
expected when risk predictors interact multiplicatively. These changes in the
risk distribution curves correlate with improved measures of discrimination.
| [
{
"created": "Mon, 20 Apr 2015 17:57:32 GMT",
"version": "v1"
}
] | 2015-04-21 | [
[
"Stern",
"Ralph H.",
""
],
[
"Smith",
"Dean E.",
""
],
[
"Gurm",
"Hitinder S.",
""
]
] | The key concepts (calibration, discrimination, and discordance) important in understanding and comparing risk models are best conveyed graphically. To illustrate this, models predicting death and acute kidney injury in a large cohort of PCI patients differing in the number of predictors included are presented. Calibration plots, often presented in the current literature, present the agreement between predicted and observed risk for deciles of risk. Risk distribution curves present the frequency of different levels of risk. Scatterplots of the risks assigned to individuals by different models show the discordance of the individual risk estimates. Increasing the number of predictors in these models produces increasingly dispersed and progressively skewed risk distribution curves. These resemble the lognormal distributions expected when risk predictors interact multiplicatively. These changes in the risk distribution curves correlate with improved measures of discrimination. |
1506.05019 | Alexander Fletcher Dr | Alexander G. Fletcher, Philip J. Murray, Philip K. Maini | Multiscale modelling of intestinal crypt organization and carcinogenesis | 26 pages, 5 figures, review article | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Colorectal cancers are the third most common type of cancer. They originate
from intestinal crypts, glands that descend from the intestinal lumen into the
underlying connective tissue. Normal crypts are thought to exist in a dynamic
equilibrium where the rate of cell production at the base of a crypt is matched
by that of loss at the top. Understanding how genetic alterations accumulate
and proceed to disrupt this dynamic equilibrium is fundamental to understanding
the origins of colorectal cancer. Colorectal cancer emerges from the
interaction of biological processes that span several spatial scales, from
mutations that cause inappropriate intracellular responses to changes at the
cell/tissue level, such as uncontrolled proliferation and altered motility and
adhesion. Multiscale mathematical modelling can provide insight into the
spatiotemporal organisation of such a complex, highly regulated and dynamic
system. Moreover, the aforementioned challenges are inherent to the multiscale
modelling of biological tissue more generally. In this review we describe the
mathematical approaches that have been applied to investigate multiscale
aspects of crypt behaviour, highlighting a number of model predictions that
have since been validated experimentally. We also discuss some of the key
mathematical and computational challenges associated with the multiscale
modelling approach. We conclude by discussing recent efforts to derive
coarse-grained descriptions of such models, which may offer one way of reducing
the computational cost of simulation by leveraging well-established tools of
mathematical analysis to address key problems in multiscale modelling.
| [
{
"created": "Tue, 16 Jun 2015 16:15:13 GMT",
"version": "v1"
}
] | 2015-06-17 | [
[
"Fletcher",
"Alexander G.",
""
],
[
"Murray",
"Philip J.",
""
],
[
"Maini",
"Philip K.",
""
]
] | Colorectal cancers are the third most common type of cancer. They originate from intestinal crypts, glands that descend from the intestinal lumen into the underlying connective tissue. Normal crypts are thought to exist in a dynamic equilibrium where the rate of cell production at the base of a crypt is matched by that of loss at the top. Understanding how genetic alterations accumulate and proceed to disrupt this dynamic equilibrium is fundamental to understanding the origins of colorectal cancer. Colorectal cancer emerges from the interaction of biological processes that span several spatial scales, from mutations that cause inappropriate intracellular responses to changes at the cell/tissue level, such as uncontrolled proliferation and altered motility and adhesion. Multiscale mathematical modelling can provide insight into the spatiotemporal organisation of such a complex, highly regulated and dynamic system. Moreover, the aforementioned challenges are inherent to the multiscale modelling of biological tissue more generally. In this review we describe the mathematical approaches that have been applied to investigate multiscale aspects of crypt behaviour, highlighting a number of model predictions that have since been validated experimentally. We also discuss some of the key mathematical and computational challenges associated with the multiscale modelling approach. We conclude by discussing recent efforts to derive coarse-grained descriptions of such models, which may offer one way of reducing the computational cost of simulation by leveraging well-established tools of mathematical analysis to address key problems in multiscale modelling. |
0907.2933 | Agnieszka Ruebenbauer Ms | Agnieszka Ruebenbauer | Attraction to natural stimuli in Drosophila melanogaster | 17 pages, 2 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The response of animals, in particular insects (Drosophila melanogaster),
towards odor stimuli in nature can be characterized by measuring the dynamics
of the odor response. Such an approach is innovative, since responses to odors
have so far been tested only at single time points, not at hourly intervals
over several hours. The odor attraction to 14 natural and ecologically
relevant odor stimuli, such as fruits (ripe and rotten), yeast and vinegar,
was tested. A mathematical model to evaluate the obtained data is proposed in
this study, in which the number of flies caught over several time points is
summarized as one simple parameter reflecting the trapping potential of the
trap housing a particular odor stimulus. The knowledge concerning the dynamics
of the odor response in Drosophila melanogaster may illuminate the principles
of fly behavior in the context of exposure to odor stimuli.
| [
{
"created": "Thu, 16 Jul 2009 20:42:41 GMT",
"version": "v1"
}
] | 2009-07-20 | [
[
"Ruebenbauer",
"Agnieszka",
""
]
] | The response of animals, in particular insects (Drosophila melanogaster), towards odor stimuli in nature can be characterized by measuring the dynamics of the odor response. Such an approach is innovative, since responses to odors have so far been tested only at single time points, not at hourly intervals over several hours. The odor attraction to 14 natural and ecologically relevant odor stimuli, such as fruits (ripe and rotten), yeast and vinegar, was tested. A mathematical model to evaluate the obtained data is proposed in this study, in which the number of flies caught over several time points is summarized as one simple parameter reflecting the trapping potential of the trap housing a particular odor stimulus. The knowledge concerning the dynamics of the odor response in Drosophila melanogaster may illuminate the principles of fly behavior in the context of exposure to odor stimuli. |
2312.05785 | Bertrand Ottino-Loffler | Bertrand Ottino-Loffler and Gabriel Victora | On Possible Indicators of Negative Selection in Germinal Centers | 19 pages (including SI), 10 figures | null | null | null | q-bio.PE math.DS | http://creativecommons.org/licenses/by/4.0/ | A central feature of vertebrate immune response is affinity maturation,
wherein antibody-producing B cells undergo evolutionary selection in
microanatomical structures called germinal centers, which form in secondary
lymphoid organs upon antigen exposure. While it has been shown that the median
B cell affinity dependably increases over the course of maturation, the exact
logic behind this evolution remains vague. Three potential selection methods
include encouraging the reproduction of high affinity cells (``birth/positive
selection''), encouraging cell death in low affinity cells (``death/negative
selection''), and adjusting the mutation rate based on cell affinity
(``mutational selection''). While all three forms of selection would lead to a
net increase in affinity, different selection methods may lead to distinct
statistical dynamics. We present a tractable model of selection, and analyze
proposed signatures of negative selection. Given the simplicity of the model,
such signatures should be stronger here than in real systems. However, we find
a number of intuitively appealing metrics -- such as preferential ancestry
ratios, terminal node counts, and mutation count skewness -- are all ill-suited
for detecting selection method.
| [
{
"created": "Sun, 10 Dec 2023 06:12:53 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Feb 2024 00:52:03 GMT",
"version": "v2"
},
{
"created": "Wed, 14 Feb 2024 00:38:42 GMT",
"version": "v3"
}
] | 2024-02-15 | [
[
"Ottino-Loffler",
"Bertrand",
""
],
[
"Victora",
"Gabriel",
""
]
] | A central feature of vertebrate immune response is affinity maturation, wherein antibody-producing B cells undergo evolutionary selection in microanatomical structures called germinal centers, which form in secondary lymphoid organs upon antigen exposure. While it has been shown that the median B cell affinity dependably increases over the course of maturation, the exact logic behind this evolution remains vague. Three potential selection methods include encouraging the reproduction of high affinity cells (``birth/positive selection''), encouraging cell death in low affinity cells (``death/negative selection''), and adjusting the mutation rate based on cell affinity (``mutational selection''). While all three forms of selection would lead to a net increase in affinity, different selection methods may lead to distinct statistical dynamics. We present a tractable model of selection, and analyze proposed signatures of negative selection. Given the simplicity of the model, such signatures should be stronger here than in real systems. However, we find a number of intuitively appealing metrics -- such as preferential ancestry ratios, terminal node counts, and mutation count skewness -- are all ill-suited for detecting selection method. |
1611.07962 | Andrew Murphy | Andrew C. Murphy, Shi Gu, Ankit N. Khambhati, Nicholas F. Wymbs, Scott
T. Grafton, Theodore D. Satterthwaite, and Danielle S. Bassett | Explicitly Linking Regional Activation and Function Connectivity:
Community Structure of Weighted Networks with Continuous Annotation | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A major challenge in neuroimaging is understanding the mapping of
neurophysiological dynamics onto cognitive functions. Traditionally, these maps
have been constructed by examining changes in the activity magnitude of regions
related to task performance. Recently, network neuroscience has produced
methods to map connectivity patterns among many regions to certain cognitive
functions by drawing on tools from network science and graph theory. However,
these two different views are rarely addressed simultaneously, largely because
few tools exist that account for patterns between nodes while simultaneously
considering activation of nodes. We address this gap by solving the problem of
community detection on weighted networks with continuous (non-integer)
annotations by deriving a generative probabilistic model. This model generates
communities whose members connect densely to nodes within their own community,
and whose members share similar annotation values. We demonstrate the utility
of the model in the context of neuroimaging data gathered during a motor
learning paradigm, where edges are task-based functional connectivity and
annotations to each node are beta weights from a general linear model that
encoded a linear decrease in blood-oxygen-level-dependent signal with practice.
Interestingly, we observe that individuals who learn at a faster rate exhibit
the greatest dissimilarity between functional connectivity and activation
magnitudes, suggesting that activation and functional connectivity are distinct
dimensions of neurophysiology that track behavioral change. More generally, the
tool that we develop offers an explicit, mathematically principled link between
functional activation and functional connectivity, and can readily be applied
to other similar problems in which one set of imaging data offers network
data, and a second offers a regional attribute.
| [
{
"created": "Wed, 23 Nov 2016 20:13:47 GMT",
"version": "v1"
}
] | 2016-11-24 | [
[
"Murphy",
"Andrew C.",
""
],
[
"Gu",
"Shi",
""
],
[
"Khambhati",
"Ankit N.",
""
],
[
"Wymbs",
"Nicholas F.",
""
],
[
"Grafton",
"Scott T.",
""
],
[
"Satterthwaite",
"Theodore D.",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | A major challenge in neuroimaging is understanding the mapping of neurophysiological dynamics onto cognitive functions. Traditionally, these maps have been constructed by examining changes in the activity magnitude of regions related to task performance. Recently, network neuroscience has produced methods to map connectivity patterns among many regions to certain cognitive functions by drawing on tools from network science and graph theory. However, these two different views are rarely addressed simultaneously, largely because few tools exist that account for patterns between nodes while simultaneously considering activation of nodes. We address this gap by solving the problem of community detection on weighted networks with continuous (non-integer) annotations by deriving a generative probabilistic model. This model generates communities whose members connect densely to nodes within their own community, and whose members share similar annotation values. We demonstrate the utility of the model in the context of neuroimaging data gathered during a motor learning paradigm, where edges are task-based functional connectivity and annotations to each node are beta weights from a general linear model that encoded a linear decrease in blood-oxygen-level-dependent signal with practice. Interestingly, we observe that individuals who learn at a faster rate exhibit the greatest dissimilarity between functional connectivity and activation magnitudes, suggesting that activation and functional connectivity are distinct dimensions of neurophysiology that track behavioral change. More generally, the tool that we develop offers an explicit, mathematically principled link between functional activation and functional connectivity, and can readily be applied to other similar problems in which one set of imaging data offers network data, and a second offers a regional attribute. |
1808.09481 | Ari Allyn-Feuer | Ari Allyn-Feuer, Gerald A. Higgins and Brian D. Athey | Pharmacogenomics in the Age of GWAS, Omics Atlases, and PheWAS | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The search for causative pharmacogenomic loci is being transformed by
integrative omics pipelines, but their outputs have only begun being applied to
test design. We assess the direction of the field in light of Biobanks/PheWAS,
omics atlases, and AI. We first assess the potential of recent epigenome and
spatial genome concepts, datasets, and methods to improve the functionality of
PIP-style pipelines. We then discuss new potential methods of genetic test
design on the basis of the outputs of such pipelines. We conclude with a vision
for a pharmacophenomic atlas, in which omics atlas data, PheWAS associations,
and biobank data would be used with AI to design thousands of genetic tests for
clinical deployment in an automated parallel process.
| [
{
"created": "Tue, 28 Aug 2018 18:25:54 GMT",
"version": "v1"
}
] | 2018-08-30 | [
[
"Allyn-Feuer",
"Ari",
""
],
[
"Higgins",
"Gerald A.",
""
],
[
"Athey",
"Brian D.",
""
]
] | The search for causative pharmacogenomic loci is being transformed by integrative omics pipelines, but their outputs have only begun being applied to test design. We assess the direction of the field in light of Biobanks/PheWAS, omics atlases, and AI. We first assess the potential of recent epigenome and spatial genome concepts, datasets, and methods to improve the functionality of PIP-style pipelines. We then discuss new potential methods of genetic test design on the basis of the outputs of such pipelines. We conclude with a vision for a pharmacophenomic atlas, in which omics atlas data, PheWAS associations, and biobank data would be used with AI to design thousands of genetic tests for clinical deployment in an automated parallel process. |
2308.14942 | David Evans | David M Evans, George Davey Smith, Gunn-Helen Moen | Woolf et al.'s GWAS by subtraction is not useful for cross-generational
Mendelian randomization studies | 8 pages, 0 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Mendelian randomization (MR) is an epidemiological method that can be used to
strengthen causal inference regarding the relationship between a modifiable
environmental exposure and a medically relevant trait and to estimate the
magnitude of this relationship. Recently, there has been considerable interest
in using MR to examine potential causal relationships between parental
phenotypes and outcomes amongst their offspring. In a recent issue of BMC
Research Notes, Woolf et al (2023) present a new method, GWAS by subtraction,
to derive genome-wide summary statistics for paternal smoking and other
paternal phenotypes with the goal that these estimates can then be used in
downstream (including two sample) MR studies. Whilst a potentially useful goal,
Woolf et al. (2023) focus on the wrong parameter of interest for useful
genome-wide association studies (GWAS) and downstream cross-generational MR
studies, and the estimator that they derive is neither efficient nor
appropriate for such use.
| [
{
"created": "Mon, 28 Aug 2023 23:49:51 GMT",
"version": "v1"
}
] | 2023-08-30 | [
[
"Evans",
"David M",
""
],
[
"Smith",
"George Davey",
""
],
[
"Moen",
"Gunn-Helen",
""
]
] | Mendelian randomization (MR) is an epidemiological method that can be used to strengthen causal inference regarding the relationship between a modifiable environmental exposure and a medically relevant trait and to estimate the magnitude of this relationship. Recently, there has been considerable interest in using MR to examine potential causal relationships between parental phenotypes and outcomes amongst their offspring. In a recent issue of BMC Research Notes, Woolf et al (2023) present a new method, GWAS by subtraction, to derive genome-wide summary statistics for paternal smoking and other paternal phenotypes with the goal that these estimates can then be used in downstream (including two sample) MR studies. Whilst a potentially useful goal, Woolf et al. (2023) focus on the wrong parameter of interest for useful genome-wide association studies (GWAS) and downstream cross-generational MR studies, and the estimator that they derive is neither efficient nor appropriate for such use. |
1910.11628 | Armen Allahverdyan | Armen E. Allahverdyan, Sanasar G. Babajanyan, and Chin-Kun Hu | Polymorphism in rapidly-changing cyclic environment | 16 pages, 10 figures | Physical Review E 100, 032401 (2019) | 10.1103/PhysRevE.100.032401 | null | q-bio.PE nlin.AO physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Selection in a time-periodic environment is modeled via the continuous-time
two-player replicator dynamics, which for symmetric pay-offs reduces to the
Fisher equation of mathematical genetics. For a sufficiently rapid and cyclic
[fine-grained] environment, the time-averaged population frequencies are shown
to obey a replicator dynamics with a non-linear fitness that is induced by
environmental changes. The non-linear terms in the fitness emerge due to
populations tracking their time-dependent environment. These terms can induce a
stable polymorphism, though they do not spoil the polymorphism that exists
already without them. In this sense polymorphic populations are more robust
with respect to their time-dependent environments. The overall fitness of the
problem is still given by its time-averaged value, but the emergence of
polymorphism during genetic selection can be accompanied by decreasing mean
fitness of the population. The impact of the uncovered polymorphism scenario on
the models of diversity is exemplified via the rock-paper-scissors dynamics,
and also via the prisoner's dilemma in a time-periodic environment.
| [
{
"created": "Fri, 25 Oct 2019 11:35:20 GMT",
"version": "v1"
}
] | 2019-10-30 | [
[
"Allahverdyan",
"Armen E.",
""
],
[
"Babajanyan",
"Sanasar G.",
""
],
[
"Hu",
"Chin-Kun",
""
]
] | Selection in a time-periodic environment is modeled via the continuous-time two-player replicator dynamics, which for symmetric pay-offs reduces to the Fisher equation of mathematical genetics. For a sufficiently rapid and cyclic [fine-grained] environment, the time-averaged population frequencies are shown to obey a replicator dynamics with a non-linear fitness that is induced by environmental changes. The non-linear terms in the fitness emerge due to populations tracking their time-dependent environment. These terms can induce a stable polymorphism, though they do not spoil the polymorphism that exists already without them. In this sense polymorphic populations are more robust with respect to their time-dependent environments. The overall fitness of the problem is still given by its time-averaged value, but the emergence of polymorphism during genetic selection can be accompanied by decreasing mean fitness of the population. The impact of the uncovered polymorphism scenario on the models of diversity is exemplified via the rock-paper-scissors dynamics, and also via the prisoner's dilemma in a time-periodic environment. |
1405.7902 | Grzegorz Nawrocki | Grzegorz Nawrocki and Marek Cieplak | Amino acids and proteins at ZnO-water interfaces in molecular dynamics
simulations | null | Phys. Chem. Chem. Phys., 2013, 15, 13628 | 10.1039/c3cp52198b | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We determine potentials of the mean force for interactions of amino acids
with four common surfaces of ZnO in aqueous solutions. The method involves
all-atom molecular dynamics simulations combined with the umbrella sampling
technique. The profiled nature of the density of water with the strongly
adsorbed first layer affects the approach of amino acids to the surface and
generates either repulsion or weak binding. The largest binding energy is found
for tyrosine interacting with the surface in which the Zn ions are at the top.
It is equal to 7 kJ/mol which is comparable to that of the hydrogen bonds in a
protein. This makes the adsorption of amino acids onto the ZnO surface much
weaker than onto the well studied surface of gold. Under vacuum, binding
energies are more than 40 times stronger (for one of the surfaces). The precise
manner in which water molecules interact with a given surface influences the
binding energies in a way that depends on the surface. Among the four
considered surfaces the one with Zn at the top is recognized as binding almost
all amino acids with an average binding energy of 2.60 kJ/mol. Another (O at
the top) is non-binding for most amino acids. For binding situations the
average energy is 0.66 kJ/mol. The remaining two surfaces bind nearly as many
amino acids as they do not and the average binding energies are 1.46 and 1.22
kJ/mol. For all of the surfaces the binding energies vary between amino acids
significantly: the dispersion is in the range of 68-154% of the mean. A small
protein is shown to adsorb onto ZnO only intermittently and with only a small
deformation. Various adsorption events lead to different patterns in mobilities
of amino acids within the protein.
| [
{
"created": "Fri, 30 May 2014 16:04:56 GMT",
"version": "v1"
}
] | 2014-06-02 | [
[
"Nawrocki",
"Grzegorz",
""
],
[
"Cieplak",
"Marek",
""
]
] | We determine potentials of the mean force for interactions of amino acids with four common surfaces of ZnO in aqueous solutions. The method involves all-atom molecular dynamics simulations combined with the umbrella sampling technique. The profiled nature of the density of water with the strongly adsorbed first layer affects the approach of amino acids to the surface and generates either repulsion or weak binding. The largest binding energy is found for tyrosine interacting with the surface in which the Zn ions are at the top. It is equal to 7 kJ/mol which is comparable to that of the hydrogen bonds in a protein. This makes the adsorption of amino acids onto the ZnO surface much weaker than onto the well studied surface of gold. Under vacuum, binding energies are more than 40 times stronger (for one of the surfaces). The precise manner in which water molecules interact with a given surface influences the binding energies in a way that depends on the surface. Among the four considered surfaces the one with Zn at the top is recognized as binding almost all amino acids with an average binding energy of 2.60 kJ/mol. Another (O at the top) is non-binding for most amino acids. For binding situations the average energy is 0.66 kJ/mol. The remaining two surfaces bind nearly as many amino acids as they do not and the average binding energies are 1.46 and 1.22 kJ/mol. For all of the surfaces the binding energies vary between amino acids significantly: the dispersion is in the range of 68-154% of the mean. A small protein is shown to adsorb onto ZnO only intermittently and with only a small deformation. Various adsorption events lead to different patterns in mobilities of amino acids within the protein. |
2206.12508 | Julien Dirani | Julien Dirani, Liina Pylkk\"anen | The Temporal Evolution of Modality-Independent Representations of
Conceptual Categories | null | null | 10.1016/j.neuroimage.2023.120254 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | To what extent does language production activate amodal conceptual
representations? In picture naming, we view specific exemplars of concepts and
then name them with a category label, like 'dog'. In contrast, in overt
reading, the written word expresses the category (dog), not an exemplar.
Here we used a decoding approach with magnetoencephalography to address
whether picture naming and overt word reading involve shared representations of
semantic categories. This addresses a fundamental question about the
modality-generality of conceptual representations and their temporal evolution.
Crucially, we do this using a language production task that does not require
explicit categorization judgment and that controls for word form properties
across semantic categories. We trained our models to classify the animal/tool
distinction using MEG data of one modality at each time point and then tested
the generalization of those models on the other modality. We obtained evidence
for the automatic activation of modality-independent semantic category
representations for both pictures and words starting at ~150ms and lasting
until about 500ms. The time course of lexical activation was also assessed
revealing that semantic category is represented before lexical access for
pictures but after lexical access for words. Notably, this earlier activation
of semantic category in pictures occurred simultaneously with visual
representations.
We thus show evidence for the spontaneous activation of modality-independent
semantic categories in picture naming and word reading, supporting theories in
which amodal conceptual representations exist. Together, these results serve to
anchor a more comprehensive spatio-temporal delineation of the semantic feature
space during production planning.
| [
{
"created": "Fri, 24 Jun 2022 22:44:11 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Jun 2022 20:52:11 GMT",
"version": "v2"
}
] | 2024-03-22 | [
[
"Dirani",
"Julien",
""
],
[
"Pylkkänen",
"Liina",
""
]
] | To what extent does language production activate amodal conceptual representations? In picture naming, we view specific exemplars of concepts and then name them with a category label, like 'dog'. In contrast, in overt reading, the written word expresses the category (dog), not an exemplar. Here we used a decoding approach with magnetoencephalography to address whether picture naming and overt word reading involve shared representations of semantic categories. This addresses a fundamental question about the modality-generality of conceptual representations and their temporal evolution. Crucially, we do this using a language production task that does not require explicit categorization judgment and that controls for word form properties across semantic categories. We trained our models to classify the animal/tool distinction using MEG data of one modality at each time point and then tested the generalization of those models on the other modality. We obtained evidence for the automatic activation of modality-independent semantic category representations for both pictures and words starting at ~150ms and lasting until about 500ms. The time course of lexical activation was also assessed revealing that semantic category is represented before lexical access for pictures but after lexical access for words. Notably, this earlier activation of semantic category in pictures occurred simultaneously with visual representations. We thus show evidence for the spontaneous activation of modality-independent semantic categories in picture naming and word reading, supporting theories in which amodal conceptual representations exist. Together, these results serve to anchor a more comprehensive spatio-temporal delineation of the semantic feature space during production planning. |
1905.06855 | Coralie Fritsch | Coralie Fritsch (IECL, BIGS), Sylvain Billiard (Evo-Eco-Pal\'eo),
Nicolas Champagnat (IECL, BIGS) | Identifying conversion efficiency as a key mechanism underlying food
webs evolution : a step forward, or backward ? | null | Oikos, Nordic Ecological Society, 2021, 130 (6), pp.904-930 | 10.1111/oik.07421 | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Body size or mass is one of the main factors underlying food webs structure.
A large number of evolutionary models have shown that indeed, the adaptive
evolution of body size (or mass) can give rise to hierarchically organised
trophic levels with complex between and within trophic interactions. However,
these models generally make strong arbitrary assumptions on how traits evolve,
casting doubts on their robustness. In particular, biomass conversion
efficiency is always considered independent of the predator and prey size,
which contradicts with the literature. In this paper, we propose a general
model encompassing most previous models which allows to show that relaxing
arbitrary assumptions gives rise to unrealistic food webs. We then show that
considering biomass conversion efficiency dependent on species size is
certainly key for food webs adaptive evolution because realistic food webs can
evolve, making obsolete the need for arbitrary constraints on traits' evolution.
We finally conclude that, on the one hand, ecologists should pay attention to
how biomass flows into food webs in models. On the other hand, we question more
generally the robustness of evolutionary models for the study of food webs.
| [
{
"created": "Wed, 15 May 2019 07:44:34 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Feb 2021 14:48:57 GMT",
"version": "v2"
},
{
"created": "Thu, 26 Aug 2021 06:15:57 GMT",
"version": "v3"
}
] | 2021-08-27 | [
[
"Fritsch",
"Coralie",
"",
"IECL, BIGS"
],
[
"Billiard",
"Sylvain",
"",
"Evo-Eco-Paléo"
],
[
"Champagnat",
"Nicolas",
"",
"IECL, BIGS"
]
] | Body size or mass is one of the main factors underlying food webs structure. A large number of evolutionary models have shown that indeed, the adaptive evolution of body size (or mass) can give rise to hierarchically organised trophic levels with complex between and within trophic interactions. However, these models generally make strong arbitrary assumptions on how traits evolve, casting doubts on their robustness. In particular, biomass conversion efficiency is always considered independent of the predator and prey size, which contradicts the literature. In this paper, we propose a general model encompassing most previous models, which allows us to show that relaxing arbitrary assumptions gives rise to unrealistic food webs. We then show that considering biomass conversion efficiency dependent on species size is certainly key for food webs adaptive evolution because realistic food webs can evolve, making obsolete the need for arbitrary constraints on traits' evolution. We finally conclude that, on the one hand, ecologists should pay attention to how biomass flows into food webs in models. On the other hand, we question more generally the robustness of evolutionary models for the study of food webs.
q-bio/0507015 | Dagmar Iber | Dagmar Iber, Joanna Clarkson, Michael D Yudkin, Iain D Campbell | Differential gene expression in Bacillus subtilis | null | null | null | null | q-bio.MN q-bio.CB q-bio.SC | null | Sporulation in Bacillus subtilis serves as a paradigm for the development of
two different cell types (mother cell and prespore) from a single cell. The
mechanism by which the two different developmental programs are initiated has
been much studied but is not well understood. With the help of existing and new
experimental results, a mathematical model has been developed that reproduces
all published in vitro experiments and makes new predictions about the
properties of the system in vivo.
| [
{
"created": "Sun, 10 Jul 2005 12:26:09 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Iber",
"Dagmar",
""
],
[
"Clarkson",
"Joanna",
""
],
[
"Yudkin",
"Michael D",
""
],
[
"Campbell",
"Iain D",
""
]
] | Sporulation in Bacillus subtilis serves as a paradigm for the development of two different cell types (mother cell and prespore) from a single cell. The mechanism by which the two different developmental programs are initiated has been much studied but is not well understood. With the help of existing and new experimental results, a mathematical model has been developed that reproduces all published in vitro experiments and makes new predictions about the properties of the system in vivo. |
1502.01513 | Yuzuru Yamanaka | Yuzuru Yamanaka, Shun-ichi Amari and Shigeru Shinomoto | Microscopic instability in recurrent neural networks | 9 pages, 12 figures | null | 10.1103/PhysRevE.91.032921 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a manner similar to the molecular chaos that underlies the stable
thermodynamics of gases, a neuronal system may exhibit microscopic instability in
individual neuronal dynamics while a macroscopic order of the entire population
possibly remains stable. In this study, we analyze the microscopic stability of
a network of neurons whose macroscopic activity obeys stable dynamics,
expressing a monostable, bistable, or periodic state. We reveal that the
network exhibits a variety of dynamical states for microscopic instability
residing in given stable macroscopic dynamics. The presence of a variety of
dynamical states in such a simple random network implies more abundant
microscopic fluctuations in real neural networks, which consist of more complex
and hierarchically structured interactions.
| [
{
"created": "Thu, 5 Feb 2015 12:02:08 GMT",
"version": "v1"
}
] | 2015-06-23 | [
[
"Yamanaka",
"Yuzuru",
""
],
[
"Amari",
"Shun-ichi",
""
],
[
"Shinomoto",
"Shigeru",
""
]
] | In a manner similar to the molecular chaos that underlies the stable thermodynamics of gases, a neuronal system may exhibit microscopic instability in individual neuronal dynamics while a macroscopic order of the entire population possibly remains stable. In this study, we analyze the microscopic stability of a network of neurons whose macroscopic activity obeys stable dynamics, expressing a monostable, bistable, or periodic state. We reveal that the network exhibits a variety of dynamical states for microscopic instability residing in given stable macroscopic dynamics. The presence of a variety of dynamical states in such a simple random network implies more abundant microscopic fluctuations in real neural networks, which consist of more complex and hierarchically structured interactions.
2109.05012 | Austin Clyde | Austin Clyde, Ashka Shah, Max Zvyagin, Arvind Ramanathan, Rick Stevens | Scaffold-Induced Molecular Graph (SIMG): Effective Graph Sampling
Methods for High-Throughput Computational Drug Discovery | null | null | null | null | q-bio.QM q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Scaffold based drug discovery (SBDD) is a technique for drug discovery which
pins chemical scaffolds as the framework of design. Scaffolds, or molecular
frameworks, organize the design of compounds into local neighborhoods. We
formalize scaffold based drug discovery into a network design. Utilizing
docking data from SARS-CoV-2 virtual screening studies and JAK2 kinase assay
data, we showcase how a scaffold based conception of chemical space is
intuitive for design. Lastly, we highlight the utility of scaffold based
networks for chemical space as a potential solution to the intractable
enumeration problem of chemical space by working inductively on local
neighborhoods.
| [
{
"created": "Fri, 10 Sep 2021 17:47:31 GMT",
"version": "v1"
}
] | 2021-09-13 | [
[
"Clyde",
"Austin",
""
],
[
"Shah",
"Ashka",
""
],
[
"Zvyagin",
"Max",
""
],
[
"Ramanathan",
"Arvind",
""
],
[
"Stevens",
"Rick",
""
]
] | Scaffold based drug discovery (SBDD) is a technique for drug discovery which pins chemical scaffolds as the framework of design. Scaffolds, or molecular frameworks, organize the design of compounds into local neighborhoods. We formalize scaffold based drug discovery into a network design. Utilizing docking data from SARS-CoV-2 virtual screening studies and JAK2 kinase assay data, we showcase how a scaffold based conception of chemical space is intuitive for design. Lastly, we highlight the utility of scaffold based networks for chemical space as a potential solution to the intractable enumeration problem of chemical space by working inductively on local neighborhoods. |
2204.13048 | Alex Li | Alex J. Li, Vikram Sundar, Gevorg Grigoryan, Amy E. Keating | TERMinator: A Neural Framework for Structure-Based Protein Design using
Tertiary Repeating Motifs | Machine Learning for Structural Biology, NeurIPS 2021 | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Computational protein design has the potential to deliver novel molecular
structures, binders, and catalysts for myriad applications. Recent neural
graph-based models that use backbone coordinate-derived features show
exceptional performance on native sequence recovery tasks and are promising
frameworks for design. A statistical framework for modeling protein sequence
landscapes using Tertiary Motifs (TERMs), compact units of recurring structure
in proteins, has also demonstrated good performance on protein design tasks. In
this work, we investigate the use of TERM-derived data as features in neural
protein design frameworks. Our graph-based architecture, TERMinator,
incorporates TERM-based and coordinate-based information and outputs a Potts
model over sequence space. TERMinator outperforms state-of-the-art models on
native sequence recovery tasks, suggesting that utilizing TERM-based and
coordinate-based features together is beneficial for protein design.
| [
{
"created": "Wed, 27 Apr 2022 16:42:10 GMT",
"version": "v1"
}
] | 2022-04-28 | [
[
"Li",
"Alex J.",
""
],
[
"Sundar",
"Vikram",
""
],
[
"Grigoryan",
"Gevorg",
""
],
[
"Keating",
"Amy E.",
""
]
] | Computational protein design has the potential to deliver novel molecular structures, binders, and catalysts for myriad applications. Recent neural graph-based models that use backbone coordinate-derived features show exceptional performance on native sequence recovery tasks and are promising frameworks for design. A statistical framework for modeling protein sequence landscapes using Tertiary Motifs (TERMs), compact units of recurring structure in proteins, has also demonstrated good performance on protein design tasks. In this work, we investigate the use of TERM-derived data as features in neural protein design frameworks. Our graph-based architecture, TERMinator, incorporates TERM-based and coordinate-based information and outputs a Potts model over sequence space. TERMinator outperforms state-of-the-art models on native sequence recovery tasks, suggesting that utilizing TERM-based and coordinate-based features together is beneficial for protein design. |
2110.07446 | Marcela Ordorica | Marcela Ordorica Arango, Alessio Franci | The correlated variability control problem: a dominant approach | 6 pages, 6 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Given a population of interconnected input-output agents repeatedly exposed
to independent random inputs, we talk of correlated variability when agents'
outputs are variable (i.e., they change randomly at each input repetition) but
correlated (i.e., they do not vary independently across input repetitions).
Correlated variability appears at multiple levels in neuronal systems, from the
molecular level of protein expression to the electrical level of neuronal
excitability, but its functions and origins are still debated. Motivated by
advancing our understanding of correlated variability, we introduce the
(linear) "correlated variability control problem" as the problem of controlling
steady-state correlations in a linear dynamical network in which agents receive
independent random inputs. Although simple, the chosen setting reveals
important connections between network structure, in particular, the existence
and the dimension of dominant (i.e., slow) dynamics in the network, and the
emergence of correlated variability.
| [
{
"created": "Thu, 14 Oct 2021 15:13:10 GMT",
"version": "v1"
}
] | 2021-10-15 | [
[
"Arango",
"Marcela Ordorica",
""
],
[
"Franci",
"Alessio",
""
]
] | Given a population of interconnected input-output agents repeatedly exposed to independent random inputs, we talk of correlated variability when agents' outputs are variable (i.e., they change randomly at each input repetition) but correlated (i.e., they do not vary independently across input repetitions). Correlated variability appears at multiple levels in neuronal systems, from the molecular level of protein expression to the electrical level of neuronal excitability, but its functions and origins are still debated. Motivated by advancing our understanding of correlated variability, we introduce the (linear) "correlated variability control problem" as the problem of controlling steady-state correlations in a linear dynamical network in which agents receive independent random inputs. Although simple, the chosen setting reveals important connections between network structure, in particular, the existence and the dimension of dominant (i.e., slow) dynamics in the network, and the emergence of correlated variability. |
1102.3919 | TaeHyun Hwang | TaeHyun Hwang, Wei Zhang, Maoqiang Xie, Rui Kuang | Inferring Disease and Gene Set Associations with Rank Coherence in
Networks | 16 pages | null | null | null | q-bio.GN cs.AI cs.LG q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A computational challenge to validate the candidate disease genes identified
in a high-throughput genomic study is to elucidate the associations between the
set of candidate genes and disease phenotypes. The conventional gene set
enrichment analysis often fails to reveal associations between disease
phenotypes and the gene sets with a short list of poorly annotated genes,
because the existing annotations of disease causative genes are incomplete. We
propose a network-based computational approach called rcNet to discover the
associations between gene sets and disease phenotypes. Assuming coherent
associations between the genes ranked by their relevance to the query gene set,
and the disease phenotypes ranked by their relevance to the hidden target
disease phenotypes of the query gene set, we formulate a learning framework
maximizing the rank coherence with respect to the known disease phenotype-gene
associations. An efficient algorithm coupling ridge regression with label
propagation, and two variants are introduced to find the optimal solution of
the framework. We evaluated the rcNet algorithms and existing baseline methods
with both leave-one-out cross-validation and a task of predicting recently
discovered disease-gene associations in OMIM. The experiments demonstrated that
the rcNet algorithms achieved the best overall rankings compared to the
baselines. To further validate the reproducibility of the performance, we
applied the algorithms to identify the target diseases of novel candidate
disease genes obtained from recent studies of GWAS, DNA copy number variation
analysis, and gene expression profiling. The algorithms ranked the target
disease of the candidate genes at the top of the rank list in many cases across
all the three case studies. The rcNet algorithms are available as a webtool for
disease and gene set association analysis at
http://compbio.cs.umn.edu/dgsa_rcNet.
| [
{
"created": "Fri, 18 Feb 2011 21:01:38 GMT",
"version": "v1"
}
] | 2011-02-22 | [
[
"Hwang",
"TaeHyun",
""
],
[
"Zhang",
"Wei",
""
],
[
"Xie",
"Maoqiang",
""
],
[
"Kuang",
"Rui",
""
]
] | A computational challenge to validate the candidate disease genes identified in a high-throughput genomic study is to elucidate the associations between the set of candidate genes and disease phenotypes. The conventional gene set enrichment analysis often fails to reveal associations between disease phenotypes and the gene sets with a short list of poorly annotated genes, because the existing annotations of disease causative genes are incomplete. We propose a network-based computational approach called rcNet to discover the associations between gene sets and disease phenotypes. Assuming coherent associations between the genes ranked by their relevance to the query gene set, and the disease phenotypes ranked by their relevance to the hidden target disease phenotypes of the query gene set, we formulate a learning framework maximizing the rank coherence with respect to the known disease phenotype-gene associations. An efficient algorithm coupling ridge regression with label propagation, and two variants are introduced to find the optimal solution of the framework. We evaluated the rcNet algorithms and existing baseline methods with both leave-one-out cross-validation and a task of predicting recently discovered disease-gene associations in OMIM. The experiments demonstrated that the rcNet algorithms achieved the best overall rankings compared to the baselines. To further validate the reproducibility of the performance, we applied the algorithms to identify the target diseases of novel candidate disease genes obtained from recent studies of GWAS, DNA copy number variation analysis, and gene expression profiling. The algorithms ranked the target disease of the candidate genes at the top of the rank list in many cases across all the three case studies. The rcNet algorithms are available as a webtool for disease and gene set association analysis at http://compbio.cs.umn.edu/dgsa_rcNet. |
1605.07124 | Mauricio Barahona | Justine Dattani and Mauricio Barahona | Stochastic models of gene transcription with upstream drives: exact
solution and sample path characterization | 10 figures | J Roy Soc Interface 14: 20160833 (2017) | 10.1098/rsif.2016.0833 | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gene transcription is a highly stochastic and dynamic process. As a result,
the mRNA copy number of a given gene is heterogeneous both between cells and
across time. We present a framework to model gene transcription in populations
of cells with time-varying (stochastic or deterministic) transcription and
degradation rates. Such rates can be understood as upstream cellular drives
representing the effect of different aspects of the cellular environment. We
show that the full solution of the master equation contains two components: a
model-specific, upstream effective drive, which encapsulates the effect of
cellular drives (e.g., entrainment, periodicity or promoter randomness), and a
downstream transcriptional Poissonian part, which is common to all models. Our
analytical framework treats cell-to-cell and dynamic variability consistently,
unifying several approaches in the literature. We apply the obtained solution
to characterise different models of experimental relevance, and to explain the
influence on gene transcription of synchrony, stationarity, ergodicity, as well
as the effect of time-scales and other dynamic characteristics of drives. We
also show how the solution can be applied to the analysis of noise sources in
single-cell data, and to reduce the computational cost of stochastic
simulations.
| [
{
"created": "Mon, 23 May 2016 18:14:12 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Jan 2017 02:01:18 GMT",
"version": "v2"
}
] | 2017-01-10 | [
[
"Dattani",
"Justine",
""
],
[
"Barahona",
"Mauricio",
""
]
] | Gene transcription is a highly stochastic and dynamic process. As a result, the mRNA copy number of a given gene is heterogeneous both between cells and across time. We present a framework to model gene transcription in populations of cells with time-varying (stochastic or deterministic) transcription and degradation rates. Such rates can be understood as upstream cellular drives representing the effect of different aspects of the cellular environment. We show that the full solution of the master equation contains two components: a model-specific, upstream effective drive, which encapsulates the effect of cellular drives (e.g., entrainment, periodicity or promoter randomness), and a downstream transcriptional Poissonian part, which is common to all models. Our analytical framework treats cell-to-cell and dynamic variability consistently, unifying several approaches in the literature. We apply the obtained solution to characterise different models of experimental relevance, and to explain the influence on gene transcription of synchrony, stationarity, ergodicity, as well as the effect of time-scales and other dynamic characteristics of drives. We also show how the solution can be applied to the analysis of noise sources in single-cell data, and to reduce the computational cost of stochastic simulations. |
1102.3596 | Sergey Petoukhov | Sergey V. Petoukhov | The genetic code, 8-dimensional hypercomplex numbers and dyadic shifts | 108 pages, 73 figures, added text, added references | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Matrix forms of the representation of the multi-level system of
molecular-genetic alphabets have revealed algebraic properties of this system.
Families of genetic (4*4)- and (8*8)-matrices show unexpected connections of
the genetic system with Walsh functions and Hadamard matrices, which are known
in theory of noise-immunity coding, digital communication and digital
holography. Dyadic-shift decompositions of such genetic matrices lead to sets
of sparse matrices. Each of these sets is closed in relation to multiplication
and defines relevant algebra of hypercomplex numbers. It is shown that genetic
Hadamard matrices are identical to matrix representations of Hamilton
quaternions and its complexification in the case of unit coordinates. The
diversity of known dialects of the genetic code is analyzed from the viewpoint
of the genetic algebras. An algebraic analogy with Punnett squares for
inherited traits is shown. Our results are used in analyzing genetic phenomena.
The statement about existence of the geno-logic code in DNA and epigenetics on
the base of the spectral logic of systems of Boolean functions is put forward.
Our results show promising ways to develop algebraic-logical biology, in
particular, in connection with the logic holography on Walsh functions.
| [
{
"created": "Thu, 17 Feb 2011 14:57:30 GMT",
"version": "v1"
},
{
"created": "Thu, 19 May 2016 16:52:03 GMT",
"version": "v10"
},
{
"created": "Fri, 15 Jul 2016 17:59:30 GMT",
"version": "v11"
},
{
"created": "Thu, 24 Feb 2011 16:32:19 GMT",
"version": "v2"
},
{
"created": "Tue, 15 Mar 2011 05:48:31 GMT",
"version": "v3"
},
{
"created": "Fri, 14 Oct 2011 09:07:52 GMT",
"version": "v4"
},
{
"created": "Fri, 9 Dec 2011 17:29:44 GMT",
"version": "v5"
},
{
"created": "Fri, 23 Dec 2011 18:22:07 GMT",
"version": "v6"
},
{
"created": "Mon, 30 Jan 2012 14:02:18 GMT",
"version": "v7"
},
{
"created": "Sat, 18 Apr 2015 18:29:33 GMT",
"version": "v8"
},
{
"created": "Tue, 12 Apr 2016 15:35:59 GMT",
"version": "v9"
}
] | 2016-07-18 | [
[
"Petoukhov",
"Sergey V.",
""
]
] | Matrix forms of the representation of the multi-level system of molecular-genetic alphabets have revealed algebraic properties of this system. Families of genetic (4*4)- and (8*8)-matrices show unexpected connections of the genetic system with Walsh functions and Hadamard matrices, which are known in theory of noise-immunity coding, digital communication and digital holography. Dyadic-shift decompositions of such genetic matrices lead to sets of sparse matrices. Each of these sets is closed in relation to multiplication and defines relevant algebra of hypercomplex numbers. It is shown that genetic Hadamard matrices are identical to matrix representations of Hamilton quaternions and its complexification in the case of unit coordinates. The diversity of known dialects of the genetic code is analyzed from the viewpoint of the genetic algebras. An algebraic analogy with Punnett squares for inherited traits is shown. Our results are used in analyzing genetic phenomena. The statement about existence of the geno-logic code in DNA and epigenetics on the base of the spectral logic of systems of Boolean functions is put forward. Our results show promising ways to develop algebraic-logical biology, in particular, in connection with the logic holography on Walsh functions. |
1301.7745 | Justin Kinney | Justin B. Kinney, Gurinder S. Atwal | Equitability, mutual information, and the maximal information
coefficient | null | null | 10.1073/pnas.1309933111 | null | q-bio.QM math.ST stat.ME stat.ML stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reshef et al. recently proposed a new statistical measure, the "maximal
information coefficient" (MIC), for quantifying arbitrary dependencies between
pairs of stochastic quantities. MIC is based on mutual information, a
fundamental quantity in information theory that is widely understood to serve
this need. MIC, however, is not an estimate of mutual information. Indeed, it
was claimed that MIC possesses a desirable mathematical property called
"equitability" that mutual information lacks. This was not proven; instead it
was argued solely through the analysis of simulated data. Here we show that
this claim, in fact, is incorrect. First we offer mathematical proof that no
(non-trivial) dependence measure satisfies the definition of equitability
proposed by Reshef et al. We then propose a self-consistent and more general
definition of equitability that follows naturally from the Data Processing
Inequality. Mutual information satisfies this new definition of equitability
while MIC does not. Finally, we show that the simulation evidence offered by
Reshef et al. was artifactual. We conclude that estimating mutual information
is not only practical for many real-world applications, but also provides a
natural solution to the problem of quantifying associations in large data sets.
| [
{
"created": "Thu, 31 Jan 2013 20:44:28 GMT",
"version": "v1"
}
] | 2015-06-12 | [
[
"Kinney",
"Justin B.",
""
],
[
"Atwal",
"Gurinder S.",
""
]
] | Reshef et al. recently proposed a new statistical measure, the "maximal information coefficient" (MIC), for quantifying arbitrary dependencies between pairs of stochastic quantities. MIC is based on mutual information, a fundamental quantity in information theory that is widely understood to serve this need. MIC, however, is not an estimate of mutual information. Indeed, it was claimed that MIC possesses a desirable mathematical property called "equitability" that mutual information lacks. This was not proven; instead it was argued solely through the analysis of simulated data. Here we show that this claim, in fact, is incorrect. First we offer mathematical proof that no (non-trivial) dependence measure satisfies the definition of equitability proposed by Reshef et al. We then propose a self-consistent and more general definition of equitability that follows naturally from the Data Processing Inequality. Mutual information satisfies this new definition of equitability while MIC does not. Finally, we show that the simulation evidence offered by Reshef et al. was artifactual. We conclude that estimating mutual information is not only practical for many real-world applications, but also provides a natural solution to the problem of quantifying associations in large data sets.
2402.14819 | Hari Prasad Sreekrishnapurath Variyam | Hari Prasad SV | A Novel method for Schizophrenia classification using nonlinear features
and neural networks | 4 Pages. 8 Figures, Individual project | null | null | null | q-bio.NC physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | One notable method for recording brainwaves to identify neurological problems
is electroencephalography (hereafter EEG). A trained neurophysician can learn
more about how the brain functions through the use of EEGs. Conventionally,
however, EEGs are only used to examine neurological problems (e.g., seizures).
But abnormal links to neurological circuits can also exist in psychological
illnesses like Schizophrenia. Hence EEGs can be an alternate source of data for
detection and classification of psychological disorders. A study on the
classification of EEG data obtained from healthy individuals and individuals
experiencing schizophrenia is conducted. The inherent nonlinear nature of brain
waves is exploited for the dimensionality reduction of the data. Nonlinear
parameters such as Lyapunov exponent (LE) and Hurst exponent (HE) were selected
as essential features. The EEG data was obtained from the openly available EEG
database of M.V. Lomonosov Moscow State University. To perform noise reduction
of the data, a more recently developed Tunable Q factor based wavelet transform
(TQWT) is used. Finally, for the classification, the 16-channel EEG time series
is converted into spatial heatmaps using the aforementioned features. A
convolutional neural network (CNN) is designed and trained with the modified
data format for classification.
| [
{
"created": "Sat, 30 Dec 2023 16:28:05 GMT",
"version": "v1"
}
] | 2024-02-26 | [
[
"SV",
"Hari Prasad",
""
]
] | One notable method for recording brainwaves to identify neurological problems is electroencephalography (hereafter EEG). A trained neurophysician can learn more about how the brain functions through the use of EEGs. Conventionally, however, EEGs are only used to examine neurological problems (e.g., seizures). But abnormal links to neurological circuits can also exist in psychological illnesses like Schizophrenia. Hence EEGs can be an alternate source of data for detection and classification of psychological disorders. A study on the classification of EEG data obtained from healthy individuals and individuals experiencing schizophrenia is conducted. The inherent nonlinear nature of brain waves is exploited for the dimensionality reduction of the data. Nonlinear parameters such as Lyapunov exponent (LE) and Hurst exponent (HE) were selected as essential features. The EEG data was obtained from the openly available EEG database of M.V. Lomonosov Moscow State University. To perform noise reduction of the data, a more recently developed Tunable Q factor based wavelet transform (TQWT) is used. Finally, for the classification, the 16-channel EEG time series is converted into spatial heatmaps using the aforementioned features. A convolutional neural network (CNN) is designed and trained with the modified data format for classification.
2005.08956 | Brian Mathias | Brian Mathias, Andrea Klingebiel, Gesa Hartwigsen, Leona Sureth,
Manuela Macedonia, Katja M. Mayer, Katharina von Kriegstein | Motor cortex causally contributes to auditory word recognition following
sensorimotor-enriched vocabulary training | arXiv admin note: text overlap with arXiv:1903.04201 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The role of the motor cortex in perceptual and cognitive functions is highly
controversial. Here, we investigated the hypothesis that the motor cortex can
be instrumental for translating foreign language vocabulary. Participants were
trained on foreign language (L2) words and their native language translations
over four consecutive days. L2 words were accompanied by complementary gestures
(sensorimotor enrichment) or pictures (sensory enrichment). Following training,
participants translated the auditorily-presented L2 words that they had learned
and repetitive transcranial magnetic stimulation (rTMS) was applied to the
bilateral posterior motor cortices. Compared to sham stimulation, effective
perturbation by rTMS slowed down the translation of sensorimotor-enriched L2
words - but not sensory-enriched L2 words. This finding suggests that
sensorimotor-enriched training induced changes in L2 representations within the
motor cortex, which in turn facilitated the translation of L2 words. The motor
cortex may play a causal role in precipitating sensorimotor-based learning
benefits, and may directly aid in remembering the native language translations
of foreign language words following sensorimotor-enriched training. These
findings support multisensory theories of learning while challenging
reactivation-based theories.
| [
{
"created": "Sun, 17 May 2020 11:50:40 GMT",
"version": "v1"
}
] | 2020-05-20 | [
[
"Mathias",
"Brian",
""
],
[
"Klingebiel",
"Andrea",
""
],
[
"Hartwigsen",
"Gesa",
""
],
[
"Sureth",
"Leona",
""
],
[
"Macedonia",
"Manuela",
""
],
[
"Mayer",
"Katja M.",
""
],
[
"von Kriegstein",
"Katharina",
""
]
] | The role of the motor cortex in perceptual and cognitive functions is highly controversial. Here, we investigated the hypothesis that the motor cortex can be instrumental for translating foreign language vocabulary. Participants were trained on foreign language (L2) words and their native language translations over four consecutive days. L2 words were accompanied by complementary gestures (sensorimotor enrichment) or pictures (sensory enrichment). Following training, participants translated the auditorily-presented L2 words that they had learned and repetitive transcranial magnetic stimulation (rTMS) was applied to the bilateral posterior motor cortices. Compared to sham stimulation, effective perturbation by rTMS slowed down the translation of sensorimotor-enriched L2 words - but not sensory-enriched L2 words. This finding suggests that sensorimotor-enriched training induced changes in L2 representations within the motor cortex, which in turn facilitated the translation of L2 words. The motor cortex may play a causal role in precipitating sensorimotor-based learning benefits, and may directly aid in remembering the native language translations of foreign language words following sensorimotor-enriched training. These findings support multisensory theories of learning while challenging reactivation-based theories. |
2107.05478 | Sen Nie | Cheng Yuan, Zu-Yu Qian, Shi-Ming Chen, Sen Nie | Structural characteristics in network control of molecular multiplex
networks | null | null | null | null | q-bio.QM physics.bio-ph q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Numerous real-world systems can be naturally modeled as multilayer networks,
enabling an efficient way to characterize those complex systems. Much evidence
in the context of systems biology indicates that the connections between
different molecular networks can dramatically impact global network functions.
Here, we focus on molecular multiplex networks coupled by the transcriptional
regulatory network (TRN) and the protein-protein interaction (PPI) network,
exploring the controllability of, and the control energy required by, these
types of molecular multiplex networks. We find that the driver nodes tend to
avoid essential or pathogen-related genes. Yet, imposing external inputs on
these essential or pathogen-related genes can remarkably reduce the energy
cost, implying their crucial role in network control. Moreover, we find that
fewer minimal driver nodes as well as lower energy requirements are associated
with disassortative coupling between the TRN and PPI networks. Our findings in
several species provide a comprehensive understanding of genes' roles in
biology and network control.
| [
{
"created": "Mon, 12 Jul 2021 14:50:15 GMT",
"version": "v1"
}
] | 2021-07-13 | [
[
"Yuan",
"Cheng",
""
],
[
"Qian",
"Zu-Yu",
""
],
[
"Chen",
"Shi-Ming",
""
],
[
"Nie",
"Sen",
""
]
] | Numerous real-world systems can be naturally modeled as multilayer networks, enabling an efficient way to characterize those complex systems. Much evidence in the context of systems biology indicates that the connections between different molecular networks can dramatically impact global network functions. Here, we focus on molecular multiplex networks coupled by the transcriptional regulatory network (TRN) and the protein-protein interaction (PPI) network, exploring the controllability of, and the control energy required by, these types of molecular multiplex networks. We find that the driver nodes tend to avoid essential or pathogen-related genes. Yet, imposing external inputs on these essential or pathogen-related genes can remarkably reduce the energy cost, implying their crucial role in network control. Moreover, we find that fewer minimal driver nodes as well as lower energy requirements are associated with disassortative coupling between the TRN and PPI networks. Our findings in several species provide a comprehensive understanding of genes' roles in biology and network control. |
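The minimal-driver-node count that underlies controllability analyses like the one above is standardly obtained from the maximum-matching formulation of structural controllability (Liu et al.): N_D = max(N - |M|, 1). The toy augmenting-path matching below is a generic illustration, not the authors' code.

```python
def min_driver_nodes(n, edges):
    """Minimum driver nodes for structural controllability of a directed
    network: build the bipartite graph of out-copies vs in-copies, find a
    maximum matching M by augmenting paths, return max(n - |M|, 1)."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
    match_in = [-1] * n  # match_in[v] = out-node currently matched to node v

    def try_augment(u, seen):
        # Try to match out-copy of u, re-routing earlier matches if needed.
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            if match_in[v] == -1 or try_augment(match_in[v], seen):
                match_in[v] = u
                return True
        return False

    matched = sum(try_augment(u, set()) for u in range(n))
    return max(n - matched, 1)
```

For example, a directed star 0→{1,2,3} needs three driver nodes, while a directed path needs only one, mirroring how matching structure (not size alone) sets the control cost.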
1310.5466 | Chuan-Chao Wang | Chuan-Chao Wang, Ling-Xiang Wang, Manfei Zhang, Dali Yao, Li Jin, Hui
Li | Present Y chromosomes support the Persian ancestry of Sayyid Ajjal Shams
al-Din Omar and Eminent Navigator Zheng He | 5 pages, 1 figure | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sayyid Ajjal is the ancestor of many Muslims in areas all across China. One
of his descendants is the famous navigator of the Ming Dynasty, Zheng He, who
led the largest armada in the 15th-century world. The origin of Sayyid Ajjal's
family remains unclear, although many studies have been done on this topic of
Muslim history. In this paper, we studied the Y chromosomes of his present
descendants and found that they all carry haplogroup L1a-M76, supporting a
southern Persian origin.
| [
{
"created": "Mon, 21 Oct 2013 08:52:47 GMT",
"version": "v1"
}
] | 2013-10-22 | [
[
"Wang",
"Chuan-Chao",
""
],
[
"Wang",
"Ling-Xiang",
""
],
[
"Zhang",
"Manfei",
""
],
[
"Yao",
"Dali",
""
],
[
"Jin",
"Li",
""
],
[
"Li",
"Hui",
""
]
] | Sayyid Ajjal is the ancestor of many Muslims in areas all across China. One of his descendants is the famous navigator of the Ming Dynasty, Zheng He, who led the largest armada in the 15th-century world. The origin of Sayyid Ajjal's family remains unclear, although many studies have been done on this topic of Muslim history. In this paper, we studied the Y chromosomes of his present descendants and found that they all carry haplogroup L1a-M76, supporting a southern Persian origin. |
1709.02852 | Steven Tompson | Steven H. Tompson, Ari E. Kahn, Emily B. Falk, Jean M. Vettel,
Danielle S. Bassett | Individual Differences in Learning Social and Non-Social Network
Structures | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Learning about complex associations between pieces of information enables
individuals to quickly adjust their expectations and develop mental models.
Yet, the degree to which humans can learn higher-order information about
complex associations is not well understood; nor is it known whether the
learning process differs for social and non-social information. Here, we employ
a paradigm in which the order of stimulus presentation forms temporal
associations between the stimuli, collectively constituting a complex network
structure. We examined individual differences in the ability to learn network
topology for which stimuli were social versus non-social. Although participants
were able to learn both social and non-social networks, their performance in
social network learning was uncorrelated with their performance in non-social
network learning. Importantly, social traits, including social orientation and
perspective-taking, uniquely predicted the learning of social networks but not
the learning of non-social networks. Taken together, our results suggest that
the process of learning higher-order structure in social networks is
independent from the process of learning higher-order structure in non-social
networks. Our study design provides a promising approach to identify
neurophysiological drivers of social network versus non-social network
learning, extending our knowledge about the impact of individual differences on
these learning processes. Implications for how people learn and adapt to new
social contexts that require integration into a new social network are
discussed.
| [
{
"created": "Fri, 8 Sep 2017 20:31:46 GMT",
"version": "v1"
}
] | 2017-09-12 | [
[
"Tompson",
"Steven H.",
""
],
[
"Kahn",
"Ari E.",
""
],
[
"Falk",
"Emily B.",
""
],
[
"Vettel",
"Jean M.",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | Learning about complex associations between pieces of information enables individuals to quickly adjust their expectations and develop mental models. Yet, the degree to which humans can learn higher-order information about complex associations is not well understood; nor is it known whether the learning process differs for social and non-social information. Here, we employ a paradigm in which the order of stimulus presentation forms temporal associations between the stimuli, collectively constituting a complex network structure. We examined individual differences in the ability to learn network topology for which stimuli were social versus non-social. Although participants were able to learn both social and non-social networks, their performance in social network learning was uncorrelated with their performance in non-social network learning. Importantly, social traits, including social orientation and perspective-taking, uniquely predicted the learning of social networks but not the learning of non-social networks. Taken together, our results suggest that the process of learning higher-order structure in social networks is independent from the process of learning higher-order structure in non-social networks. Our study design provides a promising approach to identify neurophysiological drivers of social network versus non-social network learning, extending our knowledge about the impact of individual differences on these learning processes. Implications for how people learn and adapt to new social contexts that require integration into a new social network are discussed. |
1912.02331 | James Hope Mr | James Hope, Narrendar Ravi Chandra, Frederique Vanholsbeeck, Andrew
McDaid | Investigation of ephaptic interactions in peripheral nerve of sheep
using 6 kHz subthreshold currents | 12 pages, 5 figures, journal submission | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The objective of this work was to determine whether application of
subthreshold currents to the peripheral nerve increases the excitability of the
underlying nerve fibres, and how this increased excitability would alter neural
activity as it propagates through the subthreshold currents. Experiments were
performed on two Romney cross-breed sheep in vivo, by applying subthreshold
currents either at the stimulus site or between the stimulus and recording
sites. Neural recordings were obtained from nerve cuff implanted on the
peroneal or sciatic nerve branches, while stimulus was applied to either the
peroneal nerve or pins placed through the lower hindshank. Results showed that
subthreshold currents applied to the same site as stimulus increased excitation
of underlying nerve fibres (p < 0.0001). With stimulus and subthreshold
currents applied to different sites on the peroneal nerve, the primary CAP in
the sciatic displayed a temporal shift of -2.5 to -3 us which agreed with
statistically significant changes in the CAP waveform (p<0.02). These findings
contribute to the understanding of mechanisms in myelinated fibres of
subthreshold current neuromodulation therapies.
| [
{
"created": "Thu, 5 Dec 2019 01:17:30 GMT",
"version": "v1"
}
] | 2019-12-06 | [
[
"Hope",
"James",
""
],
[
"Chandra",
"Narrendar Ravi",
""
],
[
"Vanholsbeeck",
"Frederique",
""
],
[
"McDaid",
"Andrew",
""
]
] | The objective of this work was to determine whether application of subthreshold currents to the peripheral nerve increases the excitability of the underlying nerve fibres, and how this increased excitability would alter neural activity as it propagates through the subthreshold currents. Experiments were performed on two Romney cross-breed sheep in vivo, by applying subthreshold currents either at the stimulus site or between the stimulus and recording sites. Neural recordings were obtained from nerve cuff implanted on the peroneal or sciatic nerve branches, while stimulus was applied to either the peroneal nerve or pins placed through the lower hindshank. Results showed that subthreshold currents applied to the same site as stimulus increased excitation of underlying nerve fibres (p < 0.0001). With stimulus and subthreshold currents applied to different sites on the peroneal nerve, the primary CAP in the sciatic displayed a temporal shift of -2.5 to -3 us which agreed with statistically significant changes in the CAP waveform (p<0.02). These findings contribute to the understanding of mechanisms in myelinated fibres of subthreshold current neuromodulation therapies. |
2201.01668 | Daniel Graham | Donald Spector, Daniel Graham | Blankets, Heat, and Why Free Energy Has Not Illuminated the Workings of
the Brain | 3 pages, 0 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | What can we hope to learn about brains from the free energy principle? In
adopting the "primordial soup" physical model, Bruineberg et al. perpetuate the
unsupported notion that the free energy principle has a meaningful
physical--and neuronal--interpretation. We examine how minimization of free
energy arises in physical contexts, and what this can and cannot tell us about
brains.
| [
{
"created": "Wed, 5 Jan 2022 15:49:48 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Jan 2022 17:05:56 GMT",
"version": "v2"
}
] | 2022-01-14 | [
[
"Spector",
"Donald",
""
],
[
"Graham",
"Daniel",
""
]
] | What can we hope to learn about brains from the free energy principle? In adopting the "primordial soup" physical model, Bruineberg et al. perpetuate the unsupported notion that the free energy principle has a meaningful physical--and neuronal--interpretation. We examine how minimization of free energy arises in physical contexts, and what this can and cannot tell us about brains. |
1203.4802 | C. Titus Brown | C. Titus Brown, Adina Howe, Qingpeng Zhang, Alexis B. Pyrkosz, Timothy
H. Brom | A Reference-Free Algorithm for Computational Normalization of Shotgun
Sequencing Data | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/3.0/ | Deep shotgun sequencing and analysis of genomes, transcriptomes, amplified
single-cell genomes, and metagenomes has enabled investigation of a wide range
of organisms and ecosystems. However, sampling variation in short-read data
sets and high sequencing error rates of modern sequencers present many new
computational challenges in data interpretation. These challenges have led to
the development of new classes of mapping tools and {\em de novo} assemblers.
These algorithms are challenged by the continued improvement in sequencing
throughput. We here describe digital normalization, a single-pass computational
algorithm that systematizes coverage in shotgun sequencing data sets, thereby
decreasing sampling variation, discarding redundant data, and removing the
majority of errors. Digital normalization substantially reduces the size of
shotgun data sets and decreases the memory and time requirements for {\em de
novo} sequence assembly, all without significantly impacting content of the
generated contigs. We apply digital normalization to the assembly of microbial
genomic data, amplified single-cell genomic data, and transcriptomic data. Our
implementation is freely available for use and modification.
| [
{
"created": "Wed, 21 Mar 2012 18:58:45 GMT",
"version": "v1"
},
{
"created": "Mon, 21 May 2012 05:49:35 GMT",
"version": "v2"
}
] | 2012-05-22 | [
[
"Brown",
"C. Titus",
""
],
[
"Howe",
"Adina",
""
],
[
"Zhang",
"Qingpeng",
""
],
[
"Pyrkosz",
"Alexis B.",
""
],
[
"Brom",
"Timothy H.",
""
]
] | Deep shotgun sequencing and analysis of genomes, transcriptomes, amplified single-cell genomes, and metagenomes has enabled investigation of a wide range of organisms and ecosystems. However, sampling variation in short-read data sets and high sequencing error rates of modern sequencers present many new computational challenges in data interpretation. These challenges have led to the development of new classes of mapping tools and {\em de novo} assemblers. These algorithms are challenged by the continued improvement in sequencing throughput. We here describe digital normalization, a single-pass computational algorithm that systematizes coverage in shotgun sequencing data sets, thereby decreasing sampling variation, discarding redundant data, and removing the majority of errors. Digital normalization substantially reduces the size of shotgun data sets and decreases the memory and time requirements for {\em de novo} sequence assembly, all without significantly impacting content of the generated contigs. We apply digital normalization to the assembly of microbial genomic data, amplified single-cell genomic data, and transcriptomic data. Our implementation is freely available for use and modification. |
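The single-pass median-k-mer-coverage rule at the heart of digital normalization, described in the abstract above, can be sketched as follows. The real implementation (the khmer software) uses a probabilistic count-min-style counter to bound memory; this toy version uses an exact dictionary, and `k` and `cutoff` are illustrative values.

```python
from collections import defaultdict
from statistics import median

def digital_normalize(reads, k=4, cutoff=3):
    """Keep a read only if the median count of its k-mers among reads kept so
    far is below the coverage cutoff; update counts only for kept reads.
    This systematizes coverage and discards redundant reads in one pass."""
    counts = defaultdict(int)
    kept = []
    for read in reads:
        kmers = [read[i:i + k] for i in range(len(read) - k + 1)]
        if not kmers:
            continue  # read shorter than k
        if median(counts[km] for km in kmers) < cutoff:
            kept.append(read)
            for km in kmers:
                counts[km] += 1
    return kept
```

Feeding ten identical reads through this filter keeps only the first `cutoff` copies, while a novel read is always retained, which is exactly the "decrease sampling variation, keep new information" behavior the abstract describes.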
1707.03616 | Jorge Hidalgo | Tommaso Spanio, Jorge Hidalgo and Miguel A. Mu\~noz | Impact of environmental colored noise in single-species population
dynamics | 10 pages, 3 figures | Phys. Rev. E 96, 042301 (2017) | 10.1103/PhysRevE.96.042301 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Variability in external conditions has important consequences for the
dynamics and the organization of biological systems. In many cases, the
characteristic timescale of environmental changes as well as their correlations
play a fundamental role in the way living systems adapt and respond to them. A
proper mathematical approach to understand population dynamics, thus, requires
approaches more refined than, e.g., simple white-noise approximations. To shed
further light onto this problem, in this paper we propose a unifying framework
based on different analytical and numerical tools available to deal with
"colored" environmental noise. In particular, we employ a "unified colored
noise approximation" to map the original problem into an effective one with
white noise, and then we apply a standard path integral approach to gain
analytical understanding. For the sake of specificity, we present our approach
using as a guideline a variation of the contact process--which can also be seen
as a birth-death process of the Malthus-Verhulst class--where the propagation
or birth rate varies stochastically in time. Our approach allows us to tackle
in a systematic manner some of the relevant questions concerning population
dynamics under environmental variability, such as determining the stationary
population density, establishing the conditions under which a population may
become extinct, and estimating extinction times. We focus on the emerging phase
diagram and its possible phase transitions, underlining how these are affected
by the presence of environmental noise time-correlations.
| [
{
"created": "Wed, 12 Jul 2017 09:54:21 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Oct 2017 17:17:15 GMT",
"version": "v2"
}
] | 2017-10-03 | [
[
"Spanio",
"Tommaso",
""
],
[
"Hidalgo",
"Jorge",
""
],
[
"Muñoz",
"Miguel A.",
""
]
] | Variability in external conditions has important consequences for the dynamics and the organization of biological systems. In many cases, the characteristic timescale of environmental changes as well as their correlations play a fundamental role in the way living systems adapt and respond to them. A proper mathematical approach to understand population dynamics, thus, requires approaches more refined than, e.g., simple white-noise approximations. To shed further light onto this problem, in this paper we propose a unifying framework based on different analytical and numerical tools available to deal with "colored" environmental noise. In particular, we employ a "unified colored noise approximation" to map the original problem into an effective one with white noise, and then we apply a standard path integral approach to gain analytical understanding. For the sake of specificity, we present our approach using as a guideline a variation of the contact process--which can also be seen as a birth-death process of the Malthus-Verhulst class--where the propagation or birth rate varies stochastically in time. Our approach allows us to tackle in a systematic manner some of the relevant questions concerning population dynamics under environmental variability, such as determining the stationary population density, establishing the conditions under which a population may become extinct, and estimating extinction times. We focus on the emerging phase diagram and its possible phase transitions, underlining how these are affected by the presence of environmental noise time-correlations. |
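The class of models treated in the abstract above, a Malthus-Verhulst population whose birth rate fluctuates with a finite correlation time, can be illustrated with a minimal simulation in which an Ornstein-Uhlenbeck process supplies the colored noise. Parameter values are illustrative, and this is a Langevin-level sketch rather than the paper's unified-colored-noise or path-integral treatment.

```python
import numpy as np

def simulate_colored_logistic(T=200.0, dt=0.01, x0=0.5, b0=1.0, d=0.5,
                              tau=1.0, sigma=0.3, seed=1):
    """Malthus-Verhulst dynamics dx/dt = (b0 + eta(t)) x - d x^2, where eta is
    Ornstein-Uhlenbeck colored noise with correlation time tau and stationary
    std sigma. Euler-Maruyama step for x; exact one-step update for eta."""
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    x = np.empty(n)
    x[0] = x0
    eta = 0.0
    a = np.exp(-dt / tau)              # OU decay factor over one time step
    s = sigma * np.sqrt(1.0 - a * a)   # kick size preserving stationary variance
    for i in range(1, n):
        eta = a * eta + s * rng.standard_normal()
        xp = x[i - 1]
        x[i] = max(xp + dt * ((b0 + eta) * xp - d * xp * xp), 0.0)
    return x
```

With these parameters the trajectory fluctuates around the deterministic fixed point b0/d = 2; raising sigma or tau pushes the population toward the extinction boundary, which is the regime the paper's phase diagram characterizes.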
1308.6018 | Clinton Goss Ph.D. | Clinton F. Goss and Eric B. Miller | Dynamic Metrics of Heart Rate Variability | Revised August 29, 2013. 4 pages, 2 figures, 1 table | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Numerous metrics of heart rate variability (HRV) have been described,
analyzed, and compared in the literature. However, they rarely cover the actual
metrics used in a class of HRV data acquisition devices - those designed
primarily to produce real-time metrics. This paper characterizes a class of
metrics that we term dynamic metrics. We also report the results of a pilot
study which compares one such dynamic metric, based on photoplethysmographic
data using a moving sampling window set to the length of an estimated breath
cycle (EBC), with established HRV metrics. The results show high correlation
coefficients between the dynamic EBC metrics and the established static SDNN
metric (standard deviation of Normal-to-Normal) based on electrocardiography.
These results demonstrate the usefulness of data acquisition devices designed
for real-time metrics.
| [
{
"created": "Tue, 27 Aug 2013 23:48:10 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Aug 2013 15:57:03 GMT",
"version": "v2"
}
] | 2013-08-30 | [
[
"Goss",
"Clinton F.",
""
],
[
"Miller",
"Eric B.",
""
]
] | Numerous metrics of heart rate variability (HRV) have been described, analyzed, and compared in the literature. However, they rarely cover the actual metrics used in a class of HRV data acquisition devices - those designed primarily to produce real-time metrics. This paper characterizes a class of metrics that we term dynamic metrics. We also report the results of a pilot study which compares one such dynamic metric, based on photoplethysmographic data using a moving sampling window set to the length of an estimated breath cycle (EBC), with established HRV metrics. The results show high correlation coefficients between the dynamic EBC metrics and the established static SDNN metric (standard deviation of Normal-to-Normal) based on electrocardiography. These results demonstrate the usefulness of data acquisition devices designed for real-time metrics. |
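The contrast between a static SDNN and a windowed "dynamic" metric, as drawn in the abstract above, can be sketched as follows. The window length stands in for the paper's estimated-breath-cycle (EBC) length, and the helper names are hypothetical.

```python
from statistics import pstdev

def sdnn(rr_ms):
    """Static SDNN: standard deviation of all normal-to-normal intervals (ms)."""
    return pstdev(rr_ms)

def dynamic_sdnn(rr_ms, window):
    """Dynamic-metric sketch: SDNN recomputed over a moving window of `window`
    beats (here standing in for an EBC-length window), one value per position."""
    return [pstdev(rr_ms[i:i + window]) for i in range(len(rr_ms) - window + 1)]
```

The static metric compresses a whole recording into one number, while the dynamic variant yields a time series suitable for real-time display, which is the device class the paper characterizes.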
1906.00871 | Yangkun Du Mr | Yangkun Du, Chaofeng L\"u, Michel Destrade, Weiqiu Chen | Influence of Initial Residual Stress on Growth and Pattern Creation for
a Layered Aorta | null | null | null | null | q-bio.TO cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Residual stress is ubiquitous and indispensable in most biological and
artificial materials, where it sustains and optimizes many biological and
functional mechanisms. The theory of volume growth, starting from a stress-free
initial state, is widely used to explain the creation and evolution of
growth-induced residual stress and the resulting changes in shape, and to model
how growing bio-tissues such as arteries and solid tumors develop a strategy of
pattern creation according to geometrical and material parameters. This
modelling provides promising avenues for designing and directing some
appropriate morphology of a given tissue or organ and achieve some targeted
biomedical function. In this paper, we rely on a modified, augmented theory to
reveal how we can obtain growth-induced residual stress and pattern evolution
of a layered artery by starting from an existing, non-zero initial residual
stress state. We use experimentally determined residual stress distributions of
aged bi-layered human aortas and quantify their influence by a magnitude
factor. Our results show that initial residual stress has a more significant
impact on residual stress accumulation and the subsequent evolution of patterns
than geometry and material parameters. Additionally, we provide an essential
explanation for growth-induced patterns driven by differential growth coupled
to an initial residual stress. Finally, we show that initial residual stress is
a readily available way to control growth-induced pattern creation for tissues
and thus may provide a promising inspiration for biomedical engineering.
| [
{
"created": "Mon, 3 Jun 2019 15:27:54 GMT",
"version": "v1"
}
] | 2019-06-04 | [
[
"Du",
"Yangkun",
""
],
[
"Lü",
"Chaofeng",
""
],
[
"Destrade",
"Michel",
""
],
[
"Chen",
"Weiqiu",
""
]
] | Residual stress is ubiquitous and indispensable in most biological and artificial materials, where it sustains and optimizes many biological and functional mechanisms. The theory of volume growth, starting from a stress-free initial state, is widely used to explain the creation and evolution of growth-induced residual stress and the resulting changes in shape, and to model how growing bio-tissues such as arteries and solid tumors develop a strategy of pattern creation according to geometrical and material parameters. This modelling provides promising avenues for designing and directing some appropriate morphology of a given tissue or organ and achieve some targeted biomedical function. In this paper, we rely on a modified, augmented theory to reveal how we can obtain growth-induced residual stress and pattern evolution of a layered artery by starting from an existing, non-zero initial residual stress state. We use experimentally determined residual stress distributions of aged bi-layered human aortas and quantify their influence by a magnitude factor. Our results show that initial residual stress has a more significant impact on residual stress accumulation and the subsequent evolution of patterns than geometry and material parameters. Additionally, we provide an essential explanation for growth-induced patterns driven by differential growth coupled to an initial residual stress. Finally, we show that initial residual stress is a readily available way to control growth-induced pattern creation for tissues and thus may provide a promising inspiration for biomedical engineering. |
2103.14988 | Zhiwei Chen | Zao Liu, Kan Song, Zhiwei Chen | NMRPy: a novel NMR scripting system to implement artificial intelligence
and advanced applications | 19 pages, 6 figures | null | null | null | q-bio.QM cs.PL | http://creativecommons.org/licenses/by-sa/4.0/ | Background: Software is an important window for offering the variety of
complex instrument control and data processing required by a nuclear magnetic
resonance (NMR) spectrometer. NMR software should allow researchers to flexibly
implement various functionality according to the requirements of applications.
A scripting system can offer an open environment for NMR users to write custom
programs with basic libraries. Emerging technologies, especially multivariate
statistical analysis and artificial intelligence, have been successfully
applied to NMR applications such as metabolomics and biomacromolecules. A
scripting system should therefore support more complex NMR libraries, enabling
these emerging technologies to be implemented easily in the scripting
environment. Result: Here, a novel NMR scripting system named "NMRPy" is
introduced. In this scripting system, both Java-based NMR methods and original
CPython-based libraries are supported. A module was built as a bridge to
integrate the runtime environments of Java and CPython: it works as an
extension in the CPython environment and interacts with the Java side through
the Java Native Interface. Leveraging the bridge, Java-based instrument control
and data processing methods can be called in CPython style. Compared with
traditional scripting systems, NMRPy makes it easier for NMR researchers to
develop complex functionality with fast numerical computation, multivariate
statistical analysis, deep learning, etc. Non-uniform sampling and
deep-learning-based protein structure prediction methods can be conveniently
integrated into NMRPy. Conclusion: NMRPy offers a user-friendly environment for
implementing custom functionality, leveraging its powerful basic NMR libraries
and rich CPython libraries. NMR applications with emerging technologies can be
easily integrated. The scripting system is free of charge and can be downloaded
by visiting http://www.spinstudioj.net/nmrpy.
| [
{
"created": "Sat, 27 Mar 2021 20:28:18 GMT",
"version": "v1"
}
] | 2021-03-30 | [
[
"Liu",
"Zao",
""
],
[
"Song",
"Kan",
""
],
[
"Chen",
"Zhiwei",
""
]
] | Background: Software is an important window for offering the variety of complex instrument control and data processing required by a nuclear magnetic resonance (NMR) spectrometer. NMR software should allow researchers to flexibly implement various functionality according to the requirements of applications. A scripting system can offer an open environment for NMR users to write custom programs with basic libraries. Emerging technologies, especially multivariate statistical analysis and artificial intelligence, have been successfully applied to NMR applications such as metabolomics and biomacromolecules. A scripting system should therefore support more complex NMR libraries, enabling these emerging technologies to be implemented easily in the scripting environment. Result: Here, a novel NMR scripting system named "NMRPy" is introduced. In this scripting system, both Java-based NMR methods and original CPython-based libraries are supported. A module was built as a bridge to integrate the runtime environments of Java and CPython: it works as an extension in the CPython environment and interacts with the Java side through the Java Native Interface. Leveraging the bridge, Java-based instrument control and data processing methods can be called in CPython style. Compared with traditional scripting systems, NMRPy makes it easier for NMR researchers to develop complex functionality with fast numerical computation, multivariate statistical analysis, deep learning, etc. Non-uniform sampling and deep-learning-based protein structure prediction methods can be conveniently integrated into NMRPy. Conclusion: NMRPy offers a user-friendly environment for implementing custom functionality, leveraging its powerful basic NMR libraries and rich CPython libraries. NMR applications with emerging technologies can be easily integrated. The scripting system is free of charge and can be downloaded by visiting http://www.spinstudioj.net/nmrpy. |
1011.0466 | Thomas Butler | Thomas Butler, Nigel Goldenfeld | Fluctuation-driven Turing patterns | Minor changes to manuscript | null | 10.1103/PhysRevE.84.011112 | null | q-bio.OT nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Models of diffusion driven pattern formation that rely on the Turing
mechanism are utilized in many areas of science. However, many such models
suffer from the defect of requiring fine tuning of parameters or an unrealistic
separation of scales in the diffusivities of the constituents of the system in
order to predict the formation of spatial patterns. In the context of a very
generic model of ecological pattern formation, we show that the inclusion of
intrinsic noise in Turing models leads to the formation of "quasi-patterns"
that form in generic regions of parameter space and are experimentally
distinguishable from standard Turing patterns. The existence of quasi-patterns
removes the need for unphysical fine tuning or separation of scales in the
application of Turing models to real systems.
| [
{
"created": "Mon, 1 Nov 2010 23:03:07 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Nov 2010 21:03:47 GMT",
"version": "v2"
},
{
"created": "Mon, 31 Jan 2011 22:17:12 GMT",
"version": "v3"
}
] | 2015-03-17 | [
[
"Butler",
"Thomas",
""
],
[
"Goldenfeld",
"Nigel",
""
]
] | Models of diffusion driven pattern formation that rely on the Turing mechanism are utilized in many areas of science. However, many such models suffer from the defect of requiring fine tuning of parameters or an unrealistic separation of scales in the diffusivities of the constituents of the system in order to predict the formation of spatial patterns. In the context of a very generic model of ecological pattern formation, we show that the inclusion of intrinsic noise in Turing models leads to the formation of "quasi-patterns" that form in generic regions of parameter space and are experimentally distinguishable from standard Turing patterns. The existence of quasi-patterns removes the need for unphysical fine tuning or separation of scales in the application of Turing models to real systems. |
1912.04337 | Ramon Grima | Casper H. L. Beentjes, Ruben Perez-Carrasco, Ramon Grima | Exact solution of stochastic gene expression models with bursting, cell
cycle and replication dynamics | 7 figures | Phys. Rev. E 101, 032403 (2020) | 10.1103/PhysRevE.101.032403 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The bulk of stochastic gene expression models in the literature do not have
an explicit description of the age of a cell within a generation and hence they
cannot capture events such as cell division and DNA replication. Instead, many
models incorporate cell cycle implicitly by assuming that dilution due to cell
division can be described by an effective decay reaction with first-order
kinetics. If it is further assumed that protein production occurs in bursts
then the stationary protein distribution is a negative binomial. Here we seek
to understand how accurate these implicit models are when compared with more
detailed models of stochastic gene expression. We derive the exact stationary
solution of the chemical master equation describing bursty protein dynamics,
binomial partitioning at mitosis, age-dependent transcription dynamics
including replication, and random interdivision times sampled from Erlang or
more general distributions; the solution is different for single lineage and
population snapshot settings. We show that protein distributions are well
approximated by the solution of implicit models (a negative binomial) when the
mean number of mRNAs produced per cycle is low and the cell cycle length
variability is large. When these conditions are not met, the distributions are
either almost bimodal or else display very flat regions near the mode and
cannot be described by implicit models. We also show that for genes with low
transcription rates, the size of protein noise has a strong dependence on the
replication time, it is almost independent of cell cycle variability for
lineage measurements and increases with cell cycle variability for population
snapshot measurements. In contrast for large transcription rates, the size of
protein noise is independent of replication time and increases with cell cycle
variability for both lineage and population measurements.
| [
{
"created": "Mon, 9 Dec 2019 19:34:53 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Feb 2020 21:02:16 GMT",
"version": "v2"
}
] | 2020-03-11 | [
[
"Beentjes",
"Casper H. L.",
""
],
[
"Perez-Carrasco",
"Ruben",
""
],
[
"Grima",
"Ramon",
""
]
] | The bulk of stochastic gene expression models in the literature do not have an explicit description of the age of a cell within a generation and hence they cannot capture events such as cell division and DNA replication. Instead, many models incorporate cell cycle implicitly by assuming that dilution due to cell division can be described by an effective decay reaction with first-order kinetics. If it is further assumed that protein production occurs in bursts then the stationary protein distribution is a negative binomial. Here we seek to understand how accurate these implicit models are when compared with more detailed models of stochastic gene expression. We derive the exact stationary solution of the chemical master equation describing bursty protein dynamics, binomial partitioning at mitosis, age-dependent transcription dynamics including replication, and random interdivision times sampled from Erlang or more general distributions; the solution is different for single lineage and population snapshot settings. We show that protein distributions are well approximated by the solution of implicit models (a negative binomial) when the mean number of mRNAs produced per cycle is low and the cell cycle length variability is large. When these conditions are not met, the distributions are either almost bimodal or else display very flat regions near the mode and cannot be described by implicit models. We also show that for genes with low transcription rates, the size of protein noise has a strong dependence on the replication time, it is almost independent of cell cycle variability for lineage measurements and increases with cell cycle variability for population snapshot measurements. In contrast for large transcription rates, the size of protein noise is independent of replication time and increases with cell cycle variability for both lineage and population measurements. |
1304.6613 | Tom Kelsey | Thomas W. Kelsey, Sarah K. Dodwell, A. Graham Wilkinson, Tine Greve,
Claus Y. Andersen, Richard A. Anderson, W. Hamish B. Wallace | Ovarian volume throughout life: a validated normative model | 14 pages, 7 figures | null | 10.1371/journal.pone.0071465 | null | q-bio.TO cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The measurement of ovarian volume has been shown to be a useful indirect
indicator of the ovarian reserve in women of reproductive age, in the diagnosis
and management of a number of disorders of puberty and adult reproductive
function, and is under investigation as a screening tool for ovarian cancer. To
date there is no normative model of ovarian volume throughout life. By
searching the published literature for ovarian volume in healthy females, and
using our own data from multiple sources (combined n = 59,994) we have
generated and robustly validated the first model of ovarian volume from
conception to 82 years of age. This model shows that 69% of the variation in
ovarian volume is due to age alone. We have shown that in the average case
ovarian volume rises from 0.7 mL (95% CI 0.4 -- 1.1 mL) at 2 years of age to a
peak of 7.7 mL (95% CI 6.5 -- 9.2 mL) at 20 years of age with a subsequent
decline to about 2.8 mL (95% CI 2.7 -- 2.9 mL) at the menopause and smaller
volumes thereafter. Our model allows us to generate normal values and ranges
for ovarian volume throughout life. This is the first validated normative model
of ovarian volume from conception to old age; it will be of use in the
diagnosis and management of a number of diverse gynaecological and reproductive
conditions in females from birth to menopause and beyond.
| [
{
"created": "Tue, 23 Apr 2013 16:20:57 GMT",
"version": "v1"
}
] | 2015-06-15 | [
[
"Kelsey",
"Thomas W.",
""
],
[
"Dodwell",
"Sarah K.",
""
],
[
"Wilkinson",
"A. Graham",
""
],
[
"Greve",
"Tine",
""
],
[
"Andersen",
"Claus Y.",
""
],
[
"Anderson",
"Richard A.",
""
],
[
"Wallace",
"W. Hamish B.",
""
]
] | The measurement of ovarian volume has been shown to be a useful indirect indicator of the ovarian reserve in women of reproductive age, in the diagnosis and management of a number of disorders of puberty and adult reproductive function, and is under investigation as a screening tool for ovarian cancer. To date there is no normative model of ovarian volume throughout life. By searching the published literature for ovarian volume in healthy females, and using our own data from multiple sources (combined n = 59,994) we have generated and robustly validated the first model of ovarian volume from conception to 82 years of age. This model shows that 69% of the variation in ovarian volume is due to age alone. We have shown that in the average case ovarian volume rises from 0.7 mL (95% CI 0.4 -- 1.1 mL) at 2 years of age to a peak of 7.7 mL (95% CI 6.5 -- 9.2 mL) at 20 years of age with a subsequent decline to about 2.8 mL (95% CI 2.7 -- 2.9 mL) at the menopause and smaller volumes thereafter. Our model allows us to generate normal values and ranges for ovarian volume throughout life. This is the first validated normative model of ovarian volume from conception to old age; it will be of use in the diagnosis and management of a number of diverse gynaecological and reproductive conditions in females from birth to menopause and beyond. |
q-bio/0611032 | Lior Pachter | Peter Huggins, Lior Pachter, Bernd Sturmfels | Towards the Human Genotope | null | null | null | null | q-bio.PE math.ST q-bio.QM stat.TH | null | The human genotope is the convex hull of all allele frequency vectors that
can be obtained from the genotypes present in the human population. In this
paper we take a few initial steps towards a description of this object, which
may be fundamental for future population based genetics studies. Here we use
data from the HapMap Project, restricted to two ENCODE regions, to study a
subpolytope of the human genotope. We study three different approaches for
obtaining informative low-dimensional projections of this subpolytope. The
projections are specified by projection onto few tag SNPs, principal component
analysis, and archetypal analysis. We describe the application of our geometric
approach to identifying structure in populations based on single nucleotide
polymorphisms.
| [
{
"created": "Thu, 9 Nov 2006 05:07:00 GMT",
"version": "v1"
},
{
"created": "Mon, 25 Dec 2006 21:40:38 GMT",
"version": "v2"
}
] | 2011-11-10 | [
[
"Huggins",
"Peter",
""
],
[
"Pachter",
"Lior",
""
],
[
"Sturmfels",
"Bernd",
""
]
] | The human genotope is the convex hull of all allele frequency vectors that can be obtained from the genotypes present in the human population. In this paper we take a few initial steps towards a description of this object, which may be fundamental for future population based genetics studies. Here we use data from the HapMap Project, restricted to two ENCODE regions, to study a subpolytope of the human genotope. We study three different approaches for obtaining informative low-dimensional projections of this subpolytope. The projections are specified by projection onto few tag SNPs, principal component analysis, and archetypal analysis. We describe the application of our geometric approach to identifying structure in populations based on single nucleotide polymorphisms. |
1012.5987 | Feraz Azhar | Feraz Azhar, William Bialek | When are correlations strong? | null | null | null | null | q-bio.NC cond-mat.dis-nn cond-mat.stat-mech physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The inverse problem of statistical mechanics involves finding the minimal
Hamiltonian that is consistent with some observed set of correlation functions.
This problem has received renewed interest in the analysis of biological
networks; in particular, several such networks have been described successfully
by maximum entropy models consistent with pairwise correlations. These
correlations are usually weak in an absolute sense (e.g., correlation
coefficients ~ 0.1 or less), and this is sometimes taken as evidence against
the existence of interesting collective behavior in the network. If
correlations are weak, it should be possible to capture their effects in
perturbation theory, so we develop an expansion for the entropy of Ising
systems in powers of the correlations, carrying this out to fourth order. We
then consider recent work on networks of neurons [Schneidman et al., Nature
440, 1007 (2006); Tkacik et al., arXiv:0912.5409 [q-bio.NC] (2009)], and show
that even though all pairwise correlations are weak, the fact that these
correlations are widespread means that their impact on the network as a whole
is not captured in the leading orders of perturbation theory. More positively,
this means that recent successes of maximum entropy approaches are not simply
the result of correlations being weak.
| [
{
"created": "Wed, 29 Dec 2010 17:29:34 GMT",
"version": "v1"
}
] | 2010-12-30 | [
[
"Azhar",
"Feraz",
""
],
[
"Bialek",
"William",
""
]
] | The inverse problem of statistical mechanics involves finding the minimal Hamiltonian that is consistent with some observed set of correlation functions. This problem has received renewed interest in the analysis of biological networks; in particular, several such networks have been described successfully by maximum entropy models consistent with pairwise correlations. These correlations are usually weak in an absolute sense (e.g., correlation coefficients ~ 0.1 or less), and this is sometimes taken as evidence against the existence of interesting collective behavior in the network. If correlations are weak, it should be possible to capture their effects in perturbation theory, so we develop an expansion for the entropy of Ising systems in powers of the correlations, carrying this out to fourth order. We then consider recent work on networks of neurons [Schneidman et al., Nature 440, 1007 (2006); Tkacik et al., arXiv:0912.5409 [q-bio.NC] (2009)], and show that even though all pairwise correlations are weak, the fact that these correlations are widespread means that their impact on the network as a whole is not captured in the leading orders of perturbation theory. More positively, this means that recent successes of maximum entropy approaches are not simply the result of correlations being weak. |
1505.01187 | Changshuai Wei | Changshuai Wei, and Qing Lu | GWGGI: software for genome-wide gene-gene interaction analysis | null | BMC Genetics 2014, 15:101 | 10.1186/s12863-014-0101-z | null | q-bio.QM cs.DS q-bio.GN stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: While the importance of gene-gene interactions in human diseases
has been well recognized, identifying them has been a great challenge,
especially through association studies with millions of genetic markers and
thousands of individuals. Computationally efficient and powerful tools are
greatly needed for the identification of new gene-gene interactions in
high-dimensional association studies. Result: We develop C++ software for
genome-wide gene-gene interaction analyses (GWGGI). GWGGI utilizes tree-based
algorithms to search a large number of genetic markers for a disease-associated
joint association with the consideration of high-order interactions, and then
uses non-parametric statistics to test the joint association. The package
includes two functions, likelihood ratio Mann-Whitney (LRMW) and Tree
Assembling Mann-Whitney (TAMW). We optimize the data storage and computational
efficiency of the software, making it feasible to run the genome-wide analysis
on a personal computer. The use of GWGGI was demonstrated by using two real
data-sets with nearly 500 k genetic markers. Conclusion: Through the empirical
study, we demonstrated that the genome-wide gene-gene interaction analysis
using GWGGI could be accomplished within a reasonable time on a personal
computer (i.e., ~3.5 hours for LRMW and ~10 hours for TAMW). We also showed
that LRMW was suitable to detect interaction among a small number of genetic
variants with moderate-to-strong marginal effect, while TAMW was useful to
detect interaction among a larger number of low-marginal-effect genetic
variants.
| [
{
"created": "Tue, 5 May 2015 21:11:22 GMT",
"version": "v1"
}
] | 2015-05-07 | [
[
"Wei",
"Changshuai",
""
],
[
"Lu",
"Qing",
""
]
] | Background: While the importance of gene-gene interactions in human diseases has been well recognized, identifying them has been a great challenge, especially through association studies with millions of genetic markers and thousands of individuals. Computationally efficient and powerful tools are greatly needed for the identification of new gene-gene interactions in high-dimensional association studies. Result: We develop C++ software for genome-wide gene-gene interaction analyses (GWGGI). GWGGI utilizes tree-based algorithms to search a large number of genetic markers for a disease-associated joint association with the consideration of high-order interactions, and then uses non-parametric statistics to test the joint association. The package includes two functions, likelihood ratio Mann-Whitney (LRMW) and Tree Assembling Mann-Whitney (TAMW). We optimize the data storage and computational efficiency of the software, making it feasible to run the genome-wide analysis on a personal computer. The use of GWGGI was demonstrated by using two real data-sets with nearly 500 k genetic markers. Conclusion: Through the empirical study, we demonstrated that the genome-wide gene-gene interaction analysis using GWGGI could be accomplished within a reasonable time on a personal computer (i.e., ~3.5 hours for LRMW and ~10 hours for TAMW). We also showed that LRMW was suitable to detect interaction among a small number of genetic variants with moderate-to-strong marginal effect, while TAMW was useful to detect interaction among a larger number of low-marginal-effect genetic variants. |
1202.5362 | Sahand Hormoz | Sahand Hormoz | Cross-talk and interference enhance information capacity of a signaling
pathway | 28 pages, 5 figures | Biophysical Journal 104 (2013) 1170-1180 | 10.1016/j.bpj.2013.01.033 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recurring motif in gene regulatory networks is transcription factors (TFs)
that regulate each other, and then bind to overlapping sites on DNA, where they
interact and synergistically control transcription of a target gene. Here, we
suggest that this motif maximizes information flow in a noisy network. Gene
expression is an inherently noisy process due to thermal fluctuations and the
small number of molecules involved. A consequence of multiple TFs interacting
at overlapping binding-sites is that their binding noise becomes correlated.
Using concepts from information theory, we show that in general a signaling
pathway transmits more information if 1) noise of one input is correlated with
that of the other, 2) input signals are not chosen independently. In the case
of TFs, the latter criterion hints at up-stream cross-regulation. We
demonstrate these ideas for competing TFs and feed-forward gene regulatory
modules, and discuss generalizations to other signaling pathways. Our results
challenge the conventional approach of treating biological noise as
uncorrelated fluctuations, and present a systematic method for understanding TF
cross-regulation networks either from direct measurements of binding noise, or
bioinformatic analysis of overlapping binding-sites.
| [
{
"created": "Fri, 24 Feb 2012 01:57:08 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Mar 2013 00:14:08 GMT",
"version": "v2"
}
] | 2013-03-07 | [
[
"Hormoz",
"Sahand",
""
]
] | A recurring motif in gene regulatory networks is transcription factors (TFs) that regulate each other, and then bind to overlapping sites on DNA, where they interact and synergistically control transcription of a target gene. Here, we suggest that this motif maximizes information flow in a noisy network. Gene expression is an inherently noisy process due to thermal fluctuations and the small number of molecules involved. A consequence of multiple TFs interacting at overlapping binding-sites is that their binding noise becomes correlated. Using concepts from information theory, we show that in general a signaling pathway transmits more information if 1) noise of one input is correlated with that of the other, 2) input signals are not chosen independently. In the case of TFs, the latter criterion hints at up-stream cross-regulation. We demonstrate these ideas for competing TFs and feed-forward gene regulatory modules, and discuss generalizations to other signaling pathways. Our results challenge the conventional approach of treating biological noise as uncorrelated fluctuations, and present a systematic method for understanding TF cross-regulation networks either from direct measurements of binding noise, or bioinformatic analysis of overlapping binding-sites. |
2002.12100 | T. Thang Vo-Doan | T. Thang Vo-Doan and Andrew D. Straw | Millisecond insect tracking system | 5 pages, 5 figures, supplemental data:
https://github.com/strawlab/msectrax , supplemental video:
https://youtu.be/zaScQgKKk3c | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Animals such as insects have provided a rich source of inspiration for
designing robots. For example, animals navigate to goals via efficient
coordination of individual motor actions, and demonstrate natural solutions to
problems also faced by engineers. Recording individual body part positions
during large scale movement would therefore be useful. Such multi-scale
observations, however, are challenging. With video, for example, there is
typically a trade-off between the volume over which an animal can be recorded
and spatial resolution within the volume. Even with high pixel-count cameras,
motion blur can be a challenge when using available light. Here we present a
new approach for tracking animals, such as insects, with an optical system that
bypasses this tradeoff by actively pointing a telephoto video camera at the
animal. This system is based around high-speed pan-tilt mirrors which steer an
optical path shared by a quadrant photodiode and a high-resolution, high-speed
telephoto video recording system. The mirror is directed to lock on to the
image of a 25-milligram retroreflector worn by the animal. This system allows
high-magnification videography with reduced motion blur over a large tracking
volume. With our prototype, we obtained millisecond order closed-loop latency
and recorded videos of flying insects in a tracking volume extending to an
axial distance of 3 meters and horizontally and vertically by 40 degrees. The
system offers increased capabilities compared to other video recording
solutions and may be useful for the study of animal behavior and the design of
bio-inspired robots.
| [
{
"created": "Thu, 27 Feb 2020 14:11:06 GMT",
"version": "v1"
}
] | 2020-02-28 | [
[
"Vo-Doan",
"T. Thang",
""
],
[
"Straw",
"Andrew D.",
""
]
] | Animals such as insects have provided a rich source of inspiration for designing robots. For example, animals navigate to goals via efficient coordination of individual motor actions, and demonstrate natural solutions to problems also faced by engineers. Recording individual body part positions during large scale movement would therefore be useful. Such multi-scale observations, however, are challenging. With video, for example, there is typically a trade-off between the volume over which an animal can be recorded and spatial resolution within the volume. Even with high pixel-count cameras, motion blur can be a challenge when using available light. Here we present a new approach for tracking animals, such as insects, with an optical system that bypasses this tradeoff by actively pointing a telephoto video camera at the animal. This system is based around high-speed pan-tilt mirrors which steer an optical path shared by a quadrant photodiode and a high-resolution, high-speed telephoto video recording system. The mirror is directed to lock on to the image of a 25-milligram retroreflector worn by the animal. This system allows high-magnification videography with reduced motion blur over a large tracking volume. With our prototype, we obtained millisecond order closed-loop latency and recorded videos of flying insects in a tracking volume extending to an axial distance of 3 meters and horizontally and vertically by 40 degrees. The system offers increased capabilities compared to other video recording solutions and may be useful for the study of animal behavior and the design of bio-inspired robots. |
2005.04029 | Srikanta Sannigrahi | Srikanta Sannigrahi, Francesco Pilla, Bidroha Basu, Arunima Sarkar
Basu | The overall mortality caused by COVID-19 in the European region is
highly associated with demographic composition: A spatial regression-based
approach | 45 | Sustainable Cities and Society, 2020 | 10.1016/j.scs.2020.102418 | null | q-bio.QM q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Demographic factors have a substantial impact on the overall casualties
caused by COVID-19. In this study, the spatial association between key
demographic variables and COVID-19 cases and deaths was analyzed using
spatial regression models. In total, 13 (for the COVID-19 case factor) and 8
(for the COVID-19 death factor) key variables were considered for the
modelling. Five spatial regression models, namely Geographically Weighted
Regression (GWR), Spatial Error Model (SEM), Spatial Lag Model (SLM), Spatial
Error_Lag model (SEM_SLM), and Ordinary Least Squares (OLS), were used for the
spatial modelling and mapping of model estimates. The local R2 values, which
suggest the influence of the selected demographic variables on the overall
casualties caused by COVID-19, were found to be highest in Italy and the UK.
Moderate local R2 was observed for France, Belgium, Netherlands, Ireland,
Denmark, Norway, Sweden, Poland, Slovakia, and Romania. The lowest local R2
value for COVID-19 cases was recorded for Latvia and Lithuania. Among the 13
variables, the highest local R2 was calculated for total population (R2 = 0.92),
followed by crude death rate (R2 = 0.9), long-term illness (R2 = 0.84), population
with age >80 (R2 = 0.59), employment (R2 = 0.46), life expectancy at 65 (R2 =
0.34), crude birth rate (R2 = 0.31), life expectancy (R2 = 0.31), Population
with age 65-80 (R2 = 0.29), Population with age 15-24 (R2 = 0.27), Population
with age 25-49 (R2 = 0.27), Population with age 0-14 (R2 = 0.23), and
Population with age 50-65 (R2 = 0.23), respectively.
| [
{
"created": "Thu, 7 May 2020 12:46:21 GMT",
"version": "v1"
}
] | 2020-08-12 | [
[
"Sannigrahi",
"Srikanta",
""
],
[
"Pilla",
"Francesco",
""
],
[
"Basu",
"Bidroha",
""
],
[
"Basu",
"Arunima Sarkar",
""
]
] | Demographic factors have a substantial impact on the overall casualties caused by COVID-19. In this study, the spatial association between key demographic variables and COVID-19 cases and deaths was analyzed using spatial regression models. In total, 13 (for the COVID-19 case factor) and 8 (for the COVID-19 death factor) key variables were considered for the modelling. Five spatial regression models, namely Geographically Weighted Regression (GWR), Spatial Error Model (SEM), Spatial Lag Model (SLM), Spatial Error_Lag model (SEM_SLM), and Ordinary Least Squares (OLS), were used for the spatial modelling and mapping of model estimates. The local R2 values, which suggest the influence of the selected demographic variables on the overall casualties caused by COVID-19, were found to be highest in Italy and the UK. Moderate local R2 was observed for France, Belgium, Netherlands, Ireland, Denmark, Norway, Sweden, Poland, Slovakia, and Romania. The lowest local R2 value for COVID-19 cases was recorded for Latvia and Lithuania. Among the 13 variables, the highest local R2 was calculated for total population (R2 = 0.92), followed by crude death rate (R2 = 0.9), long-term illness (R2 = 0.84), population with age >80 (R2 = 0.59), employment (R2 = 0.46), life expectancy at 65 (R2 = 0.34), crude birth rate (R2 = 0.31), life expectancy (R2 = 0.31), Population with age 65-80 (R2 = 0.29), Population with age 15-24 (R2 = 0.27), Population with age 25-49 (R2 = 0.27), Population with age 0-14 (R2 = 0.23), and Population with age 50-65 (R2 = 0.23), respectively. |
0705.3188 | Eduardo D. Sontag | Murat Arcak and Eduardo D. Sontag | A passivity-based stability criterion for a class of interconnected
systems and applications to biochemical reaction networks | See http://www.math.rutgers.edu/~sontag/PUBDIR/index.html for related
(p)reprints | null | null | null | q-bio.QM | null | This paper presents a stability test for a class of interconnected nonlinear
systems motivated by biochemical reaction networks. One of the main results
determines global asymptotic stability of the network from the diagonal
stability of a "dissipativity matrix" which incorporates information about the
passivity properties of the subsystems, the interconnection structure of the
network, and the signs of the interconnection terms. This stability test
encompasses the "secant criterion" for cyclic networks presented in our
previous paper, and extends it to a general interconnection structure
represented by a graph. A second main result allows one to accommodate state
products. This extension makes the new stability criterion applicable to a
broader class of models, even in the case of cyclic systems. The new stability
test is illustrated on a mitogen activated protein kinase (MAPK) cascade model,
and on a branched interconnection structure motivated by metabolic networks.
Finally, another result addresses the robustness of stability in the presence
of diffusion terms in a compartmental system made out of identical systems.
| [
{
"created": "Tue, 22 May 2007 15:14:37 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Arcak",
"Murat",
""
],
[
"Sontag",
"Eduardo D.",
""
]
] | This paper presents a stability test for a class of interconnected nonlinear systems motivated by biochemical reaction networks. One of the main results determines global asymptotic stability of the network from the diagonal stability of a "dissipativity matrix" which incorporates information about the passivity properties of the subsystems, the interconnection structure of the network, and the signs of the interconnection terms. This stability test encompasses the "secant criterion" for cyclic networks presented in our previous paper, and extends it to a general interconnection structure represented by a graph. A second main result allows one to accommodate state products. This extension makes the new stability criterion applicable to a broader class of models, even in the case of cyclic systems. The new stability test is illustrated on a mitogen activated protein kinase (MAPK) cascade model, and on a branched interconnection structure motivated by metabolic networks. Finally, another result addresses the robustness of stability in the presence of diffusion terms in a compartmental system made out of identical systems. |
q-bio/0506004 | Ted Theodosopoulos | Patricia Theodosopoulos and Ted Theodosopoulos | Robustness and Evolvability of the B Cell Mutator Mechanism | 22 pages, 12 figures | null | null | null | q-bio.QM q-bio.PE | null | We present a model that considers the maturation of the antibody population
following primary antigen presentation as a global optimization problem. The
trade-off that emerges from our model describes the balance between the safety
of mutations that lead to local improvements in affinity and the necessity of
the system to undergo global reconfigurations in the antibody's shape in order
to achieve its goals, in this example of fast-paced evolution. The parameter p
which quantifies this trade-off appears to be itself both robust and evolvable.
This parallels the rapidity and consistency of the optimization operating
during the biologic response. In this paper, we explore the robust qualities
and evolvability of this tunable control parameter, p.
| [
{
"created": "Tue, 7 Jun 2005 03:26:30 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Theodosopoulos",
"Patricia",
""
],
[
"Theodosopoulos",
"Ted",
""
]
] | We present a model that considers the maturation of the antibody population following primary antigen presentation as a global optimization problem. The trade-off that emerges from our model describes the balance between the safety of mutations that lead to local improvements in affinity and the necessity of the system to undergo global reconfigurations in the antibody's shape in order to achieve its goals, in this example of fast-paced evolution. The parameter p which quantifies this trade-off appears to be itself both robust and evolvable. This parallels the rapidity and consistency of the optimization operating during the biologic response. In this paper, we explore the robust qualities and evolvability of this tunable control parameter, p. |
1509.03107 | Karin Moeller | Karin Moeller, Katharina Mueller, Hanna Engelke, Christoph Braeuchle,
Ernst Wagner, and Thomas Bein | Highly Efficient siRNA Delivery from Core-Shell Mesoporous Silica
Nanoparticles with Multifunctional Polymer Caps | Article including supporting information | null | 10.1039/c5nr06246b | null | q-bio.SC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new general route for siRNA delivery is presented combining porous
core-shell silica nanocarriers with a modularly designed multifunctional block
copolymer. Specifically, the internal storage and release of siRNA from
mesoporous silica nanoparticles (MSN) with orthogonal core-shell surface
chemistry was investigated as a function of pore-size, pore morphology, surface
properties and pH. Very high siRNA loading capacities of up to 380 microg/mg
MSN were obtained with charge-matched amino-functionalized mesoporous cores,
and release profiles show up to 80% siRNA elution after 24 h. We demonstrate
that adsorption and desorption of siRNA is mainly driven by electrostatic
interactions, which allow for high loading capacities even in medium-sized
mesopores with pore diameters down to 4 nm in a stellate pore morphology. The
negatively charged MSN shell enabled the association with a block copolymer
containing positively charged artificial amino acids and oleic acid blocks,
which acts simultaneously as capping function and endosomal release agent. The
potential of this multifunctional delivery platform is demonstrated by highly
effective cell transfection and siRNA delivery into KB-cells. A luciferase
reporter gene knock-down of up to 90% was possible using extremely low cell
exposures with only 2.5 microg MSN containing 32 pM siRNA per 100 microL well.
| [
{
"created": "Thu, 10 Sep 2015 11:35:14 GMT",
"version": "v1"
}
] | 2016-03-23 | [
[
"Moeller",
"Karin",
""
],
[
"Mueller",
"Katharina",
""
],
[
"Engelke",
"Hanna",
""
],
[
"Braeuchle",
"Christoph",
""
],
[
"Wagner",
"Ernst",
""
],
[
"Bein",
"Thomas",
""
]
] | A new general route for siRNA delivery is presented combining porous core-shell silica nanocarriers with a modularly designed multifunctional block copolymer. Specifically, the internal storage and release of siRNA from mesoporous silica nanoparticles (MSN) with orthogonal core-shell surface chemistry was investigated as a function of pore-size, pore morphology, surface properties and pH. Very high siRNA loading capacities of up to 380 microg/mg MSN were obtained with charge-matched amino-functionalized mesoporous cores, and release profiles show up to 80% siRNA elution after 24 h. We demonstrate that adsorption and desorption of siRNA is mainly driven by electrostatic interactions, which allow for high loading capacities even in medium-sized mesopores with pore diameters down to 4 nm in a stellate pore morphology. The negatively charged MSN shell enabled the association with a block copolymer containing positively charged artificial amino acids and oleic acid blocks, which acts simultaneously as capping function and endosomal release agent. The potential of this multifunctional delivery platform is demonstrated by highly effective cell transfection and siRNA delivery into KB-cells. A luciferase reporter gene knock-down of up to 90% was possible using extremely low cell exposures with only 2.5 microg MSN containing 32 pM siRNA per 100 microL well. |
1111.1360 | Gerardo F. Goya | Jos\'e I. Schwerdt, Gerardo F. Goya, Pilar Calatayud, Claudia B.
Here\~n\'u, Paula C. Reggiani, Rodolfo G. Goya | Magnetic Field-Assisted Gene Delivery: Achievements and Therapeutic
Potential | 30 pages, 6 figures | null | null | null | q-bio.QM cond-mat.mtrl-sci physics.bio-ph q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The discovery in the early 2000's that magnetic nanoparticles (MNPs)
complexed to nonviral or viral vectors can, in the presence of an external
magnetic field, greatly enhance gene transfer into cells has raised much
interest. This technique, called magnetofection, was initially developed mainly
to improve gene transfer in cell cultures, a simpler and more easily
controllable scenario than in vivo models. These studies provided evidence for
some unique capabilities of magnetofection. Progressively, the interest in
magnetofection expanded to its application in animal models and led to the
association of this technique with another technology, magnetic drug targeting
(MDT). This combination offers the possibility to develop more efficient and
less invasive gene therapy strategies for a number of major pathologies like
cancer, neurodegeneration and myocardial infarction. The goal of MDT is to
concentrate MNPs functionalized with therapeutic drugs, in target areas of the
body by means of properly focused external magnetic fields. The availability of
stable, nontoxic MNP-gene vector complexes now offers the opportunity to
develop magnetic gene targeting (MGT), a variant of MDT in which the gene
coding for a therapeutic molecule, rather than the molecule itself, is
delivered to a therapeutic target area in the body. This article will first
outline the principle of magnetofection, subsequently describing the properties
of the magnetic fields and MNPs used in this technique. Next, it will review
the results achieved by magnetofection in cell cultures. Last, the potential of
MGT for implementing minimally invasive gene therapy will be discussed.
| [
{
"created": "Sat, 5 Nov 2011 23:17:02 GMT",
"version": "v1"
}
] | 2011-11-08 | [
[
"Schwerdt",
"José I.",
""
],
[
"Goya",
"Gerardo F.",
""
],
[
"Calatayud",
"Pilar",
""
],
[
"Hereñú",
"Claudia B.",
""
],
[
"Reggiani",
"Paula C.",
""
],
[
"Goya",
"Rodolfo G.",
""
]
] | The discovery in the early 2000's that magnetic nanoparticles (MNPs) complexed to nonviral or viral vectors can, in the presence of an external magnetic field, greatly enhance gene transfer into cells has raised much interest. This technique, called magnetofection, was initially developed mainly to improve gene transfer in cell cultures, a simpler and more easily controllable scenario than in vivo models. These studies provided evidence for some unique capabilities of magnetofection. Progressively, the interest in magnetofection expanded to its application in animal models and led to the association of this technique with another technology, magnetic drug targeting (MDT). This combination offers the possibility to develop more efficient and less invasive gene therapy strategies for a number of major pathologies like cancer, neurodegeneration and myocardial infarction. The goal of MDT is to concentrate MNPs functionalized with therapeutic drugs, in target areas of the body by means of properly focused external magnetic fields. The availability of stable, nontoxic MNP-gene vector complexes now offers the opportunity to develop magnetic gene targeting (MGT), a variant of MDT in which the gene coding for a therapeutic molecule, rather than the molecule itself, is delivered to a therapeutic target area in the body. This article will first outline the principle of magnetofection, subsequently describing the properties of the magnetic fields and MNPs used in this technique. Next, it will review the results achieved by magnetofection in cell cultures. Last, the potential of MGT for implementing minimally invasive gene therapy will be discussed. |
2402.07430 | Denni Currin-Ross | Denni Currin-Ross (1, 2 and 3), Sami C. Al-Izzi (4), Ivar Noordstra
(1), Alpha S. Yap (1) and Richard G. Morris (2 and 3) ((1) Centre for Cell
Biology of Chronic Disease, Institute for Molecular Bioscience, The
University of Queensland, Australia (2) School of Physics, UNSW, Australia
(3) EMBL Australia Node in Single Molecule Science, School of Biomedical
Sciences, UNSW, Australia (4) Department of Mathematics, Faculty of
Mathematics and Natural Sciences, University of Oslo, Norway) | Advecting Scaffolds: Controlling The Remodelling Of Actomyosin With
Anillin | null | null | null | null | q-bio.SC cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose and analyse an active hydrodynamic theory that characterises the
effects of the scaffold protein anillin. Anillin is found at major sites of
cortical activity, such as adherens junctions and the cytokinetic furrow, where
the canonical regulator of actomyosin remodelling is the small GTPase, RhoA.
RhoA acts via intermediary 'effectors' to increase both the rates of activation
of myosin motors and the polymerisation of actin filaments. Anillin has been
shown to scaffold this action of RhoA - improving critical rates in the
signalling pathway without altering the essential biochemistry - but its
contribution to the wider spatio-temporal organisation of the cortical
cytoskeleton remains poorly understood. Here, we combine analytics and numerics
to show how anillin can non-trivially regulate the cytoskeleton at hydrodynamic
scales. At short times, anillin can amplify or dampen existing contractile
instabilities, as well as alter the parameter ranges over which they occur. At
long times, it can change both the size and speed of steady-state travelling
pulses. The primary mechanism that underpins these behaviours is established to
be the advection of anillin by myosin II motors, with the specifics relying on
the values of two coupling parameters. These codify anillin's effect on local
signalling kinetics and can be traced back to its interaction with the acidic
phospholipid phosphatidylinositol 4,5-bisphosphate (PIP2), thereby establishing
a putative connection between actomyosin remodelling and membrane composition.
| [
{
"created": "Mon, 12 Feb 2024 06:14:26 GMT",
"version": "v1"
}
] | 2024-02-13 | [
[
"Currin-Ross",
"Denni",
"",
"1, 2 and 3"
],
[
"Al-Izzi",
"Sami C.",
"",
"2 and 3"
],
[
"Noordstra",
"Ivar",
"",
"2 and 3"
],
[
"Yap",
"Alpha S.",
"",
"2 and 3"
],
[
"Morris",
"Richard G.",
"",
"2 and 3"
]
] | We propose and analyse an active hydrodynamic theory that characterises the effects of the scaffold protein anillin. Anillin is found at major sites of cortical activity, such as adherens junctions and the cytokinetic furrow, where the canonical regulator of actomyosin remodelling is the small GTPase, RhoA. RhoA acts via intermediary 'effectors' to increase both the rates of activation of myosin motors and the polymerisation of actin filaments. Anillin has been shown to scaffold this action of RhoA - improving critical rates in the signalling pathway without altering the essential biochemistry - but its contribution to the wider spatio-temporal organisation of the cortical cytoskeleton remains poorly understood. Here, we combine analytics and numerics to show how anillin can non-trivially regulate the cytoskeleton at hydrodynamic scales. At short times, anillin can amplify or dampen existing contractile instabilities, as well as alter the parameter ranges over which they occur. At long times, it can change both the size and speed of steady-state travelling pulses. The primary mechanism that underpins these behaviours is established to be the advection of anillin by myosin II motors, with the specifics relying on the values of two coupling parameters. These codify anillin's effect on local signalling kinetics and can be traced back to its interaction with the acidic phospholipid phosphatidylinositol 4,5-bisphosphate (PIP2), thereby establishing a putative connection between actomyosin remodelling and membrane composition. |
1801.03125 | Carla Sciarra | Carla Sciarra, Andrea Rinaldo, Francesco Laio, Damiano Pasetto | Mathematical modeling of cholera epidemics in South Sudan | MSc Thesis | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | In this work, we analyze and model the cholera epidemics that affected South
Sudan, the newest country in the world, during 2014 and 2015. South Sudan
possibly represents one of the most difficult contexts in which to adapt the
deterministic mathematical cholera model, due to the unstable social and
political situation that clearly affects the fluxes of people and the sanitary
conditions, increasing the risk of large outbreaks. Despite the limitation of a
static gravity model in describing the chaotic human mobility of South Sudan,
the SIRB model, calibrated with a data assimilation technique (Ensemble Kalman
Filter), retrieves the epidemic dynamics in the counties with the largest
number of infected cases, showing the potentiality of the methodology in
forecasting future outbreaks.
| [
{
"created": "Mon, 8 Jan 2018 17:53:38 GMT",
"version": "v1"
}
] | 2018-01-11 | [
[
"Sciarra",
"Carla",
""
],
[
"Rinaldo",
"Andrea",
""
],
[
"Laio",
"Francesco",
""
],
[
"Pasetto",
"Damiano",
""
]
] | In this work, we analyze and model the cholera epidemics that affected South Sudan, the newest country in the world, during 2014 and 2015. South Sudan possibly represents one of the most difficult context in which adapt the deterministic mathematical cholera model, due to the unstable social and political situation that clearly affects the fluxes of people and the sanitary conditions, increasing the risk of large outbreaks. Despite the limitation of a static gravity model in describing the chaotic human mobility of South Sudan, the SIRB model, calibrated with a data assimilation technique (Ensemble Kalman Filter), retrieves the epidemic dynamics in the counties with the largest number of infected cases, showing the potentiality of the methodology in forecasting future outbreaks. |
2401.17382 | Antonio Lax | A Lax, F Soler, and F Fernandez Belda | Functional approach to the catalytic site of the sarcoplasmic reticulum
Ca(2+)-ATPase: binding and hydrolysis of ATP in the absence of Ca(2+) | null | J Bioenerg Biomembr. 2004 Jun;36(3):265-73 | 10.1023/b:jobb.0000031978.15139.49 | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Isolated sarcoplasmic reticulum vesicles in the presence of Mg(2+) and
absence of Ca(2+) retain significant ATP hydrolytic activity that can be
attributed to the Ca(2+)-ATPase protein. At neutral pH and the presence of 5 mM
Mg(2+), the dependence of the hydrolysis rate on a linear ATP concentration
scale can be fitted by a single hyperbolic function. MgATP hydrolysis is
inhibited by either free Mg(2+) or free ATP. The rate of ATP hydrolysis is not
perturbed by vanadate, whereas the rate of p-nitrophenyl phosphate hydrolysis
is not altered by a nonhydrolyzable ATP analog. ATP binding affinity at neutral
pH and in a Ca(2+)-free medium is increased by Mg(2+) but decreased by vanadate
when Mg(2+) is present. It is suggested that MgATP hydrolysis in the absence of
Ca(2+) requires some optimal adjustment of the enzyme cytoplasmic domains. The
Ca(2+)-independent activity is operative at basal levels of cytoplasmic Ca(2+)
or when the Ca(2+) binding transition is impeded.
| [
{
"created": "Tue, 30 Jan 2024 19:08:27 GMT",
"version": "v1"
}
] | 2024-02-01 | [
[
"Lax",
"A",
""
],
[
"Soler",
"F",
""
],
[
"Belda",
"F Fernandez",
""
]
] | Isolated sarcoplasmic reticulum vesicles in the presence of Mg(2+) and absence of Ca(2+) retain significant ATP hydrolytic activity that can be attributed to the Ca(2+)-ATPase protein. At neutral pH and the presence of 5 mM Mg(2+), the dependence of the hydrolysis rate on a linear ATP concentration scale can be fitted by a single hyperbolic function. MgATP hydrolysis is inhibited by either free Mg(2+) or free ATP. The rate of ATP hydrolysis is not perturbed by vanadate, whereas the rate of p-nitrophenyl phosphate hydrolysis is not altered by a nonhydrolyzable ATP analog. ATP binding affinity at neutral pH and in a Ca(2+)-free medium is increased by Mg(2+) but decreased by vanadate when Mg(2+) is present. It is suggested that MgATP hydrolysis in the absence of Ca(2+) requires some optimal adjustment of the enzyme cytoplasmic domains. The Ca(2+)-independent activity is operative at basal levels of cytoplasmic Ca(2+) or when the Ca(2+) binding transition is impeded. |
2305.08124 | Tom George | Tom M George | Theta sequences as eligibility traces: a biological solution to credit
assignment | null | null | null | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by/4.0/ | Credit assignment problems, for example policy evaluation in RL, often
require bootstrapping prediction errors through preceding states \textit{or}
maintaining temporally extended memory traces; solutions which are unfavourable
or implausible for biological networks of neurons. We propose theta sequences
-- chains of neural activity during theta oscillations in the hippocampus,
thought to represent rapid playthroughs of awake behaviour -- as a solution. By
analysing and simulating a model for theta sequences we show they compress
behaviour such that existing but short $\mathsf{O}(10)$ ms neuronal memory
traces are effectively extended allowing for bootstrap-free credit assignment
without long memory traces, equivalent to the use of eligibility traces in
TD($\lambda$).
| [
{
"created": "Sun, 14 May 2023 11:04:36 GMT",
"version": "v1"
}
] | 2023-05-16 | [
[
"George",
"Tom M",
""
]
] | Credit assignment problems, for example policy evaluation in RL, often require bootstrapping prediction errors through preceding states \textit{or} maintaining temporally extended memory traces; solutions which are unfavourable or implausible for biological networks of neurons. We propose theta sequences -- chains of neural activity during theta oscillations in the hippocampus, thought to represent rapid playthroughs of awake behaviour -- as a solution. By analysing and simulating a model for theta sequences we show they compress behaviour such that existing but short $\mathsf{O}(10)$ ms neuronal memory traces are effectively extended allowing for bootstrap-free credit assignment without long memory traces, equivalent to the use of eligibility traces in TD($\lambda$). |
0803.1385 | Raphael Plasson | Rapha\"el Plasson (Nordita), Hugues Bersini (Iridia), Axel Brandenburg
(Nordita) | Decomposition of Complex Reaction Networks into Reactons | 24 pages, 8 figures, submitted to Biophysical Journal | null | null | NORDITA-2008-12 | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The analysis of complex reaction networks is of great importance in several
chemical and biochemical fields (interstellar chemistry, prebiotic chemistry,
reaction mechanism, etc). In this article, we propose to simultaneously refine
and extend for general chemical reaction systems the formalism initially
introduced for the description of metabolic networks. The classical approaches
through the computation of the right null space leads to the decomposition of
the network into complex ``cycles'' of reactions concerned with all
metabolites. We show how, departing from the left null space computation, the
flux analysis can be decoupled into linear fluxes and single loops, allowing a
more refined qualitative analysis as a function of the antagonisms and
connections among these local fluxes. This analysis is made possible by the
decomposition of the molecules into elementary subunits, called "reactons" and
the consequent decomposition of the whole network into simple first order unary
partial reactions related with simple transfers of reactons from one molecule
to another. This article explains and justifies the algorithmic steps leading
to the total decomposition of the reaction network into its constitutive
elementary subparts.
| [
{
"created": "Mon, 10 Mar 2008 10:57:50 GMT",
"version": "v1"
}
] | 2008-03-11 | [
[
"Plasson",
"Raphaël",
"",
"Nordita"
],
[
"Bersini",
"Hugues",
"",
"Iridia"
],
[
"Brandenburg",
"Axel",
"",
"Nordita"
]
] | The analysis of complex reaction networks is of great importance in several chemical and biochemical fields (interstellar chemistry, prebiotic chemistry, reaction mechanism, etc). In this article, we propose to simultaneously refine and extend for general chemical reaction systems the formalism initially introduced for the description of metabolic networks. The classical approaches through the computation of the right null space leads to the decomposition of the network into complex ``cycles'' of reactions concerned with all metabolites. We show how, departing from the left null space computation, the flux analysis can be decoupled into linear fluxes and single loops, allowing a more refine qualitative analysis as a function of the antagonisms and connections among these local fluxes. This analysis is made possible by the decomposition of the molecules into elementary subunits, called "reactons" and the consequent decomposition of the whole network into simple first order unary partial reactions related with simple transfers of reactons from one molecule to another. This article explains and justifies the algorithmic steps leading to the total decomposition of the reaction network into its constitutive elementary subpart. |
1805.05058 | Lee Susman | Lee Susman, Maryam Kohram, Harsh Vashistha, Jeffrey T. Nechleba, Hanna
Salman and Naama Brenner | Individuality and slow dynamics in bacterial growth homeostasis | In press with PNAS. 50 pages, including supplementary information | null | 10.1073/pnas.1615526115 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Microbial growth and division are fundamental processes relevant to many
areas of life science. Of particular interest are homeostasis mechanisms, which
buffer growth and division from accumulating fluctuations over multiple cycles.
These mechanisms operate within single cells, possibly extending over several
division cycles. However, all experimental studies to date have relied on
measurements pooled from many distinct cells. Here, we disentangle long-term
measured traces of individual cells from one another, revealing subtle
differences between temporal and pooled statistics. By analyzing correlations
along up to hundreds of generations, we find that the parameter describing
effective cell-size homeostasis strength varies significantly among cells. At
the same time, we find an invariant cell size which acts as an attractor to all
individual traces, albeit with different effective attractive forces. Despite
the common attractor, each cell maintains a distinct average size over its
finite lifetime with suppressed temporal fluctuations around it, and
equilibration to the global average size is surprisingly slow (> 150 cell
cycles). To demonstrate a possible source of variable homeostasis strength, we
construct a mathematical model relying on intracellular interactions, which
integrates measured properties of cell size with those of highly expressed
proteins. Effective homeostasis strength is then influenced by interactions and
by noise levels, and generally varies among cells. A predictable and measurable
consequence of variable homeostasis strength appears as distinct oscillatory
patterns in cell size and protein content over many generations. We discuss the
implications of our results to understanding mechanisms controlling division in
single cells and their characteristic timescales
| [
{
"created": "Mon, 14 May 2018 08:25:22 GMT",
"version": "v1"
}
] | 2022-10-12 | [
[
"Susman",
"Lee",
""
],
[
"Kohram",
"Maryam",
""
],
[
"Vashistha",
"Harsh",
""
],
[
"Nechleba",
"Jeffrey T.",
""
],
[
"Salman",
"Hanna",
""
],
[
"Brenner",
"Naama",
""
]
] | Microbial growth and division are fundamental processes relevant to many areas of life science. Of particular interest are homeostasis mechanisms, which buffer growth and division from accumulating fluctuations over multiple cycles. These mechanisms operate within single cells, possibly extending over several division cycles. However, all experimental studies to date have relied on measurements pooled from many distinct cells. Here, we disentangle long-term measured traces of individual cells from one another, revealing subtle differences between temporal and pooled statistics. By analyzing correlations along up to hundreds of generations, we find that the parameter describing effective cell-size homeostasis strength varies significantly among cells. At the same time, we find an invariant cell size which acts as an attractor to all individual traces, albeit with different effective attractive forces. Despite the common attractor, each cell maintains a distinct average size over its finite lifetime with suppressed temporal fluctuations around it, and equilibration to the global average size is surprisingly slow (> 150 cell cycles). To demonstrate a possible source of variable homeostasis strength, we construct a mathematical model relying on intracellular interactions, which integrates measured properties of cell size with those of highly expressed proteins. Effective homeostasis strength is then influenced by interactions and by noise levels, and generally varies among cells. A predictable and measurable consequence of variable homeostasis strength appears as distinct oscillatory patterns in cell size and protein content over many generations. We discuss the implications of our results to understanding mechanisms controlling division in single cells and their characteristic timescales |
2109.00392 | Andreas Hilfinger | Euan Joly-Smith, Zitong Jerry Wang, Andreas Hilfinger | Inferring gene regulation dynamics from static snapshots of gene
expression variability | null | null | 10.1103/PhysRevE.104.044406 | null | q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inferring functional relationships within complex networks from static
snapshots of a subset of variables is a ubiquitous problem in science. For
example, a key challenge of systems biology is to translate cellular
heterogeneity data obtained from single-cell sequencing or flow-cytometry
experiments into regulatory dynamics. We show how static population snapshots
of co-variability can be exploited to rigorously infer properties of gene
expression dynamics when gene expression reporters probe their upstream
dynamics on separate time-scales. This can be experimentally exploited in
dual-reporter experiments with fluorescent proteins of unequal maturation
times, thus turning an experimental bug into an analysis feature. We derive
correlation conditions that detect the presence of closed-loop feedback
regulation in gene regulatory networks. Furthermore, we show how genes with
cell-cycle dependent transcription rates can be identified from the variability
of co-regulated fluorescent proteins. Similar correlation constraints might
prove useful in other areas of science in which static correlation snapshots
are used to infer causal connections between dynamically interacting
components.
| [
{
"created": "Wed, 1 Sep 2021 13:58:29 GMT",
"version": "v1"
}
] | 2024-08-09 | [
[
"Joly-Smith",
"Euan",
""
],
[
"Wang",
"Zitong Jerry",
""
],
[
"Hilfinger",
"Andreas",
""
]
] | Inferring functional relationships within complex networks from static snapshots of a subset of variables is a ubiquitous problem in science. For example, a key challenge of systems biology is to translate cellular heterogeneity data obtained from single-cell sequencing or flow-cytometry experiments into regulatory dynamics. We show how static population snapshots of co-variability can be exploited to rigorously infer properties of gene expression dynamics when gene expression reporters probe their upstream dynamics on separate time-scales. This can be experimentally exploited in dual-reporter experiments with fluorescent proteins of unequal maturation times, thus turning an experimental bug into an analysis feature. We derive correlation conditions that detect the presence of closed-loop feedback regulation in gene regulatory networks. Furthermore, we show how genes with cell-cycle dependent transcription rates can be identified from the variability of co-regulated fluorescent proteins. Similar correlation constraints might prove useful in other areas of science in which static correlation snapshots are used to infer causal connections between dynamically interacting components. |
1812.00186 | Yuriy Shckorbatov | Yuriy Shckorbatov | Properties of Chromatin in Human Cells as Characteristics of the State
of Human Organism: A Review | 5 pages | Adv Complement Alt Med. 1(1). ACAM.000505. 2018.
http://crimsonpublishers.com/acam/volume1-issue1-acam.php | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The state of chromatin in individual cells is directly related to the state
of the whole organism and may be used for assessment of the state of the whole
organism. Many hereditary diseases are connected with chromatin rearrangements.
Chromatin distribution in the cell nucleus is an important characteristic which may
be applied for determination of disease severity. The method of determination of
the quantity of heterochromatin granules in buccal epithelium cells may be
applied in the assessment of the state of the human organism during sports
training and medical treatment.
| [
{
"created": "Sat, 1 Dec 2018 10:25:27 GMT",
"version": "v1"
}
] | 2018-12-04 | [
[
"Shckorbatov",
"Yuriy",
""
]
] | The state of chromatin in individual cells is directly related to the state of the whole organism and may be used for assessment of the state of the whole organism. Many hereditary diseases are connected with chromatin rearrangements. Chromatin distribution in cell nucleus is an important characteristic which may be applied for determination of disease gravity. The method of determination of the quantity of heterochromatin granules in buccal epithelium cells may be applied in the assessment of the state of human organism during sportive trainings and medical treatment. |
2401.14025 | \c{S}\"ukr\"u Ozan | \c{S}\"ukr\"u Ozan | DNA Sequence Classification with Compressors | null | null | null | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent studies in DNA sequence classification have leveraged sophisticated
machine learning techniques, achieving notable accuracy in categorizing complex
genomic data. Among these, methods such as k-mer counting have proven effective
in distinguishing sequences from varied species like chimpanzees, dogs, and
humans, becoming a staple in contemporary genomic research. However, these
approaches often demand extensive computational resources, posing a challenge
in terms of scalability and efficiency. Addressing this issue, our study
introduces a novel adaptation of Jiang et al.'s compressor-based,
parameter-free classification method, specifically tailored for DNA sequence
analysis. This innovative approach utilizes a variety of compression
algorithms, such as Gzip, Brotli, and LZMA, to efficiently process and classify
genomic sequences. Not only does this method align with the current
state-of-the-art in terms of accuracy, but it also offers a more
resource-efficient alternative to traditional machine learning methods. Our
comprehensive evaluation demonstrates the proposed method's effectiveness in
accurately classifying DNA sequences from multiple species. We present a
detailed analysis of the performance of each algorithm used, highlighting the
strengths and limitations of our approach in various genomic contexts.
Furthermore, we discuss the broader implications of our findings for
bioinformatics, particularly in genomic data processing and analysis. The
results of our study pave the way for more efficient and scalable DNA sequence
classification methods, offering significant potential for advancements in
genomic research and applications.
| [
{
"created": "Thu, 25 Jan 2024 09:17:19 GMT",
"version": "v1"
}
] | 2024-01-26 | [
[
"Ozan",
"Şükrü",
""
]
] | Recent studies in DNA sequence classification have leveraged sophisticated machine learning techniques, achieving notable accuracy in categorizing complex genomic data. Among these, methods such as k-mer counting have proven effective in distinguishing sequences from varied species like chimpanzees, dogs, and humans, becoming a staple in contemporary genomic research. However, these approaches often demand extensive computational resources, posing a challenge in terms of scalability and efficiency. Addressing this issue, our study introduces a novel adaptation of Jiang et al.'s compressor-based, parameter-free classification method, specifically tailored for DNA sequence analysis. This innovative approach utilizes a variety of compression algorithms, such as Gzip, Brotli, and LZMA, to efficiently process and classify genomic sequences. Not only does this method align with the current state-of-the-art in terms of accuracy, but it also offers a more resource-efficient alternative to traditional machine learning methods. Our comprehensive evaluation demonstrates the proposed method's effectiveness in accurately classifying DNA sequences from multiple species. We present a detailed analysis of the performance of each algorithm used, highlighting the strengths and limitations of our approach in various genomic contexts. Furthermore, we discuss the broader implications of our findings for bioinformatics, particularly in genomic data processing and analysis. The results of our study pave the way for more efficient and scalable DNA sequence classification methods, offering significant potential for advancements in genomic research and applications. |
1711.03575 | Nikola Jurisic | Nikola K. Jurisic and Fred Cooper | Quantum mechanics, universal scaling and ferroelectric hysteresis
regimes in the giant squid axon propagating action potential: a Phase Space
Approach | 33 pages, 19 figures | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | The experimental data for the giant squid axon propagated action potential is
examined in phase space and it is parsed into distinct parts, properties of
ionic channels, properties of polarization channels, and two very narrow
regions pertaining to polarization flips of sodium channels lattice. The charge
conserving cable equation currents, namely capacitive, membrane, and total
ionic current are parsed into segments, pertaining to sodium and potassium
channels. Plots of ionic currents vs. potential exhibit quasilinear segments
yielding temperature dependent maximum conductance constants and the related
time rates. Plotting ionic time rates as Boltzmann kinetic rates yields
activation energies of the same order as the rate-limiting biochemical
metabolic reactions indicating that the passage of ions through the membrane is
mediated by biochemical reactions. Fractions of open channels are fitted in the
lab frame by modified Avrami (mAvrami) equations seeded with the value of the
Fine Structure Constant $\alpha$ = 0.0072973...(FSC). The steady propagation of
the action potential leads with its own excitation and provides insight into
nerve excitation in general. Evidence is presented that action potential
traverses a ferroelectric hysteresis loop. The heat released at 19.8 °C is
estimated to be twice as large as the heat released at 4.5 °C. It is expected
that presented results will provide the framework for further analysis of
biological excitability, ionic channels lattice structure, thermodynamic phase
changing behavior, and the role of quantum mechanics in biochemical reactions
mediating the flow of ions through ionic channels. In particular, the critical
role of ferroelectric sodium channels lattice behavior has far reaching
implications for nerve excitability and encoding of memories.
| [
{
"created": "Thu, 9 Nov 2017 20:09:36 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Dec 2021 00:32:53 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Jul 2022 14:06:27 GMT",
"version": "v3"
}
] | 2022-07-28 | [
[
"Jurisic",
"Nikola K.",
""
],
[
"Cooper",
"Fred",
""
]
] | The experimental data for the giant squid axon propagated action potential is examined in phase space and it is parsed into distinct parts, properties of ionic channels, properties of polarization channels, and two very narrow regions pertaining to polarization flips of sodium channels lattice. The charge conserving cable equation currents, namely capacitive, membrane, and total ionic current are parsed into segments, pertaining to sodium and potassium channels. Plots of ionic currents vs. potential exhibit quasilinear segments yielding temperature dependent maximum conductance constants and the related time rates. Plotting ionic time rates as Boltzmann kinetic rates yields activation energies of the same order as the rate-limiting biochemical metabolic reactions indicating that the passage of ions through the membrane is mediated by biochemical reactions. Fractions of open channels are fitted in the lab frame by modified Avrami (mAvrami) equations seeded with the value of the Fine Structure Constant $\alpha$ = 0.0072973...(FSC). The steady propagation of the action potential leads with its own excitation and provides insight into nerve excitation in general. Evidence is presented that action potential traverses a ferroelectric hysteresis loop. The heat released at 19.8 oC is estimated to be twice as large as the heat released at 4.5 oC. It is expected that presented results will provide the framework for further analysis of biological excitability, ionic channels lattice structure, thermodynamic phase changing behavior, and the role of quantum mechanics in biochemical reactions mediating the flow of ions trough ionic channels. In particular, the critical role of ferroelectric sodium channels lattice behavior has far reaching implications for nerve excitability and encoding of memories. |
1007.4457 | Tsvi Tlusty | Yonatan Savir and Tsvi Tlusty | Conformational Proofreading: The Impact of Conformational Changes on the
Specificity of Molecular Recognition | http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1868595/
http://www.weizmann.ac.il/complex/tlusty/papers/PLoSONE2007.pdf | PLoS ONE. 2007; 2(5): e468 | 10.1371/journal.pone.0000468. | null | q-bio.BM cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To perform recognition, molecules must locate and specifically bind their
targets within a noisy biochemical environment with many look-alikes. Molecular
recognition processes, especially the induced-fit mechanism, are known to
involve conformational changes. This raises a basic question: does molecular
recognition gain any advantage by such conformational changes? By introducing a
simple statistical-mechanics approach, we study the effect of conformation and
flexibility on the quality of recognition processes. Our model relates
specificity to the conformation of the participant molecules and thus suggests
a possible answer: Optimal specificity is achieved when the ligand is slightly
off target; that is, a conformational mismatch between the ligand and its main
target improves the selectivity of the process. This indicates that
deformations upon binding serve as a conformational proofreading mechanism,
which may be selected for via evolution.
| [
{
"created": "Mon, 26 Jul 2010 13:44:20 GMT",
"version": "v1"
}
] | 2010-07-27 | [
[
"Savir",
"Yonatan",
""
],
[
"Tlusty",
"Tsvi",
""
]
] | To perform recognition, molecules must locate and specifically bind their targets within a noisy biochemical environment with many look-alikes. Molecular recognition processes, especially the induced-fit mechanism, are known to involve conformational changes. This arises a basic question: does molecular recognition gain any advantage by such conformational changes? By introducing a simple statistical-mechanics approach, we study the effect of conformation and flexibility on the quality of recognition processes. Our model relates specificity to the conformation of the participant molecules and thus suggests a possible answer: Optimal specificity is achieved when the ligand is slightly off target, that is a conformational mismatch between the ligand and its main target improves the selectivity of the process. This indicates that deformations upon binding serve as a conformational proofreading mechanism, which may be selected for via evolution. |
1403.7865 | Hong-Yan Shih | Hong-Yan Shih and Nigel Goldenfeld | Path integral calculation for emergence of rapid evolution from
demographic stochasticity | 5 pages and 2 figures + 2 pages of supplementary material; v2 figures
and supplementary material updated, other minor changes; v3 very minor
changes; v4 further minor changes in supplementary material and figures; v5
acknowledgement updated; very minor changes | Phys. Rev. E 90, 050702(R) (2014) | 10.1103/PhysRevE.90.050702 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetic variation in a population can sometimes arise so fast as to modify
ecosystem dynamics. Such phenomena have been observed in natural predator-prey
systems, and characterized in the laboratory as showing unusual phase
relationships in population dynamics, including a $\pi$ phase shift between
predator and prey (evolutionary cycles) and even undetectable prey oscillations
compared to those of the predator (cryptic cycles). Here we present a generic
individual-level stochastic model of interacting populations that includes a
subpopulation of low nutritional value to the predator. Using a master equation
formalism, and by mapping to a coherent state path integral solved by a
system-size expansion, we show that evolutionary and cryptic quasicycles can
emerge generically from the combination of intrinsic demographic fluctuations
and clonal mutations alone, without additional biological mechanisms.
| [
{
"created": "Mon, 31 Mar 2014 04:21:50 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Jul 2014 16:38:05 GMT",
"version": "v2"
},
{
"created": "Thu, 11 Sep 2014 03:00:12 GMT",
"version": "v3"
},
{
"created": "Wed, 5 Nov 2014 07:53:12 GMT",
"version": "v4"
},
{
"created": "Thu, 27 Nov 2014 17:20:03 GMT",
"version": "v5"
}
] | 2014-12-01 | [
[
"Shih",
"Hong-Yan",
""
],
[
"Goldenfeld",
"Nigel",
""
]
] | Genetic variation in a population can sometimes arise so fast as to modify ecosystem dynamics. Such phenomena have been observed in natural predator-prey systems, and characterized in the laboratory as showing unusual phase relationships in population dynamics, including a $\pi$ phase shift between predator and prey (evolutionary cycles) and even undetectable prey oscillations compared to those of the predator (cryptic cycles). Here we present a generic individual-level stochastic model of interacting populations that includes a subpopulation of low nutritional value to the predator. Using a master equation formalism, and by mapping to a coherent state path integral solved by a system-size expansion, we show that evolutionary and cryptic quasicycles can emerge generically from the combination of intrinsic demographic fluctuations and clonal mutations alone, without additional biological mechanisms. |
2004.10832 | Alexander Vasilyev | Alexander Vasilyev | Control of fixation duration during visual search task execution | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the ability of a human observer to control fixation duration during
the execution of visual search tasks. We conducted eye-tracking experiments
with natural and synthetic images and found a dependency of fixation duration
on the difficulty of the task and the lengths of the preceding and succeeding saccades.
To explain it, we developed a novel control model of human
eye movements that incorporates continuous-time decision making, observation,
and update of a belief state. The model is based on a Partially Observable Markov
Decision Process with delay in observation and saccade execution that accounts
for a delay between eye and cortex. We validated the computational model
through comparison of statistical properties of simulated and experimental
eye-movement trajectories.
| [
{
"created": "Wed, 22 Apr 2020 20:25:01 GMT",
"version": "v1"
}
] | 2020-04-24 | [
[
"Vasilyev",
"Alexander",
""
]
] | We study the ability of human observer to control fixation duration during execution of visual search tasks. We conducted the eye-tracking experiments with natural and synthetic images and found the dependency of fixation duration on difficulty of the task and the lengths of preceding and succeeding saccades. In order to explain it, we developed the novel control model of human eye-movements that incorporates continuous-time decision making, observation and update of belief state. This model is based on Partially Observable Markov Decision Process with delay in observation and saccade execution that accounts for a delay between eye and cortex. We validated the computational model through comparison of statistical properties of simulated and experimental eye-movement trajectories. |
2405.00159 | Aaditya Rangan | Caroline C. McGrouther, Aaditya V. Rangan, Arianna Di Florio, Jeremy
A. Elman, Nicholas J. Schork, John Kelsoe | Heterogeneity analysis provides evidence for a genetically homogeneous
subtype of bipolar-disorder | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Bipolar disorder is a highly heritable brain disorder which affects an
estimated 50 million people worldwide. Due to recent advances in genotyping
technology and bioinformatics methodology, as well as the increase in the
overall amount of available data, our understanding of the genetic
underpinnings of BD has improved. A growing consensus is that BD is polygenic
and heterogeneous, but the specifics of that heterogeneity are not yet well
understood. Here we use a recently developed technique to investigate the
genetic heterogeneity of bipolar disorder. We find strong statistical evidence
for a `bicluster': a subset of bipolar subjects that exhibits a
disease-specific genetic pattern. The structure illuminated by this bicluster
replicates in several other data-sets and can be used to improve BD
risk-prediction algorithms. We believe that this bicluster is likely to
correspond to a genetically-distinct subtype of BD. More generally, we believe
that our biclustering approach is a promising means of untangling the
underlying heterogeneity of complex disease without the need for reliable
subphenotypic data.
| [
{
"created": "Tue, 30 Apr 2024 19:16:08 GMT",
"version": "v1"
}
] | 2024-05-02 | [
[
"McGrouther",
"Caroline C.",
""
],
[
"Rangan",
"Aaditya V.",
""
],
[
"Di Florio",
"Arianna",
""
],
[
"Elman",
"Jeremy A.",
""
],
[
"Schork",
"Nicholas J.",
""
],
[
"Kelsoe",
"John",
""
]
] | Bipolar disorder is a highly heritable brain disorder which affects an estimated 50 million people worldwide. Due to recent advances in genotyping technology and bioinformatics methodology, as well as the increase in the overall amount of available data, our understanding of the genetic underpinnings of BD has improved. A growing consensus is that BD is polygenic and heterogeneous, but the specifics of that heterogeneity are not yet well understood. Here we use a recently developed technique to investigate the genetic heterogeneity of bipolar disorder. We find strong statistical evidence for a `bicluster': a subset of bipolar subjects that exhibits a disease-specific genetic pattern. The structure illuminated by this bicluster replicates in several other data-sets and can be used to improve BD risk-prediction algorithms. We believe that this bicluster is likely to correspond to a genetically-distinct subtype of BD. More generally, we believe that our biclustering approach is a promising means of untangling the underlying heterogeneity of complex disease without the need for reliable subphenotypic data. |
2002.04806 | Terrence Sejnowski | Terrence J. Sejnowski | The Unreasonable Effectiveness of Deep Learning in Artificial
Intelligence | null | Proceedings of the National Academy of Sciences U.S.A. (2020)
https://www.pnas.org/content/early/2020/01/23/1907373117 | 10.1073/pnas.1907373117 | null | q-bio.NC cs.AI cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning networks have been trained to recognize speech, caption
photographs and translate text between languages at high levels of performance.
Although applications of deep learning networks to real world problems have
become ubiquitous, our understanding of why they are so effective is lacking.
These empirical results should not be possible according to sample complexity
in statistics and non-convex optimization theory. However, paradoxes in the
training and effectiveness of deep learning networks are being investigated and
insights are being found in the geometry of high-dimensional spaces. A
mathematical theory of deep learning would illuminate how they function, allow
us to assess the strengths and weaknesses of different network architectures
and lead to major improvements. Deep learning has provided natural ways for
humans to communicate with digital devices and is foundational for building
artificial general intelligence. Deep learning was inspired by the architecture
of the cerebral cortex and insights into autonomy and general intelligence may
be found in other brain regions that are essential for planning and survival,
but major breakthroughs will be needed to achieve these goals.
| [
{
"created": "Wed, 12 Feb 2020 05:25:15 GMT",
"version": "v1"
}
] | 2020-02-13 | [
[
"Sejnowski",
"Terrence J.",
""
]
] | Deep learning networks have been trained to recognize speech, caption photographs and translate text between languages at high levels of performance. Although applications of deep learning networks to real world problems have become ubiquitous, our understanding of why they are so effective is lacking. These empirical results should not be possible according to sample complexity in statistics and non-convex optimization theory. However, paradoxes in the training and effectiveness of deep learning networks are being investigated and insights are being found in the geometry of high-dimensional spaces. A mathematical theory of deep learning would illuminate how they function, allow us to assess the strengths and weaknesses of different network architectures and lead to major improvements. Deep learning has provided natural ways for humans to communicate with digital devices and is foundational for building artificial general intelligence. Deep learning was inspired by the architecture of the cerebral cortex and insights into autonomy and general intelligence may be found in other brain regions that are essential for planning and survival, but major breakthroughs will be needed to achieve these goals. |
1603.01982 | Serge Moulin | Serge Moulin, Nicolas Seux, St\'ephane Chr\'etien, Christophe Guyeux,
and Emmanuelle Lerat | Simulation based estimation of branching models for LTR retrotransposons | 7 pages, 3 figures, 7 tables. Submitted to "Bioinformatics" on March 1,
2016 | null | null | null | q-bio.CB q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: LTR retrotransposons are mobile elements that are able, like
retroviruses, to copy and move inside eukaryotic genomes. In the present work,
we propose a branching model for studying the propagation of LTR
retrotransposons in these genomes. This model makes it possible to take into
account both the positions and the degradation levels of LTR retrotransposon
copies. In our model, the duplication rate is also allowed to vary with the degradation level.
Results: Various functions have been implemented in order to simulate their
spread and visualization tools are proposed. Based on these simulation tools,
we show that an accurate estimation of the parameters of this propagation model
can be performed. We applied this method to the study of the spread of the
transposable elements ROO, GYPSY, and DM412 on a chromosome of
\textit{Drosophila melanogaster}.
Availability: Our proposal has been implemented using Python software. Source
code is freely available on the web at
https://github.com/SergeMOULIN/retrotransposons-spread.
| [
{
"created": "Mon, 7 Mar 2016 09:39:54 GMT",
"version": "v1"
}
] | 2016-03-08 | [
[
"Moulin",
"Serge",
""
],
[
"Seux",
"Nicolas",
""
],
[
"Chrétien",
"Stéphane",
""
],
[
"Guyeux",
"Christophe",
""
],
[
"Lerat",
"Emmanuelle",
""
]
] | Motivation: LTR retrotransposons are mobile elements that are able, like retroviruses, to copy and move inside eukaryotic genomes. In the present work, we propose a branching model for studying the propagation of LTR retrotransposons in these genomes. This model allows to take into account both positions and degradations of LTR retrotransposons copies. In our model, the duplication rate is also allowed to vary with the degradation level. Results: Various functions have been implemented in order to simulate their spread and visualization tools are proposed. Based on these simulation tools, we show that an accurate estimation of the parameters of this propagation model can be performed. We applied this method to the study of the spread of the transposable elements ROO, GYPSY, and DM412 on a chromosome of \textit{Drosophila melanogaster}. Availability: Our proposal has been implemented using Python software. Source code is freely available on the web at https://github.com/SergeMOULIN/retrotransposons-spread. |
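Outside the record itself, a toy sketch of the kind of branching simulation the abstract above describes: each retrotransposon copy may duplicate at a rate that depends on its degradation level, and may accumulate degradation each generation. The rates, the duplication function, and the discrete-time scheme are illustrative assumptions, not the authors' model (their implementation lives at the GitHub URL given in the record).

```python
import random

def simulate_spread(n_gen, dup_rate, degr_prob, seed=0):
    """Toy branching process: `copies` holds the degradation level of each
    element copy (0 = intact). Per generation, a copy duplicates with
    probability dup_rate(level) and degrades one level with prob. degr_prob."""
    random.seed(seed)
    copies = [0]
    for _ in range(n_gen):
        offspring = [lv for lv in copies if random.random() < dup_rate(lv)]
        # True counts as 1, so a fraction of copies gain one degradation level
        copies = [lv + (random.random() < degr_prob) for lv in copies] + offspring
    return copies

# Illustrative choice: duplication rate decays with degradation level
pop = simulate_spread(20, lambda lv: 0.3 / (1 + lv), degr_prob=0.1)
print(len(pop), max(pop))
```

Letting the duplication rate vary with degradation, as the abstract specifies, is what the `dup_rate(lv)` callback captures; fitting such a model to observed copies would then amount to estimating that function and `degr_prob` from simulated spreads.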
0804.2646 | Iannis Kominis | I. K. Kominis | Quantum Zeno Effect Underpinning the Radical-Ion-Pair Mechanism of Avian
Magnetoreception | 21 pages, 3 figures | null | null | null | q-bio.BM quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The intricate biochemical processes underlying avian magnetoreception, the
sensory ability of migratory birds to navigate using Earth's magnetic field,
have been narrowed down to spin-dependent recombination of radical-ion pairs to
be found in avian species retinal proteins. The avian magnetic field detection
is governed by the interplay between the magnetic interactions of the radicals'
unpaired electrons and the radicals' recombination dynamics. Critical to this
mechanism is the long lifetime of the radical-pair spin coherence, so that the
weak geomagnetic field will have a chance to signal its presence. It is here
shown that a fundamental quantum phenomenon, the quantum Zeno effect, is at the
basis of the radical-ion-pair magnetoreception mechanism. The quantum Zeno
effect naturally leads to long spin coherence lifetimes, without any
constraints on the system's physical parameters, ensuring the robustness of this
sensory mechanism. Basic experimental observations regarding avian magnetic
sensitivity are seamlessly derived. These include the magnetic sensitivity
functional window and the heading error of oriented bird ensembles, which so
far evaded theoretical justification. The findings presented here could be
highly relevant to similar mechanisms at work in photosynthetic reactions. They
also trigger fundamental questions about the evolutionary mechanisms that
enabled avian species to make optimal use of quantum measurement laws.
| [
{
"created": "Wed, 16 Apr 2008 17:17:23 GMT",
"version": "v1"
}
] | 2008-04-17 | [
[
"Kominis",
"I. K.",
""
]
] | The intricate biochemical processes underlying avian magnetoreception, the sensory ability of migratory birds to navigate using earths magnetic field, have been narrowed down to spin-dependent recombination of radical-ion pairs to be found in avian species retinal proteins. The avian magnetic field detection is governed by the interplay between magnetic interactions of the radicals unpaired electrons and the radicals recombination dynamics. Critical to this mechanism is the long lifetime of the radical-pair spin coherence, so that the weak geomagnetic field will have a chance to signal its presence. It is here shown that a fundamental quantum phenomenon, the quantum Zeno effect, is at the basis of the radical-ion-pair magnetoreception mechanism. The quantum Zeno effect naturally leads to long spin coherence lifetimes, without any constraints on the systems physical parameters, ensuring the robustness of this sensory mechanism. Basic experimental observations regarding avian magnetic sensitivity are seamlessly derived. These include the magnetic sensitivity functional window and the heading error of oriented bird ensembles, which so far evaded theoretical justification. The findings presented here could be highly relevant to similar mechanisms at work in photosynthetic reactions. They also trigger fundamental questions about the evolutionary mechanisms that enabled avian species to make optimal use of quantum measurement laws. |
1202.4751 | Wonsang You | Wonsang You, Joerg Stadler | Fractal-based Correlation Analysis for Resting State Functional
Connectivity of the Rat Brain in Functional MRI | CBBS Educational Workshop on Resting State fMRI 2010 | null | null | null | q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most studies of functional connectivity have been done by analyzing the
brain's hemodynamic response to a stimulation. On the other hand, the
low-frequency spontaneous fluctuations in the blood oxygen level dependent
(BOLD) signals of functional MRI have been observed in the resting state.
However, the BOLD signals in resting state are significantly corrupted by huge
noises arising from cardiac pulsation, respiration, subject motion, scanner,
and so forth. In particular, the noise components are stronger in the rat brain
than in the human brain. To overcome such an artifact, we assumed that fractal
behavior in BOLD signals reflects low frequency neural activity, and applied
the theorem that the wavelet correlation spectrum between long-memory
processes is scale-invariant over low frequency scales. Here, we report an
experiment that shows special correlation patterns not only in correlation of
scaling coefficients in very low-frequency band (less than 0.0078Hz) but also
in asymptotic wavelet correlation. In addition, we show the distribution of the
Hurst exponents in the rat brain.
| [
{
"created": "Tue, 21 Feb 2012 20:55:07 GMT",
"version": "v1"
}
] | 2012-02-22 | [
[
"You",
"Wonsang",
""
],
[
"Stadler",
"Joerg",
""
]
] | The most studies on functional connectivity have been done by analyzing the brain's hemodynamic response to a stimulation. On the other hand, the low-frequency spontaneous fluctuations in the blood oxygen level dependent (BOLD) signals of functional MRI have been observed in the resting state. However, the BOLD signals in resting state are significantly corrupted by huge noises arising from cardiac pulsation, respiration, subject motion, scanner, and so forth. Especially, the noise compounds are stronger in the rat brain than in the human brain. To overcome such an artifact, we assumed that fractal behavior in BOLD signals reflects low frequency neural activity, and applied the theorem such that the wavelet correlation spectrum between long memory processes is scale-invariant over low frequency scales. Here, we report an experiment that shows special correlation patterns not only in correlation of scaling coefficients in very low-frequency band (less than 0.0078Hz) but also in asymptotic wavelet correlation. In addition, we show the distribution of the Hurst exponents in the rat brain. |
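As a companion to the record above, a minimal rescaled-range (R/S) estimator of the Hurst exponent — the quantity whose brain-wide distribution the abstract reports. This is a generic textbook estimator run on synthetic white noise, not the authors' wavelet-based pipeline; the window sizes and the demo signal are arbitrary choices.

```python
import math
import random

def hurst_rs(x):
    """Rescaled-range (R/S) estimate of the Hurst exponent of series x."""
    n = len(x)
    points = []
    for m in (n // 2, n // 4, n // 8, n // 16):
        rs = []
        for start in range(0, n - m + 1, m):
            seg = x[start:start + m]
            mean = sum(seg) / m
            dev = [v - mean for v in seg]
            cum, walk = 0.0, []
            for d in dev:                      # cumulative deviation walk
                cum += d
                walk.append(cum)
            r = max(walk) - min(walk)          # range of the walk
            s = math.sqrt(sum(d * d for d in dev) / m)  # segment std. dev.
            if s > 0:
                rs.append(r / s)
        points.append((math.log(m), math.log(sum(rs) / len(rs))))
    # Least-squares slope of log(R/S) vs log(window) is the Hurst estimate
    mx = sum(p[0] for p in points) / len(points)
    my = sum(p[1] for p in points) / len(points)
    return (sum((p[0] - mx) * (p[1] - my) for p in points)
            / sum((p[0] - mx) ** 2 for p in points))

random.seed(1)
white = [random.gauss(0, 1) for _ in range(1024)]
# Expect a value in the vicinity of 0.5 for white noise
# (R/S is upward-biased on short series)
print(round(hurst_rs(white), 2))
```

Long-memory signals of the kind the abstract invokes would push the estimate above 0.5; mapping such estimates voxel-by-voxel is what produces the distribution of Hurst exponents the record describes.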
2212.14588 | Laure Peter-Derex | Laurent Derex (HESPER), Sylvain Rheims (CRNL, HCL), Laure Peter-Derex
(HCL, UCBL, CRNL) | Seizures and epilepsy after intracerebral hemorrhage: an update | null | Journal of Neurology, 2021, 268 (7), pp.2605-2615 | 10.1007/s00415-021-10439-3 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Seizures are common after intracerebral hemorrhage, occurring in 6 to 15% of
the patients, mostly in the first 72 hours. Their incidence reaches 30% when
subclinical or non-convulsive seizures are diagnosed by continuous
electroencephalogram. Several risk factors for seizures have been described
including cortical location of intracerebral hemorrhage, presence of
intraventricular hemorrhage, total hemorrhage volume, and history of alcohol
abuse. Seizures after intracerebral hemorrhage may theoretically be harmful as
they can lead to sudden blood pressure fluctuations, increase intracranial
pressure and neuronal injury due to increased metabolic demand. Some recent
studies suggest that acute symptomatic seizures (occurring within seven days of
stroke) are associated with worse functional outcome and increased risk of
death despite accounting for other known prognostic factors such as age and
baseline hemorrhage volume. However, the impact of seizures on prognosis is
still debated and it remains unclear if treating or preventing seizures might
lead to improved clinical outcome. Thus, the currently available scientific
evidence does not support the routine use of antiseizure medication as primary
prevention among patients with intracerebral hemorrhage. Only prospective
adequately powered randomized controlled trials will be able to answer whether
seizure prophylaxis in the acute or longer term settings is beneficial or not
in patients with intracerebral hemorrhage.
| [
{
"created": "Fri, 30 Dec 2022 07:53:04 GMT",
"version": "v1"
}
] | 2023-01-02 | [
[
"Derex",
"Laurent",
"",
"HESPER"
],
[
"Rheims",
"Sylvain",
"",
"CRNL, HCL"
],
[
"Peter-Derex",
"Laure",
"",
"HCL, UCBL, CRNL"
]
] | Seizures are common after intracerebral hemorrhage, occurring in 6 to 15% of the patients, mostly in the first 72 hours. Their incidence reaches 30% when subclinical or non-convulsive seizures are diagnosed by continuous electroencephalogram. Several risk factors for seizures have been described including cortical location of intracerebral hemorrhage, presence of intraventricular hemorrhage, total hemorrhage volume, and history of alcohol abuse. Seizures after intracerebral hemorrhage may theoretically be harmful as they can lead to sudden blood pressure fluctuations, increase intracranial pressure and neuronal injury due to increased metabolic demand. Some recent studies suggest that acute symptomatic seizures (occurring within seven days of stroke) are associated with worse functional outcome and increased risk of death despite accounting for other known prognostic factors such as age and baseline hemorrhage volume. However, the impact of seizures on prognosis is still debated and it remains unclear if treating or preventing seizures might lead to improved clinical outcome. Thus, the currently available scientific evidence does not support the routine use of antiseizure medication as primary prevention among patients with intracerebral hemorrhage. Only prospective adequately powered randomized controlled trials will be able to answer whether seizure prophylaxis in the acute or longer term settings is beneficial or not in patients with intracerebral hemorrhage. |
1106.2252 | Francois Treussart | Anna Alhaddad, Marie-Pierre Adam, Jacques Botsoa, G\'eraldine
Dantelle, Sandrine Perruchas, Thierry Gacoin, Christelle Mansuy, Solange
Lavielle, Claude Malvy, Fran\c{c}ois Treussart, and Jean-R\'emi Bertrand | Nanodiamond as a vector for siRNA delivery to Ewing sarcoma cells | null | null | null | null | q-bio.BM physics.bio-ph physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigated the ability of diamond nanoparticles (nanodiamonds, NDs) to
deliver small interfering RNA (siRNA) in Ewing sarcoma cells, in the
perspective of in vivo anti-cancer nucleic acid drug delivery. siRNA was
adsorbed onto NDs previously coated with cationic polymer. Cell uptake of NDs
has been demonstrated by taking advantage of the NDs' intrinsic fluorescence coming
from embedded color center defects. Cell toxicity of these coated NDs was shown
to be low. Consistent with the internalization efficacy, we have shown a
specific inhibition of EWS/Fli-1 gene expression at the mRNA and protein level
by the ND vectorized siRNA in a serum containing medium.
| [
{
"created": "Sat, 11 Jun 2011 16:14:19 GMT",
"version": "v1"
},
{
"created": "Sat, 30 Jul 2011 22:05:18 GMT",
"version": "v2"
}
] | 2011-08-02 | [
[
"Alhaddad",
"Anna",
""
],
[
"Adam",
"Marie-Pierre",
""
],
[
"Botsoa",
"Jacques",
""
],
[
"Dantelle",
"Géraldine",
""
],
[
"Perruchas",
"Sandrine",
""
],
[
"Gacoin",
"Thierry",
""
],
[
"Mansuy",
"Christelle",
""
],
[
"Lavielle",
"Solange",
""
],
[
"Malvy",
"Claude",
""
],
[
"Treussart",
"François",
""
],
[
"Bertrand",
"Jean-Rémi",
""
]
] | We investigated the ability of diamond nanoparticles (nanodiamonds, NDs) to deliver small interfering RNA (siRNA) in Ewing sarcoma cells, in the perspective of in vivo anti-cancer nucleic acid drug delivery. siRNA was adsorbed onto NDs previously coated with cationic polymer. Cell uptake of NDs has been demonstrated by taking advantage of NDs intrinsic fluorescence coming from embedded color center defects. Cell toxicity of these coated NDs was shown to be low. Consistent with the internalization efficacy, we have shown a specific inhibition of EWS/Fli-1 gene expression at the mRNA and protein level by the ND vectorized siRNA in a serum containing medium. |
2402.12604 | Benjamin Peters | Gunnar Blohm, Benjamin Peters, Ralf Haefner, Leyla Isik, Nikolaus
Kriegeskorte, Jennifer S. Lieberman, Carlos R. Ponce, Gemma Roig, Megan A. K.
Peters | Generative Adversarial Collaborations: A practical guide for conference
organizers and participating scientists | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Generative adversarial collaborations (GACs) are a form of formal teamwork
between groups of scientists with diverging views. The goal of GACs is to
identify and ultimately resolve the most important challenges, controversies,
and exciting theoretical and empirical debates in a given research field. A GAC
team would develop specific, agreed-upon avenues to resolve debates in order to
move a field of research forward in a collaborative way. Such adversarial
collaborations have many benefits and opportunities but also come with
challenges. Here, we use our experience from (1) creating and running the GAC
program for the Cognitive Computational Neuroscience (CCN) conference and (2)
implementing and leading GACs on particular scientific problems to provide a
practical guide for future GAC program organizers and leaders of individual
GACs.
| [
{
"created": "Mon, 19 Feb 2024 23:54:34 GMT",
"version": "v1"
}
] | 2024-02-21 | [
[
"Blohm",
"Gunnar",
""
],
[
"Peters",
"Benjamin",
""
],
[
"Haefner",
"Ralf",
""
],
[
"Isik",
"Leyla",
""
],
[
"Kriegeskorte",
"Nikolaus",
""
],
[
"Lieberman",
"Jennifer S.",
""
],
[
"Ponce",
"Carlos R.",
""
],
[
"Roig",
"Gemma",
""
],
[
"Peters",
"Megan A. K.",
""
]
] | Generative adversarial collaborations (GACs) are a form of formal teamwork between groups of scientists with diverging views. The goal of GACs is to identify and ultimately resolve the most important challenges, controversies, and exciting theoretical and empirical debates in a given research field. A GAC team would develop specific, agreed-upon avenues to resolve debates in order to move a field of research forward in a collaborative way. Such adversarial collaborations have many benefits and opportunities but also come with challenges. Here, we use our experience from (1) creating and running the GAC program for the Cognitive Computational Neuroscience (CCN) conference and (2) implementing and leading GACs on particular scientific problems to provide a practical guide for future GAC program organizers and leaders of individual GACs. |
1102.4540 | Alejandro Herrada | E. Alejandro Herrada, V\'ictor M. Egu\'iluz, Emilio
Hern\'andez-Garc\'ia, Carlos M. Duarte | Scaling properties of protein family phylogenies | Replaced with final published version | BMC Evolutionary Biology 11, 155 (2011) | 10.1186/1471-2148-11-155 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the classical questions in evolutionary biology is how evolutionary
processes are coupled at the gene and species level. With this motivation, we
compare the topological properties (mainly the depth scaling, as a
characterization of balance) of a large set of protein phylogenies with a set
of species phylogenies. The comparative analysis shows that both sets of
phylogenies share remarkably similar scaling behavior, suggesting the
universality of branching rules and of the evolutionary processes that drive
biological diversification from gene to species level. In order to explain such
generality, we propose a simple model which allows us to estimate the
proportion of evolvability/robustness needed to approximate the scaling
behavior observed in the phylogenies, highlighting the relevance of the
robustness of a biological system (species or protein) in the scaling
properties of the phylogenetic trees. Thus, the rules that govern the
incapability of a biological system to diversify are equally relevant both at
the gene and at the species level.
| [
{
"created": "Tue, 22 Feb 2011 15:46:08 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Feb 2011 08:55:01 GMT",
"version": "v2"
},
{
"created": "Mon, 25 Jul 2011 18:15:48 GMT",
"version": "v3"
}
] | 2011-07-26 | [
[
"Herrada",
"E. Alejandro",
""
],
[
"Eguíluz",
"Víctor M.",
""
],
[
"Hernández-García",
"Emilio",
""
],
[
"Duarte",
"Carlos M.",
""
]
] | One of the classical questions in evolutionary biology is how evolutionary processes are coupled at the gene and species level. With this motivation, we compare the topological properties (mainly the depth scaling, as a characterization of balance) of a large set of protein phylogenies with a set of species phylogenies. The comparative analysis shows that both sets of phylogenies share remarkably similar scaling behavior, suggesting the universality of branching rules and of the evolutionary processes that drive biological diversification from gene to species level. In order to explain such generality, we propose a simple model which allows us to estimate the proportion of evolvability/robustness needed to approximate the scaling behavior observed in the phylogenies, highlighting the relevance of the robustness of a biological system (species or protein) in the scaling properties of the phylogenetic trees. Thus, the rules that govern the incapability of a biological system to diversify are equally relevant both at the gene and at the species level. |
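The "evolvability/robustness" branching model mentioned in the abstract can be sketched with a toy simulation (an illustrative stand-in, not the authors' published model): each active lineage either diversifies with probability `p_evolve` or becomes permanently robust, and tree balance is read off the leaf depths.

```python
import random

def grow_tree(n_leaves, p_evolve, rng):
    """Grow a random tree: an active leaf splits with probability
    p_evolve; otherwise it is frozen ('robust', unable to diversify).
    Returns the depths of all leaves when growth stops."""
    active = [0]   # depths of leaves still able to diversify
    frozen = []    # depths of leaves that lost evolvability
    while active and len(active) + len(frozen) < n_leaves:
        i = rng.randrange(len(active))
        d = active.pop(i)
        if rng.random() < p_evolve:
            active.extend([d + 1, d + 1])   # diversification event
        else:
            frozen.append(d)                # lineage frozen
    return active + frozen

rng = random.Random(42)
depths = grow_tree(200, 0.7, rng)
mean_depth = sum(depths) / len(depths)
```

Varying `p_evolve` and the tree size lets one probe how the mean depth scales with the number of leaves, which is the kind of balance statistic the comparison in the abstract relies on.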
1410.2358 | Vladimir Gavrikov | Vladimir L. Gavrikov | An application of bole surface growth model: a transitional status of
-3/2 rule | A draft manuscript | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the research scope of forest stand self-thinning, the analysis reveals a
broad picture in which the -3/2 rule takes a definite and special place. The
application of the simple geometrical model to the Douglas-fir and Scots pine
data suggests that the slope of the self-thinning curve will not remain
constant during the course of growth and self-thinning of a single forest
stand. Most probable, at the initial stages of stand growth the slope will be
less than -3/2 and at old ages of the stand the slope will be higher than -3/2.
Inevitably, a time will come when the slope is exactly equals -3/2. In other
words, the slope -3/2 is an obligatory state in the course of self-thinning of
a forest stand. At the very time of -3/2 slope two particular features coincide
with it. One is that the total bole surface area remains constant. The length
of the constancy stage would probably vary with species, initial stand
densities, their spatial arrangements, conditions of growth, and other specific
factors. Another feature of the time is that a geometric similarity in the
growth of the forest stand takes place, which is not in a contradiction with
the -3/2 rule as it had been formulated by its authors. To put it shortly, the
slope -3/2: i) is a very specific and obligatory state in the process of forest
stand growth and ii) is not an asymptote but rather a transitional point (span)
in the time of growth. These two assertions may be called a transitional status
of the -3/2 rule. The geometric model of a forest stand (Gavrikov, 2014) has
proved to be rather helpful at analyzing of real forest stand structure and
dynamics. Despite of its extreme simplicity (it uses cones as representations
of trees) the model looks like having enough similarity with real even-aged
forests since the model's predictions are often reasonably close to measured
values of power exponents.
| [
{
"created": "Thu, 9 Oct 2014 05:18:08 GMT",
"version": "v1"
}
] | 2014-10-10 | [
[
"Gavrikov",
"Vladimir L.",
""
]
] ] | In the research scope of forest stand self-thinning, the analysis reveals a broad picture in which the -3/2 rule takes a definite and special place. The application of the simple geometrical model to the Douglas-fir and Scots pine data suggests that the slope of the self-thinning curve will not remain constant during the course of growth and self-thinning of a single forest stand. Most probably, at the initial stages of stand growth the slope will be less than -3/2 and at old ages of the stand the slope will be higher than -3/2. Inevitably, a time will come when the slope exactly equals -3/2. In other words, the slope -3/2 is an obligatory state in the course of self-thinning of a forest stand. At the very time of the -3/2 slope, two particular features coincide with it. One is that the total bole surface area remains constant. The length of the constancy stage would probably vary with species, initial stand densities, their spatial arrangements, conditions of growth, and other specific factors. Another feature of this time is that a geometric similarity in the growth of the forest stand takes place, which is not in contradiction with the -3/2 rule as it was formulated by its authors. To put it shortly, the slope -3/2: i) is a very specific and obligatory state in the process of forest stand growth and ii) is not an asymptote but rather a transitional point (span) in the time of growth. These two assertions may be called the transitional status of the -3/2 rule. The geometric model of a forest stand (Gavrikov, 2014) has proved to be rather helpful in analyzing real forest stand structure and dynamics. Despite its extreme simplicity (it uses cones as representations of trees), the model appears to have enough similarity with real even-aged forests, since its predictions are often reasonably close to measured values of power exponents.
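The cone-based reasoning behind the abstract's two claims can be checked numerically: under geometric similarity (height proportional to basal radius) and crown-limited density N proportional to 1/r^2, individual volume scales as N^(-3/2) while total bole (cone) surface per unit ground area stays constant. The sketch below is a reading of the model's logic, not code from Gavrikov (2014):

```python
import math

def stand_metrics(r, shape=10.0):
    """Cone 'tree' of basal radius r with height h = shape * r
    (geometric similarity). Density assumes crowns tile the ground,
    so N is proportional to 1/r^2."""
    h = shape * r
    volume = math.pi * r * r * h / 3.0        # proxy for individual mass
    lateral = math.pi * r * math.hypot(r, h)  # bole (cone) lateral surface
    density = 1.0 / (math.pi * r * r)         # trees per unit area
    return density, volume, lateral

# Slope of log(mean volume) vs log(density) between two stand states
(d1, v1, s1), (d2, v2, s2) = stand_metrics(0.1), stand_metrics(0.2)
slope = (math.log(v2) - math.log(v1)) / (math.log(d2) - math.log(d1))
total_surface_1, total_surface_2 = d1 * s1, d2 * s2
```

The slope comes out to exactly -3/2, and the per-area bole surface `density * lateral` equals sqrt(1 + shape^2) independently of r, matching the claim that the total bole surface area remains constant at the -3/2 stage.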
1008.1887 | Dirk Stiefs | Dirk Stiefs, George A. K. van Voorn, Bob W. Kooi, Ulrike Feudel, Thilo
Gross | Food Quality in Producer-Grazer Models: A Generalized Analysis | Online appendixes included | Am Nat 2010. Vol. 176, pp. 367-380 | 10.1086/655429 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stoichiometric constraints play a role in the dynamics of natural
populations, but are not explicitly considered in most mathematical models.
Recent theoretical works suggest that these constraints can have a significant
impact and should not be neglected. However, it is not yet resolved how
stoichiometry should be integrated in population dynamical models, as different
modeling approaches are found to yield qualitatively different results. Here we
investigate a unifying framework that reveals the differences and commonalities
between previously proposed models for producer-grazer systems. Our analysis
reveals that stoichiometric constraints affect the dynamics mainly by
increasing the intraspecific competition between producers and by introducing a
variable biomass conversion efficiency. The intraspecific competition has a
strongly stabilizing effect on the system, whereas the variable conversion
efficiency resulting from a variable food quality is the main determinant for
the nature of the instability once destabilization occurs. Only if the food
quality is high can an oscillatory instability, as in the classical paradox of
enrichment, occur. While the generalized model reveals that the generic
insights remain valid in a large class of models, we show that other details
such as the specific sequence of bifurcations encountered in enrichment
scenarios can depend sensitively on assumptions made in modeling stoichiometric
constraints.
| [
{
"created": "Wed, 11 Aug 2010 11:20:44 GMT",
"version": "v1"
}
] | 2010-08-12 | [
[
"Stiefs",
"Dirk",
""
],
[
"van Voorn",
"George A. K.",
""
],
[
"Kooi",
"Bob W.",
""
],
[
"Feudel",
"Ulrike",
""
],
[
"Gross",
"Thilo",
""
]
] ] | Stoichiometric constraints play a role in the dynamics of natural populations, but are not explicitly considered in most mathematical models. Recent theoretical works suggest that these constraints can have a significant impact and should not be neglected. However, it is not yet resolved how stoichiometry should be integrated in population dynamical models, as different modeling approaches are found to yield qualitatively different results. Here we investigate a unifying framework that reveals the differences and commonalities between previously proposed models for producer-grazer systems. Our analysis reveals that stoichiometric constraints affect the dynamics mainly by increasing the intraspecific competition between producers and by introducing a variable biomass conversion efficiency. The intraspecific competition has a strongly stabilizing effect on the system, whereas the variable conversion efficiency resulting from a variable food quality is the main determinant for the nature of the instability once destabilization occurs. Only if the food quality is high can an oscillatory instability, as in the classical paradox of enrichment, occur. While the generalized model reveals that the generic insights remain valid in a large class of models, we show that other details such as the specific sequence of bifurcations encountered in enrichment scenarios can depend sensitively on assumptions made in modeling stoichiometric constraints.
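A minimal way to see the role of a variable conversion efficiency is a toy producer-grazer system in which grazer efficiency scales with a food-quality parameter q (the functional forms here are hypothetical illustrations, not the generalized model analyzed in the paper):

```python
def simulate(q, steps=20000, dt=0.01):
    """Euler-integrate a toy producer-grazer model in which the
    grazer's biomass conversion efficiency scales with food quality
    q in (0, 1]. Returns final producer and grazer biomass."""
    p, z = 1.0, 0.5           # initial producer and grazer biomass
    r, k = 1.0, 2.0           # producer growth rate and capacity
    a, h = 1.5, 0.5           # attack rate and handling time
    e_max, m = 0.6, 0.25      # max conversion efficiency, grazer loss
    for _ in range(steps):
        grazing = a * p * z / (1.0 + a * h * p)  # Holling type II intake
        e = e_max * q                            # quality-scaled efficiency
        dp = r * p * (1.0 - p / k) - grazing
        dz = e * grazing - m * z
        p = max(p + dt * dp, 0.0)
        z = max(z + dt * dz, 0.0)
    return p, z

p_high, z_high = simulate(q=1.0)  # high quality: grazer persists
p_low, z_low = simulate(q=0.3)    # low quality: grazer starves out
```

Even this caricature reproduces the qualitative point of the abstract: lowering food quality lowers the effective conversion efficiency, and below a threshold the grazer cannot persist at all, while quality also shifts where the system sits relative to any enrichment-driven instability.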
2002.02394 | Sahand Salamat | Sahand Salamat and Tajana Rosing | FPGA Acceleration of Sequence Alignment: A Survey | null | null | null | null | q-bio.QM cs.AR q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genomics is changing our understanding of humans, evolution, diseases, and
medicines, to name but a few. As sequencing technology develops, collecting
DNA sequences takes less time, thereby generating more genetic data every day.
Today the rate of generating genetic data is outpacing the growth of
computation power. Current sequencing machines can sequence 50 human genomes
per day; however, aligning the read sequences against a reference genome and
assembling the genome will take 1300 CPU hours. The main step in constructing
the genome is aligning the reads against a reference genome. Numerous
accelerators have been proposed to accelerate the DNA alignment process.
Providing massive parallelism, FPGA-based accelerators have shown great
performance in accelerating DNA alignment algorithms. Additionally, FPGA-based
accelerators provide better energy efficiency than general-purpose processors.
In this survey, we introduce three main DNA alignment algorithms and
FPGA-based implementations of these algorithms to accelerate DNA alignment.
We also compare these three alignment categories and show how accelerators
have developed over time.
| [
{
"created": "Wed, 5 Feb 2020 00:33:22 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Jul 2020 02:45:14 GMT",
"version": "v2"
}
] | 2020-07-29 | [
[
"Salamat",
"Sahand",
""
],
[
"Rosing",
"Tajana",
""
]
] ] | Genomics is changing our understanding of humans, evolution, diseases, and medicines, to name but a few. As sequencing technology develops, collecting DNA sequences takes less time, thereby generating more genetic data every day. Today the rate of generating genetic data is outpacing the growth of computation power. Current sequencing machines can sequence 50 human genomes per day; however, aligning the read sequences against a reference genome and assembling the genome will take 1300 CPU hours. The main step in constructing the genome is aligning the reads against a reference genome. Numerous accelerators have been proposed to accelerate the DNA alignment process. Providing massive parallelism, FPGA-based accelerators have shown great performance in accelerating DNA alignment algorithms. Additionally, FPGA-based accelerators provide better energy efficiency than general-purpose processors. In this survey, we introduce three main DNA alignment algorithms and FPGA-based implementations of these algorithms to accelerate DNA alignment. We also compare these three alignment categories and show how accelerators have developed over time.
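As a plain-software reference point for what the surveyed FPGA designs accelerate, here is a minimal Smith-Waterman local-alignment scorer (score-only, linear gap penalty; the scoring parameters are illustrative). FPGA implementations exploit the fact that all cells on one anti-diagonal of the matrix can be computed in parallel:

```python
def smith_waterman(seq1, seq2, match=2, mismatch=-1, gap=-2):
    """Local alignment score via the Smith-Waterman dynamic program.
    Each cell h[i][j] depends only on h[i-1][j-1], h[i-1][j], and
    h[i][j-1], so anti-diagonals can be filled in parallel in hardware."""
    rows, cols = len(seq1) + 1, len(seq2) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if seq1[i - 1] == seq2[j - 1] else mismatch
            h[i][j] = max(0,
                          h[i - 1][j - 1] + s,
                          h[i - 1][j] + gap,
                          h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

score = smith_waterman("ACACACTA", "AGCACACA")
```

The O(len(seq1) * len(seq2)) work per read pair is exactly the cost that makes software alignment the bottleneck and systolic-array FPGA designs attractive.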