id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2310.00067 | Diego Ferreiro | Ignacio E. S\'anchez, Ezequiel A. Galpern, Diego U. Ferreiro | Solvent constraints for biopolymer folding and evolution in
extraterrestrial environments | 25 pages, 6 figures | null | null | null | q-bio.BM astro-ph.EP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We propose that spontaneous folding and molecular evolution of biopolymers
are two universal aspects that must concur for life to happen. These aspects
are fundamentally related to the chemical composition of biopolymers and
crucially depend on the solvent in which they are embedded. We show that
molecular information theory and energy landscape theory allow us to explore
the limits that solvents impose on biopolymer existence. We consider 54
solvents, including water, alcohols, hydrocarbons, halogenated solvents,
aromatic solvents, and low molecular weight substances made up of elements
abundant in the universe, which may potentially take part in alternative
biochemistries. We find that along with water, there are many solvents for
which the liquid regime is compatible with biopolymer folding and evolution. We
present a ranking of the solvents in terms of biopolymer compatibility. Many of
these solvents have been found in molecular clouds or may be expected to occur
in extrasolar planets.
| [
{
"created": "Fri, 29 Sep 2023 18:18:18 GMT",
"version": "v1"
}
] | 2023-10-03 | [
[
"Sánchez",
"Ignacio E.",
""
],
[
"Galpern",
"Ezequiel A.",
""
],
[
"Ferreiro",
"Diego U.",
""
]
] | We propose that spontaneous folding and molecular evolution of biopolymers are two universal aspects that must concur for life to happen. These aspects are fundamentally related to the chemical composition of biopolymers and crucially depend on the solvent in which they are embedded. We show that molecular information theory and energy landscape theory allow us to explore the limits that solvents impose on biopolymer existence. We consider 54 solvents, including water, alcohols, hydrocarbons, halogenated solvents, aromatic solvents, and low molecular weight substances made up of elements abundant in the universe, which may potentially take part in alternative biochemistries. We find that along with water, there are many solvents for which the liquid regime is compatible with biopolymer folding and evolution. We present a ranking of the solvents in terms of biopolymer compatibility. Many of these solvents have been found in molecular clouds or may be expected to occur in extrasolar planets. |
2208.07653 | R. C. Penner | Robert Penner | Protein Geometry, Function and Mutation | 18 pages, 4 figures | null | null | null | q-bio.BM math.GT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This survey for mathematicians summarizes several works by the author on
protein geometry and protein function with applications to viral glycoproteins
in general and the spike glycoprotein of the SARS-CoV-2 virus in particular.
Background biology and biophysics are sketched. This body of work culminates in
a postulate that protein secondary structure regulates mutation, with backbone
hydrogen bonds materializing in critical regions to avoid mutation, and
disappearing from other regions to enable it.
| [
{
"created": "Tue, 16 Aug 2022 10:27:00 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Aug 2022 09:31:22 GMT",
"version": "v2"
}
] | 2022-08-19 | [
[
"Penner",
"Robert",
""
]
] | This survey for mathematicians summarizes several works by the author on protein geometry and protein function with applications to viral glycoproteins in general and the spike glycoprotein of the SARS-CoV-2 virus in particular. Background biology and biophysics are sketched. This body of work culminates in a postulate that protein secondary structure regulates mutation, with backbone hydrogen bonds materializing in critical regions to avoid mutation, and disappearing from other regions to enable it. |
1302.2710 | Ryan Hernandez | M. Cyrus Maher, Lawrence H. Uricchio, Dara G. Torgerson, Ryan D.
Hernandez | Population Genetics of Rare Variants and Complex Diseases | 36 pages, 7 figures | Hum Hered 2012;74:118-128 | 10.1159/000346826 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifying drivers of complex traits from the noisy signals of genetic
variation obtained from high throughput genome sequencing technologies is a
central challenge faced by human geneticists today. We hypothesize that the
variants involved in complex diseases are likely to exhibit non-neutral
evolutionary signatures. Uncovering the evolutionary history of all variants is
therefore of intrinsic interest for complex disease research. However, doing so
necessitates the simultaneous elucidation of the targets of natural selection
and population-specific demographic history. Here we characterize the action of
natural selection operating across complex disease categories, and use
population genetic simulations to evaluate the expected patterns of genetic
variation in large samples. We focus on populations that have experienced
historical bottlenecks followed by explosive growth (consistent with most human
populations), and describe the differences between evolutionarily deleterious
mutations and those that are neutral. Genes associated with several complex
disease categories exhibit stronger signatures of purifying selection than
non-disease genes. In addition, loci identified through genome-wide association
studies of complex traits also exhibit signatures consistent with being in
regions recurrently targeted by purifying selection. Through simulations, we
show that population bottlenecks and rapid growth enables deleterious rare
variants to persist at low frequencies just as long as neutral variants, but
low frequency and common variants tend to be much younger than neutral
variants. This has resulted in a large proportion of modern-day rare alleles
that have a deleterious effect on function, and that potentially contribute to
disease susceptibility.
| [
{
"created": "Tue, 12 Feb 2013 06:16:04 GMT",
"version": "v1"
}
] | 2013-06-18 | [
[
"Maher",
"M. Cyrus",
""
],
[
"Uricchio",
"Lawrence H.",
""
],
[
"Torgerson",
"Dara G.",
""
],
[
"Hernandez",
"Ryan D.",
""
]
] | Identifying drivers of complex traits from the noisy signals of genetic variation obtained from high throughput genome sequencing technologies is a central challenge faced by human geneticists today. We hypothesize that the variants involved in complex diseases are likely to exhibit non-neutral evolutionary signatures. Uncovering the evolutionary history of all variants is therefore of intrinsic interest for complex disease research. However, doing so necessitates the simultaneous elucidation of the targets of natural selection and population-specific demographic history. Here we characterize the action of natural selection operating across complex disease categories, and use population genetic simulations to evaluate the expected patterns of genetic variation in large samples. We focus on populations that have experienced historical bottlenecks followed by explosive growth (consistent with most human populations), and describe the differences between evolutionarily deleterious mutations and those that are neutral. Genes associated with several complex disease categories exhibit stronger signatures of purifying selection than non-disease genes. In addition, loci identified through genome-wide association studies of complex traits also exhibit signatures consistent with being in regions recurrently targeted by purifying selection. Through simulations, we show that population bottlenecks and rapid growth enable deleterious rare variants to persist at low frequencies just as long as neutral variants, but low frequency and common variants tend to be much younger than neutral variants. This has resulted in a large proportion of modern-day rare alleles that have a deleterious effect on function, and that potentially contribute to disease susceptibility. |
1311.6864 | Eftychios A. Pnevmatikakis | Eftychios A. Pnevmatikakis, Josh Merel, Ari Pakman, Liam Paninski | Bayesian spike inference from calcium imaging data | null | null | null | null | q-bio.NC q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present efficient Bayesian methods for extracting neuronal spiking
information from calcium imaging data. The goal of our methods is to sample
from the posterior distribution of spike trains and model parameters (baseline
concentration, spike amplitude etc) given noisy calcium imaging data. We
present discrete time algorithms where we sample the existence of a spike at
each time bin using Gibbs methods, as well as continuous time algorithms where
we sample over the number of spikes and their locations at an arbitrary
resolution using Metropolis-Hastings methods for point processes. We provide
Rao-Blackwellized extensions that (i) marginalize over several model parameters
and (ii) provide smooth estimates of the marginal spike posterior distribution
in continuous time. Our methods serve as complements to standard point
estimates and allow for quantification of uncertainty in estimating the
underlying spike train and model parameters.
| [
{
"created": "Wed, 27 Nov 2013 03:59:13 GMT",
"version": "v1"
}
] | 2013-11-28 | [
[
"Pnevmatikakis",
"Eftychios A.",
""
],
[
"Merel",
"Josh",
""
],
[
"Pakman",
"Ari",
""
],
[
"Paninski",
"Liam",
""
]
] | We present efficient Bayesian methods for extracting neuronal spiking information from calcium imaging data. The goal of our methods is to sample from the posterior distribution of spike trains and model parameters (baseline concentration, spike amplitude etc) given noisy calcium imaging data. We present discrete time algorithms where we sample the existence of a spike at each time bin using Gibbs methods, as well as continuous time algorithms where we sample over the number of spikes and their locations at an arbitrary resolution using Metropolis-Hastings methods for point processes. We provide Rao-Blackwellized extensions that (i) marginalize over several model parameters and (ii) provide smooth estimates of the marginal spike posterior distribution in continuous time. Our methods serve as complements to standard point estimates and allow for quantification of uncertainty in estimating the underlying spike train and model parameters. |
2311.00392 | Cornelis Klop | Cornelis Klop, Ruud Schreurs, Guido A De Jong, Edwin TM Klinkenberg,
Valeria Vespasiano, Naomi L Rood, Valerie G Niehe, Vidija Soerdjbalie-Maikoe,
Alexia Van Goethem, Bernadette S De Bakker, Thomas JJ Maal, Jitske W Nolte,
Alfred G Becking | An open-source, three-dimensional growth model of the mandible | null | null | 10.1016/j.compbiomed.2024.108455 | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The available reference data for the mandible and mandibular growth consists
primarily of two-dimensional linear or angular measurements. The aim of this
study was to create the first open-source, three-dimensional statistical shape
model of the mandible that spans the complete growth period. Computed
tomography scans of 678 mandibles from children and young adults between 0 and
22 years old were included in the model. The mandibles were segmented using a
semi-automatic or automatic (artificial intelligence-based) segmentation
method. Point correspondence among the samples was achieved by rigid
registration, followed by non-rigid registration of a symmetrical template onto
each sample. The registration process was validated with adequate results.
Principal component analysis was used to gain insight in the variation within
the dataset and to investigate age-related changes and sexual dimorphism. The
presented growth model is accessible globally and free-of-charge for
scientists, physicians and forensic investigators for any kind of purpose
deemed suitable. The versatility of the model opens up new possibilities in the
fields of oral and maxillofacial surgery, forensic sciences or biological
anthropology. In clinical settings, the model may aid diagnostic
decision-making, treatment planning and treatment evaluation.
| [
{
"created": "Wed, 1 Nov 2023 09:38:05 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Nov 2023 08:50:43 GMT",
"version": "v2"
}
] | 2024-04-17 | [
[
"Klop",
"Cornelis",
""
],
[
"Schreurs",
"Ruud",
""
],
[
"De Jong",
"Guido A",
""
],
[
"Klinkenberg",
"Edwin TM",
""
],
[
"Vespasiano",
"Valeria",
""
],
[
"Rood",
"Naomi L",
""
],
[
"Niehe",
"Valerie G",
""
],
[
"Soerdjbalie-Maikoe",
"Vidija",
""
],
[
"Van Goethem",
"Alexia",
""
],
[
"De Bakker",
"Bernadette S",
""
],
[
"Maal",
"Thomas JJ",
""
],
[
"Nolte",
"Jitske W",
""
],
[
"Becking",
"Alfred G",
""
]
] | The available reference data for the mandible and mandibular growth consists primarily of two-dimensional linear or angular measurements. The aim of this study was to create the first open-source, three-dimensional statistical shape model of the mandible that spans the complete growth period. Computed tomography scans of 678 mandibles from children and young adults between 0 and 22 years old were included in the model. The mandibles were segmented using a semi-automatic or automatic (artificial intelligence-based) segmentation method. Point correspondence among the samples was achieved by rigid registration, followed by non-rigid registration of a symmetrical template onto each sample. The registration process was validated with adequate results. Principal component analysis was used to gain insight into the variation within the dataset and to investigate age-related changes and sexual dimorphism. The presented growth model is accessible globally and free-of-charge for scientists, physicians and forensic investigators for any kind of purpose deemed suitable. The versatility of the model opens up new possibilities in the fields of oral and maxillofacial surgery, forensic sciences or biological anthropology. In clinical settings, the model may aid diagnostic decision-making, treatment planning and treatment evaluation. |
1609.01790 | Danielle Bassett | Marcelo G. Mattar and Danielle S. Bassett | Brain Network Architecture: Implications for Human Learning | 9 pages, 5 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human learning is a complex phenomenon that requires adaptive processes
across a range of temporal and spacial scales. While our understanding of those
processes at single scales has increased exponentially over the last few years,
a mechanistic understanding of the entire phenomenon has remained elusive. We
propose that progress has been stymied by the lack of a quantitative framework
that can account for the full range of neurophysiological and behavioral
dynamics both across scales in the systems and also across different types of
learning. We posit that network neuroscience offers promise in meeting this
challenge. Built on the mathematical fields of complex systems science and
graph theory, network neuroscience embraces the interconnected and hierarchical
nature of human learning, offering insights into the emergent properties of
adaptability. In this review, we discuss the utility of network neuroscience as
a tool to build a quantitative framework in which to study human learning,
which seeks to explain the full chain of events in the brain from sensory input
to motor output, being both biologically plausible and able to make predictions
about how an intervention at a single level of the chain may cause alterations
in another level of the chain. We close by laying out important remaining
challenges in network neuroscience in explicitly bridging spatial scales at
which neurophysiological processes occur, and underscore the utility of such a
quantitative framework for education and therapy.
| [
{
"created": "Wed, 7 Sep 2016 00:22:40 GMT",
"version": "v1"
}
] | 2016-09-08 | [
[
"Mattar",
"Marcelo G.",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | Human learning is a complex phenomenon that requires adaptive processes across a range of temporal and spatial scales. While our understanding of those processes at single scales has increased exponentially over the last few years, a mechanistic understanding of the entire phenomenon has remained elusive. We propose that progress has been stymied by the lack of a quantitative framework that can account for the full range of neurophysiological and behavioral dynamics both across scales in the systems and also across different types of learning. We posit that network neuroscience offers promise in meeting this challenge. Built on the mathematical fields of complex systems science and graph theory, network neuroscience embraces the interconnected and hierarchical nature of human learning, offering insights into the emergent properties of adaptability. In this review, we discuss the utility of network neuroscience as a tool to build a quantitative framework in which to study human learning, which seeks to explain the full chain of events in the brain from sensory input to motor output, being both biologically plausible and able to make predictions about how an intervention at a single level of the chain may cause alterations in another level of the chain. We close by laying out important remaining challenges in network neuroscience in explicitly bridging spatial scales at which neurophysiological processes occur, and underscore the utility of such a quantitative framework for education and therapy. |
2304.07615 | David D. Reid | Zily Burstein, David D. Reid, Peter J. Thomas, and Jack D. Cowan | Pattern Forming Mechanisms of Color Vision | 21 pages, 15 figures | null | 10.1162/netn_a_00294 | null | q-bio.NC physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | While our understanding of the way single neurons process chromatic stimuli
in the early visual pathway has advanced significantly in recent years, we do
not yet know how these cells interact to form stable representations of hue.
Drawing on physiological studies, we offer a dynamical model of how the primary
visual cortex tunes for color, hinged on intracortical interactions and
emergent network effects. After detailing the evolution of network activity
through analytical and numerical approaches, we discuss the effects of the
model's cortical parameters on the selectivity of the tuning curves. In
particular, we explore the role of the model's thresholding nonlinearity in
enhancing hue selectivity by expanding the region of stability, allowing for
the precise encoding of chromatic stimuli in early vision. Finally, in the
absence of a stimulus, the model is capable of explaining hallucinatory color
perception via a Turing-like mechanism of biological pattern formation.
| [
{
"created": "Sat, 15 Apr 2023 18:51:57 GMT",
"version": "v1"
}
] | 2023-04-18 | [
[
"Burstein",
"Zily",
""
],
[
"Reid",
"David D.",
""
],
[
"Thomas",
"Peter J.",
""
],
[
"Cowan",
"Jack D.",
""
]
] | While our understanding of the way single neurons process chromatic stimuli in the early visual pathway has advanced significantly in recent years, we do not yet know how these cells interact to form stable representations of hue. Drawing on physiological studies, we offer a dynamical model of how the primary visual cortex tunes for color, hinged on intracortical interactions and emergent network effects. After detailing the evolution of network activity through analytical and numerical approaches, we discuss the effects of the model's cortical parameters on the selectivity of the tuning curves. In particular, we explore the role of the model's thresholding nonlinearity in enhancing hue selectivity by expanding the region of stability, allowing for the precise encoding of chromatic stimuli in early vision. Finally, in the absence of a stimulus, the model is capable of explaining hallucinatory color perception via a Turing-like mechanism of biological pattern formation. |
1511.00964 | Markus Goldhacker | Markus Goldhacker, Ana Maria Tom\'e, Mark W. Greenlee, Elmar W. Lang | Frequency-resolved dynamic functional connectivity and scale-invariant
connectivity-state behavior | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Investigating temporal variability of functional connectivity is an emerging
field in connectomics. Entering dynamic functional connectivity by applying
sliding window techniques on resting-state fMRI (rs-fMRI) time courses emerged
from this topic. We introduce frequency-resolved dynamic functional
connectivity (frdFC) by means of multivariate empirical mode decomposition
(MEMD) followed up by filter-bank investigations. We develop our method on the
most canonical form by applying a sliding window approach to the intrinsic mode
functions (IMFs) resulting from MEMD. We explore two modifications:
uniform-amplitude frequency scales by normalizing the IMFs by their
instantaneous amplitude and cumulative scales. By exploiting the well
established concept of scale-invariance in resting-state parameters, we compare
our frdFC approaches. In general, we find that MEMD is capable of generating
time courses to perform frdFC and we discover that the structure of
connectivity-states is robust over frequency scales and even becomes more
evident with decreasing frequency. This scale-stability varies with the number
of extracted clusters when applying k-means. We find a scale-stability drop-off
from k = 4 to k = 5 extracted connectivity-states, which is corroborated by
null-models, simulations, theoretical considerations, filter-banks, and
scale-adjusted windows. Our filter-bank studies show that filter design is more
delicate in the rs-fMRI than in the simulated case. Besides offering a baseline
for further frdFC research, we suggest and demonstrate the use of
scale-stability as a quality criterion for connectivity-state and model
selection. We present first evidence showing that scale-invariance plays an
important role in connectivity-state considerations. A data repository of our
frequency-resolved time-series is provided.
| [
{
"created": "Tue, 3 Nov 2015 16:11:58 GMT",
"version": "v1"
}
] | 2015-11-04 | [
[
"Goldhacker",
"Markus",
""
],
[
"Tomé",
"Ana Maria",
""
],
[
"Greenlee",
"Mark W.",
""
],
[
"Lang",
"Elmar W.",
""
]
] | Investigating temporal variability of functional connectivity is an emerging field in connectomics. Entering dynamic functional connectivity by applying sliding window techniques on resting-state fMRI (rs-fMRI) time courses emerged from this topic. We introduce frequency-resolved dynamic functional connectivity (frdFC) by means of multivariate empirical mode decomposition (MEMD) followed up by filter-bank investigations. We develop our method on the most canonical form by applying a sliding window approach to the intrinsic mode functions (IMFs) resulting from MEMD. We explore two modifications: uniform-amplitude frequency scales by normalizing the IMFs by their instantaneous amplitude and cumulative scales. By exploiting the well established concept of scale-invariance in resting-state parameters, we compare our frdFC approaches. In general, we find that MEMD is capable of generating time courses to perform frdFC and we discover that the structure of connectivity-states is robust over frequency scales and even becomes more evident with decreasing frequency. This scale-stability varies with the number of extracted clusters when applying k-means. We find a scale-stability drop-off from k = 4 to k = 5 extracted connectivity-states, which is corroborated by null-models, simulations, theoretical considerations, filter-banks, and scale-adjusted windows. Our filter-bank studies show that filter design is more delicate in the rs-fMRI than in the simulated case. Besides offering a baseline for further frdFC research, we suggest and demonstrate the use of scale-stability as a quality criterion for connectivity-state and model selection. We present first evidence showing that scale-invariance plays an important role in connectivity-state considerations. A data repository of our frequency-resolved time-series is provided. |
2301.00673 | Jean-Luc Boulnois | Jean-Luc Boulnois | Predator-Prey Linear Coupling with Hybrid Species | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The classical two-species non-linear Predator-Prey system, often used in
population dynamics modeling, is expressed in terms of a single positive
coupling parameter $\lambda$. Based on standard logarithmic transformations, we
derive a novel $\lambda$-\textit{invariant} Hamiltonian resulting in two
coupled first-order ODEs for ``hybrid-species'', \textit{albeit} with one being
\textit{linear}; we thus derive a new exact, closed-form, single quadrature
solution valid for any value of $\lambda$ and the system's energy. In the
particular case $\lambda = 1$ the ODE system completely uncouples and a new,
exact, energy-only dependent simple quadrature solution is derived. In the case
$\lambda \neq 1$ an accurate practical approximation uncoupling the non-linear
system is proposed and solutions are provided in terms of explicit quadratures
together with high energy asymptotic solutions. A novel, exact, closed-form
expression of the system's oscillation period valid for any value of $\lambda$
and orbital energy is also derived; two fundamental properties of the period
are established; for $\lambda = 1$ the period is expressed in terms of a
universal energy function and shown to be the shortest.
| [
{
"created": "Thu, 29 Dec 2022 16:28:14 GMT",
"version": "v1"
}
] | 2023-01-03 | [
[
"Boulnois",
"Jean-Luc",
""
]
] | The classical two-species non-linear Predator-Prey system, often used in population dynamics modeling, is expressed in terms of a single positive coupling parameter $\lambda$. Based on standard logarithmic transformations, we derive a novel $\lambda$-\textit{invariant} Hamiltonian resulting in two coupled first-order ODEs for ``hybrid-species'', \textit{albeit} with one being \textit{linear}; we thus derive a new exact, closed-form, single quadrature solution valid for any value of $\lambda$ and the system's energy. In the particular case $\lambda = 1$ the ODE system completely uncouples and a new, exact, energy-only dependent simple quadrature solution is derived. In the case $\lambda \neq 1$ an accurate practical approximation uncoupling the non-linear system is proposed and solutions are provided in terms of explicit quadratures together with high energy asymptotic solutions. A novel, exact, closed-form expression of the system's oscillation period valid for any value of $\lambda$ and orbital energy is also derived; two fundamental properties of the period are established; for $\lambda = 1$ the period is expressed in terms of a universal energy function and shown to be the shortest. |
1010.2105 | Philipp Altrock | Bin Wu, Philipp M. Altrock, Long Wang and Arne Traulsen | Universality of weak selection | 12 pages, 3 figures, accepted for publication in Physical Review E | Physical Review E 82, 046106 (2010) | 10.1103/PhysRevE.82.046106 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weak selection, which means a phenotype is slightly advantageous over
another, is an important limiting case in evolutionary biology. Recently it has
been introduced into evolutionary game theory. In evolutionary game dynamics,
the probability to be imitated or to reproduce depends on the performance in a
game. The influence of the game on the stochastic dynamics in finite
populations is governed by the intensity of selection. In many models of both
unstructured and structured populations, a key assumption allowing analytical
calculations is weak selection, which means that all individuals perform
approximately equally well. In the weak selection limit many different
microscopic evolutionary models have the same or similar properties. How
universal is weak selection for those microscopic evolutionary processes? We
answer this question by investigating the fixation probability and the average
fixation time not only up to linear, but also up to higher orders in selection
intensity. We find universal higher order expansions, which allow a rescaling
of the selection intensity. With this, we can identify specific models which
violate (linear) weak selection results, such as the one--third rule of
coordination games in finite but large populations.
| [
{
"created": "Mon, 11 Oct 2010 13:51:44 GMT",
"version": "v1"
}
] | 2010-10-15 | [
[
"Wu",
"Bin",
""
],
[
"Altrock",
"Philipp M.",
""
],
[
"Wang",
"Long",
""
],
[
"Traulsen",
"Arne",
""
]
] | Weak selection, which means a phenotype is slightly advantageous over another, is an important limiting case in evolutionary biology. Recently it has been introduced into evolutionary game theory. In evolutionary game dynamics, the probability to be imitated or to reproduce depends on the performance in a game. The influence of the game on the stochastic dynamics in finite populations is governed by the intensity of selection. In many models of both unstructured and structured populations, a key assumption allowing analytical calculations is weak selection, which means that all individuals perform approximately equally well. In the weak selection limit many different microscopic evolutionary models have the same or similar properties. How universal is weak selection for those microscopic evolutionary processes? We answer this question by investigating the fixation probability and the average fixation time not only up to linear, but also up to higher orders in selection intensity. We find universal higher order expansions, which allow a rescaling of the selection intensity. With this, we can identify specific models which violate (linear) weak selection results, such as the one--third rule of coordination games in finite but large populations. |
1309.7546 | Liaofu Luo | Liaofu Luo | A Quantum Model on Chemically-Physically Induced Pluripotency in Stem
Cells | 3 figures and related discussions added in the revision, results
unchanged | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A quantum model on the chemically and physically induced pluripotency in stem
cells is proposed. Based on the conformational Hamiltonian and the idea of slow
variables (molecular torsions) slaving fast ones the conversion from the
differentiate state to pluripotent state is defined as the quantum transition
between conformational states. The transitional rate is calculated and an
analytical form for the rate formulas is deduced. Then the dependence of the
rate on the number of torsion angles of the gene and the magnitude of the rate
can be estimated by comparison with protein folding. The reaction equations of
the conformational change of the pluripotency genes in chemical reprogramming
are given. The characteristic time of the chemical reprogramming is calculated
and the result is consistent with experiments. The dependence of the transition
rate on physical factors such as temperature, PH value and the volume and shape
of the coherent domain is analyzed from the rate equation. It is suggested that
by decreasing the coherence degree of some pluripotency genes a more effective
approach to the physically induced pluripotency can be made.
| [
{
"created": "Sun, 29 Sep 2013 07:07:32 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Apr 2014 07:20:25 GMT",
"version": "v2"
},
{
"created": "Wed, 31 Dec 2014 02:51:30 GMT",
"version": "v3"
},
{
"created": "Mon, 5 Jan 2015 06:58:24 GMT",
"version": "v4"
},
{
"created": "Tue, 6 Jan 2015 03:16:32 GMT",
"version": "v5"
}
] | 2015-01-07 | [
[
"Luo",
"Liaofu",
""
]
] | A quantum model on the chemically and physically induced pluripotency in stem cells is proposed. Based on the conformational Hamiltonian and the idea of slow variables (molecular torsions) slaving fast ones, the conversion from the differentiated state to the pluripotent state is defined as the quantum transition between conformational states. The transitional rate is calculated and an analytical form for the rate formulas is deduced. Then the dependence of the rate on the number of torsion angles of the gene and the magnitude of the rate can be estimated by comparison with protein folding. The reaction equations of the conformational change of the pluripotency genes in chemical reprogramming are given. The characteristic time of the chemical reprogramming is calculated and the result is consistent with experiments. The dependence of the transition rate on physical factors such as temperature, pH value and the volume and shape of the coherent domain is analyzed from the rate equation. It is suggested that by decreasing the coherence degree of some pluripotency genes a more effective approach to the physically induced pluripotency can be made. |
2105.01428 | Vito Dichio | Vito Dichio, Hong-Li Zeng and Erik Aurell | Statistical Genetics in and out of Quasi-Linkage Equilibrium (Extended) | 58 pages, 13 figures | 2023 Rep. Prog. Phys. 86 052601 | 10.1088/1361-6633/acc5fa | null | q-bio.PE cond-mat.dis-nn physics.bio-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This review is about statistical genetics, an interdisciplinary topic between
statistical physics and population biology. The focus is on the phase of
quasi-linkage equilibrium (QLE). Our goals here are to clarify under which
conditions the QLE phase can be expected to hold in population biology and how
the stability of the QLE phase is lost. The QLE state, which has many
similarities to a thermal equilibrium state in statistical mechanics, was
discovered by M Kimura for a two-locus two-allele model, and was extended and
generalized to the global genome scale by (Neher and Shraiman, 2011). What we
will refer to as the Kimura-Neher-Shraiman (KNS) theory describes a population
evolving due to the mutations, recombination, natural selection and possibly
genetic drift. A QLE phase exists at sufficiently high recombination rate
and/or mutation rates with respect to selection strength. We show how in QLE it
is possible to infer the epistatic parameters of the fitness function from the
knowledge of the (dynamical) distribution of genotypes in a population. We
further consider the breakdown of the QLE regime for high enough selection
strength. We review recent results for the selection-mutation and
selection-recombination dynamics. Finally, we identify and characterize a new
phase which we call the non-random coexistence (NRC) where variability persists
in the population without either fixating or disappearing.
| [
{
"created": "Tue, 4 May 2021 11:34:27 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Mar 2022 15:06:39 GMT",
"version": "v2"
},
{
"created": "Thu, 10 Mar 2022 09:26:29 GMT",
"version": "v3"
},
{
"created": "Fri, 30 Sep 2022 14:10:34 GMT",
"version": "v4"
},
{
"created": "Tue, 13 Dec 2022 11:41:41 GMT",
"version": "v5"
},
{
"created": "Fri, 3 Feb 2023 17:15:54 GMT",
"version": "v6"
}
] | 2023-04-07 | [
[
"Dichio",
"Vito",
""
],
[
"Zeng",
"Hong-Li",
""
],
[
"Aurell",
"Erik",
""
]
] | This review is about statistical genetics, an interdisciplinary topic between statistical physics and population biology. The focus is on the phase of quasi-linkage equilibrium (QLE). Our goals here are to clarify under which conditions the QLE phase can be expected to hold in population biology and how the stability of the QLE phase is lost. The QLE state, which has many similarities to a thermal equilibrium state in statistical mechanics, was discovered by M Kimura for a two-locus two-allele model, and was extended and generalized to the global genome scale by (Neher and Shraiman, 2011). What we will refer to as the Kimura-Neher-Shraiman (KNS) theory describes a population evolving due to the mutations, recombination, natural selection and possibly genetic drift. A QLE phase exists at sufficiently high recombination rate and/or mutation rates with respect to selection strength. We show how in QLE it is possible to infer the epistatic parameters of the fitness function from the knowledge of the (dynamical) distribution of genotypes in a population. We further consider the breakdown of the QLE regime for high enough selection strength. We review recent results for the selection-mutation and selection-recombination dynamics. Finally, we identify and characterize a new phase which we call the non-random coexistence (NRC) where variability persists in the population without either fixating or disappearing. |
1104.4524 | Jinzhi Lei JL | Jinzhi Lei | Stochastic Modeling in Systems Biology | 25 pages, 4 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many cellular behaviors are regulated by gene regulation networks, kinetics
of which is one of the main subjects in the study of systems biology. Because
of the low number of molecules in these reacting systems, stochastic effects are
significant. In recent years, stochasticity in modeling the kinetics of gene
regulation networks has been drawing the attention of many researchers. This
paper is a self-contained review that aims to provide an overview of stochastic
modeling. I will introduce the derivation of the main equations for modeling
biochemical systems with intrinsic noise (chemical master equation, Fokker-Planck
equation, reaction rate equation, chemical Langevin equation), and will discuss
the relations between these formulations. The mathematical formulations for
systems with fluctuations in kinetic parameters are also discussed. Finally, I
will introduce the exact stochastic simulation algorithm and the approximate
explicit tau-leaping method for making numerical simulations.
| [
{
"created": "Sat, 23 Apr 2011 01:41:18 GMT",
"version": "v1"
}
] | 2011-04-26 | [
[
"Lei",
"Jinzhi",
""
]
] | Many cellular behaviors are regulated by gene regulation networks, kinetics of which is one of the main subjects in the study of systems biology. Because of the low number of molecules in these reacting systems, stochastic effects are significant. In recent years, stochasticity in modeling the kinetics of gene regulation networks has been drawing the attention of many researchers. This paper is a self-contained review that aims to provide an overview of stochastic modeling. I will introduce the derivation of the main equations for modeling biochemical systems with intrinsic noise (chemical master equation, Fokker-Planck equation, reaction rate equation, chemical Langevin equation), and will discuss the relations between these formulations. The mathematical formulations for systems with fluctuations in kinetic parameters are also discussed. Finally, I will introduce the exact stochastic simulation algorithm and the approximate explicit tau-leaping method for making numerical simulations. |
1510.04107 | F. Cecconi | Patrizio Ansalone and Mauro Chinappi and Lamberto Rondoni and Fabio
Cecconi | Driven diffusion against electrostatic or effective energy barrier
across Alpha-Hemolysin | RevTeX 4-1, 11 pages, 6 pdf figures, J. Chem. Phys. 2015 in press | null | null | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze the translocation of a charged particle across an Alpha-Hemolysin
(aHL) pore in the framework of a driven diffusion over an extended energy
barrier generated by the electrical charges of the aHL. A one-dimensional
electrostatic potential is extracted from the full 3D solution of Poisson's
equation. We characterize the particle transport under the action of a constant
forcing by studying the statistics of the translocation time. We derive an
analytical expression for the average translocation time that compares well with the
results from Brownian dynamic simulations of driven particles over the
electrostatic potential. Moreover, we show that the translocation time
distributions can be perfectly described by a simple theory which replaces the
true barrier by an equivalent structureless square barrier. Remarkably, our
approach maintains its accuracy also in low applied-voltage regimes where the
usual inverse-Gaussian approximation fails. Finally, we discuss how the
comparison between the simulated time distributions and their theoretical
prediction is greatly simplified by using the empirical Laplace transform
technique.
| [
{
"created": "Wed, 14 Oct 2015 14:14:59 GMT",
"version": "v1"
}
] | 2015-10-15 | [
[
"Ansalone",
"Patrizio",
""
],
[
"Chinappi",
"Mauro",
""
],
[
"Rondoni",
"Lamberto",
""
],
[
"Cecconi",
"Fabio",
""
]
] | We analyze the translocation of a charged particle across an Alpha-Hemolysin (aHL) pore in the framework of a driven diffusion over an extended energy barrier generated by the electrical charges of the aHL. A one-dimensional electrostatic potential is extracted from the full 3D solution of Poisson's equation. We characterize the particle transport under the action of a constant forcing by studying the statistics of the translocation time. We derive an analytical expression for the average translocation time that compares well with the results from Brownian dynamic simulations of driven particles over the electrostatic potential. Moreover, we show that the translocation time distributions can be perfectly described by a simple theory which replaces the true barrier by an equivalent structureless square barrier. Remarkably, our approach maintains its accuracy also in low applied-voltage regimes where the usual inverse-Gaussian approximation fails. Finally, we discuss how the comparison between the simulated time distributions and their theoretical prediction is greatly simplified by using the empirical Laplace transform technique. |
1206.0094 | Peter Csermely | David M. Gyurko, Csaba Soti, Attila Stetak and Peter Csermely | System level mechanisms of adaptation, learning, memory formation and
evolvability: the role of chaperone and other networks | 19 pages, 2 Figures, 1 Table, 173 references | Current Protein and Peptide Science (2014) 15: 171-188 | 10.2174/1389203715666140331110522 | null | q-bio.MN physics.bio-ph q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During the last decade, network approaches became a powerful tool to describe
protein structure and dynamics. Here, we first describe the protein structure
networks of molecular chaperones, then characterize chaperone-containing
sub-networks of interactomes, known as chaperone-networks or chaperomes. We
review the role of molecular chaperones in short-term adaptation of cellular
networks in response to stress, and in long-term adaptation discussing their
putative functions in the regulation of evolvability. We provide a general
overview of possible network mechanisms of adaptation, learning and memory
formation. We propose that changes of network rigidity play a key role in
learning and memory formation processes. Flexible network topology provides a
"learning competent" state. Here, networks may have much less modular
boundaries than locally rigid, highly modular networks, where the learnt
information has already been consolidated in a memory formation process. Since
modular boundaries are efficient filters of information, in the "learning
competent" state information filtering may be much smaller than after memory
formation. This mechanism restricts high information transfer to the "learning
competent" state. After memory formation, modular boundary-induced segregation
and information filtering protect the stored information. The flexible networks
of young organisms are generally in a "learning competent" state. On the
contrary, locally rigid networks of old organisms have lost their "learning
competent" state, but store and protect their learnt information efficiently.
We anticipate that the above mechanism may operate at the level of both
protein-protein interaction and neuronal networks.
| [
{
"created": "Fri, 1 Jun 2012 06:34:03 GMT",
"version": "v1"
},
{
"created": "Sun, 9 Jun 2013 15:16:38 GMT",
"version": "v2"
},
{
"created": "Fri, 25 Apr 2014 10:32:43 GMT",
"version": "v3"
}
] | 2014-04-28 | [
[
"Gyurko",
"David M.",
""
],
[
"Soti",
"Csaba",
""
],
[
"Stetak",
"Attila",
""
],
[
"Csermely",
"Peter",
""
]
] | During the last decade, network approaches became a powerful tool to describe protein structure and dynamics. Here, we first describe the protein structure networks of molecular chaperones, then characterize chaperone-containing sub-networks of interactomes, known as chaperone-networks or chaperomes. We review the role of molecular chaperones in short-term adaptation of cellular networks in response to stress, and in long-term adaptation discussing their putative functions in the regulation of evolvability. We provide a general overview of possible network mechanisms of adaptation, learning and memory formation. We propose that changes of network rigidity play a key role in learning and memory formation processes. Flexible network topology provides a "learning competent" state. Here, networks may have much less modular boundaries than locally rigid, highly modular networks, where the learnt information has already been consolidated in a memory formation process. Since modular boundaries are efficient filters of information, in the "learning competent" state information filtering may be much smaller than after memory formation. This mechanism restricts high information transfer to the "learning competent" state. After memory formation, modular boundary-induced segregation and information filtering protect the stored information. The flexible networks of young organisms are generally in a "learning competent" state. On the contrary, locally rigid networks of old organisms have lost their "learning competent" state, but store and protect their learnt information efficiently. We anticipate that the above mechanism may operate at the level of both protein-protein interaction and neuronal networks. |
1901.02478 | Andrew Jaegle | Andrew Jaegle, Vahid Mehrpour, Nicole Rust | Visual novelty, curiosity, and intrinsic reward in machine learning and
the brain | 13 pages | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A strong preference for novelty emerges in infancy and is prevalent across
the animal kingdom. When incorporated into reinforcement-based machine learning
algorithms, visual novelty can act as an intrinsic reward signal that vastly
increases the efficiency of exploration and expedites learning, particularly in
situations where external rewards are difficult to obtain. Here we review
parallels between recent developments in novelty-driven machine learning
algorithms and our understanding of how visual novelty is computed and signaled
in the primate brain. We propose that in the visual system, novelty
representations are not configured with the principal goal of detecting novel
objects, but rather with the broader goal of flexibly generalizing novelty
information across different states in the service of driving novelty-based
learning.
| [
{
"created": "Tue, 8 Jan 2019 19:34:44 GMT",
"version": "v1"
}
] | 2019-01-10 | [
[
"Jaegle",
"Andrew",
""
],
[
"Mehrpour",
"Vahid",
""
],
[
"Rust",
"Nicole",
""
]
] | A strong preference for novelty emerges in infancy and is prevalent across the animal kingdom. When incorporated into reinforcement-based machine learning algorithms, visual novelty can act as an intrinsic reward signal that vastly increases the efficiency of exploration and expedites learning, particularly in situations where external rewards are difficult to obtain. Here we review parallels between recent developments in novelty-driven machine learning algorithms and our understanding of how visual novelty is computed and signaled in the primate brain. We propose that in the visual system, novelty representations are not configured with the principal goal of detecting novel objects, but rather with the broader goal of flexibly generalizing novelty information across different states in the service of driving novelty-based learning. |
1810.07263 | Jie Liang | Anna Terebus, Chun Liu, and Jie Liang | Discrete Flux and Velocity Fields of Probability and Their Global Maps
in Reaction Systems | 21 pages, 5 figures | null | 10.1063/1.5050808 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochasticity plays important roles in reaction systems. Vector fields of
probability flux and velocity characterize time-varying and steady-state
properties of these systems, including high probability paths, barriers,
checkpoints among different stable regions, as well as mechanisms of dynamic
switching among them. However, conventional fluxes on continuous space are
ill-defined and are problematic when at boundaries of the state space or when
copy numbers are small. By re-defining the derivative and divergence operators
based on the discrete nature of reactions, we introduce new formulations of
discrete fluxes. Our flux model fully accounts for the discreteness of both the
state space and the jump processes of reactions. The reactional discrete flux
satisfies the continuity equation and describes the behavior of the system
evolving along directions of reactions. The species discrete flux directly
describes the dynamic behavior in the state space of the reactants such as the
transfer of probability mass. With the relationship between these two fluxes
specified, we show how to construct time-evolving and steady-state global
flow-maps of probability flux and velocity in the directions of every species
at every microstate, and how they are related to the outflow and inflow of
probability fluxes when tracing out reaction trajectories. We also describe how
to impose proper conditions enabling exact quantification of flux and velocity
in the boundary regions, without the difficulty of enforcing artificial
reflecting conditions. We illustrate the computation of probability flux and
velocity using three model systems, namely, the birth-death process, the
bistable Schl\"ogl model, and the oscillating Schnakenberg model.
| [
{
"created": "Tue, 16 Oct 2018 20:38:59 GMT",
"version": "v1"
}
] | 2018-12-05 | [
[
"Terebus",
"Anna",
""
],
[
"Liu",
"Chun",
""
],
[
"Liang",
"Jie",
""
]
] | Stochasticity plays important roles in reaction systems. Vector fields of probability flux and velocity characterize time-varying and steady-state properties of these systems, including high probability paths, barriers, checkpoints among different stable regions, as well as mechanisms of dynamic switching among them. However, conventional fluxes on continuous space are ill-defined and are problematic when at boundaries of the state space or when copy numbers are small. By re-defining the derivative and divergence operators based on the discrete nature of reactions, we introduce new formulations of discrete fluxes. Our flux model fully accounts for the discreteness of both the state space and the jump processes of reactions. The reactional discrete flux satisfies the continuity equation and describes the behavior of the system evolving along directions of reactions. The species discrete flux directly describes the dynamic behavior in the state space of the reactants such as the transfer of probability mass. With the relationship between these two fluxes specified, we show how to construct time-evolving and steady-state global flow-maps of probability flux and velocity in the directions of every species at every microstate, and how they are related to the outflow and inflow of probability fluxes when tracing out reaction trajectories. We also describe how to impose proper conditions enabling exact quantification of flux and velocity in the boundary regions, without the difficulty of enforcing artificial reflecting conditions. We illustrate the computation of probability flux and velocity using three model systems, namely, the birth-death process, the bistable Schl\"ogl model, and the oscillating Schnakenberg model. |
0811.3510 | Noa Sela | Noa Sela, Britta Mersch, Nurit Gal-Mark, Galit Lev-Maor, Agnes Hotz-
Wagenblatt, Gil Ast | Comparative analysis of transposed element insertion within human and
mouse genomes reveals Alu's unique role in shaping the human transcriptome | null | Genome Biology 2007, 8:R127 | 10.1186/gb-2007-8-6-r127 | null | q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Transposed elements (TEs) have a substantial impact on mammalian
evolution and are involved in numerous genetic diseases. We compared the impact
of TEs on the human transcriptome and the mouse transcriptome. Results: We
compiled a dataset of all TEs in the human and mouse genomes, identifying
3,932,058 and 3,122,416 TEs, respectively. We then extracted TEs located within
human and mouse genes and, surprisingly, we found that 60% of TEs in both human
and mouse are located in intronic sequences, even though introns comprise only
24% of the human genome. All TE families in both human and mouse can exonize.
TE families that are shared between human and mouse exhibit the same percentage
of TE exonization in the two species, but the exonization level of Alu, a
primate-specific retroelement, is significantly greater than that of other TEs
within the human genome, leading to a higher level of TE exonization in human
than in mouse (1,824 exons compared with 506 exons, respectively). We detected
a primate-specific mechanism for intron gain, in which Alu insertion into an
exon creates a new intron located in the 3' untranslated region (termed
'intronization'). Finally, the insertion of TEs into the first and last exons
of a gene is more frequent in human than in mouse, leading to longer exons in
human. Conclusion: Our findings reveal many effects of TEs on these two
transcriptomes. These effects are substantially greater in human than in mouse,
which is due to the presence of Alu elements in human.
| [
{
"created": "Fri, 21 Nov 2008 10:51:29 GMT",
"version": "v1"
}
] | 2008-11-24 | [
[
"Sela",
"Noa",
""
],
[
"Mersch",
"Britta",
""
],
[
"Gal-Mark",
"Nurit",
""
],
[
"Lev-Maor",
"Galit",
""
],
[
"Wagenblatt",
"Agnes Hotz-",
""
],
[
"Ast",
"Gil",
""
]
] | Background: Transposed elements (TEs) have a substantial impact on mammalian evolution and are involved in numerous genetic diseases. We compared the impact of TEs on the human transcriptome and the mouse transcriptome. Results: We compiled a dataset of all TEs in the human and mouse genomes, identifying 3,932,058 and 3,122,416 TEs, respectively. We then extracted TEs located within human and mouse genes and, surprisingly, we found that 60% of TEs in both human and mouse are located in intronic sequences, even though introns comprise only 24% of the human genome. All TE families in both human and mouse can exonize. TE families that are shared between human and mouse exhibit the same percentage of TE exonization in the two species, but the exonization level of Alu, a primate-specific retroelement, is significantly greater than that of other TEs within the human genome, leading to a higher level of TE exonization in human than in mouse (1,824 exons compared with 506 exons, respectively). We detected a primate-specific mechanism for intron gain, in which Alu insertion into an exon creates a new intron located in the 3' untranslated region (termed 'intronization'). Finally, the insertion of TEs into the first and last exons of a gene is more frequent in human than in mouse, leading to longer exons in human. Conclusion: Our findings reveal many effects of TEs on these two transcriptomes. These effects are substantially greater in human than in mouse, which is due to the presence of Alu elements in human. |
1402.4896 | Daniel Balick | Ron Do, Daniel Balick, Heng Li, Ivan Adzhubei, Shamil Sunyaev and
David Reich | No evidence that natural selection has been less effective at removing
deleterious mutations in Europeans than in West Africans | 53 pages (22 page manuscript and 31 supplemental information) | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Non-African populations have experienced major bottlenecks in the time since
their split from West Africans, which has led to the hypothesis that natural
selection to remove weakly deleterious mutations may have been less effective
in non-Africans. To directly test this hypothesis, we measure the per-genome
accumulation of deleterious mutations across diverse humans. We fail to detect
any significant differences, but find that archaic Denisovans accumulated
non-synonymous mutations at a higher rate than modern humans, consistent with
the longer separation time of modern and archaic humans. We also revisit the
empirical patterns that have been interpreted as evidence for less effective
removal of deleterious mutations in non-Africans than in West Africans, and
show they are not driven by differences in selection after population
separation, but by neutral evolution.
| [
{
"created": "Thu, 20 Feb 2014 05:24:11 GMT",
"version": "v1"
}
] | 2014-02-21 | [
[
"Do",
"Ron",
""
],
[
"Balick",
"Daniel",
""
],
[
"Li",
"Heng",
""
],
[
"Adzhubei",
"Ivan",
""
],
[
"Sunyaev",
"Shamil",
""
],
[
"Reich",
"David",
""
]
] | Non-African populations have experienced major bottlenecks in the time since their split from West Africans, which has led to the hypothesis that natural selection to remove weakly deleterious mutations may have been less effective in non-Africans. To directly test this hypothesis, we measure the per-genome accumulation of deleterious mutations across diverse humans. We fail to detect any significant differences, but find that archaic Denisovans accumulated non-synonymous mutations at a higher rate than modern humans, consistent with the longer separation time of modern and archaic humans. We also revisit the empirical patterns that have been interpreted as evidence for less effective removal of deleterious mutations in non-Africans than in West Africans, and show they are not driven by differences in selection after population separation, but by neutral evolution. |
1303.0673 | James Degnan | Sha Zhu, James H Degnan, Bjarki Eldon | Hybrid-Lambda: simulation of multiple merger and Kingman gene
genealogies in species networks and species trees | 5 pages, 1 figure | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hybrid-Lambda is a software package that simulates gene trees under Kingman
or two Lambda-coalescent processes within species networks or species trees. It
is written in C++, and released under the GNU General Public License (GPL)
version 3. Users can modify and make new distributions under the terms of this
license. For details of this license, visit http://www.gnu.org/licenses/.
Hybrid-Lambda is available at https://code.google.com/p/hybrid-lambda.
| [
{
"created": "Mon, 4 Mar 2013 11:20:37 GMT",
"version": "v1"
}
] | 2013-03-05 | [
[
"Zhu",
"Sha",
""
],
[
"Degnan",
"James H",
""
],
[
"Eldon",
"Bjarki",
""
]
] | Hybrid-Lambda is a software package that simulates gene trees under Kingman or two Lambda-coalescent processes within species networks or species trees. It is written in C++, and released under the GNU General Public License (GPL) version 3. Users can modify and make new distributions under the terms of this license. For details of this license, visit http://www.gnu.org/licenses/. Hybrid-Lambda is available at https://code.google.com/p/hybrid-lambda. |
1706.00603 | Ahmad Mheich | Ahmad Mheich, Mahmoud Hassan, Fabrice Wendling | Classification of meaningful and meaningless visual objects: a graph
similarity approach | 4 pages, 2 figures, ICABME 2017 conference | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Cognition involves dynamic reconfiguration of functional brain networks at
sub-second time scale. A precise tracking of these reconfigurations to
categorize visual objects remains elusive. Here, we use dense
electroencephalography (EEG) data recorded during naming meaningful (tools,
animals) and scrambled objects from 20 healthy subjects. We combine a technique
for identifying functional brain networks and a recently developed algorithm for
estimating network similarity to discriminate between the two categories.
First, we showed that dynamic networks of both categories can be segmented into
several brain network states (time windows with consistent brain networks)
reflecting sequential information processing from object representation to
reaction time. Second, using a network similarity algorithm, results showed
high intra-category and very low inter-category values. An average accuracy of
76% was obtained at different brain network states.
| [
{
"created": "Fri, 2 Jun 2017 09:25:28 GMT",
"version": "v1"
}
] | 2017-06-05 | [
[
"Mheich",
"Ahmad",
""
],
[
"Hassan",
"Mahmoud",
""
],
[
"Wendling",
"Fabrice",
""
]
] | Cognition involves dynamic reconfiguration of functional brain networks at sub-second time scale. A precise tracking of these reconfigurations to categorize visual objects remains elusive. Here, we use dense electroencephalography (EEG) data recorded during naming meaningful (tools, animals) and scrambled objects from 20 healthy subjects. We combine a technique for identifying functional brain networks and a recently developed algorithm for estimating network similarity to discriminate between the two categories. First, we showed that dynamic networks of both categories can be segmented into several brain network states (time windows with consistent brain networks) reflecting sequential information processing from object representation to reaction time. Second, using a network similarity algorithm, results showed high intra-category and very low inter-category values. An average accuracy of 76% was obtained at different brain network states. |
1711.07164 | Lucas Valdez D. | L. D. Valdez, G. J. Sibona, C. A. Condat | Impact of rainfall on Aedes aegypti populations | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aedes aegypti is the main vector of multiple diseases, such as Dengue, Zika,
and Chikungunya. Due to modifications in weather patterns, its geographical
range is continuously evolving. Temperature is a key factor for its expansion
into regions with cool winters, but rainfall can also have a strong impact on
the colonization of these regions, since larvae emerging after a rainfall are
likely to die at temperatures below $10^{\circ}$C. As climate change is
expected to affect rainfall regimes, with a higher frequency of heavy storms
and an increase in drought-affected areas, it is important to understand how
different rainfall scenarios may shape Ae. aegypti's range. We develop a model
for the population dynamics of Ae. aegypti, coupled with a rainfall model to
study the effect of the temporal distribution of rainfall on mosquito
abundance. Using a fracturing process, we then investigate the effect of a
higher variability in the daily rainfall. As an example, we show that rainfall
distribution is necessary to explain the geographic range of Ae. aegypti in
Taiwan, an island characterized by rainy winters in the north and dry winters
in the south. We also predict that a higher variability in the rainfall time
distribution will decrease the maximum abundance of Ae. aegypti during the
summer. An increase in daily rainfall variability will likewise enhance its
extinction probability. Finally, we obtain a nonlinear relationship between dry
season duration and extinction probability. These findings can have a
significant impact on our ability to predict disease outbreaks.
| [
{
"created": "Mon, 20 Nov 2017 06:04:07 GMT",
"version": "v1"
}
] | 2017-11-21 | [
[
"Valdez",
"L. D.",
""
],
[
"Sibona",
"G. J.",
""
],
[
"Condat",
"C. A.",
""
]
] | Aedes aegypti is the main vector of multiple diseases, such as Dengue, Zika, and Chikungunya. Due to modifications in weather patterns, its geographical range is continuously evolving. Temperature is a key factor for its expansion into regions with cool winters, but rainfall can also have a strong impact on the colonization of these regions, since larvae emerging after a rainfall are likely to die at temperatures below $10^{\circ}$C. As climate change is expected to affect rainfall regimes, with a higher frequency of heavy storms and an increase in drought-affected areas, it is important to understand how different rainfall scenarios may shape Ae. aegypti's range. We develop a model for the population dynamics of Ae. aegypti, coupled with a rainfall model to study the effect of the temporal distribution of rainfall on mosquito abundance. Using a fracturing process, we then investigate the effect of a higher variability in the daily rainfall. As an example, we show that rainfall distribution is necessary to explain the geographic range of Ae. aegypti in Taiwan, an island characterized by rainy winters in the north and dry winters in the south. We also predict that a higher variability in the rainfall time distribution will decrease the maximum abundance of Ae. aegypti during the summer. An increase in daily rainfall variability will likewise enhance its extinction probability. Finally, we obtain a nonlinear relationship between dry season duration and extinction probability. These findings can have a significant impact on our ability to predict disease outbreaks. |
1508.05991 | Tatiana Tatarinova | Tatiana V. Tatarinova, Inna Lysnyansky, Yuri V. Nikolsky, and
Alexander Bolshoy | The mysterious orphans of Mycoplasmataceae | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: The length of a protein sequence is largely determined by its
function, i.e. each functional group is associated with an optimal size.
However, comparative genomics revealed that protein length may be affected by
additional factors. In 2002 it was shown that in the bacterium Escherichia coli and
the archaeon Archaeoglobus fulgidus, protein sequences with no homologs are, on
average, shorter than those with homologs. Most experts now agree that the
length distributions are distinctly different between protein sequences with
and without homologs in bacterial and archaeal genomes. In this study, we
examine this postulate through a comprehensive analysis of all annotated prokaryotic
genomes, focusing on certain exceptions.
Results: We compared the length distributions of homolog-having proteins (HHPs)
and proteins lacking homologs (orphans or ORFans) in all currently annotated
completely sequenced prokaryotic genomes. As expected, the HHPs and ORFans have
strikingly different length distributions in almost all genomes. As previously
established, the HHPs, indeed, are, on average, longer than the ORFans, and the
length distributions for the ORFans have a relatively narrow peak, in contrast
to the HHPs, whose lengths spread over a wider range of values. However, about
thirty genomes do not obey these rules. Practically all genomes of Mycoplasma
and Ureaplasma have atypical ORFan distributions, with the mean length of
ORFans larger than the mean length of HHPs. These genera constitute over 80% of
atypical genomes.
Conclusions: We confirmed on a ubiquitous set of genomes the previous
observation that HHPs and ORFans have different gene length distributions. We
also showed that Mycoplasmataceae genomes have distinctive distributions of
ORFan lengths. We offer several possible biological explanations of this
phenomenon.
| [
{
"created": "Mon, 24 Aug 2015 22:50:42 GMT",
"version": "v1"
}
] | 2015-08-26 | [
[
"Tatarinova",
"Tatiana V.",
""
],
[
"Lysnyansky",
"Inna",
""
],
[
"Nikolsky",
"Yuri V.",
""
],
[
"Bolshoy",
"Alexander",
""
]
] | Background: The length of a protein sequence is largely determined by its function, i.e. each functional group is associated with an optimal size. However, comparative genomics revealed that protein length may be affected by additional factors. In 2002 it was shown that in the bacterium Escherichia coli and the archaeon Archaeoglobus fulgidus, protein sequences with no homologs are, on average, shorter than those with homologs. Most experts now agree that the length distributions are distinctly different between protein sequences with and without homologs in bacterial and archaeal genomes. In this study, we examine this postulate through a comprehensive analysis of all annotated prokaryotic genomes, focusing on certain exceptions. Results: We compared the length distributions of homolog-having proteins (HHPs) and proteins lacking homologs (orphans or ORFans) in all currently annotated completely sequenced prokaryotic genomes. As expected, the HHPs and ORFans have strikingly different length distributions in almost all genomes. As previously established, the HHPs, indeed, are, on average, longer than the ORFans, and the length distributions for the ORFans have a relatively narrow peak, in contrast to the HHPs, whose lengths spread over a wider range of values. However, about thirty genomes do not obey these rules. Practically all genomes of Mycoplasma and Ureaplasma have atypical ORFan distributions, with the mean length of ORFans larger than the mean length of HHPs. These genera constitute over 80% of atypical genomes. Conclusions: We confirmed on a ubiquitous set of genomes the previous observation that HHPs and ORFans have different gene length distributions. We also showed that Mycoplasmataceae genomes have distinctive distributions of ORFan lengths. We offer several possible biological explanations of this phenomenon. |
q-bio/0403016 | Ping Ao | X.-M. Zhu, L. Yin, L. Hood, and P. Ao | Robustness, Stability and Efficiency of Phage lambda Gene Regulatory
Network: Dynamical Structure Analysis | 31 pages | null | null | null | q-bio.MN cond-mat.stat-mech nlin.AO | null | Based on our physical and biological studies we have recently developed a
mathematical framework for the analysis of nonlinear dynamics. We call this
framework the dynamical structure analysis. It has four dynamical elements:
potential landscape, transverse matrix, descendant matrix, and stochastic
drive. In particular, the importance and the existence of the potential
landscape are emphasized.
The dynamical structure analysis is illustrated in detail by the study of
stability, robustness, and efficiency of the simplest gene regulatory network
of phage lambda.
| [
{
"created": "Mon, 15 Mar 2004 03:19:12 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Zhu",
"X. -M.",
""
],
[
"Yin",
"L.",
""
],
[
"Hood",
"L.",
""
],
[
"Ao",
"P.",
""
]
] | Based on our physical and biological studies we have recently developed a mathematical framework for the analysis of nonlinear dynamics. We call this framework the dynamical structure analysis. It has four dynamical elements: potential landscape, transverse matrix, descendant matrix, and stochastic drive. In particular, the importance and the existence of the potential landscape are emphasized. The dynamical structure analysis is illustrated in detail by the study of stability, robustness, and efficiency of the simplest gene regulatory network of phage lambda. |
2211.02358 | Alison Hale Ph.D. | Alison C Hale and Christopher P Jewell | An approach for benchmarking the numerical solutions of stochastic
compartmental models | 21 pages 3 figures | null | null | null | q-bio.PE stat.AP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | An approach is introduced for comparing the estimated states of stochastic
compartmental models for an epidemic or biological process with analytically
obtained solutions from the corresponding system of ordinary differential
equations (ODEs). Positive integer valued samples from a stochastic model are
generated numerically at discrete time intervals using either the Reed-Frost
chain Binomial or Gillespie algorithm. The simulated distribution of
realisations is compared with an exact solution obtained analytically from the
ODE model. Using this novel methodology this work demonstrates it is feasible
to check that the realisations from the stochastic compartmental model adhere
to the ODE model they represent. There is no requirement for the model to be in
any particular state or limit. These techniques are developed using the
stochastic compartmental model for a susceptible-infected-recovered (SIR)
epidemic process. The Lotka-Volterra model is then used as an example of the
generality of the principles developed here. This approach presents a way of
testing/benchmarking the numerical solutions of stochastic compartmental
models, e.g. using unit tests, to check that the computer code along with its
corresponding algorithm adheres to the underlying ODE model.
| [
{
"created": "Fri, 4 Nov 2022 10:32:48 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Jun 2023 10:14:49 GMT",
"version": "v2"
}
] | 2023-06-30 | [
[
"Hale",
"Alison C",
""
],
[
"Jewell",
"Christopher P",
""
]
] | An approach is introduced for comparing the estimated states of stochastic compartmental models for an epidemic or biological process with analytically obtained solutions from the corresponding system of ordinary differential equations (ODEs). Positive integer valued samples from a stochastic model are generated numerically at discrete time intervals using either the Reed-Frost chain Binomial or Gillespie algorithm. The simulated distribution of realisations is compared with an exact solution obtained analytically from the ODE model. Using this novel methodology this work demonstrates it is feasible to check that the realisations from the stochastic compartmental model adhere to the ODE model they represent. There is no requirement for the model to be in any particular state or limit. These techniques are developed using the stochastic compartmental model for a susceptible-infected-recovered (SIR) epidemic process. The Lotka-Volterra model is then used as an example of the generality of the principles developed here. This approach presents a way of testing/benchmarking the numerical solutions of stochastic compartmental models, e.g. using unit tests, to check that the computer code along with its corresponding algorithm adheres to the underlying ODE model. |
1206.1571 | Ramon Grima | R. Grima, D. R. Schmidt, T. J. Newman | Steady-state fluctuations of a genetic feedback loop: an exact solution | 31 pages, 3 figures. Accepted for publication in the Journal of
Chemical Physics (2012) | null | 10.1063/1.4736721 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetic feedback loops in cells break detailed balance and involve
bimolecular reactions; hence exact solutions revealing the nature of the
stochastic fluctuations in these loops are lacking. We here consider the master
equation for a gene regulatory feedback loop: a gene produces protein which
then binds to the promoter of the same gene and regulates its expression. The
protein degrades in its free and bound forms. This network breaks detailed
balance and involves a single bimolecular reaction step. We provide an exact
solution of the steady-state master equation for arbitrary values of the
parameters, and present simplified solutions for a number of special cases. The
full parametric dependence of the analytical non-equilibrium steady-state
probability distribution is verified by direct numerical solution of the master
equations. For the case where the degradation rate of bound and free protein is
the same, our solution is at variance with a previous claim of an exact
solution (Hornos et al, Phys. Rev. E {\bf 72}, 051907 (2005) and subsequent
studies). We show explicitly that this is due to an unphysical formulation of
the underlying master equation in those studies.
| [
{
"created": "Thu, 7 Jun 2012 18:32:20 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Jun 2012 21:19:48 GMT",
"version": "v2"
}
] | 2015-06-05 | [
[
"Grima",
"R.",
""
],
[
"Schmidt",
"D. R.",
""
],
[
"Newman",
"T. J.",
""
]
] | Genetic feedback loops in cells break detailed balance and involve bimolecular reactions; hence exact solutions revealing the nature of the stochastic fluctuations in these loops are lacking. We here consider the master equation for a gene regulatory feedback loop: a gene produces protein which then binds to the promoter of the same gene and regulates its expression. The protein degrades in its free and bound forms. This network breaks detailed balance and involves a single bimolecular reaction step. We provide an exact solution of the steady-state master equation for arbitrary values of the parameters, and present simplified solutions for a number of special cases. The full parametric dependence of the analytical non-equilibrium steady-state probability distribution is verified by direct numerical solution of the master equations. For the case where the degradation rate of bound and free protein is the same, our solution is at variance with a previous claim of an exact solution (Hornos et al, Phys. Rev. E {\bf 72}, 051907 (2005) and subsequent studies). We show explicitly that this is due to an unphysical formulation of the underlying master equation in those studies. |
1304.2054 | Mircea Andrecut Dr | M. Andrecut | Monte-Carlo Simulation of a Multi-Dimensional Switch-Like Model of Stem
Cell Differentiation | 16 pages, 4 figures | Theory and Applications of Monte Carlo Simulations, ed. Charles J.
Mode, Intech, 2011, ISBN: 978-953-307-427-6 | 10.5772/15474 | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The process controlling the differentiation of stem, or progenitor, cells into
one specific functional direction is called lineage specification. An important
characteristic of this process is the multi-lineage priming, which requires the
simultaneous expression of lineage-specific genes. Prior to commitment to a
certain lineage, it has been observed that these genes exhibit intermediate
values of their expression levels. Multi-lineage differentiation has been
reported for various progenitor cells, and it has been explained through the
bifurcation of a metastable state. During the differentiation process the
dynamics of the core regulatory network follows a bifurcation, where the
metastable state, corresponding to the progenitor cell, is destabilized and the
system is forced to choose between the possible developmental alternatives.
While this approach gives a reasonable interpretation of the cell fate decision
process, it fails to explain the multi-lineage priming characteristic. Here, we
describe a new multi-dimensional switch-like model that captures both the
process of cell fate decision and the phenomenon of multi-lineage priming. We
show that in the symmetrical interaction case, the system exhibits a new type
of degenerate bifurcation, characterized by a critical hyperplane, containing
an infinite number of critical steady states. This critical hyperplane may be
interpreted as the support for the multi-lineage priming states of the
progenitor. Also, the cell fate decision (the multi-stability and switching
behavior) can be explained by a symmetry breaking in the parameter space of
this critical hyperplane. These analytical results are confirmed by Monte-Carlo
simulations of the corresponding chemical master equations.
| [
{
"created": "Sun, 7 Apr 2013 20:06:25 GMT",
"version": "v1"
}
] | 2013-04-09 | [
[
"Andrecut",
"M.",
""
]
] | The process controlling the differentiation of stem, or progenitor, cells into one specific functional direction is called lineage specification. An important characteristic of this process is the multi-lineage priming, which requires the simultaneous expression of lineage-specific genes. Prior to commitment to a certain lineage, it has been observed that these genes exhibit intermediate values of their expression levels. Multi-lineage differentiation has been reported for various progenitor cells, and it has been explained through the bifurcation of a metastable state. During the differentiation process the dynamics of the core regulatory network follows a bifurcation, where the metastable state, corresponding to the progenitor cell, is destabilized and the system is forced to choose between the possible developmental alternatives. While this approach gives a reasonable interpretation of the cell fate decision process, it fails to explain the multi-lineage priming characteristic. Here, we describe a new multi-dimensional switch-like model that captures both the process of cell fate decision and the phenomenon of multi-lineage priming. We show that in the symmetrical interaction case, the system exhibits a new type of degenerate bifurcation, characterized by a critical hyperplane, containing an infinite number of critical steady states. This critical hyperplane may be interpreted as the support for the multi-lineage priming states of the progenitor. Also, the cell fate decision (the multi-stability and switching behavior) can be explained by a symmetry breaking in the parameter space of this critical hyperplane. These analytical results are confirmed by Monte-Carlo simulations of the corresponding chemical master equations. |
2303.00068 | Kirsty Y. Wan | Karen Grace Bondoc-Naumovitz, Hannah Laeverenz-Schlogelhofer, Rebecca
N. Poon, Alexander K. Boggon, Samuel A. Bentley, Dario Cortese, Kirsty Y. Wan | Methods and measures for investigating microscale motility | 24 pages, 2 figures | null | 10.1093/icb/icad075 | null | q-bio.CB physics.bio-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Motility is an essential factor for an organism's survival and
diversification. With the advent of novel single-cell technologies, analytical
frameworks and theoretical methods, we can begin to probe the complex lives of
microscopic motile organisms and answer the intertwining biological and
physical questions of how these diverse lifeforms navigate their surroundings.
Herein, we give an overview of different experimental, analytical, and
mathematical methods used to study a suite of microscale motility mechanisms
across different scales encompassing molecular-, individual- to
population-level. We identify transferable techniques, pressing challenges, and
future directions in the field. This review can serve as a starting point for
researchers who are interested in exploring and quantifying the movements of
organisms in the microscale world.
| [
{
"created": "Tue, 28 Feb 2023 20:11:16 GMT",
"version": "v1"
}
] | 2023-09-01 | [
[
"Bondoc-Naumovitz",
"Karen Grace",
""
],
[
"Laeverenz-Schlogelhofer",
"Hannah",
""
],
[
"Poon",
"Rebecca N.",
""
],
[
"Boggon",
"Alexander K.",
""
],
[
"Bentley",
"Samuel A.",
""
],
[
"Cortese",
"Dario",
""
],
[
"Wan",
"Kirsty Y.",
""
]
] | Motility is an essential factor for an organism's survival and diversification. With the advent of novel single-cell technologies, analytical frameworks and theoretical methods, we can begin to probe the complex lives of microscopic motile organisms and answer the intertwining biological and physical questions of how these diverse lifeforms navigate their surroundings. Herein, we give an overview of different experimental, analytical, and mathematical methods used to study a suite of microscale motility mechanisms across different scales encompassing molecular-, individual- to population-level. We identify transferable techniques, pressing challenges, and future directions in the field. This review can serve as a starting point for researchers who are interested in exploring and quantifying the movements of organisms in the microscale world. |
2210.16099 | Elvin Lo | Elvin Lo and Pin-Yu Chen | An Empirical Evaluation of Zeroth-Order Optimization Methods on
AI-driven Molecule Optimization | 15 pages, 4 figures | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Molecule optimization is an important problem in chemical discovery and has
been approached using many techniques, including generative modeling,
reinforcement learning, genetic algorithms, and much more. Recent work has also
applied zeroth-order (ZO) optimization, a subset of gradient-free optimization
that solves problems similarly to gradient-based methods, for optimizing latent
vector representations from an autoencoder. In this paper, we study the
effectiveness of various ZO optimization methods for optimizing molecular
objectives, which are characterized by variable smoothness, infrequent optima,
and other challenges. We provide insights on the robustness of various ZO
optimizers in this setting, show the advantages of ZO sign-based gradient
descent (ZO-signGD), discuss how ZO optimization can be used practically in
realistic discovery tasks, and demonstrate the potential effectiveness of ZO
optimization methods on widely used benchmark tasks from the Guacamol suite.
Code is available at: https://github.com/IBM/QMO-bench.
| [
{
"created": "Thu, 27 Oct 2022 01:58:10 GMT",
"version": "v1"
}
] | 2022-10-31 | [
[
"Lo",
"Elvin",
""
],
[
"Chen",
"Pin-Yu",
""
]
] | Molecule optimization is an important problem in chemical discovery and has been approached using many techniques, including generative modeling, reinforcement learning, genetic algorithms, and much more. Recent work has also applied zeroth-order (ZO) optimization, a subset of gradient-free optimization that solves problems similarly to gradient-based methods, for optimizing latent vector representations from an autoencoder. In this paper, we study the effectiveness of various ZO optimization methods for optimizing molecular objectives, which are characterized by variable smoothness, infrequent optima, and other challenges. We provide insights on the robustness of various ZO optimizers in this setting, show the advantages of ZO sign-based gradient descent (ZO-signGD), discuss how ZO optimization can be used practically in realistic discovery tasks, and demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite. Code is available at: https://github.com/IBM/QMO-bench. |
1305.1231 | Susan Khor | Susan Khor | Optimality of Moore neighborhoods in protein contact maps | null | null | null | null | q-bio.MN q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A protein contact map is a binary symmetric adjacency matrix capturing the
distance relationship between atoms of a protein. Each cell (i, j) of a protein
contact map states whether the atoms (nodes) i and j are within some Euclidean
distance from each other. We examined the radius one Moore neighborhood
surrounding each cell (i, j) where j > (i + 2) in complete protein contact maps
by mutating them one at a time. We found that the particular configuration of a
neighborhood is generally (97%) optimal in the sense that no other
configuration could maintain or improve upon existing local and global
efficiencies of the nodes residing in a neighborhood. Local efficiency of a
node is directly related to its clustering measure. Global efficiency of a node
is inversely related to its distance to other nodes in the network. This
feature of the Moore neighborhood in complete protein contact maps may explain
how protein residue networks are able to form long-range links to reduce
average path length while maintaining a high level of clustering throughout the
process of small-world formation, and could suggest new approaches to protein
contact map prediction. Effectively, the problem of protein contact map
prediction is transformed to one of maximizing the number of optimal
neighborhoods. By comparison, Moore neighborhoods in protein contact maps with
randomized long-range links are less optimal.
| [
{
"created": "Mon, 6 May 2013 16:00:41 GMT",
"version": "v1"
}
] | 2013-05-07 | [
[
"Khor",
"Susan",
""
]
] | A protein contact map is a binary symmetric adjacency matrix capturing the distance relationship between atoms of a protein. Each cell (i, j) of a protein contact map states whether the atoms (nodes) i and j are within some Euclidean distance from each other. We examined the radius one Moore neighborhood surrounding each cell (i, j) where j > (i + 2) in complete protein contact maps by mutating them one at a time. We found that the particular configuration of a neighborhood is generally (97%) optimal in the sense that no other configuration could maintain or improve upon existing local and global efficiencies of the nodes residing in a neighborhood. Local efficiency of a node is directly related to its clustering measure. Global efficiency of a node is inversely related to its distance to other nodes in the network. This feature of the Moore neighborhood in complete protein contact maps may explain how protein residue networks are able to form long-range links to reduce average path length while maintaining a high level of clustering throughout the process of small-world formation, and could suggest new approaches to protein contact map prediction. Effectively, the problem of protein contact map prediction is transformed to one of maximizing the number of optimal neighborhoods. By comparison, Moore neighborhoods in protein contact maps with randomized long-range links are less optimal. |
1505.01143 | Vicente M. Reyes Ph.D. | Vicente M. Reyes | Structure-Based Function Prediction of Functionally Unannotated
Structures in the PDB: Prediction of ATP, GTP, Sialic Acid, Retinoic Acid and
Heme-bound and -Unbound (Free) Nitric Oxide Protein Binding Sites | 33 pages total (12 pages text; 21 pages figures and tables); 2
figures; 6 tables (all multi-panel); 7200 words in text; 7274 words incl. in
figures and tables | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Due to increased activity in high-throughput structural genomics efforts
around the globe, there has been an accumulation of experimental protein 3D
structures lacking functional annotation, thus creating a need for
structure-based protein function assignment methods. Computational prediction
of ligand binding sites (LBS) is a well-established protein function assignment
method. Here we apply the specific LBS detection algorithm we recently
described (Reyes, V.M. & Sheth, V.N., 2011; Reyes, V.M., 2015a) to some 801
functionally unannotated experimental structures in the Protein Data Bank by
screening for the binding sites (BS) of 6 biologically important ligands: GTP
in small Ras-type G-proteins, ATP in ser/thr protein kinases, sialic acid
(SIA), retinoic acid (REA), and heme-bound and unbound (free) nitric oxide
(hNO, fNO). Validation of the algorithm for the GTP- and ATP-binding sites has
been previously described in detail (ibid.); here, validation for the BSs of
the 4 other ligands shows both good specificity and sensitivity. Of the 801
structures screened, 8 tested positive for GTP binding, 61 for ATP binding, 35
for SIA binding, 132 for REA binding, 33 for hNO binding, and 10 for fNO
binding. Using the cutting plane and tangent sphere methods we described
previously, (Reyes, V.M., 2015b), we also determined the depth of burial of the
LBSs detected above and compared the values with those from the respective
training structures, and the degree of similarity between the two values taken
as a further validation of the predicted LBSs. Applying this criterion, we were
able to narrow down the predicted GTP-binding proteins to 2, the ATP-binding
proteins to 13, the SIA-binding proteins to 2, the REA-binding proteins to 14,
the hNO-binding proteins to 4, and the fNO-binding proteins to 1. We believe
this further criterion increases the confidence level of our LBS predictions.
| [
{
"created": "Sat, 28 Feb 2015 04:17:34 GMT",
"version": "v1"
}
] | 2015-05-06 | [
[
"Reyes",
"Vicente M.",
""
]
] | Due to increased activity in high-throughput structural genomics efforts around the globe, there has been an accumulation of experimental protein 3D structures lacking functional annotation, thus creating a need for structure-based protein function assignment methods. Computational prediction of ligand binding sites (LBS) is a well-established protein function assignment method. Here we apply the specific LBS detection algorithm we recently described (Reyes, V.M. & Sheth, V.N., 2011; Reyes, V.M., 2015a) to some 801 functionally unannotated experimental structures in the Protein Data Bank by screening for the binding sites (BS) of 6 biologically important ligands: GTP in small Ras-type G-proteins, ATP in ser/thr protein kinases, sialic acid (SIA), retinoic acid (REA), and heme-bound and unbound (free) nitric oxide (hNO, fNO). Validation of the algorithm for the GTP- and ATP-binding sites has been previously described in detail (ibid.); here, validation for the BSs of the 4 other ligands shows both good specificity and sensitivity. Of the 801 structures screened, 8 tested positive for GTP binding, 61 for ATP binding, 35 for SIA binding, 132 for REA binding, 33 for hNO binding, and 10 for fNO binding. Using the cutting plane and tangent sphere methods we described previously, (Reyes, V.M., 2015b), we also determined the depth of burial of the LBSs detected above and compared the values with those from the respective training structures, and the degree of similarity between the two values taken as a further validation of the predicted LBSs. Applying this criterion, we were able to narrow down the predicted GTP-binding proteins to 2, the ATP-binding proteins to 13, the SIA-binding proteins to 2, the REA-binding proteins to 14, the hNO-binding proteins to 4, and the fNO-binding proteins to 1. We believe this further criterion increases the confidence level of our LBS predictions. |
1604.03250 | Robert Patro | Avi Srivastava, Hirak Sarkar, Laraib Malik, Rob Patro | Accurate, Fast and Lightweight Clustering of de novo Transcriptomes
using Fragment Equivalence Classes | paper accepted at the RECOMB-Seq 2016 | null | null | null | q-bio.GN cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: De novo transcriptome assembly of non-model organisms is the
first major step for many RNA-seq analysis tasks. Current methods for de novo
assembly often report a large number of contiguous sequences (contigs), which
may be fractured and incomplete sequences instead of full-length transcripts.
Dealing with a large number of such contigs can slow and complicate downstream
analysis.
Results: We present a method for clustering contigs from de novo
transcriptome assemblies based upon the relationships exposed by multi-mapping
sequencing fragments. Specifically, we cast the problem of clustering contigs
as one of clustering a sparse graph that is induced by equivalence classes of
fragments that map to subsets of the transcriptome. Leveraging recent
developments in efficient read mapping and transcript quantification, we have
developed RapClust, a tool implementing this approach that is capable of
accurately clustering most large de novo transcriptomes in a matter of minutes,
while simultaneously providing accurate estimates of expression for the
resulting clusters. We compare RapClust against a number of tools commonly used
for de novo transcriptome clustering. Using de novo assemblies of organisms for
which reference genomes are available, we assess the accuracy of these
different methods in terms of the quality of the resulting clusterings, and the
concordance of differential expression tests with those based on ground truth
clusters. We find that RapClust produces clusters of comparable or better
quality than existing state-of-the-art approaches, and does so substantially
faster. RapClust also confers a large benefit in terms of space usage, as it
produces only succinct intermediate files - usually on the order of a few
megabytes - even when processing hundreds of millions of reads.
| [
{
"created": "Tue, 12 Apr 2016 05:23:37 GMT",
"version": "v1"
}
] | 2016-04-13 | [
[
"Srivastava",
"Avi",
""
],
[
"Sarkar",
"Hirak",
""
],
[
"Malik",
"Laraib",
""
],
[
"Patro",
"Rob",
""
]
] | Motivation: De novo transcriptome assembly of non-model organisms is the first major step for many RNA-seq analysis tasks. Current methods for de novo assembly often report a large number of contiguous sequences (contigs), which may be fractured and incomplete sequences instead of full-length transcripts. Dealing with a large number of such contigs can slow and complicate downstream analysis. Results: We present a method for clustering contigs from de novo transcriptome assemblies based upon the relationships exposed by multi-mapping sequencing fragments. Specifically, we cast the problem of clustering contigs as one of clustering a sparse graph that is induced by equivalence classes of fragments that map to subsets of the transcriptome. Leveraging recent developments in efficient read mapping and transcript quantification, we have developed RapClust, a tool implementing this approach that is capable of accurately clustering most large de novo transcriptomes in a matter of minutes, while simultaneously providing accurate estimates of expression for the resulting clusters. We compare RapClust against a number of tools commonly used for de novo transcriptome clustering. Using de novo assemblies of organisms for which reference genomes are available, we assess the accuracy of these different methods in terms of the quality of the resulting clusterings, and the concordance of differential expression tests with those based on ground truth clusters. We find that RapClust produces clusters of comparable or better quality than existing state-of-the-art approaches, and does so substantially faster. RapClust also confers a large benefit in terms of space usage, as it produces only succinct intermediate files - usually on the order of a few megabytes - even when processing hundreds of millions of reads. |
2404.17981 | Fernanda Matias | \'Icaro Rodolfo Soares Coelho Da Paz, Pedro F. A. Silva, Helena
Bordini de Lucas, S\'ergio H. A. Lira, Osvaldo A. Rosso, Fernanda Selingardi
Matias | A symbolic information approach applied to human intracranial data to
characterize and distinguish different cognitive processes | null | null | null | null | q-bio.NC physics.bio-ph physics.comp-ph physics.data-an | http://creativecommons.org/licenses/by/4.0/ | How the human brain processes information during different cognitive tasks is
one of the greatest questions in contemporary neuroscience. Understanding the
statistical properties of brain signals during specific activities is one
promising way to address this question. Here we analyze freely available data
from implanted electrocorticography (ECoG) in five human subjects during two
different cognitive tasks in the light of information theory quantifiers ideas.
We employ a symbolic information approach to determine the probability
distribution function associated with the time series from different cortical
areas. Then we utilize these probabilities to calculate the associated Shannon
entropy and a statistical complexity measure based on the disequilibrium
between the actual time series and one with a uniform probability distribution
function. We show that a Euclidean distance in the complexity-entropy plane
and an asymmetry index for complexity are useful for comparing the two
conditions. We show that our method can distinguish visual search epochs from
blank screen intervals in different electrodes and patients. By using a
multi-scale approach and embedding time delays to downsample the data, we find
important time scales in which the relevant information is being processed. We
also determine cortical regions and time intervals along the 2-second-long
trials that present more pronounced differences between the two cognitive
tasks. Finally, we show that the method is useful to distinguish cognitive
processes using brain activity on a trial-by-trial basis.
| [
{
"created": "Sat, 27 Apr 2024 18:52:58 GMT",
"version": "v1"
}
] | 2024-04-30 | [
[
"Da Paz",
"Ícaro Rodolfo Soares Coelho",
""
],
[
"Silva",
"Pedro F. A.",
""
],
[
"de Lucas",
"Helena Bordini",
""
],
[
"Lira",
"Sérgio H. A.",
""
],
[
"Rosso",
"Osvaldo A.",
""
],
[
"Matias",
"Fernanda Selingardi",
""
]
] | How the human brain processes information during different cognitive tasks is one of the greatest questions in contemporary neuroscience. Understanding the statistical properties of brain signals during specific activities is one promising way to address this question. Here we analyze freely available data from implanted electrocorticography (ECoG) in five human subjects during two different cognitive tasks in the light of information theory quantifiers ideas. We employ a symbolic information approach to determine the probability distribution function associated with the time series from different cortical areas. Then we utilize these probabilities to calculate the associated Shannon entropy and a statistical complexity measure based on the disequilibrium between the actual time series and one with a uniform probability distribution function. We show that a Euclidean distance in the complexity-entropy plane and an asymmetry index for complexity are useful for comparing the two conditions. We show that our method can distinguish visual search epochs from blank screen intervals in different electrodes and patients. By using a multi-scale approach and embedding time delays to downsample the data, we find important time scales in which the relevant information is being processed. We also determine cortical regions and time intervals along the 2-second-long trials that present more pronounced differences between the two cognitive tasks. Finally, we show that the method is useful to distinguish cognitive processes using brain activity on a trial-by-trial basis. |
1307.6583 | Jason Graham | Jason M Graham | A Measure of Control for Secondary Cytokine-Induced Injury of Articular
Cartilage: A Computational Study | null | null | null | null | q-bio.QM q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In previous works, the author and collaborators establish a mathematical
model for injury response in articular cartilage. In this paper we use
mathematical software and computational techniques, applied to an existing
model to explore in more detail how the behavior of cartilage cells is
influenced by several of, what are believed to be, the most significant
mechanisms underlying cartilage injury response at the cellular level. We
introduce a control parameter, the radius of attenuation, and present some new
simulations that shed light on how inflammation associated with cartilage
injuries impacts the metabolic activity of cartilage cells. The details
presented in the work can help to elucidate targets for more effective
therapies in the preventative treatment of post-traumatic osteoarthritis.
| [
{
"created": "Wed, 24 Jul 2013 21:03:23 GMT",
"version": "v1"
}
] | 2013-07-26 | [
[
"Graham",
"Jason M",
""
]
] | In previous works, the author and collaborators establish a mathematical model for injury response in articular cartilage. In this paper we use mathematical software and computational techniques, applied to an existing model to explore in more detail how the behavior of cartilage cells is influenced by several of, what are believed to be, the most significant mechanisms underlying cartilage injury response at the cellular level. We introduce a control parameter, the radius of attenuation, and present some new simulations that shed light on how inflammation associated with cartilage injuries impacts the metabolic activity of cartilage cells. The details presented in the work can help to elucidate targets for more effective therapies in the preventative treatment of post-traumatic osteoarthritis. |
0903.1753 | Ginestra Bianconi | Ginestra Bianconi, Luca Ferretti and Silvio Franz | Non-neutral theory of biodiversity | 4 pages, 3 figure | Europhys. Lett. 87, 28001 (2009) | 10.1209/0295-5075/87/28001 | null | q-bio.PE cond-mat.dis-nn cond-mat.stat-mech q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a non-neutral stochastic model for the dynamics taking place in a
meta-community ecosystem in the presence of migration. The model provides a
framework for describing the emergence of multiple ecological scenarios and
behaves in two extreme limits either as the unified neutral theory of
biodiversity or as the Bak-Sneppen model. Interestingly, the model shows a
condensation phase transition where one species becomes the dominant one, the
diversity in the ecosystems is strongly reduced and the ecosystem is
non-stationary. This phase transition extends the principle of competitive
exclusion to open ecosystems and might be relevant for the study of the impact
of invasive species in native ecologies.
| [
{
"created": "Tue, 10 Mar 2009 12:49:43 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Oct 2009 13:51:53 GMT",
"version": "v2"
}
] | 2015-05-13 | [
[
"Bianconi",
"Ginestra",
""
],
[
"Ferretti",
"Luca",
""
],
[
"Franz",
"Silvio",
""
]
] | We present a non-neutral stochastic model for the dynamics taking place in a meta-community ecosystem in the presence of migration. The model provides a framework for describing the emergence of multiple ecological scenarios and behaves in two extreme limits either as the unified neutral theory of biodiversity or as the Bak-Sneppen model. Interestingly, the model shows a condensation phase transition where one species becomes the dominant one, the diversity in the ecosystems is strongly reduced and the ecosystem is non-stationary. This phase transition extends the principle of competitive exclusion to open ecosystems and might be relevant for the study of the impact of invasive species in native ecologies. |
1211.4210 | Oscar Franz\'en | Oscar Franz\'en | Genome and transcriptome studies of the protozoan parasites Trypanosoma
cruzi and Giardia intestinalis | PhD thesis, Karolinska Institutet, November 2012 | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Trypanosoma cruzi and Giardia intestinalis are two human pathogens and
protozoan parasites responsible for the diseases Chagas disease and giardiasis,
respectively. Both diseases cause suffering and illness in several million
individuals. The former disease occurs primarily in South America and Central
America, and the latter disease occurs worldwide. Current therapeutics are
toxic and lack efficacy, and potential vaccines are far from the market.
Increased knowledge about the biology of these parasites is essential for drug
and vaccine development, and new diagnostic tests. In this thesis,
high-throughput sequencing was applied together with extensive bioinformatic
analyses to yield insights into the biology and evolution of Trypanosoma cruzi
and Giardia intestinalis. Bioinformatics analysis of DNA and RNA sequences was
performed to identify features that may be of importance for parasite biology
and functional characterization. This thesis is based on five papers (i-v).
Paper i and ii describe comparative genome studies of three distinct genotypes
of Giardia intestinalis (A, B and E). Paper iii describes a genome comparison
of the human infecting Trypanosoma cruzi with the bat-restricted subspecies
Trypanosoma cruzi marinkellei. Paper iv describes the repertoire of small
non-coding RNAs in Trypanosoma cruzi epimastigotes. Paper v describes
transcriptome analysis using paired-end RNA-Seq of three distinct genotypes of
Giardia intestinalis (A, B and E).
| [
{
"created": "Sun, 18 Nov 2012 11:36:07 GMT",
"version": "v1"
}
] | 2012-11-20 | [
[
"Franzén",
"Oscar",
""
]
] | Trypanosoma cruzi and Giardia intestinalis are two human pathogens and protozoan parasites responsible for the diseases Chagas disease and giardiasis, respectively. Both diseases cause suffering and illness in several million individuals. The former disease occurs primarily in South America and Central America, and the latter disease occurs worldwide. Current therapeutics are toxic and lack efficacy, and potential vaccines are far from the market. Increased knowledge about the biology of these parasites is essential for drug and vaccine development, and new diagnostic tests. In this thesis, high-throughput sequencing was applied together with extensive bioinformatic analyses to yield insights into the biology and evolution of Trypanosoma cruzi and Giardia intestinalis. Bioinformatics analysis of DNA and RNA sequences was performed to identify features that may be of importance for parasite biology and functional characterization. This thesis is based on five papers (i-v). Paper i and ii describe comparative genome studies of three distinct genotypes of Giardia intestinalis (A, B and E). Paper iii describes a genome comparison of the human infecting Trypanosoma cruzi with the bat-restricted subspecies Trypanosoma cruzi marinkellei. Paper iv describes the repertoire of small non-coding RNAs in Trypanosoma cruzi epimastigotes. Paper v describes transcriptome analysis using paired-end RNA-Seq of three distinct genotypes of Giardia intestinalis (A, B and E). |
2012.05744 | Yu Yao | Yu Yao and Klaas E. Stephan | Markov chain Monte Carlo methods for hierarchical clustering of dynamic
causal models | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address technical difficulties that arise when applying
Markov chain Monte Carlo (MCMC) to hierarchical models designed to perform
clustering in the space of latent parameters of subject-wise generative models.
Specifically, we focus on the case where the subject-wise generative model is a
dynamic causal model (DCM) for fMRI and clusters are defined in terms of
effective brain connectivity. While an attractive approach for detecting
mechanistically interpretable subgroups in heterogeneous populations, inverting
such a hierarchical model represents a particularly challenging case, since DCM
is often characterized by high posterior correlations between its parameters.
In this context, standard MCMC schemes exhibit poor performance and extremely
slow convergence. In this paper, we investigate the properties of hierarchical
clustering which lead to the observed failure of standard MCMC schemes and
propose a solution designed to improve convergence but preserve computational
complexity. Specifically, we introduce a class of proposal distributions which
aims to capture the interdependencies between the parameters of the clustering
and subject-wise generative models and helps to reduce random walk behaviour of
the MCMC scheme. Critically, these proposal distributions only introduce a
single hyperparameter that needs to be tuned to achieve good performance. For
validation, we apply our proposed solution to synthetic and real-world datasets
and also compare it, in terms of computational complexity and performance, to
Hamiltonian Monte Carlo (HMC), a state-of-the-art Monte Carlo. Our results
indicate that, for the specific application domain considered here, our
proposed solution shows good convergence performance and superior runtime
compared to HMC.
| [
{
"created": "Thu, 10 Dec 2020 15:26:12 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Dec 2020 16:19:55 GMT",
"version": "v2"
}
] | 2020-12-15 | [
[
"Yao",
"Yu",
""
],
[
"Stephan",
"Klaas E.",
""
]
] | In this paper, we address technical difficulties that arise when applying Markov chain Monte Carlo (MCMC) to hierarchical models designed to perform clustering in the space of latent parameters of subject-wise generative models. Specifically, we focus on the case where the subject-wise generative model is a dynamic causal model (DCM) for fMRI and clusters are defined in terms of effective brain connectivity. While an attractive approach for detecting mechanistically interpretable subgroups in heterogeneous populations, inverting such a hierarchical model represents a particularly challenging case, since DCM is often characterized by high posterior correlations between its parameters. In this context, standard MCMC schemes exhibit poor performance and extremely slow convergence. In this paper, we investigate the properties of hierarchical clustering which lead to the observed failure of standard MCMC schemes and propose a solution designed to improve convergence but preserve computational complexity. Specifically, we introduce a class of proposal distributions which aims to capture the interdependencies between the parameters of the clustering and subject-wise generative models and helps to reduce random walk behaviour of the MCMC scheme. Critically, these proposal distributions only introduce a single hyperparameter that needs to be tuned to achieve good performance. For validation, we apply our proposed solution to synthetic and real-world datasets and also compare it, in terms of computational complexity and performance, to Hamiltonian Monte Carlo (HMC), a state-of-the-art Monte Carlo. Our results indicate that, for the specific application domain considered here, our proposed solution shows good convergence performance and superior runtime compared to HMC. |
1710.03366 | Bilal Khan | Bilal Khan, Ian Duncan, Mohamad Saad, Daniel Schaefer, Ashly Jordan,
Daniel Smith, Alan Neaigus, Don Des Jarlais, Holly Hagan, Kirk Dombrowski | Combination interventions for Hepatitis C and Cirrhosis reduction among
people who inject drugs: An agent-based, networked population simulation
experiment | null | null | 10.1371/journal.pone.0206356 | null | q-bio.PE cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hepatitis C virus (HCV) infection is endemic in people who inject drugs
(PWID), with prevalence estimates above 60 percent for PWID in the United
States. Previous modeling studies suggest that direct acting antiviral (DAA)
treatment can lower overall prevalence in this population, but treatment is
often delayed until the onset of advanced liver disease (fibrosis stage 3 or
later) due to cost. Lower cost interventions featuring syringe access (SA) and
medically assisted treatment (MAT) for addiction are known to be less costly,
but have shown mixed results in lowering HCV rates below current levels. Little
is known about the potential synergistic effects of combining DAA and MAT
treatment, and large-scale tests of combined interventions are rare. While
simulation experiments can reveal likely long-term effects, most prior
simulations have been performed on closed populations of model agents--a
scenario quite different from the open, mobile populations known to most health
agencies. This paper uses data from the Centers for Disease Control's National
HIV Behavioral Surveillance project, IDU round 3, collected in New York City in
2012 by the New York City Department of Health and Mental Hygiene to
parameterize simulations of open populations. Our results show that, in an open
population, SA/MAT by itself has only small effects on HCV prevalence, while
DAA treatment by itself can significantly lower both HCV and HCV-related
advanced liver disease prevalence. More importantly, the simulation experiments
suggest that cost effective synergistic combinations of the two strategies can
dramatically reduce HCV incidence. We conclude that adopting SA/MAT
implementations alongside DAA interventions can play a critical role in
reducing the long-term consequences of ongoing infection.
| [
{
"created": "Tue, 10 Oct 2017 00:56:02 GMT",
"version": "v1"
}
] | 2019-03-06 | [
[
"Khan",
"Bilal",
""
],
[
"Duncan",
"Ian",
""
],
[
"Saad",
"Mohamad",
""
],
[
"Schaefer",
"Daniel",
""
],
[
"Jordan",
"Ashly",
""
],
[
"Smith",
"Daniel",
""
],
[
"Neaigus",
"Alan",
""
],
[
"Jarlais",
"Don Des",
""
],
[
"Hagan",
"Holly",
""
],
[
"Dombrowski",
"Kirk",
""
]
] | Hepatitis C virus (HCV) infection is endemic in people who inject drugs (PWID), with prevalence estimates above 60 percent for PWID in the United States. Previous modeling studies suggest that direct acting antiviral (DAA) treatment can lower overall prevalence in this population, but treatment is often delayed until the onset of advanced liver disease (fibrosis stage 3 or later) due to cost. Lower cost interventions featuring syringe access (SA) and medically assisted treatment (MAT) for addiction are known to be less costly, but have shown mixed results in lowering HCV rates below current levels. Little is known about the potential synergistic effects of combining DAA and MAT treatment, and large-scale tests of combined interventions are rare. While simulation experiments can reveal likely long-term effects, most prior simulations have been performed on closed populations of model agents--a scenario quite different from the open, mobile populations known to most health agencies. This paper uses data from the Centers for Disease Control's National HIV Behavioral Surveillance project, IDU round 3, collected in New York City in 2012 by the New York City Department of Health and Mental Hygiene to parameterize simulations of open populations. Our results show that, in an open population, SA/MAT by itself has only small effects on HCV prevalence, while DAA treatment by itself can significantly lower both HCV and HCV-related advanced liver disease prevalence. More importantly, the simulation experiments suggest that cost effective synergistic combinations of the two strategies can dramatically reduce HCV incidence. We conclude that adopting SA/MAT implementations alongside DAA interventions can play a critical role in reducing the long-term consequences of ongoing infection. |
2012.08755 | Sarah Fay | Sarah C. Fay, Dalton J. Jones, Munther A. Dahleh, A. E. Hosoi | Simple control for complex pandemics | null | null | null | null | q-bio.PE math.PR physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The COVID-19 pandemic began over two years ago, yet schools, businesses, and
other organizations are still struggling to keep the risk of disease outbreak
low while returning to (near) normal functionality. Observations from these
past years suggest that this goal can be achieved through the right balance of
mitigation strategies, which may include some combination of mask use,
vaccinations, viral testing, and contact tracing. The choice of mitigation
measures will be uniquely based on the needs and available resources of each
organization. This article presents practical guidance for creating these
policies based on an analytical model of disease spread that captures the
combined effects of each of these interventions. The resulting guidance is
tested through simulation across a wide range of parameters and used to discuss
the spread of disease on college campuses.
| [
{
"created": "Wed, 16 Dec 2020 06:10:05 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Feb 2021 14:35:11 GMT",
"version": "v2"
},
{
"created": "Mon, 31 Jan 2022 02:22:15 GMT",
"version": "v3"
}
] | 2022-02-02 | [
[
"Fay",
"Sarah C.",
""
],
[
"Jones",
"Dalton J.",
""
],
[
"Dahleh",
"Munther A.",
""
],
[
"Hosoi",
"A. E.",
""
]
] | The COVID-19 pandemic began over two years ago, yet schools, businesses, and other organizations are still struggling to keep the risk of disease outbreak low while returning to (near) normal functionality. Observations from these past years suggest that this goal can be achieved through the right balance of mitigation strategies, which may include some combination of mask use, vaccinations, viral testing, and contact tracing. The choice of mitigation measures will be uniquely based on the needs and available resources of each organization. This article presents practical guidance for creating these policies based on an analytical model of disease spread that captures the combined effects of each of these interventions. The resulting guidance is tested through simulation across a wide range of parameters and used to discuss the spread of disease on college campuses. |
1209.4017 | Leonid Perlovsky | Leonid Perlovsky, Arnaud Cabanac, Marie-Claude Bonniot-Cabanac, Michel
Cabanac | Mozart Effect, Cognitive Dissonance, and the Pleasure of Music | 11 pages | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Mozart effect refers to scientific data on short-term improvement on
certain mental tasks after listening to Mozart, and also to its popularized
version that listening to Mozart makes you smarter (Tomatis, 1991; Wikipedia,
2012). Does Mozart effect point to a fundamental cognitive function of music?
Would such an effect of music be due to the hedonicity, a fundamental dimension
of mental experience? The present paper explores a recent hypothesis that music
helps to tolerate cognitive dissonances and thus enabled accumulation of
knowledge and human cultural evolution (Perlovsky, 2010, 2012). We studied
whether the influence of music is related to its hedonicity and whether
pleasant or unpleasant music would influence scholarly test performance and
cognitive dissonance. Specific hypotheses evaluated here are that during a test
students experience contradictory cognitions that cause cognitive dissonances.
If some music helps to tolerate cognitive dissonances, then first, this music
should increase the duration during which participants can tolerate stressful
conditions while evaluating test choices. Second, this should result in
improved performance. These hypotheses are tentatively confirmed in the
reported experiments as the agreeable music was correlated with better
performance above that under indifferent or unpleasant music. It follows that
music likely performs a fundamental cognitive function explaining the origin
and evolution of musical ability considered previously a mystery.
| [
{
"created": "Sat, 15 Sep 2012 16:24:33 GMT",
"version": "v1"
}
] | 2012-09-19 | [
[
"Perlovsky",
"Leonid",
""
],
[
"Cabanac",
"Arnaud",
""
],
[
"Bonniot-Cabanac",
"Marie-Claude",
""
],
[
"Cabanac",
"Michel",
""
]
] | The Mozart effect refers to scientific data on short-term improvement on certain mental tasks after listening to Mozart, and also to its popularized version that listening to Mozart makes you smarter (Tomatis, 1991; Wikipedia, 2012). Does Mozart effect point to a fundamental cognitive function of music? Would such an effect of music be due to the hedonicity, a fundamental dimension of mental experience? The present paper explores a recent hypothesis that music helps to tolerate cognitive dissonances and thus enabled accumulation of knowledge and human cultural evolution (Perlovsky, 2010, 2012). We studied whether the influence of music is related to its hedonicity and whether pleasant or unpleasant music would influence scholarly test performance and cognitive dissonance. Specific hypotheses evaluated here are that during a test students experience contradictory cognitions that cause cognitive dissonances. If some music helps to tolerate cognitive dissonances, then first, this music should increase the duration during which participants can tolerate stressful conditions while evaluating test choices. Second, this should result in improved performance. These hypotheses are tentatively confirmed in the reported experiments as the agreeable music was correlated with better performance above that under indifferent or unpleasant music. It follows that music likely performs a fundamental cognitive function explaining the origin and evolution of musical ability considered previously a mystery. |
2311.04953 | Aram Mohammed | Aram Akram Mohammed, Rasul Rafiq Aziz, Faraydwn Karim Ahmad, Ibrahim
Maaroof Noori and Tariq Abubakr Ahmad | Rooting capacity of hardwood cuttings of some fruit trees in relation to
cutting pattern | null | null | 10.26682/ajuod.2020.23.1.1 | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | We studied two cut patterns
in hardwood cuttings of quince (Cydonia oblonga), pomegranate (Punica
granatum) and fig (Ficus carica). The cuttings were cut either straight with
different internode stub lengths [0 (just onto the basal node, as control),
0.5, 1.0, 2.0 or 3.0 cm below the basal node] or slanted at a 45 degree angle
for each length except the first (0 cm). The direction of the basal cut had
no significant effect on rooting percentage or other shoot and root
characteristics, although the slant cut pattern produced one-sided rooting at
the basal margin in some quince cuttings and only rarely in pomegranate and
fig cuttings. Quince cuttings showed no significant differences in rooting
percentage or other shoot and root characteristics across internode stub
lengths, whereas internode stubs of 1 and 2 cm in pomegranate cuttings and of
0 cm in fig cuttings gave the best rooting percentages, 44.44% and 100%,
respectively. Interaction effects of the two factors on rooting percentage
and other shoot and root characteristics were significant only in pomegranate
and fig cuttings. The best rooting capacity in pomegranate (49.99%) was
achieved in cuttings cut straight at the base with 1 and 2 cm basal internode
stubs, and fig cuttings cut straight at the base with 0 and 1 cm basal
internode stubs gave the highest rooting capacity (100%).
| [
{
"created": "Wed, 8 Nov 2023 17:13:18 GMT",
"version": "v1"
}
] | 2023-11-10 | [
[
"Mohammed",
"Aram Akram",
""
],
[
"Aziz",
"Rasul Rafiq",
""
],
[
"Ahmad",
"Faraydwn Karim",
""
],
[
"Noori",
"Ibrahim Maaroof",
""
],
[
"Ahmad",
"Tariq Abubakr",
""
]
] | We studied two cut patterns in hardwood cuttings of quince (Cydonia oblonga), pomegranate (Punica granatum) and fig (Ficus carica). The cuttings were cut either straight with different internode stub lengths [0 (just onto the basal node, as control), 0.5, 1.0, 2.0 or 3.0 cm below the basal node] or slanted at a 45 degree angle for each length except the first (0 cm). The direction of the basal cut had no significant effect on rooting percentage or other shoot and root characteristics, although the slant cut pattern produced one-sided rooting at the basal margin in some quince cuttings and only rarely in pomegranate and fig cuttings. Quince cuttings showed no significant differences in rooting percentage or other shoot and root characteristics across internode stub lengths, whereas internode stubs of 1 and 2 cm in pomegranate cuttings and of 0 cm in fig cuttings gave the best rooting percentages, 44.44% and 100%, respectively. Interaction effects of the two factors on rooting percentage and other shoot and root characteristics were significant only in pomegranate and fig cuttings. The best rooting capacity in pomegranate (49.99%) was achieved in cuttings cut straight at the base with 1 and 2 cm basal internode stubs, and fig cuttings cut straight at the base with 0 and 1 cm basal internode stubs gave the highest rooting capacity (100%). |
2209.06944 | Iordanka Panayotova | Iordanka Panayotova, John Herrmann, Nathan Kolling | Bioeconomic analysis of harvesting within a predator-prey system: A case
study in the Chesapeake Bay fisheries | 39 pages, 16 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Sustainable use of biological resources is very important as over
exploitation on the long run may lead to stock depletion, which in turn may
threaten biodiversity. The Chesapeake Bay is an extremely complex ecosystem,
and sustainable harvesting of its fisheries is essential both for the
ecosystem's biodiversity and economic prosperity of the area. Here, we use
ecosystem based mathematical modeling to study the population dynamics with
harvesting of two key fishes in the Chesapeake Bay, the Atlantic Menhaden
(Brevoortia tyrannus) as a prey and the Striped Bass (Morone saxatilis) as a
predator. We start by fitting the generalized Lotka-Volterra model to actual
time series abundance data of the two species obtained from fisheries in the
Bay. We derive conditions for the existence of the bio-economic equilibrium and
investigate the stability and the resilience of the biological system. We study
the maximum sustainable yield, maximum economic yield, and resilience
maximizing yield policies and their effects on the fisheries long term
sustainability, particularly with respect to the menhaden-bass population
dynamics. This study may be used by policy-makers to balance the economic and
ecological harvesting goals while managing the populations of Atlantic menhaden
and striped bass in the Chesapeake Bay fisheries.
| [
{
"created": "Wed, 14 Sep 2022 21:29:41 GMT",
"version": "v1"
},
{
"created": "Thu, 12 Jan 2023 16:46:16 GMT",
"version": "v2"
}
] | 2023-01-13 | [
[
"Panayotova",
"Iordanka",
""
],
[
"Herrmann",
"John",
""
],
[
"Kolling",
"Nathan",
""
]
] | Sustainable use of biological resources is very important as over exploitation on the long run may lead to stock depletion, which in turn may threaten biodiversity. The Chesapeake Bay is an extremely complex ecosystem, and sustainable harvesting of its fisheries is essential both for the ecosystem's biodiversity and economic prosperity of the area. Here, we use ecosystem based mathematical modeling to study the population dynamics with harvesting of two key fishes in the Chesapeake Bay, the Atlantic Menhaden (Brevoortia tyrannus) as a prey and the Striped Bass (Morone saxatilis) as a predator. We start by fitting the generalized Lotka-Volterra model to actual time series abundance data of the two species obtained from fisheries in the Bay. We derive conditions for the existence of the bio-economic equilibrium and investigate the stability and the resilience of the biological system. We study the maximum sustainable yield, maximum economic yield, and resilience maximizing yield policies and their effects on the fisheries long term sustainability, particularly with respect to the menhaden-bass population dynamics. This study may be used by policy-makers to balance the economic and ecological harvesting goals while managing the populations of Atlantic menhaden and striped bass in the Chesapeake Bay fisheries. |
1510.08372 | Haidong Dong | Haidong Dong, Yiyi Yan, Roxana S. Dronca, and Svetomir N. Markovic | T cell equation as a conceptual model of T cell responses for maximizing
the efficacy of cancer immunotherapy | 5 pages | SOJ Immunol 5(1):1-5, 2017 | 10.15226/2372-0948/4/1/00155 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Following antigen stimulation, the net outcomes of a T cell response are
shaped by integrated signals from both positive co-stimulatory and negative
regulatory molecules. Recently, the blockade of negative regulatory molecules
(i.e. immune checkpoint signals) demonstrates therapeutic effects in treatment
of human cancer, but only in a fraction of cancer patients. Since this therapy
is aimed to enhance T cell responses to cancers, here we devised a conceptual
model by integrating both positive and negative signals in addition to antigen
stimulation. A digital range of adjustment of each signal is formulated in our
model for prediction of a final T cell response. This model allows us to
evaluate strategies in order to enhance antitumor T cell responses. Our model
provides a rational combination strategy for maximizing the therapeutic effects
of cancer immunotherapy.
| [
{
"created": "Wed, 28 Oct 2015 16:36:04 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Sep 2017 02:15:04 GMT",
"version": "v2"
},
{
"created": "Sat, 16 Sep 2017 16:10:32 GMT",
"version": "v3"
}
] | 2017-09-19 | [
[
"Dong",
"Haidong",
""
],
[
"Yan",
"Yiyi",
""
],
[
"Dronca",
"Roxana S.",
""
],
[
"Markovic",
"Svetomir N.",
""
]
] | Following antigen stimulation, the net outcomes of a T cell response are shaped by integrated signals from both positive co-stimulatory and negative regulatory molecules. Recently, the blockade of negative regulatory molecules (i.e. immune checkpoint signals) demonstrates therapeutic effects in treatment of human cancer, but only in a fraction of cancer patients. Since this therapy is aimed to enhance T cell responses to cancers, here we devised a conceptual model by integrating both positive and negative signals in addition to antigen stimulation. A digital range of adjustment of each signal is formulated in our model for prediction of a final T cell response. This model allows us to evaluate strategies in order to enhance antitumor T cell responses. Our model provides a rational combination strategy for maximizing the therapeutic effects of cancer immunotherapy. |
2209.12821 | Gerardo Chowell | Gerardo Chowell, Sushma Dahal, Yuganthi R. Liyanage, Amna Tariq,
Necibe Tuncer | Structural identifiability analysis of epidemic models based on
differential equations: A tutorial-based primer | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | The successful application of epidemic models hinges on our ability to
estimate model parameters from limited observations reliably. An
often-overlooked step before estimating model parameters consists of ensuring
that the model parameters are structurally identifiable from the observed
states of the system. In this tutorial-based primer, intended for a diverse
audience, including students training in dynamic systems, we review and provide
detailed guidance for conducting structural identifiability analysis of
differential equation epidemic models based on a differential algebra approach
using DAISY (Differential Algebra for Identifiability of SYstems) and
\textit{Mathematica} (Wolfram Research). This approach aims to uncover any
existing parameter correlations that preclude their estimation from the
observed variables. We demonstrate this approach through examples, including
tutorial videos of compartmental epidemic models previously employed to study
transmission dynamics and control. We show that the lack of structural
identifiability may be remedied by incorporating additional observations from
different model states, assuming that the system's initial conditions are
known, using prior information to fix some parameters involved in parameter
correlations, or modifying the model based on existing parameter correlations.
We also underscore how the results of structural identifiability analysis can
help enrich compartmental diagrams of differential-equation models by
indicating the observed state variables and the results of the structural
identifiability analysis.
| [
{
"created": "Mon, 26 Sep 2022 16:28:32 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Aug 2023 16:04:45 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Sep 2023 20:33:50 GMT",
"version": "v3"
}
] | 2023-09-29 | [
[
"Chowell",
"Gerardo",
""
],
[
"Dahal",
"Sushma",
""
],
[
"Liyanage",
"Yuganthi R.",
""
],
[
"Tariq",
"Amna",
""
],
[
"Tuncer",
"Necibe",
""
]
] | The successful application of epidemic models hinges on our ability to estimate model parameters from limited observations reliably. An often-overlooked step before estimating model parameters consists of ensuring that the model parameters are structurally identifiable from the observed states of the system. In this tutorial-based primer, intended for a diverse audience, including students training in dynamic systems, we review and provide detailed guidance for conducting structural identifiability analysis of differential equation epidemic models based on a differential algebra approach using DAISY (Differential Algebra for Identifiability of SYstems) and \textit{Mathematica} (Wolfram Research). This approach aims to uncover any existing parameter correlations that preclude their estimation from the observed variables. We demonstrate this approach through examples, including tutorial videos of compartmental epidemic models previously employed to study transmission dynamics and control. We show that the lack of structural identifiability may be remedied by incorporating additional observations from different model states, assuming that the system's initial conditions are known, using prior information to fix some parameters involved in parameter correlations, or modifying the model based on existing parameter correlations. We also underscore how the results of structural identifiability analysis can help enrich compartmental diagrams of differential-equation models by indicating the observed state variables and the results of the structural identifiability analysis. |
1608.07259 | Anne Shiu | Mitchell Eithun and Anne Shiu | An All-Encompassing Global Convergence Result for Processive Multisite
Phosphorylation Systems | 23 pages; updated to be consistent with version 2 of arXiv:1606.09480 | null | null | null | q-bio.MN math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phosphorylation, the enzyme-mediated addition of a phosphate group to a
molecule, is a ubiquitous chemical mechanism in biology. Multisite
phosphorylation, the addition of phosphate groups to multiple sites of a single
molecule, may be distributive or processive. Distributive systems can be
bistable, while processive systems were recently shown to be globally stable.
However, this global convergence result was proven only for a specific
mechanism of processive phosphorylation/dephosphorylation (namely, all
catalytic reactions are reversible). Accordingly, we generalize this result to
allow for processive phosphorylation networks in which each reaction may be
irreversible, and also to account for possible product inhibition. We
accomplish this by defining an all-encompassing processive network that
encapsulates all of these schemes, and then appealing to recent results of
Marcondes de Freitas, Wiuf, and Feliu that assert global convergence by way of
monotone systems theory and network/graph reductions (which correspond to
removal of intermediate complexes). Our results form a case study into the
question of when global convergence is preserved when reactions and/or
intermediate complexes are added to or removed from a network.
| [
{
"created": "Thu, 25 Aug 2016 19:20:36 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Apr 2017 14:08:52 GMT",
"version": "v2"
},
{
"created": "Fri, 2 Jun 2017 21:00:46 GMT",
"version": "v3"
}
] | 2017-06-06 | [
[
"Eithun",
"Mitchell",
""
],
[
"Shiu",
"Anne",
""
]
] | Phosphorylation, the enzyme-mediated addition of a phosphate group to a molecule, is a ubiquitous chemical mechanism in biology. Multisite phosphorylation, the addition of phosphate groups to multiple sites of a single molecule, may be distributive or processive. Distributive systems can be bistable, while processive systems were recently shown to be globally stable. However, this global convergence result was proven only for a specific mechanism of processive phosphorylation/dephosphorylation (namely, all catalytic reactions are reversible). Accordingly, we generalize this result to allow for processive phosphorylation networks in which each reaction may be irreversible, and also to account for possible product inhibition. We accomplish this by defining an all-encompassing processive network that encapsulates all of these schemes, and then appealing to recent results of Marcondes de Freitas, Wiuf, and Feliu that assert global convergence by way of monotone systems theory and network/graph reductions (which correspond to removal of intermediate complexes). Our results form a case study into the question of when global convergence is preserved when reactions and/or intermediate complexes are added to or removed from a network. |
1509.05986 | R.K. Brojen Singh | Soibam Shyamchand Singh, Khundrakpam Budhachandra Singh, Romana
Ishrat, B. Indrajit Sharma and R.K. Brojen Singh | Scaling in topological properties of brain networks | null | null | null | null | q-bio.NC physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The organization in brain networks shows highly modular features with weak
inter-modular interaction. The topology of the networks involves emergence of
modules and sub-modules at different levels of constitution governed by fractal
laws. The modular organization, in terms of modular mass, inter-modular, and
intra-modular interaction, also obeys fractal nature. The parameters which
characterize topological properties of brain networks follow one parameter
scaling theory in all levels of network structure which reveals the
self-similar rules governing the network structure. The calculated fractal
dimensions of brain networks of different species are found to decrease when
one goes from lower to higher level species which implicates the more ordered
and self-organized topography at higher level species. The sparsely distributed
hubs in brain networks may be most influencing nodes but their absence may not
cause network breakdown, and centrality parameters characterizing them also
follow one parameter scaling law indicating self-similar roles of these hubs at
different levels of organization in brain networks.
| [
{
"created": "Sun, 20 Sep 2015 09:33:50 GMT",
"version": "v1"
}
] | 2015-09-22 | [
[
"Singh",
"Soibam Shyamchand",
""
],
[
"Singh",
"Khundrakpam Budhachandra",
""
],
[
"Ishrat",
"Romana",
""
],
[
"Sharma",
"B. Indrajit",
""
],
[
"Singh",
"R. K. Brojen",
""
]
] | The organization in brain networks shows highly modular features with weak inter-modular interaction. The topology of the networks involves emergence of modules and sub-modules at different levels of constitution governed by fractal laws. The modular organization, in terms of modular mass, inter-modular, and intra-modular interaction, also obeys fractal nature. The parameters which characterize topological properties of brain networks follow one parameter scaling theory in all levels of network structure which reveals the self-similar rules governing the network structure. The calculated fractal dimensions of brain networks of different species are found to decrease when one goes from lower to higher level species which implicates the more ordered and self-organized topography at higher level species. The sparsely distributed hubs in brain networks may be most influencing nodes but their absence may not cause network breakdown, and centrality parameters characterizing them also follow one parameter scaling law indicating self-similar roles of these hubs at different levels of organization in brain networks. |
q-bio/0511017 | Eytan Domany | Uri Einav, Yuval Tabach, Gad Getz, Assif Yitzhaky, Ugur Ozbek, Ninette
Amariglio, Shai Izraeli, Gideon Rechavi and Eytan Domany | Gene expression analysis reveals a strong signature of an interferon
induced pathway in childhood lymphoblastic leukemia as well as in breast and
ovarian cancer | null | Oncogene vol 24 p 6367 (2005) | null | null | q-bio.GN q-bio.MN | null | On the basis of epidemiological studies, infection was suggested to play a
role in the etiology of human cancer. While for some cancers such a role was
indeed demonstrated, there is no direct biological support for the role of
viral pathogens in the pathogenesis of childhood leukemia. Using a novel
bioinformatic tool, that alternates between clustering and standard statistical
methods of analysis, we performed a "double blind" search of published gene
expression data of subjects with different childhood ALL subtypes, looking for
unanticipated partitions of patients, induced by unexpected groups of genes
with correlated expression. We discovered a group of about thirty genes,
related to the interferon response pathway, whose expression levels divide the
ALL samples into two subgroups; high in 50, low in 285 patients. Leukemic
subclasses prevalent in early childhood (the age most susceptible to infection)
are over-represented in the high expression subgroup. Similar partitions,
induced by the same genes, were found also in breast and ovarian cancer but not
in lung cancer, prostate cancer and lymphoma. About 40% of breast cancer
samples expressed the "interferon-related" signature. It is of interest that
several studies demonstrated MMTV-like sequences in about 40% of breast cancer
samples. Our discovery of an unanticipated strong signature of an interferon
induced pathway provides molecular support for a role for either inflammation
or viral infection in the pathogenesis of childhood leukemia as well as breast
and ovarian cancer.
| [
{
"created": "Mon, 14 Nov 2005 19:03:21 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Einav",
"Uri",
""
],
[
"Tabach",
"Yuval",
""
],
[
"Getz",
"Gad",
""
],
[
"Yitzhaky",
"Assif",
""
],
[
"Ozbek",
"Ugur",
""
],
[
"Amariglio",
"Ninette",
""
],
[
"Izraeli",
"Shai",
""
],
[
"Rechavi",
"Gideon",
""
],
[
"Domany",
"Eytan",
""
]
] | On the basis of epidemiological studies, infection was suggested to play a role in the etiology of human cancer. While for some cancers such a role was indeed demonstrated, there is no direct biological support for the role of viral pathogens in the pathogenesis of childhood leukemia. Using a novel bioinformatic tool, that alternates between clustering and standard statistical methods of analysis, we performed a "double blind" search of published gene expression data of subjects with different childhood ALL subtypes, looking for unanticipated partitions of patients, induced by unexpected groups of genes with correlated expression. We discovered a group of about thirty genes, related to the interferon response pathway, whose expression levels divide the ALL samples into two subgroups; high in 50, low in 285 patients. Leukemic subclasses prevalent in early childhood (the age most susceptible to infection) are over-represented in the high expression subgroup. Similar partitions, induced by the same genes, were found also in breast and ovarian cancer but not in lung cancer, prostate cancer and lymphoma. About 40% of breast cancer samples expressed the "interferon-related" signature. It is of interest that several studies demonstrated MMTV-like sequences in about 40% of breast cancer samples. Our discovery of an unanticipated strong signature of an interferon induced pathway provides molecular support for a role for either inflammation or viral infection in the pathogenesis of childhood leukemia as well as breast and ovarian cancer. |
2202.10849 | Birgitta Dresp-Langley | Birgitta Dresp-Langley | Brain representation of perceptual stimuli at different levels of
awareness | arXiv admin note: text overlap with arXiv:1805.09176 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article questions the widespread assumption that there are brain
representations that will always remain unconscious in the sense of being
inaccessible to individual awareness under any circumstances. This implies that
some part of the knowledge generated by the brain is once and for always
excluded from consciousness and, therefore, from being communicated to the
outside world. This standpoint neglects the possibility that the human brain
might have a capacity for generating metarepresentations of nonconscious
knowledge contents at a given moment in time through context sensitive adaptive
learning, and is somewhat difficult to reconcile with experimental findings
showing that initially subliminal targets can be made available to awareness,
or break through to supraliminal levels of processing, when they are embedded
in an appropriate perceptual object context (relevance condition). Specific
properties of neural network architectures, inspired by the functional
organization of the primate cortex, are able to explain how a human brain could
generate this kind of perceptual learning. Signals or knowledge processed
outside awareness could be made available to awareness through adaptive
resonance of bottom-up and top-down signal exchanges in massively parallel
neural network architectures; in other words, on the basis of statistically
significant signal matches in the domain of time and in the domain of memory
content.
| [
{
"created": "Tue, 22 Feb 2022 12:23:28 GMT",
"version": "v1"
}
] | 2022-02-23 | [
[
"Dresp-Langley",
"Birgitta",
""
]
] | This article questions the widespread assumption that there are brain representations that will always remain unconscious in the sense of being inaccessible to individual awareness under any circumstances. This implies that some part of the knowledge generated by the brain is once and for always excluded from consciousness and, therefore, from being communicated to the outside world. This standpoint neglects the possibility that the human brain might have a capacity for generating metarepresentations of nonconscious knowledge contents at a given moment in time through context sensitive adaptive learning, and is somewhat difficult to reconcile with experimental findings showing that initially subliminal targets can be made available to awareness, or break through to supraliminal levels of processing, when they are embedded in an appropriate perceptual object context (relevance condition). Specific properties of neural network architectures, inspired by the functional organization of the primate cortex, are able to explain how a human brain could generate this kind of perceptual learning. Signals or knowledge processed outside awareness could be made available to awareness through adaptive resonance of bottom-up and top-down signal exchanges in massively parallel neural network architectures; in other words, on the basis of statistically significant signal matches in the domain of time and in the domain of memory content. |
1305.5650 | Dmitri Volchenkov | Dimitri Volchenkov, Jonathan Helbach, Marko Tscherepanow, Sina
K\"uhnel | Exploration-exploitation trade-off features a saltatory search behaviour | J. R. Soc. Interface, 2013 | null | null | null | q-bio.OT physics.soc-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Searching experiments conducted in different virtual environments over a
gender balanced group of people revealed a gender irrelevant scale-free spread
of searching activity on large spatiotemporal scales. We have suggested and
solved analytically a simple statistical model of the coherent-noise type
describing the exploration-exploitation trade-off in humans ("should I stay or
should I go"). The model exhibits a variety of saltatory behaviours, ranging
from Levy flights occurring under uncertainty to Brownian walks performed by a
treasure hunter confident of the eventual success.
| [
{
"created": "Fri, 24 May 2013 08:38:11 GMT",
"version": "v1"
}
] | 2013-05-27 | [
[
"Volchenkov",
"Dimitri",
""
],
[
"Helbach",
"Jonathan",
""
],
[
"Tscherepanow",
"Marko",
""
],
[
"Kühnel",
"Sina",
""
]
] | Searching experiments conducted in different virtual environments over a gender balanced group of people revealed a gender irrelevant scale-free spread of searching activity on large spatiotemporal scales. We have suggested and solved analytically a simple statistical model of the coherent-noise type describing the exploration-exploitation trade-off in humans ("should I stay or should I go"). The model exhibits a variety of saltatory behaviours, ranging from Levy flights occurring under uncertainty to Brownian walks performed by a treasure hunter confident of the eventual success. |
2311.07624 | Wensi Hu | Wensi Hu, Quan-Xing Liu, Bo Wang, Nuo Xu, Lijuan Cui, Chi Xu | Disordered hyperuniformity signals functioning and resilience of
self-organized vegetation patterns | 34 pages, 6 figures; Supplementary Materials, 19 pages, 10 figures, 2
tables | null | null | null | q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In harsh environments, organisms may self-organize into spatially patterned
systems in various ways. So far, studies of ecosystem spatial self-organization
have primarily focused on apparent orders reflected by regular patterns.
However, self-organized ecosystems may also have cryptic orders that can be
unveiled only through certain quantitative analyses. Here we show that
disordered hyperuniformity as a striking class of hidden orders can exist in
spatially self-organized vegetation landscapes. By analyzing the
high-resolution remotely sensed images across the American drylands, we
demonstrate that it is not uncommon to find disordered hyperuniform vegetation
states characterized by suppressed density fluctuations at long range. Such
long-range hyperuniformity has been documented in a wide range of microscopic
systems. Our finding contributes to expanding this domain to accommodate
natural landscape ecological systems. We use theoretical modeling to propose
that disordered hyperuniform vegetation patterning can arise from three
generalized mechanisms prevalent in dryland ecosystems, including (1) critical
absorbing states driven by an ecological legacy effect, (2) scale-dependent
feedbacks driven by plant-plant facilitation and competition, and (3)
density-dependent aggregation driven by plant-sediment feedbacks. Our modeling
results also show that disordered hyperuniform patterns can help ecosystems
cope with arid conditions with enhanced functioning of soil moisture
acquisition. However, this advantage may come at the cost of slower recovery of
ecosystem structure upon perturbations. Our work highlights that disordered
hyperuniformity as a distinguishable but underexplored ecosystem
self-organization state merits systematic studies to better understand its
underlying mechanisms, functioning, and resilience.
| [
{
"created": "Mon, 13 Nov 2023 08:03:09 GMT",
"version": "v1"
}
] | 2023-11-15 | [
[
"Hu",
"Wensi",
""
],
[
"Liu",
"Quan-Xing",
""
],
[
"Wang",
"Bo",
""
],
[
"Xu",
"Nuo",
""
],
[
"Cui",
"Lijuan",
""
],
[
"Xu",
"Chi",
""
]
] | In harsh environments, organisms may self-organize into spatially patterned systems in various ways. So far, studies of ecosystem spatial self-organization have primarily focused on apparent orders reflected by regular patterns. However, self-organized ecosystems may also have cryptic orders that can be unveiled only through certain quantitative analyses. Here we show that disordered hyperuniformity as a striking class of hidden orders can exist in spatially self-organized vegetation landscapes. By analyzing the high-resolution remotely sensed images across the American drylands, we demonstrate that it is not uncommon to find disordered hyperuniform vegetation states characterized by suppressed density fluctuations at long range. Such long-range hyperuniformity has been documented in a wide range of microscopic systems. Our finding contributes to expanding this domain to accommodate natural landscape ecological systems. We use theoretical modeling to propose that disordered hyperuniform vegetation patterning can arise from three generalized mechanisms prevalent in dryland ecosystems, including (1) critical absorbing states driven by an ecological legacy effect, (2) scale-dependent feedbacks driven by plant-plant facilitation and competition, and (3) density-dependent aggregation driven by plant-sediment feedbacks. Our modeling results also show that disordered hyperuniform patterns can help ecosystems cope with arid conditions with enhanced functioning of soil moisture acquisition. However, this advantage may come at the cost of slower recovery of ecosystem structure upon perturbations. Our work highlights that disordered hyperuniformity as a distinguishable but underexplored ecosystem self-organization state merits systematic studies to better understand its underlying mechanisms, functioning, and resilience. |
2010.03642 | Arun Nethi | Adam J. Starr, Manjula Julka, Arun Nethi, John D. Watkins, Ryan W.
Fairchild, Michael W. Cripps, Dustin Rinehart, and Hayden N. Box | Parkland Trauma Index of Mortality (PTIM): Real-time Predictive Model
for PolyTrauma Patients | 8 pages, 5 figures | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vital signs and laboratory values are routinely used to guide clinical
decision-making for polytrauma patients, such as the decision to use damage
control techniques versus early definitive fracture fixation. Prior
multivariate models have tried to predict mortality risk, but due to several
limitations like one-time prediction at the time of admission, they have not
proven clinically useful. There is a need for a dynamic model that captures
evolving physiologic changes during the patient's hospital course in response
to trauma and resuscitation for mortality prediction. The Parkland Trauma Index of Mortality
(PTIM) is a machine learning algorithm that uses electronic medical record
(EMR) data to predict $48-$hour mortality during the first $72$ hours of
hospitalization. The model updates every hour, evolving with the patient's
physiologic response to trauma. Area under (AUC) the receiver-operator
characteristic curve (ROC), sensitivity, specificity, positive (PPV) and
negative predictive value (NPV), and positive and negative likelihood ratios
(LR) were used to evaluate model performance. By evolving with the patient's
physiologic response to trauma and relying only on EMR data, the PTIM overcomes
many of the limitations of prior mortality risk models. It may be a useful tool
to inform clinical decision-making for polytrauma patients early in their
hospitalization.
| [
{
"created": "Wed, 7 Oct 2020 20:34:03 GMT",
"version": "v1"
}
] | 2020-10-09 | [
[
"Starr",
"Adam J.",
""
],
[
"Julka",
"Manjula",
""
],
[
"Nethi",
"Arun",
""
],
[
"Watkins",
"John D.",
""
],
[
"Fairchild",
"Ryan W.",
""
],
[
"Cripps",
"Michael W.",
""
],
[
"Rinehart",
"Dustin",
""
],
[
"Box",
"Hayden N.",
""
]
] | Vital signs and laboratory values are routinely used to guide clinical decision-making for polytrauma patients, such as the decision to use damage control techniques versus early definitive fracture fixation. Prior multivariate models have tried to predict mortality risk, but due to several limitations like one-time prediction at the time of admission, they have not proven clinically useful. There is a need for a dynamic model that captures evolving physiologic changes during the patient's hospital course in response to trauma and resuscitation for mortality prediction. The Parkland Trauma Index of Mortality (PTIM) is a machine learning algorithm that uses electronic medical record (EMR) data to predict $48-$hour mortality during the first $72$ hours of hospitalization. The model updates every hour, evolving with the patient's physiologic response to trauma. Area under (AUC) the receiver-operator characteristic curve (ROC), sensitivity, specificity, positive (PPV) and negative predictive value (NPV), and positive and negative likelihood ratios (LR) were used to evaluate model performance. By evolving with the patient's physiologic response to trauma and relying only on EMR data, the PTIM overcomes many of the limitations of prior mortality risk models. It may be a useful tool to inform clinical decision-making for polytrauma patients early in their hospitalization. |
q-bio/0502017 | Marco Cosentino Lagomarsino | M. Cosentino Lagomarsino, P. Jona, B. Bassetti | The large-scale logico-chemical structure of a transcriptional
regulation network | null | null | null | null | q-bio.MN cond-mat.dis-nn | null | Identity, response to external stimuli, and spatial architecture of a living
system are central topics of molecular biology. Presently, they are largely
seen as a result of the interplay between a gene repertoire and the regulatory
machinery of the cell. At the transcriptional level, the cis-regulatory regions
establish sets of interdependencies between transcription factors and genes,
including other transcription factors. These ``transcription networks'' are too
large to be approached globally with a detailed dynamical model. In this paper,
we describe an approach to this problem that focuses solely on the
compatibility between gene expression patterns and signal integration
functions, discussing calculations carried out on the simplest, Boolean,
realization of the model, and a first application to experimental data sets.
| [
{
"created": "Tue, 15 Feb 2005 23:45:37 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Lagomarsino",
"M. Cosentino",
""
],
[
"Jona",
"P.",
""
],
[
"Bassetti",
"B.",
""
]
] | Identity, response to external stimuli, and spatial architecture of a living system are central topics of molecular biology. Presently, they are largely seen as a result of the interplay between a gene repertoire and the regulatory machinery of the cell. At the transcriptional level, the cis-regulatory regions establish sets of interdependencies between transcription factors and genes, including other transcription factors. These ``transcription networks'' are too large to be approached globally with a detailed dynamical model. In this paper, we describe an approach to this problem that focuses solely on the compatibility between gene expression patterns and signal integration functions, discussing calculations carried out on the simplest, Boolean, realization of the model, and a first application to experimental data sets. |
q-bio/0701045 | Lucilla de Arcangelis | G.L. Pellegrini, L. de Arcangelis, H.J. Herrmann, C. Perrone-Capano | Modelling the brain as an Apollonian network | 9 pages, 10 figures | null | null | null | q-bio.NC | null | Networks of living neurons exhibit an avalanche mode of activity,
experimentally found in organotypic cultures. Moreover, experimental studies of
morphology indicate that neurons develop a network of small-world-like
connections, with the possibility of very high connectivity degree. Here we
study a recent model based on self-organized criticality, which consists of an
electrical network with threshold firing and activity-dependent synapse
strengths. We study the model on a scale-free network, the Apollonian network,
which presents many features of neuronal systems. The system exhibits a power
law distributed avalanche activity. The analysis of the power spectra of the
electrical signal reproduces very robustly the power law behaviour with the
exponent 0.8, experimentally measured in electroencephalograms (EEG) spectra.
The exponents are found to be quite stable with respect to initial
configurations and strength of plastic remodelling, indicating that
universality holds for a wide class of brain models.
| [
{
"created": "Sat, 27 Jan 2007 00:17:13 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Pellegrini",
"G. L.",
""
],
[
"de Arcangelis",
"L.",
""
],
[
"Herrmann",
"H. J.",
""
],
[
"Perrone-Capano",
"C.",
""
]
] | Networks of living neurons exhibit an avalanche mode of activity, experimentally found in organotypic cultures. Moreover, experimental studies of morphology indicate that neurons develop a network of small-world-like connections, with the possibility of very high connectivity degree. Here we study a recent model based on self-organized criticality, which consists of an electrical network with threshold firing and activity-dependent synapse strengths. We study the model on a scale-free network, the Apollonian network, which presents many features of neuronal systems. The system exhibits a power law distributed avalanche activity. The analysis of the power spectra of the electrical signal reproduces very robustly the power law behaviour with the exponent 0.8, experimentally measured in electroencephalograms (EEG) spectra. The exponents are found to be quite stable with respect to initial configurations and strength of plastic remodelling, indicating that universality holds for a wide class of brain models. |
1802.09627 | Tito Arecchi | F.Tito Arecchi | Cognition and Reality | 18 pages; submitted to "Substantia" | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We discuss the two moments of human cognition, namely, apprehension (A),
whereby a coherent perception emerges from the recruitment of neuronal groups,
and judgment (B), that entails the comparison of two apprehensions acquired at
different times, coded in a suitable language and retrieved by memory. (B)
entails self-consciousness, in so far as the agent who expresses the judgment
must be aware that the two apprehensions are submitted to his/her own scrutiny
and that it is his/her task to extract a mutual relation. Since (B) lasts
around 3 seconds, the semantic value of the pieces under comparison must be
decided within that time. This implies a fast search of the memory contents.
In fact, exploring human subjects with sequences of simple words, we find
evidence of a limited time window, corresponding to the memory retrieval of a
linguistic item in order to match it with the next one in a text flow (be it
literary, musical, or figurative). While apprehension is globally explained
as a Bayes inference, judgment results from an inverse Bayes inference. As a
consequence, two hermeneutics emerge (called respectively circle and coil). The
first one acts in a pre-assigned space of features. The second one provides the
discovery of novel features, thus unveiling previously unknown aspects and
hence representing the road to reality.
| [
{
"created": "Wed, 27 Dec 2017 09:39:38 GMT",
"version": "v1"
}
] | 2018-02-28 | [
[
"Arecchi",
"F. Tito",
""
]
] | We discuss the two moments of human cognition, namely, apprehension (A), whereby a coherent perception emerges from the recruitment of neuronal groups, and judgment (B), that entails the comparison of two apprehensions acquired at different times, coded in a suitable language and retrieved by memory. (B) entails self-consciousness, in so far as the agent who expresses the judgment must be aware that the two apprehensions are submitted to his/her own scrutiny and that it is his/her task to extract a mutual relation. Since (B) lasts around 3 seconds, the semantic value of the pieces under comparison must be decided within that time. This implies a fast search of the memory contents. In fact, exploring human subjects with sequences of simple words, we find evidence of a limited time window, corresponding to the memory retrieval of a linguistic item in order to match it with the next one in a text flow (be it literary, musical, or figurative). While apprehension is globally explained as a Bayes inference, judgment results from an inverse Bayes inference. As a consequence, two hermeneutics emerge (called respectively circle and coil). The first one acts in a pre-assigned space of features. The second one provides the discovery of novel features, thus unveiling previously unknown aspects and hence representing the road to reality. |
1509.05802 | Daniel Barth | Krista M. Rodgers, F. Edward Dudek and Daniel S. Barth | Lack of appropriate controls leads to mistaking absence seizures for
post-traumatic epilepsy | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Here we provide a thorough discussion of a rebuttal by D'Ambrosio et al. to a
study conducted by Rodgers et al. (Rodgers KM, Dudek FE, Barth DS (2015)
Progressive, Seizure-Like, Spike-Wave Discharges Are Common in Both Injured
and Uninjured Sprague-Dawley Rats: Implications for the Fluid Percussion Injury
Model of Post-Traumatic Epilepsy. J Neurosci. 35(24):9194-204. doi:
10.1523/JNEUROSCI.0919-15.2015.) to investigate focal seizures and acquired
epileptogenesis induced by head injury in the rat. This manuscript serves as a
supplementary document for our letter to the Editor, to appear in the Journal of
Neuroscience. We find the rebuttal is flawed on all points, particularly
concerning use of proper controls, experimental methods, analytical methods,
and epilepsy diagnostic criteria, leading to mistaking absence seizures for
post-traumatic epilepsy.
| [
{
"created": "Fri, 18 Sep 2015 21:35:38 GMT",
"version": "v1"
}
] | 2015-09-22 | [
[
"Rodgers",
"Krista M.",
""
],
[
"Dudek",
"F. Edward",
""
],
[
"Barth",
"Daniel S.",
""
]
] | Here we provide a thorough discussion of a rebuttal by D'Ambrosio et al. to a study conducted by Rodgers et al. (Rodgers KM, Dudek FE, Barth DS (2015) Progressive, Seizure-Like, Spike-Wave Discharges Are Common in Both Injured and Uninjured Sprague-Dawley Rats: Implications for the Fluid Percussion Injury Model of Post-Traumatic Epilepsy. J Neurosci. 35(24):9194-204. doi: 10.1523/JNEUROSCI.0919-15.2015.) to investigate focal seizures and acquired epileptogenesis induced by head injury in the rat. This manuscript serves as a supplementary document for our letter to the Editor, to appear in the Journal of Neuroscience. We find the rebuttal is flawed on all points, particularly concerning use of proper controls, experimental methods, analytical methods, and epilepsy diagnostic criteria, leading to mistaking absence seizures for post-traumatic epilepsy. |
2002.12883 | Michael Guertin | Kizhakke Mattada Sathyan, Thomas G. Scott, and Michael J. Guertin | The ARF-AID system: Methods that preserve endogenous protein levels and
facilitate rapidly inducible protein degradation | 43 pages, 8 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | The ARF-AID (Auxin Response Factor-Auxin Inducible Degron) system is a
re-engineered auxin-inducible protein degradation system. Inducible degron
systems are widely used to specifically and rapidly deplete proteins of
interest in cell lines and organisms. An advantage of inducible degradation is
that the biological system under study remains intact and functional until
perturbation. This feature necessitates that the endogenous levels of the
protein are maintained. However, endogenous tagging of genes with AID can
result in chronic, auxin-independent proteasome-mediated degradation. The
additional expression of the ARF-PB1 domain in the re-engineered ARF-AID system
prevents chronic degradation of AID-tagged proteins while preserving rapid
degradation of tagged proteins. Here we describe the protocol for engineering
human cell lines to implement the ARF-AID system for specific and inducible
protein degradation. These methods are adaptable and can be extended from cell
lines to organisms.
| [
{
"created": "Fri, 28 Feb 2020 17:32:09 GMT",
"version": "v1"
}
] | 2020-03-02 | [
[
"Sathyan",
"Kizhakke Mattada",
""
],
[
"Scott",
"Thomas G.",
""
],
[
"Guertin",
"Michael J.",
""
]
] | The ARF-AID (Auxin Response Factor-Auxin Inducible Degron) system is a re-engineered auxin-inducible protein degradation system. Inducible degron systems are widely used to specifically and rapidly deplete proteins of interest in cell lines and organisms. An advantage of inducible degradation is that the biological system under study remains intact and functional until perturbation. This feature necessitates that the endogenous levels of the protein are maintained. However, endogenous tagging of genes with AID can result in chronic, auxin-independent proteasome-mediated degradation. The additional expression of the ARF-PB1 domain in the re-engineered ARF-AID system prevents chronic degradation of AID-tagged proteins while preserving rapid degradation of tagged proteins. Here we describe the protocol for engineering human cell lines to implement the ARF-AID system for specific and inducible protein degradation. These methods are adaptable and can be extended from cell lines to organisms. |
1903.10334 | Jon Borresen | Jon Borresen and Killian O'Brien | An Upper Bound on the Number of Discrete States Possible for the Human
Brain | 10 Pages, 5 Figures | null | null | null | q-bio.NC | http://creativecommons.org/publicdomain/zero/1.0/ | Human brains are arguably the most complex entities known. Composed of
billions of neurons connected via a highly detailed structure, the underlying
method by which functionality arises is still debated. Here we
consider one theory for neural coding, synchronization coding, which gives rise
to the highest possible number of discrete states that a brain could exist in.
A strict upper bound on the number of these states is determined. We conclude
that the theoretical upper limit on the capacity of one human brain is almost
inconceivably large and massively larger than the corresponding theoretical
limit that could be obtained using every transistor ever built.
| [
{
"created": "Mon, 18 Mar 2019 11:47:11 GMT",
"version": "v1"
}
] | 2019-03-26 | [
[
"Borresen",
"Jon",
""
],
[
"O'Brien",
"Killian",
""
]
] | Human brains are arguably the most complex entities known. Composed of billions of neurons connected via a highly detailed structure, the underlying method by which functionality arises is still debated. Here we consider one theory for neural coding, synchronization coding, which gives rise to the highest possible number of discrete states that a brain could exist in. A strict upper bound on the number of these states is determined. We conclude that the theoretical upper limit on the capacity of one human brain is almost inconceivably large and massively larger than the corresponding theoretical limit that could be obtained using every transistor ever built. |
2003.08447 | Rishikesh Magar | Rishikesh Magar, Prakarsh Yadav, Amir Barati Farimani | Potential Neutralizing Antibodies Discovered for Novel Corona Virus
Using Machine Learning | We have computationally predicted the neutralizing candidates against
COVID-19 using machine learning techniques. The main paper is 25 pages (4
figures) and the Supporting Information is 10 pages (5 figures) | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The fast and untraceable virus mutations take the lives of thousands of people
before the immune system can produce the inhibitory antibody. The recent
outbreak of the novel coronavirus infected and killed thousands of people
worldwide. Rapid methods for finding peptides or antibody sequences that can
inhibit the viral epitopes of COVID-19 will save the lives of thousands. In this paper, we
devised a machine learning (ML) model to predict the possible inhibitory
synthetic antibodies for Corona virus. We collected 1933 virus-antibody
sequences and their clinical patient neutralization response and trained an ML
model to predict the antibody response. Using graph featurization with a variety
of ML methods, we screened thousands of hypothetical antibody sequences and
found 8 stable antibodies that potentially inhibit COVID-19. We combined
bioinformatics, structural biology, and Molecular Dynamics (MD) simulations to
verify the stability of the candidate antibodies that can inhibit the Corona
virus.
| [
{
"created": "Wed, 18 Mar 2020 19:23:32 GMT",
"version": "v1"
}
] | 2020-03-20 | [
[
"Magar",
"Rishikesh",
""
],
[
"Yadav",
"Prakarsh",
""
],
[
"Farimani",
"Amir Barati",
""
]
] | The fast and untraceable virus mutations take the lives of thousands of people before the immune system can produce the inhibitory antibody. The recent outbreak of the novel coronavirus infected and killed thousands of people worldwide. Rapid methods for finding peptides or antibody sequences that can inhibit the viral epitopes of COVID-19 will save the lives of thousands. In this paper, we devised a machine learning (ML) model to predict the possible inhibitory synthetic antibodies for Corona virus. We collected 1933 virus-antibody sequences and their clinical patient neutralization response and trained an ML model to predict the antibody response. Using graph featurization with a variety of ML methods, we screened thousands of hypothetical antibody sequences and found 8 stable antibodies that potentially inhibit COVID-19. We combined bioinformatics, structural biology, and Molecular Dynamics (MD) simulations to verify the stability of the candidate antibodies that can inhibit the Corona virus. |
1309.4287 | Javier Orlandi G | Javier G. Orlandi, Olav Stetter, Jordi Soriano, Theo Geisel and Demian
Battaglia | Transfer Entropy reconstruction and labeling of neuronal connections
from simulated calcium imaging | 24 pages, 4 figures | null | 10.1371/journal.pone.0098842 | null | q-bio.NC cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuronal dynamics are fundamentally constrained by the underlying structural
network architecture, yet many of the details of this synaptic connectivity are
still unknown even in neuronal cultures in vitro. Here we extend a previous
approach based on information theory, the Generalized Transfer Entropy, to the
reconstruction of connectivity of simulated neuronal networks of both
excitatory and inhibitory neurons. We show that, due to the model-free nature
of the developed measure, both kinds of connections can be reliably inferred if
the average firing rate between synchronous burst events exceeds a small
minimum frequency. Furthermore, we suggest, based on systematic simulations,
that even lower spontaneous inter-burst rates could be raised to meet the
requirements of our reconstruction algorithm by applying a weak spatially
homogeneous stimulation to the entire network. By combining multiple recordings
of the same in silico network before and after pharmacologically blocking
inhibitory synaptic transmission, we show then how it becomes possible to infer
with high confidence the excitatory or inhibitory nature of each individual
neuron.
| [
{
"created": "Tue, 17 Sep 2013 12:49:08 GMT",
"version": "v1"
},
{
"created": "Tue, 6 May 2014 15:59:36 GMT",
"version": "v2"
}
] | 2017-02-08 | [
[
"Orlandi",
"Javier G.",
""
],
[
"Stetter",
"Olav",
""
],
[
"Soriano",
"Jordi",
""
],
[
"Geisel",
"Theo",
""
],
[
"Battaglia",
"Demian",
""
]
] | Neuronal dynamics are fundamentally constrained by the underlying structural network architecture, yet many of the details of this synaptic connectivity are still unknown even in neuronal cultures in vitro. Here we extend a previous approach based on information theory, the Generalized Transfer Entropy, to the reconstruction of connectivity of simulated neuronal networks of both excitatory and inhibitory neurons. We show that, due to the model-free nature of the developed measure, both kinds of connections can be reliably inferred if the average firing rate between synchronous burst events exceeds a small minimum frequency. Furthermore, we suggest, based on systematic simulations, that even lower spontaneous inter-burst rates could be raised to meet the requirements of our reconstruction algorithm by applying a weak spatially homogeneous stimulation to the entire network. By combining multiple recordings of the same in silico network before and after pharmacologically blocking inhibitory synaptic transmission, we show then how it becomes possible to infer with high confidence the excitatory or inhibitory nature of each individual neuron. |
2004.08288 | \'Angel Gustavo Cervantes P\'erez | Ugo Avila-Ponce de Le\'on, \'Angel G. C. P\'erez, Eric Avila-Vales | A data driven analysis and forecast of an SEIARD epidemic model for
COVID-19 in Mexico | 13 pages, 8 figures | Big Data and Information Analytics, Vol. 5, No. 1 (2020) 14-28 | 10.3934/bdia.2020002 | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose an SEIARD mathematical model to investigate the current outbreak
of coronavirus disease (COVID-19) in Mexico. We conduct a detailed analysis of
this model and demonstrate its application using publicly reported data. We
calculate the basic reproduction number ($R_0$) via the next-generation matrix
method, and we estimate the per-day infection, death, and recovery rates. We
calibrate the parameters of the SEIARD model to the reported data by minimizing
the sum of squared errors and attempt to forecast the evolution of the outbreak
until June 2020. Our results estimate that the peak of the epidemic in Mexico
will be around May 2, 2020. Our model highlights the importance of
considering the asymptomatic infected individuals, because they represent the
majority of the infected population (with or without symptoms) and could play
a huge role in spreading the virus without knowing it.
| [
{
"created": "Thu, 16 Apr 2020 09:26:53 GMT",
"version": "v1"
}
] | 2021-02-23 | [
[
"de León",
"Ugo Avila-Ponce",
""
],
[
"Pérez",
"Ángel G. C.",
""
],
[
"Avila-Vales",
"Eric",
""
]
] | We propose an SEIARD mathematical model to investigate the current outbreak of coronavirus disease (COVID-19) in Mexico. We conduct a detailed analysis of this model and demonstrate its application using publicly reported data. We calculate the basic reproduction number ($R_0$) via the next-generation matrix method, and we estimate the per-day infection, death, and recovery rates. We calibrate the parameters of the SEIARD model to the reported data by minimizing the sum of squared errors and attempt to forecast the evolution of the outbreak until June 2020. Our results estimate that the peak of the epidemic in Mexico will be around May 2, 2020. Our model highlights the importance of considering the asymptomatic infected individuals, because they represent the majority of the infected population (with or without symptoms) and could play a huge role in spreading the virus without knowing it. |
1203.4721 | Tommi Aho | Tommi Aho (1), Juha Kesseli (1), Olli Yli-Harja (1) and Stuart A.
Kauffman (1,2) ((1) Department of Signal Processing, Tampere University of
Technology, Finland, (2) Complex Systems Center, University of Vermont,
U.S.A) | Growth efficiency as a cellular objective in Escherichia coli | 18 pages, 5 figures | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The identification of cellular objectives is one of the central topics in the
research of microbial metabolic networks. In particular, the information about
a cellular objective is needed in flux balance analysis, which is a commonly
used constraint-based metabolic network analysis method for the prediction of
cellular phenotypes. The cellular objective may vary depending on the organism
and its growth conditions. It is probable that nutritionally scarce conditions
are very common in nature and, in order to survive in those conditions,
cells exhibit various highly efficient nutrient processing systems like
enzymes. In this study, we explore the efficiency of a metabolic network in
transformation of substrates to new biomass, and we introduce a new objective
function simulating growth efficiency.
We examined the properties of growth efficiency using a metabolic model for
Escherichia coli. We found that the maximal growth efficiency is obtained at a
finite nutrient uptake rate. The rate is substrate-dependent and it typically
does not exceed 20 mmol/h/gDW. We further examined whether the maximal growth
efficiency could serve as a cellular objective function in metabolic network
analysis, and found that cellular growth in batch cultivation can be predicted
reasonably well under this assumption. The fit to experimental data was found
slightly better than with the commonly used objective function of maximal
growth rate.
Based on our results, we suggest that the maximal growth efficiency can be
considered as a plausible optimization criterion in metabolic modeling for E.
coli. In the future, it would be interesting to study growth efficiency as a
cellular objective also in other cellular systems and under different
cultivation conditions.
| [
{
"created": "Wed, 21 Mar 2012 12:31:13 GMT",
"version": "v1"
}
] | 2012-03-22 | [
[
"Aho",
"Tommi",
""
],
[
"Kesseli",
"Juha",
""
],
[
"Yli-Harja",
"Olli",
""
],
[
"Kauffman",
"Stuart A.",
""
]
] | The identification of cellular objectives is one of the central topics in the research of microbial metabolic networks. In particular, the information about a cellular objective is needed in flux balance analysis, which is a commonly used constraint-based metabolic network analysis method for the prediction of cellular phenotypes. The cellular objective may vary depending on the organism and its growth conditions. It is probable that nutritionally scarce conditions are very common in nature and, in order to survive in those conditions, cells exhibit various highly efficient nutrient processing systems like enzymes. In this study, we explore the efficiency of a metabolic network in transformation of substrates to new biomass, and we introduce a new objective function simulating growth efficiency. We examined the properties of growth efficiency using a metabolic model for Escherichia coli. We found that the maximal growth efficiency is obtained at a finite nutrient uptake rate. The rate is substrate-dependent and it typically does not exceed 20 mmol/h/gDW. We further examined whether the maximal growth efficiency could serve as a cellular objective function in metabolic network analysis, and found that cellular growth in batch cultivation can be predicted reasonably well under this assumption. The fit to experimental data was found slightly better than with the commonly used objective function of maximal growth rate. Based on our results, we suggest that the maximal growth efficiency can be considered as a plausible optimization criterion in metabolic modeling for E. coli. In the future, it would be interesting to study growth efficiency as a cellular objective also in other cellular systems and under different cultivation conditions. |
2211.06426 | Zsolt Vizi PhD | Evans Kiptoo Korir and Zsolt Vizi | Clustering of countries based on the associated social contact patterns
in epidemiological modelling | null | null | null | null | q-bio.QM cs.LG physics.soc-ph q-bio.PE stat.AP stat.ME | http://creativecommons.org/licenses/by/4.0/ | Mathematical models have been used to understand the spread patterns of
infectious diseases such as Coronavirus Disease 2019 (COVID-19). The
transmission component of the models can be modelled in an age-dependent manner
via introducing a contact matrix for the population, which describes the contact
rates between the age groups. Since social contact patterns vary from country
to country, we can compare and group the countries using the corresponding
contact matrices. In this paper, we present a framework for clustering
countries based on their contact matrices with respect to an underlying
epidemic model. Since the pipeline is generic and modular, we demonstrate its
application in a COVID-19 model from R\"ost et al., which gives a hint about
which countries can be compared in a pandemic situation, when only
non-pharmaceutical interventions are available.
| [
{
"created": "Tue, 8 Nov 2022 10:59:14 GMT",
"version": "v1"
}
] | 2022-11-15 | [
[
"Korir",
"Evans Kiptoo",
""
],
[
"Vizi",
"Zsolt",
""
]
] | Mathematical models have been used to understand the spread patterns of infectious diseases such as Coronavirus Disease 2019 (COVID-19). The transmission component of the models can be modelled in an age-dependent manner via introducing a contact matrix for the population, which describes the contact rates between the age groups. Since social contact patterns vary from country to country, we can compare and group the countries using the corresponding contact matrices. In this paper, we present a framework for clustering countries based on their contact matrices with respect to an underlying epidemic model. Since the pipeline is generic and modular, we demonstrate its application in a COVID-19 model from R\"ost et al., which gives a hint about which countries can be compared in a pandemic situation, when only non-pharmaceutical interventions are available. |
1210.6444 | Marc Robinson-Rechavi | Barbara Piasecka, Pawel Lichocki, Sebastien Moretti, Sven Bergmann,
Marc Robinson-Rechavi | The hourglass and the early conservation models - co-existing
evolutionary patterns in vertebrate development | null | null | null | null | q-bio.PE q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Developmental constraints have been postulated to limit the space of feasible
phenotypes and thus shape animal evolution. These constraints have been
suggested to be the strongest during either early or mid-embryogenesis, which
corresponds to the early conservation model or the hourglass model,
respectively. Conflicting results have been reported, but in recent studies of
animal transcriptomes the hourglass model has been favored. Studies usually
report descriptive statistics calculated for all genes over all developmental
time points. This introduces dependencies between the sets of compared genes,
and may lead to biased results. Here we overcome this problem using an
alternative modular analysis. We used the Iterative Signature Algorithm to
identify distinct modules of genes co-expressed specifically in consecutive
stages of zebrafish development. We then performed a detailed comparison of
several gene properties between modules, allowing for a less biased and more
powerful analysis. Notably, our analysis corroborated the hourglass pattern
only at the regulatory level, with sequences of regulatory regions being most
conserved for genes expressed in mid-development, but not at the level of gene
sequence, age or expression, in contrast to some previous studies. The early
conservation model was supported with gene duplication and birth that were the
most rare for genes expressed in early development. Finally, for all gene
properties we observed the least conservation for genes expressed in late
development or adult, consistent with both models. Overall, with the modular
approach, we showed that different levels of molecular evolution follow
different patterns of developmental constraints. Thus both models are valid,
but with respect to different genomic features.
| [
{
"created": "Wed, 24 Oct 2012 07:22:42 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Mar 2013 08:28:58 GMT",
"version": "v2"
}
] | 2013-03-14 | [
[
"Piasecka",
"Barbara",
""
],
[
"Lichocki",
"Pawel",
""
],
[
"Moretti",
"Sebastien",
""
],
[
"Bergmann",
"Sven",
""
],
[
"Robinson-Rechavi",
"Marc",
""
]
] | Developmental constraints have been postulated to limit the space of feasible phenotypes and thus shape animal evolution. These constraints have been suggested to be the strongest during either early or mid-embryogenesis, which corresponds to the early conservation model or the hourglass model, respectively. Conflicting results have been reported, but in recent studies of animal transcriptomes the hourglass model has been favored. Studies usually report descriptive statistics calculated for all genes over all developmental time points. This introduces dependencies between the sets of compared genes, and may lead to biased results. Here we overcome this problem using an alternative modular analysis. We used the Iterative Signature Algorithm to identify distinct modules of genes co-expressed specifically in consecutive stages of zebrafish development. We then performed a detailed comparison of several gene properties between modules, allowing for a less biased and more powerful analysis. Notably, our analysis corroborated the hourglass pattern only at the regulatory level, with sequences of regulatory regions being most conserved for genes expressed in mid-development, but not at the level of gene sequence, age or expression, in contrast to some previous studies. The early conservation model was supported with gene duplication and birth that were the most rare for genes expressed in early development. Finally, for all gene properties we observed the least conservation for genes expressed in late development or adult, consistent with both models. Overall, with the modular approach, we showed that different levels of molecular evolution follow different patterns of developmental constraints. Thus both models are valid, but with respect to different genomic features. |
1902.10950 | Francesco Maria Sabatini Dr | F.M. Sabatini, R.B. de Andrade, Y. Paillet, P. Odor, C. Bouget, T.
Campagnaro, F. Gosselin, P. Janssen, W. Mattioli, J. Nascimbene, T. Sitzia,
T. Kuemmerle, S. Burrascano | Trade-offs between carbon stocks and biodiversity in European temperate
forests | Pre-review Version 2018, 07\23 + Supplementary information 43 Pages,
5 figures + 9 supplementary Figures | Global Change Biology 25(2):536-548 (2019) | 10.1111/gcb.14503 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Policies to mitigate climate change and biodiversity loss often assume that
protecting carbon-rich forests provides co-benefits in terms of biodiversity,
due to the spatial congruence of carbon stocks and biodiversity at
biogeographic scales. However, it remains unclear whether this holds at the
scales relevant for management, with particularly large knowledge gaps for
temperate forests and for taxa other than trees. We built a comprehensive
dataset of Central European temperate forest structure and multi-taxonomic
diversity (beetles, birds, bryophytes, fungi, lichens, and plants) across 352
plots. We used Boosted Regression Trees to assess the relationship between
above-ground live carbon stocks and (a) taxon-specific richness, (b) a unified
multidiversity index. We used Threshold Indicator Taxa ANalysis to explore
individual species' responses to changing above-ground carbon stocks and to
detect change-points in species composition along the carbon-stock gradient.
Our results reveal an overall weak and highly variable relationship between
richness and carbon stock at the stand scale, both for individual taxonomic
groups and for multidiversity. Similarly, the proportion of win-win and
trade-off species (i.e. species favored or disadvantaged by increasing carbon
stock, respectively) varied substantially across taxa. Win-win species
gradually replaced trade-off species with increasing carbon, without clear
thresholds along the above-ground carbon gradient, suggesting that
community-level surrogates (e.g. richness) might fail to detect critical
changes in biodiversity. Collectively, our analyses highlight that leveraging
co-benefits between carbon and biodiversity in temperate forest may require
stand-scale management that prioritizes either biodiversity or carbon, in order
to maximize co-benefits at broader scales. Importantly, this contrasts with
tropical forests, where climate [...]
| [
{
"created": "Thu, 28 Feb 2019 08:50:11 GMT",
"version": "v1"
}
] | 2019-03-01 | [
[
"Sabatini",
"F. M.",
""
],
[
"de Andrade",
"R. B.",
""
],
[
"Paillet",
"Y.",
""
],
[
"Odor",
"P.",
""
],
[
"Bouget",
"C.",
""
],
[
"Campagnaro",
"T.",
""
],
[
"Gosselin",
"F.",
""
],
[
"Janssen",
"P.",
""
],
[
"Mattioli",
"W.",
""
],
[
"Nascimbene",
"J.",
""
],
[
"Sitzia",
"T.",
""
],
[
"Kuemmerle",
"T.",
""
],
[
"Burrascano",
"S.",
""
]
] | Policies to mitigate climate change and biodiversity loss often assume that protecting carbon-rich forests provides co-benefits in terms of biodiversity, due to the spatial congruence of carbon stocks and biodiversity at biogeographic scales. However, it remains unclear whether this holds at the scales relevant for management, with particularly large knowledge gaps for temperate forests and for taxa other than trees. We built a comprehensive dataset of Central European temperate forest structure and multi-taxonomic diversity (beetles, birds, bryophytes, fungi, lichens, and plants) across 352 plots. We used Boosted Regression Trees to assess the relationship between above-ground live carbon stocks and (a) taxon-specific richness, (b) a unified multidiversity index. We used Threshold Indicator Taxa ANalysis to explore individual species' responses to changing above-ground carbon stocks and to detect change-points in species composition along the carbon-stock gradient. Our results reveal an overall weak and highly variable relationship between richness and carbon stock at the stand scale, both for individual taxonomic groups and for multidiversity. Similarly, the proportion of win-win and trade-off species (i.e. species favored or disadvantaged by increasing carbon stock, respectively) varied substantially across taxa. Win-win species gradually replaced trade-off species with increasing carbon, without clear thresholds along the above-ground carbon gradient, suggesting that community-level surrogates (e.g. richness) might fail to detect critical changes in biodiversity. Collectively, our analyses highlight that leveraging co-benefits between carbon and biodiversity in temperate forest may require stand-scale management that prioritizes either biodiversity or carbon, in order to maximize co-benefits at broader scales. Importantly, this contrasts with tropical forests, where climate [...] |
2211.07360 | Ajitesh Srivastava | James Orme-Rogers and Ajitesh Srivastava | Spatio-Temporal Attention in Multi-Granular Brain Chronnectomes for
Detection of Autism Spectrum Disorder | 6 pages, 2 figures | null | null | null | q-bio.NC cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | The traditional methods for detecting autism spectrum disorder (ASD) are
expensive, subjective, and time-consuming, often taking years for a diagnosis,
with many children growing well into adolescence and even adulthood before
finally confirming the disorder. Recently, graph-based learning techniques have
demonstrated impressive results on resting-state functional magnetic resonance
imaging (rs-fMRI) data from the Autism Brain Imaging Data Exchange (ABIDE). We
introduce IMAGIN, a multI-granular, Multi-Atlas spatio-temporal attention Graph
Isomorphism Network, which we use to learn graph representations of dynamic
functional brain connectivity (chronnectome), as opposed to static connectivity
(connectome). The experimental results demonstrate that IMAGIN achieves a
5-fold cross-validation accuracy of 79.25%, which surpasses the current
state-of-the-art by 1.5%. In addition, analysis of the spatial and temporal
attention scores provides further validation for the neural basis of autism.
| [
{
"created": "Sun, 30 Oct 2022 01:43:17 GMT",
"version": "v1"
}
] | 2022-11-15 | [
[
"Orme-Rogers",
"James",
""
],
[
"Srivastava",
"Ajitesh",
""
]
] | The traditional methods for detecting autism spectrum disorder (ASD) are expensive, subjective, and time-consuming, often taking years for a diagnosis, with many children growing well into adolescence and even adulthood before finally confirming the disorder. Recently, graph-based learning techniques have demonstrated impressive results on resting-state functional magnetic resonance imaging (rs-fMRI) data from the Autism Brain Imaging Data Exchange (ABIDE). We introduce IMAGIN, a multI-granular, Multi-Atlas spatio-temporal attention Graph Isomorphism Network, which we use to learn graph representations of dynamic functional brain connectivity (chronnectome), as opposed to static connectivity (connectome). The experimental results demonstrate that IMAGIN achieves a 5-fold cross-validation accuracy of 79.25%, which surpasses the current state-of-the-art by 1.5%. In addition, analysis of the spatial and temporal attention scores provides further validation for the neural basis of autism. |
1907.02116 | Paria Mehrani | Paria Mehrani, Andrei Mouraviev, and John K. Tsotsos | Multiplicative modulations in hue-selective cells enhance unique hue
representation | null | null | null | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by/4.0/ | There is still much to understand about the color processing mechanisms in
the brain and the transformation from cone-opponent representations to
perceptual hues. Moreover, it is unclear which area(s) in the brain represent
unique hues. We propose a hierarchical model inspired by the neuronal
mechanisms in the brain for local hue representation, which reveals the
contributions of each visual cortical area in hue representation. Local hue
encoding is achieved through incrementally increasing processing nonlinearities
beginning with cone input. Besides employing nonlinear rectifications, we
propose multiplicative modulations as a form of nonlinearity. Our simulation
results indicate that multiplicative modulations have significant contributions
in encoding of hues along intermediate directions in the MacLeod-Boynton
diagram and that model V4 neurons have the capacity to encode unique hues.
Additionally, responses of our model neurons resemble those of biological color
cells, suggesting that our model provides a novel formulation of the brain's
color processing pathway.
| [
{
"created": "Wed, 3 Jul 2019 19:57:33 GMT",
"version": "v1"
}
] | 2019-07-05 | [
[
"Mehrani",
"Paria",
""
],
[
"Mouraviev",
"Andrei",
""
],
[
"Tsotsos",
"John K.",
""
]
] | There is still much to understand about the color processing mechanisms in the brain and the transformation from cone-opponent representations to perceptual hues. Moreover, it is unclear which area(s) in the brain represent unique hues. We propose a hierarchical model inspired by the neuronal mechanisms in the brain for local hue representation, which reveals the contributions of each visual cortical area in hue representation. Local hue encoding is achieved through incrementally increasing processing nonlinearities beginning with cone input. Besides employing nonlinear rectifications, we propose multiplicative modulations as a form of nonlinearity. Our simulation results indicate that multiplicative modulations have significant contributions in encoding of hues along intermediate directions in the MacLeod-Boynton diagram and that model V4 neurons have the capacity to encode unique hues. Additionally, responses of our model neurons resemble those of biological color cells, suggesting that our model provides a novel formulation of the brain's color processing pathway. |
1211.0301 | Amelie Banc | M.-N. Labour (ICGICMMM, MMDN), Am\'elie Banc (L2C), Audrey Tourrette
(ICGICMMM), Fr\'ed\'erique Cunin (ICGICMMM), Jean-Michel Verdier (MMDN),
Jean-Marie Devoisselle (ICGICMMM), Anne Marcilhac (MMDN), Emmanuel Belamie
(ICGICMMM) | Thick collagen-based 3D matrices including growth factors to induce
neurite outgrowth | null | Acta Biomaterialia 8, 9 (2012) 3302-3312 | 10.1016/j.actbio.2012.05.015 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Designing synthetic microenvironments for cellular investigations is a very
active area of research at the crossroads of cell biology and materials
science. The present work describes the design and functionalization of a
three-dimensional (3D) culture support dedicated to the study of neurite
outgrowth from neural cells. It is based on a dense self-assembled collagen
matrix stabilized by 100-nm-wide interconnected native fibrils without chemical
crosslinking. The matrices were made suitable for cell manipulation and direct
observation in confocal microscopy by anchoring them to traditional glass
supports with a calibrated thickness of similar to 50 mu m. The matrix
composition can be readily adapted to specific neural cell types, notably by
incorporating appropriate neurotrophic growth factors. Both PC-12 and SH-SY5Y
lines respond to growth factors (nerve growth factor and brain-derived
neurotrophic factor, respectively) impregnated and slowly released from the
support. Significant neurite outgrowth is reported for a large proportion of
cells, up to 66% for PC12 and 49% for SH-SY5Y. It is also shown that both
growth factors can be chemically conjugated (EDC/NHS) throughout the matrix and
yield similar proportions of cells with longer neurites (61% and 52%,
respectively). Finally, neurite outgrowth was observed over several tens of
microns within the 3D matrix, with both diffusing and immobilized growth
factors. (C) 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights
reserved.
| [
{
"created": "Thu, 1 Nov 2012 20:29:27 GMT",
"version": "v1"
}
] | 2012-11-05 | [
[
"Labour",
"M. -N.",
"",
"ICGICMMM, MMDN"
],
[
"Banc",
"Amélie",
"",
"L2C"
],
[
"Tourrette",
"Audrey",
"",
"ICGICMMM"
],
[
"Cunin",
"Frédérique",
"",
"ICGICMMM"
],
[
"Verdier",
"Jean-Michel",
"",
"MMDN"
],
[
"Devoisselle",
"Jean-Marie",
"",
"ICGICMMM"
],
[
"Marcilhac",
"Anne",
"",
"MMDN"
],
[
"Belamie",
"Emmanuel",
"",
"ICGICMMM"
]
] | Designing synthetic microenvironments for cellular investigations is a very active area of research at the crossroads of cell biology and materials science. The present work describes the design and functionalization of a three-dimensional (3D) culture support dedicated to the study of neurite outgrowth from neural cells. It is based on a dense self-assembled collagen matrix stabilized by 100-nm-wide interconnected native fibrils without chemical crosslinking. The matrices were made suitable for cell manipulation and direct observation in confocal microscopy by anchoring them to traditional glass supports with a calibrated thickness of similar to 50 mu m. The matrix composition can be readily adapted to specific neural cell types, notably by incorporating appropriate neurotrophic growth factors. Both PC-12 and SH-SY5Y lines respond to growth factors (nerve growth factor and brain-derived neurotrophic factor, respectively) impregnated and slowly released from the support. Significant neurite outgrowth is reported for a large proportion of cells, up to 66% for PC12 and 49% for SH-SY5Y. It is also shown that both growth factors can be chemically conjugated (EDC/NHS) throughout the matrix and yield similar proportions of cells with longer neurites (61% and 52%, respectively). Finally, neurite outgrowth was observed over several tens of microns within the 3D matrix, with both diffusing and immobilized growth factors. (C) 2012 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved. |
0807.1803 | Kunihiko Kaneko | Kunihiko Kaneko and Chikara Furusawa | Consistency Principle in Biological Dynamical Systems | As a proceeding paper for European Confernce on Complex Systems | Theory Biosci. (2008) 127; 195-204 | null | null | q-bio.CB cond-mat.stat-mech nlin.AO q-bio.PE q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a principle of consistency between different hierarchical levels
of biological systems. Given a consistency between molecule replication and
cell reproduction, universal statistical laws on cellular chemical abundances
are derived and confirmed experimentally. They include a power law distribution
of gene expressions, a lognormal distribution of cellular chemical abundances
over cells, and embedding of the power law into the network connectivity
distribution. Second, given a consistency between genotype and phenotype, a
general relationship between phenotype fluctuations by genetic variation and
isogenic phenotypic fluctuation by developmental noise is derived. Third, we
discuss the chaos mechanism for stem cell differentiation with autonomous
regulation, resulting from a consistency between cell reproduction and growth
of the cell ensemble.
| [
{
"created": "Fri, 11 Jul 2008 09:24:20 GMT",
"version": "v1"
}
] | 2008-07-21 | [
[
"Kaneko",
"Kunihiko",
""
],
[
"Furusawa",
"Chikara",
""
]
] | We propose a principle of consistency between different hierarchical levels of biological systems. Given a consistency between molecule replication and cell reproduction, universal statistical laws on cellular chemical abundances are derived and confirmed experimentally. They include a power law distribution of gene expressions, a lognormal distribution of cellular chemical abundances over cells, and embedding of the power law into the network connectivity distribution. Second, given a consistency between genotype and phenotype, a general relationship between phenotype fluctuations by genetic variation and isogenic phenotypic fluctuation by developmental noise is derived. Third, we discuss the chaos mechanism for stem cell differentiation with autonomous regulation, resulting from a consistency between cell reproduction and growth of the cell ensemble. |
1201.2033 | Michel Bellis | Michel Bellis | Mapping of Affymetrix probe sets to groups of transcripts using
transcriptional networks | 8 pages, 4 figures, 4tables. Formated version. Submitted to
Bioinformatics | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Usefulness of analysis derived from Affymetrix microarrays
depends largely upon the reliability of files describing the correspondence
between probe sets, genes and transcripts. In particular, in case a gene is
targeted by two probe sets, one must be able to assess if the corresponding
signals measure a group of common transcripts or two groups of transcripts with
little or no overlap.
Results: Probe sets that effectively target the same group of transcripts
have specific properties in the transcriptional networks we constructed. We
found indeed that such probe sets had a very low negative correlation, a high
positive correlation and a similar neighbourhood. Taking advantage of these
properties, we devised a test allowing us to group probe sets which target the
same group of transcripts in a particular network. By considering several
networks, additional information concerning the frequency of these associations
was obtained.
Availability and Implementation: The programs developed in Python (PSAWNpy)
and in Matlab (PSAWNml) are freely available, and can be downloaded at
http://code.google.com/p/arraymatic/.
Tutorials and reference manuals are available at
http://bns.crbm.cnrs.fr/softwares.html.
Contact: mbellis@crbm.cnrs.fr.
Supplementary information: Supplementary data are available at
http://bns.crbm.cnrs.fr/download.html.
| [
{
"created": "Tue, 10 Jan 2012 11:57:24 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Jan 2012 15:40:56 GMT",
"version": "v2"
}
] | 2012-01-16 | [
[
"Bellis",
"Michel",
""
]
] | Motivation: Usefulness of analysis derived from Affymetrix microarrays depends largely upon the reliability of files describing the correspondence between probe sets, genes and transcripts. In particular, in case a gene is targeted by two probe sets, one must be able to assess if the corresponding signals measure a group of common transcripts or two groups of transcripts with little or no overlap. Results: Probe sets that effectively target the same group of transcripts have specific properties in the transcriptional networks we constructed. We found indeed that such probe sets had a very low negative correlation, a high positive correlation and a similar neighbourhood. Taking advantage of these properties, we devised a test allowing us to group probe sets which target the same group of transcripts in a particular network. By considering several networks, additional information concerning the frequency of these associations was obtained. Availability and Implementation: The programs developed in Python (PSAWNpy) and in Matlab (PSAWNml) are freely available, and can be downloaded at http://code.google.com/p/arraymatic/. Tutorials and reference manuals are available at http://bns.crbm.cnrs.fr/softwares.html. Contact: mbellis@crbm.cnrs.fr. Supplementary information: Supplementary data are available at http://bns.crbm.cnrs.fr/download.html. |
2012.09930 | Linli Shi | Linli Shi (1), Ying Jiang (2), Fernando R. Fernandez (2,5,6), Lu Lan
(3), Guo Chen (3), Heng-ye Man (4,5), John A. White (2,5,6), Ji-Xin Cheng
(2,3), Chen Yang (1, 3) ((1) Department of Chemistry, Boston University,
Boston, USA, (2) Department of Biomedical Engineering, Boston University,
Boston, USA, (3) Department of Electrical and Computer Engineering, Boston,
USA, (4) Department of Biology, Boston University, Boston, USA, (5) Center
for Systems Neuroscience, Boston University, Boston, USA, (6) Neurophotonics
Center, Photonics Center, Boston University, Boston, USA) | Non-genetic acoustic stimulation of single neurons by a tapered fiber
optoacoustic emitter | 25 pages, 5 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As an emerging technology, transcranial focused ultrasound has been
demonstrated to successfully evoke motor responses in mice, rabbits, and
sensory/motor responses in humans. Yet, the spatial resolution of ultrasound
does not allow for high-precision stimulation. Here, we developed a tapered
fiber optoacoustic emitter (TFOE) for optoacoustic stimulation of neurons with
an unprecedented spatial resolution of 20 microns, enabling selective
activation of single neurons or subcellular structures, such as axons and
dendrites. A single acoustic pulse of 1 microsecond converted by the TFOE from
a single laser pulse of 3 nanoseconds is shown as the shortest acoustic stimulus
so far for successful neuron activation. The highly localized ultrasound
generated by the TFOE made it possible to integrate the optoacoustic
stimulation and highly stable patch clamp recording on single neurons. Direct
measurements of electrical response of single neurons to acoustic stimulation,
which is difficult for conventional ultrasound stimulation, have been
demonstrated for the first time. By coupling TFOE with ex vivo brain slice
electrophysiology, we unveil cell-type-specific response of excitatory and
inhibitory neurons to acoustic stimulation. These results demonstrate that TFOE
is a non-genetic single-cell and sub-cellular modulation technology, which
could shed new insights into the mechanism of neurostimulation.
| [
{
"created": "Thu, 17 Dec 2020 20:50:19 GMT",
"version": "v1"
}
] | 2020-12-21 | [
[
"Shi",
"Linli",
""
],
[
"Jiang",
"Ying",
""
],
[
"Fernandez",
"Fernando R.",
""
],
[
"Lan",
"Lu",
""
],
[
"Chen",
"Guo",
""
],
[
"Man",
"Heng-ye",
""
],
[
"White",
"John A.",
""
],
[
"Cheng",
"Ji-Xin",
""
],
[
"Yang",
"Chen",
""
]
] | As an emerging technology, transcranial focused ultrasound has been demonstrated to successfully evoke motor responses in mice, rabbits, and sensory/motor responses in humans. Yet, the spatial resolution of ultrasound does not allow for high-precision stimulation. Here, we developed a tapered fiber optoacoustic emitter (TFOE) for optoacoustic stimulation of neurons with an unprecedented spatial resolution of 20 microns, enabling selective activation of single neurons or subcellular structures, such as axons and dendrites. A single acoustic pulse of 1 microsecond converted by the TFOE from a single laser pulse of 3 nanoseconds is shown as the shortest acoustic stimuli so far for successful neuron activation. The highly localized ultrasound generated by the TFOE made it possible to integrate the optoacoustic stimulation and highly stable patch clamp recording on single neurons. Direct measurements of electrical response of single neurons to acoustic stimulation, which is difficult for conventional ultrasound stimulation, have been demonstrated for the first time. By coupling TFOE with ex vivo brain slice electrophysiology, we unveil cell-type-specific response of excitatory and inhibitory neurons to acoustic stimulation. These results demonstrate that TFOE is a non-genetic single-cell and sub-cellular modulation technology, which could shed new insights into the mechanism of neurostimulation. |
2308.01918 | Raul Isea | Raul Isea | A General Approach to Modeling Covid-19 | 19 paqges, 3 figure | Journal Model Based Research (2023) Vol 2(2): 1 -19 | 10.14302/issn.2643-2811.jmbr-23-4556 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The present work shows that it is possible to analytically solve a general
model to explain the transmission dynamics of SARS-CoV-2. First, the
within-host model is described, and later a between-host model, where the
coupling between them is the viral load of SARS-CoV-2. The within-host model
describes the equations involved in the life cycle of SARS-CoV-2, and also the
immune response; while the between-host model analyzes the dynamics of
virus spread from the original source of contagion associated with bats,
subsequently transmitted to a host, and then reaching the reservoir (Huanan
Seafood Wholesale Market in Wuhan), until finally infecting the human
population.
| [
{
"created": "Tue, 11 Jul 2023 22:23:21 GMT",
"version": "v1"
}
] | 2023-08-07 | [
[
"Isea",
"Raul",
""
]
] | The present work shows that it is possible to analytically solve a general model to explain the transmission dynamics of SARS-CoV-2. First, the within-host model is described, and later a between-host model, where the coupling between them is the viral load of SARS-CoV-2. The within-host model describes the equations involved in the life cycle of SARS-CoV-2, and also the immune response; while the between-host model analyzes the dynamics of virus spread from the original source of contagion associated with bats, subsequently transmitted to a host, and then reaching the reservoir (Huanan Seafood Wholesale Market in Wuhan), until finally infecting the human population. |
1207.2018 | Alexey Mazur K | Alexey K. Mazur | The torque transfer coefficient in DNA under torsional stress | 5 pages, 4 figures. To appear in Phys. Rev. E | null | 10.1103/PhysRevE.86.011914 | null | q-bio.BM cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, significant progress in understanding the properties of
supercoiled DNA has been obtained due to nanotechniques that made stretching
and twisting of single molecules possible. Quantitative interpretation of such
experiments requires accurate knowledge of torques inside manipulated DNA. This
paper argues that it is not possible to transfer the entire magnitudes of
external torques to the twisting stress of the double helix, and that a
reducing torque transfer coefficient (TTC<1) should always be assumed. This
assertion agrees with simple physical intuition and is supported by the results
of all-atom molecular dynamics (MD) simulations. According to MD, the TTCs
around 0.8 are observed in nearly optimal conditions. Reaching higher values
requires special efforts and it should be difficult in practice. The TTC can be
partially responsible for the persistent discrepancies between the twisting
rigidity of DNA measured by different methods.
| [
{
"created": "Mon, 9 Jul 2012 12:06:28 GMT",
"version": "v1"
}
] | 2015-06-05 | [
[
"Mazur",
"Alexey K.",
""
]
] | In recent years, significant progress in understanding the properties of supercoiled DNA has been obtained due to nanotechniques that made stretching and twisting of single molecules possible. Quantitative interpretation of such experiments requires accurate knowledge of torques inside manipulated DNA. This paper argues that it is not possible to transfer the entire magnitudes of external torques to the twisting stress of the double helix, and that a reducing torque transfer coefficient (TTC<1) should always be assumed. This assertion agrees with simple physical intuition and is supported by the results of all-atom molecular dynamics (MD) simulations. According to MD, the TTCs around 0.8 are observed in nearly optimal conditions. Reaching higher values requires special efforts and it should be difficult in practice. The TTC can be partially responsible for the persistent discrepancies between the twisting rigidity of DNA measured by different methods. |
2309.02665 | Xiaohuan Xia | Xiaohuan Xia (1), Andrei A. Klishin (1), Jennifer Stiso (1),
Christopher W. Lynn (2, 3), Ari E. Kahn (4), Lorenzo Caciagli (1), and Dani
S. Bassett (1 and 5) ((1) Department of Bioengineering, University of
Pennsylvania, (2) Joseph Henry Laboratories of Physics, Princeton University,
(3) Initiative for the Theoretical Sciences, Graduate Center, City University
of New York, (4) Princeton Neuroscience Institute, Princeton University, (5)
Santa Fe Institute) | Human Learning of Hierarchical Graphs | 22 pages, 10 figures, 1 table | null | null | null | q-bio.NC cond-mat.stat-mech physics.bio-ph physics.soc-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Humans are constantly exposed to sequences of events in the environment.
Those sequences frequently evince statistical regularities, such as the
probabilities with which one event transitions to another. Collectively,
inter-event transition probabilities can be modeled as a graph or network. Many
real-world networks are organized hierarchically and understanding how humans
learn these networks is an ongoing aim of current investigations. While much is
known about how humans learn basic transition graph topology, whether and to
what degree humans can learn hierarchical structures in such graphs remains
unknown. We investigate how humans learn hierarchical graphs of the
Sierpi\'nski family using computer simulations and behavioral laboratory
experiments. We probe the mental estimates of transition probabilities via the
surprisal effect: a phenomenon in which humans react more slowly to less
expected transitions, such as those between communities or modules in the
network. Using mean-field predictions and numerical simulations, we show that
surprisal effects are stronger for finer-level than coarser-level hierarchical
transitions. Surprisal effects at coarser levels of the hierarchy are difficult
to detect for limited learning times or in small samples. Using a serial
response experiment with human participants (n=$100$), we replicate our
predictions by detecting a surprisal effect at the finer-level of the hierarchy
but not at the coarser-level of the hierarchy. To further explain our findings,
we evaluate the presence of a trade-off in learning, whereby humans who learned
the finer-level of the hierarchy better tended to learn the coarser-level
worse, and vice versa. Our study elucidates the processes by which humans learn
hierarchical sequential events. Our work charts a road map for future
investigation of the neural underpinnings and behavioral manifestations of
graph learning.
| [
{
"created": "Wed, 6 Sep 2023 02:22:17 GMT",
"version": "v1"
}
] | 2023-09-07 | [
[
"Xia",
"Xiaohuan",
"",
"1 and 5"
],
[
"Klishin",
"Andrei A.",
"",
"1 and 5"
],
[
"Stiso",
"Jennifer",
"",
"1 and 5"
],
[
"Lynn",
"Christopher W.",
"",
"1 and 5"
],
[
"Kahn",
"Ari E.",
"",
"1 and 5"
],
[
"Caciagli",
"Lorenzo",
"",
"1 and 5"
],
[
"Bassett",
"Dani S.",
"",
"1 and 5"
]
] | Humans are constantly exposed to sequences of events in the environment. Those sequences frequently evince statistical regularities, such as the probabilities with which one event transitions to another. Collectively, inter-event transition probabilities can be modeled as a graph or network. Many real-world networks are organized hierarchically and understanding how humans learn these networks is an ongoing aim of current investigations. While much is known about how humans learn basic transition graph topology, whether and to what degree humans can learn hierarchical structures in such graphs remains unknown. We investigate how humans learn hierarchical graphs of the Sierpi\'nski family using computer simulations and behavioral laboratory experiments. We probe the mental estimates of transition probabilities via the surprisal effect: a phenomenon in which humans react more slowly to less expected transitions, such as those between communities or modules in the network. Using mean-field predictions and numerical simulations, we show that surprisal effects are stronger for finer-level than coarser-level hierarchical transitions. Surprisal effects at coarser levels of the hierarchy are difficult to detect for limited learning times or in small samples. Using a serial response experiment with human participants (n=$100$), we replicate our predictions by detecting a surprisal effect at the finer-level of the hierarchy but not at the coarser-level of the hierarchy. To further explain our findings, we evaluate the presence of a trade-off in learning, whereby humans who learned the finer-level of the hierarchy better tended to learn the coarser-level worse, and vice versa. Our study elucidates the processes by which humans learn hierarchical sequential events. Our work charts a road map for future investigation of the neural underpinnings and behavioral manifestations of graph learning. |
2011.08703 | Chrisyen Damanik | Chrisyen Damanik, Sumiati Sinaga, Kiki Hardiansyah | Effectiveness Of Sesame Oil For The Prevention Of Pressure Ulcer In
Patients With Bed Rest Undergoing Hospitalization | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Pressure Ulcer is one of the most problems in patients with bed rest.
Reposition and skin care are deterrent against the incidence of pressure ulcer.
Objective: This study aimed to analyze the effectiveness of sesame oil for the
prevention of pressure ulcer in patients with bed rest undergoing
hospitalization. Method: This study used a randomized controlled trial design.
Forty samples were divided groups: control and intervention groups. This study
was analysed using Chi Square. Results: The results showed that there was a
significant difference between two group (p=0,04). Conclusions: Skin care with
sesame oil can prevention of pressure ulcers. These results recommended that
sesame oil can be used for nursing intervention for the prevention of pressure
ulcers.
| [
{
"created": "Mon, 16 Nov 2020 14:45:14 GMT",
"version": "v1"
}
] | 2020-11-18 | [
[
"Damanik",
"Chrisyen",
""
],
[
"Sinaga",
"Sumiati",
""
],
[
"Hardiansyah",
"Kiki",
""
]
] | Pressure Ulcer is one of the most problems in patients with bed rest. Reposition and skin care are deterrent against the incidence of pressure ulcer. Objective: This study aimed to analyze the effectiveness of sesame oil for the prevention of pressure ulcer in patients with bed rest undergoing hospitalization. Method: This study used a randomized controlled trial design. Forty samples were divided groups: control and intervention groups. This study was analysed using Chi Square. Results: The results showed that there was a significant difference between two group (p=0,04). Conclusions: Skin care with sesame oil can prevention of pressure ulcers. These results recommended that sesame oil can be used for nursing intervention for the prevention of pressure ulcers. |
1308.3172 | Botond Sipos | Botond Sipos, Greg Slodkowicz, Tim Massingham and Nick Goldman | Realistic simulations reveal extensive sample-specificity of RNA-seq
biases | Analysis pipeline and results at http://bit.ly/rlsim-pl | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/3.0/ | In line with the importance of RNA-seq, the bioinformatics community has
produced numerous data analysis tools incorporating methods to correct
sample-specific biases. However, few advanced simulation tools exist to enable
benchmarking of competing correction methods. We introduce the first framework
to reproduce the properties of individual RNA-seq runs and, by applying it on
several datasets, we demonstrate the importance of accounting for
sample-specificity in realistic simulations.
| [
{
"created": "Wed, 14 Aug 2013 16:26:29 GMT",
"version": "v1"
}
] | 2013-08-15 | [
[
"Sipos",
"Botond",
""
],
[
"Slodkowicz",
"Greg",
""
],
[
"Massingham",
"Tim",
""
],
[
"Goldman",
"Nick",
""
]
] | In line with the importance of RNA-seq, the bioinformatics community has produced numerous data analysis tools incorporating methods to correct sample-specific biases. However, few advanced simulation tools exist to enable benchmarking of competing correction methods. We introduce the first framework to reproduce the properties of individual RNA-seq runs and, by applying it on several datasets, we demonstrate the importance of accounting for sample-specificity in realistic simulations. |
2007.07154 | Hagai B. Perets | Hagai B. Perets and Ruth Perets | A preceding low-virulence strain pandemic inducing immunity against
COVID-19 | Comments are welcome | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Countries highly exposed to incoming traffic from China were expected to be
at the highest risk of COVID-19 spread. However, COVID-19 case numbers
(infection levels) are negatively correlated with incoming traffic-level.
Moreover, infection levels are positively correlated with population-size,
while the latter should only affect infection-level once herd immunity is
reached. These could be explained if a low-virulence strain (LVS) began
spreading a few months earlier from China, providing immunity from the later
emerging known SARS-CoV-2 high-virulence strain (HVS). We find that the
dynamics of the COVID-19 pandemic depend on the LVS and HVS spread
doubling-times and the delay between their initial onsets. We find that LVS
doubling-time to be $T_L\sim1.59\pm0.17$ times slower than the HVS ($T_H$), but
its earlier onset allowed its global wide-spread to the levels required for
herd-immunity. In countries exposed earlier to the LVS and/or having smaller
population-size, the LVS achieved herd-immunity earlier, allowing less time for
the spread of the HVS, and giving rise to lower HVS-infection levels. Such
model accurately predicts a country's infection-level ({\rm R^{2}=0.74};
p-value of {\rm 5.2\times10^{-13}}), given only its population-size and
incoming-traffic from China. It explains the negative correlation with
incoming-traffic ($c_{exp}$), the positive correlation with the population size
(n_{pop}) and their specific relations (${\rm N}_{{\rm cases}}\propto
n_{pop}^{{\rm T_{L}/{\rm T_{H}}}}\times c_{exp}^{{\rm T_{L}/{\rm T_{H}-1}}}$).
We find that most countries should have already achieved herd-immunity. Further
COVID-19-spread in these countries is limited and is not expected to rise by
more than a factor of 2-3. We suggest tests/predictions to further verify the
model and biologically identify the LVS, and discuss the implications.
| [
{
"created": "Tue, 14 Jul 2020 16:13:43 GMT",
"version": "v1"
}
] | 2020-07-15 | [
[
"Perets",
"Hagai B.",
""
],
[
"Perets",
"Ruth",
""
]
] | Countries highly exposed to incoming traffic from China were expected to be at the highest risk of COVID-19 spread. However, COVID-19 case numbers (infection levels) are negatively correlated with incoming traffic-level. Moreover, infection levels are positively correlated with population-size, while the latter should only affect infection-level once herd immunity is reached. These could be explained if a low-virulence strain (LVS) began spreading a few months earlier from China, providing immunity from the later emerging known SARS-CoV-2 high-virulence strain (HVS). We find that the dynamics of the COVID-19 pandemic depend on the LVS and HVS spread doubling-times and the delay between their initial onsets. We find that LVS doubling-time to be $T_L\sim1.59\pm0.17$ times slower than the HVS ($T_H$), but its earlier onset allowed its global wide-spread to the levels required for herd-immunity. In countries exposed earlier to the LVS and/or having smaller population-size, the LVS achieved herd-immunity earlier, allowing less time for the spread of the HVS, and giving rise to lower HVS-infection levels. Such model accurately predicts a country's infection-level ({\rm R^{2}=0.74}; p-value of {\rm 5.2\times10^{-13}}), given only its population-size and incoming-traffic from China. It explains the negative correlation with incoming-traffic ($c_{exp}$), the positive correlation with the population size (n_{pop}) and their specific relations (${\rm N}_{{\rm cases}}\propto n_{pop}^{{\rm T_{L}/{\rm T_{H}}}}\times c_{exp}^{{\rm T_{L}/{\rm T_{H}-1}}}$). We find that most countries should have already achieved herd-immunity. Further COVID-19-spread in these countries is limited and is not expected to rise by more than a factor of 2-3. We suggest tests/predictions to further verify the model and biologically identify the LVS, and discuss the implications. |
1905.00574 | Mohammad Qasaimeh | Roaa Alnemari, Pavithra Sukumar, Muhammedin Deliorman, and Mohammad A.
Qasaimeh | Paper-based cell cryopreservation | null | null | null | null | q-bio.TO q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The continuous development of simple and practical cell cryopreservation
methods is of great importance to a variety of sectors, especially when
considering the efficient short- and long-term storage of cells and their
transportation. Although the overall success of such methods has been increased
in recent years, there is still need for a unified platform that is highly
suitable for efficient cryogenic storage of cells in addition to their
easy-to-manage retrieval. Here, we present a paper-based cell cryopreservation
method as an alternative to conventional cryopreservation methods. The method
is space saving, cost-effective, simple and easy to manage, and requires no
additional fine-tuning to conventional freezing and thawing procedures to yield
comparable recovery of viable cells. We show that treating papers with
fibronectin solution induces enhanced release of viable cells post thawing as
compared to untreated paper platforms. Additionally, upon release, the
remaining cells within paper lead to the formation and growth of spheroid-like
structures. Moreover, we demonstrate that the developed method works with
paper-based 3D cultures, where pre-formed 3D cultures can be efficiently
cryopreserved.
| [
{
"created": "Thu, 2 May 2019 05:09:55 GMT",
"version": "v1"
}
] | 2019-05-03 | [
[
"Alnemari",
"Roaa",
""
],
[
"Sukumar",
"Pavithra",
""
],
[
"Deliorman",
"Muhammedin",
""
],
[
"Qasaimeh",
"Mohammad A.",
""
]
] | The continuous development of simple and practical cell cryopreservation methods is of great importance to a variety of sectors, especially when considering the efficient short- and long-term storage of cells and their transportation. Although the overall success of such methods has been increased in recent years, there is still need for a unified platform that is highly suitable for efficient cryogenic storage of cells in addition to their easy-to-manage retrieval. Here, we present a paper-based cell cryopreservation method as an alternative to conventional cryopreservation methods. The method is space saving, cost-effective, simple and easy to manage, and requires no additional fine-tuning to conventional freezing and thawing procedures to yield comparable recovery of viable cells. We show that treating papers with fibronectin solution induces enhanced release of viable cells post thawing as compared to untreated paper platforms. Additionally, upon release, the remaining cells within paper lead to the formation and growth of spheroid-like structures. Moreover, we demonstrate that the developed method works with paper-based 3D cultures, where pre-formed 3D cultures can be efficiently cryopreserved. |
1005.2372 | Andrew Black | Andrew J Black and Alan J McKane | Stochastic amplification in an epidemic model with seasonal forcing | Updated with revisions and two new appendices | J. Theor. Biol., 267, 85-94, (2010) | 10.1016/j.jtbi.2010.08.014 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the stochastic susceptible-infected-recovered (SIR) model with
time-dependent forcing using analytic techniques which allow us to disentangle
the interaction of stochasticity and external forcing. The model is formulated
as a continuous time Markov process, which is decomposed into a deterministic
dynamics together with stochastic corrections, by using an expansion in inverse
system size. The forcing induces a limit cycle in the deterministic dynamics,
and a complete analysis of the fluctuations about this time-dependent solution
is given. This analysis is applied when the limit cycle is annual, and after a
period-doubling when it is biennial. The comprehensive nature of our approach
allows us to give a coherent picture of the dynamics which unifies past work,
but which also provides a systematic method for predicting the periods of
oscillations seen in whooping cough and measles epidemics.
| [
{
"created": "Thu, 13 May 2010 16:44:41 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Aug 2010 10:56:59 GMT",
"version": "v2"
}
] | 2010-11-23 | [
[
"Black",
"Andrew J",
""
],
[
"McKane",
"Alan J",
""
]
] | We study the stochastic susceptible-infected-recovered (SIR) model with time-dependent forcing using analytic techniques which allow us to disentangle the interaction of stochasticity and external forcing. The model is formulated as a continuous time Markov process, which is decomposed into a deterministic dynamics together with stochastic corrections, by using an expansion in inverse system size. The forcing induces a limit cycle in the deterministic dynamics, and a complete analysis of the fluctuations about this time-dependent solution is given. This analysis is applied when the limit cycle is annual, and after a period-doubling when it is biennial. The comprehensive nature of our approach allows us to give a coherent picture of the dynamics which unifies past work, but which also provides a systematic method for predicting the periods of oscillations seen in whooping cough and measles epidemics. |
2201.04778 | Jiayu Shang | Jiayu Shang and Xubo Tang and Ruocheng Guo and Yanni Sun | Accurate identification of bacteriophages from metagenomic data using
Transformer | 15 pages, 11 figures | Briefings in Bioinformatics, Volume 23, Issue 4, July 2022,
bbac258 | 10.1093/bib/bbac258 | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Motivation: Bacteriophages are viruses infecting bacteria. Being key players
in microbial communities, they can regulate the composition/function of
microbiome by infecting their bacterial hosts and mediating gene transfer.
Recently, metagenomic sequencing, which can sequence all genetic materials from
various microbiome, has become a popular means for new phage discovery.
However, accurate and comprehensive detection of phages from the metagenomic
data remains difficult. High diversity/abundance, and limited reference genomes
pose major challenges for recruiting phage fragments from metagenomic data.
Existing alignment-based or learning-based models have either low recall or
precision on metagenomic data. Results: In this work, we adopt the
state-of-the-art language model, Transformer, to conduct contextual embedding
for phage contigs. By constructing a protein-cluster vocabulary, we can feed
both the protein composition and the proteins' positions from each contig into
the Transformer. The Transformer can learn the protein organization and
associations using the self-attention mechanism and predicts the label for test
contigs. We rigorously tested our developed tool named PhaMer on multiple
datasets with increasing difficulty, including quality RefSeq genomes, short
contigs, simulated metagenomic data, mock metagenomic data, and the public
IMG/VR dataset. All the experimental results show that PhaMer outperforms the
state-of-the-art tools. In the real metagenomic data experiment, PhaMer
improves the F1-score of phage detection by 27\%.
| [
{
"created": "Thu, 13 Jan 2022 03:32:04 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Aug 2022 02:50:00 GMT",
"version": "v2"
}
] | 2022-08-15 | [
[
"Shang",
"Jiayu",
""
],
[
"Tang",
"Xubo",
""
],
[
"Guo",
"Ruocheng",
""
],
[
"Sun",
"Yanni",
""
]
] | Motivation: Bacteriophages are viruses infecting bacteria. Being key players in microbial communities, they can regulate the composition/function of microbiome by infecting their bacterial hosts and mediating gene transfer. Recently, metagenomic sequencing, which can sequence all genetic materials from various microbiome, has become a popular means for new phage discovery. However, accurate and comprehensive detection of phages from the metagenomic data remains difficult. High diversity/abundance, and limited reference genomes pose major challenges for recruiting phage fragments from metagenomic data. Existing alignment-based or learning-based models have either low recall or precision on metagenomic data. Results: In this work, we adopt the state-of-the-art language model, Transformer, to conduct contextual embedding for phage contigs. By constructing a protein-cluster vocabulary, we can feed both the protein composition and the proteins' positions from each contig into the Transformer. The Transformer can learn the protein organization and associations using the self-attention mechanism and predicts the label for test contigs. We rigorously tested our developed tool named PhaMer on multiple datasets with increasing difficulty, including quality RefSeq genomes, short contigs, simulated metagenomic data, mock metagenomic data, and the public IMG/VR dataset. All the experimental results show that PhaMer outperforms the state-of-the-art tools. In the real metagenomic data experiment, PhaMer improves the F1-score of phage detection by 27\%. |
2202.04324 | Florent Meyniel | Edgar Y Walker, Stephan Pohl, Rachel N Denison, David L Barack,
Jennifer Lee, Ned Block, Wei Ji Ma, Florent Meyniel | Studying the neural representations of uncertainty | 23 pages, 3 figures. Nature Neuroscience (2023) | null | 10.1038/s41593-023-01444-y | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | The study of the brain's representations of uncertainty is a central topic in
neuroscience. Unlike most quantities of which the neural representation is
studied, uncertainty is a property of an observer's beliefs about the world,
which poses specific methodological challenges. We analyze how the literature
on the neural representations of uncertainty addresses those challenges and
distinguish between "code-driven" and "correlational" approaches. Code-driven
approaches make assumptions about the neural code for representing world states
and the associated uncertainty. By contrast, correlational approaches search
for relationships between uncertainty and neural activity without constraints
on the neural representation of the world state that this uncertainty
accompanies. To compare these two approaches, we apply several criteria for
neural representations: sensitivity, specificity, invariance, functionality.
Our analysis reveals that the two approaches lead to different, but
complementary findings, shaping new research questions and guiding future
experiments.
| [
{
"created": "Wed, 9 Feb 2022 08:13:36 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Aug 2022 14:47:41 GMT",
"version": "v2"
},
{
"created": "Thu, 15 Dec 2022 14:05:10 GMT",
"version": "v3"
},
{
"created": "Wed, 11 Oct 2023 07:46:37 GMT",
"version": "v4"
}
] | 2023-10-12 | [
[
"Walker",
"Edgar Y",
""
],
[
"Pohl",
"Stephan",
""
],
[
"Denison",
"Rachel N",
""
],
[
"Barack",
"David L",
""
],
[
"Lee",
"Jennifer",
""
],
[
"Block",
"Ned",
""
],
[
"Ma",
"Wei Ji",
""
],
[
"Meyniel",
"Florent",
""
]
] | The study of the brain's representations of uncertainty is a central topic in neuroscience. Unlike most quantities of which the neural representation is studied, uncertainty is a property of an observer's beliefs about the world, which poses specific methodological challenges. We analyze how the literature on the neural representations of uncertainty addresses those challenges and distinguish between "code-driven" and "correlational" approaches. Code-driven approaches make assumptions about the neural code for representing world states and the associated uncertainty. By contrast, correlational approaches search for relationships between uncertainty and neural activity without constraints on the neural representation of the world state that this uncertainty accompanies. To compare these two approaches, we apply several criteria for neural representations: sensitivity, specificity, invariance, functionality. Our analysis reveals that the two approaches lead to different, but complementary findings, shaping new research questions and guiding future experiments. |
2108.04992 | Maxim Lavrentovich | Adam S. Bryant and Maxim O. Lavrentovich | Survival in Branching Cellular Populations | 23 pages, 10 figures, minor corrections and clarifications | Theor. Popul. Biol. 144 (2022) 13-23 | 10.1016/j.tpb.2022.01.005 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze evolutionary dynamics in a confluent, branching cellular
population, such as in a growing duct, vasculature, or in a branching microbial
colony. We focus on the coarse-grained features of the evolution and build a
statistical model that captures the essential features of the dynamics. Using
simulations and analytic approaches, we show that the survival probability of
strains within the growing population is sensitive to the branching geometry:
Branch bifurcations enhance survival probability due to an overall population
growth (i.e., "inflation"), while branch termination and the small effective
population size at the growing branch tips increase the probability of strain
extinction. We show that the evolutionary dynamics may be captured on a wide
range of branch geometries parameterized just by the branch diameter $N_0$ and
branching rate $b$. We find that the survival probability of neutral cell
strains is largest at an "optimal" branching rate, which balances the effects
of inflation and branch termination. We find that increasing the selective
advantage $s$ of the cell strain mitigates the inflationary effect by
decreasing the average time at which the mutant cell fate is determined. For
sufficiently large selective advantages, the survival probability of the
advantageous mutant decreases monotonically with the branching rate.
| [
{
"created": "Wed, 11 Aug 2021 02:32:00 GMT",
"version": "v1"
},
{
"created": "Sun, 13 Feb 2022 06:52:29 GMT",
"version": "v2"
}
] | 2022-02-15 | [
[
"Bryant",
"Adam S.",
""
],
[
"Lavrentovich",
"Maxim O.",
""
]
] | We analyze evolutionary dynamics in a confluent, branching cellular population, such as in a growing duct, vasculature, or in a branching microbial colony. We focus on the coarse-grained features of the evolution and build a statistical model that captures the essential features of the dynamics. Using simulations and analytic approaches, we show that the survival probability of strains within the growing population is sensitive to the branching geometry: Branch bifurcations enhance survival probability due to an overall population growth (i.e., "inflation"), while branch termination and the small effective population size at the growing branch tips increase the probability of strain extinction. We show that the evolutionary dynamics may be captured on a wide range of branch geometries parameterized just by the branch diameter $N_0$ and branching rate $b$. We find that the survival probability of neutral cell strains is largest at an "optimal" branching rate, which balances the effects of inflation and branch termination. We find that increasing the selective advantage $s$ of the cell strain mitigates the inflationary effect by decreasing the average time at which the mutant cell fate is determined. For sufficiently large selective advantages, the survival probability of the advantageous mutant decreases monotonically with the branching rate. |
2111.10656 | Aleksei Shpilman | Natalia Zenkova, Ekaterina Sedykh, Tatiana Shugaeva, Vladislav
Strashko, Timofei Ermak, Aleksei Shpilman | Simple End-to-end Deep Learning Model for CDR-H3 Loop Structure
Prediction | NeurIPS 2021 Machine Learning for Structural Biology Workshop | null | null | null | q-bio.BM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Predicting a structure of an antibody from its sequence is important since it
allows for a better design process of synthetic antibodies that play a vital
role in the health industry. Most of the structure of an antibody is
conservative. The most variable and hard-to-predict part is the third
complementarity-determining region of the antibody heavy chain (CDR H3).
Lately, deep learning has been employed to solve the task of CDR H3 prediction.
However, current state-of-the-art methods are not end-to-end, but rather they
output inter-residue distances and orientations to the RosettaAntibody package
that uses this additional information alongside statistical and physics-based
methods to predict the 3D structure. This does not allow a fast screening
process and, therefore, inhibits the development of targeted synthetic
antibodies. In this work, we present an end-to-end model to predict CDR H3 loop
structure, that performs on par with state-of-the-art methods in terms of
accuracy but an order of magnitude faster. We also raise an issue with a
commonly used RosettaAntibody benchmark that leads to data leaks, i.e., the
presence of identical sequences in the train and test datasets.
| [
{
"created": "Sat, 20 Nov 2021 18:55:09 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Dec 2021 14:40:58 GMT",
"version": "v2"
}
] | 2021-12-23 | [
[
"Zenkova",
"Natalia",
""
],
[
"Sedykh",
"Ekaterina",
""
],
[
"Shugaeva",
"Tatiana",
""
],
[
"Strashko",
"Vladislav",
""
],
[
"Ermak",
"Timofei",
""
],
[
"Shpilman",
"Aleksei",
""
]
] | Predicting a structure of an antibody from its sequence is important since it allows for a better design process of synthetic antibodies that play a vital role in the health industry. Most of the structure of an antibody is conservative. The most variable and hard-to-predict part is the third complementarity-determining region of the antibody heavy chain (CDR H3). Lately, deep learning has been employed to solve the task of CDR H3 prediction. However, current state-of-the-art methods are not end-to-end, but rather they output inter-residue distances and orientations to the RosettaAntibody package that uses this additional information alongside statistical and physics-based methods to predict the 3D structure. This does not allow a fast screening process and, therefore, inhibits the development of targeted synthetic antibodies. In this work, we present an end-to-end model to predict CDR H3 loop structure, that performs on par with state-of-the-art methods in terms of accuracy but an order of magnitude faster. We also raise an issue with a commonly used RosettaAntibody benchmark that leads to data leaks, i.e., the presence of identical sequences in the train and test datasets. |
2403.07664 | Lingyu Li | Lingyu Li, Chunbo Li | Enabling self-identification in intelligent agent: insights from
computational psychoanalysis | 18 pages, 3 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building upon prior framework of computational Lacanian psychoanalysis with
the theory of active inference, this paper aims to further explore the concept
of self-identification and its potential applications. Beginning with two
classic paradigms in psychology, mirror self-recognition and rubber hand
illusion, we suggest that imaginary identification is characterized by an
integrated body schema with minimal free energy. Next, we briefly survey three
dimensions of symbolic identification (sociological, psychoanalytic, and
linguistical) and corresponding active inference accounts. To provide
intuition, we respectively employ a convolutional neural network (CNN) and a
multi-layer perceptron (MLP) supervised by ChatGPT to showcase optimization of
free energy during motor skill and language mastery underlying identification
formation. We then introduce Lacan's Graph II of desire, unifying imaginary and
symbolic identification, and propose an illustrative model called FreeAgent. In
concluding remarks, we discuss some key issues in the potential of
computational Lacanian psychoanalysis to advance mental health and artificial
intelligence, including digital twin mind, large language models as avatars of
the Lacanian Other, and the feasibility of human-level artificial general
intelligence with self-awareness in the context of post-structuralism.
| [
{
"created": "Tue, 12 Mar 2024 13:54:25 GMT",
"version": "v1"
}
] | 2024-03-13 | [
[
"Li",
"Lingyu",
""
],
[
"Li",
"Chunbo",
""
]
] | Building upon prior framework of computational Lacanian psychoanalysis with the theory of active inference, this paper aims to further explore the concept of self-identification and its potential applications. Beginning with two classic paradigms in psychology, mirror self-recognition and rubber hand illusion, we suggest that imaginary identification is characterized by an integrated body schema with minimal free energy. Next, we briefly survey three dimensions of symbolic identification (sociological, psychoanalytic, and linguistical) and corresponding active inference accounts. To provide intuition, we respectively employ a convolutional neural network (CNN) and a multi-layer perceptron (MLP) supervised by ChatGPT to showcase optimization of free energy during motor skill and language mastery underlying identification formation. We then introduce Lacan's Graph II of desire, unifying imaginary and symbolic identification, and propose an illustrative model called FreeAgent. In concluding remarks, we discuss some key issues in the potential of computational Lacanian psychoanalysis to advance mental health and artificial intelligence, including digital twin mind, large language models as avatars of the Lacanian Other, and the feasibility of human-level artificial general intelligence with self-awareness in the context of post-structuralism. |
1809.10681 | Zexian Zeng | Zexian Zeng, Andy Vo, Chengsheng Mao, Susan E Clare, Seema A Khan,
Yuan Luo | Cancer classification and pathway discovery using non-negative matrix
factorization | 8 pages, 5 figures, conference | null | null | null | q-bio.GN cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extracting genetic information from a full range of sequencing data is
important for understanding diseases. We propose a novel method to effectively
explore the landscape of genetic mutations and aggregate them to predict cancer
type. We used multinomial logistic regression, nonsmooth non-negative matrix
factorization (nsNMF), and support vector machine (SVM) to utilize the full
range of sequencing data, aiming at better aggregating genetic mutations and
improving their power in predicting cancer types. Specifically, we introduced a
classifier to distinguish cancer types using somatic mutations obtained from
whole-exome sequencing data. Mutations were identified from multiple cancers
and scored using SIFT, PP2, and CADD, and grouped at the individual gene level.
The nsNMF was then applied to reduce dimensionality and to obtain coefficient
and basis matrices. A feature matrix was derived from the obtained matrices to
train a classifier for cancer type classification with the SVM model. We have
demonstrated that the classifier was able to distinguish the cancer types with
reasonable accuracy. In five-fold cross-validations using mutation counts as
features, the average prediction accuracy was 77.1% (SEM=0.1%), significantly
outperforming baselines and outperforming models using mutation scores as
features. Using the factor matrices derived from the nsNMF, we identified
multiple genes and pathways that are significantly associated with each cancer
type. This study presents a generic and complete pipeline to study the
associations between somatic mutations and cancers. The discovered genes and
pathways associated with each cancer type can lead to biological insights. The
proposed method can be adapted to other studies for disease classification and
pathway discovery.
| [
{
"created": "Thu, 27 Sep 2018 13:22:11 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Oct 2018 19:56:24 GMT",
"version": "v2"
}
] | 2018-10-10 | [
[
"Zeng",
"Zexian",
""
],
[
"Vo",
"Andy",
""
],
[
"Mao",
"Chengsheng",
""
],
[
"Clare",
"Susan E",
""
],
[
"Khan",
"Seema A",
""
],
[
"Luo",
"Yuan",
""
]
] | Extracting genetic information from a full range of sequencing data is important for understanding diseases. We propose a novel method to effectively explore the landscape of genetic mutations and aggregate them to predict cancer type. We used multinomial logistic regression, nonsmooth non-negative matrix factorization (nsNMF), and support vector machine (SVM) to utilize the full range of sequencing data, aiming at better aggregating genetic mutations and improving their power in predicting cancer types. Specifically, we introduced a classifier to distinguish cancer types using somatic mutations obtained from whole-exome sequencing data. Mutations were identified from multiple cancers and scored using SIFT, PP2, and CADD, and grouped at the individual gene level. The nsNMF was then applied to reduce dimensionality and to obtain coefficient and basis matrices. A feature matrix was derived from the obtained matrices to train a classifier for cancer type classification with the SVM model. We have demonstrated that the classifier was able to distinguish the cancer types with reasonable accuracy. In five-fold cross-validations using mutation counts as features, the average prediction accuracy was 77.1% (SEM=0.1%), significantly outperforming baselines and outperforming models using mutation scores as features. Using the factor matrices derived from the nsNMF, we identified multiple genes and pathways that are significantly associated with each cancer type. This study presents a generic and complete pipeline to study the associations between somatic mutations and cancers. The discovered genes and pathways associated with each cancer type can lead to biological insights. The proposed method can be adapted to other studies for disease classification and pathway discovery. |
q-bio/0406012 | Ted Theodosopoulos | Patricia Theodosopoulos and Ted Theodosopoulos | A computational study of the statistical mechanics of antibody-antigen
conformations | 12 pages, 5 figures | null | null | null | q-bio.QM q-bio.BM q-bio.PE | null | We describe the representation of the chemical affinity between the
antigen-combining site of the immunoglobulin molecule and the antigen molecule
as the probability of the two molecules existing in a bound state. Our model is
based on the identification of shape attractors in the configuration space for
the joint antibody / antigen combining site sequence. We parameterize
configuration space in terms of Ramachandran angles. The shape attractors allow
us to construct a Markov chain whose steady state distribution gives rise to
the desired attachment probability. As a result we are able to delineate the
enthalpic, entropic and kinetic components of affinity and study their
interactions.
| [
{
"created": "Fri, 4 Jun 2004 01:51:12 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Theodosopoulos",
"Patricia",
""
],
[
"Theodosopoulos",
"Ted",
""
]
] | We describe the representation of the chemical affinity between the antigen-combining site of the immunoglobulin molecule and the antigen molecule as the probability of the two molecules existing in a bound state. Our model is based on the identification of shape attractors in the configuration space for the joint antibody / antigen combining site sequence. We parameterize configuration space in terms of Ramachandran angles. The shape attractors allow us to construct a Markov chain whose steady state distribution gives rise to the desired attachment probability. As a result we are able to delineate the enthalpic, entropic and kinetic components of affinity and study their interactions. |
1112.3046 | Steven Frank | Steven A. Frank | Evolutionary foundations of cooperation and group cohesion | null | Games, Groups, and the Global Good (2009), Springer, 3-40 | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In biology, the evolution of increasingly cooperative groups has shaped the
history of life. Genes collaborate in the control of cells; cells efficiently
divide tasks to produce cohesive multicellular individuals; individual members
of insect colonies cooperate in integrated societies. Biological cooperation
provides a foundation on which to understand human behavior. Conceptually, the
economics of efficient allocation and the game-like processes of strategy are
well understood in biology; we find the same essential processes in many
successful theories of human sociality. Historically, the trace of biological
evolution informs in two ways. First, the evolutionary transformations in
biological cooperation provide insight into how economic and strategic
processes play out over time--a source of analogy that, when applied
thoughtfully, aids analysis of human sociality. Second, humans arose from
biological history--a factual account of the past that tells us much about the
material basis of human behavior.
| [
{
"created": "Tue, 13 Dec 2011 21:13:22 GMT",
"version": "v1"
}
] | 2011-12-15 | [
[
"Frank",
"Steven A.",
""
]
] | In biology, the evolution of increasingly cooperative groups has shaped the history of life. Genes collaborate in the control of cells; cells efficiently divide tasks to produce cohesive multicellular individuals; individual members of insect colonies cooperate in integrated societies. Biological cooperation provides a foundation on which to understand human behavior. Conceptually, the economics of efficient allocation and the game-like processes of strategy are well understood in biology; we find the same essential processes in many successful theories of human sociality. Historically, the trace of biological evolution informs in two ways. First, the evolutionary transformations in biological cooperation provide insight into how economic and strategic processes play out over time--a source of analogy that, when applied thoughtfully, aids analysis of human sociality. Second, humans arose from biological history--a factual account of the past that tells us much about the material basis of human behavior. |
1305.0306 | Frederick Matsen IV | Connor O. McCoy and Frederick A. Matsen IV | Abundance-weighted phylogenetic diversity measures distinguish microbial
community states and are robust to sampling depth | Submitted to PeerJ | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In microbial ecology studies, the most commonly used ways of investigating
alpha (within-sample) diversity are either to apply count-only measures such as
Simpson's index to Operational Taxonomic Unit (OTU) groupings, or to use
classical phylogenetic diversity (PD), which is not abundance-weighted.
Alpha diversity measures that use abundance information in a
phylogenetic framework do exist, but are not widely used within the microbial
ecology community. The performance of abundance-weighted phylogenetic diversity
measures compared to classical discrete measures has not been explored, and the
behavior of these measures under rarefaction (sub-sampling) is not yet clear.
In this paper we compare the ability of various alpha diversity measures to
distinguish between different community states in the human microbiome for
three different data sets. We also present and compare a novel one-parameter
family of alpha diversity measures, BWPD_\theta, that interpolates between
classical phylogenetic diversity (PD) and an abundance-weighted extension of
PD. Additionally, we examine the sensitivity of these phylogenetic diversity
measures to sampling, via computational experiments and by deriving a closed
form solution for the expectation of phylogenetic quadratic entropy under
re-sampling. In all three of the datasets considered, an abundance-weighted
measure is the best differentiator between community states. OTU-based
measures, on the other hand, are less effective in distinguishing community
types. In addition, abundance-weighted phylogenetic diversity measures are less
sensitive to differing sampling intensity than their unweighted counterparts.
Based on these results we encourage the use of abundance-weighted phylogenetic
diversity measures, especially for cases such as microbial ecology where
species delimitation is difficult.
| [
{
"created": "Wed, 1 May 2013 22:30:30 GMT",
"version": "v1"
}
] | 2013-05-03 | [
[
"McCoy",
"Connor O.",
""
],
[
"Matsen",
"Frederick A.",
"IV"
]
] | In microbial ecology studies, the most commonly used ways of investigating alpha (within-sample) diversity are either to apply count-only measures such as Simpson's index to Operational Taxonomic Unit (OTU) groupings, or to use classical phylogenetic diversity (PD), which is not abundance-weighted. Alpha diversity measures that use abundance information in a phylogenetic framework do exist, but are not widely used within the microbial ecology community. The performance of abundance-weighted phylogenetic diversity measures compared to classical discrete measures has not been explored, and the behavior of these measures under rarefaction (sub-sampling) is not yet clear. In this paper we compare the ability of various alpha diversity measures to distinguish between different community states in the human microbiome for three different data sets. We also present and compare a novel one-parameter family of alpha diversity measures, BWPD_\theta, that interpolates between classical phylogenetic diversity (PD) and an abundance-weighted extension of PD. Additionally, we examine the sensitivity of these phylogenetic diversity measures to sampling, via computational experiments and by deriving a closed form solution for the expectation of phylogenetic quadratic entropy under re-sampling. In all three of the datasets considered, an abundance-weighted measure is the best differentiator between community states. OTU-based measures, on the other hand, are less effective in distinguishing community types. In addition, abundance-weighted phylogenetic diversity measures are less sensitive to differing sampling intensity than their unweighted counterparts. Based on these results we encourage the use of abundance-weighted phylogenetic diversity measures, especially for cases such as microbial ecology where species delimitation is difficult.
2303.06183 | Sarah Kaakai | Daphn\'e Giorgi, Sarah Kaakai, Vincent Lemaire | Efficient simulation of individual-based population models: the R
Package IBMPopSim | null | null | null | null | q-bio.PE cs.MS math.PR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The R Package IBMPopSim aims to simulate the random evolution of
heterogeneous populations using stochastic Individual-Based Models (IBMs).
The package enables users to simulate population evolution, in which
individuals are characterized by their age and some characteristics, and the
population is modified by different types of events, including births/arrivals,
death/exit events, or changes of characteristics. The frequency at which an
event can occur to an individual can depend on their age and characteristics,
but also on the characteristics of other individuals (interactions). Such
models have a wide range of applications in fields including actuarial science,
biology, ecology or epidemiology.
IBMPopSim overcomes the limitations of time-consuming IBM simulations by
implementing new efficient algorithms based on thinning methods, which are
compiled using the Rcpp package while providing a user-friendly interface.
| [
{
"created": "Fri, 10 Mar 2023 19:31:50 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Feb 2024 14:56:53 GMT",
"version": "v2"
}
] | 2024-02-28 | [
[
"Giorgi",
"Daphné",
""
],
[
"Kaakai",
"Sarah",
""
],
[
"Lemaire",
"Vincent",
""
]
] | The R Package IBMPopSim aims to simulate the random evolution of heterogeneous populations using stochastic Individual-Based Models (IBMs). The package enables users to simulate population evolution, in which individuals are characterized by their age and some characteristics, and the population is modified by different types of events, including births/arrivals, death/exit events, or changes of characteristics. The frequency at which an event can occur to an individual can depend on their age and characteristics, but also on the characteristics of other individuals (interactions). Such models have a wide range of applications in fields including actuarial science, biology, ecology or epidemiology. IBMPopSim overcomes the limitations of time-consuming IBM simulations by implementing new efficient algorithms based on thinning methods, which are compiled using the Rcpp package while providing a user-friendly interface.
2209.06603 | James Blachly | Charles Thomas Gregory and James S. Blachly | Typesafe Coordinate Systems in High-Throughput Sequencing Applications | 14 pages, 3 figures. Code available at
https://github.com/blachlylab/typesafe-coordinates | null | null | null | q-bio.GN cs.LO cs.PL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | High-throughput sequencing file formats and tools encode coordinate intervals
with respect to a reference sequence in at least four distinct, incompatible
ways. Integrating data from and moving data between different formats has the
potential to introduce subtle off-by-one errors. Here, we introduce the notion
of typesafe coordinates: coordinate intervals are not only an integer pair, but
members of a type class comprising four types: the Cartesian product of a zero
or one basis, and an open or closed interval end. By leveraging the type system
of statically and strongly-typed, compiled languages we can provide static
guarantees that an entire class of error is eliminated. We provide a reference
implementation in D as part of a larger work (dhtslib), and proofs of concept
in Rust, OCaml, and Python. Exploratory implementations are available at
https://github.com/blachlylab/typesafe-coordinates.
| [
{
"created": "Wed, 14 Sep 2022 12:41:52 GMT",
"version": "v1"
}
] | 2022-09-15 | [
[
"Gregory",
"Charles Thomas",
""
],
[
"Blachly",
"James S.",
""
]
] | High-throughput sequencing file formats and tools encode coordinate intervals with respect to a reference sequence in at least four distinct, incompatible ways. Integrating data from and moving data between different formats has the potential to introduce subtle off-by-one errors. Here, we introduce the notion of typesafe coordinates: coordinate intervals are not only an integer pair, but members of a type class comprising four types: the Cartesian product of a zero or one basis, and an open or closed interval end. By leveraging the type system of statically and strongly-typed, compiled languages we can provide static guarantees that an entire class of error is eliminated. We provide a reference implementation in D as part of a larger work (dhtslib), and proofs of concept in Rust, OCaml, and Python. Exploratory implementations are available at https://github.com/blachlylab/typesafe-coordinates. |
1604.03071 | Anton Korobeynikov | Sergey Nurk, Dmitry Meleshko, Anton Korobeynikov and Pavel Pevzner | metaSPAdes: a new versatile de novo metagenomics assembler | null | null | null | null | q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While metagenomics has emerged as a technology of choice for analyzing
bacterial populations, assembly of metagenomic data remains difficult thus
stifling biological discoveries. metaSPAdes is a new assembler that addresses
the challenge of metagenome analysis and capitalizes on computational ideas
that proved to be useful in assemblies of single cells and highly polymorphic
diploid genomes. We benchmark metaSPAdes against other state-of-the-art
metagenome assemblers across diverse datasets and demonstrate that it results
in high-quality assemblies.
| [
{
"created": "Mon, 11 Apr 2016 19:09:22 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Aug 2016 15:32:16 GMT",
"version": "v2"
}
] | 2016-08-02 | [
[
"Nurk",
"Sergey",
""
],
[
"Meleshko",
"Dmitry",
""
],
[
"Korobeynikov",
"Anton",
""
],
[
"Pevzner",
"Pavel",
""
]
] | While metagenomics has emerged as a technology of choice for analyzing bacterial populations, assembly of metagenomic data remains difficult thus stifling biological discoveries. metaSPAdes is a new assembler that addresses the challenge of metagenome analysis and capitalizes on computational ideas that proved to be useful in assemblies of single cells and highly polymorphic diploid genomes. We benchmark metaSPAdes against other state-of-the-art metagenome assemblers across diverse datasets and demonstrate that it results in high-quality assemblies.
2311.04545 | Eiji Yamamoto | Eiji Yamamoto and Keehyoung Joo and Jooyoung Lee and Mark S. P. Sansom
and Masato Yasui | Molecular mechanism of anion permeation through aquaporin 6 | null | null | null | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aquaporins (AQPs) are recognized as transmembrane water channels that
facilitate selective water permeation through their monomeric pores. Among the
AQP family, AQP6 has a unique characteristic as an anion channel, which is
allosterically controlled by pH conditions and is eliminated by a single amino
acid mutation. However, the molecular mechanism of anion permeation through
AQP6 remains unclear. Using molecular dynamics simulations in the presence of a
transmembrane voltage utilizing an ion concentration gradient, we show that
chloride ions permeate through the pore corresponding to the central axis of
the AQP6 homotetramer. Under low pH conditions, a subtle opening of the
hydrophobic selective filter (SF), located near the extracellular part of the
central pore, becomes wetted and enables anion permeation. Our simulations also
indicate that a single mutation (N63G) in human AQP6, located at the central
pore, significantly reduces anion conduction, consistent with experimental
data. Moreover, we demonstrate the pH-sensing mechanism in which the
protonation of H184 and H189 under low pH conditions allosterically triggers
the gating of the SF region. These results suggest a unique pH-dependent
allosteric anion permeation mechanism in AQP6 and could clarify the role of the
central pore in some of the AQP tetramers.
| [
{
"created": "Wed, 8 Nov 2023 09:23:44 GMT",
"version": "v1"
}
] | 2023-11-09 | [
[
"Yamamoto",
"Eiji",
""
],
[
"Joo",
"Keehyoung",
""
],
[
"Lee",
"Jooyoung",
""
],
[
"Sansom",
"Mark S. P.",
""
],
[
"Yasui",
"Masato",
""
]
] | Aquaporins (AQPs) are recognized as transmembrane water channels that facilitate selective water permeation through their monomeric pores. Among the AQP family, AQP6 has a unique characteristic as an anion channel, which is allosterically controlled by pH conditions and is eliminated by a single amino acid mutation. However, the molecular mechanism of anion permeation through AQP6 remains unclear. Using molecular dynamics simulations in the presence of a transmembrane voltage utilizing an ion concentration gradient, we show that chloride ions permeate through the pore corresponding to the central axis of the AQP6 homotetramer. Under low pH conditions, a subtle opening of the hydrophobic selective filter (SF), located near the extracellular part of the central pore, becomes wetted and enables anion permeation. Our simulations also indicate that a single mutation (N63G) in human AQP6, located at the central pore, significantly reduces anion conduction, consistent with experimental data. Moreover, we demonstrate the pH-sensing mechanism in which the protonation of H184 and H189 under low pH conditions allosterically triggers the gating of the SF region. These results suggest a unique pH-dependent allosteric anion permeation mechanism in AQP6 and could clarify the role of the central pore in some of the AQP tetramers. |
2312.12916 | Matthias Borgstede | Matthias Borgstede | A generalized Price equation for fuzzy set-mappings | 9 pages | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The Price equation provides a formal account of selection building on a
right-total mapping between two classes of individuals, that is usually
interpreted as a parent-offspring relation. This paper presents a new
formulation of the Price equation in terms of fuzzy set-mappings to account for
structures where the targets of selection may vary in the degree to which they
belong to the classes of "parents" or "offspring," and in the degree to which
these two classes of individuals are related. The fuzzy set formulation widens
the scope of the Price equation such that it equally applies to natural
selection, cultural selection, operant selection, or any combination of
different types of selection.
| [
{
"created": "Wed, 20 Dec 2023 10:52:54 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jan 2024 14:24:26 GMT",
"version": "v2"
}
] | 2024-01-04 | [
[
"Borgstede",
"Matthias",
""
]
] | The Price equation provides a formal account of selection building on a right-total mapping between two classes of individuals, that is usually interpreted as a parent-offspring relation. This paper presents a new formulation of the Price equation in terms of fuzzy set-mappings to account for structures where the targets of selection may vary in the degree to which they belong to the classes of "parents" or "offspring," and in the degree to which these two classes of individuals are related. The fuzzy set formulation widens the scope of the Price equation such that it equally applies to natural selection, cultural selection, operant selection, or any combination of different types of selection. |
q-bio/0412028 | Osvaldo Zagordi | O. Zagordi, J. R. Lobry | Forcing reversibility in the no strand-bias substitution model allows
for the theoretical and practical identifiability of its 5 parameters from
pairwise DNA sequence comparisons | 12 pages, 4 figures, corrected typos | Gene, Volume 347 (2) 175-182 (2005) | 10.1016/j.gene.2004.12.019 | null | q-bio.PE | null | Because of the base pairing rules in DNA, some mutations experienced by a
portion of DNA during its evolution result in the same substitution, as we can
only observe differences in coupled nucleotides. Then, in the absence of a bias
between the two DNA strands, a model with at most 6 different parameters
instead of 12 is sufficient to study the evolutionary relationship between
homologous sequences derived from a common ancestor. On the other hand the same
symmetry reduces the number of independent observations which can be made. Such
a reduction can in some cases invalidate the calculation of the parameters. A
compromise between biologically acceptable hypotheses and tractability is
introduced and a five parameter reversible no-strand-bias condition (RNSB) is
presented. The identifiability of the parameters under this model is shown by
examples.
| [
{
"created": "Wed, 15 Dec 2004 18:15:33 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Dec 2004 11:00:31 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Zagordi",
"O.",
""
],
[
"Lobry",
"J. R.",
""
]
] | Because of the base pairing rules in DNA, some mutations experienced by a portion of DNA during its evolution result in the same substitution, as we can only observe differences in coupled nucleotides. Then, in the absence of a bias between the two DNA strands, a model with at most 6 different parameters instead of 12 is sufficient to study the evolutionary relationship between homologous sequences derived from a common ancestor. On the other hand the same symmetry reduces the number of independent observations which can be made. Such a reduction can in some cases invalidate the calculation of the parameters. A compromise between biologically acceptable hypotheses and tractability is introduced and a five parameter reversible no-strand-bias condition (RNSB) is presented. The identifiability of the parameters under this model is shown by examples. |
1806.09532 | Joos Behncke | Joos Behncke, Robin Tibor Schirrmeister, Martin V\"olker, Ji\v{r}\'i
Hammer, Petr Marusi\v{c}, Andreas Schulze-Bonhage, Wolfram Burgard, Tonio
Ball | Cross-paradigm pretraining of convolutional networks improves
intracranial EEG decoding | null | null | null | null | q-bio.NC cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When it comes to the classification of brain signals in real-life
applications, the training and the prediction data are often described by
different distributions. Furthermore, diverse data sets, e.g., recorded from
various subjects or tasks, can even exhibit distinct feature spaces. The fact
that data that have to be classified are often only available in small amounts
reinforces the need for techniques to generalize learned information, as
performances of brain-computer interfaces (BCIs) are enhanced by increasing
quantity of available data. In this paper, we apply transfer learning to a
framework based on deep convolutional neural networks (deep ConvNets) to prove
the transferability of learned patterns in error-related brain signals across
different tasks. The experiments described in this paper demonstrate the
usefulness of transfer learning, especially improving performances when only
little data can be used to distinguish between erroneous and correct
realization of a task. This effect could be delimited from a transfer of merely
general brain signal characteristics, underlining the transfer of
error-specific information. Furthermore, we could extract similar patterns in
time-frequency analyses in identical channels, leading to selective high signal
correlations between the two different paradigms. Classification on the
intracranial data yields median accuracies up to $(81.50 \pm 9.49)\,\%$.
Decoding on only $10\%$ of the data without pre-training reaches performances
of $(54.76 \pm 3.56)\,\%$, compared to $(64.95 \pm 0.79)\,\%$ with
pre-training.
| [
{
"created": "Wed, 20 Jun 2018 11:34:36 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Jul 2018 10:18:28 GMT",
"version": "v2"
}
] | 2018-07-23 | [
[
"Behncke",
"Joos",
""
],
[
"Schirrmeister",
"Robin Tibor",
""
],
[
"Völker",
"Martin",
""
],
[
"Hammer",
"Jiří",
""
],
[
"Marusič",
"Petr",
""
],
[
"Schulze-Bonhage",
"Andreas",
""
],
[
"Burgard",
"Wolfram",
""
],
[
"Ball",
"Tonio",
""
]
] | When it comes to the classification of brain signals in real-life applications, the training and the prediction data are often described by different distributions. Furthermore, diverse data sets, e.g., recorded from various subjects or tasks, can even exhibit distinct feature spaces. The fact that data that have to be classified are often only available in small amounts reinforces the need for techniques to generalize learned information, as performances of brain-computer interfaces (BCIs) are enhanced by increasing quantity of available data. In this paper, we apply transfer learning to a framework based on deep convolutional neural networks (deep ConvNets) to prove the transferability of learned patterns in error-related brain signals across different tasks. The experiments described in this paper demonstrate the usefulness of transfer learning, especially improving performances when only little data can be used to distinguish between erroneous and correct realization of a task. This effect could be delimited from a transfer of merely general brain signal characteristics, underlining the transfer of error-specific information. Furthermore, we could extract similar patterns in time-frequency analyses in identical channels, leading to selective high signal correlations between the two different paradigms. Classification on the intracranial data yields median accuracies up to $(81.50 \pm 9.49)\,\%$. Decoding on only $10\%$ of the data without pre-training reaches performances of $(54.76 \pm 3.56)\,\%$, compared to $(64.95 \pm 0.79)\,\%$ with pre-training.
1609.09202 | Sutapa Mukherji | Sutapa Mukherji | Transcriptional and translational regulation in Arc protein network of
Escherichia coli's stress response | 16 pages, 9 figures | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, there has been a lot of effort in understanding sRNA mediated
regulation of gene expression and how this mode of regulation differs from
transcriptional regulation. In E. coli, in the presence of oxidative stress, the
synthesis of sigma^s is regulated through an interesting mechanism involving
both transcriptional and sRNA-mediated translational regulation. The key
regulatory factors involved in transcriptional and translational regulation are
ArcA and ArcB proteins and ArcZ sRNA, respectively. Phosphorylated ArcA, in a
feedforward mechanism, represses the transcription of sigma^s and ArcZ sRNA, with
the latter being a post-transcriptional activator for sigma^s. Through a
feedback mechanism, ArcZ sRNA destabilises ArcB mRNA and regulates the level
of ArcB protein, a kinase that phosphorylates ArcA. The oxygen and energy
availability is expected to influence the ArcA phosphorylation rate and, as a
consequence, in equilibrium, the system is likely to be in either a high ArcB
(low ArcZ) or a low ArcB (high ArcZ) state. Kinetic modelling studies suggest
that the rate of destabilisation of ArcB mRNA by ArcZ sRNA must be
appropriately tuned for achieving the desired state. In particular, for a high
phosphorylation rate, the transition from a low to a high ArcZ synthesis regime
with the increase in sRNA-mRNA interaction is similar to the threshold-linear
response observed earlier. Further, we show that the mRNA destabilisation by
sRNA might be, in particular, beneficial in the low phosphorylation state for
having the right concentration levels of ArcZ and ArcB. Stochastic simulation
results suggest that as the ArcZ-ArcB binding affinity is increased, the
probability distribution for the number of ArcZ molecules becomes flatter
indicating frequently occurring transcriptional bursts of varying strengths.
| [
{
"created": "Thu, 29 Sep 2016 04:49:58 GMT",
"version": "v1"
}
] | 2016-09-30 | [
[
"Mukherji",
"Sutapa",
""
]
] | Recently, there has been a lot of effort in understanding sRNA mediated regulation of gene expression and how this mode of regulation differs from transcriptional regulation. In E. coli, in the presence of oxidative stress, the synthesis of sigma^s is regulated through an interesting mechanism involving both transcriptional and sRNA-mediated translational regulation. The key regulatory factors involved in transcriptional and translational regulation are ArcA and ArcB proteins and ArcZ sRNA, respectively. Phosphorylated ArcA, in a feedforward mechanism, represses the transcription of sigma^s and ArcZ sRNA, with the latter being a post-transcriptional activator for sigma^s. Through a feedback mechanism, ArcZ sRNA destabilises ArcB mRNA and regulates the level of ArcB protein, a kinase that phosphorylates ArcA. The oxygen and energy availability is expected to influence the ArcA phosphorylation rate and, as a consequence, in equilibrium, the system is likely to be in either a high ArcB (low ArcZ) or a low ArcB (high ArcZ) state. Kinetic modelling studies suggest that the rate of destabilisation of ArcB mRNA by ArcZ sRNA must be appropriately tuned for achieving the desired state. In particular, for a high phosphorylation rate, the transition from a low to a high ArcZ synthesis regime with the increase in sRNA-mRNA interaction is similar to the threshold-linear response observed earlier. Further, we show that the mRNA destabilisation by sRNA might be, in particular, beneficial in the low phosphorylation state for having the right concentration levels of ArcZ and ArcB. Stochastic simulation results suggest that as the ArcZ-ArcB binding affinity is increased, the probability distribution for the number of ArcZ molecules becomes flatter indicating frequently occurring transcriptional bursts of varying strengths.
1109.4498 | Ronan M.T. Fleming Dr | Ronan M.T. Fleming and Ines Thiele | Mass conserved elementary kinetics is sufficient for the existence of a
non-equilibrium steady state concentration | 11 pages, 2 figures (v2 is now placed in proper context of the
excellent 1962 paper by James Wei entitled "Axiomatic treatment of chemical
reaction systems". In addition, section 4, on "Utility of steady state
existence theorem" has been expanded.) | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Living systems are forced away from thermodynamic equilibrium by exchange of
mass and energy with their environment. In order to model a biochemical
reaction network in a non-equilibrium state one requires a mathematical
formulation to mimic this forcing. We provide a general formulation to force an
arbitrarily large kinetic model in a manner that is still consistent with the
existence of a non-equilibrium steady state. We can guarantee the existence of
a non-equilibrium steady state assuming only two conditions: that every
reaction is mass balanced and that continuous kinetic reaction rate laws never
lead to a negative molecule concentration. These conditions can be verified in
polynomial time and are flexible enough to permit one to force a system away
from equilibrium. In an expository biochemical example we show how a
reversible, mass balanced perpetual reaction, with thermodynamically infeasible
kinetic parameters, can be used to perpetually force a kinetic model of
anaerobic glycolysis in a manner consistent with the existence of a steady
state. Easily testable existence conditions are foundational for efforts to
reliably compute non-equilibrium steady states in genome-scale biochemical
kinetic models.
| [
{
"created": "Wed, 21 Sep 2011 10:31:29 GMT",
"version": "v1"
},
{
"created": "Sat, 18 Feb 2012 00:07:48 GMT",
"version": "v2"
}
] | 2012-02-21 | [
[
"Fleming",
"Ronan M. T.",
""
],
[
"Thiele",
"Ines",
""
]
] | Living systems are forced away from thermodynamic equilibrium by exchange of mass and energy with their environment. In order to model a biochemical reaction network in a non-equilibrium state one requires a mathematical formulation to mimic this forcing. We provide a general formulation to force an arbitrarily large kinetic model in a manner that is still consistent with the existence of a non-equilibrium steady state. We can guarantee the existence of a non-equilibrium steady state assuming only two conditions: that every reaction is mass balanced and that continuous kinetic reaction rate laws never lead to a negative molecule concentration. These conditions can be verified in polynomial time and are flexible enough to permit one to force a system away from equilibrium. In an expository biochemical example we show how a reversible, mass balanced perpetual reaction, with thermodynamically infeasible kinetic parameters, can be used to perpetually force a kinetic model of anaerobic glycolysis in a manner consistent with the existence of a steady state. Easily testable existence conditions are foundational for efforts to reliably compute non-equilibrium steady states in genome-scale biochemical kinetic models.
0807.4860 | Peter Csermely | Zoltan Spiro, Istvan A. Kovacs and Peter Csermely | Drug-therapy networks and the predictions of novel drug targets | This is an extended version of the Journal of Biology paper
containing 2 Figures, 1 Table and 44 references | Journal of Biology 2008, 7:20 | 10.1186/jbiol81 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, a number of drug-therapy, disease, drug, and drug-target networks
have been introduced. Here we suggest novel methods for network-based
prediction of novel drug targets and for improvement of drug efficiency by
analysing the effects of drugs on the robustness of cellular networks.
| [
{
"created": "Wed, 30 Jul 2008 13:28:27 GMT",
"version": "v1"
}
] | 2008-07-31 | [
[
"Spiro",
"Zoltan",
""
],
[
"Kovacs",
"Istvan A.",
""
],
[
"Csermely",
"Peter",
""
]
] | Recently, a number of drug-therapy, disease, drug, and drug-target networks have been introduced. Here we suggest novel methods for network-based prediction of novel drug targets and for improvement of drug efficiency by analysing the effects of drugs on the robustness of cellular networks. |
1511.04027 | Diego Chowell | Diego Chowell, Muntaser Safan, and Carlos Castillo-Chavez | Modeling the case of early detection of Ebola virus disease | null | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The most recent Ebola outbreak in West Africa highlighted critical weaknesses
in the medical infrastructure of the affected countries, including effective
diagnostic tools, sufficient isolation wards, and enough medical personnel.
Here, we develop and analyze a mathematical model to assess the impact of early
diagnosis of pre-symptomatic individuals on the transmission dynamics of Ebola
virus disease in West Africa in scenarios where Ebola may remain at low levels
in the population. Our findings highlight the importance of implementing
integrated control measures of early diagnosis and isolation. The mathematical
analysis shows a threshold where early diagnosis of pre-symptomatic
individuals, combined with a sufficient level of effective isolation, can lead
to an epidemic control of Ebola virus disease. That is, the local eradication
of the disease or the effective management of the disease at low levels of
endemicity.
| [
{
"created": "Fri, 6 Nov 2015 23:18:32 GMT",
"version": "v1"
}
] | 2015-11-13 | [
[
"Chowell",
"Diego",
""
],
[
"Safan",
"Muntaser",
""
],
[
"Castillo-Chavez",
"Carlos",
""
]
] ] | The most recent Ebola outbreak in West Africa highlighted critical weaknesses in the medical infrastructure of the affected countries, including effective diagnostic tools, sufficient isolation wards, and enough medical personnel. Here, we develop and analyze a mathematical model to assess the impact of early diagnosis of pre-symptomatic individuals on the transmission dynamics of Ebola virus disease in West Africa in scenarios where Ebola may remain at low levels in the population. Our findings highlight the importance of implementing integrated control measures of early diagnosis and isolation. The mathematical analysis shows a threshold where early diagnosis of pre-symptomatic individuals, combined with a sufficient level of effective isolation, can lead to an epidemic control of Ebola virus disease. That is, the local eradication of the disease or the effective management of the disease at low levels of endemicity. |
q-bio/0502007 | Axel G. Rossberg | A. G. Rossberg, H. Matsuda, T. Amemiya, K. Itoh | Some Properties of the Speciation Model for Food-Web Structure -
Mechanisms for Degree Distributions and Intervality | 23 pages, 6 figures, minor rewrites | null | null | null | q-bio.PE q-bio.MN | null | We present a mathematical analysis of the speciation model for food-web
structure, which had in previous work been shown to yield a good description of
empirical data of food-web topology. The degree distributions of the network
are derived. Properties of the speciation model are compared to those of other
models that successfully describe empirical data. It is argued that the
speciation model unifies the underlying ideas of previous theories. In
particular, it offers a mechanistic explanation for the success of the niche
model of Williams and Martinez and the frequent observation of intervality in
empirical food webs.
| [
{
"created": "Tue, 8 Feb 2005 05:05:17 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Jun 2005 04:34:02 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Rossberg",
"A. G.",
""
],
[
"Matsuda",
"H.",
""
],
[
"Amemiya",
"T.",
""
],
[
"Itoh",
"K.",
""
]
] | We present a mathematical analysis of the speciation model for food-web structure, which had in previous work been shown to yield a good description of empirical data of food-web topology. The degree distributions of the network are derived. Properties of the speciation model are compared to those of other models that successfully describe empirical data. It is argued that the speciation model unifies the underlying ideas of previous theories. In particular, it offers a mechanistic explanation for the success of the niche model of Williams and Martinez and the frequent observation of intervality in empirical food webs. |
1303.5569 | Daniel Zerbino | Daniel R. Zerbino, Tracy Ballinger, Benedict Paten, Glenn Hickey and
David Haussler | Representing and decomposing genomic structural variants as balanced
integer flows on sequence graphs | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study of genomic variation has provided key insights into the functional
role of mutations. Predominantly, studies have focused on single nucleotide
variants (SNV), which are relatively easy to detect and can be described with
rich mathematical models. However, it has been observed that genomes are highly
plastic, and that whole regions can be moved, removed or duplicated in bulk.
These structural variants (SV) have been shown to have significant impact on
the phenotype, but their study has been held back by the combinatorial
complexity of the underlying models. We describe here a general model of
structural variation that encompasses both balanced rearrangements and
arbitrary copy-number variants (CNV). In this model, we show that the space of
possible evolutionary histories that explain the structural differences between
any two genomes can be sampled ergodically.
| [
{
"created": "Fri, 22 Mar 2013 10:10:41 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Sep 2015 09:56:40 GMT",
"version": "v2"
}
] | 2015-09-04 | [
[
"Zerbino",
"Daniel R.",
""
],
[
"Ballinger",
"Tracy",
""
],
[
"Paten",
"Benedict",
""
],
[
"Hickey",
"Glenn",
""
],
[
"Haussler",
"David",
""
]
] ] | The study of genomic variation has provided key insights into the functional role of mutations. Predominantly, studies have focused on single nucleotide variants (SNV), which are relatively easy to detect and can be described with rich mathematical models. However, it has been observed that genomes are highly plastic, and that whole regions can be moved, removed or duplicated in bulk. These structural variants (SV) have been shown to have significant impact on the phenotype, but their study has been held back by the combinatorial complexity of the underlying models. We describe here a general model of structural variation that encompasses both balanced rearrangements and arbitrary copy-number variants (CNV). In this model, we show that the space of possible evolutionary histories that explain the structural differences between any two genomes can be sampled ergodically. |