| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1110.4070
|
Gilberto Gonzalez
|
Jose L. Herrera and Gilberto Gonzalez-Parra
|
Dynamical graphs for the SI epidemiological model
|
9 pages, 2 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we study the susceptible-infectious (SI) epidemiological model
using dynamical graphs. Dynamical structures have recently been applied in
many areas, including complex systems. Dynamical structures capture the
mutual interaction between the topology of a structure and the
characteristics of its members. Dynamical graphs applied to epidemics
generally assume that nodes are individuals and links represent different
classes of relationships between individuals with the potential to transmit
the disease. The main aim of this article is to study the evolution of the
SI epidemiological model and the creation of subgraphs due to the dynamic
behavior of individuals trying to avoid contagion. The proposed dynamical
graph model uses a single parameter, the rewiring probability, which
represents actions taken to avoid the disease. This parameter also encodes
information about the infectivity of the disease. Numerical simulations
using the Monte Carlo method show that the dynamical behavior of individuals
affects the evolution of the subgraphs. Furthermore, it is shown that the
connectivity degree of the graphs can change the emergence of subgraphs and
the asymptotic state of the infectious disease.
|
[
{
"created": "Tue, 18 Oct 2011 17:50:08 GMT",
"version": "v1"
}
] |
2011-10-19
|
[
[
"Herrera",
"Jose L.",
""
],
[
"Gonzalez-Parra",
"Gilberto",
""
]
] |
In this paper we study the susceptible-infectious (SI) epidemiological model using dynamical graphs. Dynamical structures have recently been applied in many areas, including complex systems. Dynamical structures capture the mutual interaction between the topology of a structure and the characteristics of its members. Dynamical graphs applied to epidemics generally assume that nodes are individuals and links represent different classes of relationships between individuals with the potential to transmit the disease. The main aim of this article is to study the evolution of the SI epidemiological model and the creation of subgraphs due to the dynamic behavior of individuals trying to avoid contagion. The proposed dynamical graph model uses a single parameter, the rewiring probability, which represents actions taken to avoid the disease. This parameter also encodes information about the infectivity of the disease. Numerical simulations using the Monte Carlo method show that the dynamical behavior of individuals affects the evolution of the subgraphs. Furthermore, it is shown that the connectivity degree of the graphs can change the emergence of subgraphs and the asymptotic state of the infectious disease.
|
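The rewiring mechanism this abstract describes can be illustrated with a toy Monte Carlo simulation. This is a sketch, not the authors' model: the random-graph construction, infection probability `beta` and rewiring probability `p_rewire` are illustrative assumptions standing in for the paper's single parameter.

```python
import random

def si_rewiring(n=200, k=6, beta=0.2, p_rewire=0.3, steps=50, seed=1):
    """Toy SI epidemic on a graph whose susceptible nodes may rewire
    away from infected neighbours with probability p_rewire."""
    rng = random.Random(seed)
    # random graph with average degree ~k
    adj = {i: set() for i in range(n)}
    while sum(len(s) for s in adj.values()) < n * k:
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            adj[a].add(b)
            adj[b].add(a)
    infected = {rng.randrange(n)}
    for _ in range(steps):
        newly = set()
        for i in list(infected):
            for j in list(adj[i]):
                if j in infected:
                    continue
                if rng.random() < p_rewire:
                    # susceptible j breaks the risky link and rewires it
                    adj[i].discard(j)
                    adj[j].discard(i)
                    new = rng.randrange(n)
                    if new != j and new not in infected:
                        adj[j].add(new)
                        adj[new].add(j)
                elif rng.random() < beta:
                    newly.add(j)
        infected |= newly
    return len(infected), adj

final, adj = si_rewiring()
```

Raising `p_rewire` makes susceptible nodes cut risky links faster, which fragments the graph into subgraphs and can hold the asymptotic number of infected below n, qualitatively matching the behavior the abstract describes.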
1905.02053
|
Christoph Zechner
|
David T. Gonzales and T-Y Dora Tang and Christoph Zechner
|
Moment-based analysis of biochemical networks in a heterogeneous
population of communicating cells
|
6 pages, 5 Figures
| null | null | null |
q-bio.MN q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cells can utilize chemical communication to exchange information and
coordinate their behavior in the presence of noise. Communication can reduce
noise to shape a collective response, or amplify noise to generate distinct
phenotypic subpopulations. Here we discuss a moment-based approach to study how
cell-cell communication affects noise in biochemical networks that arises from
both intrinsic and extrinsic sources. We derive a system of approximate
differential equations that captures lower-order moments of a population of
cells, which communicate by secreting and sensing a diffusing molecule. Since
the number of obtained equations grows combinatorially with the number of
considered cells, we employ a previously proposed model reduction technique,
which exploits symmetries in the underlying moment dynamics. Importantly, the
number of equations obtained in this way is independent of the number of
considered cells such that the method scales to arbitrary population sizes.
Based on this approach, we study how cell-cell communication affects population
variability in several biochemical networks. Moreover, we analyze the accuracy
and computational efficiency of the moment-based approximation by comparing it
with moments obtained from stochastic simulations.
|
[
{
"created": "Mon, 6 May 2019 14:08:16 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jul 2019 12:19:28 GMT",
"version": "v2"
},
{
"created": "Mon, 23 Sep 2019 11:45:05 GMT",
"version": "v3"
}
] |
2019-09-24
|
[
[
"Gonzales",
"David T.",
""
],
[
"Tang",
"T-Y Dora",
""
],
[
"Zechner",
"Christoph",
""
]
] |
Cells can utilize chemical communication to exchange information and coordinate their behavior in the presence of noise. Communication can reduce noise to shape a collective response, or amplify noise to generate distinct phenotypic subpopulations. Here we discuss a moment-based approach to study how cell-cell communication affects noise in biochemical networks that arises from both intrinsic and extrinsic sources. We derive a system of approximate differential equations that captures lower-order moments of a population of cells, which communicate by secreting and sensing a diffusing molecule. Since the number of obtained equations grows combinatorially with the number of considered cells, we employ a previously proposed model reduction technique, which exploits symmetries in the underlying moment dynamics. Importantly, the number of equations obtained in this way is independent of the number of considered cells such that the method scales to arbitrary population sizes. Based on this approach, we study how cell-cell communication affects population variability in several biochemical networks. Moreover, we analyze the accuracy and computational efficiency of the moment-based approximation by comparing it with moments obtained from stochastic simulations.
|
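As a minimal illustration of the moment-based idea in this abstract, one can integrate the first two moment ODEs of a single birth-death process (constant production at rate k, degradation at rate g·x). This is a stand-in sketch, not the authors' reduced population-level equations, which additionally couple cells through a secreted diffusing molecule.

```python
def birth_death_moments(k=10.0, g=1.0, dt=1e-3, t_end=20.0):
    """Euler integration of the first two moment ODEs of a birth-death
    process (production at constant rate k, degradation at rate g*x):
        d<x>/dt   = k - g<x>
        d<x^2>/dt = k(2<x> + 1) + g(<x> - 2<x^2>)
    """
    m1 = m2 = 0.0
    for _ in range(int(t_end / dt)):
        dm1 = k - g * m1
        dm2 = k * (2 * m1 + 1) + g * (m1 - 2 * m2)
        m1, m2 = m1 + dt * dm1, m2 + dt * dm2
    return m1, m2 - m1 ** 2  # mean and variance

mean, var = birth_death_moments()
# stationary birth-death is Poisson, so mean and variance both approach k/g
```

For N communicating cells, the analogous system tracks means, variances and covariances of all species; the symmetry-based reduction mentioned in the abstract is what keeps the number of equations independent of N.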
2404.08101
|
Samuel Lippl
|
Samuel Lippl, Raphael Gerraty, John Morrison, Nikolaus Kriegeskorte
|
Source Invariance and Probabilistic Transfer: A Testable Theory of
Probabilistic Neural Representations
| null | null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
As animals interact with their environments, they must infer properties of
their surroundings. Some animals, including humans, can represent uncertainty
about those properties. But when, if ever, do they use probability
distributions to represent their uncertainty? It depends on which definition we
choose. In this paper, we argue that existing definitions are inadequate
because they are untestable. We then propose our own definition.
There are two reasons why existing definitions are untestable. First, they do
not distinguish between representations of uncertainty and representations of
variables merely related to uncertainty ('representational indeterminacy').
Second, they do not distinguish between probabilistic representations of
uncertainty and merely "heuristic" representations of uncertainty. We call this
'model indeterminacy' because the underlying problem is that we do not have
access to the animal's generative model.
We define probabilistic representations by two properties: 1) they encode
uncertainty regardless of the source of the uncertainty ('source invariance'),
2) they support the efficient learning of new tasks that would be more
difficult to learn given non-probabilistic representations ('probabilistic task
transfer'). Source invariance indicates that they are representations of
uncertainty rather than variables merely related to uncertainty, thereby
solving representational indeterminacy. Probabilistic task transfer indicates
that they are probabilistic representations of uncertainty rather than merely
heuristic representations, thereby solving model indeterminacy.
|
[
{
"created": "Thu, 11 Apr 2024 19:38:07 GMT",
"version": "v1"
}
] |
2024-04-15
|
[
[
"Lippl",
"Samuel",
""
],
[
"Gerraty",
"Raphael",
""
],
[
"Morrison",
"John",
""
],
[
"Kriegeskorte",
"Nikolaus",
""
]
] |
As animals interact with their environments, they must infer properties of their surroundings. Some animals, including humans, can represent uncertainty about those properties. But when, if ever, do they use probability distributions to represent their uncertainty? It depends on which definition we choose. In this paper, we argue that existing definitions are inadequate because they are untestable. We then propose our own definition. There are two reasons why existing definitions are untestable. First, they do not distinguish between representations of uncertainty and representations of variables merely related to uncertainty ('representational indeterminacy'). Second, they do not distinguish between probabilistic representations of uncertainty and merely "heuristic" representations of uncertainty. We call this 'model indeterminacy' because the underlying problem is that we do not have access to the animal's generative model. We define probabilistic representations by two properties: 1) they encode uncertainty regardless of the source of the uncertainty ('source invariance'), 2) they support the efficient learning of new tasks that would be more difficult to learn given non-probabilistic representations ('probabilistic task transfer'). Source invariance indicates that they are representations of uncertainty rather than variables merely related to uncertainty, thereby solving representational indeterminacy. Probabilistic task transfer indicates that they are probabilistic representations of uncertainty rather than merely heuristic representations, thereby solving model indeterminacy.
|
1202.0501
|
Leo Lahti
|
Leo Lahti, Juha E. A. Knuuttila, Samuel Kaski
|
Global modeling of transcriptional responses in interaction networks
|
19 pages, 13 figures
|
Global modeling of transcriptional responses in interaction
networks. Leo Lahti, Juha E.A. Knuuttila, and Samuel Kaski. Bioinformatics
26(21):2713-2720, 2010
|
10.1093/bioinformatics/btq500
| null |
q-bio.MN cs.CE q-bio.QM stat.AP stat.ML
|
http://creativecommons.org/licenses/by/3.0/
|
Motivation: Cell-biological processes are regulated through a complex network
of interactions between genes and their products. The processes, their
activating conditions, and the associated transcriptional responses are often
unknown. Organism-wide modeling of network activation can reveal unique and
shared mechanisms between physiological conditions, and potentially as yet
unknown processes. We introduce a novel approach for organism-wide discovery
and analysis of transcriptional responses in interaction networks. The method
searches for local, connected regions in a network that exhibit coordinated
transcriptional response in a subset of conditions. Known interactions between
genes are used to limit the search space and to guide the analysis. Validation
on a human pathway network reveals physiologically coherent responses,
functional relatedness between physiological conditions, and coordinated,
context-specific regulation of the genes. Availability: Implementation is
freely available in R and Matlab at http://netpro.r-forge.r-project.org
|
[
{
"created": "Thu, 2 Feb 2012 17:40:14 GMT",
"version": "v1"
}
] |
2012-02-03
|
[
[
"Lahti",
"Leo",
""
],
[
"Knuuttila",
"Juha E. A.",
""
],
[
"Kaski",
"Samuel",
""
]
] |
Motivation: Cell-biological processes are regulated through a complex network of interactions between genes and their products. The processes, their activating conditions, and the associated transcriptional responses are often unknown. Organism-wide modeling of network activation can reveal unique and shared mechanisms between physiological conditions, and potentially as yet unknown processes. We introduce a novel approach for organism-wide discovery and analysis of transcriptional responses in interaction networks. The method searches for local, connected regions in a network that exhibit coordinated transcriptional response in a subset of conditions. Known interactions between genes are used to limit the search space and to guide the analysis. Validation on a human pathway network reveals physiologically coherent responses, functional relatedness between physiological conditions, and coordinated, context-specific regulation of the genes. Availability: Implementation is freely available in R and Matlab at http://netpro.r-forge.r-project.org
|
1805.03233
|
Juan Lorenzo Rodriguez-Flores Ph.D.
|
Khalid A. Fakhro, Michelle R. Staudt, Monica Denise Ramstetter, Amal
Robay, Joel A. Malek, Ramin Badii, Ajayeb Al-Nabet Al-Marri, Charbel Abi
Khalil, Alya Al-Shakaki, Omar Chidiac, Dora Stadler, Mahmoud Zirie, Amin
Jayyousi, Jacqueline Salit, Jason G. Mezey, Ronald G. Crystal, and Juan L.
Rodriguez-Flores
|
The Qatar Genome: A Population-Specific Tool for Precision Medicine in
the Middle East
|
Includes supplementary figures missing from publisher website
|
Hum Genome Var. 2016 Jun 30;3:16016
|
10.1038/hgv.2016.16
| null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reaching the full potential of precision medicine depends on the quality of
personalized genome interpretation. To facilitate precision medicine in
regions of the Middle East and North Africa (MENA), a population-specific
reference genome for the indigenous Arab population of Qatar (QTRG) was
constructed by incorporating allele frequency data from sequencing of 1,161
Qataris, representing 0.4% of the population. A total of 20.9 million SNPs
and 3.1 million indels were observed in Qatar, including an average of 1.79%
novel variants per individual genome. Replacing the GRCh37 standard
reference with QTRG in a best-practices genome analysis workflow resulted in
an average of 7x deeper coverage depth (an improvement of 23%) and 756,671
fewer variants on average, a reduction of 16% attributed to common Qatari
alleles being present in the QTRG reference. The benefit of using QTRG
varies across ancestries, a factor that should be taken into consideration
when selecting an appropriate reference for analysis.
|
[
{
"created": "Tue, 8 May 2018 19:04:37 GMT",
"version": "v1"
},
{
"created": "Sun, 13 May 2018 12:25:23 GMT",
"version": "v2"
}
] |
2018-05-15
|
[
[
"Fakhro",
"Khalid A.",
""
],
[
"Staudt",
"Michelle R.",
""
],
[
"Ramstetter",
"Monica Denise",
""
],
[
"Robay",
"Amal",
""
],
[
"Malek",
"Joel A.",
""
],
[
"Badii",
"Ramin",
""
],
[
"Al-Marri",
"Ajayeb Al-Nabet",
""
],
[
"Khalil",
"Charbel Abi",
""
],
[
"Al-Shakaki",
"Alya",
""
],
[
"Chidiac",
"Omar",
""
],
[
"Stadler",
"Dora",
""
],
[
"Zirie",
"Mahmoud",
""
],
[
"Jayyousi",
"Amin",
""
],
[
"Salit",
"Jacqueline",
""
],
[
"Mezey",
"Jason G.",
""
],
[
"Crystal",
"Ronald G.",
""
],
[
"Rodriguez-Flores",
"Juan L.",
""
]
] |
Reaching the full potential of precision medicine depends on the quality of personalized genome interpretation. To facilitate precision medicine in regions of the Middle East and North Africa (MENA), a population-specific reference genome for the indigenous Arab population of Qatar (QTRG) was constructed by incorporating allele frequency data from sequencing of 1,161 Qataris, representing 0.4% of the population. A total of 20.9 million SNPs and 3.1 million indels were observed in Qatar, including an average of 1.79% novel variants per individual genome. Replacing the GRCh37 standard reference with QTRG in a best-practices genome analysis workflow resulted in an average of 7x deeper coverage depth (an improvement of 23%) and 756,671 fewer variants on average, a reduction of 16% attributed to common Qatari alleles being present in the QTRG reference. The benefit of using QTRG varies across ancestries, a factor that should be taken into consideration when selecting an appropriate reference for analysis.
|
2107.07938
|
Gilberto Nakamura
|
F. Meloni, G. Nakamura, B. Grammaticos, A. S. Martinez, M. Badoual
|
Modeling cooperation and competition in biological communities
|
24 pages, 16 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The far-reaching consequences of ecological interactions in the dynamics of
biological communities remain an intriguing subject. For decades,
competition has been a cornerstone of ecological processes, but mounting
evidence shows that cooperation also contributes to the structure of
biological communities. Here, we propose a simple deterministic model for
studying the effects of facilitation and competition in the dynamics of such
systems. The simultaneous inclusion of both effects produces rich dynamics
and captures the context-dependence observed in the formation of ecological
communities. The approach reproduces relevant aspects of primary and
secondary plant succession, the effect of invasive species, and the survival
of rare species. The model also takes into account the ecological priority
effect and stresses the crucial role of facilitation in conservation efforts
and species coexistence.
|
[
{
"created": "Fri, 16 Jul 2021 14:54:09 GMT",
"version": "v1"
}
] |
2021-07-19
|
[
[
"Meloni",
"F.",
""
],
[
"Nakamura",
"G.",
""
],
[
"Grammaticos",
"B.",
""
],
[
"Martinez",
"A. S.",
""
],
[
"Badoual",
"M.",
""
]
] |
The far-reaching consequences of ecological interactions in the dynamics of biological communities remain an intriguing subject. For decades, competition has been a cornerstone of ecological processes, but mounting evidence shows that cooperation also contributes to the structure of biological communities. Here, we propose a simple deterministic model for studying the effects of facilitation and competition in the dynamics of such systems. The simultaneous inclusion of both effects produces rich dynamics and captures the context-dependence observed in the formation of ecological communities. The approach reproduces relevant aspects of primary and secondary plant succession, the effect of invasive species, and the survival of rare species. The model also takes into account the ecological priority effect and stresses the crucial role of facilitation in conservation efforts and species coexistence.
|
1108.1150
|
Steffen Schaper
|
Steffen Schaper, Iain G. Johnston and Ard A. Louis
|
Epistasis can lead to fragmented neutral spaces and contingency in
evolution
|
21 pages, 21 figures
|
Proc. R. Soc. B 7 May 2012 vol. 279 no. 1734 1777-1783
|
10.1098/rspb.2011.2183
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In evolution, the effects of a single deleterious mutation can sometimes be
compensated for by a second mutation which recovers the original phenotype.
Such epistatic interactions have implications for the structure of genome space
- namely, that networks of genomes encoding the same phenotype may not be
connected by single mutational moves. We use the folding of RNA sequences into
secondary structures as a model genotype-phenotype map and explore the neutral
spaces corresponding to networks of genotypes with the same phenotype. In most
of these networks, we find that it is not possible to connect all genotypes to
one another by single point mutations. Instead, a network for a phenotypic
structure with $n$ bonds typically fragments into at least $2^n$ neutral
components, often of similar size. While components of the same network
generate the same phenotype, they show important variations in their
properties, most strikingly in their evolvability and mutational robustness.
This heterogeneity implies contingency in the evolutionary process.
|
[
{
"created": "Thu, 4 Aug 2011 17:50:50 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Dec 2011 19:08:02 GMT",
"version": "v2"
}
] |
2015-03-19
|
[
[
"Schaper",
"Steffen",
""
],
[
"Johnston",
"Iain G.",
""
],
[
"Louis",
"Ard A.",
""
]
] |
In evolution, the effects of a single deleterious mutation can sometimes be compensated for by a second mutation which recovers the original phenotype. Such epistatic interactions have implications for the structure of genome space - namely, that networks of genomes encoding the same phenotype may not be connected by single mutational moves. We use the folding of RNA sequences into secondary structures as a model genotype-phenotype map and explore the neutral spaces corresponding to networks of genotypes with the same phenotype. In most of these networks, we find that it is not possible to connect all genotypes to one another by single point mutations. Instead, a network for a phenotypic structure with $n$ bonds typically fragments into at least $2^n$ neutral components, often of similar size. While components of the same network generate the same phenotype, they show important variations in their properties, most strikingly in their evolvability and mutational robustness. This heterogeneity implies contingency in the evolutionary process.
|
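The $2^n$ fragmentation reported above can be reproduced in a toy genotype-phenotype map (standing in for the paper's RNA secondary-structure map): each "bond" is a pair of sites that must carry equal values, so undoing a deleterious mutation at one site requires a second, compensatory mutation at its partner.

```python
from itertools import product

def neutral_components(n_pairs=3):
    """Count connected components, under single point mutations, of the
    set of binary genotypes whose 'phenotype' (the XOR of each site
    pair) is all zeros. Each pair acts like a compensatory bond."""
    L = 2 * n_pairs
    def phenotype(g):
        return tuple(g[2 * i] ^ g[2 * i + 1] for i in range(n_pairs))
    target = (0,) * n_pairs
    neutral = {g for g in product((0, 1), repeat=L) if phenotype(g) == target}
    seen, comps = set(), 0
    for g in neutral:
        if g in seen:
            continue
        comps += 1
        stack = [g]  # depth-first search over neutral single mutants
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            for i in range(L):
                mut = cur[:i] + (1 - cur[i],) + cur[i + 1:]
                if mut in neutral and mut not in seen:
                    stack.append(mut)
    return comps

# neutral_components(3) -> 8 (= 2**3 components)
```

In this toy map every neutral genotype is isolated, since any single point mutation breaks exactly one pair; the RNA map of the paper yields larger components, but the same pairing logic underlies the at-least-$2^n$ count.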
2112.09079
|
Suraj Shankar
|
Roberto Benzi, David R. Nelson, Suraj Shankar, Federico Toschi,
Xiaojue Zhu
|
Spatial population genetics with fluid flow
|
29 pages, 22 figures
|
Rep. Prog. Phys. 85 096601, 2022
|
10.1088/1361-6633/ac8231
| null |
q-bio.PE cond-mat.soft cond-mat.stat-mech physics.flu-dyn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The growth and evolution of microbial populations are often subject to
advection by fluid flows in spatially extended environments, with immediate
consequences for questions of spatial population genetics in marine ecology,
planktonic diversity and origin of life scenarios. Here, we review recent
progress made in understanding this rich problem in the simplified setting
of two competing genetic microbial strains subjected to fluid flows. As a
pedagogical example we focus on antagonism, i.e., two killer microorganism
strains, each secreting toxins that impede the growth of their competitors
(competitive exclusion), in the presence of stationary fluid flows. By
solving two coupled reaction-diffusion equations that include advection by
simple steady cellular flows composed of characteristic flow motifs in two
dimensions (2d), we show how local flow shear and compressibility effects
can interact with selective advantage to have a dramatic influence on
genetic competition and fixation in spatially distributed populations. We
analyze several 1d and 2d flow geometries, including sources, sinks,
vortices and saddles, and show how simple analytical models of the dynamics
of the genetic interface can be used to shed light on the nucleation,
coexistence and flow-driven instabilities of genetic drops. By exploiting an
analogy with phase separation with nonconserved order parameters, we uncover
how these genetic drops harness fluid flows for novel evolutionary
strategies, even in the presence of number fluctuations, as confirmed by
agent-based simulations.
|
[
{
"created": "Thu, 16 Dec 2021 18:03:56 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Jun 2022 15:47:39 GMT",
"version": "v2"
}
] |
2022-08-26
|
[
[
"Benzi",
"Roberto",
""
],
[
"Nelson",
"David R.",
""
],
[
"Shankar",
"Suraj",
""
],
[
"Toschi",
"Federico",
""
],
[
"Zhu",
"Xiaojue",
""
]
] |
The growth and evolution of microbial populations are often subject to advection by fluid flows in spatially extended environments, with immediate consequences for questions of spatial population genetics in marine ecology, planktonic diversity and origin of life scenarios. Here, we review recent progress made in understanding this rich problem in the simplified setting of two competing genetic microbial strains subjected to fluid flows. As a pedagogical example we focus on antagonism, i.e., two killer microorganism strains, each secreting toxins that impede the growth of their competitors (competitive exclusion), in the presence of stationary fluid flows. By solving two coupled reaction-diffusion equations that include advection by simple steady cellular flows composed of characteristic flow motifs in two dimensions (2d), we show how local flow shear and compressibility effects can interact with selective advantage to have a dramatic influence on genetic competition and fixation in spatially distributed populations. We analyze several 1d and 2d flow geometries, including sources, sinks, vortices and saddles, and show how simple analytical models of the dynamics of the genetic interface can be used to shed light on the nucleation, coexistence and flow-driven instabilities of genetic drops. By exploiting an analogy with phase separation with nonconserved order parameters, we uncover how these genetic drops harness fluid flows for novel evolutionary strategies, even in the presence of number fluctuations, as confirmed by agent-based simulations.
|
1304.6034
|
Louis Scheffer
|
Louis K. Scheffer, Bill Karsh, Shiv Vitaladevun
|
Automated Alignment of Imperfect EM Images for Neural Reconstruction
|
23 pages, 10 figures
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The most established method of reconstructing neural circuits from animals
involves slicing tissue very thin, then taking mosaics of electron microscope
(EM) images. To trace neurons across different images and through different
sections, these images must be accurately aligned, both with the others in the
same section and to the sections above and below. Unfortunately, sectioning and
imaging are not ideal processes - some of the problems that make alignment
difficult include lens distortion, tissue shrinkage during imaging, tears and
folds in the sectioned tissue, and dust and other artifacts. In addition, the
data sets are large (hundreds of thousands of images) and each image must be
aligned with many neighbors, so the process must be automated and reliable.
This paper discusses methods of dealing with these problems, with numeric
results describing the accuracy of the resulting alignments.
|
[
{
"created": "Mon, 22 Apr 2013 18:00:14 GMT",
"version": "v1"
}
] |
2013-04-23
|
[
[
"Scheffer",
"Louis K.",
""
],
[
"Karsh",
"Bill",
""
],
[
"Vitaladevun",
"Shiv",
""
]
] |
The most established method of reconstructing neural circuits from animals involves slicing tissue very thin, then taking mosaics of electron microscope (EM) images. To trace neurons across different images and through different sections, these images must be accurately aligned, both with the others in the same section and to the sections above and below. Unfortunately, sectioning and imaging are not ideal processes - some of the problems that make alignment difficult include lens distortion, tissue shrinkage during imaging, tears and folds in the sectioned tissue, and dust and other artifacts. In addition, the data sets are large (hundreds of thousands of images) and each image must be aligned with many neighbors, so the process must be automated and reliable. This paper discusses methods of dealing with these problems, with numeric results describing the accuracy of the resulting alignments.
|
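The core alignment step this abstract automates, estimating the relative offset of overlapping images, can be sketched for the translation-only case with FFT cross-correlation. This is a toy: the paper additionally handles lens distortion, tears and folds, which a rigid shift cannot capture, and the random array below is a stand-in for a real EM tile.

```python
import numpy as np

def estimate_shift(a, b):
    """Estimate the integer translation between two same-sized images by
    locating the peak of their FFT-based circular cross-correlation.
    Returns (dy, dx) such that np.roll(b, (dy, dx), axis=(0, 1))
    aligns b onto a."""
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = a.shape
    # map peaks past the midpoint to negative shifts
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)

rng = np.random.default_rng(0)
tile = rng.random((64, 64))                  # stand-in for an EM tile
moved = np.roll(tile, (5, -3), axis=(0, 1))  # same tile, offset copy
dy, dx = estimate_shift(tile, moved)
```

In a full pipeline, pairwise offsets like this one become constraints in a global optimization over all tiles and sections, which is where the reliability issues discussed in the abstract arise.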
1703.07943
|
Haiping Huang
|
Haiping Huang
|
Role of zero synapses in unsupervised feature learning
|
6 pages, 4 figures, to appear in J. Phys A as a LETTER
| null |
10.1088/1751-8121/aaa631
| null |
q-bio.NC cond-mat.dis-nn cond-mat.stat-mech cs.LG cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Synapses in real neural circuits can take discrete values, including zero
(silent or potential) synapses. The computational role of zero synapses in
unsupervised feature learning of unlabeled noisy data is still unclear; it is
therefore important to understand how the sparseness of synaptic activity is
shaped during learning and its relationship with receptive field formation.
Here, we
formulate this kind of sparse feature learning by a statistical mechanics
approach. We find that learning decreases the fraction of zero synapses, and
when the fraction decreases rapidly around a critical data size, an
intrinsically structured receptive field starts to develop. Further increasing
the data size refines the receptive field, while a very small fraction of zero
synapses remain to act as contour detectors. This phenomenon is discovered not
only in learning a handwritten digits dataset, but also in learning retinal
neural activity measured in a natural-movie-stimuli experiment.
|
[
{
"created": "Thu, 23 Mar 2017 06:19:47 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Jun 2017 08:07:52 GMT",
"version": "v2"
},
{
"created": "Wed, 12 Jul 2017 05:13:42 GMT",
"version": "v3"
},
{
"created": "Wed, 10 Jan 2018 06:57:00 GMT",
"version": "v4"
}
] |
2018-01-11
|
[
[
"Huang",
"Haiping",
""
]
] |
Synapses in real neural circuits can take discrete values, including zero (silent or potential) synapses. The computational role of zero synapses in unsupervised feature learning of unlabeled noisy data is still unclear; it is therefore important to understand how the sparseness of synaptic activity is shaped during learning and its relationship with receptive field formation. Here, we formulate this kind of sparse feature learning by a statistical mechanics approach. We find that learning decreases the fraction of zero synapses, and when the fraction decreases rapidly around a critical data size, an intrinsically structured receptive field starts to develop. Further increasing the data size refines the receptive field, while a very small fraction of zero synapses remain to act as contour detectors. This phenomenon is discovered not only in learning a handwritten digits dataset, but also in learning retinal neural activity measured in a natural-movie-stimuli experiment.
|
2212.14754
|
J. C. Phillips
|
J. C. Phillips
|
Biophysical Sequence Analysis of Functional Differences of Piezo1 and
Piezo2
|
10 pages, two figures
| null | null | null |
q-bio.OT
|
http://creativecommons.org/licenses/by/4.0/
|
Because of their large size and widespread mechanosensitive interactions, the
only recently discovered transmembrane proteins Piezo1 and Piezo2 have
attracted much attention. Here we present and discuss their hydropathic
profiles using a new method of sequence analysis. We find large-scale
similarities and differences not obtainable by conventional sequence or
structural studies. These differences support the
evolution-towards-criticality conjecture popular among physicists.
|
[
{
"created": "Tue, 27 Dec 2022 17:48:55 GMT",
"version": "v1"
}
] |
2023-01-02
|
[
[
"Phillips",
"J. C.",
""
]
] |
Because of their large size and widespread mechanosensitive interactions, the only recently discovered transmembrane proteins Piezo1 and Piezo2 have attracted much attention. Here we present and discuss their hydropathic profiles using a new method of sequence analysis. We find large-scale similarities and differences not obtainable by conventional sequence or structural studies. These differences support the evolution-towards-criticality conjecture popular among physicists.
|
1407.1627
|
Kavita Jain
|
Brian Charlesworth and Kavita Jain
|
Purifying selection, drift and reversible mutation with arbitrarily high
mutation rates
|
Supplementary Information available on request
|
Genetics 198, 1587-1602 (2014)
| null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Some species exhibit very high levels of DNA sequence variability; there is
also evidence for the existence of heritable epigenetic variants that
experience state changes at a much higher rate than sequence variants. In both
cases, the resulting high diversity levels within a population (hyperdiversity)
mean that standard population genetics methods are not trustworthy. We analyze
a population genetics model that incorporates purifying selection, reversible
mutations and genetic drift, assuming a stationary population size. We derive
analytical results for both population parameters and sample statistics, and
discuss their implications for studies of natural genetic and epigenetic
variation. In particular, we find that (1) many more intermediate frequency
variants are expected than under standard models, even with moderately strong
purifying selection; (2) rates of evolution under purifying selection may be
close to, or even exceed, neutral rates. These findings are related to
empirical studies of sequence and epigenetic variation.
|
[
{
"created": "Mon, 7 Jul 2014 08:54:47 GMT",
"version": "v1"
}
] |
2016-01-13
|
[
[
"Charlesworth",
"Brian",
""
],
[
"Jain",
"Kavita",
""
]
] |
Some species exhibit very high levels of DNA sequence variability; there is also evidence for the existence of heritable epigenetic variants that experience state changes at a much higher rate than sequence variants. In both cases, the resulting high diversity levels within a population (hyperdiversity) mean that standard population genetics methods are not trustworthy. We analyze a population genetics model that incorporates purifying selection, reversible mutations and genetic drift, assuming a stationary population size. We derive analytical results for both population parameters and sample statistics, and discuss their implications for studies of natural genetic and epigenetic variation. In particular, we find that (1) many more intermediate frequency variants are expected than under standard models, even with moderately strong purifying selection; (2) rates of evolution under purifying selection may be close to, or even exceed, neutral rates. These findings are related to empirical studies of sequence and epigenetic variation.
|
1509.00801
|
Samuel Scarpino
|
Samuel V. Scarpino, Antoine Allard, Laurent Hebert-Dufresne
|
Prudent behaviour accelerates disease transmission
| null |
Nature Physics 12, 1042-1046 (2016)
|
10.1038/nphys3832
| null |
q-bio.QM nlin.AO physics.soc-ph q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Infectious diseases often spread faster near their peak than would be
predicted given early data on transmission. Despite the commonality of this
phenomenon, there are no known general mechanisms able to cause an
exponentially spreading disease to begin spreading faster. Indeed, most
features of real-world social networks, e.g. clustering [1,2] and community
structure [3], and of human behaviour, e.g. social distancing [4] and
increased hygiene [5], will slow disease spread. Here, we consider a model
where individuals with essential societal roles (e.g. teachers, first
responders, health-care workers) who fall ill are replaced with healthy
individuals. We refer to this process as relational exchange. Relational
exchange is also a behavioural process, but one whose effect on disease
transmission is less obvious. By incorporating this behaviour into a dynamic
network model, we demonstrate that replacing individuals can accelerate
disease transmission. Furthermore, we find that the effects of this process
are trivial when considering a standard mass-action model, but dramatic when
considering network structure. This result highlights another critical
shortcoming in mass-action models, namely their inability to account for
behavioural processes. Lastly, using empirical data, we find that this
mechanism parsimoniously explains observed patterns across more than
seventeen years of influenza and dengue virus data. We anticipate that our
findings will advance the emerging field of disease forecasting and will
better inform public health decision making during outbreaks.
|
[
{
"created": "Wed, 2 Sep 2015 17:57:24 GMT",
"version": "v1"
}
] |
2017-07-06
|
[
[
"Scarpino",
"Samuel V.",
""
],
[
"Allard",
"Antoine",
""
],
[
"Hebert-Dufresne",
"Laurent",
""
]
] |
Infectious diseases often spread faster near their peak than would be predicted given early data on transmission. Despite the commonality of this phenomenon, there are no known general mechanisms able to cause an exponentially spreading disease to begin spreading faster. Indeed, most features of real-world social networks, e.g. clustering [1,2] and community structure [3], and of human behaviour, e.g. social distancing [4] and increased hygiene [5], will slow disease spread. Here, we consider a model where individuals with essential societal roles (e.g. teachers, first responders, health-care workers) who fall ill are replaced with healthy individuals. We refer to this process as relational exchange. Relational exchange is also a behavioural process, but one whose effect on disease transmission is less obvious. By incorporating this behaviour into a dynamic network model, we demonstrate that replacing individuals can accelerate disease transmission. Furthermore, we find that the effects of this process are trivial when considering a standard mass-action model, but dramatic when considering network structure. This result highlights another critical shortcoming in mass-action models, namely their inability to account for behavioural processes. Lastly, using empirical data, we find that this mechanism parsimoniously explains observed patterns across more than seventeen years of influenza and dengue virus data. We anticipate that our findings will advance the emerging field of disease forecasting and will better inform public health decision making during outbreaks.
|
2012.02628
|
Jonathan Cohen
|
Jonathan D. Cohen
|
A Mitigation Score for COVID-19
|
15 pages, 12 figures
| null | null | null |
q-bio.OT
|
http://creativecommons.org/licenses/by/4.0/
|
This note describes a simple score to indicate the effectiveness of
mitigation against infections of COVID-19 as observed by new case counts. The
score includes normalization, making comparisons across jurisdictions possible.
The smoothing employed provides robustness in the face of reporting vagaries
while retaining salient features of evolution, enabling a clearer picture for
decision makers and the public.
|
[
{
"created": "Wed, 2 Dec 2020 21:25:50 GMT",
"version": "v1"
}
] |
2020-12-07
|
[
[
"Cohen",
"Jonathan D.",
""
]
] |
This note describes a simple score to indicate the effectiveness of mitigation against infections of COVID-19 as observed by new case counts. The score includes normalization, making comparisons across jurisdictions possible. The smoothing employed provides robustness in the face of reporting vagaries while retaining salient features of evolution, enabling a clearer picture for decision makers and the public.
|
1004.2035
|
Maxim Paliy Dr.
|
Maxim Paliy, Roderick Melnik, Bruce A. Shapiro
|
Coarse Graining RNA Nanostructures for Molecular Dynamics Simulations
| null | null |
10.1088/1478-3975/7/3/036001
| null |
q-bio.QM physics.bio-ph q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A series of coarse-grained models have been developed for the study of the
molecular dynamics of RNA nanostructures. The models in the series have one to
three beads per nucleotide and include different amounts of detailed structural
information. Such a treatment allows us to reach, for systems of thousands
of nucleotides, a time scale of microseconds (i.e. three orders of magnitude
longer than in full atomistic modelling) and thus to enable simulations of
large RNA polymers in the context of bionanotechnology. We find that the
3-beads-per-nucleotide models, described by a set of just a few universal
parameters, are able to describe different RNA conformations and are comparable
in structural precision to the models where detailed values of the backbone
P-C4' dihedrals taken from a reference structure are included. These findings
are discussed in the context of the RNA conformation classes.
|
[
{
"created": "Mon, 12 Apr 2010 19:50:08 GMT",
"version": "v1"
}
] |
2015-05-18
|
[
[
"Paliy",
"Maxim",
""
],
[
"Melnik",
"Roderick",
""
],
[
"Shapiro",
"Bruce A.",
""
]
] |
A series of coarse-grained models have been developed for the study of the molecular dynamics of RNA nanostructures. The models in the series have one to three beads per nucleotide and include different amounts of detailed structural information. Such a treatment allows us to reach, for systems of thousands of nucleotides, a time scale of microseconds (i.e. three orders of magnitude longer than in full atomistic modelling) and thus to enable simulations of large RNA polymers in the context of bionanotechnology. We find that the 3-beads-per-nucleotide models, described by a set of just a few universal parameters, are able to describe different RNA conformations and are comparable in structural precision to the models where detailed values of the backbone P-C4' dihedrals taken from a reference structure are included. These findings are discussed in the context of the RNA conformation classes.
|
1712.06451
|
Roland Kr\"amer
|
Ulrich Warttinger, Christina Giese, Roland Kr\"amer
|
Quantification of sulfated polysaccharides in mouse and rat plasma by
the Heparin Red mix-and-read fluorescence assay
|
19 pages, 8 figures, 3 schemes, 4 tables. arXiv admin note: text
overlap with arXiv:1712.03377
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Sulfated polysaccharides constitute a large and complex group of
macromolecules which possess a wide range of important biological properties.
Many of them hold promise as new therapeutics, but determination of their blood
levels during pharmacokinetic studies can be challenging. Heparin Red, a
commercial mix-and-read fluorescence assay, has recently emerged as a tool in
clinical drug development and pharmacokinetic analysis for the quantification
of sulfated polysaccharides in human plasma. The present study describes the
application of Heparin Red to the detection of heparin, a highly sulfated
polysaccharide, and fucoidan, a less sulfated polysaccharide, in spiked mouse
and rat plasmas. While the standard assay protocol for human plasma matrix gave
less satisfactory results, a modified protocol was developed that, within a
detection range of 0 to 10 micrograms per mL, provides better limits of
quantification: 1.1 to 2.3 micrograms per mL for heparin and 1.7 to 3.4
micrograms per mL for fucoidan. The required plasma sample volume of only 20
microliters is particularly advantageous when blood samples need to be
collected from mice. Our results suggest that Heparin Red is a promising tool
for the preclinical evaluation of sulfated polysaccharides with varying
sulfation degrees in mouse and rat models.
|
[
{
"created": "Fri, 15 Dec 2017 14:28:26 GMT",
"version": "v1"
}
] |
2017-12-19
|
[
[
"Warttinger",
"Ulrich",
""
],
[
"Giese",
"Christina",
""
],
[
"Krämer",
"Roland",
""
]
] |
Sulfated polysaccharides constitute a large and complex group of macromolecules which possess a wide range of important biological properties. Many of them hold promise as new therapeutics, but determination of their blood levels during pharmacokinetic studies can be challenging. Heparin Red, a commercial mix-and-read fluorescence assay, has recently emerged as a tool in clinical drug development and pharmacokinetic analysis for the quantification of sulfated polysaccharides in human plasma. The present study describes the application of Heparin Red to the detection of heparin, a highly sulfated polysaccharide, and fucoidan, a less sulfated polysaccharide, in spiked mouse and rat plasmas. While the standard assay protocol for human plasma matrix gave less satisfactory results, a modified protocol was developed that, within a detection range of 0 to 10 micrograms per mL, provides better limits of quantification: 1.1 to 2.3 micrograms per mL for heparin and 1.7 to 3.4 micrograms per mL for fucoidan. The required plasma sample volume of only 20 microliters is particularly advantageous when blood samples need to be collected from mice. Our results suggest that Heparin Red is a promising tool for the preclinical evaluation of sulfated polysaccharides with varying sulfation degrees in mouse and rat models.
|
1704.01841
|
Greg Gloor Dr
|
Jia R. Wu, Jean M. Macklaim, Briana L. Genge, and Gregory B. Gloor
|
Finding the centre: corrections for asymmetry in high-throughput
sequencing datasets
|
Preliminary conference paper for CoDaWork 2017. Outlines asymmetry
correction incorporated into the Bioconductor package ALDEx2
| null | null | null |
q-bio.QM q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
High throughput sequencing is a technology that allows for the generation of
millions of reads of genomic data regarding a study of interest, and data from
high throughput sequencing platforms are usually count compositions. Subsequent
analysis of such data can yield information on transcription profiles,
microbial diversity, or even relative cellular abundance in culture. Because of
the high cost of acquisition, the data are usually sparse, and always contain
far fewer observations than variables. However, an under-appreciated pathology
of these data is their often unbalanced nature: i.e., there is often systematic
variation between groups simply due to presence or absence of features, and
this variation is important to the biological interpretation of the data. A
simple example would be comparing transcriptomes of yeast cells with and
without a gene knockout. This causes samples in the comparison groups to
exhibit widely varying centres. This work extends a previously described
log-ratio transformation method that allows for variable comparisons between
samples in a Bayesian compositional context. We demonstrate the pathology in
modelled and real unbalanced experimental designs to show how this dramatically
causes both false negative and false positive inference. We then introduce
several approaches to demonstrate how the pathologies can be addressed. An
extreme example is presented where only the use of a predefined basis is
appropriate. The transformations are implemented as an extension to a general
compositional data analysis tool known as ALDEx2 which is available on
Bioconductor.
|
[
{
"created": "Thu, 6 Apr 2017 13:41:48 GMT",
"version": "v1"
}
] |
2017-04-07
|
[
[
"Wu",
"Jia R.",
""
],
[
"Macklaim",
"Jean M.",
""
],
[
"Genge",
"Briana L.",
""
],
[
"Gloor",
"Gregory B.",
""
]
] |
High throughput sequencing is a technology that allows for the generation of millions of reads of genomic data regarding a study of interest, and data from high throughput sequencing platforms are usually count compositions. Subsequent analysis of such data can yield information on transcription profiles, microbial diversity, or even relative cellular abundance in culture. Because of the high cost of acquisition, the data are usually sparse, and always contain far fewer observations than variables. However, an under-appreciated pathology of these data is their often unbalanced nature: i.e., there is often systematic variation between groups simply due to presence or absence of features, and this variation is important to the biological interpretation of the data. A simple example would be comparing transcriptomes of yeast cells with and without a gene knockout. This causes samples in the comparison groups to exhibit widely varying centres. This work extends a previously described log-ratio transformation method that allows for variable comparisons between samples in a Bayesian compositional context. We demonstrate the pathology in modelled and real unbalanced experimental designs to show how this dramatically causes both false negative and false positive inference. We then introduce several approaches to demonstrate how the pathologies can be addressed. An extreme example is presented where only the use of a predefined basis is appropriate. The transformations are implemented as an extension to a general compositional data analysis tool known as ALDEx2 which is available on Bioconductor.
|
1609.03925
|
Yue Wu
|
Dongmei Li, Yue Wu, Panpan Wen, Weihua Liu
|
The qualitative analysis of the impact of media delay on the control of
infectious disease
|
19 pages, 7 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we consider the impact of time delay by media on the control
of the disease. We set up a class of SISM epidemic model with the time delay
and the cumulative density of awareness caused by media. The sufficient
condition for global asymptotic stability of the disease-free equilibrium is
proved. We obtain the global stability of the epidemic equilibrium and the
existence conditions for a Hopf bifurcation. Numerical simulations are
presented to illustrate the analytical results. Finally, we analyze the
influence of parameters on the control of the infectious disease using H1N1
data. By shortening the media lag and increasing the transmission rate of media
and the implementation rate of the media project, the spread of the disease can
be controlled effectively.
|
[
{
"created": "Mon, 12 Sep 2016 01:28:11 GMT",
"version": "v1"
}
] |
2016-09-14
|
[
[
"Li",
"Dongmei",
""
],
[
"Wu",
"Yue",
""
],
[
"Wen",
"Panpan",
""
],
[
"Liu",
"Weihua",
""
]
] |
In this paper, we consider the impact of time delay by media on the control of the disease. We set up a class of SISM epidemic model with the time delay and the cumulative density of awareness caused by media. The sufficient condition for global asymptotic stability of the disease-free equilibrium is proved. We obtain the global stability of the epidemic equilibrium and the existence conditions for a Hopf bifurcation. Numerical simulations are presented to illustrate the analytical results. Finally, we analyze the influence of parameters on the control of the infectious disease using H1N1 data. By shortening the media lag and increasing the transmission rate of media and the implementation rate of the media project, the spread of the disease can be controlled effectively.
|
q-bio/0509028
|
Martin Huber
|
Martin Tobias Huber and Hans Albert Braun
|
Stimulus - response curves of a neuronal model for noisy subthreshold
oscillations and related spike generation
|
19 pages, 7 figures
| null |
10.1103/PhysRevE.73.041929
| null |
q-bio.NC
| null |
We investigate the stimulus-dependent tuning properties of a noisy ionic
conductance model for intrinsic subthreshold oscillations in membrane potential
and associated spike generation. On depolarization by an applied current, the
model exhibits subthreshold oscillatory activity with occasional spike
generation when oscillations reach the spike threshold. We consider how the
amount of applied current, the noise intensity, variation of maximum
conductance values and scaling to different temperature ranges alter the
responses of the model with respect to voltage traces, interspike intervals and
their statistics and the mean spike frequency curves. We demonstrate that
subthreshold oscillatory neurons in the presence of noise can sensitively and
also selectively be tuned by stimulus-dependent variation of model parameters.
|
[
{
"created": "Thu, 22 Sep 2005 15:40:24 GMT",
"version": "v1"
}
] |
2009-11-11
|
[
[
"Huber",
"Martin Tobias",
""
],
[
"Braun",
"Hans Albert",
""
]
] |
We investigate the stimulus-dependent tuning properties of a noisy ionic conductance model for intrinsic subthreshold oscillations in membrane potential and associated spike generation. On depolarization by an applied current, the model exhibits subthreshold oscillatory activity with occasional spike generation when oscillations reach the spike threshold. We consider how the amount of applied current, the noise intensity, variation of maximum conductance values and scaling to different temperature ranges alter the responses of the model with respect to voltage traces, interspike intervals and their statistics and the mean spike frequency curves. We demonstrate that subthreshold oscillatory neurons in the presence of noise can sensitively and also selectively be tuned by stimulus-dependent variation of model parameters.
|
2111.05855
|
Takuma Usuzaki Dr
|
Takuma Usuzaki, Kengo Takahash, Kazuma Umemiya
|
A new radiomics feature: image frequency analysis
|
4 pages, 5 figures
| null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Radiomics is a promising technology that focuses on improvements of image
analysis, using an automated high-throughput extraction of quantitative
features. However, the character of a lesion is affected by the surrounding
tissue. A lesion on a medical image should be characterized by the
inter-relation between the lesion and the surrounding tissue as well as by the
properties of the lesion itself. The aim of this study is to introduce a new
radiomics feature which quantitatively analyzes the inter-relation between a
lesion and the surrounding tissue, focusing on the value changes of rows and
columns in a medical image.
|
[
{
"created": "Wed, 10 Nov 2021 09:44:45 GMT",
"version": "v1"
}
] |
2021-11-12
|
[
[
"Usuzaki",
"Takuma",
""
],
[
"Takahash",
"Kengo",
""
],
[
"Umemiya",
"Kazuma",
""
]
] |
Radiomics is a promising technology that focuses on improvements of image analysis, using an automated high-throughput extraction of quantitative features. However, the character of a lesion is affected by the surrounding tissue. A lesion on a medical image should be characterized by the inter-relation between the lesion and the surrounding tissue as well as by the properties of the lesion itself. The aim of this study is to introduce a new radiomics feature which quantitatively analyzes the inter-relation between a lesion and the surrounding tissue, focusing on the value changes of rows and columns in a medical image.
|
1502.04591
|
Yintao Wang
|
Yintao Wang, Hao He, Shaoyang Wang, Yaohui Liu, Minglie Hu, Youjia
Cao, Chingyue Wang
|
Photostimulation activates restorable fragmentation of single
mitochondrion by initiating oxide flashes
| null | null | null | null |
q-bio.SC physics.optics q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mitochondrial research is important to ageing, apoptosis, and mitochondrial
diseases. In previous works, mitochondria are usually stimulated indirectly by
proapoptotic drugs to study mitochondrial development, an approach that lacks
controllability as well as spatial and temporal resolution. These chemicals,
or even gene techniques regulating mitochondrial dynamics, may also activate
other inter- or intra-cellular processes simultaneously. Here we demonstrate a
photostimulation method at the single-mitochondrion level using a tightly
focused femtosecond laser that can precisely activate restorable fragmentation
of mitochondria, which soon recover their original tubular structure after
tens of seconds. In this process, a series of mitochondrial reactive oxygen
species (mROS) flashes is observed and found to be critical to mitochondrial
fragmentation. Meanwhile, transient openings of the mitochondrial permeability
transition pore (mPTP), suggested by oscillations of the mitochondrial
membrane potential, contribute to the scavenging of redundant mROS and the
recovery of fragmented mitochondria. These results demonstrate
photostimulation as an active, precise and controllable method for the study
of mitochondrial oxidative and morphological dynamics and related fields.
|
[
{
"created": "Mon, 16 Feb 2015 16:06:21 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Feb 2015 13:54:30 GMT",
"version": "v2"
}
] |
2015-02-18
|
[
[
"Wang",
"Yintao",
""
],
[
"He",
"Hao",
""
],
[
"Wang",
"Shaoyang",
""
],
[
"Liu",
"Yaohui",
""
],
[
"Hu",
"Minglie",
""
],
[
"Cao",
"Youjia",
""
],
[
"Wang",
"Chingyue",
""
]
] |
Mitochondrial research is important to ageing, apoptosis, and mitochondrial diseases. In previous works, mitochondria are usually stimulated indirectly by proapoptotic drugs to study mitochondrial development, an approach that lacks controllability as well as spatial and temporal resolution. These chemicals, or even gene techniques regulating mitochondrial dynamics, may also activate other inter- or intra-cellular processes simultaneously. Here we demonstrate a photostimulation method at the single-mitochondrion level using a tightly focused femtosecond laser that can precisely activate restorable fragmentation of mitochondria, which soon recover their original tubular structure after tens of seconds. In this process, a series of mitochondrial reactive oxygen species (mROS) flashes is observed and found to be critical to mitochondrial fragmentation. Meanwhile, transient openings of the mitochondrial permeability transition pore (mPTP), suggested by oscillations of the mitochondrial membrane potential, contribute to the scavenging of redundant mROS and the recovery of fragmented mitochondria. These results demonstrate photostimulation as an active, precise and controllable method for the study of mitochondrial oxidative and morphological dynamics and related fields.
|
2312.10519
|
Vishal Rana
|
Vishal Rana, Jianhao Peng, Chao Pan, Hanbaek Lyu, Albert Cheng, Minji
Kim, Olgica Milenkovic
|
Interpretable Online Network Dictionary Learning for Inferring
Long-Range Chromatin Interactions
| null | null | null | null |
q-bio.GN q-bio.QM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Dictionary learning (DL) is commonly used in computational biology to tackle
ubiquitous clustering problems due to its conceptual simplicity and relatively
low computational complexity. However, DL algorithms produce results that lack
interpretability and are not optimized for large-scale graph-structured data.
We propose a novel DL algorithm called online convex network dictionary
learning (online cvxNDL) that can handle extremely large datasets and enables
the interpretation of dictionary elements, which serve as cluster
representatives, through convex combinations of real measurements. Moreover,
the algorithm can be applied to network-structured data via specialized
subnetwork sampling techniques.
To demonstrate the utility of our approach, we apply cvxNDL on 3D-genome
RNAPII ChIA-Drop data to identify important long-range interaction patterns.
ChIA-Drop probes higher-order interactions, and produces hypergraphs whose
nodes represent genomic fragments. The hyperedges represent observed physical
contacts. Our hypergraph model analysis creates an interpretable dictionary of
long-range interaction patterns that accurately represent global chromatin
physical contact maps. Using dictionary information, one can also associate the
contact maps with RNA transcripts and infer cellular functions.
Our results offer two key insights. First, we demonstrate that online cvxNDL
retains the accuracy of classical DL methods while simultaneously ensuring
unique interpretability and scalability. Second, we identify distinct
collections of proximal and distal interaction patterns involving chromatin
elements shared by related processes across different chromosomes, as well as
patterns unique to specific chromosomes. To associate the dictionary elements
with biological properties of the corresponding chromatin regions, we employ
Gene Ontology enrichment analysis and perform RNA coexpression studies.
|
[
{
"created": "Sat, 16 Dec 2023 18:47:09 GMT",
"version": "v1"
}
] |
2023-12-19
|
[
[
"Rana",
"Vishal",
""
],
[
"Peng",
"Jianhao",
""
],
[
"Pan",
"Chao",
""
],
[
"Lyu",
"Hanbaek",
""
],
[
"Cheng",
"Albert",
""
],
[
"Kim",
"Minji",
""
],
[
"Milenkovic",
"Olgica",
""
]
] |
Dictionary learning (DL) is commonly used in computational biology to tackle ubiquitous clustering problems due to its conceptual simplicity and relatively low computational complexity. However, DL algorithms produce results that lack interpretability and are not optimized for large-scale graph-structured data. We propose a novel DL algorithm called online convex network dictionary learning (online cvxNDL) that can handle extremely large datasets and enables the interpretation of dictionary elements, which serve as cluster representatives, through convex combinations of real measurements. Moreover, the algorithm can be applied to network-structured data via specialized subnetwork sampling techniques. To demonstrate the utility of our approach, we apply cvxNDL on 3D-genome RNAPII ChIA-Drop data to identify important long-range interaction patterns. ChIA-Drop probes higher-order interactions, and produces hypergraphs whose nodes represent genomic fragments. The hyperedges represent observed physical contacts. Our hypergraph model analysis creates an interpretable dictionary of long-range interaction patterns that accurately represent global chromatin physical contact maps. Using dictionary information, one can also associate the contact maps with RNA transcripts and infer cellular functions. Our results offer two key insights. First, we demonstrate that online cvxNDL retains the accuracy of classical DL methods while simultaneously ensuring unique interpretability and scalability. Second, we identify distinct collections of proximal and distal interaction patterns involving chromatin elements shared by related processes across different chromosomes, as well as patterns unique to specific chromosomes. To associate the dictionary elements with biological properties of the corresponding chromatin regions, we employ Gene Ontology enrichment analysis and perform RNA coexpression studies.
|
2201.02869
|
Wesley Cota
|
Arthur Schulenburg, Wesley Cota, Guilherme S. Costa, Silvio C.
Ferreira
|
Effects of infection fatality ratio and social contact matrices on
vaccine prioritization strategies
|
13 pages, 10 figures, 2 tables. Accepted in Chaos: An
Interdisciplinary Journal of Nonlinear Science
|
Chaos 32, 093102 (2022)
|
10.1063/5.0096532
| null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Effective strategies of vaccine prioritization are essential to mitigate the
impacts of severe infectious diseases. We investigate the role of infection
fatality ratio (IFR) and social contact matrices on vaccination prioritization
using a compartmental epidemic model fueled by real-world data of different
diseases and countries. Our study confirms that massive and early vaccination
is extremely effective to reduce the disease fatality if the contagion is
mitigated, but the effectiveness is increasingly reduced as the start of
vaccination is delayed in an uncontrolled epidemiological scenario. The optimal and
least effective prioritization strategies depend non-linearly on
epidemiological variables. Regions of the epidemiological parameter space, in
which prioritizing the most vulnerable population is more effective than the
most contagious individuals, depend strongly on the IFR age profile being, for
example, substantially broader for COVID-19 in comparison with seasonal
influenza. Demographics and social contact matrices deform the phase diagrams
but do not alter their qualitative shapes.
|
[
{
"created": "Sat, 8 Jan 2022 17:57:44 GMT",
"version": "v1"
},
{
"created": "Sun, 7 Aug 2022 13:21:54 GMT",
"version": "v2"
}
] |
2023-10-04
|
[
[
"Schulenburg",
"Arthur",
""
],
[
"Cota",
"Wesley",
""
],
[
"Costa",
"Guilherme S.",
""
],
[
"Ferreira",
"Silvio C.",
""
]
] |
Effective strategies of vaccine prioritization are essential to mitigate the impacts of severe infectious diseases. We investigate the role of infection fatality ratio (IFR) and social contact matrices on vaccination prioritization using a compartmental epidemic model fueled by real-world data of different diseases and countries. Our study confirms that massive and early vaccination is extremely effective to reduce the disease fatality if the contagion is mitigated, but the effectiveness is increasingly reduced as the start of vaccination is delayed in an uncontrolled epidemiological scenario. The optimal and least effective prioritization strategies depend non-linearly on epidemiological variables. Regions of the epidemiological parameter space, in which prioritizing the most vulnerable population is more effective than the most contagious individuals, depend strongly on the IFR age profile being, for example, substantially broader for COVID-19 in comparison with seasonal influenza. Demographics and social contact matrices deform the phase diagrams but do not alter their qualitative shapes.
|
2012.08667
|
Luca Faes
|
Gorana Mijatovic, Yuri Antonacci, Tatjana Loncar-Turukalo, Ludovico
Minati and Luca Faes
|
An information-theoretic framework to measure the dynamic interaction
between neural spike trains
|
12 pages, 7 figures; supplementary material at
www.lucafaes.net/TEMI.html
| null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Understanding the interaction patterns among simultaneous recordings of spike
trains from multiple neuronal units is a key topic in neuroscience. However, an
optimal approach of assessing these interactions has not been established, as
existing methods either do not consider the inherent point process nature of
spike trains or are based on parametric assumptions that may lead to wrong
inferences if not met. This work presents a framework, grounded in the field of
information dynamics, for the model-free, continuous-time estimation of both
undirected (symmetric) and directed (causal) interactions between pairs of
spike trains. The framework decomposes the overall information exchanged
dynamically between two point processes X and Y as the sum of the dynamic
mutual information (dMI) between the histories of X and Y, plus the transfer
entropy (TE) along the directions X->Y and Y->X. Building on recent work which
derived theoretical expressions and consistent estimators for the TE in
continuous time, we develop algorithms for estimating efficiently all measures
in our framework through nearest neighbor statistics. These algorithms are
validated in simulations of independent and coupled spike train processes,
showing the accuracy of dMI and TE in the assessment of undirected and directed
interactions even for weakly coupled and short realizations, and proving the
superiority of the continuous-time estimator over the discrete-time method.
Then, the usefulness of the framework is illustrated in a real data scenario of
recordings from in-vitro preparations of spontaneously-growing cultures of
cortical neurons, where we show the ability of dMI and TE to identify how the
networks of undirected and directed spike train interactions change their
topology through maturation of the neuronal cultures.
|
[
{
"created": "Tue, 15 Dec 2020 23:27:24 GMT",
"version": "v1"
}
] |
2020-12-17
|
[
[
"Mijatovic",
"Gorana",
""
],
[
"Antonacci",
"Yuri",
""
],
[
"Loncar-Turukalo",
"Tatjana",
""
],
[
"Minati",
"Ludovico",
""
],
[
"Faes",
"Luca",
""
]
] |
Understanding the interaction patterns among simultaneous recordings of spike trains from multiple neuronal units is a key topic in neuroscience. However, an optimal approach of assessing these interactions has not been established, as existing methods either do not consider the inherent point process nature of spike trains or are based on parametric assumptions that may lead to wrong inferences if not met. This work presents a framework, grounded in the field of information dynamics, for the model-free, continuous-time estimation of both undirected (symmetric) and directed (causal) interactions between pairs of spike trains. The framework decomposes the overall information exchanged dynamically between two point processes X and Y as the sum of the dynamic mutual information (dMI) between the histories of X and Y, plus the transfer entropy (TE) along the directions X->Y and Y->X. Building on recent work which derived theoretical expressions and consistent estimators for the TE in continuous time, we develop algorithms for estimating efficiently all measures in our framework through nearest neighbor statistics. These algorithms are validated in simulations of independent and coupled spike train processes, showing the accuracy of dMI and TE in the assessment of undirected and directed interactions even for weakly coupled and short realizations, and proving the superiority of the continuous-time estimator over the discrete-time method. Then, the usefulness of the framework is illustrated in a real data scenario of recordings from in-vitro preparations of spontaneously-growing cultures of cortical neurons, where we show the ability of dMI and TE to identify how the networks of undirected and directed spike train interactions change their topology through maturation of the neuronal cultures.
|
2203.12290
|
Renata Rychtarikova
|
Ali Ghaznavi, Renata Rychtarikova, Mohammadmehdi Saberioon, Dalibor
Stys
|
Cell segmentation from telecentric bright-field transmitted light
microscopy images using a Residual Attention U-Net: a case study on HeLa line
|
32 pages, 7 figures
|
Computers in Biology and Medicine, 105805 (2022)
|
10.1016/j.compbiomed.2022.105805
| null |
q-bio.QM cs.CV eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Living cell segmentation from bright-field light microscopy images is
challenging due to the image complexity and temporal changes in the living
cells. Recently developed deep learning (DL)-based methods became popular in
medical and microscopy image segmentation tasks due to their success and
promising outcomes. The main objective of this paper is to develop a deep
learning, U-Net-based method to segment the living cells of the HeLa line in
bright-field transmitted light microscopy. To find the most suitable
architecture for our datasets, a residual attention U-Net was proposed and
compared with an attention and a simple U-Net architecture.
The attention mechanism highlights the remarkable features and suppresses
activations in the irrelevant image regions. The residual mechanism overcomes
the vanishing gradient problem. The Mean-IoU score for our datasets reaches
0.9505, 0.9524, and 0.9530 for the simple, attention, and residual attention
U-Net, respectively. The most accurate semantic segmentation result was
achieved in the Mean-IoU and Dice metrics by applying the residual and
attention mechanisms together. The watershed method applied to this best --
Residual Attention -- semantic segmentation result gave the segmentation with
the specific information for each cell.
|
[
{
"created": "Wed, 23 Mar 2022 09:20:30 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Jun 2022 08:33:37 GMT",
"version": "v2"
},
{
"created": "Wed, 6 Jul 2022 14:08:18 GMT",
"version": "v3"
}
] |
2022-07-07
|
[
[
"Ghaznavi",
"Ali",
""
],
[
"Rychtarikova",
"Renata",
""
],
[
"Saberioon",
"Mohammadmehdi",
""
],
[
"Stys",
"Dalibor",
""
]
] |
Living cell segmentation from bright-field light microscopy images is challenging due to the image complexity and temporal changes in the living cells. Recently developed deep learning (DL)-based methods became popular in medical and microscopy image segmentation tasks due to their success and promising outcomes. The main objective of this paper is to develop a deep learning, U-Net-based method to segment the living cells of the HeLa line in bright-field transmitted light microscopy. To find the most suitable architecture for our datasets, a residual attention U-Net was proposed and compared with an attention and a simple U-Net architecture. The attention mechanism highlights the remarkable features and suppresses activations in the irrelevant image regions. The residual mechanism overcomes the vanishing gradient problem. The Mean-IoU score for our datasets reaches 0.9505, 0.9524, and 0.9530 for the simple, attention, and residual attention U-Net, respectively. The most accurate semantic segmentation result was achieved in the Mean-IoU and Dice metrics by applying the residual and attention mechanisms together. The watershed method applied to this best -- Residual Attention -- semantic segmentation result gave the segmentation with the specific information for each cell.
|
0902.4692
|
Michael McGuigan
|
Michael McGuigan
|
The Tiger and the Sun: Solar Power Plants and Wildlife Sanctuaries
|
19 pages, 4 tables, 9 figures
| null | null | null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We discuss separate and integrated approaches to building scalable solar
power plants and wildlife sanctuaries. Both solar power plants and wildlife
sanctuaries need a lot of land. We quantify some of the requirements using
various estimates of the rate of solar power production as well as the rate of
adding wildlife to a sanctuary over the time range 2010-2050. We use population
dynamics equations to study the evolution of solar energy and tiger populations
up to and beyond 2050.
|
[
{
"created": "Thu, 26 Feb 2009 20:33:23 GMT",
"version": "v1"
}
] |
2009-02-27
|
[
[
"McGuigan",
"Michael",
""
]
] |
We discuss separate and integrated approaches to building scalable solar power plants and wildlife sanctuaries. Both solar power plants and wildlife sanctuaries need a lot of land. We quantify some of the requirements using various estimates of the rate of solar power production as well as the rate of adding wildlife to a sanctuary over the time range 2010-2050. We use population dynamics equations to study the evolution of solar energy and tiger populations up to and beyond 2050.
|
0712.0846
|
Kai Miller
|
Kai J. Miller, Larry B. Sorensen, Jeffrey G. Ojemann, Marcel den Nijs
|
ECoG observations of power-law scaling in the human cortex
|
4 pages, 4 figures
| null | null | null |
q-bio.NC cond-mat.other
| null |
We report the results of our search for power-law electrical signals in the
human brain, using subdural electrocorticographic recordings from the surface
of the cortex. The power spectral density (PSD) of these signals has the
power-law form $ P(f)\sim f^{-\chi} $ from 80 to 500 Hz. This scaling index
$\chi = 4.0\pm 0.1$ is universal, across subjects, area in the cortex, and
local neural activity levels. The shape of the PSD does not change with local
cortex activity, only its amplitude increases. We observe a knee in the spectra
at $f_0\simeq 70$ Hz, implying the existence of a characteristic time scale
$\tau=(2\pi f_0)^{-1}\simeq 2-4$ msec. For $f<f_0$ we find evidence for a
power-law with $\chi_L\simeq 2.0\pm 0.4$.
|
[
{
"created": "Wed, 5 Dec 2007 23:04:04 GMT",
"version": "v1"
}
] |
2007-12-11
|
[
[
"Miller",
"Kai J.",
""
],
[
"Sorensen",
"Larry B.",
""
],
[
"Ojemann",
"Jeffrey G.",
""
],
[
"Nijs",
"Marcel den",
""
]
] |
We report the results of our search for power-law electrical signals in the human brain, using subdural electrocorticographic recordings from the surface of the cortex. The power spectral density (PSD) of these signals has the power-law form $ P(f)\sim f^{-\chi} $ from 80 to 500 Hz. This scaling index $\chi = 4.0\pm 0.1$ is universal, across subjects, area in the cortex, and local neural activity levels. The shape of the PSD does not change with local cortex activity, only its amplitude increases. We observe a knee in the spectra at $f_0\simeq 70$ Hz, implying the existence of a characteristic time scale $\tau=(2\pi f_0)^{-1}\simeq 2-4$ msec. For $f<f_0$ we find evidence for a power-law with $\chi_L\simeq 2.0\pm 0.4$.
|
2010.01260
|
Hong-Li Zeng
|
Hong-Li Zeng, Vito Dichio, Edwin Rodr\'iguez Horta, Kaisa Thorell, and
Erik Aurell
|
Global analysis of more than 50,000 SARS-Cov-2 genomes reveals epistasis
between 8 viral genes
|
22 pages, 11 figures
| null |
10.1073/pnas.2012331117
| null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Genome-wide epistasis analysis is a powerful tool to infer gene interactions,
which can guide drug and vaccine development and lead to a deeper understanding
of microbial pathogenesis. We have considered all complete SARS-CoV-2 genomes
deposited in the GISAID repository until four different cut-off dates,
and used Direct Coupling Analysis together with an assumption of Quasi-Linkage
Equilibrium to infer epistatic contributions to fitness from polymorphic loci.
We find eight interactions, of which three are between pairs where one
locus lies in gene ORF3a, both loci holding non-synonymous mutations. We also
find interactions between two loci in gene nsp13, both holding non-synonymous
mutations, and four interactions involving one locus holding a synonymous
mutation. Altogether we infer interactions between loci in viral genes ORF3a
and nsp2, nsp12 and nsp6, between ORF8 and nsp4, and between loci in genes
nsp2, nsp13 and nsp14. The paper opens the prospect to use prominent
epistatically linked pairs as a starting point to search for combinatorial
weaknesses of recombinant viral pathogens.
|
[
{
"created": "Sat, 3 Oct 2020 02:19:29 GMT",
"version": "v1"
}
] |
2022-05-18
|
[
[
"Zeng",
"Hong-Li",
""
],
[
"Dichio",
"Vito",
""
],
[
"Horta",
"Edwin Rodríguez",
""
],
[
"Thorell",
"Kaisa",
""
],
[
"Aurell",
"Erik",
""
]
] |
Genome-wide epistasis analysis is a powerful tool to infer gene interactions, which can guide drug and vaccine development and lead to a deeper understanding of microbial pathogenesis. We have considered all complete SARS-CoV-2 genomes deposited in the GISAID repository until four different cut-off dates, and used Direct Coupling Analysis together with an assumption of Quasi-Linkage Equilibrium to infer epistatic contributions to fitness from polymorphic loci. We find eight interactions, of which three are between pairs where one locus lies in gene ORF3a, both loci holding non-synonymous mutations. We also find interactions between two loci in gene nsp13, both holding non-synonymous mutations, and four interactions involving one locus holding a synonymous mutation. Altogether we infer interactions between loci in viral genes ORF3a and nsp2, nsp12 and nsp6, between ORF8 and nsp4, and between loci in genes nsp2, nsp13 and nsp14. The paper opens the prospect to use prominent epistatically linked pairs as a starting point to search for combinatorial weaknesses of recombinant viral pathogens.
|
2209.06948
|
Thor Andreassen
|
Thor E. Andreassen, Donald R. Hume, Landon D. Hamilton, Sean E.
Higinbotham, Kevin B. Shelburne
|
Automated 2D and 3D Finite Element Overclosure Adjustment and Mesh
Morphing Using Generalized Regression Neural Networks
|
Updates were made to the original article to include a new algorithm
capable of rapid morphing of generated meshes, and the validation included.
New Paper total is 40 Pages, 10 Figures, 4 Tables
|
Automated 2D and 3D finite element overclosure adjustment and mesh
morphing using generalized regression neural networks, Medical Engineering &
Physics, Volume 126, 2024, 104136, ISSN 1350-4533
|
10.1016/j.medengphy.2024.104136
| null |
q-bio.QM cs.NA math.NA
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Computer representations of three-dimensional (3D) geometries are crucial for
simulating systems and processes in engineering and science. In medicine, and
more specifically, biomechanics and orthopaedics, obtaining and using 3D
geometries is critical to many workflows. However, while many tools exist to
obtain 3D geometries of organic structures, little has been done to make them
usable for their intended medical purposes. Furthermore, many of the proposed
tools are proprietary, limiting their use. This work introduces two novel
algorithms based on Generalized Regression Neural Networks (GRNN) and 4
processes to perform mesh morphing and overclosure adjustment. These algorithms
were implemented, and test cases were used to validate them against existing
algorithms to demonstrate improved performance. The resulting algorithms
demonstrate improvements to existing techniques based on Radial Basis Function
(RBF) networks by converting to GRNN-based implementations. Implementations in
MATLAB of these algorithms and the source code are publicly available at the
following locations:
https://github.com/thor-andreassen/femors
https://simtk.org/projects/femors-rbf
https://www.mathworks.com/matlabcentral/fileexchange/120353-finite-element-morphing-overclosure-reduction-and-slicing
|
[
{
"created": "Wed, 14 Sep 2022 21:48:55 GMT",
"version": "v1"
},
{
"created": "Wed, 19 Apr 2023 17:35:18 GMT",
"version": "v2"
},
{
"created": "Mon, 9 Oct 2023 20:07:38 GMT",
"version": "v3"
}
] |
2024-04-05
|
[
[
"Andreassen",
"Thor E.",
""
],
[
"Hume",
"Donald R.",
""
],
[
"Hamilton",
"Landon D.",
""
],
[
"Higinbotham",
"Sean E.",
""
],
[
"Shelburne",
"Kevin B.",
""
]
] |
Computer representations of three-dimensional (3D) geometries are crucial for simulating systems and processes in engineering and science. In medicine, and more specifically, biomechanics and orthopaedics, obtaining and using 3D geometries is critical to many workflows. However, while many tools exist to obtain 3D geometries of organic structures, little has been done to make them usable for their intended medical purposes. Furthermore, many of the proposed tools are proprietary, limiting their use. This work introduces two novel algorithms based on Generalized Regression Neural Networks (GRNN) and 4 processes to perform mesh morphing and overclosure adjustment. These algorithms were implemented, and test cases were used to validate them against existing algorithms to demonstrate improved performance. The resulting algorithms demonstrate improvements to existing techniques based on Radial Basis Function (RBF) networks by converting to GRNN-based implementations. Implementations in MATLAB of these algorithms and the source code are publicly available at the following locations: https://github.com/thor-andreassen/femors https://simtk.org/projects/femors-rbf https://www.mathworks.com/matlabcentral/fileexchange/120353-finite-element-morphing-overclosure-reduction-and-slicing
|
2110.11914
|
Meghdad Saeedian
|
Meghdad Saeedian, Emanuele Pigani, Amos Maritan, Samir Suweis, and
Sandro Azaele
|
Effect of delay on the emergent stability patterns in Generalized
Lotka-Volterra ecological dynamics
| null | null | null | null |
q-bio.PE physics.bio-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Understanding the conditions of feasibility and stability in ecological
systems is a major challenge in theoretical ecology. The seminal work of May in
1972 and recent developments based on the theory of random matrices have shown
the existence of emergent universal patterns of both stability and feasibility
in ecological dynamics. However, only a few studies have investigated the role
of delay coupled with population dynamics in the emergence of feasible and
stable states. In this work, we study the effects of delay on Generalized
Lotka-Volterra population dynamics of several interacting species in closed
ecological environments. First, we investigate the relation between feasibility
and stability of the modeled ecological community in the absence of delay and
find a simple analytical relation when intra-species interactions are dominant.
We then show how, by increasing the time delay, there is a transition in the
stability phases of the population dynamics: from an equilibrium state to a
stable non-point attractor phase. We calculate analytically the critical delay
of that transition and show that it is in excellent agreement with numerical
simulations. Finally, we introduce a measure of stability that holds for out of
equilibrium dynamics and we show that in the oscillatory regime induced by the
delay stability increases for increasing ecosystem diversity.
|
[
{
"created": "Fri, 22 Oct 2021 16:56:53 GMT",
"version": "v1"
}
] |
2021-10-25
|
[
[
"Saeedian",
"Meghdad",
""
],
[
"Pigani",
"Emanuele",
""
],
[
"Maritan",
"Amos",
""
],
[
"Suweis",
"Samir",
""
],
[
"Azaele",
"Sandro",
""
]
] |
Understanding the conditions of feasibility and stability in ecological systems is a major challenge in theoretical ecology. The seminal work of May in 1972 and recent developments based on the theory of random matrices have shown the existence of emergent universal patterns of both stability and feasibility in ecological dynamics. However, only a few studies have investigated the role of delay coupled with population dynamics in the emergence of feasible and stable states. In this work, we study the effects of delay on Generalized Lotka-Volterra population dynamics of several interacting species in closed ecological environments. First, we investigate the relation between feasibility and stability of the modeled ecological community in the absence of delay and find a simple analytical relation when intra-species interactions are dominant. We then show how, by increasing the time delay, there is a transition in the stability phases of the population dynamics: from an equilibrium state to a stable non-point attractor phase. We calculate analytically the critical delay of that transition and show that it is in excellent agreement with numerical simulations. Finally, we introduce a measure of stability that holds for out of equilibrium dynamics and we show that in the oscillatory regime induced by the delay stability increases for increasing ecosystem diversity.
|
1911.09268
|
Michel Pleimling
|
Ryan Baker and Michel Pleimling
|
The effect of habitats and fitness on species coexistence in systems
with cyclic dominance
|
21 pages, 9 figures, accepted for publication in Journal of
Theoretical Biology
|
J. Theor. Biol. 486, 110084 (2020)
|
10.1016/j.jtbi.2019.110084
| null |
q-bio.PE cond-mat.stat-mech
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cyclic dominance between species may yield spiral waves that are known to
provide a mechanism enabling persistent species coexistence. This observation
holds true even in presence of spatial heterogeneity in the form of quenched
disorder. In this work we study the effects on spatio-temporal patterns and
species coexistence of structured spatial heterogeneity in the form of habitats
that locally provide one of the species with an advantage. Performing extensive
numerical simulations of systems with three and six species we show that these
structured habitats destabilize spiral waves. Analyzing extinction events, we
find that species extinction probabilities display a succession of maxima as
function of time, that indicate a periodically enhanced probability for species
extinction. Analysis of the mean extinction time reveals that as a function of
the parameter governing the advantage of one of the species a transition
between stable coexistence and unstable coexistence takes place. We also
investigate how efficiency as a predator or a prey affects species coexistence.
|
[
{
"created": "Thu, 21 Nov 2019 03:28:05 GMT",
"version": "v1"
}
] |
2020-02-05
|
[
[
"Baker",
"Ryan",
""
],
[
"Pleimling",
"Michel",
""
]
] |
Cyclic dominance between species may yield spiral waves that are known to provide a mechanism enabling persistent species coexistence. This observation holds true even in presence of spatial heterogeneity in the form of quenched disorder. In this work we study the effects on spatio-temporal patterns and species coexistence of structured spatial heterogeneity in the form of habitats that locally provide one of the species with an advantage. Performing extensive numerical simulations of systems with three and six species we show that these structured habitats destabilize spiral waves. Analyzing extinction events, we find that species extinction probabilities display a succession of maxima as function of time, that indicate a periodically enhanced probability for species extinction. Analysis of the mean extinction time reveals that as a function of the parameter governing the advantage of one of the species a transition between stable coexistence and unstable coexistence takes place. We also investigate how efficiency as a predator or a prey affects species coexistence.
|
1206.5557
|
Sepehr Ehsani
|
Sepehr Ehsani
|
Simple variation of the logistic map as a model to invoke questions on
cellular protein trafficking
|
11 pages, 6 figures, 1 table
| null | null | null |
q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Many open problems in biology, as in the physical sciences, display nonlinear
and 'chaotic' dynamics, which, to the extent possible, cannot be reasonably
understood. Moreover, mathematical models which aim to predict/estimate unknown
aspects of a biological system cannot provide more information about the set of
biologically meaningful (e.g., 'hidden') states of the system than could be
understood by the designer of the model ab initio. Here, the case is made for
the utilization of such models to shift from a 'predictive' to a 'questioning'
nature, and a simple natural-logarithm variation of the logistic polynomial map
is presented that can invoke questions about protein trafficking in eukaryotic
cells.
|
[
{
"created": "Mon, 25 Jun 2012 01:35:36 GMT",
"version": "v1"
}
] |
2012-06-26
|
[
[
"Ehsani",
"Sepehr",
""
]
] |
Many open problems in biology, as in the physical sciences, display nonlinear and 'chaotic' dynamics, which, to the extent possible, cannot be reasonably understood. Moreover, mathematical models which aim to predict/estimate unknown aspects of a biological system cannot provide more information about the set of biologically meaningful (e.g., 'hidden') states of the system than could be understood by the designer of the model ab initio. Here, the case is made for the utilization of such models to shift from a 'predictive' to a 'questioning' nature, and a simple natural-logarithm variation of the logistic polynomial map is presented that can invoke questions about protein trafficking in eukaryotic cells.
|
1802.01375
|
Yasser A. Ahmed
|
Yasser A. Ahmed and Mohammed Abdelsabour-Khalaf
|
Adipochondrocytes in rabbit auricular cartilage
|
Adipochondrocytes in rabbit auricular cartilage
| null | null | null |
q-bio.TO
|
http://creativecommons.org/publicdomain/zero/1.0/
|
Chondrocytes are described as one cell population in different cartilage
types. The auricular cartilage in mouse and rat contains unique chondrocytes
similar in morphology to white adipocytes and known as lipochondrocytes.
Lipochondrocytes were not mentioned in other species. The current study aimed
to explore the existence of this cell type in rabbits. The auricles of adult
male white rabbits were harvested and processed for histological examination
with light and electron microscopy. With the light microscopy, the auricular
cartilage of adult rabbits contained central large rounded adipocyte-like
chondrocytes, termed in the current study adipochondrocytes. The
adipochondrocytes were embedded in relatively wide lacunae and had large lipid
droplets with a rim of cytoplasm. The scanning electron microscopy confirmed
this result. With the transmission electron microscopy, the adipochondrocytes
showed dark nucleus and electron-dense cytoplasm with few organelles and
cytoplasmic processes. The adipochondrocytes of the auricular cartilage in
adult rabbits were a unique cell type and different from chondrocytes in other
cartilage subtypes. This result should be considered during cartilage
transplant. Further studies are suggested to investigate the development and
physiological roles of adipochondrocytes in the auricular cartilage in rabbits.
|
[
{
"created": "Mon, 5 Feb 2018 13:11:02 GMT",
"version": "v1"
}
] |
2018-02-06
|
[
[
"Ahmed",
"Yasser A.",
""
],
[
"Abdelsabour-Khalaf",
"Mohammed",
""
]
] |
Chondrocytes are described as one cell population in different cartilage types. The auricular cartilage in mouse and rat contains unique chondrocytes similar in morphology to white adipocytes and known as lipochondrocytes. Lipochondrocytes were not mentioned in other species. The current study aimed to explore the existence of this cell type in rabbits. The auricles of adult male white rabbits were harvested and processed for histological examination with light and electron microscopy. With the light microscopy, the auricular cartilage of adult rabbits contained central large rounded adipocyte-like chondrocytes, termed in the current study adipochondrocytes. The adipochondrocytes were embedded in relatively wide lacunae and had large lipid droplets with a rim of cytoplasm. The scanning electron microscopy confirmed this result. With the transmission electron microscopy, the adipochondrocytes showed dark nucleus and electron-dense cytoplasm with few organelles and cytoplasmic processes. The adipochondrocytes of the auricular cartilage in adult rabbits were a unique cell type and different from chondrocytes in other cartilage subtypes. This result should be considered during cartilage transplant. Further studies are suggested to investigate the development and physiological roles of adipochondrocytes in the auricular cartilage in rabbits.
|
1710.02600
|
Robert Endres
|
Robert G. Endres
|
Entropy production selects nonequilibrium states in multistable systems
|
15 pages, 4 figures
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Far-from-equilibrium thermodynamics underpins the emergence of life, but how
has been a long-outstanding puzzle. Best candidate theories based on the
maximum entropy production principle could not be unequivocally proven, in part
due to complicated physics, unintuitive stochastic thermodynamics, and the
existence of alternative theories such as the minimum entropy production
principle. Here, we use a simple, analytically solvable, one-dimensional
bistable chemical system to demonstrate the validity of the maximum entropy
production principle. To generalize to multistable stochastic systems, we use
the stochastic least-action principle to derive the entropy production and its
role in the stability of nonequilibrium steady states. This shows that in a
multistable system, all else being equal, the steady state with the highest
entropy production is favored, with a number of implications for the evolution
of biological, physical, and geological systems.
|
[
{
"created": "Fri, 6 Oct 2017 22:15:08 GMT",
"version": "v1"
}
] |
2017-10-10
|
[
[
"Endres",
"Robert G.",
""
]
] |
Far-from-equilibrium thermodynamics underpins the emergence of life, but how has been a long-outstanding puzzle. Best candidate theories based on the maximum entropy production principle could not be unequivocally proven, in part due to complicated physics, unintuitive stochastic thermodynamics, and the existence of alternative theories such as the minimum entropy production principle. Here, we use a simple, analytically solvable, one-dimensional bistable chemical system to demonstrate the validity of the maximum entropy production principle. To generalize to multistable stochastic systems, we use the stochastic least-action principle to derive the entropy production and its role in the stability of nonequilibrium steady states. This shows that in a multistable system, all else being equal, the steady state with the highest entropy production is favored, with a number of implications for the evolution of biological, physical, and geological systems.
|
2302.00104
|
Xizhe Zhang
|
Xinru Wei, Shuai Dong, Zhao Su, Lili Tang, Pengfei Zhao, Chunyu Pan,
Fei Wang, Yanqing Tang, Weixiong Zhang, Xizhe Zhang
|
NetMoST: A network-based machine learning approach for subtyping
schizophrenia using polygenic SNP allele biomarkers
|
21 pages,4 figures
| null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Subtyping neuropsychiatric disorders like schizophrenia is essential for
improving the diagnosis and treatment of complex diseases. Subtyping
schizophrenia is challenging because it is polygenic and genetically
heterogeneous, rendering the standard symptom-based diagnosis often unreliable
and unrepeatable. We developed a novel network-based machine-learning approach,
netMoST, to subtyping psychiatric disorders. NetMoST identifies polygenic risk
SNP-allele modules from genome-wide genotyping data as polygenic haplotype
biomarkers (PHBs) for disease subtyping. We applied netMoST to subtype a cohort
of schizophrenia subjects into three distinct biotypes with differentiable
genetic, neuroimaging and functional characteristics. The PHBs of the first
biotype (36.9% of all patients) were related to neurodevelopment and cognition,
the PHBs of the second biotype (28.4%) were enriched for neuroimmune functions,
and the PHBs of the third biotype (34.7%) were associated with the transport of
calcium ions and neurotransmitters. Neuroimaging patterns provided additional
support to the new biotypes, with unique regional homogeneity (ReHo) patterns
observed in the brains of each biotype compared with healthy controls. Our
findings demonstrated netMoST's capability for uncovering novel biotypes of
complex diseases such as schizophrenia. The results also showed the power of
exploring polygenic allelic patterns that transcend the conventional GWAS
approaches.
|
[
{
"created": "Tue, 31 Jan 2023 21:11:27 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Mar 2023 10:08:07 GMT",
"version": "v2"
}
] |
2023-03-13
|
[
[
"Wei",
"Xinru",
""
],
[
"Dong",
"Shuai",
""
],
[
"Su",
"Zhao",
""
],
[
"Tang",
"Lili",
""
],
[
"Zhao",
"Pengfei",
""
],
[
"Pan",
"Chunyu",
""
],
[
"Wang",
"Fei",
""
],
[
"Tang",
"Yanqing",
""
],
[
"Zhang",
"Weixiong",
""
],
[
"Zhang",
"Xizhe",
""
]
] |
Subtyping neuropsychiatric disorders like schizophrenia is essential for improving the diagnosis and treatment of complex diseases. Subtyping schizophrenia is challenging because it is polygenic and genetically heterogeneous, rendering the standard symptom-based diagnosis often unreliable and unrepeatable. We developed a novel network-based machine-learning approach, netMoST, for subtyping psychiatric disorders. NetMoST identifies polygenic risk SNP-allele modules from genome-wide genotyping data as polygenic haplotype biomarkers (PHBs) for disease subtyping. We applied netMoST to subtype a cohort of schizophrenia subjects into three distinct biotypes with differentiable genetic, neuroimaging and functional characteristics. The PHBs of the first biotype (36.9% of all patients) were related to neurodevelopment and cognition, the PHBs of the second biotype (28.4%) were enriched for neuroimmune functions, and the PHBs of the third biotype (34.7%) were associated with the transport of calcium ions and neurotransmitters. Neuroimaging patterns provided additional support to the new biotypes, with unique regional homogeneity (ReHo) patterns observed in the brains of each biotype compared with healthy controls. Our findings demonstrated netMoST's capability for uncovering novel biotypes of complex diseases such as schizophrenia. The results also showed the power of exploring polygenic allelic patterns that transcend the conventional GWAS approaches.
|
2005.02567
|
Friedrich Sommer
|
Christopher Warner, Friedrich T. Sommer
|
A Model for Image Segmentation in Retina
|
39 pages, 20 figures
| null | null | null |
q-bio.NC cs.NE eess.IV
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While traditional feed-forward filter models can reproduce the rate responses
of retinal ganglion neurons to simple stimuli, they cannot explain why
synchrony between spikes is much higher than expected by Poisson firing [6],
and can be sometimes rhythmic [25, 16]. Here we investigate the hypothesis that
synchrony in periodic retinal spike trains could convey contextual information
of the visual input, which is extracted by computations in the retinal network.
We propose a computational model for image segmentation consisting of a
Kuramoto model of coupled oscillators whose phases model the timing of
individual retinal spikes. The phase couplings between oscillators are shaped
by the stimulus structure, causing cells to synchronize if the local contrast
in their receptive fields is similar. In essence, relaxation in the oscillator
network solves a graph clustering problem with the graph representing feature
similarity between different points in the image. We tested different model
versions on the Berkeley Image Segmentation Data Set (BSDS). Networks with
phase interactions set by standard representations of the feature graph
(adjacency matrix, Graph Laplacian or modularity) failed to exhibit
segmentation performance significantly over the baseline, a model of
independent sensors. In contrast, a network with phase interactions that takes
into account not only feature similarities but also geometric distances between
receptive fields exhibited segmentation performance significantly above
baseline.
|
[
{
"created": "Wed, 6 May 2020 02:58:18 GMT",
"version": "v1"
}
] |
2020-05-07
|
[
[
"Warner",
"Christopher",
""
],
[
"Sommer",
"Friedrich T.",
""
]
] |
While traditional feed-forward filter models can reproduce the rate responses of retinal ganglion neurons to simple stimuli, they cannot explain why synchrony between spikes is much higher than expected by Poisson firing [6], and can be sometimes rhythmic [25, 16]. Here we investigate the hypothesis that synchrony in periodic retinal spike trains could convey contextual information of the visual input, which is extracted by computations in the retinal network. We propose a computational model for image segmentation consisting of a Kuramoto model of coupled oscillators whose phases model the timing of individual retinal spikes. The phase couplings between oscillators are shaped by the stimulus structure, causing cells to synchronize if the local contrast in their receptive fields is similar. In essence, relaxation in the oscillator network solves a graph clustering problem with the graph representing feature similarity between different points in the image. We tested different model versions on the Berkeley Image Segmentation Data Set (BSDS). Networks with phase interactions set by standard representations of the feature graph (adjacency matrix, Graph Laplacian or modularity) failed to exhibit segmentation performance significantly over the baseline, a model of independent sensors. In contrast, a network with phase interactions that takes into account not only feature similarities but also geometric distances between receptive fields exhibited segmentation performance significantly above baseline.
|
0907.4765
|
Frederick W. Cummings
|
F. W. Cummings
|
Phyllotaxis, a model
|
11 pages, two figures, one table
| null | null | null |
q-bio.TO q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A model of the regular arrangement of leaves on a plant stem (phyllotactic
patterns) is proposed, based on a new plant pattern algorithm. Tripartite
patterning is proposed to occur by the interaction of two signaling pathways.
Each pathway produces stimulated extracellular emission of like ligand upon
activation of its respective receptor, as well as inhibiting such emission from
the other pathway. Patterns arise spontaneously from zero density of activated
receptor. All known phyllotactic patterns are reproduced, Fibonacci,
distichous, decussate, and whorls, as well as the rare monostichy, with only
one leaf directly above the previous.
|
[
{
"created": "Mon, 27 Jul 2009 20:40:00 GMT",
"version": "v1"
}
] |
2009-07-29
|
[
[
"Cummings",
"F. W.",
""
]
] |
A model of the regular arrangement of leaves on a plant stem (phyllotactic patterns) is proposed, based on a new plant pattern algorithm. Tripartite patterning is proposed to occur by the interaction of two signaling pathways. Each pathway produces stimulated extracellular emission of like ligand upon activation of its respective receptor, as well as inhibiting such emission from the other pathway. Patterns arise spontaneously from zero density of activated receptor. All known phyllotactic patterns are reproduced, Fibonacci, distichous, decussate, and whorls, as well as the rare monostichy, with only one leaf directly above the previous.
|
1101.5495
|
Sophio Bakhtadze
|
Sophio Bakhtadze, Marine Janelidze, Nana Khachapuridze
|
Impact of EEG biofeedback on event-related potentials (ERPs) in
attention-deficit hyperactivity (ADHD) children
|
31 pages, 4 tables, 10 figures
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Impact of EEG biofeedback on event-related potentials (ERPs) in
attention-deficit hyperactivity (ADHD) children. Introduction: ADHD is one of
the most widespread conditions of school-aged children, affecting 5% of
children of this age. The core clinical signs of ADHD are inattention,
restlessness and impulsivity. According to various authors, direct measures of
attention are of two types: 1. Recording the alpha rhythm on the EEG and
event-related potentials (ERP); 2. Tests of reaction time, continuous
performance tests, paired-associate learning, and tests of memorization. The
second type is evidence-based. As for the first, it is known that ERPs,
especially later responses, reflect the process of mental effortfulness in
selecting the appropriate behavior and accomplishing decision making during
the action of the target stimulus. Thus selection of action, as well as
decision making, are the most important processes affected in ADHD children.
Besides, in recent years EEG biofeedback (neurofeedback) has become
evidence-based in the treatment of ADHD. Unfortunately, the effect of this
approach on ERP parameters is still unknown. We therefore aimed to study the
changes in ERPs after neurofeedback therapy. Methods: We examined 16 children
with ADHD before and after 3 sessions of neurofeedback therapy, and 23
children without treatment. Results: We observed statistically significant
improvement in the parameters of later responses such as the P300 in treated
children compared with untreated ones, whereas the treatment was not effective
for earlier components of the ERPs. Conclusions: Neurofeedback can affect the
process of selection of action and decision making by changing P300 parameters
in ADHD children.
|
[
{
"created": "Fri, 28 Jan 2011 10:03:39 GMT",
"version": "v1"
}
] |
2011-01-31
|
[
[
"Bakhtadze",
"Sophio",
""
],
[
"Janelidze",
"Marine",
""
],
[
"Khachapuridze",
"Nana",
""
]
] |
Impact of EEG biofeedback on event-related potentials (ERPs) in attention-deficit hyperactivity (ADHD) children. Introduction: ADHD is one of the most widespread conditions of school-aged children, affecting 5% of children of this age. The core clinical signs of ADHD are inattention, restlessness and impulsivity. According to various authors, direct measures of attention are of two types: 1. Recording the alpha rhythm on the EEG and event-related potentials (ERP); 2. Tests of reaction time, continuous performance tests, paired-associate learning, and tests of memorization. The second type is evidence-based. As for the first, it is known that ERPs, especially later responses, reflect the process of mental effortfulness in selecting the appropriate behavior and accomplishing decision making during the action of the target stimulus. Thus selection of action, as well as decision making, are the most important processes affected in ADHD children. Besides, in recent years EEG biofeedback (neurofeedback) has become evidence-based in the treatment of ADHD. Unfortunately, the effect of this approach on ERP parameters is still unknown. We therefore aimed to study the changes in ERPs after neurofeedback therapy. Methods: We examined 16 children with ADHD before and after 3 sessions of neurofeedback therapy, and 23 children without treatment. Results: We observed statistically significant improvement in the parameters of later responses such as the P300 in treated children compared with untreated ones, whereas the treatment was not effective for earlier components of the ERPs. Conclusions: Neurofeedback can affect the process of selection of action and decision making by changing P300 parameters in ADHD children.
|
1607.04158
|
Jose A. Cuesta
|
Jos\'e A. Cuesta, Gustav W. Delius, and Richard Law
|
Sheldon Spectrum and the Plankton Paradox: Two Sides of the Same Coin. A
trait-based plankton size-spectrum model
|
26 pages, 1 figure, needs Springer class file svjour3.cls
|
J. Math. Biol. (2017) 1--30
|
10.1007/s00285-017-1132-7
| null |
q-bio.PE physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Sheldon spectrum describes a remarkable regularity in aquatic ecosystems:
the biomass density as a function of logarithmic body mass is approximately
constant over many orders of magnitude. While size-spectrum models have
explained this phenomenon for assemblages of multicellular organisms, this
paper introduces a species-resolved size-spectrum model to explain the
phenomenon in unicellular plankton. A Sheldon spectrum spanning the cell-size
range of unicellular plankton necessarily consists of a large number of
coexisting species covering a wide range of characteristic sizes. The
coexistence of many phytoplankton species feeding on a small number of
resources is known as the Paradox of the Plankton. Our model resolves the
paradox by showing that coexistence is facilitated by the allometric scaling of
four physiological rates. Two of the allometries have empirical support, the
remaining two emerge from predator-prey interactions exactly when the
abundances follow a Sheldon spectrum. Our plankton model is a scale-invariant
trait-based size-spectrum model: it describes the abundance of phyto- and
zooplankton cells as a function of both size and species trait (the maximal
size before cell division). It incorporates growth due to resource consumption
and predation on smaller cells, death due to predation, and a flexible cell
division process. We give analytic solutions at steady state for both the
within-species size distributions and the relative abundances across species.
|
[
{
"created": "Thu, 14 Jul 2016 15:00:04 GMT",
"version": "v1"
}
] |
2017-05-26
|
[
[
"Cuesta",
"José A.",
""
],
[
"Delius",
"Gustav W.",
""
],
[
"Law",
"Richard",
""
]
] |
The Sheldon spectrum describes a remarkable regularity in aquatic ecosystems: the biomass density as a function of logarithmic body mass is approximately constant over many orders of magnitude. While size-spectrum models have explained this phenomenon for assemblages of multicellular organisms, this paper introduces a species-resolved size-spectrum model to explain the phenomenon in unicellular plankton. A Sheldon spectrum spanning the cell-size range of unicellular plankton necessarily consists of a large number of coexisting species covering a wide range of characteristic sizes. The coexistence of many phytoplankton species feeding on a small number of resources is known as the Paradox of the Plankton. Our model resolves the paradox by showing that coexistence is facilitated by the allometric scaling of four physiological rates. Two of the allometries have empirical support, the remaining two emerge from predator-prey interactions exactly when the abundances follow a Sheldon spectrum. Our plankton model is a scale-invariant trait-based size-spectrum model: it describes the abundance of phyto- and zooplankton cells as a function of both size and species trait (the maximal size before cell division). It incorporates growth due to resource consumption and predation on smaller cells, death due to predation, and a flexible cell division process. We give analytic solutions at steady state for both the within-species size distributions and the relative abundances across species.
|
1610.09045
|
Gajendra Katuwal
|
Gajendra Jung Katuwal, Robert Chen
|
Machine Learning Model Interpretability for Precision Medicine
| null | null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Interpretability of machine learning models is critical for data-driven
precision medicine efforts. However, highly predictive models are generally
complex and difficult to interpret. Here, using the Model-Agnostic Explanations
algorithm, we show that complex models such as random forests can be made
interpretable. Using the MIMIC-II dataset, we successfully predicted ICU
mortality with 80% balanced accuracy and were also able to interpret the
relative effect of the features on prediction at the individual level.
|
[
{
"created": "Fri, 28 Oct 2016 01:08:56 GMT",
"version": "v1"
}
] |
2016-10-31
|
[
[
"Katuwal",
"Gajendra Jung",
""
],
[
"Chen",
"Robert",
""
]
] |
Interpretability of machine learning models is critical for data-driven precision medicine efforts. However, highly predictive models are generally complex and difficult to interpret. Here, using the Model-Agnostic Explanations algorithm, we show that complex models such as random forests can be made interpretable. Using the MIMIC-II dataset, we successfully predicted ICU mortality with 80% balanced accuracy and were also able to interpret the relative effect of the features on prediction at the individual level.
|
1409.2210
|
Liane Gabora
|
Liane Gabora
|
Why Blind-Variation Selective-Retention is an Inappropriate Explanatory
Framework for Creativity
|
6 pages
|
Physics of Life Reviews, 7(2), 182-183, 2010
|
10.1016/j.plrev.2010.04.008
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Simonton is attempting to salvage the Blind Variation Selective Retention
theory of creativity (often referred to as the Darwinian theory of creativity)
by dissociating it from Darwinism. This is a necessary move for complex reasons
outlined in detail elsewhere. However, whether or not one calls BVSR a
Darwinian theory, it is still a variation-and-selection theory.
Variation-and-selection was put forward to solve a certain kind of paradox,
that of how biological change accumulates (that is, over generations, species
become more adapted to their environment) despite being discarded at the end of
each generation (that is, parents don't transmit to offspring knowledge or
bodily changes acquired during their lifetimes, e.g., you don't inherit your
mother's ear piercings). This paradox does not exist with respect to creative
thought. There is no discarding of acquired change when ideas are transmitted
amongst individuals; we share with others modified versions of the ideas we
were exposed to on a regular basis.
|
[
{
"created": "Mon, 8 Sep 2014 05:39:08 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jul 2019 21:41:09 GMT",
"version": "v2"
}
] |
2019-07-17
|
[
[
"Gabora",
"Liane",
""
]
] |
Simonton is attempting to salvage the Blind Variation Selective Retention theory of creativity (often referred to as the Darwinian theory of creativity) by dissociating it from Darwinism. This is a necessary move for complex reasons outlined in detail elsewhere. However, whether or not one calls BVSR a Darwinian theory, it is still a variation-and-selection theory. Variation-and-selection was put forward to solve a certain kind of paradox, that of how biological change accumulates (that is, over generations, species become more adapted to their environment) despite being discarded at the end of each generation (that is, parents don't transmit to offspring knowledge or bodily changes acquired during their lifetimes, e.g., you don't inherit your mother's ear piercings). This paradox does not exist with respect to creative thought. There is no discarding of acquired change when ideas are transmitted amongst individuals; we share with others modified versions of the ideas we were exposed to on a regular basis.
|
2011.09703
|
Regis Faure
|
Olga Gherbovet (TBI), Fernando Ferreira (TBI), Apolline Cl\'ement
(TBI), M\'elanie Ragon (TBI), Julien Durand (TBI), Sophie Bozonnet (TBI),
Michael O'Donohue (TBI), R\'egis Faur\'e (TBI)
|
Regioselective chemo-enzymatic syntheses of ferulate conjugates as
chromogenic substrates for feruloyl esterases
| null | null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Generally, carbohydrate-active enzymes are studied using chromogenic
substrates that provide quick and easy color-based detection of enzyme-mediated
hydrolysis. In the case of feruloyl esterases, commercially available
chromogenic ferulate derivatives are both costly and limited in terms of their
experimental application. In this study, we describe solutions for these two
issues, using a chemoenzymatic approach to synthesize different ferulate
compounds. The overall synthetic routes towards commercially available
5-bromo-4-chloro-3-indolyl and 4-nitrophenyl
O-5-feruloyl-$\alpha$-l-arabinofuranosides 1a and 1b were significantly
shortened (7-8 steps reduced to 4-6) and transesterification yields enhanced
(from 46 to 73% for 1a and 47 to 86% for 1b). This was achieved using
enzymatic (immobilized Lipolase 100T from Thermomyces lanuginosus)
transesterification of unprotected vinyl ferulate to the primary hydroxyl group
of $\alpha$-l-arabinofuranosides. Moreover, a novel feruloylated-butanetriol
4-nitrocatechol-1-yl analog 12, containing a cleavable hydroxylated linker was
also synthesized in 29% overall yield in 3 steps (convergent synthesis). The
latter route combined regioselective functionalization of 4-nitrocatechol and
enzymatic transferuloylation. The use of 12 as a substrate to characterize type
A feruloyl esterase from Aspergillus niger reveals the advantages of this
substrate for the characterizations of feruloyl esterases.
|
[
{
"created": "Thu, 19 Nov 2020 08:13:12 GMT",
"version": "v1"
}
] |
2020-11-20
|
[
[
"Gherbovet",
"Olga",
"",
"TBI"
],
[
"Ferreira",
"Fernando",
"",
"TBI"
],
[
"Clément",
"Apolline",
"",
"TBI"
],
[
"Ragon",
"Mélanie",
"",
"TBI"
],
[
"Durand",
"Julien",
"",
"TBI"
],
[
"Bozonnet",
"Sophie",
"",
"TBI"
],
[
"O'Donohue",
"Michael",
"",
"TBI"
],
[
"Fauré",
"Régis",
"",
"TBI"
]
] |
Generally, carbohydrate-active enzymes are studied using chromogenic substrates that provide quick and easy color-based detection of enzyme-mediated hydrolysis. In the case of feruloyl esterases, commercially available chromogenic ferulate derivatives are both costly and limited in terms of their experimental application. In this study, we describe solutions for these two issues, using a chemoenzymatic approach to synthesize different ferulate compounds. The overall synthetic routes towards commercially available 5-bromo-4-chloro-3-indolyl and 4-nitrophenyl O-5-feruloyl-$\alpha$-l-arabinofuranosides 1a and 1b were significantly shortened (7-8 steps reduced to 4-6) and transesterification yields enhanced (from 46 to 73% for 1a and 47 to 86% for 1b). This was achieved using enzymatic (immobilized Lipolase 100T from Thermomyces lanuginosus) transesterification of unprotected vinyl ferulate to the primary hydroxyl group of $\alpha$-l-arabinofuranosides. Moreover, a novel feruloylated-butanetriol 4-nitrocatechol-1-yl analog 12, containing a cleavable hydroxylated linker, was also synthesized in 29% overall yield in 3 steps (convergent synthesis). The latter route combined regioselective functionalization of 4-nitrocatechol and enzymatic transferuloylation. The use of 12 as a substrate to characterize type A feruloyl esterase from Aspergillus niger reveals the advantages of this substrate for the characterization of feruloyl esterases.
|
1911.11466
|
Nimrod Sherf
|
Nimrod Sherf, Maoz Shamir
|
Multiplexing rhythmic information by spike timing dependent plasticity
|
14 pages, 7 figures
| null |
10.1371/journal.pcbi.1008000
| null |
q-bio.NC nlin.AO physics.bio-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Rhythmic activity has been associated with a wide range of cognitive
processes. Previous studies have shown that spike-timing-dependent plasticity
(STDP) can facilitate the transfer of rhythmic activity downstream along the
information processing pathway. However, STDP has also been known to generate strong
winner-take-all like competitions between subgroups of correlated synaptic
inputs. Consequently, one might expect that STDP would induce strong
competition between different rhythmicity channels thus preventing the
multiplexing of information across different frequency channels. This study
explored whether STDP facilitates the multiplexing of information across
multiple frequency channels, and if so, under what conditions. We investigated
the STDP dynamics in the framework of a model consisting of two competing
subpopulations of neurons that synapse in a feedforward manner onto a single
postsynaptic neuron. Each sub-population was assumed to oscillate in an
independent manner and in a different frequency band. To investigate the STDP
dynamics, a mean field Fokker-Planck theory was developed in the limit of the
slow learning rate. Surprisingly, our theory predicted limited interactions
between the different sub-groups. Our analysis further revealed that the
interaction between these channels was mainly mediated by the shared component
of the mean activity. Next, we generalized these results beyond the simplistic
model using numerical simulations. We found that for a wide range of
parameters, the system converged to a solution in which the post-synaptic
neuron responded to both rhythms. Nevertheless, all the synaptic weights
remained dynamic and did not converge to a fixed point. These findings imply
that STDP can support the multiplexing of rhythmic information and demonstrate
how functionality can be retained in the face of continuous remodeling of all
the synaptic weights.
|
[
{
"created": "Tue, 26 Nov 2019 11:37:52 GMT",
"version": "v1"
}
] |
2020-09-09
|
[
[
"Sherf",
"Nimrod",
""
],
[
"Shamir",
"Maoz",
""
]
] |
Rhythmic activity has been associated with a wide range of cognitive processes. Previous studies have shown that spike-timing-dependent plasticity (STDP) can facilitate the transfer of rhythmic activity downstream along the information processing pathway. However, STDP has also been known to generate strong winner-take-all like competitions between subgroups of correlated synaptic inputs. Consequently, one might expect that STDP would induce strong competition between different rhythmicity channels thus preventing the multiplexing of information across different frequency channels. This study explored whether STDP facilitates the multiplexing of information across multiple frequency channels, and if so, under what conditions. We investigated the STDP dynamics in the framework of a model consisting of two competing subpopulations of neurons that synapse in a feedforward manner onto a single postsynaptic neuron. Each sub-population was assumed to oscillate in an independent manner and in a different frequency band. To investigate the STDP dynamics, a mean field Fokker-Planck theory was developed in the limit of the slow learning rate. Surprisingly, our theory predicted limited interactions between the different sub-groups. Our analysis further revealed that the interaction between these channels was mainly mediated by the shared component of the mean activity. Next, we generalized these results beyond the simplistic model using numerical simulations. We found that for a wide range of parameters, the system converged to a solution in which the post-synaptic neuron responded to both rhythms. Nevertheless, all the synaptic weights remained dynamic and did not converge to a fixed point. These findings imply that STDP can support the multiplexing of rhythmic information and demonstrate how functionality can be retained in the face of continuous remodeling of all the synaptic weights.
|
1808.00864
|
Teresa Pegan
|
Teresa M. Pegan, David W. Winkler, Mark F. Haussmann and Maren N.
Vitousek
|
Brief increases in corticosterone affect morphology, stress responses,
and telomere length, but not post-fledging movements, in a wild songbird
|
35 pages, 4 figures, 1 appendix
| null |
10.1086/702827
| null |
q-bio.TO
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Organisms are frequently exposed to challenges during development, such as
poor weather and food shortage. Such challenges can initiate the hormonal
stress response, which involves secretion of glucocorticoids. Although the
hormonal stress response helps organisms deal with challenges, long-term
exposure to high levels of glucocorticoids can have morphological, behavioral,
and physiological consequences, especially during development. Glucocorticoids
are also associated with reduced survival and telomere shortening. To
investigate whether brief, acute exposures to glucocorticoids can also produce
these phenotypic effects in free-living birds, we exposed wild tree swallow
(Tachycineta bicolor) nestlings to a brief exogenous dose of corticosterone once per day
for five days and then measured their morphology, baseline and stress-induced
corticosterone levels, and telomere length. We also deployed radio tags on a
subset of nestlings, which allowed us to determine the age at which tagged
nestlings left the nest (fledged) and their pattern of presence and absence at
the natal site during the post-breeding period. Corticosterone-treated
nestlings had lower mass, higher baseline and stress-induced corticosterone,
and reduced telomeres; other metrics of morphology were affected weakly or not
at all. Our treatment resulted in no significant effect on survival to
fledging, fledge age, or age at first departure from the natal site, and we
found no negative effect of corticosterone on inter-annual return rate. These
results show that brief acute corticosterone exposure during development can
have measurable effects on phenotype in free-living tree swallows.
Corticosterone may therefore mediate correlations between rearing environment
and phenotype in developing organisms, even in the absence of prolonged
stressors.
|
[
{
"created": "Tue, 31 Jul 2018 22:56:11 GMT",
"version": "v1"
}
] |
2019-03-08
|
[
[
"Pegan",
"Teresa M.",
""
],
[
"Winkler",
"David W.",
""
],
[
"Haussmann",
"Mark F.",
""
],
[
"Vitousek",
"Maren N.",
""
]
] |
Organisms are frequently exposed to challenges during development, such as poor weather and food shortage. Such challenges can initiate the hormonal stress response, which involves secretion of glucocorticoids. Although the hormonal stress response helps organisms deal with challenges, long-term exposure to high levels of glucocorticoids can have morphological, behavioral, and physiological consequences, especially during development. Glucocorticoids are also associated with reduced survival and telomere shortening. To investigate whether brief, acute exposures to glucocorticoids can also produce these phenotypic effects in free-living birds, we exposed wild tree swallow (Tachycineta bicolor) nestlings to a brief exogenous dose of corticosterone once per day for five days and then measured their morphology, baseline and stress-induced corticosterone levels, and telomere length. We also deployed radio tags on a subset of nestlings, which allowed us to determine the age at which tagged nestlings left the nest (fledged) and their pattern of presence and absence at the natal site during the post-breeding period. Corticosterone-treated nestlings had lower mass, higher baseline and stress-induced corticosterone, and reduced telomeres; other metrics of morphology were affected weakly or not at all. Our treatment resulted in no significant effect on survival to fledging, fledge age, or age at first departure from the natal site, and we found no negative effect of corticosterone on inter-annual return rate. These results show that brief acute corticosterone exposure during development can have measurable effects on phenotype in free-living tree swallows. Corticosterone may therefore mediate correlations between rearing environment and phenotype in developing organisms, even in the absence of prolonged stressors.
|
2311.10410
|
Lisa Roux
|
Lisa Roux (NYU Langone Medical Center), Eran Stark (NYU Langone
Medical Center), Lucas Sjulson (NYU Langone Medical Center), Gy\"orgy
Buzs\'aki (NYU Langone Medical Center)
|
In vivo optogenetic identification and manipulation of GABAergic
interneuron subtypes
| null |
Current Opinion in Neurobiology, 2014, 26, pp.88 - 95
|
10.1016/j.conb.2013.12.013
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Identification and manipulation of different GABAergic interneuron classes in
the behaving animal are important to understand their role in circuit dynamics
and behavior. The combination of optogenetics and large-scale neuronal
recordings allows specific interneuron populations to be identified and
perturbed for circuit analysis in intact animals. A crucial aspect of this
approach is coupling electrophysiological recording with spatially and
temporally precise light delivery. Focal multisite illumination of neuronal
activators and silencers in predetermined temporal configurations or a closed
loop manner opens the door to addressing many novel questions. Recent progress
demonstrates the utility and power of this novel technique for interneuron
research.
|
[
{
"created": "Fri, 17 Nov 2023 09:27:19 GMT",
"version": "v1"
}
] |
2023-11-20
|
[
[
"Roux",
"Lisa",
"",
"NYU Langone Medical Center"
],
[
"Stark",
"Eran",
"",
"NYU Langone Medical Center"
],
[
"Sjulson",
"Lucas",
"",
"NYU Langone Medical Center"
],
[
"Buzsáki",
"György",
"",
"NYU Langone Medical Center"
]
] |
Identification and manipulation of different GABAergic interneuron classes in the behaving animal are important to understand their role in circuit dynamics and behavior. The combination of optogenetics and large-scale neuronal recordings allows specific interneuron populations to be identified and perturbed for circuit analysis in intact animals. A crucial aspect of this approach is coupling electrophysiological recording with spatially and temporally precise light delivery. Focal multisite illumination of neuronal activators and silencers in predetermined temporal configurations or a closed loop manner opens the door to addressing many novel questions. Recent progress demonstrates the utility and power of this novel technique for interneuron research.
|
q-bio/0507030
|
Luc Frappat
|
Luc Frappat, Antonino Sciarrino
|
Conspiracy in bacterial genomes
|
revised version: introduction and conclusion enhanced, references
added, figures added, some tables removed
| null |
10.1016/j.physa.2006.02.008
|
LAPTH-1108/05 and DSF-23/05
|
q-bio.GN
| null |
The rank-ordered distribution of the codon usage frequencies for 123
bacteria is best fitted by a three-parameter function that is the sum of a
constant, an exponential, and a linear term in the rank n. The parameters
depend (two parabolically) on the total GC content. The rank-ordered
distribution of the amino acids is fitted by a straight line. The Shannon
entropy computed over all the codons is well fitted by a parabola in the GC
content, while the partial entropies computed over subsets of the codons show
peculiarly different behavior, thereby exhibiting a first conspiracy effect.
Moreover, the sum of the codon usage frequencies over particular sets, e.g.
with C and A (respectively G and U) as the i-th nucleotide, shows a clear
linear dependence on the GC content, exhibiting another conspiracy effect.
|
[
{
"created": "Wed, 20 Jul 2005 11:46:08 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Feb 2006 18:09:15 GMT",
"version": "v2"
}
] |
2009-11-11
|
[
[
"Frappat",
"Luc",
""
],
[
"Sciarrino",
"Antonino",
""
]
] |
The rank-ordered distribution of the codon usage frequencies for 123 bacteria is best fitted by a three-parameter function that is the sum of a constant, an exponential, and a linear term in the rank n. The parameters depend (two parabolically) on the total GC content. The rank-ordered distribution of the amino acids is fitted by a straight line. The Shannon entropy computed over all the codons is well fitted by a parabola in the GC content, while the partial entropies computed over subsets of the codons show peculiarly different behavior, thereby exhibiting a first conspiracy effect. Moreover, the sum of the codon usage frequencies over particular sets, e.g. with C and A (respectively G and U) as the i-th nucleotide, shows a clear linear dependence on the GC content, exhibiting another conspiracy effect.
|
1702.08747
|
Michael Adamer
|
Michael F Adamer, Thomas E Woolley, and Heather A Harrington
|
Graph-Facilitated Resonant Mode Counting in Stochastic Interaction
Networks
|
5 pages, 4 figures
| null | null | null |
q-bio.MN physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Oscillations in a stochastic dynamical system, whose deterministic
counterpart has a stable steady state, are a widely reported phenomenon.
Traditional methods of finding parameter regimes for stochastically driven
resonances are, however, cumbersome for any but the smallest networks. In this
letter we use the example of the Brusselator to show how real root counting
algorithms and graph-theoretic tools can efficiently determine the number of
resonant modes and parameter ranges for stochastic oscillations. We argue that
stochastic resonance is a network property by showing that resonant modes only
depend on the squared Jacobian matrix $J^2$, unlike deterministic oscillations,
which are determined by $J$. By using graph-theoretic tools, analysis of
stochastic behaviour for larger networks is simplified, and chemical reaction
networks with multiple resonant modes can be identified easily.
|
[
{
"created": "Tue, 28 Feb 2017 11:24:14 GMT",
"version": "v1"
}
] |
2017-03-01
|
[
[
"Adamer",
"Michael F",
""
],
[
"Woolley",
"Thomas E",
""
],
[
"Harrington",
"Heather A",
""
]
] |
Oscillations in a stochastic dynamical system, whose deterministic counterpart has a stable steady state, are a widely reported phenomenon. Traditional methods of finding parameter regimes for stochastically driven resonances are, however, cumbersome for any but the smallest networks. In this letter we use the example of the Brusselator to show how real root counting algorithms and graph-theoretic tools can efficiently determine the number of resonant modes and parameter ranges for stochastic oscillations. We argue that stochastic resonance is a network property by showing that resonant modes only depend on the squared Jacobian matrix $J^2$, unlike deterministic oscillations, which are determined by $J$. By using graph-theoretic tools, analysis of stochastic behaviour for larger networks is simplified, and chemical reaction networks with multiple resonant modes can be identified easily.
|
0807.4272
|
Tijana Ivancevic
|
Tijana T. Ivancevic, Murk J. Bottema and Lakhmi C. Jain
|
A Theoretical Model of Chaotic Attractor in Tumor Growth and Metastasis
|
17 pages, 7 figures, Latex
| null | null | null |
q-bio.CB q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper proposes a novel chaotic reaction-diffusion model of cellular
tumor growth and metastasis. The model is based on the multiscale diffusion
cancer-invasion model (MDCM) and formulated by introducing strong nonlinear
coupling into the MDCM. The new model exhibits temporal chaotic behavior (which
resembles the classical Lorenz strange attractor) and yet retains all the
characteristics of the MDCM diffusion model. It mathematically describes both
the processes of carcinogenesis and metastasis, as well as the sensitive
dependence of cancer evolution on initial conditions and parameters. On the
basis of this chaotic tumor-growth model, a generic concept of carcinogenesis
and metastasis is formulated.
Keywords: reaction-diffusion tumor growth model, chaotic attractor, sensitive
dependence on initial tumor characteristics
|
[
{
"created": "Sun, 27 Jul 2008 04:54:40 GMT",
"version": "v1"
}
] |
2008-07-29
|
[
[
"Ivancevic",
"Tijana T.",
""
],
[
"Bottema",
"Murk J.",
""
],
[
"Jain",
"Lakhmi C.",
""
]
] |
This paper proposes a novel chaotic reaction-diffusion model of cellular tumor growth and metastasis. The model is based on the multiscale diffusion cancer-invasion model (MDCM) and formulated by introducing strong nonlinear coupling into the MDCM. The new model exhibits temporal chaotic behavior (which resembles the classical Lorenz strange attractor) and yet retains all the characteristics of the MDCM diffusion model. It mathematically describes both the processes of carcinogenesis and metastasis, as well as the sensitive dependence of cancer evolution on initial conditions and parameters. On the basis of this chaotic tumor-growth model, a generic concept of carcinogenesis and metastasis is formulated. Keywords: reaction-diffusion tumor growth model, chaotic attractor, sensitive dependence on initial tumor characteristics
|
2307.09877
|
Alain Nogaret
|
Paul G Morris, Joseph D. Taylor, Julian F. R. Paton and Alain Nogaret
|
Single shot diagnosis of ion channel dysfunction from assimilation of
cell membrane dynamics
|
42 pages, 6 figures, 3 tables
| null | null | null |
q-bio.QM physics.bio-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Many neurological diseases originate in the dysfunction of cellular ion
channels. Their diagnosis presents a challenge especially when alterations in
the complement of ion channels are a priori unknown. Current approaches based
on voltage clamps lack the throughput necessary to identify the mutations
causing changes in electrical activity. Here, we introduce a single-shot method
for diagnosing changes in the complement of ion channels from changes in the
electrical activity of a cell. We developed data assimilation (DA) to estimate
the parameters of individual ion channels and from these parameters reconstruct
the ionic currents of hippocampal CA1 neurons to within 11% of their actual
value. DA correctly predicts which ionic current is altered and by how much
after we blocked the BK, SK, A and HCN channels with selective antagonists of
known potency. We anticipate our assay will transform the treatment of
neurological disease through comprehensive diagnosis and drug screening.
|
[
{
"created": "Wed, 19 Jul 2023 10:15:07 GMT",
"version": "v1"
}
] |
2023-07-20
|
[
[
"Morris",
"Paul G",
""
],
[
"Taylor",
"Joseph D.",
""
],
[
"Paton",
"Julian F. R.",
""
],
[
"Nogaret",
"Alain",
""
]
] |
Many neurological diseases originate in the dysfunction of cellular ion channels. Their diagnosis presents a challenge especially when alterations in the complement of ion channels are a priori unknown. Current approaches based on voltage clamps lack the throughput necessary to identify the mutations causing changes in electrical activity. Here, we introduce a single-shot method for diagnosing changes in the complement of ion channels from changes in the electrical activity of a cell. We developed data assimilation (DA) to estimate the parameters of individual ion channels and from these parameters reconstruct the ionic currents of hippocampal CA1 neurons to within 11% of their actual value. DA correctly predicts which ionic current is altered and by how much after we blocked the BK, SK, A and HCN channels with selective antagonists of known potency. We anticipate our assay will transform the treatment of neurological disease through comprehensive diagnosis and drug screening.
|
q-bio/0403005
|
Ping Ao
|
X.-M. Zhu, L. Yin, L. Hood, and P. Ao
|
Calculating Biological Behaviors of Epigenetic States in Phage lambda
Life Cycle
|
16 pages, 3 figures, 1 table
|
Functional and Integrative Genomics (2004)
|
10.1007/s10142-003-0095-5
| null |
q-bio.MN q-bio.QM
| null |
The gene regulatory network of lambda phage is one of the best-studied model
systems in molecular biology. More than 50 years of experimental study have
provided a tremendous amount of data at all levels: physics, chemistry, DNA,
protein, and function. However, its stability and robustness, for both wild
type and mutants, have been a notorious theoretical/mathematical problem. In
this paper we report our successful calculation of the properties of this gene
regulatory network. We believe it is the first of its kind. Our success is of
course built upon numerous previous theoretical attempts, but the following
three features make our modeling unique:
1) A new modeling method particularly suitable for stability and robustness
study;
2) Paying close attention to the well-known difference between in vivo and in
vitro conditions;
3) Allowing a more important role for noise and stochastic effects.
The last two points have been discussed by two of us (Ao and Yin,
cond-mat/0307747), which we believe would be enough to make some previous
theoretical attempts successful, too. We hope the present work will stimulate
further interest in the emerging field of gene regulatory networks.
|
[
{
"created": "Wed, 3 Mar 2004 17:56:01 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Zhu",
"X. -M.",
""
],
[
"Yin",
"L.",
""
],
[
"Hood",
"L.",
""
],
[
"Ao",
"P.",
""
]
] |
The gene regulatory network of lambda phage is one of the best-studied model systems in molecular biology. More than 50 years of experimental study have provided a tremendous amount of data at all levels: physics, chemistry, DNA, protein, and function. However, its stability and robustness, for both wild type and mutants, have been a notorious theoretical/mathematical problem. In this paper we report our successful calculation of the properties of this gene regulatory network. We believe it is the first of its kind. Our success is of course built upon numerous previous theoretical attempts, but the following three features make our modeling unique: 1) A new modeling method particularly suitable for stability and robustness study; 2) Paying close attention to the well-known difference between in vivo and in vitro conditions; 3) Allowing a more important role for noise and stochastic effects. The last two points have been discussed by two of us (Ao and Yin, cond-mat/0307747), which we believe would be enough to make some previous theoretical attempts successful, too. We hope the present work will stimulate further interest in the emerging field of gene regulatory networks.
|
1302.1909
|
Alexandre Ferreira Ramos
|
Alexandre F. Ramos and Jose Eduardo M. Hornos and John Reinitz
|
Noise reduction by coupling of stochastic processes and canalization in
biology
| null | null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Randomness is an unavoidable feature of the intracellular environment due to
chemical reactants being present in low copy number. That phenomenon, predicted
by Delbr\"uck long ago \cite{delbruck40}, has been detected in both prokaryotic
\cite{elowitz02,cai06} and eukaryotic \cite{blake03} cells following the
development of fluorescence techniques. On the other hand, developing
organisms, e.g. {\em D. melanogaster}, exhibit strikingly precise
spatio-temporal patterns of protein/mRNA concentrations
\cite{gregor07b,manu09a,manu09b,boettiger09}. These two characteristics of
living organisms are in apparent contradiction: the precise patterns of protein
concentrations are the result of multiple mutually interacting random chemical
reactions. The main question is to establish biochemical mechanisms for
coupling random reactions so that canalization, or fluctuation reduction
instead of amplification, takes place. Here we explore a model for coupling two
stochastic processes where the noise of the combined process can be smaller
than that of the isolated ones. Such canalization occurs if, and only if,
there is negative covariance between the random variables of the model. Our
results are obtained in the framework of a master equation for a negatively
self-regulated -- or externally regulated -- binary gene and show that the
precise control due to negative self-regulation \cite{becskei00} arises because
it may generate negative covariance. Our results suggest that negative
covariance, in the coupling of random chemical reactions, is a theoretical
mechanism underlying the precision of developmental processes.
|
[
{
"created": "Thu, 7 Feb 2013 23:32:19 GMT",
"version": "v1"
}
] |
2013-02-11
|
[
[
"Ramos",
"Alexandre F.",
""
],
[
"Hornos",
"Jose Eduardo M.",
""
],
[
"Reinitz",
"John",
""
]
] |
Randomness is an unavoidable feature of the intracellular environment due to chemical reactants being present in low copy number. That phenomenon, predicted by Delbr\"uck long ago \cite{delbruck40}, has been detected in both prokaryotic \cite{elowitz02,cai06} and eukaryotic \cite{blake03} cells following the development of fluorescence techniques. On the other hand, developing organisms, e.g. {\em D. melanogaster}, exhibit strikingly precise spatio-temporal patterns of protein/mRNA concentrations \cite{gregor07b,manu09a,manu09b,boettiger09}. These two characteristics of living organisms are in apparent contradiction: the precise patterns of protein concentrations are the result of multiple mutually interacting random chemical reactions. The main question is to establish biochemical mechanisms for coupling random reactions so that canalization, or fluctuation reduction instead of amplification, takes place. Here we explore a model for coupling two stochastic processes where the noise of the combined process can be smaller than that of the isolated ones. Such canalization occurs if, and only if, there is negative covariance between the random variables of the model. Our results are obtained in the framework of a master equation for a negatively self-regulated -- or externally regulated -- binary gene and show that the precise control due to negative self-regulation \cite{becskei00} arises because it may generate negative covariance. Our results suggest that negative covariance, in the coupling of random chemical reactions, is a theoretical mechanism underlying the precision of developmental processes.
|
2405.18456
|
Arash Shaban-Nejad
|
Fekede Asefa Kumsa, Jay H. Fowke, Soheil Hashtarkhani, Brianna M.
White, Martha J. Shrubsole, Arash Shaban-Nejad
|
The association between neighborhood obesogenic factors and prostate
cancer risk and mortality: the Southern Community Cohort Study
|
18 Pages, 1 Figure, 8 Tables
|
Front Oncol Frontiers in Oncology, 2024 Apr 9:14:1343070
|
10.3389/fonc.2024.1343070
| null |
q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Prostate cancer is one of the leading causes of cancer-related mortality
among men in the U.S. We examined the role of neighborhood obesogenic
attributes on prostate cancer risk and mortality in the Southern Community
Cohort Study (SCCS). Of 34,166 SCCS male participants, 28,356 were included
in the analysis. We assessed the relationship between neighborhood
socioeconomic status (nSES) and neighborhood obesogenic environment indices,
including the restaurant environment index, retail food environment index,
parks, recreational facilities, and businesses, and prostate cancer risk and
mortality, controlling for individual-level factors using a multivariable Cox
proportional hazards model. We further stratified the prostate cancer risk
analysis by race and body mass index (BMI). Median follow-up time was 133
months, and mean age was 51.62 years. There were 1,524 (5.37%) prostate cancer
diagnoses and 98 (6.43%) prostate cancer deaths during follow-up. Compared to
participants residing in the wealthiest quintile, those residing in the poorest
quintile had a higher risk of prostate cancer, particularly among non-obese men
with a BMI less than 30. The restaurant environment index was associated with a
higher prostate cancer risk in overweight (BMI of 25 or greater) White men.
Obese Black individuals without any neighborhood recreational facilities had a
42% higher risk compared to those with any access. Compared to residents in the
wealthiest quintile and the most walkable areas, those residing within the
poorest quintile or the least walkable area had a higher risk of prostate
cancer death.
|
[
{
"created": "Tue, 28 May 2024 15:53:19 GMT",
"version": "v1"
}
] |
2024-05-30
|
[
[
"Kumsa",
"Fekede Asefa",
""
],
[
"Fowke",
"Jay H.",
""
],
[
"Hashtarkhani",
"Soheil",
""
],
[
"White",
"Brianna M.",
""
],
[
"Shrubsole",
"Martha J.",
""
],
[
"Shaban-Nejad",
"Arash",
""
]
] |
Prostate cancer is one of the leading causes of cancer-related mortality among men in the U.S. We examined the role of neighborhood obesogenic attributes on prostate cancer risk and mortality in the Southern Community Cohort Study (SCCS). Of 34,166 SCCS male participants, 28,356 were included in the analysis. We assessed the relationship between neighborhood socioeconomic status (nSES) and neighborhood obesogenic environment indices, including the restaurant environment index, retail food environment index, parks, recreational facilities, and businesses, and prostate cancer risk and mortality, controlling for individual-level factors using a multivariable Cox proportional hazards model. We further stratified the prostate cancer risk analysis by race and body mass index (BMI). Median follow-up time was 133 months, and mean age was 51.62 years. There were 1,524 (5.37%) prostate cancer diagnoses and 98 (6.43%) prostate cancer deaths during follow-up. Compared to participants residing in the wealthiest quintile, those residing in the poorest quintile had a higher risk of prostate cancer, particularly among non-obese men with a BMI less than 30. The restaurant environment index was associated with a higher prostate cancer risk in overweight (BMI of 25 or greater) White men. Obese Black individuals without any neighborhood recreational facilities had a 42% higher risk compared to those with any access. Compared to residents in the wealthiest quintile and the most walkable areas, those residing within the poorest quintile or the least walkable area had a higher risk of prostate cancer death.
|
1910.03491
|
Adriano Siqueira Francisco
|
Adriano Francisco Siqueira, Morun Bernardino Neto, Ana Lucia Gabas
Ferreira, Luciana Alves de Medeiros, Mario da Silva Garrote-Filho, Ubirajara
Coutinho Filho, Nilson Penha-Silva
|
Stochastic modeling of hyposmotic lysis and characterization of
different osmotic stability subgroups of human erythrocytes
| null | null | null | null |
q-bio.QM stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This study proposes a novel stochastic model for the study of hyposmotic
hemolysis. This model is capable of reproducing both the kinetics in the
transient phase and the lysis equilibrium in the stationary phase, as well as
the variability of the experimental measurements. The stationary distribution
of this model can be approximated by a normal distribution, with mean and
variance related to the salt concentration used in the erythrocyte osmotic
fragility assay. The proposed model generalizes the classical Boltzmann
sigmoidal model often used to fit the stationary experimental data
distribution. A typical osmotic fragility curve is constructed from the
absorbance of free hemoglobin as a function of the decrease in NaCl (X)
concentration and allows the determination of H50, an osmotic fragility
variable that represents the saline concentration capable of promoting 50%
lysis, and dX, an osmotic stability variable that represents 1/4 of the
variation in salt concentration required to promote 100% lysis. Based on the
stationary distribution of the proposed model, it is possible to stratify a
population into different groups of individuals with similar levels of cell
stability. These groups are well suited to studying the factors associated
with cell stability, such as gender, age and lipids, among others. The method
presented here was applied to a sample of 71 individuals and several results
were obtained. In a group of 25 female subjects with H50 values between 0.42
and 0.47 g/dL NaCl, for example, the use of a quadratic model to study the
dependence of the stability index dX/H50 on blood LDL-cholesterol levels
showed that erythrocyte osmotic stability increases with increasing LDL-C to a
maximum value close to 90 mg/dL and then decreases.
|
[
{
"created": "Tue, 8 Oct 2019 15:57:40 GMT",
"version": "v1"
}
] |
2019-10-09
|
[
[
"Siqueira",
"Adriano Francisco",
""
],
[
"Neto",
"Morun Bernardino",
""
],
[
"Ferreira",
"Ana Lucia Gabas",
""
],
[
"de Medeiros",
"Luciana Alves",
""
],
[
"Garrote-Filho",
"Mario da Silva",
""
],
[
"Filho",
"Ubirajara Coutinho",
""
],
[
"Penha-Silva",
"Nilson",
""
]
] |
This study proposes a novel stochastic model for the study of hyposmotic hemolysis. This model is capable of reproducing both the kinetics in the transient phase and the lysis equilibrium in the stationary phase, as well as the variability of the experimental measurements. The stationary distribution of this model can be approximated by a normal distribution, with mean and variance related to the salt concentration used in the erythrocyte osmotic fragility assay. The proposed model generalizes the classical Boltzmann sigmoidal model often used to fit the stationary experimental data distribution. A typical osmotic fragility curve is constructed from the absorbance of free hemoglobin as a function of the decrease in NaCl (X) concentration and allows the determination of H50, an osmotic fragility variable that represents the saline concentration capable of promoting 50% lysis, and dX, an osmotic stability variable that represents 1/4 of the variation in salt concentration required to promote 100% lysis. Based on the stationary distribution of the proposed model, it is possible to stratify a population into different groups of individuals with similar levels of cell stability. These groups are well suited to studying the factors associated with cell stability, such as gender, age and lipids, among others. The method presented here was applied to a sample of 71 individuals and several results were obtained. In a group of 25 female subjects with H50 values between 0.42 and 0.47 g/dL NaCl, for example, the use of a quadratic model to study the dependence of the stability index dX/H50 on blood LDL-cholesterol levels showed that erythrocyte osmotic stability increases with increasing LDL-C to a maximum value close to 90 mg/dL and then decreases.
|
1406.3075
|
Istvan Kiss Z
|
David Juher, Istvan Z. Kiss, Joan Saldana
|
Analysis of an epidemic model with awareness decay on regular random
networks
| null |
Journal of Theoretical Biology, 365 (2015): 457-468
|
10.1016/j.jtbi.2014.10.013
| null |
q-bio.QM math.DS nlin.AO q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The existence of a die-out threshold (different from the classic
disease-invasion one) defining a region of slow extinction of an epidemic has
been proved elsewhere for susceptible-aware-infectious-susceptible models
without awareness decay, through bifurcation analysis. By means of an
equivalent mean-field model defined on regular random networks, we interpret
the dynamics of the system in this region and prove that the existence of
bifurcation for this second epidemic threshold crucially depends on the absence
of awareness decay. We show that the continuum of equilibria that characterizes
the slow die-out dynamics collapses into a unique equilibrium when a constant
rate of awareness decay is assumed, no matter how small, and that the resulting
bifurcation from the disease-free equilibrium is equivalent to that of standard
epidemic models. We illustrate these findings with continuous-time stochastic
simulations on regular random networks with different degrees. Finally, the
behaviour of solutions with and without decay in awareness is compared around
the second epidemic threshold for a small rate of awareness decay.
|
[
{
"created": "Wed, 11 Jun 2014 22:01:20 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Dec 2014 13:47:20 GMT",
"version": "v2"
}
] |
2016-10-19
|
[
[
"Juher",
"David",
""
],
[
"Kiss",
"Istvan Z.",
""
],
[
"Saldana",
"Joan",
""
]
] |
The existence of a die-out threshold (different from the classic disease-invasion one) defining a region of slow extinction of an epidemic has been proved elsewhere for susceptible-aware-infectious-susceptible models without awareness decay, through bifurcation analysis. By means of an equivalent mean-field model defined on regular random networks, we interpret the dynamics of the system in this region and prove that the existence of bifurcation for this second epidemic threshold crucially depends on the absence of awareness decay. We show that the continuum of equilibria that characterizes the slow die-out dynamics collapses into a unique equilibrium when a constant rate of awareness decay is assumed, no matter how small, and that the resulting bifurcation from the disease-free equilibrium is equivalent to that of standard epidemic models. We illustrate these findings with continuous-time stochastic simulations on regular random networks with different degrees. Finally, the behaviour of solutions with and without decay in awareness is compared around the second epidemic threshold for a small rate of awareness decay.
|
1311.5544
|
Salvatore De Martino
|
G. Buccheri, E. De Lauro, S. De Martino, M. Falanga
|
Experimental investigation of the trachea oscillation and its role in
the pitch production
| null | null | null | null |
q-bio.TO physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Several experiments have been performed to investigate the mechanical
vibrations associated with the trachea and larynx when Italian vowels are
emitted. The mechanical measurements were made using two laser Doppler
vibrometers (based on the well-known non-invasive optical measurement
technique) coupled with the acoustic field measured by high-quality certified
microphones. The recorded signals are analyzed using well-established methods
in the time and frequency domains. The signals of the mechanical vibrations
along the trachea and the larynx are compared with the acoustic ones. Focusing
attention on the signals' onsets, we observe an upward propagation of the
mechanical vibrations, for which it is possible to estimate a delay between
the traces. We observe that the mechanical oscillations at the trachea start
before the larynx and acoustic oscillations. Moreover, these tracheal
oscillations are self-oscillations in time and are associated with pitch
production, indicating a further hydrodynamic instability at the trachea. This
leads to new insights into the mechanism controlling pitch in speech.
|
[
{
"created": "Thu, 31 Oct 2013 13:47:15 GMT",
"version": "v1"
}
] |
2013-11-22
|
[
[
"Buccheri",
"G.",
""
],
[
"De Lauro",
"E.",
""
],
[
"De Martino",
"S.",
""
],
[
"Falanga",
"M.",
""
]
] |
Several experiments have been performed to investigate the mechanical vibrations associated with the trachea and larynx when Italian vowels are emitted. The mechanical measurements were made using two laser Doppler vibrometers (based on the well-known non-invasive optical measurement technique) coupled with the acoustic field measured by high-quality certified microphones. The recorded signals are analyzed using well-established methods in the time and frequency domains. The signals of the mechanical vibrations along the trachea and the larynx are compared with the acoustic ones. Focusing attention on the signals' onsets, we observe an upward propagation of the mechanical vibrations, for which it is possible to estimate a delay between the traces. We observe that the mechanical oscillations at the trachea start before the larynx and acoustic oscillations. Moreover, these tracheal oscillations are self-oscillations in time and are associated with pitch production, indicating a further hydrodynamic instability at the trachea. This leads to new insights into the mechanism controlling pitch in speech.
|
2005.09805
|
Cintia Menendez
|
Cintia A. Menendez, Fabian Bylehn, Gustavo R. Perez-Lemus, Walter
Alvarado and Juan J. de Pablo
|
Molecular Characterization of Ebselen Binding Activity to SARS-CoV-2
Main Protease
|
10 pages, 5 figures
| null | null | null |
q-bio.BM physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The Coronavirus Disease (COVID-19) pandemic caused by the SARS-coronavirus 2
(SARS-CoV-2) urgently calls for the design of drugs directed against this new
virus. Given its essential role in proteolytic processing, the main protease
Mpro has been identified as an attractive candidate for drugs against
SARS-CoV-2 and similar coronaviruses. Recent high-throughput screening studies
have identified a set of existing, small-molecule drugs as potent Mpro
inhibitors. Amongst these, Ebselen (2-Phenyl-1,2-benzoselenazol-3-one), a
glutathione peroxidase mimetic seleno-organic compound, is particularly
attractive. Recent experiments suggest that its effectiveness is higher than
that of other molecules that also act at the enzyme catalytic site. By relying
on extensive simulations with all-atom models, in this study we examine at a
molecular level the potential of Ebselen to decrease Mpro catalytic activity.
Our results indicate that Ebselen exhibits a distinct affinity for the
catalytic site cavity of Mpro. In addition, our molecular models reveal a
second, previously unknown binding site for Ebselen in the dimerization region
localized between the II and III domains of the protein. A detailed analysis of
the free energy of binding indicates that the affinity of Ebselen to this
second binding site is in fact significantly larger than that to the catalytic
site. A strain analysis indicates that Ebselen bound between the II-III domains
exerts a pronounced allosteric effect that regulates catalytic site access
through surface loop interactions, and induces a displacement and
reconfiguration of water hotspots, including the catalytic water, which could
interfere with normal enzymatic function. Taken together, these findings
provide a framework for the future design of more potent and specific Mpro
inhibitors, based on the Ebselen scaffold, that could lead to new therapeutic
strategies for COVID-19.
|
[
{
"created": "Wed, 20 May 2020 00:52:57 GMT",
"version": "v1"
}
] |
2020-05-21
|
[
[
"Menendez",
"Cintia A.",
""
],
[
"Bylehn",
"Fabian",
""
],
[
"Perez-Lemus",
"Gustavo R.",
""
],
[
"Alvarado",
"Walter",
""
],
[
"de Pablo",
"Juan J.",
""
]
] |
The Coronavirus Disease (COVID-19) pandemic caused by the SARS-coronavirus 2 (SARS-CoV-2) urgently calls for the design of drugs directed against this new virus. Given its essential role in proteolytic processing, the main protease Mpro has been identified as an attractive candidate for drugs against SARS-CoV-2 and similar coronaviruses. Recent high-throughput screening studies have identified a set of existing, small-molecule drugs as potent Mpro inhibitors. Amongst these, Ebselen (2-Phenyl-1,2-benzoselenazol-3-one), a glutathione peroxidase mimetic seleno-organic compound, is particularly attractive. Recent experiments suggest that its effectiveness is higher than that of other molecules that also act at the enzyme catalytic site. By relying on extensive simulations with all-atom models, in this study we examine at a molecular level the potential of Ebselen to decrease Mpro catalytic activity. Our results indicate that Ebselen exhibits a distinct affinity for the catalytic site cavity of Mpro. In addition, our molecular models reveal a second, previously unknown binding site for Ebselen in the dimerization region localized between the II and III domains of the protein. A detailed analysis of the free energy of binding indicates that the affinity of Ebselen to this second binding site is in fact significantly larger than that to the catalytic site. A strain analysis indicates that Ebselen bound between the II-III domains exerts a pronounced allosteric effect that regulates catalytic site access through surface loop interactions, and induces a displacement and reconfiguration of water hotspots, including the catalytic water, which could interfere with normal enzymatic function. Taken together, these findings provide a framework for the future design of more potent and specific Mpro inhibitors, based on the Ebselen scaffold, that could lead to new therapeutic strategies for COVID-19.
|
1503.07794
|
Bhavin Khatri
|
Bhavin S. Khatri and Richard A. Goldstein
|
A simple biophysical model predicts more rapid accumulation of hybrid
incompatibilities in small populations
|
13 pages, 6 figures
| null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by/3.0/
|
Speciation is fundamental to the huge diversity of life on Earth. Evidence
suggests reproductive isolation arises most commonly in allopatry with a higher
speciation rate in small populations. Current theory does not address this
dependence in the important weak mutation regime. Here, we examine a
biophysical model of speciation based on the binding of a protein transcription
factor to a DNA binding site, and how their independent co-evolution, in a
stabilizing landscape, of two allopatric lineages leads to incompatibilities.
Our results give a new prediction for the monomorphic regime of evolution,
consistent with data, that smaller populations should develop incompatibilities
more quickly. This arises because: 1) smaller populations have a greater initial
drift load, as there are more sequences that bind poorly than well, so fewer
substitutions are needed to reach incompatible regions of phenotype space; 2)
divergence is slower when the population size is larger than the inverse of
discrete differences in fitness. Further, we find longer sequences develop
incompatibilities more quickly at small population sizes, but more slowly at
large population sizes. The biophysical model thus represents a robust
mechanism of rapid reproductive isolation for small populations and large
sequences, that does not require peak-shifts or positive selection.
|
[
{
"created": "Thu, 26 Mar 2015 17:16:54 GMT",
"version": "v1"
}
] |
2015-03-27
|
[
[
"Khatri",
"Bhavin S.",
""
],
[
"Goldstein",
"Richard A.",
""
]
] |
Speciation is fundamental to the huge diversity of life on Earth. Evidence suggests reproductive isolation arises most commonly in allopatry with a higher speciation rate in small populations. Current theory does not address this dependence in the important weak mutation regime. Here, we examine a biophysical model of speciation based on the binding of a protein transcription factor to a DNA binding site, and how their independent co-evolution, in a stabilizing landscape, of two allopatric lineages leads to incompatibilities. Our results give a new prediction for the monomorphic regime of evolution, consistent with data, that smaller populations should develop incompatibilities more quickly. This arises because: 1) smaller populations have a greater initial drift load, as there are more sequences that bind poorly than well, so fewer substitutions are needed to reach incompatible regions of phenotype space; 2) divergence is slower when the population size is larger than the inverse of discrete differences in fitness. Further, we find longer sequences develop incompatibilities more quickly at small population sizes, but more slowly at large population sizes. The biophysical model thus represents a robust mechanism of rapid reproductive isolation for small populations and large sequences, that does not require peak-shifts or positive selection.
|
1805.10248
|
Wentian Li
|
Wentian Li, Dimitrios Thanos, Astero Provata
|
Quantifying Local Randomness in Human DNA and RNA Sequences Using Erdos
Motifs
|
4 figures
|
Journal of Theoretical Biology, 461:41-50 (2019)
|
10.1016/j.jtbi.2018.09.031
| null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In 1932, Paul Erdos asked whether a random walk constructed from a binary
sequence can achieve the lowest possible deviation (lowest discrepancy), for
the sequence itself and for all its subsequences formed by homogeneous
arithmetic progressions. Although avoiding low discrepancy is impossible for
infinite sequences, as recently proven by Terence Tao, attempts were made to
construct such sequences with finite lengths. We recognize that such
constructed sequences (we call these "Erdos sequences") exhibit certain
hallmarks of randomness at the local level: they show roughly equal frequencies
of subsequences, and at the same time exclude the trivial periodic patterns.
For the human DNA we examine the frequency of a set of Erdos motifs of
length-10 using three nucleotides-to-binary mappings. The particular length-10
Erdos sequence is derived from the length-11 Mathias sequence and is identical
with the first 10 digits of the Thue-Morse sequence, underscoring the fact that
both are deficient in periodicities. Our calculations indicate that: (1) the
purine (A and G)/pyrimidine (C and T) based Erdos motifs are greatly
underrepresented in the human genome, (2) the strong(G and C)/weak(A and T)
based Erdos motifs are slightly overrepresented, (3) the densities of the two
are negatively correlated, (4) the Erdos motifs based on all three mappings
being combined are slightly underrepresented, and (5) the strong/weak based
Erdos motifs are greatly overrepresented in the human messenger RNA sequences.
|
[
{
"created": "Fri, 25 May 2018 16:59:50 GMT",
"version": "v1"
},
{
"created": "Sat, 29 Sep 2018 18:40:47 GMT",
"version": "v2"
}
] |
2018-10-31
|
[
[
"Li",
"Wentian",
""
],
[
"Thanos",
"Dimitrios",
""
],
[
"Provata",
"Astero",
""
]
] |
In 1932, Paul Erdos asked whether a random walk constructed from a binary sequence can achieve the lowest possible deviation (lowest discrepancy), for the sequence itself and for all its subsequences formed by homogeneous arithmetic progressions. Although avoiding low discrepancy is impossible for infinite sequences, as recently proven by Terence Tao, attempts were made to construct such sequences with finite lengths. We recognize that such constructed sequences (we call these "Erdos sequences") exhibit certain hallmarks of randomness at the local level: they show roughly equal frequencies of subsequences, and at the same time exclude the trivial periodic patterns. For the human DNA we examine the frequency of a set of Erdos motifs of length-10 using three nucleotides-to-binary mappings. The particular length-10 Erdos sequence is derived from the length-11 Mathias sequence and is identical with the first 10 digits of the Thue-Morse sequence, underscoring the fact that both are deficient in periodicities. Our calculations indicate that: (1) the purine (A and G)/pyrimidine (C and T) based Erdos motifs are greatly underrepresented in the human genome, (2) the strong(G and C)/weak(A and T) based Erdos motifs are slightly overrepresented, (3) the densities of the two are negatively correlated, (4) the Erdos motifs based on all three mappings being combined are slightly underrepresented, and (5) the strong/weak based Erdos motifs are greatly overrepresented in the human messenger RNA sequences.
|
1711.09536
|
Shatrunjai Singh
|
Shatrunjai P. Singh and Swagata Karkare
|
Stress, Depression and Neuroplasticity
|
6 pages
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Modifications of signaling pathways and synapses owing to changing behaviors,
environments, various forms of neural modulation, and brain-tissue injuries are
defined as neuroplasticity in developmental neurology. The central purpose of
the review is to gain a better understanding of the relation between stress,
depression and neuroplasticity and explore potential therapeutic interventions
for enhancing neural resilience. We have also reviewed the role of different
factors like age, stress and sex on inducing neuroplasticity within various
brain regions.
|
[
{
"created": "Mon, 27 Nov 2017 04:57:02 GMT",
"version": "v1"
}
] |
2017-11-28
|
[
[
"Singh",
"Shatrunjai P.",
""
],
[
"Karkare",
"Swagata",
""
]
] |
Modifications of signaling pathways and synapses owing to changing behaviors, environments, various forms of neural modulation, and brain-tissue injuries are defined as neuroplasticity in developmental neurology. The central purpose of the review is to gain a better understanding of the relation between stress, depression and neuroplasticity and explore potential therapeutic interventions for enhancing neural resilience. We have also reviewed the role of different factors like age, stress and sex on inducing neuroplasticity within various brain regions.
|
1408.4221
|
Deok-Sun Lee
|
Deok-Sun Lee
|
Evolution of regulatory networks towards adaptability and stability in a
changing environment
|
11 pages, 6 Figures
|
Physical Review E 90, 052822 (2014)
|
10.1103/PhysRevE.90.052822
| null |
q-bio.MN physics.bio-ph q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Diverse biological networks exhibit universal features distinguished from
those of random networks, calling much attention to their origins and
implications. Here we propose a minimal evolution model of Boolean regulatory
networks, which evolve by selectively rewiring links towards enhancing
adaptability to a changing environment and stability against dynamical
perturbations. We find that sparse and heterogeneous connectivity patterns
emerge, which show qualitative agreement with real transcriptional regulatory
networks and metabolic networks. The characteristic scaling behavior of
stability reflects the balance between robustness and flexibility. The scaling
of fluctuation in the perturbation spread shows a dynamic crossover, which is
analyzed by investigating separately the stochasticity of internal dynamics and
the network structures different depending on the evolution pathways. Our study
delineates how the ambivalent pressure of evolution shapes biological networks,
which can be helpful for studying general complex systems interacting with
environments.
|
[
{
"created": "Tue, 19 Aug 2014 06:39:36 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Nov 2014 01:38:03 GMT",
"version": "v2"
}
] |
2014-11-26
|
[
[
"Lee",
"Deok-Sun",
""
]
] |
Diverse biological networks exhibit universal features distinguished from those of random networks, calling much attention to their origins and implications. Here we propose a minimal evolution model of Boolean regulatory networks, which evolve by selectively rewiring links towards enhancing adaptability to a changing environment and stability against dynamical perturbations. We find that sparse and heterogeneous connectivity patterns emerge, which show qualitative agreement with real transcriptional regulatory networks and metabolic networks. The characteristic scaling behavior of stability reflects the balance between robustness and flexibility. The scaling of fluctuation in the perturbation spread shows a dynamic crossover, which is analyzed by investigating separately the stochasticity of internal dynamics and the network structures, which differ depending on the evolution pathways. Our study delineates how the ambivalent pressure of evolution shapes biological networks, which can be helpful for studying general complex systems interacting with environments.
|
2007.09025
|
Alexandru Hening
|
Alexandru Hening and Dang H. Nguyen and Peter Chesson
|
A general theory of coexistence and extinction for stochastic ecological
communities
|
65 pages, 3 figures
|
J. Math. Biol. 82, 56 (2021)
| null | null |
q-bio.PE math.DS math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We analyze a general theory for coexistence and extinction of ecological
communities that are influenced by stochastic temporal environmental
fluctuations. The results apply to discrete time (stochastic difference
equations), continuous time (stochastic differential equations), compact and
non-compact state spaces and degenerate or non-degenerate noise. In addition,
we can also include in the dynamics auxiliary variables that model
environmental fluctuations, population structure, eco-environmental feedbacks
or other internal or external factors.
We are able to significantly generalize the recent discrete time results by
Benaim and Schreiber (Journal of Mathematical Biology '19) to non-compact state
spaces, and we provide stronger persistence and extinction results. The
continuous time results by Hening and Nguyen (Annals of Applied Probability
'18) are strengthened to include degenerate noise and auxiliary variables.
Using the general theory, we work out several examples. In discrete time, we
classify the dynamics when there are one or two species, and look at the Ricker
model, Log-normally distributed offspring models, lottery models, discrete
Lotka-Volterra models as well as models of perennial and annual organisms. For
the continuous time setting we explore models with a resource variable,
stochastic replicator models, and three dimensional Lotka-Volterra models.
|
[
{
"created": "Fri, 17 Jul 2020 14:41:20 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Feb 2021 17:53:19 GMT",
"version": "v2"
}
] |
2021-05-19
|
[
[
"Hening",
"Alexandru",
""
],
[
"Nguyen",
"Dang H.",
""
],
[
"Chesson",
"Peter",
""
]
] |
We analyze a general theory for coexistence and extinction of ecological communities that are influenced by stochastic temporal environmental fluctuations. The results apply to discrete time (stochastic difference equations), continuous time (stochastic differential equations), compact and non-compact state spaces and degenerate or non-degenerate noise. In addition, we can also include in the dynamics auxiliary variables that model environmental fluctuations, population structure, eco-environmental feedbacks or other internal or external factors. We are able to significantly generalize the recent discrete time results by Benaim and Schreiber (Journal of Mathematical Biology '19) to non-compact state spaces, and we provide stronger persistence and extinction results. The continuous time results by Hening and Nguyen (Annals of Applied Probability '18) are strengthened to include degenerate noise and auxiliary variables. Using the general theory, we work out several examples. In discrete time, we classify the dynamics when there are one or two species, and look at the Ricker model, Log-normally distributed offspring models, lottery models, discrete Lotka-Volterra models as well as models of perennial and annual organisms. For the continuous time setting we explore models with a resource variable, stochastic replicator models, and three dimensional Lotka-Volterra models.
|
1306.4021
|
Simon Gravel
|
Simon Gravel, Fouad Zakharia, Andres Moreno-Estrada, Jake K Byrnes,
Marina Muzzio, Juan L. Rodriguez-Flores, Eimear E. Kenny, Christopher R.
Gignoux, Brian K. Maples, Wilfried Guiblet, Julie Dutil, Marc Via, Karla
Sandoval, Gabriel Bedoya, Taras K Oleksyk, Andres Ruiz-Linares, Esteban G
Burchard, Juan Carlos Martinez-Cruzado, Carlos D. Bustamante, and The 1000
Genomes Project
|
Reconstructing Native American Migrations from Whole-genome and
Whole-exome Data
|
30 pages, inludes supplement. v2 contains clarifications, extra
analyses, and a change in the language classification scheme used
| null | null | null |
q-bio.PE q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
There is great scientific and popular interest in understanding the genetic
history of populations in the Americas. We wish to understand when different
regions of the continent were inhabited, where settlers came from, and how
current inhabitants relate genetically to earlier populations. Recent studies
unraveled parts of the genetic history of the continent using genotyping arrays
and uniparental markers. The 1000 Genomes Project provides a unique opportunity
for improving our understanding of population genetic history by providing over
a hundred sequenced low coverage genomes and exomes from Colombian (CLM),
Mexican-American (MXL), and Puerto Rican (PUR) populations. Here, we explore
the genomic contributions of African, European, and Native American ancestry to
these populations. Estimated Native American ancestry is 48% in MXL, 25% in
CLM, and 13% in PUR. Native American ancestry in PUR is most closely related to
populations surrounding the Orinoco River basin, confirming the South American
ancestry of the Ta\'ino people of the Caribbean. We present new methods
to estimate the allele frequencies in the Native American fraction of the
populations, and model their distribution using a demographic model for three
ancestral Native American populations. These ancestral populations likely split
in close succession: the most likely scenario, based on a peopling of the
Americas 16 thousand years ago (kya), supports that the MXL Ancestors split
12.2kya, with a subsequent split of the ancestors to CLM and PUR 11.7kya. The
model also features effective populations of 62,000 in Mexico, 8,700 in
Colombia, and 1,900 in Puerto Rico. Modeling Identity-by-descent and ancestry
tract length, we show that post-contact populations differ markedly in their
effective sizes and migration patterns, with Puerto Rico showing the smallest
effective size and the earliest migration from Europe.
|
[
{
"created": "Mon, 17 Jun 2013 21:15:39 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Nov 2013 23:00:19 GMT",
"version": "v2"
}
] |
2013-11-19
|
[
[
"Gravel",
"Simon",
""
],
[
"Zakharia",
"Fouad",
""
],
[
"Moreno-Estrada",
"Andres",
""
],
[
"Byrnes",
"Jake K",
""
],
[
"Muzzio",
"Marina",
""
],
[
"Rodriguez-Flores",
"Juan L.",
""
],
[
"Kenny",
"Eimear E.",
""
],
[
"Gignoux",
"Christopher R.",
""
],
[
"Maples",
"Brian K.",
""
],
[
"Guiblet",
"Wilfried",
""
],
[
"Dutil",
"Julie",
""
],
[
"Via",
"Marc",
""
],
[
"Sandoval",
"Karla",
""
],
[
"Bedoya",
"Gabriel",
""
],
[
"Oleksyk",
"Taras K",
""
],
[
"Ruiz-Linares",
"Andres",
""
],
[
"Burchard",
"Esteban G",
""
],
[
"Martinez-Cruzado",
"Juan Carlos",
""
],
[
"Bustamante",
"Carlos D.",
""
],
[
"Project",
"The 1000 Genomes",
""
]
] |
There is great scientific and popular interest in understanding the genetic history of populations in the Americas. We wish to understand when different regions of the continent were inhabited, where settlers came from, and how current inhabitants relate genetically to earlier populations. Recent studies unraveled parts of the genetic history of the continent using genotyping arrays and uniparental markers. The 1000 Genomes Project provides a unique opportunity for improving our understanding of population genetic history by providing over a hundred sequenced low coverage genomes and exomes from Colombian (CLM), Mexican-American (MXL), and Puerto Rican (PUR) populations. Here, we explore the genomic contributions of African, European, and Native American ancestry to these populations. Estimated Native American ancestry is 48% in MXL, 25% in CLM, and 13% in PUR. Native American ancestry in PUR is most closely related to populations surrounding the Orinoco River basin, confirming the South American ancestry of the Ta\'ino people of the Caribbean. We present new methods to estimate the allele frequencies in the Native American fraction of the populations, and model their distribution using a demographic model for three ancestral Native American populations. These ancestral populations likely split in close succession: the most likely scenario, based on a peopling of the Americas 16 thousand years ago (kya), supports that the MXL Ancestors split 12.2kya, with a subsequent split of the ancestors to CLM and PUR 11.7kya. The model also features effective populations of 62,000 in Mexico, 8,700 in Colombia, and 1,900 in Puerto Rico. Modeling Identity-by-descent and ancestry tract length, we show that post-contact populations differ markedly in their effective sizes and migration patterns, with Puerto Rico showing the smallest effective size and the earliest migration from Europe.
|
2209.13530
|
Islem Rekik
|
Imen Jegham and Islem Rekik
|
Meta-RegGNN: Predicting Verbal and Full-Scale Intelligence Scores using
Graph Neural Networks and Meta-Learning
| null | null | null | null |
q-bio.NC cs.AI cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Decrypting intelligence from the human brain construct is vital in the
detection of particular neurological disorders. Recently, functional brain
connectomes have been used successfully to predict behavioral scores. However,
state-of-the-art methods, on one hand, neglect the topological properties of
the connectomes and, on the other hand, fail to solve the high inter-subject
brain heterogeneity. To address these limitations, we propose a novel
regression graph neural network through meta-learning, namely Meta-RegGNN, for
predicting behavioral scores from brain connectomes. The parameters of our
proposed regression GNN are explicitly trained so that a small number of
gradient steps combined with a small training data amount produces a good
generalization to unseen brain connectomes. Our results on verbal and
full-scale intelligence quotient (IQ) prediction outperform existing methods in
both neurotypical and autism spectrum disorder cohorts. Furthermore, we show
that our proposed approach ensures generalizability, particularly for autistic
subjects. Our Meta-RegGNN source code is available at
https://github.com/basiralab/Meta-RegGNN.
|
[
{
"created": "Wed, 14 Sep 2022 07:19:03 GMT",
"version": "v1"
}
] |
2022-09-28
|
[
[
"Jegham",
"Imen",
""
],
[
"Rekik",
"Islem",
""
]
] |
Decrypting intelligence from the human brain construct is vital in the detection of particular neurological disorders. Recently, functional brain connectomes have been used successfully to predict behavioral scores. However, state-of-the-art methods, on one hand, neglect the topological properties of the connectomes and, on the other hand, fail to solve the high inter-subject brain heterogeneity. To address these limitations, we propose a novel regression graph neural network through meta-learning, namely Meta-RegGNN, for predicting behavioral scores from brain connectomes. The parameters of our proposed regression GNN are explicitly trained so that a small number of gradient steps combined with a small training data amount produces a good generalization to unseen brain connectomes. Our results on verbal and full-scale intelligence quotient (IQ) prediction outperform existing methods in both neurotypical and autism spectrum disorder cohorts. Furthermore, we show that our proposed approach ensures generalizability, particularly for autistic subjects. Our Meta-RegGNN source code is available at https://github.com/basiralab/Meta-RegGNN.
|
1809.08214
|
Anatoly Sorokin
|
Anatoly Sorokin, Oksana Sorokina and J. Douglas Armstrong
|
RKappa: Software for Analyzing Rule-Based Models
|
33 pages, 13 figures, this manuscript was prepared for the Methods in
Molecular Biology series
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
RKappa is a framework for the development, simulation and analysis of
rule-based models within the mature, statistically empowered R environment. It is
designed for model editing, parameter identification, simulation, sensitivity
analysis and visualisation. The framework is optimised for high-performance
computing platforms and facilitates analysis of large-scale systems biology
models where knowledge of exact mechanisms is limited and parameter values are
uncertain. The RKappa software is an open source (GPL3 license) package for R,
which is freely available online ( https://github.com/lptolik/R4Kappa ).
|
[
{
"created": "Fri, 21 Sep 2018 17:15:53 GMT",
"version": "v1"
}
] |
2018-09-24
|
[
[
"Sorokin",
"Anatoly",
""
],
[
"Sorokina",
"Oksana",
""
],
[
"Armstrong",
"J. Douglas",
""
]
] |
RKappa is a framework for the development, simulation and analysis of rule-based models within the mature, statistically empowered R environment. It is designed for model editing, parameter identification, simulation, sensitivity analysis and visualisation. The framework is optimised for high-performance computing platforms and facilitates analysis of large-scale systems biology models where knowledge of exact mechanisms is limited and parameter values are uncertain. The RKappa software is an open source (GPL3 license) package for R, which is freely available online ( https://github.com/lptolik/R4Kappa ).
|
2407.10019
|
Trevor Reckell
|
Trevor Reckell, Beckett Sterner, Petar Jevti\'c, Reggie Davidrajuh
|
A Numerical Comparison of Petri Net and Ordinary Differential Equation
SIR Component Models
|
27 pages, 11 figures, six equations, and associated code can be found
in GitHub repository
https://github.com/trevorreckell/Numerical-Comparison-of-PN-vs-ODE-for-SIR
| null | null | null |
q-bio.OT
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Petri nets are a promising modeling framework for epidemiology, including the
spread of disease across populations or within an individual. In particular,
the Susceptible-Infectious-Recovered (SIR) compartment model is foundational
for population epidemiological modeling and has been implemented in several
prior Petri net studies. However, the SIR model is generally stated as a system
of ordinary differential equations (ODEs) with continuous time and variables,
while Petri nets are discrete event simulations. To our knowledge, no prior
study has investigated the numerical equivalence of Petri net SIR models to the
classical ODE formulation. We introduce crucial numerical techniques for
implementing SIR models in the GPenSim package for Petri net simulations. We
show that these techniques are critical for Petri net SIR models and show a
relative root mean squared error of less than 1% compared to ODE simulations
for biologically relevant parameter ranges. We conclude that Petri nets provide
a valid framework for modeling SIR-type dynamics using biologically relevant
parameter values provided that the other PN structures we outline are also
implemented.
|
[
{
"created": "Sat, 13 Jul 2024 22:31:15 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Jul 2024 15:16:26 GMT",
"version": "v2"
}
] |
2024-07-18
|
[
[
"Reckell",
"Trevor",
""
],
[
"Sterner",
"Beckett",
""
],
[
"Jevtić",
"Petar",
""
],
[
"Davidrajuh",
"Reggie",
""
]
] |
Petri nets are a promising modeling framework for epidemiology, including the spread of disease across populations or within an individual. In particular, the Susceptible-Infectious-Recovered (SIR) compartment model is foundational for population epidemiological modeling and has been implemented in several prior Petri net studies. However, the SIR model is generally stated as a system of ordinary differential equations (ODEs) with continuous time and variables, while Petri nets are discrete event simulations. To our knowledge, no prior study has investigated the numerical equivalence of Petri net SIR models to the classical ODE formulation. We introduce crucial numerical techniques for implementing SIR models in the GPenSim package for Petri net simulations. We show that these techniques are critical for Petri net SIR models and show a relative root mean squared error of less than 1% compared to ODE simulations for biologically relevant parameter ranges. We conclude that Petri nets provide a valid framework for modeling SIR-type dynamics using biologically relevant parameter values provided that the other PN structures we outline are also implemented.
|
q-bio/0611042
|
Stefan Carstea
|
A.S.Carstea
|
Proteomic nonlinear waves in networks of transcriptional regulators
|
4 pages, 5 figures
| null | null | null |
q-bio.GN
| null |
A chain of connected genes with activation-repression links is analysed. It
is shown that for various promoter activity functions (parametrised by a Hill
coefficient) the equations describing the concentrations of transcription
factors are of differential-difference KdV type with perturbations. In the case
of a large Hill coefficient the proteomic signal along the gene network is given
by a superposition of perturbed dark solitons of the defocusing
differential-difference mKdV equation. Biological implications are discussed.
|
[
{
"created": "Tue, 14 Nov 2006 10:03:39 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Carstea",
"A. S.",
""
]
] |
A chain of connected genes with activation-repression links is analysed. It is shown that for various promoter activity functions (parametrised by a Hill coefficient) the equations describing the concentrations of transcription factors are of differential-difference KdV type with perturbations. In the case of a large Hill coefficient the proteomic signal along the gene network is given by a superposition of perturbed dark solitons of the defocusing differential-difference mKdV equation. Biological implications are discussed.
|
2212.04953
|
Meryem Banu Cavlak
|
Meryem Banu Cavlak, Gagandeep Singh, Mohammed Alser, Can Firtina,
Jo\"el Lindegger, Mohammad Sadrosadati, Nika Mansouri Ghiasi, Can Alkan, Onur
Mutlu
|
TargetCall: Eliminating the Wasted Computation in Basecalling via
Pre-Basecalling Filtering
| null | null | null | null |
q-bio.GN cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Basecalling is an essential step in nanopore sequencing analysis where the
raw signals of nanopore sequencers are converted into nucleotide sequences,
i.e., reads. State-of-the-art basecallers employ complex deep learning models
to achieve high basecalling accuracy. This makes basecalling
computationally inefficient and memory-hungry, bottlenecking the entire genome
analysis pipeline. However, for many applications, the majority of reads do not
match the reference genome of interest (i.e., the target reference) and thus are
discarded in later steps of the genomics pipeline, wasting the basecalling
computation. To overcome this issue, we propose TargetCall, the first
pre-basecalling filter to eliminate the wasted computation in basecalling.
TargetCall's key idea is to discard reads that will not match the target
reference (i.e., off-target reads) prior to basecalling. TargetCall consists of
two main components: (1) LightCall, a lightweight neural network basecaller
that produces noisy reads; and (2) Similarity Check, which labels each of these
noisy reads as on-target or off-target by matching them to the target
reference. TargetCall aims to filter out all off-target reads before
basecalling. The highly-accurate but slow basecalling is performed only on the
raw signals whose noisy reads are labeled as on-target. Our thorough
experimental evaluations using both real and simulated data show that
TargetCall 1) improves the end-to-end basecalling performance while maintaining
high sensitivity in keeping on-target reads, 2) maintains high accuracy in
downstream analysis, 3) precisely filters out up to 94.71% of off-target reads,
and 4) achieves better performance, throughput, sensitivity, precision, and
generality compared to prior works. We open-source TargetCall at
https://github.com/CMU-SAFARI/TargetCall
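The Similarity Check idea described above can be illustrated with a toy k-mer filter. This is a hedged sketch, not TargetCall's actual matching strategy; `min_hits` is a hypothetical threshold introduced here for illustration.

```python
# Illustrative sketch (assumption: NOT TargetCall's real similarity check):
# label a read as on-target or off-target by counting exact k-mer hits
# against a target reference.
def kmer_index(reference, k):
    # Collect every length-k substring of the reference into a set.
    return {reference[j:j + k] for j in range(len(reference) - k + 1)}

def is_on_target(read, index, k, min_hits):
    # Keep the read (on-target) when enough of its k-mers hit the reference.
    hits = sum(1 for j in range(len(read) - k + 1) if read[j:j + k] in index)
    return hits >= min_hits
```

In TargetCall the reads filtered here would be LightCall's noisy outputs, and a practical similarity check must tolerate basecalling noise rather than require exact matches; only on-target reads proceed to the accurate but slow basecaller.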
|
[
{
"created": "Fri, 9 Dec 2022 16:03:34 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Sep 2023 15:42:54 GMT",
"version": "v2"
}
] |
2023-09-15
|
[
[
"Cavlak",
"Meryem Banu",
""
],
[
"Singh",
"Gagandeep",
""
],
[
"Alser",
"Mohammed",
""
],
[
"Firtina",
"Can",
""
],
[
"Lindegger",
"Joël",
""
],
[
"Sadrosadati",
"Mohammad",
""
],
[
"Ghiasi",
"Nika Mansouri",
""
],
[
"Alkan",
"Can",
""
],
[
"Mutlu",
"Onur",
""
]
] |
Basecalling is an essential step in nanopore sequencing analysis where the raw signals of nanopore sequencers are converted into nucleotide sequences, i.e., reads. State-of-the-art basecallers employ complex deep learning models to achieve high basecalling accuracy. This makes basecalling computationally inefficient and memory-hungry, bottlenecking the entire genome analysis pipeline. However, for many applications, the majority of reads do not match the reference genome of interest (i.e., the target reference) and thus are discarded in later steps of the genomics pipeline, wasting the basecalling computation. To overcome this issue, we propose TargetCall, the first pre-basecalling filter to eliminate the wasted computation in basecalling. TargetCall's key idea is to discard reads that will not match the target reference (i.e., off-target reads) prior to basecalling. TargetCall consists of two main components: (1) LightCall, a lightweight neural network basecaller that produces noisy reads; and (2) Similarity Check, which labels each of these noisy reads as on-target or off-target by matching them to the target reference. TargetCall aims to filter out all off-target reads before basecalling. The highly-accurate but slow basecalling is performed only on the raw signals whose noisy reads are labeled as on-target. Our thorough experimental evaluations using both real and simulated data show that TargetCall 1) improves the end-to-end basecalling performance while maintaining high sensitivity in keeping on-target reads, 2) maintains high accuracy in downstream analysis, 3) precisely filters out up to 94.71% of off-target reads, and 4) achieves better performance, throughput, sensitivity, precision, and generality compared to prior works. We open-source TargetCall at https://github.com/CMU-SAFARI/TargetCall
|
1311.6222
|
Da Zhou Dr.
|
Da Zhou and Yue Wang and Bin Wu
|
A multi-phenotypic cancer model with cell plasticity
|
29 pages,6 figures
| null |
10.1016/j.jtbi.2014.04.039
| null |
q-bio.CB q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The conventional cancer stem cell (CSC) theory indicates a hierarchy of CSCs
and non-stem cancer cells (NSCCs), that is, CSCs can differentiate into NSCCs
but not vice versa. However, an alternative paradigm of CSC theory with
reversible cell plasticity among cancer cells has received much attention very
recently. Here we present a generalized multi-phenotypic cancer model by
integrating cell plasticity with the conventional hierarchical structure of
cancer cells. We prove that under a very weak assumption, the nonlinear dynamics
of multi-phenotypic proportions in our model has only one stable steady state
and no stable limit cycle. This result theoretically explains the phenotypic
equilibrium phenomena reported in various cancer cell lines. Furthermore,
according to the transient analysis of our model, it is found that cancer cell
plasticity plays an essential role in maintaining the phenotypic diversity in
cancer especially during the transient dynamics. Two biological examples with
experimental data show that the phenotypic conversions from NSCCs to CSCs
greatly contribute to the transient growth of the CSC proportion shortly after
its drastic reduction. In particular, an interesting overshooting phenomenon of
the CSC proportion arises in the three-phenotypic example. Our work may pave the
way for modeling and analyzing the multi-phenotypic cell population dynamics
with cell plasticity.
|
[
{
"created": "Mon, 25 Nov 2013 07:57:36 GMT",
"version": "v1"
},
{
"created": "Fri, 9 May 2014 07:41:51 GMT",
"version": "v2"
}
] |
2014-05-29
|
[
[
"Zhou",
"Da",
""
],
[
"Wang",
"Yue",
""
],
[
"Wu",
"Bin",
""
]
] |
The conventional cancer stem cell (CSC) theory indicates a hierarchy of CSCs and non-stem cancer cells (NSCCs), that is, CSCs can differentiate into NSCCs but not vice versa. However, an alternative paradigm of CSC theory with reversible cell plasticity among cancer cells has received much attention very recently. Here we present a generalized multi-phenotypic cancer model by integrating cell plasticity with the conventional hierarchical structure of cancer cells. We prove that under a very weak assumption, the nonlinear dynamics of multi-phenotypic proportions in our model has only one stable steady state and no stable limit cycle. This result theoretically explains the phenotypic equilibrium phenomena reported in various cancer cell lines. Furthermore, according to the transient analysis of our model, it is found that cancer cell plasticity plays an essential role in maintaining the phenotypic diversity in cancer especially during the transient dynamics. Two biological examples with experimental data show that the phenotypic conversions from NSCCs to CSCs greatly contribute to the transient growth of the CSC proportion shortly after its drastic reduction. In particular, an interesting overshooting phenomenon of the CSC proportion arises in the three-phenotypic example. Our work may pave the way for modeling and analyzing the multi-phenotypic cell population dynamics with cell plasticity.
|
1003.4044
|
Denis Tolkunov
|
George Locke, Denis Tolkunov, Zarmik Moqtaderi, Kevin Struhl, and
Alexandre V. Morozov
|
High-throughput sequencing reveals a simple model of nucleosome
energetics
|
36 pages, 13 figures
|
PNAS vol. 107 no. 49 (2010), 20998-21003
|
10.1073/pnas.1003838107
| null |
q-bio.GN q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We use nucleosome maps obtained by high-throughput sequencing to study
sequence specificity of intrinsic histone-DNA interactions. In contrast with
previous approaches, we employ an analogy between a classical one-dimensional
fluid of finite-size particles in an arbitrary external potential and arrays of
DNA-bound histone octamers. We derive an analytical solution to infer free
energies of nucleosome formation directly from nucleosome occupancies measured
in high-throughput experiments. The sequence-specific part of free energies is
then captured by fitting them to a sum of energies assigned to individual
nucleotide motifs. We have developed hierarchical models of increasing
complexity and spatial resolution, establishing that nucleosome occupancies can
be explained by systematic differences in mono- and dinucleotide content
between nucleosomal and linker DNA sequences, with periodic dinucleotide
distributions and longer sequence motifs playing a secondary role. Furthermore,
similar sequence signatures are exhibited by control experiments in which
genomic DNA is either sonicated or digested with micrococcal nuclease in the
absence of nucleosomes, suggesting that current predictions based on
high-throughput nucleosome positioning maps may be biased by experimental
artifacts.
|
[
{
"created": "Mon, 22 Mar 2010 02:15:58 GMT",
"version": "v1"
}
] |
2015-05-18
|
[
[
"Locke",
"George",
""
],
[
"Tolkunov",
"Denis",
""
],
[
"Moqtaderi",
"Zarmik",
""
],
[
"Struhl",
"Kevin",
""
],
[
"Morozov",
"Alexandre V.",
""
]
] |
We use nucleosome maps obtained by high-throughput sequencing to study sequence specificity of intrinsic histone-DNA interactions. In contrast with previous approaches, we employ an analogy between a classical one-dimensional fluid of finite-size particles in an arbitrary external potential and arrays of DNA-bound histone octamers. We derive an analytical solution to infer free energies of nucleosome formation directly from nucleosome occupancies measured in high-throughput experiments. The sequence-specific part of free energies is then captured by fitting them to a sum of energies assigned to individual nucleotide motifs. We have developed hierarchical models of increasing complexity and spatial resolution, establishing that nucleosome occupancies can be explained by systematic differences in mono- and dinucleotide content between nucleosomal and linker DNA sequences, with periodic dinucleotide distributions and longer sequence motifs playing a secondary role. Furthermore, similar sequence signatures are exhibited by control experiments in which genomic DNA is either sonicated or digested with micrococcal nuclease in the absence of nucleosomes, suggesting that current predictions based on high-throughput nucleosome positioning maps may be biased by experimental artifacts.
|
0707.3465
|
Claus O. Wilke
|
Eric Brunet, Igor M. Rouzine, Claus O. Wilke
|
The stochastic edge in adaptive evolution
|
36 pages, 4 figures
| null | null | null |
q-bio.PE
| null |
In a recent article, Desai and Fisher (2007) proposed that the speed of
adaptation in an asexual population is determined by the dynamics of the
stochastic edge of the population, that is, by the emergence and subsequent
establishment of rare mutants that exceed the fitness of all sequences
currently present in the population. Desai and Fisher perform an elaborate
stochastic calculation of the mean time $\tau$ until a new class of mutants has
been established, and interpret $1/\tau$ as the speed of adaptation. As they
note, however, their calculations are valid only for moderate speeds. This
limitation arises from their method to determine $\tau$: Desai and Fisher
back-extrapolate the value of $\tau$ from the best-fit class' exponential
growth at infinite time. This approach is not valid when the population adapts
rapidly, because in this case the best-fit class grows non-exponentially during
the relevant time interval. Here, we substantially extend Desai and Fisher's
analysis of the stochastic edge. We show that we can apply Desai and Fisher's
method to high speeds by either exponentially back-extrapolating from finite
time or using a non-exponential back-extrapolation. Our results are compatible
with predictions made using a different analytical approach (Rouzine et al.
2003, 2007), and agree well with numerical simulations.
|
[
{
"created": "Mon, 23 Jul 2007 23:31:29 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Dec 2007 23:22:06 GMT",
"version": "v2"
}
] |
2007-12-19
|
[
[
"Brunet",
"Eric",
""
],
[
"Rouzine",
"Igor M.",
""
],
[
"Wilke",
"Claus O.",
""
]
] |
In a recent article, Desai and Fisher (2007) proposed that the speed of adaptation in an asexual population is determined by the dynamics of the stochastic edge of the population, that is, by the emergence and subsequent establishment of rare mutants that exceed the fitness of all sequences currently present in the population. Desai and Fisher perform an elaborate stochastic calculation of the mean time $\tau$ until a new class of mutants has been established, and interpret $1/\tau$ as the speed of adaptation. As they note, however, their calculations are valid only for moderate speeds. This limitation arises from their method to determine $\tau$: Desai and Fisher back-extrapolate the value of $\tau$ from the best-fit class' exponential growth at infinite time. This approach is not valid when the population adapts rapidly, because in this case the best-fit class grows non-exponentially during the relevant time interval. Here, we substantially extend Desai and Fisher's analysis of the stochastic edge. We show that we can apply Desai and Fisher's method to high speeds by either exponentially back-extrapolating from finite time or using a non-exponential back-extrapolation. Our results are compatible with predictions made using a different analytical approach (Rouzine et al. 2003, 2007), and agree well with numerical simulations.
|
q-bio/0311031
|
Dmitri Volchenkov
|
D. Volchenkov, R. Lima
|
Homogeneous and Scalable Gene Expression Regulatory Networks with Random
Layouts of Switching Parameters
|
LaTeX, 30 pages, 20 pictures
| null | null | null |
q-bio.MN
| null |
We consider a model of large regulatory gene expression networks where the
thresholds activating the sigmoidal interactions between genes and the signs of
these interactions are shuffled randomly. Such an approach allows for a
qualitative understanding of network dynamics in the absence of empirical data
on the large genomes of living organisms. The local dynamics of network
nodes exhibits multistationarity and oscillations and depends crucially
upon the global topology of a "maximal" graph (comprising all possible
interactions between genes in the network). The long-time behavior observed in
networks defined on homogeneous "maximal" graphs is characterized by the
fraction of positive interactions ($0\leq \eta\leq 1$) allowed between genes.
There exists a critical value $\eta_c<1$ such that if $\eta<\eta_c$, the
oscillations persist in the system; otherwise, when $\eta>\eta_c,$ it tends to
a fixed point (whose position in the phase space is determined by the initial
conditions and the particular layout of switching parameters). In networks
defined on inhomogeneous directed graphs depleted in cycles, no oscillations
arise in the system even if negative interactions between genes are present
in abundance ($\eta_c=0$). For such networks, bidirectional edges (if they
occur) influence the dynamics essentially. In particular, if a number of
edges in the "maximal" graph are bidirectional, oscillations can arise and
persist in the system at any low rate of negative interactions between genes
($\eta_c=1$). The local dynamics observed in inhomogeneous scalable regulatory
networks is less sensitive to the choice of initial conditions. Scale-free
networks demonstrate high error tolerance.
|
[
{
"created": "Fri, 21 Nov 2003 18:08:07 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Volchenkov",
"D.",
""
],
[
"Lima",
"R.",
""
]
] |
We consider a model of large regulatory gene expression networks where the thresholds activating the sigmoidal interactions between genes and the signs of these interactions are shuffled randomly. Such an approach allows for a qualitative understanding of network dynamics in the absence of empirical data on the large genomes of living organisms. The local dynamics of network nodes exhibits multistationarity and oscillations and depends crucially upon the global topology of a "maximal" graph (comprising all possible interactions between genes in the network). The long-time behavior observed in networks defined on homogeneous "maximal" graphs is characterized by the fraction of positive interactions ($0\leq \eta\leq 1$) allowed between genes. There exists a critical value $\eta_c<1$ such that if $\eta<\eta_c$, the oscillations persist in the system; otherwise, when $\eta>\eta_c,$ it tends to a fixed point (whose position in the phase space is determined by the initial conditions and the particular layout of switching parameters). In networks defined on inhomogeneous directed graphs depleted in cycles, no oscillations arise in the system even if negative interactions between genes are present in abundance ($\eta_c=0$). For such networks, bidirectional edges (if they occur) influence the dynamics essentially. In particular, if a number of edges in the "maximal" graph are bidirectional, oscillations can arise and persist in the system at any low rate of negative interactions between genes ($\eta_c=1$). The local dynamics observed in inhomogeneous scalable regulatory networks is less sensitive to the choice of initial conditions. Scale-free networks demonstrate high error tolerance.
|
2004.01966
|
Kei Tokita
|
Atsuki Nakai, Yoko Inui, Kei Tokita
|
Facultative predation can alter the ant-aphid population
|
28 pages, 6 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although ant--aphid interactions are the most typical example of mutualism
between insect species, some studies suggest that ant attendance is not always
advantageous for the aphids because they may pay a physiological cost. In this
study, we propose a new mathematical model of an ant--aphid system considering
the costs of ant attendance. It includes both mutualism and predation. In the
model, we incorporate not only the trade-off between the intrinsic growth rate
of aphids and the honeydew reward for ants, but also the facultative predation
of aphids by ants. The analysis and computer simulations of the two-dimensional
nonlinear dynamical system with functional response produce fixed points and
also novel and complex bifurcations. These results suggest that a higher degree
of dependence of the aphids on the ants does not always enhance the abundance
of the aphids. In contrast, the model without facultative predation gives a
simple prediction, that is, the higher the degree of dependence, the more
abundant the aphids are. The present study predicts two overall scenarios for
an ant--aphid system with mutualism and facultative predation: (1) aphids with
a lower intrinsic growth rate and many attending ants and (2) aphids with a
higher intrinsic growth rate and fewer attending ants. This seems to explain
why there are two lineages of aphids: one is associated with ants and the other
is not.
|
[
{
"created": "Sat, 4 Apr 2020 16:11:06 GMT",
"version": "v1"
}
] |
2020-04-07
|
[
[
"Nakai",
"Atsuki",
""
],
[
"Inui",
"Yoko",
""
],
[
"Tokita",
"Kei",
""
]
] |
Although ant--aphid interactions are the most typical example of mutualism between insect species, some studies suggest that ant attendance is not always advantageous for the aphids because they may pay a physiological cost. In this study, we propose a new mathematical model of an ant--aphid system considering the costs of ant attendance. It includes both mutualism and predation. In the model, we incorporate not only the trade-off between the intrinsic growth rate of aphids and the honeydew reward for ants, but also the facultative predation of aphids by ants. The analysis and computer simulations of the two-dimensional nonlinear dynamical system with functional response produce fixed points and also novel and complex bifurcations. These results suggest that a higher degree of dependence of the aphids on the ants does not always enhance the abundance of the aphids. In contrast, the model without facultative predation gives a simple prediction, that is, the higher the degree of dependence, the more abundant the aphids are. The present study predicts two overall scenarios for an ant--aphid system with mutualism and facultative predation: (1) aphids with a lower intrinsic growth rate and many attending ants and (2) aphids with a higher intrinsic growth rate and fewer attending ants. This seems to explain why there are two lineages of aphids: one is associated with ants and the other is not.
|
2005.12376
|
Sebasti\'an Contreras
|
Sebasti\'an Contreras, Juan Pablo Biron-Lattes, H. Andr\'es
Villavicencio, David Medina-Ortiz, Nyna Llanovarced-Kawles, \'Alvaro
Olivera-Nappa
|
Statistically-based methodology for revealing real contagion trends and
correcting delay-induced errors in the assessment of COVID-19 pandemic
| null |
Chaos Solitons Fractals 139 (2020) 110087
|
10.1016/j.chaos.2020.110087
| null |
q-bio.PE stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The COVID-19 pandemic has reshaped our world on a timescale much shorter than
we can understand. Particularities of SARS-CoV-2, such as its persistence
on surfaces and the lack of a curative treatment or vaccine against COVID-19,
have pushed authorities to apply restrictive policies to control its spreading.
As data drove most of the decisions made in this global contingency, their
quality is a critical variable for decision-making actors, and therefore should
be carefully curated. In this work, we analyze the sources of error in
typically reported epidemiological variables and usual tests used for
diagnosis, and their impact on our understanding of COVID-19 spreading
dynamics. We address the existence of different delays in the report of new
cases, induced by the incubation time of the virus and testing-diagnosis time
gaps, and other error sources related to the sensitivity/specificity of the
tests used to diagnose COVID-19. Using a statistically-based algorithm, we
perform a temporal reclassification of cases to avoid delay-induced errors,
building up new epidemiological curves centered on the day when the contagion
effectively occurred. We also statistically enhance the robustness behind the
discharge/recovery clinical criteria in the absence of a direct test, which is
typically the case of non-first world countries, where the limited testing
capabilities are fully dedicated to the evaluation of new cases. Finally, we
applied our methodology to assess the evolution of the pandemic in Chile
through the Effective Reproduction Number $R_t$, identifying different moments
in which data was misleading governmental actions. In doing so, we aim to raise
public awareness of the need for proper data reporting and processing protocols
for epidemiological modelling and predictions.
|
[
{
"created": "Mon, 25 May 2020 20:15:37 GMT",
"version": "v1"
},
{
"created": "Wed, 27 May 2020 08:52:12 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Jun 2020 19:13:17 GMT",
"version": "v3"
}
] |
2020-07-14
|
[
[
"Contreras",
"Sebastián",
""
],
[
"Biron-Lattes",
"Juan Pablo",
""
],
[
"Villavicencio",
"H. Andrés",
""
],
[
"Medina-Ortiz",
"David",
""
],
[
"Llanovarced-Kawles",
"Nyna",
""
],
[
"Olivera-Nappa",
"Álvaro",
""
]
] |
The COVID-19 pandemic has reshaped our world on a timescale much shorter than we can understand. Particularities of SARS-CoV-2, such as its persistence on surfaces and the lack of a curative treatment or vaccine against COVID-19, have pushed authorities to apply restrictive policies to control its spreading. As data drove most of the decisions made in this global contingency, their quality is a critical variable for decision-making actors, and therefore should be carefully curated. In this work, we analyze the sources of error in typically reported epidemiological variables and usual tests used for diagnosis, and their impact on our understanding of COVID-19 spreading dynamics. We address the existence of different delays in the report of new cases, induced by the incubation time of the virus and testing-diagnosis time gaps, and other error sources related to the sensitivity/specificity of the tests used to diagnose COVID-19. Using a statistically-based algorithm, we perform a temporal reclassification of cases to avoid delay-induced errors, building up new epidemiological curves centered on the day when the contagion effectively occurred. We also statistically enhance the robustness behind the discharge/recovery clinical criteria in the absence of a direct test, which is typically the case of non-first world countries, where the limited testing capabilities are fully dedicated to the evaluation of new cases. Finally, we applied our methodology to assess the evolution of the pandemic in Chile through the Effective Reproduction Number $R_t$, identifying different moments in which data was misleading governmental actions. In doing so, we aim to raise public awareness of the need for proper data reporting and processing protocols for epidemiological modelling and predictions.
|
2307.10308
|
Benjamin Bolker
|
Darren Flynn-Primrose and Steven C. Walker and Michael Li and Benjamin
M. Bolker and David J. D. Earn and Jonathan Dushoff
|
Toward a comprehensive system for constructing compartmental epidemic
models
| null | null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by-sa/4.0/
|
Compartmental models are valuable tools for investigating infectious
diseases. Researchers building such models typically begin with a simple
structure where compartments correspond to individuals with different
epidemiological statuses, e.g., the classic SIR model which splits the
population into susceptible, infected, and recovered compartments. However, as
more information about a specific pathogen is discovered, or as a means to
investigate the effects of heterogeneities, it becomes useful to stratify
models further -- for example by age, geographic location, or pathogen strain.
The operation of constructing stratified compartmental models from a pair of
simpler models resembles the Cartesian product used in graph theory, but
several key differences complicate matters. In this article we give explicit
mathematical definitions for several so-called ``model products'' and provide
examples where each is suitable. We also provide examples of model
stratification where no existing model product will generate the desired
result.
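The Cartesian-product analogy described above can be illustrated by naively crossing compartment labels with strata. This is a hedged sketch only, not one of the paper's formal model products; the label scheme is a hypothetical convention.

```python
# Illustrative sketch (assumption: NOT the paper's formal definition):
# stratifying a compartmental model's state space via a Cartesian product
# of compartment labels and strata, e.g. splitting SIR by age group.
from itertools import product

def stratify(compartments, strata):
    # Each stratified compartment pairs an epidemiological status with a stratum.
    return [f"{c}_{s}" for c, s in product(compartments, strata)]
```

For SIR crossed with two age groups this yields six compartments (S_young, S_old, I_young, ...); the subtleties the paper addresses arise in how flows between the product compartments are defined, which a plain label product does not capture.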
|
[
{
"created": "Wed, 19 Jul 2023 00:18:51 GMT",
"version": "v1"
}
] |
2023-07-21
|
[
[
"Flynn-Primrose",
"Darren",
""
],
[
"Walker",
"Steven C.",
""
],
[
"Li",
"Michael",
""
],
[
"Bolker",
"Benjamin M.",
""
],
[
"Earn",
"David J. D.",
""
],
[
"Dushoff",
"Jonathan",
""
]
] |
Compartmental models are valuable tools for investigating infectious diseases. Researchers building such models typically begin with a simple structure where compartments correspond to individuals with different epidemiological statuses, e.g., the classic SIR model which splits the population into susceptible, infected, and recovered compartments. However, as more information about a specific pathogen is discovered, or as a means to investigate the effects of heterogeneities, it becomes useful to stratify models further -- for example by age, geographic location, or pathogen strain. The operation of constructing stratified compartmental models from a pair of simpler models resembles the Cartesian product used in graph theory, but several key differences complicate matters. In this article we give explicit mathematical definitions for several so-called ``model products'' and provide examples where each is suitable. We also provide examples of model stratification where no existing model product will generate the desired result.
|
0709.2071
|
Monika Schoenerklee
|
Monika Schoenerklee, Momtchil Peev
|
Parameter Estimation in Biokinetic Degradation Models in Wastewater
Treatment - A Novel Approach Relevant for Micro-pollutant Removal
|
20 pages, 4 figures
| null | null | null |
q-bio.QM
| null |
In this paper we address a general parameter estimation methodology for an
extended biokinetic degradation model [1] for poorly degradable
micropollutants. In particular we concentrate on parameter estimation of the
micropollutant degradation sub-model by specialised microorganisms. We focus
on the case where only substrate degradation data are available
and prove the structural identifiability of the model. Further we consider the
problem of practical identifiability and propose experimental and related
numerical methods for unambiguous parameter estimation based on multiple
substrate degradation curves with different initial concentrations. Finally by
means of simulated pseudo-experiments we have found convincing indications that
the proposed algorithm is stable and yields appropriate parameter estimates
even in unfavourable regimes.
|
[
{
"created": "Thu, 13 Sep 2007 13:32:32 GMT",
"version": "v1"
}
] |
2007-09-14
|
[
[
"Schoenerklee",
"Monika",
""
],
[
"Peev",
"Momtchil",
""
]
] |
In this paper we address a general parameter estimation methodology for an extended biokinetic degradation model [1] for poorly degradable micropollutants. In particular we concentrate on parameter estimation of the micropollutant degradation sub-model by specialised microorganisms. We focus on the case where only substrate degradation data are available and prove the structural identifiability of the model. Further we consider the problem of practical identifiability and propose experimental and related numerical methods for unambiguous parameter estimation based on multiple substrate degradation curves with different initial concentrations. Finally, by means of simulated pseudo-experiments we have found convincing indications that the proposed algorithm is stable and yields appropriate parameter estimates even in unfavourable regimes.
|
q-bio/0412021
|
Valmir Barbosa
|
A. H. L. Porto, V. C. Barbosa
|
Multiple sequence alignment based on set covers
| null |
Lecture Notes in Computer Science 3907 (2006), 127-137
|
10.1007/11732242_12
|
ES-666/04
|
q-bio.QM
| null |
We introduce a new heuristic for the multiple alignment of a set of
sequences. The heuristic is based on a set cover of the residue alphabet of the
sequences, and also on the determination of a significant set of blocks
comprising subsequences of the sequences to be aligned. These blocks are
obtained with the aid of a new data structure, called a suffix-set tree, which
is constructed from the input sequences with the guidance of the
residue-alphabet set cover and generalizes the well-known suffix tree of the
sequence set. We provide performance results on selected BAliBASE amino-acid
sequences and compare them with those yielded by some prominent approaches.
|
[
{
"created": "Fri, 10 Dec 2004 17:10:53 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Porto",
"A. H. L.",
""
],
[
"Barbosa",
"V. C.",
""
]
] |
We introduce a new heuristic for the multiple alignment of a set of sequences. The heuristic is based on a set cover of the residue alphabet of the sequences, and also on the determination of a significant set of blocks comprising subsequences of the sequences to be aligned. These blocks are obtained with the aid of a new data structure, called a suffix-set tree, which is constructed from the input sequences with the guidance of the residue-alphabet set cover and generalizes the well-known suffix tree of the sequence set. We provide performance results on selected BAliBASE amino-acid sequences and compare them with those yielded by some prominent approaches.
|
2208.09648
|
Tong Wang
|
Tong Wang
|
Research on Creative Thinking Mode Based on Category Theory
| null | null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
The research on the brain mechanism of creativity mainly has two aspects, one
is the creative thinking process, and the other is the brain structure and
functional connection characteristics of highly creative people. The billions
of nerve cells in the brain connect and interact with each other.
The human brain has a high degree of complexity at the biological level,
especially the rational thinking ability of the human brain. Starting from the
connection of molecules, cells, neural networks and the neural function
structure of the brain, it may be fundamentally impossible to study the
rational thinking mode of human beings. Human's rational thinking mode has a
high degree of freedom and transcendence, and such problems cannot be expected
to be studied by elaborating the realization of the nervous system. The
rational thinking of the brain is mainly based on the structured thinking mode,
and the structured thinking mode shows great scientific power. This paper
studies a theoretical model of innovative thinking based on category
theory, and analyzes the creation process of two scientific theories which are
landmarks in the history of science, and provides an intuitive, clear
interpretation model and rigorous mathematical argument for the creative
thinking. The structured way of thinking offers great insight and helps to create
new scientific theories.
|
[
{
"created": "Sat, 20 Aug 2022 09:28:42 GMT",
"version": "v1"
}
] |
2022-08-23
|
[
[
"Wang",
"Tong",
""
]
] |
The research on the brain mechanism of creativity mainly has two aspects, one is the creative thinking process, and the other is the brain structure and functional connection characteristics of highly creative people. The billions of nerve cells in the brain connect and interact with each other. The human brain has a high degree of complexity at the biological level, especially the rational thinking ability of the human brain. Starting from the connection of molecules, cells, neural networks and the neural function structure of the brain, it may be fundamentally impossible to study the rational thinking mode of human beings. Human's rational thinking mode has a high degree of freedom and transcendence, and such problems cannot be expected to be studied by elaborating the realization of the nervous system. The rational thinking of the brain is mainly based on the structured thinking mode, and the structured thinking mode shows great scientific power. This paper studies a theoretical model of innovative thinking based on category theory, and analyzes the creation process of two scientific theories which are landmarks in the history of science, and provides an intuitive, clear interpretation model and rigorous mathematical argument for the creative thinking. The structured way of thinking offers great insight and helps to create new scientific theories.
|
1606.06842
|
Zhou Xu
|
Sarah Eug\`ene, Thibault Bourgeron and Zhou Xu
|
Effects of initial telomere length distribution on senescence onset and
heterogeneity
| null | null | null | null |
q-bio.GN q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Replicative senescence, induced by telomere shortening, exhibits considerable
asynchrony and heterogeneity, the origins of which remain unclear. Here, we
formally study how telomere shortening mechanisms impact on senescence kinetics
and define two regimes of senescence, depending on the initial telomere length
variance. We provide analytical solutions to the model, highlighting a
non-linear relationship between senescence onset and initial telomere length
distribution. This study reveals the complexity of the collective behavior of
telomeres as they shorten, leading to senescence heterogeneity.
|
[
{
"created": "Wed, 22 Jun 2016 08:04:52 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Nov 2016 10:10:14 GMT",
"version": "v2"
}
] |
2016-11-08
|
[
[
"Eugène",
"Sarah",
""
],
[
"Bourgeron",
"Thibault",
""
],
[
"Xu",
"Zhou",
""
]
] |
Replicative senescence, induced by telomere shortening, exhibits considerable asynchrony and heterogeneity, the origins of which remain unclear. Here, we formally study how telomere shortening mechanisms impact on senescence kinetics and define two regimes of senescence, depending on the initial telomere length variance. We provide analytical solutions to the model, highlighting a non-linear relationship between senescence onset and initial telomere length distribution. This study reveals the complexity of the collective behavior of telomeres as they shorten, leading to senescence heterogeneity.
|
1902.09365
|
Solenn Stoeckel
|
Solenn Stoeckel, Barbara Porro and Sophie Arnaud-Haond
|
The discernible and hidden effects of clonality on the genotypic and
genetic states of populations: improving our estimation of clonal rates
|
45 pages, 4 figures, 1 box and no table; 8 supplementary figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Partial clonality is widespread across the tree of life, but most population
genetics models are designed for exclusively clonal or sexual organisms. This
gap hampers our understanding of the influence of clonality on evolutionary
trajectories and the interpretation of population genetics data. We performed
forward simulations of diploid populations at increasing rates of clonality
(c), analysed their relationships with genotypic (clonal richness, R, and
distribution of clonal sizes, Pareto \beta) and genetic (FIS and linkage
disequilibrium) indices, and tested predictions of c from population genetics
data through supervised machine learning. Two complementary behaviours emerged
from the probability distributions of genotypic and genetic indices with
increasing c. While the impact of c on R and Pareto \beta was easily described
by simple mathematical equations, its effects on genetic indices were
noticeable only at the highest levels (c>0.95). Consequently, genotypic indices
allowed reliable estimates of c, while genetic descriptors led to poorer
performances when c<0.95. These results provide clear baseline expectations for
genotypic and genetic diversity and dynamics under partial clonality.
Worryingly, however, the use of realistic sample sizes to acquire empirical
data systematically led to gross underestimates (often of one to two orders of
magnitude) of c, suggesting that many interpretations hitherto proposed in the
literature, mostly based on genotypic richness, should be reappraised. We
propose future avenues to derive realistic confidence intervals for c and show
that, although still approximate, a supervised learning method would greatly
improve the estimation of c from population genetics data.
|
[
{
"created": "Mon, 25 Feb 2019 15:29:57 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Mar 2019 22:26:36 GMT",
"version": "v2"
},
{
"created": "Mon, 1 Jul 2019 19:13:31 GMT",
"version": "v3"
},
{
"created": "Sat, 3 Aug 2019 09:43:26 GMT",
"version": "v4"
}
] |
2019-08-06
|
[
[
"Stoeckel",
"Solenn",
""
],
[
"Porro",
"Barbara",
""
],
[
"Arnaud-Haond",
"Sophie",
""
]
] |
Partial clonality is widespread across the tree of life, but most population genetics models are designed for exclusively clonal or sexual organisms. This gap hampers our understanding of the influence of clonality on evolutionary trajectories and the interpretation of population genetics data. We performed forward simulations of diploid populations at increasing rates of clonality (c), analysed their relationships with genotypic (clonal richness, R, and distribution of clonal sizes, Pareto \beta) and genetic (FIS and linkage disequilibrium) indices, and tested predictions of c from population genetics data through supervised machine learning. Two complementary behaviours emerged from the probability distributions of genotypic and genetic indices with increasing c. While the impact of c on R and Pareto \beta was easily described by simple mathematical equations, its effects on genetic indices were noticeable only at the highest levels (c>0.95). Consequently, genotypic indices allowed reliable estimates of c, while genetic descriptors led to poorer performances when c<0.95. These results provide clear baseline expectations for genotypic and genetic diversity and dynamics under partial clonality. Worryingly, however, the use of realistic sample sizes to acquire empirical data systematically led to gross underestimates (often of one to two orders of magnitude) of c, suggesting that many interpretations hitherto proposed in the literature, mostly based on genotypic richness, should be reappraised. We propose future avenues to derive realistic confidence intervals for c and show that, although still approximate, a supervised learning method would greatly improve the estimation of c from population genetics data.
|
1904.02979
|
Fiona Macfarlane
|
Fiona R Macfarlane, Mark AJ Chaplain, Tommaso Lorenzi
|
A stochastic individual-based model to explore the role of spatial
interactions and antigen recognition in the immune response against solid
tumours
| null | null |
10.1016/j.jtbi.2019.07.019
| null |
q-bio.CB math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Spatial interactions between cancer and immune cells, as well as the
recognition of tumour antigens by cells of the immune system, play a key role
in the immune response against solid tumours. The existing mathematical models
generally focus only on one of these key aspects. We present here a spatial
stochastic individual-based model that explicitly captures antigen expression
and recognition. In our model, each cancer cell is characterised by an antigen
profile which can change over time due to either epimutations or mutations. The
immune response against the cancer cells is initiated by the dendritic cells
that recognise the tumour antigens and present them to the cytotoxic T cells.
Consequently, T cells become activated against the tumour cells expressing such
antigens. Moreover, the differences in movement between inactive and active
immune cells are explicitly taken into account by the model. Computational
simulations of our model clarify the conditions for the emergence of tumour
clearance, dormancy or escape, and allow us to assess the impact of antigenic
heterogeneity of cancer cells on the efficacy of immune action. Ultimately, our
results highlight the complex interplay between spatial interactions and
adaptive mechanisms that underpins the immune response against solid tumours,
and suggest how this may be exploited to further develop cancer
immunotherapies.
|
[
{
"created": "Fri, 5 Apr 2019 10:14:37 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Jul 2019 12:00:23 GMT",
"version": "v2"
}
] |
2019-08-05
|
[
[
"Macfarlane",
"Fiona R",
""
],
[
"Chaplain",
"Mark AJ",
""
],
[
"Lorenzi",
"Tommaso",
""
]
] |
Spatial interactions between cancer and immune cells, as well as the recognition of tumour antigens by cells of the immune system, play a key role in the immune response against solid tumours. The existing mathematical models generally focus only on one of these key aspects. We present here a spatial stochastic individual-based model that explicitly captures antigen expression and recognition. In our model, each cancer cell is characterised by an antigen profile which can change over time due to either epimutations or mutations. The immune response against the cancer cells is initiated by the dendritic cells that recognise the tumour antigens and present them to the cytotoxic T cells. Consequently, T cells become activated against the tumour cells expressing such antigens. Moreover, the differences in movement between inactive and active immune cells are explicitly taken into account by the model. Computational simulations of our model clarify the conditions for the emergence of tumour clearance, dormancy or escape, and allow us to assess the impact of antigenic heterogeneity of cancer cells on the efficacy of immune action. Ultimately, our results highlight the complex interplay between spatial interactions and adaptive mechanisms that underpins the immune response against solid tumours, and suggest how this may be exploited to further develop cancer immunotherapies.
|
1811.06809
|
Hendrik Richter
|
Hendrik Richter
|
Fixation properties of multiple cooperator configurations on regular
graphs
| null |
Theory in Biosciences (2019)
|
10.1007/s12064-019-00293-3
| null |
q-bio.PE cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Whether or not cooperation is favored in evolutionary games on graphs depends
on the population structure and spatial properties of the interaction network.
Population structures can be expressed as configurations. Such configurations
extend scenarios with a single cooperator among defectors to any number of
cooperators and any arrangement of cooperators and defectors. Thus, as a single
cooperator can be interpreted as a lone mutant, the discussion about fixation
properties based on configurations also applies to multiple mutants. For
interaction networks modeled as regular graphs and for weak selection, the
emergence of cooperation can be assessed by structure coefficients, which are
specific for a configuration and a graph. We analyze these structure
coefficients and particularly show that under certain conditions the
coefficients strongly correlate to the average shortest path length between
cooperators on the evolutionary graph. Thus, for multiple cooperators, fixation
properties on regular evolutionary graphs can be linked to cooperator path
length.
|
[
{
"created": "Fri, 16 Nov 2018 14:13:51 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Mar 2019 14:13:37 GMT",
"version": "v2"
}
] |
2019-03-21
|
[
[
"Richter",
"Hendrik",
""
]
] |
Whether or not cooperation is favored in evolutionary games on graphs depends on the population structure and spatial properties of the interaction network. Population structures can be expressed as configurations. Such configurations extend scenarios with a single cooperator among defectors to any number of cooperators and any arrangement of cooperators and defectors. Thus, as a single cooperator can be interpreted as a lone mutant, the discussion about fixation properties based on configurations also applies to multiple mutants. For interaction networks modeled as regular graphs and for weak selection, the emergence of cooperation can be assessed by structure coefficients, which are specific for a configuration and a graph. We analyze these structure coefficients and particularly show that under certain conditions the coefficients strongly correlate to the average shortest path length between cooperators on the evolutionary graph. Thus, for multiple cooperators, fixation properties on regular evolutionary graphs can be linked to cooperator path length.
|
1706.03283
|
Sachin Talathi
|
Sachin S. Talathi
|
Deep Recurrent Neural Networks for seizure detection and early seizure
detection systems
| null | null | null | null |
q-bio.QM cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Epilepsy is a common neurological disease, affecting about 0.6-0.8% of the world
population. Epileptic patients suffer from chronic unprovoked seizures, which
can result in a broad spectrum of debilitating medical and social consequences.
Since seizures, in general, occur infrequently and are unpredictable, automated
seizure detection systems are recommended to screen for seizures during
long-term electroencephalogram (EEG) recordings. In addition, systems for early
seizure detection can lead to the development of new types of intervention
systems that are designed to control or shorten the duration of seizure events.
In this article, we investigate the utility of recurrent neural networks (RNNs)
in designing seizure detection and early seizure detection systems. We propose
a deep learning framework via the use of Gated Recurrent Unit (GRU) RNNs for
seizure detection. We use publicly available data in order to evaluate our
method and demonstrate very promising evaluation results with overall accuracy
close to 100 %. We also systematically investigate the application of our
method for early seizure warning systems. Our method can detect about 98% of
seizure events within the first 5 seconds of the overall epileptic seizure
duration.
|
[
{
"created": "Sat, 10 Jun 2017 21:29:09 GMT",
"version": "v1"
}
] |
2017-06-13
|
[
[
"Talathi",
"Sachin S.",
""
]
] |
Epilepsy is a common neurological disease, affecting about 0.6-0.8% of the world population. Epileptic patients suffer from chronic unprovoked seizures, which can result in a broad spectrum of debilitating medical and social consequences. Since seizures, in general, occur infrequently and are unpredictable, automated seizure detection systems are recommended to screen for seizures during long-term electroencephalogram (EEG) recordings. In addition, systems for early seizure detection can lead to the development of new types of intervention systems that are designed to control or shorten the duration of seizure events. In this article, we investigate the utility of recurrent neural networks (RNNs) in designing seizure detection and early seizure detection systems. We propose a deep learning framework via the use of Gated Recurrent Unit (GRU) RNNs for seizure detection. We use publicly available data in order to evaluate our method and demonstrate very promising evaluation results with overall accuracy close to 100 %. We also systematically investigate the application of our method for early seizure warning systems. Our method can detect about 98% of seizure events within the first 5 seconds of the overall epileptic seizure duration.
|
2301.07194
|
Jocelyn Faubert PhD
|
Jean-Marie Hanssens, Bernard Bourdoncle, Jacques Gresset, Jocelyn
Faubert, Pierre Simonet
|
Distortion in ophthalmic optics: A review of the principal concepts and
models
|
38 pages, 11 figures
| null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Although all members of the ophthalmic community agree that distortion is an
aberration affecting the geometry of an image produced by the periphery of an
ophthalmic lens, there are several approaches for analyzing and quantifying
this aberration. Various concepts have been introduced: ordinary distortion,
stationary distortion and central static distortion are associated with a fixed
eye behind the ophthalmic lens, whereas rotatory distortion, peripheral
distortion, lateral static distortion, and dynamic distortion require a
secondary position of gaze behind the lens. Furthermore, concept definitions
vary from one author to another. The goal of this paper is to review the
various concepts, analyze their effects on lens design and determine their
ability to predict the deformation of an image as perceived by the lens wearer.
These entities can be classified within 3 categories: the concepts associated
with an ocular rotation, the concepts resulting from an optical approach, and
the concepts using a perceptual approach. Among the various concepts reviewed,
it appears that the Le Grand-Fry approach for analyzing and displaying
distortion is preferable to others and allows modeling of the different
possible types of distortions affecting the periphery of an ophthalmic lens.
|
[
{
"created": "Tue, 17 Jan 2023 21:15:51 GMT",
"version": "v1"
}
] |
2023-01-19
|
[
[
"Hanssens",
"Jean-Marie",
""
],
[
"Bourdoncle",
"Bernard",
""
],
[
"Gresset",
"Jacques",
""
],
[
"Faubert",
"Jocelyn",
""
],
[
"Simonet",
"Pierre",
""
]
] |
Although all members of the ophthalmic community agree that distortion is an aberration affecting the geometry of an image produced by the periphery of an ophthalmic lens, there are several approaches for analyzing and quantifying this aberration. Various concepts have been introduced: ordinary distortion, stationary distortion and central static distortion are associated with a fixed eye behind the ophthalmic lens, whereas rotatory distortion, peripheral distortion, lateral static distortion, and dynamic distortion require a secondary position of gaze behind the lens. Furthermore, concept definitions vary from one author to another. The goal of this paper is to review the various concepts, analyze their effects on lens design and determine their ability to predict the deformation of an image as perceived by the lens wearer. These entities can be classified within 3 categories: the concepts associated with an ocular rotation, the concepts resulting from an optical approach, and the concepts using a perceptual approach. Among the various concepts reviewed, it appears that the Le Grand-Fry approach for analyzing and displaying distortion is preferable to others and allows modeling of the different possible types of distortions affecting the periphery of an ophthalmic lens.
|
2204.05103
|
Juan Vazquez-Rodriguez
|
Juan Vazquez-Rodriguez (M-PSI), Gr\'egoire Lefebvre, Julien Cumin,
James L. Crowley (M-PSI)
|
Transformer-Based Self-Supervised Learning for Emotion Recognition
| null |
26th International Conference on Pattern Recognition (ICPR 2022),
Aug 2022, Montreal, Canada
| null | null |
q-bio.NC cs.AI cs.LG eess.SP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order to exploit representations of time-series signals, such as
physiological signals, it is essential that these representations capture
relevant information from the whole signal. In this work, we propose to use a
Transformer-based model to process electrocardiograms (ECG) for emotion
recognition. Attention mechanisms of the Transformer can be used to build
contextualized representations for a signal, giving more importance to relevant
parts. These representations may then be processed with a fully-connected
network to predict emotions. To overcome the relatively small size of datasets
with emotional labels, we employ self-supervised learning. We gathered several
ECG datasets with no labels of emotion to pre-train our model, which we then
fine-tuned for emotion recognition on the AMIGOS dataset. We show that our
approach reaches state-of-the-art performances for emotion recognition using
ECG signals on AMIGOS. More generally, our experiments show that transformers
and pre-training are promising strategies for emotion recognition with
physiological signals.
|
[
{
"created": "Fri, 8 Apr 2022 07:14:55 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Jun 2022 09:13:10 GMT",
"version": "v2"
}
] |
2022-06-06
|
[
[
"Vazquez-Rodriguez",
"Juan",
"",
"M-PSI"
],
[
"Lefebvre",
"Grégoire",
"",
"M-PSI"
],
[
"Cumin",
"Julien",
"",
"M-PSI"
],
[
"Crowley",
"James L.",
"",
"M-PSI"
]
] |
In order to exploit representations of time-series signals, such as physiological signals, it is essential that these representations capture relevant information from the whole signal. In this work, we propose to use a Transformer-based model to process electrocardiograms (ECG) for emotion recognition. Attention mechanisms of the Transformer can be used to build contextualized representations for a signal, giving more importance to relevant parts. These representations may then be processed with a fully-connected network to predict emotions. To overcome the relatively small size of datasets with emotional labels, we employ self-supervised learning. We gathered several ECG datasets with no labels of emotion to pre-train our model, which we then fine-tuned for emotion recognition on the AMIGOS dataset. We show that our approach reaches state-of-the-art performances for emotion recognition using ECG signals on AMIGOS. More generally, our experiments show that transformers and pre-training are promising strategies for emotion recognition with physiological signals.
|
1104.4568
|
Raul Rabadan
|
Carlos Xavier Hernandez, Joseph Chan, Hossein Khiabanian, Raul Rabadan
|
Understanding the Origins of a Pandemic Virus
|
13 pages, 2 figures
| null | null |
ac:129992
|
q-bio.PE q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Understanding the origin of infectious diseases provides scientifically based
rationales for implementing public health measures that may help to avoid or
mitigate future epidemics. The recent ancestors of a pandemic virus provide
invaluable information about the set of minimal genomic alterations that
transformed a zoonotic agent into a full human pandemic. Since the first
confirmed cases of the H1N1 pandemic virus in the spring of 2009, several
hypotheses about the strain's origins have been proposed. However, how, where,
and when it first infected humans is still far from clear. The only way to
piece together this epidemiological puzzle relies on the collective effort of
the international scientific community to increase genomic sequencing of
influenza isolates, especially ones collected in the months prior to the origin
of the pandemic.
|
[
{
"created": "Sat, 23 Apr 2011 15:58:43 GMT",
"version": "v1"
}
] |
2011-04-26
|
[
[
"Hernandez",
"Carlos Xavier",
""
],
[
"Chan",
"Joseph",
""
],
[
"Khiabanian",
"Hossein",
""
],
[
"Rabadan",
"Raul",
""
]
] |
Understanding the origin of infectious diseases provides scientifically based rationales for implementing public health measures that may help to avoid or mitigate future epidemics. The recent ancestors of a pandemic virus provide invaluable information about the set of minimal genomic alterations that transformed a zoonotic agent into a full human pandemic. Since the first confirmed cases of the H1N1 pandemic virus in the spring of 2009, several hypotheses about the strain's origins have been proposed. However, how, where, and when it first infected humans is still far from clear. The only way to piece together this epidemiological puzzle relies on the collective effort of the international scientific community to increase genomic sequencing of influenza isolates, especially ones collected in the months prior to the origin of the pandemic.
|
2312.15795
|
Assaf Marron
|
Assaf Marron, Smadar Szekely, Irun R. Cohen, and David Harel
|
Biparental Reproduction may Enhance Species Sustainability by Conserving
Shared Parental Traits more Faithfully than Monoparental Reproduction
|
This version has an analytical angle. See V1 for simulations of a
slightly different model
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recognized effects of sexual reproduction and other forms of biparental
reproduction in species sustainment and evolution include increasing
diversity, accelerating adaptation, constraining the accumulation of
deleterious mutations, and homogenizing the species genotype.
Nevertheless, many questions remain open with regard to the evolution of
biparental reproduction. In this paper we contribute an initial exploration of
finer details of the homogenization effect that we believe deserve focused
analysis and discussion. Specifically, rudimentary mathematical analyses
suggest the perspective that biparental reproduction enhances the retention in
the offspring of shared, i.e., common, properties of the parent generation, as
compared with monoparental, clonal reproduction. We argue that this is an
intrinsic effect of merging the encodings of parents' traits, regardless of
physical, chemical, biological and social aspects of the reproduction process
and of the trait at hand. As the survival of offspring often depends on their
ability to step in and actively fill the voids left by failing, dying or absent
members of the species in interaction networks that sustain the species,
cross-generation retention of common species properties helps sustain the
species. This side-effect of sexual reproduction may have also contributed to
the very evolution and pervasiveness of sexual reproduction in nature.
|
[
{
"created": "Mon, 25 Dec 2023 19:11:43 GMT",
"version": "v1"
},
{
"created": "Fri, 3 May 2024 17:44:20 GMT",
"version": "v2"
}
] |
2024-05-06
|
[
[
"Marron",
"Assaf",
""
],
[
"Szekely",
"Smadar",
""
],
[
"Cohen",
"Irun R.",
""
],
[
"Harel",
"David",
""
]
] |
Recognized effects of sexual reproduction and other forms of biparental reproduction in species sustainment and evolution include increasing diversity, accelerating adaptation, constraining the accumulation of deleterious mutations, and homogenizing the species genotype. Nevertheless, many questions remain open with regard to the evolution of biparental reproduction. In this paper we contribute an initial exploration of finer details of the homogenization effect that we believe deserve focused analysis and discussion. Specifically, rudimentary mathematical analyses suggest the perspective that biparental reproduction enhances the retention in the offspring of shared, i.e., common, properties of the parent generation, as compared with monoparental, clonal reproduction. We argue that this is an intrinsic effect of merging the encodings of parents' traits, regardless of physical, chemical, biological and social aspects of the reproduction process and of the trait at hand. As the survival of offspring often depends on their ability to step in and actively fill the voids left by failing, dying or absent members of the species in interaction networks that sustain the species, cross-generation retention of common species properties helps sustain the species. This side-effect of sexual reproduction may have also contributed to the very evolution and pervasiveness of sexual reproduction in nature.
|
2307.06314
|
Matthew Asker
|
Matthew Asker, Llu\'is Hern\'andez-Navarro, Alastair M. Rucklidge,
Mauro Mobilia
|
Coexistence of Competing Microbial Strains under Twofold Environmental
Variability and Demographic Fluctuations
|
15 pages + appendix of 10 pages (Supplementary Material); Figshare
resources: https://doi.org/10.6084/m9.figshare.23553066
|
New J. Phys 25, 123010 (2023)
|
10.1088/1367-2630/ad0d36
| null |
q-bio.PE cond-mat.stat-mech nlin.AO physics.bio-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Microbial populations generally evolve in volatile environments, under
conditions fluctuating between harsh and mild, e.g. as the result of sudden
changes in toxin concentration or nutrient abundance. Environmental variability
thus shapes the long-time population dynamics, notably by influencing the
ability of different strains of microorganisms to coexist. Inspired by the
evolution of antimicrobial resistance, we study the dynamics of a community
consisting of two competing strains subject to twofold environmental
variability. The level of toxin varies in time, favouring the growth of one
strain under low drug concentration and the other strain when the toxin level
is high. We also model time-changing resource abundance by a randomly switching
carrying capacity that drives the fluctuating size of the community. While one
strain dominates in a static environment, we show that species coexistence is
possible in the presence of environmental variability. By computational and
analytical means, we determine the environmental conditions under which
long-lived coexistence is possible and when it is almost certain. Notably, we
study the circumstances under which environmental and demographic fluctuations
promote or hinder the strains' coexistence. We also determine how the make-up
of the coexistence phase and the average abundance of each strain depend on the
environmental variability.
|
[
{
"created": "Wed, 12 Jul 2023 17:22:55 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Jul 2023 16:44:46 GMT",
"version": "v2"
},
{
"created": "Thu, 30 Nov 2023 09:46:05 GMT",
"version": "v3"
}
] |
2023-12-07
|
[
[
"Asker",
"Matthew",
""
],
[
"Hernández-Navarro",
"Lluís",
""
],
[
"Rucklidge",
"Alastair M.",
""
],
[
"Mobilia",
"Mauro",
""
]
] |
Microbial populations generally evolve in volatile environments, under conditions fluctuating between harsh and mild, e.g. as the result of sudden changes in toxin concentration or nutrient abundance. Environmental variability thus shapes the long-time population dynamics, notably by influencing the ability of different strains of microorganisms to coexist. Inspired by the evolution of antimicrobial resistance, we study the dynamics of a community consisting of two competing strains subject to twofold environmental variability. The level of toxin varies in time, favouring the growth of one strain under low drug concentration and the other strain when the toxin level is high. We also model time-changing resource abundance by a randomly switching carrying capacity that drives the fluctuating size of the community. While one strain dominates in a static environment, we show that species coexistence is possible in the presence of environmental variability. By computational and analytical means, we determine the environmental conditions under which long-lived coexistence is possible and when it is almost certain. Notably, we study the circumstances under which environmental and demographic fluctuations promote or hinder the strains' coexistence. We also determine how the make-up of the coexistence phase and the average abundance of each strain depend on the environmental variability.
|
2007.07029
|
Alexander Becker
|
Vladimir Golkov, Alexander Becker, Daniel T. Plop, Daniel
\v{C}uturilo, Neda Davoudi, Jeffrey Mendenhall, Rocco Moretti, Jens Meiler,
Daniel Cremers
|
Deep Learning for Virtual Screening: Five Reasons to Use ROC Cost
Functions
|
10 pages
| null | null | null |
q-bio.BM cs.LG q-bio.QM stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Computer-aided drug discovery is an essential component of modern drug
development. Therein, deep learning has become an important tool for rapid
screening of billions of molecules in silico for potential hits containing
desired chemical features. Despite its importance, substantial challenges
persist in training these models, such as severe class imbalance, high decision
thresholds, and lack of ground truth labels in some datasets. In this work we
argue in favor of directly optimizing the receiver operating characteristic
(ROC) in such cases, due to its robustness to class imbalance, its ability to
compromise over different decision thresholds, certain freedom to influence the
relative weights in this compromise, fidelity to typical benchmarking measures,
and equivalence to positive/unlabeled learning. We also propose new training
schemes (coherent mini-batch arrangement, and usage of out-of-batch samples)
for cost functions based on the ROC, as well as a cost function based on the
logAUC metric that facilitates early enrichment (i.e. improves performance at
high decision thresholds, as often desired when synthesizing predicted hit
compounds). We demonstrate that these approaches outperform standard deep
learning approaches on a series of PubChem high-throughput screening datasets
that represent realistic and diverse drug discovery campaigns on major drug
target families.
|
[
{
"created": "Thu, 25 Jun 2020 08:46:37 GMT",
"version": "v1"
}
] |
2020-07-15
|
[
[
"Golkov",
"Vladimir",
""
],
[
"Becker",
"Alexander",
""
],
[
"Plop",
"Daniel T.",
""
],
[
"Čuturilo",
"Daniel",
""
],
[
"Davoudi",
"Neda",
""
],
[
"Mendenhall",
"Jeffrey",
""
],
[
"Moretti",
"Rocco",
""
],
[
"Meiler",
"Jens",
""
],
[
"Cremers",
"Daniel",
""
]
] |
Computer-aided drug discovery is an essential component of modern drug development. Therein, deep learning has become an important tool for rapid screening of billions of molecules in silico for potential hits containing desired chemical features. Despite its importance, substantial challenges persist in training these models, such as severe class imbalance, high decision thresholds, and lack of ground truth labels in some datasets. In this work we argue in favor of directly optimizing the receiver operating characteristic (ROC) in such cases, due to its robustness to class imbalance, its ability to compromise over different decision thresholds, certain freedom to influence the relative weights in this compromise, fidelity to typical benchmarking measures, and equivalence to positive/unlabeled learning. We also propose new training schemes (coherent mini-batch arrangement, and usage of out-of-batch samples) for cost functions based on the ROC, as well as a cost function based on the logAUC metric that facilitates early enrichment (i.e. improves performance at high decision thresholds, as often desired when synthesizing predicted hit compounds). We demonstrate that these approaches outperform standard deep learning approaches on a series of PubChem high-throughput screening datasets that represent realistic and diverse drug discovery campaigns on major drug target families.
|
2006.14178
|
Jonathan Kadmon
|
Jonathan Kadmon, Jonathan Timcheck, and Surya Ganguli
|
Predictive coding in balanced neural networks with noise, chaos and
delays
| null | null | null | null |
q-bio.NC cond-mat.dis-nn stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Biological neural networks face a formidable task: performing reliable
computations in the face of intrinsic stochasticity in individual neurons,
imprecisely specified synaptic connectivity, and nonnegligible delays in
synaptic transmission. A common approach to combatting such biological
heterogeneity involves averaging over large redundant networks of $N$ neurons
resulting in coding errors that decrease classically as $1/\sqrt{N}$. Recent
work demonstrated a novel mechanism whereby recurrent spiking networks could
efficiently encode dynamic stimuli, achieving a superclassical scaling in which
coding errors decrease as $1/N$. This specific mechanism involved two key
ideas: predictive coding, and a tight balance, or cancellation between strong
feedforward inputs and strong recurrent feedback. However, the theoretical
principles governing the efficacy of balanced predictive coding and its
robustness to noise, synaptic weight heterogeneity and communication delays
remain poorly understood. To discover such principles, we introduce an
analytically tractable model of balanced predictive coding, in which the degree
of balance and the degree of weight disorder can be dissociated unlike in
previous balanced network models, and we develop a mean field theory of coding
accuracy. Overall, our work provides and solves a general theoretical framework
for dissecting the differential contributions of neural noise, synaptic disorder,
chaos, synaptic delays, and balance to the fidelity of predictive neural codes,
reveals the fundamental role that balance plays in achieving superclassical
scaling, and unifies previously disparate models in theoretical neuroscience.
|
[
{
"created": "Thu, 25 Jun 2020 05:03:27 GMT",
"version": "v1"
}
] |
2020-06-26
|
[
[
"Kadmon",
"Jonathan",
""
],
[
"Timcheck",
"Jonathan",
""
],
[
"Ganguli",
"Surya",
""
]
] |
Biological neural networks face a formidable task: performing reliable computations in the face of intrinsic stochasticity in individual neurons, imprecisely specified synaptic connectivity, and nonnegligible delays in synaptic transmission. A common approach to combatting such biological heterogeneity involves averaging over large redundant networks of $N$ neurons resulting in coding errors that decrease classically as $1/\sqrt{N}$. Recent work demonstrated a novel mechanism whereby recurrent spiking networks could efficiently encode dynamic stimuli, achieving a superclassical scaling in which coding errors decrease as $1/N$. This specific mechanism involved two key ideas: predictive coding, and a tight balance, or cancellation between strong feedforward inputs and strong recurrent feedback. However, the theoretical principles governing the efficacy of balanced predictive coding and its robustness to noise, synaptic weight heterogeneity and communication delays remain poorly understood. To discover such principles, we introduce an analytically tractable model of balanced predictive coding, in which the degree of balance and the degree of weight disorder can be dissociated unlike in previous balanced network models, and we develop a mean field theory of coding accuracy. Overall, our work provides and solves a general theoretical framework for dissecting the differential contributions of neural noise, synaptic disorder, chaos, synaptic delays, and balance to the fidelity of predictive neural codes, reveals the fundamental role that balance plays in achieving superclassical scaling, and unifies previously disparate models in theoretical neuroscience.
|
2207.09903
|
Elena Losero
|
Elena Losero, Somanath Jagannath, Maurizio Pezzoli, Valentin Goblot,
Hossein Babashah, Hilal A. Lashuel, Christophe Galland, and Niels Quack
|
Neuronal growth on high-aspect-ratio diamond nanopillar arrays for
biosensing applications
|
26 pages. 10 figures
|
Scientific Reports 13: 5909 (2023)
|
10.1038/s41598-023-32235-x
| null |
q-bio.NC physics.app-ph quant-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Monitoring neuronal activity with simultaneously high spatial and temporal
resolution in living cell cultures is crucial to advance understanding of the
development and functioning of our brain, and to gain further insights in the
origin of brain disorders. While it has been demonstrated that the quantum
sensing capabilities of nitrogen-vacancy (NV) centers in diamond allow real
time detection of action potentials from large neurons in marine invertebrates,
quantum monitoring of mammalian neurons (presenting much smaller dimensions and
thus producing much lower signal and requiring higher spatial resolution) has
hitherto remained elusive. In this context, diamond nanostructuring can offer
the opportunity to boost the diamond platform sensitivity to the required
level. However, a comprehensive analysis of the impact of a nanostructured
diamond surface on the neuronal viability and growth was lacking. Here, we
pattern a single crystal diamond surface with large-scale nanopillar arrays and
we successfully demonstrate growth of a network of living and functional
primary mouse hippocampal neurons on it. Our study on geometrical parameters
reveals preferential growth along the nanopillar grid axes with excellent
physical contact between cell membrane and nanopillar apex. Our results suggest
that neuron growth can be tailored on diamond nanopillars to realize a
nanophotonic quantum sensing platform for wide-field and label-free neuronal
activity recording with sub-cellular resolution.
|
[
{
"created": "Mon, 11 Jul 2022 07:20:28 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Nov 2022 08:26:55 GMT",
"version": "v2"
}
] |
2023-07-18
|
[
[
"Losero",
"Elena",
""
],
[
"Jagannath",
"Somanath",
""
],
[
"Pezzoli",
"Maurizio",
""
],
[
"Goblot",
"Valentin",
""
],
[
"Babashah",
"Hossein",
""
],
[
"Lashuel",
"Hilal A.",
""
],
[
"Galland",
"Christophe",
""
],
[
"Quack",
"Niels",
""
]
] |
Monitoring neuronal activity with simultaneously high spatial and temporal resolution in living cell cultures is crucial to advance understanding of the development and functioning of our brain, and to gain further insights in the origin of brain disorders. While it has been demonstrated that the quantum sensing capabilities of nitrogen-vacancy (NV) centers in diamond allow real time detection of action potentials from large neurons in marine invertebrates, quantum monitoring of mammalian neurons (presenting much smaller dimensions and thus producing much lower signal and requiring higher spatial resolution) has hitherto remained elusive. In this context, diamond nanostructuring can offer the opportunity to boost the diamond platform sensitivity to the required level. However, a comprehensive analysis of the impact of a nanostructured diamond surface on the neuronal viability and growth was lacking. Here, we pattern a single crystal diamond surface with large-scale nanopillar arrays and we successfully demonstrate growth of a network of living and functional primary mouse hippocampal neurons on it. Our study on geometrical parameters reveals preferential growth along the nanopillar grid axes with excellent physical contact between cell membrane and nanopillar apex. Our results suggest that neuron growth can be tailored on diamond nanopillars to realize a nanophotonic quantum sensing platform for wide-field and label-free neuronal activity recording with sub-cellular resolution.
|
1309.7798
|
Guy-Bart Stan PhD
|
Rhys Algar and Tom Ellis and Guy-Bart Stan
|
Modelling the burden caused by gene expression: an in silico
investigation into the interactions between synthetic gene circuits and their
chassis cell
| null | null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper we motivate and develop a model of gene expression for the
purpose of studying the interaction between synthetic gene circuits and the
chassis cell within which they are inserted. This model focuses on the
translational aspect of gene expression as this is where the literature
suggests the crucial interaction between gene expression and shared resources
lies.
|
[
{
"created": "Mon, 30 Sep 2013 11:22:42 GMT",
"version": "v1"
}
] |
2013-10-01
|
[
[
"Algar",
"Rhys",
""
],
[
"Ellis",
"Tom",
""
],
[
"Stan",
"Guy-Bart",
""
]
] |
In this paper we motivate and develop a model of gene expression for the purpose of studying the interaction between synthetic gene circuits and the chassis cell within which they are inserted. This model focuses on the translational aspect of gene expression as this is where the literature suggests the crucial interaction between gene expression and shared resources lies.
|
1405.4327
|
Christopher Lee
|
Christopher Lee, Marc Harper, Dashiell Fryer
|
The Art of War: Beyond Memory-one Strategies in Population Games
|
16 pages, 4 figures
|
PLoS One. 2015; 10(3): e0120625
|
10.1371/journal.pone.0120625
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We define a new strategy for population games based on techniques from
machine learning and statistical inference that is essentially uninvadable and
can successfully invade (significantly more likely than a neutral mutant)
essentially all known memory-one strategies for the prisoner's dilemma and
other population games, including ALLC (always cooperate), ALLD (always
defect), tit-for-tat (TFT), win-stay-lose-shift (WSLS), and zero determinant
(ZD) strategies, including extortionate and generous strategies. We will refer
to a player using this strategy as an "information player" and the specific
implementation as $IP_0$. Such players use the history of play to identify
opponent's strategies and respond accordingly, and naturally learn to cooperate
with each other.
|
[
{
"created": "Fri, 16 May 2014 23:25:13 GMT",
"version": "v1"
}
] |
2015-05-11
|
[
[
"Lee",
"Christopher",
""
],
[
"Harper",
"Marc",
""
],
[
"Fryer",
"Dashiell",
""
]
] |
We define a new strategy for population games based on techniques from machine learning and statistical inference that is essentially uninvadable and can successfully invade (significantly more likely than a neutral mutant) essentially all known memory-one strategies for the prisoner's dilemma and other population games, including ALLC (always cooperate), ALLD (always defect), tit-for-tat (TFT), win-stay-lose-shift (WSLS), and zero determinant (ZD) strategies, including extortionate and generous strategies. We will refer to a player using this strategy as an "information player" and the specific implementation as $IP_0$. Such players use the history of play to identify opponent's strategies and respond accordingly, and naturally learn to cooperate with each other.
|
1806.10889
|
Adrien Coulier
|
Adrien Coulier, Andreas Hellander
|
Orchestral: a lightweight framework for parallel simulations of
cell-cell communication
|
preprint, 9 pages, 9 figures, submitted to IEEE eScience 2018
| null | null | null |
q-bio.CB q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We develop a modeling and simulation framework capable of massively parallel
simulation of multicellular systems with spatially resolved stochastic kinetics
in individual cells. By the use of operator-splitting we decouple the
simulation of reaction-diffusion kinetics inside the cells from the simulation
of molecular cell-cell interactions occurring on the boundaries between cells.
This decoupling leverages the inherent scale separation in the underlying model
to enable highly horizontally scalable parallel simulation, suitable for
simulation on heterogeneous, distributed computing infrastructures such as
public and private clouds. Thanks to its modular structure, our framework
makes it possible to couple any existing single-cell simulation software
with any cell signaling simulator. We exemplify the flexibility and
scalability of the framework by using the popular single-cell simulation
software eGFRD to construct and simulate a multicellular model of Notch-Delta
signaling over OpenStack cloud infrastructure provided by the SNIC Science
Cloud.
|
[
{
"created": "Thu, 28 Jun 2018 11:30:52 GMT",
"version": "v1"
}
] |
2018-06-29
|
[
[
"Coulier",
"Adrien",
""
],
[
"Hellander",
"Andreas",
""
]
] |
We develop a modeling and simulation framework capable of massively parallel simulation of multicellular systems with spatially resolved stochastic kinetics in individual cells. By the use of operator-splitting we decouple the simulation of reaction-diffusion kinetics inside the cells from the simulation of molecular cell-cell interactions occurring on the boundaries between cells. This decoupling leverages the inherent scale separation in the underlying model to enable highly horizontally scalable parallel simulation, suitable for simulation on heterogeneous, distributed computing infrastructures such as public and private clouds. Thanks to its modular structure, our framework makes it possible to couple any existing single-cell simulation software with any cell signaling simulator. We exemplify the flexibility and scalability of the framework by using the popular single-cell simulation software eGFRD to construct and simulate a multicellular model of Notch-Delta signaling over OpenStack cloud infrastructure provided by the SNIC Science Cloud.
|
2004.01819
|
Rifat Zahan
|
Rifat Zahan, Ian McQuillan and Nathaniel D. Osgood
|
DNA Methylation Data to Predict Suicidal and Non-Suicidal Deaths: A
Machine Learning Approach
| null |
In 2018 IEEE International Conference on Healthcare Informatics
(ICHI) (pp. 363-365). IEEE (2018, June)
|
10.1109/ICHI.2018.00057
| null |
q-bio.GN cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The objective of this study is to predict suicidal and non-suicidal deaths
from DNA methylation data using a modern machine learning algorithm. We used
support vector machines to classify existing secondary data consisting of
normalized values of methylated DNA probe intensities from tissues of two
cortical brain regions to distinguish suicide cases from control cases. Before
classification, we employed Principal component analysis (PCA) and
t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce the dimension of
the data. In comparison to PCA, the modern data visualization method t-SNE
performs better in dimensionality reduction. t-SNE accounts for the possible
non-linear patterns in low-dimensional data. We applied four-fold
cross-validation in which the resulting output from t-SNE was used as training
data for the Support Vector Machine (SVM). Despite the use of cross-validation,
the nominally perfect prediction of suicidal deaths for BA11 data suggests
possible over-fitting of the model. The study also may have suffered from
'spectrum bias' since the individuals were only studied from two extreme
scenarios. This research constitutes a baseline study for classifying suicidal
and non-suicidal deaths from DNA methylation data. Future studies with larger
sample size, while possibly incorporating methylation data from living
individuals, may reduce the bias and improve the accuracy of the results.
|
[
{
"created": "Sat, 4 Apr 2020 00:34:22 GMT",
"version": "v1"
}
] |
2020-04-07
|
[
[
"Zahan",
"Rifat",
""
],
[
"McQuillan",
"Ian",
""
],
[
"Osgood",
"Nathaniel D.",
""
]
] |
The objective of this study is to predict suicidal and non-suicidal deaths from DNA methylation data using a modern machine learning algorithm. We used support vector machines to classify existing secondary data consisting of normalized values of methylated DNA probe intensities from tissues of two cortical brain regions to distinguish suicide cases from control cases. Before classification, we employed Principal component analysis (PCA) and t-distributed Stochastic Neighbor Embedding (t-SNE) to reduce the dimension of the data. In comparison to PCA, the modern data visualization method t-SNE performs better in dimensionality reduction. t-SNE accounts for the possible non-linear patterns in low-dimensional data. We applied four-fold cross-validation in which the resulting output from t-SNE was used as training data for the Support Vector Machine (SVM). Despite the use of cross-validation, the nominally perfect prediction of suicidal deaths for BA11 data suggests possible over-fitting of the model. The study also may have suffered from 'spectrum bias' since the individuals were only studied from two extreme scenarios. This research constitutes a baseline study for classifying suicidal and non-suicidal deaths from DNA methylation data. Future studies with larger sample size, while possibly incorporating methylation data from living individuals, may reduce the bias and improve the accuracy of the results.
|
1910.04059
|
Kezhi Li
|
Taiyu Zhu, Kezhi Li, Pantelis Georgiou
|
A Dual-Hormone Closed-Loop Delivery System for Type 1 Diabetes Using
Deep Reinforcement Learning
| null | null | null | null |
q-bio.QM cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a dual-hormone delivery strategy by exploiting deep reinforcement
learning (RL) for people with Type 1 Diabetes (T1D). Specifically, double
dilated recurrent neural networks (RNN) are used to learn the hormone delivery
strategy, trained by a variant of Q-learning, whose inputs are raw data of
glucose \& meal carbohydrate and outputs are dual-hormone (insulin and
glucagon) delivery. Without prior knowledge of the glucose-insulin metabolism,
we run the method on the UVA/Padova simulator. Hundreds of days of self-play are
performed to obtain a generalized model, then importance sampling is adopted to
customize the model for personal use. \emph{In-silico} the proposed strategy
achieves glucose time in target range (TIR) $93\%$ for adults and $83\%$ for
adolescents given standard bolus, outperforming previous approaches
significantly. The results indicate that deep RL is effective in building
personalized hormone delivery strategy for people with T1D.
|
[
{
"created": "Wed, 9 Oct 2019 15:14:55 GMT",
"version": "v1"
}
] |
2019-10-10
|
[
[
"Zhu",
"Taiyu",
""
],
[
"Li",
"Kezhi",
""
],
[
"Georgiou",
"Pantelis",
""
]
] |
We propose a dual-hormone delivery strategy by exploiting deep reinforcement learning (RL) for people with Type 1 Diabetes (T1D). Specifically, double dilated recurrent neural networks (RNN) are used to learn the hormone delivery strategy, trained by a variant of Q-learning, whose inputs are raw data of glucose \& meal carbohydrate and outputs are dual-hormone (insulin and glucagon) delivery. Without prior knowledge of the glucose-insulin metabolism, we run the method on the UVA/Padova simulator. Hundreds of days of self-play are performed to obtain a generalized model, then importance sampling is adopted to customize the model for personal use. \emph{In-silico} the proposed strategy achieves glucose time in target range (TIR) $93\%$ for adults and $83\%$ for adolescents given standard bolus, outperforming previous approaches significantly. The results indicate that deep RL is effective in building personalized hormone delivery strategy for people with T1D.
|
1202.2518
|
Liang Wang
|
Wang Liang
|
Segmenting DNA sequence into `words'
|
12 pages,2 figures
| null | null | null |
q-bio.GN cs.CL
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper presents a novel method to segment/decode DNA sequences based on
an n-gram statistical language model. First, we find that the length of most DNA
'words' is 12 to 15 bps by analyzing the genomes of 12 model species. Then we
design an unsupervised, probability-based approach to segment the DNA sequences.
A benchmark for the segmentation method is also proposed.
|
[
{
"created": "Sun, 12 Feb 2012 11:23:38 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Feb 2012 02:39:32 GMT",
"version": "v2"
},
{
"created": "Tue, 2 Oct 2012 12:49:43 GMT",
"version": "v3"
},
{
"created": "Sat, 22 Jun 2013 10:14:11 GMT",
"version": "v4"
}
] |
2015-03-13
|
[
[
"Liang",
"Wang",
""
]
] |
This paper presents a novel method to segment/decode DNA sequences based on an n-gram statistical language model. First, we find that the length of most DNA 'words' is 12 to 15 bps by analyzing the genomes of 12 model species. Then we design an unsupervised, probability-based approach to segment the DNA sequences. A benchmark for the segmentation method is also proposed.
|
q-bio/0412037
|
HC Paul Lee
|
Hong-Da Chen, Chang-Heng Chang, Li-Ching Hsieh and Hoong-Chien Lee
|
Divergence and Shannon information in genomes
|
4 pages, 3 tables, 2 figures
| null |
10.1103/PhysRevLett.94.178103
| null |
q-bio.GN
| null |
Shannon information (SI) and its special case, divergence, are defined for a
DNA sequence in terms of probabilities of chemical words in the sequence and
are computed for a set of complete genomes highly diverse in length and
composition. We find the following: SI (but not divergence) is inversely
proportional to sequence length for a random sequence but is length-independent
for genomes; the genomic SI is always greater and, for shorter words and longer
sequences, hundreds to thousands times greater than the SI in a random sequence
whose length and composition match those of the genome; genomic SIs appear to
have word-length dependent universal values. The universality is inferred to be
an evolution footprint of a universal mode for genome growth.
|
[
{
"created": "Sat, 18 Dec 2004 03:15:03 GMT",
"version": "v1"
}
] |
2009-11-10
|
[
[
"Chen",
"Hong-Da",
""
],
[
"Chang",
"Chang-Heng",
""
],
[
"Hsieh",
"Li-Ching",
""
],
[
"Lee",
"Hoong-Chien",
""
]
] |
Shannon information (SI) and its special case, divergence, are defined for a DNA sequence in terms of probabilities of chemical words in the sequence and are computed for a set of complete genomes highly diverse in length and composition. We find the following: SI (but not divergence) is inversely proportional to sequence length for a random sequence but is length-independent for genomes; the genomic SI is always greater and, for shorter words and longer sequences, hundreds to thousands times greater than the SI in a random sequence whose length and composition match those of the genome; genomic SIs appear to have word-length dependent universal values. The universality is inferred to be an evolution footprint of a universal mode for genome growth.
|
2311.02226
|
Nina Fefferman
|
Nina H. Fefferman, Michael J. Blum, Lydia Bourouiba, Nathaniel L.
Gibson, Qiang He, Debra L. Miller, Monica Papes, Dana K. Pasquale, Connor
Verheyen, Sadie J. Ryan
|
The Case for Controls: Identifying outbreak risk factors through
case-control comparisons
| null | null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Investigations of infectious disease outbreaks often focus on identifying
place- and context-dependent factors responsible for emergence and spread,
resulting in phenomenological narratives ill-suited to developing generalizable
predictive and preventive measures. We contend that case-control hypothesis
testing is a more powerful framework for epidemiological investigation. The
approach, widely used in medical research, involves identifying
counterfactuals, with case-control comparisons drawn to test hypotheses about
the conditions that manifest outbreaks. Here we outline the merits of applying
a case-control framework as epidemiological study design. We first describe a
framework for iterative multidisciplinary interrogation to discover minimally
sufficient sets of factors that can lead to disease outbreaks. We then lay out
how case-control comparisons can respectively center on pathogen(s), factor(s),
or landscape(s) with vignettes focusing on pathogen transmission. Finally, we
consider how adopting case-control approaches can promote evidence-based
decision making for responding to and preventing outbreaks.
|
[
{
"created": "Fri, 3 Nov 2023 20:32:01 GMT",
"version": "v1"
}
] |
2023-11-07
|
[
[
"Fefferman",
"Nina H.",
""
],
[
"Blum",
"Michael J.",
""
],
[
"Bourouiba",
"Lydia",
""
],
[
"Gibson",
"Nathaniel L.",
""
],
[
"He",
"Qiang",
""
],
[
"Miller",
"Debra L.",
""
],
[
"Papes",
"Monica",
""
],
[
"Pasquale",
"Dana K.",
""
],
[
"Verheyen",
"Connor",
""
],
[
"Ryan",
"Sadie J.",
""
]
] |
Investigations of infectious disease outbreaks often focus on identifying place- and context-dependent factors responsible for emergence and spread, resulting in phenomenological narratives ill-suited to developing generalizable predictive and preventive measures. We contend that case-control hypothesis testing is a more powerful framework for epidemiological investigation. The approach, widely used in medical research, involves identifying counterfactuals, with case-control comparisons drawn to test hypotheses about the conditions that manifest outbreaks. Here we outline the merits of applying a case-control framework as epidemiological study design. We first describe a framework for iterative multidisciplinary interrogation to discover minimally sufficient sets of factors that can lead to disease outbreaks. We then lay out how case-control comparisons can respectively center on pathogen(s), factor(s), or landscape(s) with vignettes focusing on pathogen transmission. Finally, we consider how adopting case-control approaches can promote evidence-based decision making for responding to and preventing outbreaks.
|
1709.01998
|
Olivier Simon
|
Olivier Simon, Rabi Yacoub, Sanjay Jain, and Pinaki Sarder
|
Multi-radial LBP Features as a Tool for Rapid Glomerular Detection and
Assessment in Whole Slide Histopathology Images
|
14 pages, 6 figures. Added scalebars, and for Fig. 3b a clearer
example of medulla removal
| null | null | null |
q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We demonstrate a simple and effective automated method for the segmentation
of glomeruli from large (~1 gigapixel) histopathological whole-slide images
(WSIs) of thin renal tissue sections and biopsies, using an adaptation of the
well-known local binary patterns (LBP) image feature vector to train a support
vector machine (SVM) model. Our method offers high precision (>90%) and
reasonable recall (>70%) for glomeruli from WSIs, is readily adaptable to
glomeruli from multiple species, including mouse, rat, and human, and is robust
to diverse slide staining methods. Using 5 Intel(R) Core(TM) i7-4790 CPUs with
40 GB RAM, our method typically requires ~15 sec for training and ~2 min to
extract glomeruli reproducibly from a WSI. Deploying a deep convolutional
neural network trained for glomerular recognition in tandem with the SVM
suffices to reduce false positives to below 3%. We also apply our LBP-based
descriptor to successfully detect pathologic changes in a mouse model of
diabetic nephropathy. We envision potential clinical and laboratory
applications for this approach in the study and diagnosis of glomerular
disease, and as a means of greatly accelerating the construction of feature
sets to fuel deep learning studies into tissue structure and pathology.
|
[
{
"created": "Wed, 6 Sep 2017 20:54:10 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Sep 2017 15:06:48 GMT",
"version": "v2"
}
] |
2017-09-21
|
[
[
"Simon",
"Olivier",
""
],
[
"Yacoub",
"Rabi",
""
],
[
"Jain",
"Sanjay",
""
],
[
"Sarder",
"Pinaki",
""
]
] |
We demonstrate a simple and effective automated method for the segmentation of glomeruli from large (~1 gigapixel) histopathological whole-slide images (WSIs) of thin renal tissue sections and biopsies, using an adaptation of the well-known local binary patterns (LBP) image feature vector to train a support vector machine (SVM) model. Our method offers high precision (>90%) and reasonable recall (>70%) for glomeruli from WSIs, is readily adaptable to glomeruli from multiple species, including mouse, rat, and human, and is robust to diverse slide staining methods. Using 5 Intel(R) Core(TM) i7-4790 CPUs with 40 GB RAM, our method typically requires ~15 sec for training and ~2 min to extract glomeruli reproducibly from a WSI. Deploying a deep convolutional neural network trained for glomerular recognition in tandem with the SVM suffices to reduce false positives to below 3%. We also apply our LBP-based descriptor to successfully detect pathologic changes in a mouse model of diabetic nephropathy. We envision potential clinical and laboratory applications for this approach in the study and diagnosis of glomerular disease, and as a means of greatly accelerating the construction of feature sets to fuel deep learning studies into tissue structure and pathology.
|
0801.0254
|
Reinhard Laubenbacher
|
Reinhard Laubenbacher and Brandilyn Stigler
|
Design of experiments and biochemical network inference
|
To appear in "Algebraic and geometric methods in statistics," P.
Gibilisco, E. Riccomagno, M.-P. Rogantin, H. P. Wynn, eds., Cambridge
University Press, 2008
|
Algebraic and Geometric Methods in Statistics. Eds: Gibilisco,
Riccomagno, Rogantin, Wynn, Cambridge University Press (2008)
| null | null |
q-bio.MN stat.AP
| null |
Design of experiments is a branch of statistics that aims to identify
efficient procedures for planning experiments in order to optimize knowledge
discovery. Network inference is a subfield of systems biology devoted to the
identification of biochemical networks from experimental data. Common to both
areas of research is their focus on the maximization of information gathered
from experimentation. The goal of this paper is to establish a connection
between these two areas coming from the common use of polynomial models and
techniques from computational algebra.
|
[
{
"created": "Mon, 31 Dec 2007 23:52:47 GMT",
"version": "v1"
}
] |
2019-07-12
|
[
[
"Laubenbacher",
"Reinhard",
""
],
[
"Stigler",
"Brandilyn",
""
]
] |
Design of experiments is a branch of statistics that aims to identify efficient procedures for planning experiments in order to optimize knowledge discovery. Network inference is a subfield of systems biology devoted to the identification of biochemical networks from experimental data. Common to both areas of research is their focus on the maximization of information gathered from experimentation. The goal of this paper is to establish a connection between these two areas coming from the common use of polynomial models and techniques from computational algebra.
|