| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1503.00399
|
Hamidreza Badri
|
Hamidreza Badri and Yoichi Watanabe and Kevin Leder
|
Robust and probabilistic optimization of dose schedules in radiotherapy
| null | null | null | null |
q-bio.TO physics.med-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We consider the effects of parameter uncertainty on the optimal radiation
schedule in the context of the linear-quadratic model. Our interest arises from
the observation that if inter-patient variations in OAR and tumor sensitivities
to radiation or sparing factor of the OAR are not accounted for during
radiation scheduling, the performance of the therapy may be strongly degraded
or the OAR may receive a substantially larger dose than the maximum threshold.
This paper proposes two radiation scheduling concepts to incorporate
inter-patient variability into the scheduling optimization problem. The first
approach is a robust formulation, a conservative model that optimizes against
the worst-case dose schedule that may occur. The second
method is a probabilistic approach, where the model parameters are given by a
set of random variables. This formulation ensures that our constraints are
satisfied with a given probability, and that our objective function achieves a
desired level with a stated probability. We used a transformation to reduce the
resulting optimization problem to two dimensions. We showed that the optimal
solution lies on the boundary of the feasible region and we used a branch and
bound algorithm to find the global optimal solution. We observed that if the
number of fractions in the optimal conventional schedule is the same as in the
robust and stochastic solutions, it is preferable to administer an equal or
smaller total dose. In addition, if there are more (fewer) treatment sessions
in the probabilistic or robust solution compared to the conventional schedule,
a reduction in total dose squared (total dose) will be expected. Finally, we
performed numerical experiments in the setting of head-and-neck tumors to
reveal the effect of parameter uncertainty on optimal schedules and to evaluate
the sensitivity of the model to the choice of key model parameters.
|
[
{
"created": "Mon, 2 Mar 2015 03:10:16 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jun 2015 21:33:11 GMT",
"version": "v2"
}
] |
2015-06-05
|
[
[
"Badri",
"Hamidreza",
""
],
[
"Watanabe",
"Yoichi",
""
],
[
"Leder",
"Kevin",
""
]
] |
|
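As an illustration of the linear-quadratic (LQ) machinery this abstract builds on, here is a minimal sketch of evaluating a fractionation schedule's biologically effective dose (BED) and its worst-case value over an uncertainty box for the OAR sparing factor and alpha/beta ratio. All numerical ranges are hypothetical illustrations, not parameters from the paper.

```python
# Minimal sketch of robust (worst-case) evaluation of a fractionation
# schedule under the linear-quadratic (LQ) model. All parameter ranges
# below are hypothetical, not values from the paper.

def bed(n, d, alpha_beta):
    """Biologically effective dose for n fractions of dose d (Gy)."""
    return n * d * (1.0 + d / alpha_beta)

def worst_case_oar_bed(n, d, sparing_range, alpha_beta_range):
    """Worst-case OAR BED over candidate sparing factors and OAR
    alpha/beta ratios (a crude grid search over the uncertainty box)."""
    worst = 0.0
    for s in sparing_range:
        for ab in alpha_beta_range:
            worst = max(worst, bed(n, s * d, ab))
    return worst

# Conventional schedule: 30 fractions of 2 Gy.
n, d = 30, 2.0
tumor_bed = bed(n, d, alpha_beta=10.0)    # typical tumor alpha/beta (Gy)
oar_worst = worst_case_oar_bed(
    n, d,
    sparing_range=[0.5, 0.6, 0.7],        # hypothetical sparing factors
    alpha_beta_range=[2.0, 3.0, 4.0],     # hypothetical OAR alpha/beta (Gy)
)
print(tumor_bed, oar_worst)
```

A robust formulation in the abstract's sense would constrain `oar_worst` (rather than a nominal BED) while optimizing the tumor objective; the probabilistic variant would replace the max over the box with a quantile over a distribution on the parameters.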
2201.04739
|
Marcos Trevisan Dr.
|
Alejandro Pardo Pintos, Diego E Shalom, Enzo Tagliazucchi, Gabriel
Mindlin and Marcos A Trevisan
|
Cognitive forces shape the dynamics of word usage across multiple
languages
|
8 pages, 3 figures
| null |
10.1016/j.chaos.2022.112327
| null |
q-bio.NC q-bio.PE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The analysis of thousands of time series in different languages reveals that
word usage presents oscillations with a prevalence of 16-year cycles, mounted
on slowly varying trends. These components carry different information: while
similar oscillatory patterns gather semantically related words, similar trends
group together keywords representative of cultural and historical periods. We
interpreted the regular oscillations as cycles of interest and saturation,
whose behavior could be captured using a simple mathematical model. Driving the
model with the empirical trends, we were able to explain word frequency traces
across multiple languages throughout the last three centuries. Our results
suggest that word frequency usage is poised at dynamical criticality, close to
a Hopf bifurcation which signals the emergence of oscillatory dynamics.
Crucially, our model explains the oscillatory synchronization observed within
groups of words and provides an interpretation of this phenomenon in terms of
the cultural context driving collective cognition. These findings contribute to
unraveling how our use of language is shaped by the interplay between human
cognition and sociocultural forces.
|
[
{
"created": "Wed, 12 Jan 2022 23:32:38 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Feb 2022 18:26:33 GMT",
"version": "v2"
}
] |
2022-07-20
|
[
[
"Pintos",
"Alejandro Pardo",
""
],
[
"Shalom",
"Diego E",
""
],
[
"Tagliazucchi",
"Enzo",
""
],
[
"Mindlin",
"Gabriel",
""
],
[
"Trevisan",
"Marcos A",
""
]
] |
|
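The claim that word-frequency dynamics sit "close to a Hopf bifurcation" can be illustrated with the textbook Hopf normal form (Stuart-Landau oscillator): below the bifurcation the amplitude decays, above it a limit cycle of radius sqrt(mu) emerges. This is a generic sketch of the bifurcation itself, not the authors' word-usage model.

```python
# Generic Hopf normal form (Stuart-Landau oscillator):
#     dz/dt = (mu + i*omega) z - |z|^2 z
# For mu < 0 oscillations die out; for mu > 0 a limit cycle of radius
# sqrt(mu) appears. Textbook sketch, not the paper's fitted model.

def simulate_amplitude(mu, omega=2.0, z0=0.5 + 0.0j, dt=1e-3, steps=200_000):
    """Forward-Euler integration; returns the final oscillation amplitude."""
    z = z0
    for _ in range(steps):
        dz = (mu + 1j * omega) * z - (abs(z) ** 2) * z
        z += dt * dz
    return abs(z)

# Below the bifurcation the amplitude decays to ~0; above it the system
# settles on a limit cycle of radius sqrt(mu).
print(simulate_amplitude(-0.1))   # decays toward 0
print(simulate_amplitude(0.25))   # settles near sqrt(0.25) = 0.5
```

In the paper's picture, the slowly varying trend plays the role of a drive moving the effective bifurcation parameter, which is what lets one dynamical model reproduce both oscillating and non-oscillating word trajectories.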
1904.12652
|
Azam Yazdani
|
Azam Yazdani, Akram Yazdani, Sarah H. Elsea, Daniel J. Schaid, Michael
R. Kosorok, Gita Dangol, Ahmad Samiei
|
Genome analysis and pleiotropy assessment using causal networks with
loss of function mutation and metabolomics
| null | null | null | null |
q-bio.GN stat.AP stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background: Many genome-wide association studies have detected genomic
regions associated with traits, yet understanding the functional causes of
association often remains elusive. Utilizing systems approaches and focusing on
intermediate molecular phenotypes might facilitate biologic understanding.
Results: The availability of exome sequencing of two populations of
African-Americans and European-Americans from the Atherosclerosis Risk in
Communities study allowed us to investigate the effects of annotated
loss-of-function (LoF) mutations on 122 serum metabolites. To assess the
findings, we built metabolomic causal networks for each population separately
and utilized structural equation modeling. We then validated our findings with
a set of independent samples. By use of methods based on concepts of Mendelian
randomization of genetic variants, we showed that some of the affected
metabolites are risk predictors in the causal pathway of disease. For example,
LoF mutations in the gene KIAA1755 were identified to elevate the levels of
eicosapentaenoate (p-value=5E-14), an essential fatty acid clinically
identified to increase essential hypertension. We showed that this gene is in
the pathway to triglycerides, where both triglycerides and essential
hypertension are risk factors of metabolomic disorder and heart attack. We also
identified that the gene CLDN17, harboring loss-of-function mutations, had
pleiotropic actions on metabolites from amino acid and lipid pathways.
Conclusion: Using systems biology approaches for the analysis of metabolomics
and genetic data, we integrated several biological processes, which led to
findings that may functionally connect genetic variants with complex diseases.
|
[
{
"created": "Mon, 29 Apr 2019 12:45:06 GMT",
"version": "v1"
}
] |
2019-04-30
|
[
[
"Yazdani",
"Azam",
""
],
[
"Yazdani",
"Akram",
""
],
[
"Elsea",
"Sarah H.",
""
],
[
"Schaid",
"Daniel J.",
""
],
[
"Kosorok",
"Michael R.",
""
],
[
"Dangol",
"Gita",
""
],
[
"Samiei",
"Ahmad",
""
]
] |
|
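The Mendelian-randomization idea the abstract invokes treats a genetic variant as an instrumental variable for a metabolite's causal effect on an outcome. A minimal version is the Wald ratio estimator, sketched here on purely synthetic data (this is not the paper's causal-network analysis, and all effect sizes are made up):

```python
# Minimal Mendelian-randomization sketch: a genetic variant G instruments
# the causal effect of a metabolite level X on an outcome Y, via the
# Wald ratio  beta_hat = cov(G, Y) / cov(G, X).
# All data are synthetic; the confounder U biases the naive regression
# but not the instrumental-variable ratio.
import random

random.seed(0)
n = 100_000
true_effect = 0.7                                   # causal effect of X on Y

G = [random.choice([0, 1, 2]) for _ in range(n)]    # allele count (instrument)
U = [random.gauss(0, 1) for _ in range(n)]          # unobserved confounder
X = [0.5 * g + u + random.gauss(0, 1) for g, u in zip(G, U)]
Y = [true_effect * x + u + random.gauss(0, 1) for x, u in zip(X, U)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / len(a)

naive = cov(X, Y) / cov(X, X)   # confounded by U, biased away from 0.7
wald = cov(G, Y) / cov(G, X)    # instrument-based, close to 0.7
print(naive, wald)
```

The ratio works because G affects Y only through X and is independent of U, which is exactly the assumption a loss-of-function variant is meant to satisfy.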
1811.05649
|
Ramon Grima
|
Emma M. Keizer, Bjorn Bastian, Robert W. Smith, Ramon Grima and
Christian Fleck
|
Extending the linear-noise approximation to biochemical systems
influenced by intrinsic noise and slow lognormally distributed extrinsic
noise
|
43 pages, 4 figures
|
Phys. Rev. E 99, 052417 (2019)
|
10.1103/PhysRevE.99.052417
| null |
q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It is well known that the kinetics of an intracellular biochemical network is
stochastic. This is due to intrinsic noise arising from the random timing of
biochemical reactions in the network as well as due to extrinsic noise stemming
from the interaction of unknown molecular components with the network and from
the cell's changing environment. While there are many methods to study the
effect of intrinsic noise on the system dynamics, few exist to study the
influence of both types of noise. Here we show how one can extend the
conventional linear-noise approximation to allow for the rapid evaluation of
the molecule-number statistics of a biochemical network influenced by
intrinsic noise and by slow lognormally distributed extrinsic noise. The theory
is applied to simple models of gene regulatory networks and its validity
confirmed by comparison with exact stochastic simulations. In particular we
show how extrinsic noise modifies the dependence of the variance of the
molecule number fluctuations on the rate constants, the mutual information
between input and output signalling molecules and the robustness of
feed-forward loop motifs.
|
[
{
"created": "Wed, 14 Nov 2018 05:25:10 GMT",
"version": "v1"
}
] |
2019-06-05
|
[
[
"Keizer",
"Emma M.",
""
],
[
"Bastian",
"Bjorn",
""
],
[
"Smith",
"Robert W.",
""
],
[
"Grima",
"Ramon",
""
],
[
"Fleck",
"Christian",
""
]
] |
|
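The flavor of combining intrinsic and slow extrinsic noise can be sketched on the simplest birth-death process: conditional on a birth rate k, the steady-state copy number is Poisson with mean k/d, and if k is lognormally distributed across cells the law of total variance gives Var[n] = E[k]/d + Var[k]/d². This toy Monte Carlo check uses made-up parameters and is not the paper's linear-noise-approximation derivation:

```python
# Toy mixture of intrinsic and extrinsic noise for a birth-death process:
# given rate k, n ~ Poisson(k/d); with k lognormal across cells the law
# of total variance predicts  Var[n] = E[k]/d + Var[k]/d**2.
# Hypothetical parameters, not values from the paper.
import math
import random

random.seed(1)

def poisson(lam):
    """Knuth's Poisson sampler (adequate for modest lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

d = 1.0
mu, sigma = math.log(50.0), 0.3            # lognormal parameters for k

vals = []
for _ in range(50_000):
    k = random.lognormvariate(mu, sigma)   # slow extrinsic noise in k
    vals.append(poisson(k / d))            # intrinsic noise given k

mean = sum(vals) / len(vals)
var = sum((v - mean) ** 2 for v in vals) / len(vals)

Ek = math.exp(mu + sigma ** 2 / 2)                            # E[k]
Vk = (math.exp(sigma ** 2) - 1) * math.exp(2 * mu + sigma ** 2)  # Var[k]
predicted = Ek / d + Vk / d ** 2
print(mean, var, predicted)
```

The paper's contribution is doing the analogue of this averaging analytically within the linear-noise approximation for full networks, rather than by brute-force sampling.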
2004.15018
|
Wesley Pegden
|
Maria Chikina and Wesley Pegden
|
Failure of monotonicity in epidemic models
|
7 pages, 4 figures. Code is available in arXiv'd files
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We discuss the failure of monotonicity properties for even simple
compartmental epidemic models, for the case where transmission rates are
non-constant. We also identify a special case in which monotonicity holds.
|
[
{
"created": "Thu, 30 Apr 2020 17:58:17 GMT",
"version": "v1"
}
] |
2020-05-01
|
[
[
"Chikina",
"Maria",
""
],
[
"Pegden",
"Wesley",
""
]
] |
|
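The monotonicity question the abstract raises (does lowering the transmission rate pointwise always improve every outcome?) can be probed numerically with a standard SIR integrator that accepts a time-varying beta(t). The schedules below are hypothetical probes, not the authors' construction:

```python
# Minimal SIR integrator with a time-varying transmission rate beta(t),
# usable to compare outcomes across transmission schedules. Parameters
# are hypothetical; this is not the paper's counterexample.

def sir_final_size(beta, gamma=1.0, i0=1e-4, dt=1e-3, t_max=200.0):
    """Integrate S' = -beta(t) S I,  I' = beta(t) S I - gamma I by
    forward Euler and return the final attack rate 1 - S(t_max)."""
    s, i, t = 1.0 - i0, i0, 0.0
    while t < t_max:
        b = beta(t)
        ds = -b * s * i
        di = b * s * i - gamma * i
        s += dt * ds
        i += dt * di
        t += dt
    return 1.0 - s

# Constant R0 = 2: the final size solves z = 1 - exp(-2 z), z ~ 0.797.
print(sir_final_size(lambda t: 2.0))

# A pointwise-lower, time-varying schedule to compare against:
print(sir_final_size(lambda t: 1.8 if t < 30 else 1.5))
```

Swapping in other schedules (and other outcomes, e.g. peak prevalence or infections after a fixed date rather than final size) is where, per the abstract, monotonicity can fail.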
1112.3357
|
Steven Frank
|
Steven A. Frank
|
A general model of the public goods dilemma
| null |
Journal of Evolutionary Biology 23:1245-1250 (2010)
|
10.1111/j.1420-9101.2010.01986.x
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An individually costly act that benefits all group members is a public good.
Natural selection favors individual contribution to public goods only when some
benefit to the individual offsets the cost of contribution. Problems of sex
ratio, parasite virulence, microbial metabolism, punishment of noncooperators,
and nearly all aspects of sociality have been analyzed as public goods shaped
by kin and group selection. Here, I develop two general aspects of the public
goods problem that have received relatively little attention. First, variation
in individual resources favors selfish individuals to vary their allocation to
public goods. Those individuals better endowed contribute their excess
resources to public benefit, whereas those individuals with fewer resources
contribute less to the public good. Thus, purely selfish behavior causes
individuals to stratify into upper classes that contribute greatly to public
benefit and social cohesion and to lower classes that contribute little to the
public good. Second, if group success absolutely requires production of the
public good, then the pressure favoring production is relatively high. By
contrast, if group success depends weakly on the public good, then the pressure
favoring production is relatively weak. Stated in this way, it is obvious that
the role of baseline success is important. However, discussions of public goods
problems sometimes fail to emphasize this point sufficiently. The models here
suggest simple tests for the roles of resource variation and baseline success.
Given the widespread importance of public goods, better models and tests would
greatly deepen our understanding of many processes in biology and sociality.
|
[
{
"created": "Wed, 14 Dec 2011 21:05:51 GMT",
"version": "v1"
}
] |
2011-12-16
|
[
[
"Frank",
"Steven A.",
""
]
] |
|
q-bio/0403019
|
Hiroshi Fujisaki
|
Hiroshi Fujisaki, Lintao Bu, and John E. Straub
|
Vibrational energy relaxation (VER) of a CD stretching mode in
cytochrome c
|
20 pages, 7 figures, 3 tables, submitted to Adv. Chem. Phys. for the
proceedings of the YITP international symposium on "Geometrical structure of
phase space in multi-dimensional chaos: Applications to chemical reaction
dynamics in complex systems"
| null | null | null |
q-bio.BM
| null |
We first review how to determine the rate of vibrational energy relaxation
(VER) using perturbation theory. We then apply those theoretical results to the
problem of VER of a CD stretching mode in the protein cytochrome c. We model
cytochrome c in vacuum as a normal mode system with the lowest-order anharmonic
coupling elements. We find that, for the ``lifetime'' width parameter $\gamma=3
\sim 30$ cm$^{-1}$, the VER time is $0.2 \sim 0.3$ ps, which agrees rather well
with the previous classical calculation using the quantum correction factor
method, and is consistent with spectroscopic experiments by Romesberg's group.
We decompose the VER rate into separate contributions from two modes, and find
that the most significant contribution, which depends on the ``lifetime'' width
parameter, comes from those modes most resonant with the CD vibrational mode.
|
[
{
"created": "Mon, 15 Mar 2004 20:42:06 GMT",
"version": "v1"
},
{
"created": "Wed, 17 Mar 2004 17:11:29 GMT",
"version": "v2"
},
{
"created": "Thu, 26 Aug 2004 23:52:49 GMT",
"version": "v3"
}
] |
2007-05-23
|
[
[
"Fujisaki",
"Hiroshi",
""
],
[
"Bu",
"Lintao",
""
],
[
"Straub",
"John E.",
""
]
] |
|
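The role of the "lifetime" width parameter gamma can be sketched with a golden-rule-style sum in which each bath mode contributes to the relaxation rate through a Lorentzian of width gamma centered at resonance with the system frequency. The mode list below is randomly generated for illustration; none of these numbers come from the cytochrome c calculation.

```python
# Toy golden-rule-style VER rate: each bath mode j with squared coupling
# c2_j contributes through a Lorentzian of width gamma centered on
# resonance with the system frequency omega0. Mode frequencies and
# couplings here are made up, not the paper's normal-mode data.
import math
import random

random.seed(2)
omega0 = 2100.0                        # illustrative "CD stretch" (cm^-1)
modes = [(random.uniform(1500.0, 2700.0), random.uniform(0.1, 1.0))
         for _ in range(500)]          # (frequency, |coupling|^2)

def ver_rate(gamma):
    """Lorentzian-weighted sum of couplings; larger gamma lets more
    off-resonant modes contribute to the rate."""
    return sum(c2 * (gamma / math.pi) / ((w - omega0) ** 2 + gamma ** 2)
               for w, c2 in modes)

def dominant_mode(gamma):
    """Frequency of the single mode contributing most to the rate."""
    return max(modes,
               key=lambda m: m[1] * gamma / ((m[0] - omega0) ** 2 + gamma ** 2))[0]

for gamma in (3.0, 10.0, 30.0):
    print(gamma, ver_rate(gamma), dominant_mode(gamma))
```

As in the abstract, the dominant contribution comes from modes nearly resonant with omega0, and how sharply that dominance holds depends on gamma.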
q-bio/0409005
|
Anders Irb\"ack
|
Giorgio Favrin, Anders Irb\"ack, Sandipan Mohanty
|
Oligomerization of amyloid Abeta peptides using hydrogen bonds and
hydrophobicity forces
|
19 pages, 7 figures (to appear in Biophys. J.)
|
Biophys. J. 87 (2004) 3657-3664
|
10.1529/biophysj.104.046839
|
LU TP 04-18
|
q-bio.BM
| null |
The 16-22 amino acid fragment of the beta-amyloid peptide associated with
Alzheimer's disease, Abeta, is capable of forming amyloid fibrils. Here we
study the aggregation mechanism of Abeta(16-22) peptides by unbiased
thermodynamic simulations at the atomic level for systems of one, three and six
Abeta(16-22) peptides. We find that the isolated Abeta(16-22) peptide is mainly
a random coil in the sense that both the alpha-helix and beta-strand contents
are low, whereas the three- and six-chain systems form aggregated structures
with a high beta-sheet content. Furthermore, in agreement with experiments on
Abeta(16-22) fibrils, we find that large parallel beta-sheets are unlikely to
form. For the six-chain system, the aggregated structures can have many
different shapes, but certain particularly stable shapes can be identified.
|
[
{
"created": "Wed, 1 Sep 2004 17:32:45 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Sep 2004 20:50:48 GMT",
"version": "v2"
}
] |
2009-11-10
|
[
[
"Favrin",
"Giorgio",
""
],
[
"Irbäck",
"Anders",
""
],
[
"Mohanty",
"Sandipan",
""
]
] |
|
1604.00268
|
David Schwab
|
DJ Strouse, David J Schwab
|
The deterministic information bottleneck
|
15 pages, 4 figures
| null | null | null |
q-bio.NC cond-mat.stat-mech cs.IT math.IT q-bio.QM stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Lossy compression and clustering fundamentally involve a decision about which
features are relevant and which are not. The information bottleneck method (IB)
by Tishby, Pereira, and Bialek formalized this notion as an
information-theoretic optimization problem and proposed an optimal tradeoff
between throwing away as many bits as possible, and selectively keeping those
that are most important. In the IB, compression is measured by mutual
information. Here, we introduce an alternative formulation that replaces mutual
information with entropy, which we call the deterministic information
bottleneck (DIB), that we argue better captures this notion of compression. As
suggested by its name, the solution to the DIB problem turns out to be a
deterministic encoder, or hard clustering, as opposed to the stochastic
encoder, or soft clustering, that is optimal under the IB. We compare the IB
and DIB on synthetic data, showing that the IB and DIB perform similarly in
terms of the IB cost function, but that the DIB significantly outperforms the
IB in terms of the DIB cost function. We also empirically find that the DIB
offers a considerable gain in computational efficiency over the IB, over a
range of convergence parameters. Our derivation of the DIB also suggests a
method for continuously interpolating between the soft clustering of the IB and
the hard clustering of the DIB.
|
[
{
"created": "Fri, 1 Apr 2016 14:48:31 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Dec 2016 05:26:11 GMT",
"version": "v2"
}
] |
2017-02-23
|
[
[
"Strouse",
"DJ",
""
],
[
"Schwab",
"David J",
""
]
] |
|
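The two compression costs contrasted in this abstract can be computed directly for a toy encoder: IB penalizes I(X;T), DIB penalizes H(T), and since H(T) = I(X;T) + H(T|X) the two coincide exactly when the encoder q(t|x) is deterministic. A small sketch (not the authors' code):

```python
# Compare the IB and DIB compression terms for a toy encoder q(t|x).
# IB penalizes I(X;T); DIB penalizes H(T) = I(X;T) + H(T|X), so they
# agree exactly for a deterministic (hard-clustering) encoder.
import math

def entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

def compression_terms(p_x, q_t_given_x):
    """Return (I(X;T), H(T)) for encoder rows q_t_given_x[x][t]."""
    n_t = len(q_t_given_x[0])
    p_t = [sum(p_x[x] * q_t_given_x[x][t] for x in range(len(p_x)))
           for t in range(n_t)]
    h_t = entropy(p_t)
    h_t_given_x = sum(p_x[x] * entropy(q_t_given_x[x])
                      for x in range(len(p_x)))
    return h_t - h_t_given_x, h_t      # I(X;T) = H(T) - H(T|X)

p_x = [0.25] * 4

hard = [[1, 0], [1, 0], [0, 1], [0, 1]]                      # deterministic
soft = [[0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]]      # stochastic

print(compression_terms(p_x, hard))   # I(X;T) = H(T) = 1 bit
print(compression_terms(p_x, soft))   # I(X;T) < H(T)
```

This gap, H(T|X), is exactly the "noise" the DIB objective refuses to pay for, which is why its optimal encoder collapses to a hard clustering.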
1812.10421
|
Vyacheslav Volov
|
V.V. Eskov, V.T. Volov, V.M. Eskov, L.K. Ilyashenko
|
Chaotic dynamics of movements stochastic instability and the hypothesis
of N.A. Bernstein about "repetition without repetition"
|
13 pages, 2 figures, 6 tables
| null | null | null |
q-bio.OT nlin.CD physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tremor was recorded in two groups of subjects (15 people in each group) with
different levels of physical fitness, at rest and under a static load of 3 N.
Each subject was tested in 15 series (number of series N=15) in both states
(with and without physical load), and each series contained 15 samples (n=15)
of tremorogram measurements (500 elements per sample: the registered
coordinates x1(t) of the finger position relative to an eddy-current sensor).
Using the non-parametric Wilcoxon test, a pairwise comparison was made for
each series of the experiment, forming 15 tables in which the results of the
pairwise comparisons for the tremorograms are presented as 15x15 matrices. The
average number <k> of matching pairs of samples and the standard deviation
{\sigma} were calculated for all 15 matrices without load and under a physical
load of 3 N, which showed a nearly twofold increase in the number k of
matching pairs of tremorogram samples under static load. For each sample a
quasi-attractor square was also calculated, which distinguishes the loaded
from the unloaded condition. All samples exhibit a stochastically unstable
state.
|
[
{
"created": "Fri, 21 Dec 2018 16:39:00 GMT",
"version": "v1"
}
] |
2018-12-27
|
[
[
"Eskov",
"V. V.",
""
],
[
"Volov",
"V. T.",
""
],
[
"Eskov",
"V. M.",
""
],
[
"Ilyashenko",
"L. K.",
""
]
] |
|
1312.7532
|
Carsten Maedler
|
Carsten Maedler, Daniel Kim, Remco A. Spanjaard, Mi Hong, Shyamsunder
Erramilli, Pritiraj Mohanty
|
Detection of the melanoma biomarker TROY using silicon nanowire
field-effect transistors
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Antibody-functionalized silicon nanowire field-effect transistors have been
shown to exhibit excellent analyte detection sensitivity enabling sensing of
analyte concentrations at levels not readily accessible by other methods. One
example where accurate measurement of small concentrations is necessary is
detection of serum biomarkers, such as the recently discovered tumor necrosis
factor receptor superfamily member TROY (TNFRSF19), which may serve as a
biomarker for melanoma. TROY is normally present only in the brain, but it is
aberrantly expressed in primary and metastatic melanoma cells and shed into the
surrounding environment. In this study, we show the detection of different
concentrations of TROY in buffer solution using top-down fabricated silicon
nanowires. We demonstrate the selectivity of our sensors by comparing the
signal with that obtained from bovine serum albumin in buffer solution. Both
the signal size and the reaction kinetics serve to distinguish the two signals.
Using a fast-mixing two-compartment reaction model, we are able to extract the
association and dissociation rate constants for the reaction of TROY with the
antibody immobilized on the sensor surface.
|
[
{
"created": "Sun, 29 Dec 2013 13:04:05 GMT",
"version": "v1"
}
] |
2013-12-31
|
[
[
"Maedler",
"Carsten",
""
],
[
"Kim",
"Daniel",
""
],
[
"Spanjaard",
"Remco A.",
""
],
[
"Hong",
"Mi",
""
],
[
"Erramilli",
"Shyamsunder",
""
],
[
"Mohanty",
"Pritiraj",
""
]
] |
Antibody-functionalized silicon nanowire field-effect transistors have been shown to exhibit excellent analyte detection sensitivity enabling sensing of analyte concentrations at levels not readily accessible by other methods. One example where accurate measurement of small concentrations is necessary is detection of serum biomarkers, such as the recently discovered tumor necrosis factor receptor superfamily member TROY (TNFRSF19), which may serve as a biomarker for melanoma. TROY is normally only present in brain but it is aberrantly expressed in primary and metastatic melanoma cells and shed into the surrounding environment. In this study, we show the detection of different concentrations of TROY in buffer solution using top-down fabricated silicon nanowires. We demonstrate the selectivity of our sensors by comparing the signal with that obtained from bovine serum albumin in buffer solution. Both the signal size and the reaction kinetics serve to distinguish the two signals. Using a fast-mixing two-compartment reaction model, we are able to extract the association and dissociation rate constants for the reaction of TROY with the antibody immobilized on the sensor surface.
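Under the fast-mixing assumption, the surface reaction reduces to pseudo-first-order kinetics, where the observed association rate obeys k_obs = k_on*C + k_off; fitting k_obs against analyte concentration C then yields both rate constants. A minimal sketch with made-up rate constants and concentrations (not values from the study):

```python
import numpy as np

# hypothetical rate constants, for illustration only
k_on, k_off = 2.0e5, 1.0e-3            # 1/(M s), 1/s
concs = np.array([1e-9, 5e-9, 2e-8])   # analyte concentrations (M)

# pseudo-first-order observed association rate at each concentration
k_obs = k_on * concs + k_off

# recover k_on (slope) and k_off (intercept) from a linear fit
slope, intercept = np.polyfit(concs, k_obs, 1)
```

In practice k_obs would come from fitting exponentials to the measured association transients rather than being computed directly.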
|
q-bio/0402031
|
Mauro Copelli
|
M. Copelli, M. H. R. Tragtenberg and O. Kinouchi
|
Stability diagrams for bursting neurons modeled by three-variable maps
|
7 pages, 3 figures, accepted for publication
|
Physica A, 342, 263-269 (2004)
|
10.1016/j.physa.2004.04.087
| null |
q-bio.NC cond-mat.dis-nn nlin.CD physics.bio-ph
| null |
We study a simple map as a minimal model of excitable cells. The map has two
fast variables which mimic the behavior of class I neurons, undergoing a
sub-critical Hopf bifurcation. Adding a third slow variable allows the system
to present bursts and other interesting biological behaviors. Bifurcation lines
which locate the excitability region are obtained for different planes in
parameter space.
|
[
{
"created": "Fri, 13 Feb 2004 22:51:17 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Copelli",
"M.",
""
],
[
"Tragtenberg",
"M. H. R.",
""
],
[
"Kinouchi",
"O.",
""
]
] |
We study a simple map as a minimal model of excitable cells. The map has two fast variables which mimic the behavior of class I neurons, undergoing a sub-critical Hopf bifurcation. Adding a third slow variable allows the system to present bursts and other interesting biological behaviors. Bifurcation lines which locate the excitability region are obtained for different planes in parameter space.
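A three-variable map of the family studied here (the KTz map, with two fast variables and one slow variable) can be iterated in a few lines; the parameter values below are illustrative, not the ones scanned in the paper's bifurcation diagrams:

```python
import math

def ktz_step(x, y, z, K=0.6, T=0.35, delta=0.001, lam=0.001, xR=-0.5, H=0.0):
    """One iteration: fast variables (x, y) plus slow variable z."""
    x_new = math.tanh((x - K * y + z + H) / T)
    z_new = (1.0 - delta) * z - lam * (x - xR)
    return x_new, x, z_new   # y(t+1) = x(t)

x, y, z = -0.5, -0.5, 0.0
traj = []
for _ in range(5000):
    x, y, z = ktz_step(x, y, z)
    traj.append(x)
```

Because the fast variable passes through tanh, the trajectory is confined to [-1, 1]; bursting appears for suitable (delta, lam, xR) combinations.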
|
1903.04921
|
Zi Chen
|
Catalina-Paula Spatarelu, Hao Zhang, Dung Trung Nguyen, Xinyue Han,
Ruchuan Liu, Qiaohang Guo, Jacob Notbohm, Jing Fan, Liyu Liu, and Zi Chen
|
Biomechanics of Collective Cell Migration in Cancer Progression --
Experimental and Computational Methods
| null | null | null | null |
q-bio.CB physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cell migration is essential for regulating many biological processes in
physiological or pathological conditions, including embryonic development and
cancer invasion. In vitro and in silico studies suggest that collective cell
migration is associated with some biomechanical particularities, such as
restructuring of extracellular matrix, stress and force distribution profiles,
and reorganization of cytoskeleton. Therefore, the phenomenon could be
understood by an in-depth study of cells' behavior determinants, including but
not limited to mechanical cues from the environment and from fellow travelers.
This review article aims to cover the recent development of experimental and
computational methods for studying the biomechanics of collective cell
migration during cancer progression and invasion. We also summarize the tested
hypotheses regarding the mechanism underlying collective cell migration enabled
by these methods. Together, the paper provides a broad overview of the methods
and tools currently available to unravel the biophysical mechanisms pertinent
to cell collective migration, as well as providing perspectives on future
development towards eventually deciphering the key mechanisms behind the most
lethal feature of cancer.
|
[
{
"created": "Tue, 12 Mar 2019 13:55:09 GMT",
"version": "v1"
}
] |
2019-03-13
|
[
[
"Spatarelu",
"Catalina-Paula",
""
],
[
"Zhang",
"Hao",
""
],
[
"Nguyen",
"Dung Trung",
""
],
[
"Han",
"Xinyue",
""
],
[
"Liu",
"Ruchuan",
""
],
[
"Guo",
"Qiaohang",
""
],
[
"Notbohm",
"Jacob",
""
],
[
"Fan",
"Jing",
""
],
[
"Liu",
"Liyu",
""
],
[
"Chen",
"Zi",
""
]
] |
Cell migration is essential for regulating many biological processes in physiological or pathological conditions, including embryonic development and cancer invasion. In vitro and in silico studies suggest that collective cell migration is associated with some biomechanical particularities, such as restructuring of extracellular matrix, stress and force distribution profiles, and reorganization of cytoskeleton. Therefore, the phenomenon could be understood by an in-depth study of cells' behavior determinants, including but not limited to mechanical cues from the environment and from fellow travelers. This review article aims to cover the recent development of experimental and computational methods for studying the biomechanics of collective cell migration during cancer progression and invasion. We also summarize the tested hypotheses regarding the mechanism underlying collective cell migration enabled by these methods. Together, the paper provides a broad overview of the methods and tools currently available to unravel the biophysical mechanisms pertinent to cell collective migration, as well as providing perspectives on future development towards eventually deciphering the key mechanisms behind the most lethal feature of cancer.
|
0712.1970
|
Julien Dervaux
|
Julien Dervaux, Martine Ben Amar
|
Morphogenesis of growing soft tissues
|
4 pages, 3 figures
| null |
10.1103/PhysRevLett.101.068101
| null |
q-bio.TO
| null |
Recently, much attention has been given to a noteworthy property of some soft
tissues: their ability to grow. Many attempts have been made to model this
behaviour in biology, chemistry and physics. Using the theory of finite
elasticity, Rodriguez has postulated a multiplicative decomposition of the
geometric deformation gradient into a growth-induced part and an elastic one
needed to ensure compatibility of the body. In order to fully explore the
consequences of this hypothesis, the equations describing thin elastic objects
under finite growth are derived. Under appropriate scaling assumptions for the
growth rates, the proposed model is of the Foppl-von Karman type. As an
illustration, the circumferential growth of a free hyperelastic disk is
studied.
|
[
{
"created": "Wed, 12 Dec 2007 16:21:02 GMT",
"version": "v1"
}
] |
2009-11-13
|
[
[
"Dervaux",
"Julien",
""
],
[
"Amar",
"Martine Ben",
""
]
] |
Recently, much attention has been given to a noteworthy property of some soft tissues: their ability to grow. Many attempts have been made to model this behaviour in biology, chemistry and physics. Using the theory of finite elasticity, Rodriguez has postulated a multiplicative decomposition of the geometric deformation gradient into a growth-induced part and an elastic one needed to ensure compatibility of the body. In order to fully explore the consequences of this hypothesis, the equations describing thin elastic objects under finite growth are derived. Under appropriate scaling assumptions for the growth rates, the proposed model is of the Foppl-von Karman type. As an illustration, the circumferential growth of a free hyperelastic disk is studied.
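The multiplicative decomposition referred to above (due to Rodriguez and co-workers) splits the geometric deformation gradient into a growth part and an elastic part:

```latex
\mathbf{F} \;=\; \frac{\partial \mathbf{x}}{\partial \mathbf{X}} \;=\; \mathbf{A}\,\mathbf{G},
```

where \(\mathbf{G}\) describes the (generally incompatible) growth and \(\mathbf{A}\) is the elastic deformation that restores compatibility of the body.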
|
1606.03071
|
Umut G\"u\c{c}l\"u
|
Umut G\"u\c{c}l\"u, Marcel A. J. van Gerven
|
Modeling the dynamics of human brain activity with recurrent neural
networks
| null | null |
10.3389/fncom.2017.00007
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Encoding models are used for predicting brain activity in response to sensory
stimuli with the objective of elucidating how sensory information is
represented in the brain. Encoding models typically comprise a nonlinear
transformation of stimuli to features (feature model) and a linear
transformation of features to responses (response model). While there has been
extensive work on developing better feature models, the work on developing
better response models has been rather limited. Here, we investigate the extent
to which recurrent neural network models can use their internal memories for
nonlinear processing of arbitrary feature sequences to predict feature-evoked
response sequences as measured by functional magnetic resonance imaging. We
show that the proposed recurrent neural network models can significantly
outperform established response models by accurately estimating long-term
dependencies that drive hemodynamic responses. The results open a new window
into modeling the dynamics of brain activity in response to sensory stimuli.
|
[
{
"created": "Thu, 9 Jun 2016 19:22:13 GMT",
"version": "v1"
}
] |
2017-03-13
|
[
[
"Güçlü",
"Umut",
""
],
[
"van Gerven",
"Marcel A. J.",
""
]
] |
Encoding models are used for predicting brain activity in response to sensory stimuli with the objective of elucidating how sensory information is represented in the brain. Encoding models typically comprise a nonlinear transformation of stimuli to features (feature model) and a linear transformation of features to responses (response model). While there has been extensive work on developing better feature models, the work on developing better response models has been rather limited. Here, we investigate the extent to which recurrent neural network models can use their internal memories for nonlinear processing of arbitrary feature sequences to predict feature-evoked response sequences as measured by functional magnetic resonance imaging. We show that the proposed recurrent neural network models can significantly outperform established response models by accurately estimating long-term dependencies that drive hemodynamic responses. The results open a new window into modeling the dynamics of brain activity in response to sensory stimuli.
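The two-stage encoding model (a nonlinear feature model followed by a response model) can be sketched with an Elman-style recurrent response model; all sizes and weights below are arbitrary placeholders, not the trained networks from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_feat, n_hid, n_vox = 50, 8, 16, 4   # hypothetical dimensions

# feature sequence (the output of some feature model) and random RNN weights
feats = rng.normal(size=(T, n_feat))
Wx = rng.normal(scale=0.3, size=(n_hid, n_feat))
Wh = rng.normal(scale=0.3, size=(n_hid, n_hid))
Wo = rng.normal(scale=0.3, size=(n_vox, n_hid))

# Elman-style recurrence: the hidden state carries long-term context,
# which is what lets the response model capture hemodynamic lags
h = np.zeros(n_hid)
pred = np.empty((T, n_vox))
for t in range(T):
    h = np.tanh(Wx @ feats[t] + Wh @ h)
    pred[t] = Wo @ h                      # predicted response per voxel
```

A linear response model would instead map feats[t] to pred[t] with no hidden state, which is exactly the limitation the paper addresses.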
|
q-bio/0309006
|
Sven Bilke
|
S. Bilke, T. Breslin, M. Sigvardsson
|
Probabilistic estimation of microarray data reliability and underlying
gene expression
|
11 pages, 4 figures
|
BMC Bioinformatics 4:40 (2003)
| null |
LU TP 02-14
|
q-bio.QM
| null |
Background: The availability of high throughput methods for measurement of
mRNA concentrations makes the reliability of conclusions drawn from the data
and global quality control of samples and hybridization important issues. We
address these issues by an information theoretic approach, applied to
discretized expression values in replicated gene expression data.
Results: Our approach yields a quantitative measure of two important
parameter classes: First, the probability $P(\sigma | S)$ that a gene is in the
biological state $\sigma$ in a certain variety, given its observed expression
$S$ in the samples of that variety. Second, sample specific error probabilities
which serve as consistency indicators of the measured samples of each variety.
The method and its limitations are tested on gene expression data for
developing murine B-cells and a $t$-test is used as reference. On a set of
known genes it performs better than the $t$-test despite the crude
discretization into only two expression levels. The consistency indicators,
i.e. the error probabilities, correlate well with variations in the biological
material and thus prove efficient.
Conclusions: The proposed method is effective in determining differential
gene expression and sample reliability in replicated microarray data. Already
at two discrete expression levels in each sample, it gives a good explanation
of the data and is comparable to standard techniques.
|
[
{
"created": "Thu, 18 Sep 2003 15:22:50 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Bilke",
"S.",
""
],
[
"Breslin",
"T.",
""
],
[
"Sigvardsson",
"M.",
""
]
] |
Background: The availability of high throughput methods for measurement of mRNA concentrations makes the reliability of conclusions drawn from the data and global quality control of samples and hybridization important issues. We address these issues by an information theoretic approach, applied to discretized expression values in replicated gene expression data. Results: Our approach yields a quantitative measure of two important parameter classes: First, the probability $P(\sigma | S)$ that a gene is in the biological state $\sigma$ in a certain variety, given its observed expression $S$ in the samples of that variety. Second, sample specific error probabilities which serve as consistency indicators of the measured samples of each variety. The method and its limitations are tested on gene expression data for developing murine B-cells and a $t$-test is used as reference. On a set of known genes it performs better than the $t$-test despite the crude discretization into only two expression levels. The consistency indicators, i.e. the error probabilities, correlate well with variations in the biological material and thus prove efficient. Conclusions: The proposed method is effective in determining differential gene expression and sample reliability in replicated microarray data. Already at two discrete expression levels in each sample, it gives a good explanation of the data and is comparable to standard techniques.
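In the spirit of the paper's discretized, replicated setup, the posterior P(sigma | S) follows from Bayes' rule once a per-sample error probability is fixed; the error rate, prior, and observed pattern below are invented for illustration:

```python
# hypothetical setup: a gene's expression in 4 replicate samples,
# discretized to two levels; eps is a per-sample error probability
eps = 0.1
observed = [1, 1, 0, 1]          # S: discretized replicate values

def likelihood(state, obs, eps):
    # P(S | sigma): each replicate independently flips with probability eps
    p = 1.0
    for s in obs:
        p *= (1 - eps) if s == state else eps
    return p

prior = {0: 0.5, 1: 0.5}
num = {st: prior[st] * likelihood(st, observed, eps) for st in (0, 1)}
z = sum(num.values())
posterior = {st: num[st] / z for st in (0, 1)}   # P(sigma | S)
```

With three of four replicates agreeing, the posterior already strongly favors state 1 despite the crude two-level discretization.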
|
2111.14159
|
Uria Mor
|
Uria Mor, Yotam Cohen, Rafael Valdes-Mas, Denise Kviatcovsky, Eran
Elinav, Haim Avron
|
Dimensionality Reduction of Longitudinal 'Omics Data using Modern Tensor
Factorization
| null | null |
10.1371/journal.pcbi.1010212
| null |
q-bio.QM cs.CE cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Precision medicine is a clinical approach for disease prevention, detection
and treatment, which considers each individual's genetic background,
environment and lifestyle. The development of this tailored avenue has been
driven by the increased availability of omics methods, large cohorts of
temporal samples, and their integration with clinical data. Despite the immense
progress, existing computational methods for data analysis fail to provide
appropriate solutions for this complex, high-dimensional and longitudinal data.
In this work we have developed a new method termed TCAM, a dimensionality
reduction technique for multi-way data, that overcomes major limitations when
doing trajectory analysis of longitudinal omics data. Using real-world data, we
show that TCAM outperforms traditional methods, as well as state-of-the-art
tensor-based approaches for longitudinal microbiome data analysis. Moreover, we
demonstrate the versatility of TCAM by applying it to several different omics
datasets, and its applicability as a drop-in replacement within
straightforward ML tasks.
|
[
{
"created": "Sun, 28 Nov 2021 14:50:14 GMT",
"version": "v1"
}
] |
2022-07-26
|
[
[
"Mor",
"Uria",
""
],
[
"Cohen",
"Yotam",
""
],
[
"Valdes-Mas",
"Rafael",
""
],
[
"Kviatcovsky",
"Denise",
""
],
[
"Elinav",
"Eran",
""
],
[
"Avron",
"Haim",
""
]
] |
Precision medicine is a clinical approach for disease prevention, detection and treatment, which considers each individual's genetic background, environment and lifestyle. The development of this tailored avenue has been driven by the increased availability of omics methods, large cohorts of temporal samples, and their integration with clinical data. Despite the immense progress, existing computational methods for data analysis fail to provide appropriate solutions for this complex, high-dimensional and longitudinal data. In this work we have developed a new method termed TCAM, a dimensionality reduction technique for multi-way data, that overcomes major limitations when doing trajectory analysis of longitudinal omics data. Using real-world data, we show that TCAM outperforms traditional methods, as well as state-of-the-art tensor-based approaches for longitudinal microbiome data analysis. Moreover, we demonstrate the versatility of TCAM by applying it to several different omics datasets, and its applicability as a drop-in replacement within straightforward ML tasks.
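TCAM itself is not specified in the abstract, but the kind of baseline it is compared against, unfolding (matricizing) a subjects x timepoints x features tensor and applying ordinary PCA, can be sketched as follows (synthetic data; the shapes are assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
# hypothetical longitudinal omics tensor: subjects x timepoints x features
X = rng.normal(size=(20, 8, 50))

# naive baseline (not TCAM itself): unfold along the subject mode and
# run PCA via SVD to get per-subject trajectory scores
Xu = X.reshape(20, -1)               # 20 x 400 unfolding
Xc = Xu - Xu.mean(axis=0)            # center across subjects
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = U[:, :2] * s[:2]            # first two components per subject
```

Unfolding discards the multi-way structure of the time mode, which is the limitation tensor factorizations such as TCAM are designed to avoid.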
|
1307.4141
|
Naoki Masuda Dr.
|
Naoki Masuda
|
Evolution via imitation among like-minded individuals
|
3 figures
|
Journal of Theoretical Biology, 349, 100-108 (2014)
|
10.1016/j.jtbi.2014.02.003
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In social situations with which evolutionary game is concerned, individuals
are considered to be heterogeneous in various aspects. In particular, they may
differently perceive the same outcome of the game owing to heterogeneity in
idiosyncratic preferences, fighting abilities, and positions in a social
network. In such a population, an individual may imitate successful and similar
others, where similarity refers to that in the idiosyncratic fitness function.
I propose an evolutionary game model with two subpopulations on the basis of
multipopulation replicator dynamics to describe such a situation. In the
proposed model, pairs of players are involved in a two-person game as a
well-mixed population, and imitation occurs within subpopulations in each of
which players have the same payoff matrix. It is shown that the model does not
allow any internal equilibrium, so that its dynamics differ from those of
other related models such as the bimatrix game. In particular, even a slight
difference between the payoff matrices of the two subpopulations can cause
opposite strategies to be stably selected in the two subpopulations in the
snowdrift and coordination games.
|
[
{
"created": "Tue, 16 Jul 2013 01:20:47 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Mar 2014 17:20:32 GMT",
"version": "v2"
}
] |
2014-03-07
|
[
[
"Masuda",
"Naoki",
""
]
] |
In social situations with which evolutionary game is concerned, individuals are considered to be heterogeneous in various aspects. In particular, they may differently perceive the same outcome of the game owing to heterogeneity in idiosyncratic preferences, fighting abilities, and positions in a social network. In such a population, an individual may imitate successful and similar others, where similarity refers to that in the idiosyncratic fitness function. I propose an evolutionary game model with two subpopulations on the basis of multipopulation replicator dynamics to describe such a situation. In the proposed model, pairs of players are involved in a two-person game as a well-mixed population, and imitation occurs within subpopulations in each of which players have the same payoff matrix. It is shown that the model does not allow any internal equilibrium, so that its dynamics differ from those of other related models such as the bimatrix game. In particular, even a slight difference between the payoff matrices of the two subpopulations can cause opposite strategies to be stably selected in the two subpopulations in the snowdrift and coordination games.
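The multipopulation replicator dynamics described above, with games played against the whole well-mixed population but imitation only within each subpopulation, can be integrated numerically; the two slightly different payoff matrices below are illustrative snowdrift-like values, not ones taken from the paper:

```python
import numpy as np

# payoff matrices for the two subpopulations (illustrative values)
A1 = np.array([[1.0, 0.6], [1.2, 0.0]])
A2 = np.array([[1.0, 0.5], [1.3, 0.0]])

def step(x1, x2, dt=0.01):
    # x1, x2: fraction playing strategy 1 in each subpopulation;
    # opponents are drawn from the whole well-mixed population
    mean1 = 0.5 * (x1 + x2)
    xbar = np.array([mean1, 1.0 - mean1])
    f1 = A1 @ xbar          # payoff to each pure strategy, subpop 1
    f2 = A2 @ xbar          # payoff to each pure strategy, subpop 2
    dx1 = x1 * (1 - x1) * (f1[0] - f1[1])   # imitation within subpop 1
    dx2 = x2 * (1 - x2) * (f2[0] - f2[1])   # imitation within subpop 2
    return x1 + dt * dx1, x2 + dt * dx2

x1, x2 = 0.3, 0.7
for _ in range(20000):
    x1, x2 = step(x1, x2)
```

Scanning initial conditions and payoff perturbations in this sketch is one way to observe the boundary (non-internal) equilibria the paper characterizes.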
|
1203.4771
|
Jens Christian Claussen
|
Markus Sch\"utt and Jens Christian Claussen
|
Desynchronizing effect of high-frequency stimulation in a generic
cortical network model
|
9 pages, figs included. Accepted for publication in Cognitive
Neurodynamics
|
Cognitive Neurodynamics 6 (4), 343-351 (2012)
|
10.1007/s11571-012-9199-8
| null |
q-bio.NC cond-mat.dis-nn nlin.CD physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Transcranial Electrical Stimulation (TCES) and Deep Brain Stimulation (DBS)
are two different applications of electrical current to the brain used in
different areas of medicine. Both have a similar frequency dependence of their
efficiency, with the most pronounced effects around 100Hz. We apply
superthreshold electrical stimulation, specifically depolarizing DC current,
interrupted at different frequencies, to a simple model of a population of
cortical neurons which uses phenomenological descriptions of neurons by
Izhikevich and synaptic connections on a similar level of sophistication. With
this model, we are able to reproduce the optimal desynchronization around
100Hz, as well as to predict the full frequency dependence of the efficiency of
desynchronization, and thereby to give a possible explanation for the action
mechanism of TCES.
|
[
{
"created": "Wed, 21 Mar 2012 16:16:30 GMT",
"version": "v1"
}
] |
2019-07-15
|
[
[
"Schütt",
"Markus",
""
],
[
"Claussen",
"Jens Christian",
""
]
] |
Transcranial Electrical Stimulation (TCES) and Deep Brain Stimulation (DBS) are two different applications of electrical current to the brain used in different areas of medicine. Both have a similar frequency dependence of their efficiency, with the most pronounced effects around 100Hz. We apply superthreshold electrical stimulation, specifically depolarizing DC current, interrupted at different frequencies, to a simple model of a population of cortical neurons which uses phenomenological descriptions of neurons by Izhikevich and synaptic connections on a similar level of sophistication. With this model, we are able to reproduce the optimal desynchronization around 100Hz, as well as to predict the full frequency dependence of the efficiency of desynchronization, and thereby to give a possible explanation for the action mechanism of TCES.
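A single Izhikevich neuron driven by depolarizing DC current interrupted at a given frequency is easy to simulate; the regular-spiking parameters (a, b, c, d) are Izhikevich's standard values, while the drive amplitude and the 50% duty cycle are assumptions for illustration:

```python
def simulate(freq_hz, I_on=10.0, t_max_ms=1000.0, dt=0.5):
    """Regular-spiking Izhikevich neuron, DC drive with 50% duty cycle."""
    a, b, c, d = 0.02, 0.2, -65.0, 8.0
    v, u = -65.0, b * -65.0
    period = 1000.0 / freq_hz
    spikes, t = 0, 0.0
    while t < t_max_ms:
        # depolarizing DC, on for the first half of each period
        I = I_on if (t % period) < period / 2.0 else 0.0
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike: reset membrane and bump adaptation
            v = c
            u += d
            spikes += 1
        t += dt
    return spikes

n_spikes = simulate(100.0)     # stimulation interrupted at 100 Hz
```

Sweeping freq_hz and measuring synchrony across a coupled population, rather than spike counts of one cell, is the actual experiment of the paper.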
|
2003.11094
|
Borko D. Stosic
|
Borko Stosic
|
Phenomenological analysis of the 2020 COVID-19 outbreak dynamics
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In the wake of the COVID-19 virus outbreak, a brief phenomenological
(descriptive, comparative) analysis of the dynamics of the disease spread among
different countries is presented. Results indicate that the infection spread
dynamics is currently the most pronounced in the USA (confirmed cases are
currently doubling every 2.16 days, with a decreasing doubling time tendency),
while other countries with the most confirmed cases show different values, and
tendencies. The reported number of deaths currently doubles every 2.28 days
in Germany, 2.56 days in France, 2.57 days in Switzerland, 2.59 days in France,
and 2.62 days in the USA, while only France and the USA are currently exhibiting
further acceleration (diminishing doubling time).
|
[
{
"created": "Tue, 24 Mar 2020 19:59:17 GMT",
"version": "v1"
}
] |
2020-03-26
|
[
[
"Stosic",
"Borko",
""
]
] |
In the wake of the COVID-19 virus outbreak, a brief phenomenological (descriptive, comparative) analysis of the dynamics of the disease spread among different countries is presented. Results indicate that the infection spread dynamics is currently the most pronounced in the USA (confirmed cases are currently doubling every 2.16 days, with a decreasing doubling time tendency), while other countries with the most confirmed cases show different values, and tendencies. The reported number of deaths currently doubles every 2.28 days in Germany, 2.56 days in France, 2.57 days in Switzerland, 2.59 days in France, and 2.62 days in the USA, while only France and the USA are currently exhibiting further acceleration (diminishing doubling time).
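The doubling times quoted above follow from a log-linear fit: if cumulative counts grow as exp(r t), then Td = ln 2 / r. A sketch on made-up counts (chosen to roughly double every ~2.2 days, not real case data):

```python
import math

# hypothetical daily cumulative case counts
cases = [100, 138, 190, 262, 362, 500]

# least-squares slope of log(cases) vs day gives the growth rate r
n = len(cases)
xs = range(n)
ys = [math.log(c) for c in cases]
xm = sum(xs) / n
ym = sum(ys) / n
r = sum((x - xm) * (y - ym) for x, y in zip(xs, ys)) \
    / sum((x - xm) ** 2 for x in xs)

td = math.log(2) / r   # doubling time in days
```

Tracking td over a sliding window is what reveals the accelerating or decelerating tendencies the abstract reports.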
|
2010.02368
|
Delfim F. M. Torres
|
Cristiana J. Silva, Guillaume Cantin, Carla Cruz, Rui Fonseca-Pinto,
Rui Passadouro da Fonseca, Estevao Soares dos Santos, Delfim F. M. Torres
|
Complex network model for COVID-19: human behavior, pseudo-periodic
solutions and multiple epidemic waves
|
23 pages, 10 figures, submitted 5-Oct-2020
|
J. Math. Anal. Appl. 514 (2022), no. 2, Art. 125171, 25pp
|
10.1016/j.jmaa.2021.125171
| null |
q-bio.PE math.DS
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a mathematical model for the transmission dynamics of SARS-CoV-2
in a homogeneously mixing non-constant population, and generalize it to a model
where the parameters are given by piecewise constant functions. This allows us
to model the human behavior and the impact of public health policies on the
dynamics of the curve of active infected individuals during a COVID-19 epidemic
outbreak. After proving the existence and global asymptotic stability of the
disease-free and endemic equilibrium points of the model with constant
parameters, we consider a family of Cauchy problems, with piecewise constant
parameters, and prove the existence of pseudo-oscillations between a
neighborhood of the disease-free equilibrium and a neighborhood of the endemic
equilibrium, in a biologically feasible region. In the context of the COVID-19
pandemic, these pseudo-periodic solutions are related to the emergence of
epidemic waves. Then, to capture the impact of mobility in the dynamics of
COVID-19 epidemics, we propose a complex network with six distinct regions
based on COVID-19 real data from Portugal. We perform numerical simulations for
the complex network model, where the objective is to determine a topology that
minimizes the number of active infected individuals, and to identify
topologies that are likely to worsen the level of infection. We claim that this
methodology is a tool with enormous potential in the current pandemic context,
and can be applied in the management of outbreaks (in regional terms) but also
to manage the opening/closing of borders.
|
[
{
"created": "Mon, 5 Oct 2020 22:22:40 GMT",
"version": "v1"
}
] |
2022-06-20
|
[
[
"Silva",
"Cristiana J.",
""
],
[
"Cantin",
"Guillaume",
""
],
[
"Cruz",
"Carla",
""
],
[
"Fonseca-Pinto",
"Rui",
""
],
[
"da Fonseca",
"Rui Passadouro",
""
],
[
"Santos",
"Estevao Soares dos",
""
],
[
"Torres",
"Delfim F. M.",
""
]
] |
We propose a mathematical model for the transmission dynamics of SARS-CoV-2 in a homogeneously mixing non-constant population, and generalize it to a model where the parameters are given by piecewise constant functions. This allows us to model the human behavior and the impact of public health policies on the dynamics of the curve of active infected individuals during a COVID-19 epidemic outbreak. After proving the existence and global asymptotic stability of the disease-free and endemic equilibrium points of the model with constant parameters, we consider a family of Cauchy problems, with piecewise constant parameters, and prove the existence of pseudo-oscillations between a neighborhood of the disease-free equilibrium and a neighborhood of the endemic equilibrium, in a biologically feasible region. In the context of the COVID-19 pandemic, these pseudo-periodic solutions are related to the emergence of epidemic waves. Then, to capture the impact of mobility in the dynamics of COVID-19 epidemics, we propose a complex network with six distinct regions based on COVID-19 real data from Portugal. We perform numerical simulations for the complex network model, where the objective is to determine a topology that minimizes the number of active infected individuals, and to identify topologies that are likely to worsen the level of infection. We claim that this methodology is a tool with enormous potential in the current pandemic context, and can be applied in the management of outbreaks (in regional terms) but also to manage the opening/closing of borders.
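A model with piecewise-constant parameters of the kind described can be sketched with a single SIR compartment model whose transmission rate switches when a policy changes; all numbers below are illustrative, not the paper's fitted values:

```python
def beta(t):
    # piecewise-constant transmission rate: intervention after day 40
    return 0.4 if t < 40.0 else 0.15

gamma, N = 0.1, 1.0e6          # recovery rate (1/day), population size
S, I, R = N - 10.0, 10.0, 0.0  # initial compartments
dt, t, peak = 0.1, 0.0, 10.0
while t < 200.0:
    new_inf = beta(t) * S * I / N          # incidence
    dS, dI, dR = -new_inf, new_inf - gamma * I, gamma * I
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR
    t += dt
    peak = max(peak, I)                    # height of the epidemic wave
```

Alternating beta(t) between high and low values over time is what produces the successive waves the paper studies, there on a network of coupled regions rather than a single compartment model.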
|
2405.03370
|
Magnus Haraldson H{\o}ie
|
Magnus Haraldson H{\o}ie and Alissa Hummer and Tobias H. Olsen and
Broncio Aguilar-Sanjuan and Morten Nielsen and Charlotte M. Deane
|
AntiFold: Improved antibody structure-based design using inverse folding
| null | null | null | null |
q-bio.BM q-bio.QM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
The design and optimization of antibodies requires an intricate balance
across multiple properties. Protein inverse folding models, capable of
generating diverse sequences folding into the same structure, are promising
tools for maintaining structural integrity during antibody design. Here, we
present AntiFold, an antibody-specific inverse folding model, fine-tuned from
ESM-IF1 on solved and predicted antibody structures. AntiFold outperforms
existing inverse folding tools on sequence recovery across
complementarity-determining regions, with designed sequences showing high
structural similarity to their solved counterpart. It additionally achieves
stronger correlations when predicting antibody-antigen binding affinity in a
zero-shot manner, while performance is augmented further when including antigen
information. AntiFold assigns low probabilities to mutations that disrupt
antigen binding, synergizing with protein language model residue probabilities,
and demonstrates promise for guiding antibody optimization while retaining
structure-related properties. AntiFold is freely available under the BSD
3-Clause license as a web server at https://opig.stats.ox.ac.uk/webapps/antifold/
and as a pip-installable package at https://github.com/oxpig/AntiFold
|
[
{
"created": "Mon, 6 May 2024 11:23:47 GMT",
"version": "v1"
}
] |
2024-05-07
|
[
[
"Høie",
"Magnus Haraldson",
""
],
[
"Hummer",
"Alissa",
""
],
[
"Olsen",
"Tobias H.",
""
],
[
"Aguilar-Sanjuan",
"Broncio",
""
],
[
"Nielsen",
"Morten",
""
],
[
"Deane",
"Charlotte M.",
""
]
] |
The design and optimization of antibodies requires an intricate balance across multiple properties. Protein inverse folding models, capable of generating diverse sequences folding into the same structure, are promising tools for maintaining structural integrity during antibody design. Here, we present AntiFold, an antibody-specific inverse folding model, fine-tuned from ESM-IF1 on solved and predicted antibody structures. AntiFold outperforms existing inverse folding tools on sequence recovery across complementarity-determining regions, with designed sequences showing high structural similarity to their solved counterpart. It additionally achieves stronger correlations when predicting antibody-antigen binding affinity in a zero-shot manner, while performance is augmented further when including antigen information. AntiFold assigns low probabilities to mutations that disrupt antigen binding, synergizing with protein language model residue probabilities, and demonstrates promise for guiding antibody optimization while retaining structure-related properties. AntiFold is freely available under the BSD 3-Clause license as a web server at https://opig.stats.ox.ac.uk/webapps/antifold/ and as a pip-installable package at https://github.com/oxpig/AntiFold
|
1405.4357
|
John Canning prof
|
Md. Arafat Hossain, John Canning, Sandra Ast, Peter J. Rutledge, Teh
Li Yen, Abbas Jamalipour
|
Lab-in-a-phone: Smartphone-based Portable Fluorometer for pH Field
Measurements of Environmental Water
|
Submitted to IEEE Sensors Journal 21_2_2014
| null |
10.1109/JSEN.2014.2361651
| null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A novel portable fluorometer combining the attributes of a smartphone with an
easy-fit, simple, and compact sample chamber fabricated using 3D printing has
been developed for pH measurements of environmental water in the field. The
results were then compared directly with those obtained using conventional
electrode-based measurements.
|
[
{
"created": "Sat, 17 May 2014 06:40:13 GMT",
"version": "v1"
}
] |
2015-06-11
|
[
[
"Hossain",
"Md. Arafat",
""
],
[
"Canning",
"John",
""
],
[
"Ast",
"Sandra",
""
],
[
"Rutledge",
"Peter J.",
""
],
[
"Yen",
"Teh Li",
""
],
[
"Jamalipour",
"Abbas",
""
]
] |
A novel portable fluorometer combining the attributes of a smartphone with an easy-fit, simple, and compact sample chamber fabricated using 3D printing has been developed for pH measurements of environmental water in the field. The results were then compared directly with those obtained using conventional electrode-based measurements.
|
2404.19309
|
Noam Ben-Eliezer
|
Liad Doniza (1), Mitchel Lee (2), Tamar Blumenfeld Katzir (3), Moran
Artzi (4,5,6), Dafna Ben Bashat (4,5,6), Dvir Radunsky (3), Karin Shmueli
(2), Noam Ben-Eliezer (3,5,7) ((1) Department of Electrical Engineering, Tel
Aviv University, Tel Aviv, Israel, (2) Department of Medical Physics and
Biomedical Engineering, University College London, London, UK, (3) Department
of Biomedical Engineering, Tel Aviv University, Tel Aviv, Israel, (4) Sagol
Brain Institute, Tel Aviv Medical Center, Tel Aviv, Israel, (5) Sagol School
of Neuroscience, Tel Aviv University, Tel-Aviv, Israel, (6) Sackler Faculty
of Medicine, Tel Aviv University, Tel Aviv, Israel, (7) Center for Advanced
Imaging Innovation and Research (CAI2R), New-York University Langone Medical
Center, New York, NY, United States)
|
Noise propagation and MP-PCA image denoising for high-resolution
quantitative T2* and magnetic susceptibility mapping (QSM)
|
9 pages, 8 figures, 3 tables. It was accepted to be presented in a
peer-reviewed annual ISMRM meeting, which will be held in Singapore in May
2024
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Quantitative Susceptibility Mapping (QSM) is a technique for measuring
magnetic susceptibility of tissues, aiding in the detection of pathologies like
traumatic brain injury and multiple sclerosis by analyzing variations in
substances such as iron and calcium. Despite its clinical value, achieving
high-resolution QSM (voxel sizes < 1 mm3) reduces signal-to-noise ratio (SNR),
compromising diagnostic quality. To mitigate this, we applied the
Marchenko-Pastur Principal Component Analysis (MP-PCA) denoising technique on
T2* weighted data, to enhance the quality of R2*, T2*, and QSM maps. Denoising
was tested on a numerical phantom, healthy subjects, and patients with brain
metastases and sickle cell disease, demonstrating effective and robust
improvements across different scan settings. Further analysis examined noise
propagation in R2* and T2* values, revealing lower noise-related variations in
R2* values compared to T2* values which tended to be overestimated due to
noise. Reduced variability was observed in QSM values post denoising,
demonstrating MP-PCA's potential to improve the
|
[
{
"created": "Tue, 30 Apr 2024 07:28:14 GMT",
"version": "v1"
}
] |
2024-05-01
|
[
[
"Doniza",
"Liad",
""
],
[
"Lee",
"Mitchel",
""
],
[
"Katzir",
"Tamar Blumenfeld",
""
],
[
"Artzi",
"Moran",
""
],
[
"Bashat",
"Dafna Ben",
""
],
[
"Radunsky",
"Dvir",
""
],
[
"Shmueli",
"Karin",
""
],
[
"Ben-Eliezer",
"Noam",
""
]
] |
Quantitative Susceptibility Mapping (QSM) is a technique for measuring magnetic susceptibility of tissues, aiding in the detection of pathologies like traumatic brain injury and multiple sclerosis by analyzing variations in substances such as iron and calcium. Despite its clinical value, achieving high-resolution QSM (voxel sizes < 1 mm3) reduces signal-to-noise ratio (SNR), compromising diagnostic quality. To mitigate this, we applied the Marchenko-Pastur Principal Component Analysis (MP-PCA) denoising technique on T2* weighted data, to enhance the quality of R2*, T2*, and QSM maps. Denoising was tested on a numerical phantom, healthy subjects, and patients with brain metastases and sickle cell disease, demonstrating effective and robust improvements across different scan settings. Further analysis examined noise propagation in R2* and T2* values, revealing lower noise-related variations in R2* values compared to T2* values which tended to be overestimated due to noise. Reduced variability was observed in QSM values post denoising, demonstrating MP-PCA's potential to improve the
|
1504.07833
|
Ovidiu Radulescu
|
Ovidiu Radulescu, Satya Swarup Samal, Aur\'elien Naldi, Dima
Grigoriev, Andreas Weber
|
Symbolic dynamics of biochemical pathways as finite states machines
| null | null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We discuss the symbolic dynamics of biochemical networks with separate
timescales. We show that symbolic dynamics of monomolecular reaction networks
with separated rate constants can be described by deterministic, acyclic
automata with a number of states smaller than the number of biochemical
species. For nonlinear pathways, we propose a general approach to approximate
their dynamics by finite state machines working on the metastable states of the
network (long life states where the system has slow dynamics). For networks
with polynomial rate functions we propose to compute metastable states as
solutions of the tropical equilibration problem. Tropical equilibrations are
defined by the equality of at least two dominant monomials of opposite signs in
the differential equations of each dynamic variable. In algebraic geometry,
tropical equilibrations are tantamount to tropical prevarieties, that are
finite intersections of tropical hypersurfaces.
|
[
{
"created": "Wed, 29 Apr 2015 12:38:17 GMT",
"version": "v1"
},
{
"created": "Sun, 5 Jul 2015 20:25:22 GMT",
"version": "v2"
}
] |
2015-07-07
|
[
[
"Radulescu",
"Ovidiu",
""
],
[
"Samal",
"Satya Swarup",
""
],
[
"Naldi",
"Aurélien",
""
],
[
"Grigoriev",
"Dima",
""
],
[
"Weber",
"Andreas",
""
]
] |
We discuss the symbolic dynamics of biochemical networks with separate timescales. We show that symbolic dynamics of monomolecular reaction networks with separated rate constants can be described by deterministic, acyclic automata with a number of states smaller than the number of biochemical species. For nonlinear pathways, we propose a general approach to approximate their dynamics by finite state machines working on the metastable states of the network (long life states where the system has slow dynamics). For networks with polynomial rate functions we propose to compute metastable states as solutions of the tropical equilibration problem. Tropical equilibrations are defined by the equality of at least two dominant monomials of opposite signs in the differential equations of each dynamic variable. In algebraic geometry, tropical equilibrations are tantamount to tropical prevarieties, that are finite intersections of tropical hypersurfaces.
|
2310.09175
|
Kingsley Cox
|
Kingsley J.A. Cox and Paul R. Adams
|
Shedding light on social learning
|
11 pages 8 figures
| null | null | null |
q-bio.NC q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Culture involves the origination and transmission of ideas, but the
conditions in which culture can emerge and evolve are unclear. We constructed
and studied a highly simplified neural-network model of these processes. In
this model ideas originate by individual learning from the environment and are
transmitted by communication between individuals. Individuals (or "agents")
comprise a single neuron which receives structured data from the environment
via plastic synaptic connections. The data are generated in the simplest
possible way: linear mixing of independently fluctuating sources and the goal
of learning is to unmix the data. To make this problem tractable we assume that
at least one of the sources fluctuates in a non-Gaussian manner. Linear mixing
creates structure in the data, and agents attempt to learn (from the data and
possibly from other individuals) synaptic weights that will unmix, i.e., to
"understand" the agent's world. For a variety of reasons even this goal can be
difficult for a single agent to achieve; we studied one particular type of
difficulty (created by imperfection in synaptic plasticity), though our
conclusions should carry over to many other types of difficulty. We previously
studied whether a small population of communicating agents, learning from each
other, could more easily learn unmixing coefficients than isolated individuals,
learning only from their environment. We found, unsurprisingly, that if agents
learn indiscriminately from any other agent (whether or not they have learned
good solutions), communication does not enhance understanding. Here we extend
the model slightly, by allowing successful learners to be more effective
teachers, and find that now a population of agents can learn more effectively
than isolated individuals. We suggest that a key factor in the onset of culture
might be the development of selective learning.
|
[
{
"created": "Fri, 13 Oct 2023 15:09:44 GMT",
"version": "v1"
}
] |
2023-10-16
|
[
[
"Cox",
"Kingsley J. A.",
""
],
[
"Adams",
"Paul R.",
""
]
] |
Culture involves the origination and transmission of ideas, but the conditions in which culture can emerge and evolve are unclear. We constructed and studied a highly simplified neural-network model of these processes. In this model ideas originate by individual learning from the environment and are transmitted by communication between individuals. Individuals (or "agents") comprise a single neuron which receives structured data from the environment via plastic synaptic connections. The data are generated in the simplest possible way: linear mixing of independently fluctuating sources and the goal of learning is to unmix the data. To make this problem tractable we assume that at least one of the sources fluctuates in a non-Gaussian manner. Linear mixing creates structure in the data, and agents attempt to learn (from the data and possibly from other individuals) synaptic weights that will unmix, i.e., to "understand" the agent's world. For a variety of reasons even this goal can be difficult for a single agent to achieve; we studied one particular type of difficulty (created by imperfection in synaptic plasticity), though our conclusions should carry over to many other types of difficulty. We previously studied whether a small population of communicating agents, learning from each other, could more easily learn unmixing coefficients than isolated individuals, learning only from their environment. We found, unsurprisingly, that if agents learn indiscriminately from any other agent (whether or not they have learned good solutions), communication does not enhance understanding. Here we extend the model slightly, by allowing successful learners to be more effective teachers, and find that now a population of agents can learn more effectively than isolated individuals. We suggest that a key factor in the onset of culture might be the development of selective learning.
|
2011.05853
|
Robert Worden
|
Robert Worden
|
The Aggregator Model of Spatial Cognition
| null | null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Tracking the positions of objects in local space is a core function of animal
brains. We do not yet understand how it is done with limited neural resources.
The challenges of spatial cognition are discussed under the criteria: (a)
scaling of computational costs; (b) feature binding; (c) precise calculation of
spatial displacements; (d) fast learning of invariant patterns; and (e)
exploiting the strong Bayesian prior of object constancy. The leading current
models of spatial cognition are Hierarchical Bayesian models of vision, and
Deep Neural Nets. These are typically fully distributed models, which compute
using direct communication links between a set of modular knowledge sources,
and no other essential components. Their distributed nature leads to
difficulties with the criteria (a) - (e). I discuss an alternative model of
spatial cognition, which uses a single central position aggregator to store
estimated locations of each object or feature, and applies constraints on
locations in an iterative cycle between the aggregator and the knowledge
sources. This model has advantages in addressing the criteria (a) - (e). If
there is an aggregator in mammalian brains, there are reasons to believe that
it is in the thalamus. I outline a possible neural realisation of the
aggregator function in the thalamus.
|
[
{
"created": "Wed, 4 Nov 2020 12:22:59 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Nov 2020 18:01:40 GMT",
"version": "v2"
}
] |
2020-11-20
|
[
[
"Worden",
"Robert",
""
]
] |
Tracking the positions of objects in local space is a core function of animal brains. We do not yet understand how it is done with limited neural resources. The challenges of spatial cognition are discussed under the criteria: (a) scaling of computational costs; (b) feature binding; (c) precise calculation of spatial displacements; (d) fast learning of invariant patterns; and (e) exploiting the strong Bayesian prior of object constancy. The leading current models of spatial cognition are Hierarchical Bayesian models of vision, and Deep Neural Nets. These are typically fully distributed models, which compute using direct communication links between a set of modular knowledge sources, and no other essential components. Their distributed nature leads to difficulties with the criteria (a) - (e). I discuss an alternative model of spatial cognition, which uses a single central position aggregator to store estimated locations of each object or feature, and applies constraints on locations in an iterative cycle between the aggregator and the knowledge sources. This model has advantages in addressing the criteria (a) - (e). If there is an aggregator in mammalian brains, there are reasons to believe that it is in the thalamus. I outline a possible neural realisation of the aggregator function in the thalamus.
|
2306.09408
|
Zachary G. Nicolaou
|
Zachary G. Nicolaou, Schuyler B. Nicholson, Adilson E. Motter, and
Jason R. Green
|
Prevalence of multistability and nonstationarity in driven chemical
networks
|
12 pages, 4 figures
|
J. Chem. Phys. 158, 225101 (2023)
|
10.1063/5.0142589
| null |
q-bio.MN nlin.AO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
External flows of energy, entropy, and matter can cause sudden transitions in
the stability of biological and industrial systems, fundamentally altering
their dynamical function. How might we control and design these transitions in
chemical reaction networks? Here, we analyze transitions giving rise to complex
behavior in random reaction networks subject to external driving forces. In the
absence of driving, we characterize the uniqueness of the steady state and
identify the percolation of a giant connected component in these networks as
the number of reactions increases. When subject to chemical driving (influx and
outflux of chemical species), the steady state can undergo bifurcations,
leading to multistability or oscillatory dynamics. By quantifying the
prevalence of these bifurcations, we show how chemical driving and network
sparsity tend to promote the emergence of these complex dynamics and increased
rates of entropy production. We show that catalysis also plays an important
role in the emergence of complexity, strongly correlating with the prevalence
of bifurcations. Our results suggest that coupling a minimal number of chemical
signatures with external driving can lead to features present in biochemical
processes and abiogenesis.
|
[
{
"created": "Thu, 15 Jun 2023 18:00:02 GMT",
"version": "v1"
}
] |
2023-06-19
|
[
[
"Nicolaou",
"Zachary G.",
""
],
[
"Nicholson",
"Schuyler B.",
""
],
[
"Motter",
"Adilson E.",
""
],
[
"Green",
"Jason R.",
""
]
] |
External flows of energy, entropy, and matter can cause sudden transitions in the stability of biological and industrial systems, fundamentally altering their dynamical function. How might we control and design these transitions in chemical reaction networks? Here, we analyze transitions giving rise to complex behavior in random reaction networks subject to external driving forces. In the absence of driving, we characterize the uniqueness of the steady state and identify the percolation of a giant connected component in these networks as the number of reactions increases. When subject to chemical driving (influx and outflux of chemical species), the steady state can undergo bifurcations, leading to multistability or oscillatory dynamics. By quantifying the prevalence of these bifurcations, we show how chemical driving and network sparsity tend to promote the emergence of these complex dynamics and increased rates of entropy production. We show that catalysis also plays an important role in the emergence of complexity, strongly correlating with the prevalence of bifurcations. Our results suggest that coupling a minimal number of chemical signatures with external driving can lead to features present in biochemical processes and abiogenesis.
|
2207.04568
|
Tom Chou
|
Xiangting Li and Tom Chou
|
Stochastic dynamics and ribosome-RNAP interactions in
Transcription-Translation Coupling
|
Submitted to Biophysical Journal. 23 pages, 11 figures
| null |
10.1016/j.bpj.2022.09.041
| null |
q-bio.SC cond-mat.stat-mech q-bio.BM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Under certain cellular conditions, transcription and mRNA translation in
prokaryotes appear to be "coupled," in which the formation of mRNA transcript
and production of its associated protein are temporally correlated. Such
transcription-translation coupling (TTC) has been invoked as a mechanism that
speeds up the overall process, provides protection during the transcription,
and/or regulates the timing of transcript and protein formation. What molecular
mechanisms underlie ribosome-RNAP coupling and how they can perform these
functions have not been explicitly modeled. We develop and analyze a
continuous-time stochastic model that incorporates ribosome and RNAP elongation
rates, initiation and termination rates, RNAP pausing, and direct ribosome and
RNAP interactions (exclusion and binding). Our model predicts how distributions
of delay times depend on these molecular features of transcription and
translation. We also propose additional measures for TTC: a direct
ribosome-RNAP binding probability and the fraction of time the
translation-transcription process is "protected" from attack by
transcription-terminating proteins. These metrics quantify different aspects of
TTC and differentially depend on parameters of known molecular processes. We
use our metrics to reveal how and when our model can exhibit either
acceleration or deceleration of transcription, as well as protection from
termination. Our detailed mechanistic model provides a basis for designing new
experimental assays that can better elucidate the mechanisms of TTC.
|
[
{
"created": "Sun, 10 Jul 2022 23:55:46 GMT",
"version": "v1"
}
] |
2023-01-18
|
[
[
"Li",
"Xiangting",
""
],
[
"Chou",
"Tom",
""
]
] |
Under certain cellular conditions, transcription and mRNA translation in prokaryotes appear to be "coupled," in which the formation of mRNA transcript and production of its associated protein are temporally correlated. Such transcription-translation coupling (TTC) has been invoked as a mechanism that speeds up the overall process, provides protection during the transcription, and/or regulates the timing of transcript and protein formation. What molecular mechanisms underlie ribosome-RNAP coupling and how they can perform these functions have not been explicitly modeled. We develop and analyze a continuous-time stochastic model that incorporates ribosome and RNAP elongation rates, initiation and termination rates, RNAP pausing, and direct ribosome and RNAP interactions (exclusion and binding). Our model predicts how distributions of delay times depend on these molecular features of transcription and translation. We also propose additional measures for TTC: a direct ribosome-RNAP binding probability and the fraction of time the translation-transcription process is "protected" from attack by transcription-terminating proteins. These metrics quantify different aspects of TTC and differentially depend on parameters of known molecular processes. We use our metrics to reveal how and when our model can exhibit either acceleration or deceleration of transcription, as well as protection from termination. Our detailed mechanistic model provides a basis for designing new experimental assays that can better elucidate the mechanisms of TTC.
|
2405.07771
|
Samuel Gornard-Laidet
|
Samuel Gornard (EGCE), Florence Mougel, Isabelle Germon, V\'eronique
Borday-Birraux, Pascaline Venon, Salimata Drabo, Laure Marie-Paule
Kaiser-Arnauld
|
Cellular dynamics of host-parasitoid interactions: Insights from the
encapsulation process in a partially resistant host
| null | null | null | null |
q-bio.CB
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cotesia typhae is an eastern African endoparasitoid braconid wasp that
targets the larval stage of the lepidopteran stem borer, Sesamia nonagrioides,
a maize crop pest in Europe. The French host population is partially resistant
to the Makindu strain of the wasp, allowing its development in only 40% of the
cases. Resistant larvae can encapsulate the parasitoid and survive the
infection. This interaction provides a very interesting frame for investigating
the impact of parasitism on host cellular resistance. We characterized the
parasitoid ovolarval development in a permissive host and studied the
encapsulation process in a resistant host by dissection and histological
sectioning compared to that of inert chromatography beads. We measured the
total hemocyte count in parasitized and bead-injected larvae over time to
monitor the magnitude of the immune reaction. Our results show that parasitism
of resistant hosts delayed encapsulation but did not affect immune abilities
towards inert beads. Moreover, while bead injection increased total hemocyte
count, it remained constant in resistant and permissive larvae. We conclude
that while Cotesia spp virulence factors are known to impair the host immune
system, our results suggest that passive evasion could also occur.
|
[
{
"created": "Mon, 13 May 2024 14:16:19 GMT",
"version": "v1"
}
] |
2024-05-14
|
[
[
"Gornard",
"Samuel",
"",
"EGCE"
],
[
"Mougel",
"Florence",
""
],
[
"Germon",
"Isabelle",
""
],
[
"Borday-Birraux",
"Véronique",
""
],
[
"Venon",
"Pascaline",
""
],
[
"Drabo",
"Salimata",
""
],
[
"Kaiser-Arnauld",
"Laure Marie-Paule",
""
]
] |
Cotesia typhae is an eastern African endoparasitoid braconid wasp that targets the larval stage of the lepidopteran stem borer, Sesamia nonagrioides, a maize crop pest in Europe. The French host population is partially resistant to the Makindu strain of the wasp, allowing its development in only 40% of the cases. Resistant larvae can encapsulate the parasitoid and survive the infection. This interaction provides a very interesting frame for investigating the impact of parasitism on host cellular resistance. We characterized the parasitoid ovolarval development in a permissive host and studied the encapsulation process in a resistant host by dissection and histological sectioning compared to that of inert chromatography beads. We measured the total hemocyte count in parasitized and bead-injected larvae over time to monitor the magnitude of the immune reaction. Our results show that parasitism of resistant hosts delayed encapsulation but did not affect immune abilities towards inert beads. Moreover, while bead injection increased total hemocyte count, it remained constant in resistant and permissive larvae. We conclude that while Cotesia spp virulence factors are known to impair the host immune system, our results suggest that passive evasion could also occur.
|
2006.15706
|
Rudy Kusdiantara
|
H. Susanto, V.R. Tjahjono, A. Hasan, M.F. Kasim, N. Nuraini, E.R.M.
Putri, R. Kusdiantara, H. Kurniawan
|
How many can you infect? Simple (and naive) methods of estimating the
reproduction number
|
http://journals.itb.ac.id/index.php/cbms/article/view/13808
|
COMMUN. BIOMATH. SCI., VOL. 3, NO. 1, 2020, PP. 28-36, 2020
| null | null |
q-bio.PE physics.soc-ph q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This is a pedagogical paper on estimating the number of people that can be
infected by one infectious person during an epidemic outbreak, known as the
reproduction number. Knowing the number is crucial for developing policy
responses. There are generally two types of such a number, i.e., basic and
effective (or instantaneous). While the basic reproduction number is the
average expected number of cases directly generated by one case in a population
where all individuals are susceptible, the effective reproduction number is the
number of cases generated in the current state of a population. In this paper, we exploit
the deterministic susceptible-infected-removed (SIR) model to estimate them
through three different numerical approximations. We apply the methods to the
pandemic COVID-19 in Italy to provide insights into the spread of the disease
in the country. We see that the effect of the national lockdown in slowing down
the disease exponential growth appeared about two weeks after the
implementation date. We also discuss available improvements to the simple (and
naive) methods that have been made by researchers in the field.
Authors of this paper are members of the SimcovID (Simulasi dan Pemodelan
COVID-19 Indonesia) collaboration.
|
[
{
"created": "Sun, 28 Jun 2020 20:45:29 GMT",
"version": "v1"
}
] |
2020-06-30
|
[
[
"Susanto",
"H.",
""
],
[
"Tjahjono",
"V. R.",
""
],
[
"Hasan",
"A.",
""
],
[
"Kasim",
"M. F.",
""
],
[
"Nuraini",
"N.",
""
],
[
"Putri",
"E. R. M.",
""
],
[
"Kusdiantara",
"R.",
""
],
[
"Kurniawan",
"H.",
""
]
] |
This is a pedagogical paper on estimating the number of people that can be infected by one infectious person during an epidemic outbreak, known as the reproduction number. Knowing the number is crucial for developing policy responses. There are generally two types of such a number, i.e., basic and effective (or instantaneous). While the basic reproduction number is the average expected number of cases directly generated by one case in a population where all individuals are susceptible, the effective reproduction number is the number of cases generated in the current state of a population. In this paper, we exploit the deterministic susceptible-infected-removed (SIR) model to estimate them through three different numerical approximations. We apply the methods to the pandemic COVID-19 in Italy to provide insights into the spread of the disease in the country. We see that the effect of the national lockdown in slowing down the disease exponential growth appeared about two weeks after the implementation date. We also discuss available improvements to the simple (and naive) methods that have been made by researchers in the field. Authors of this paper are members of the SimcovID (Simulasi dan Pemodelan COVID-19 Indonesia) collaboration.
|
q-bio/0512048
|
Marek Czachor
|
Diederik Aerts, Marek Czachor
|
Two-state dynamics for replicating two-strand systems
|
revtex, 3 eps figures
|
Open Systems & Inf. Dynamics 14, 397-410 (2007)
| null | null |
q-bio.PE nlin.PS quant-ph
| null |
We propose a formalism for describing two-strand systems of a DNA type by
means of soliton von Neumann equations, and illustrate how it works on a simple
example exactly solvable by a Darboux transformation. The main idea behind the
construction is the link between solutions of von Neumann equations and
entangled states of systems consisting of two subsystems evolving in time in
opposite directions. Such a time evolution has analogies in realistic DNA where
the polymerases move on leading and lagging strands in opposite directions.
|
[
{
"created": "Fri, 30 Dec 2005 16:15:13 GMT",
"version": "v1"
}
] |
2008-01-30
|
[
[
"Aerts",
"Diederik",
""
],
[
"Czachor",
"Marek",
""
]
] |
We propose a formalism for describing two-strand systems of a DNA type by means of soliton von Neumann equations, and illustrate how it works on a simple example exactly solvable by a Darboux transformation. The main idea behind the construction is the link between solutions of von Neumann equations and entangled states of systems consisting of two subsystems evolving in time in opposite directions. Such a time evolution has analogies in realistic DNA where the polymerases move on leading and lagging strands in opposite directions.
|
2106.02948
|
Yu Takagi
|
Yu Takagi, Laurence T. Hunt, Ryu Ohata, Hiroshi Imamizu, Jun-ichiro
Hirayama
|
Neural dSCA: demixing multimodal interaction among brain areas during
naturalistic experiments
| null | null | null | null |
q-bio.NC cs.AI cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Multi-regional interaction among neuronal populations underlies the brain's
processing of rich sensory information in our daily lives. Recent neuroscience
and neuroimaging studies have increasingly used naturalistic stimuli and
experimental design to identify such realistic sensory computation in the
brain. However, existing methods for cross-areal interaction analysis with
dimensionality reduction, such as reduced-rank regression and canonical
correlation analysis, have limited applicability and interpretability in
naturalistic settings because they usually do not appropriately 'demix' neural
interactions into those associated with different types of task parameters or
stimulus features (e.g., visual or audio). In this paper, we develop a new
method for cross-areal interaction analysis that uses the rich task or stimulus
parameters to reveal how and what types of information are shared by different
neural populations. The proposed neural demixed shared component analysis
combines existing dimensionality reduction methods with a practical neural
network implementation of functional analysis of variance with latent
variables, thereby efficiently demixing nonlinear effects of continuous and
multimodal stimuli. We also propose a simplifying alternative under the
assumptions of linear effects and unimodal stimuli. To demonstrate our methods,
we analyzed two human neuroimaging datasets of participants watching
naturalistic videos of movies and dance movements. The results demonstrate that
our methods provide new insights into multi-regional interaction in the brain
during naturalistic sensory inputs, which cannot be captured by conventional
techniques.
|
[
{
"created": "Sat, 5 Jun 2021 19:16:21 GMT",
"version": "v1"
}
] |
2021-06-08
|
[
[
"Takagi",
"Yu",
""
],
[
"Hunt",
"Laurence T.",
""
],
[
"Ohata",
"Ryu",
""
],
[
"Imamizu",
"Hiroshi",
""
],
[
"Hirayama",
"Jun-ichiro",
""
]
] |
Multi-regional interaction among neuronal populations underlies the brain's processing of rich sensory information in our daily lives. Recent neuroscience and neuroimaging studies have increasingly used naturalistic stimuli and experimental design to identify such realistic sensory computation in the brain. However, existing methods for cross-areal interaction analysis with dimensionality reduction, such as reduced-rank regression and canonical correlation analysis, have limited applicability and interpretability in naturalistic settings because they usually do not appropriately 'demix' neural interactions into those associated with different types of task parameters or stimulus features (e.g., visual or audio). In this paper, we develop a new method for cross-areal interaction analysis that uses the rich task or stimulus parameters to reveal how and what types of information are shared by different neural populations. The proposed neural demixed shared component analysis combines existing dimensionality reduction methods with a practical neural network implementation of functional analysis of variance with latent variables, thereby efficiently demixing nonlinear effects of continuous and multimodal stimuli. We also propose a simplifying alternative under the assumptions of linear effects and unimodal stimuli. To demonstrate our methods, we analyzed two human neuroimaging datasets of participants watching naturalistic videos of movies and dance movements. The results demonstrate that our methods provide new insights into multi-regional interaction in the brain during naturalistic sensory inputs, which cannot be captured by conventional techniques.
|
1710.10860
|
Christopher Lester
|
Christopher Lester
|
Efficient simulation techniques for biochemical reaction networks
|
Doctor of Philosophy thesis submitted at the University of Oxford.
This research was supervised by Prof Ruth E. Baker and Dr Christian A. Yates
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Discrete-state, continuous-time Markov models are becoming commonplace in the
modelling of biochemical processes. The mathematical formulations that such
models lead to are opaque, and, due to their complexity, are often considered
analytically intractable. As such, a variety of Monte Carlo simulation
algorithms have been developed to explore model dynamics empirically. Whilst
well-known methods, such as the Gillespie Algorithm, can be implemented to
investigate a given model, the computational demands of traditional simulation
techniques remain a significant barrier to modern research.
In order to further develop and explore biologically relevant stochastic
models, new and efficient computational methods are required. In this thesis,
high-performance simulation algorithms are developed to estimate summary
statistics that characterise a chosen reaction network. The algorithms make use
of variance reduction techniques, which exploit statistical properties of the
model dynamics, to improve performance.
The multi-level method is an example of a variance reduction technique. The
method estimates summary statistics of well-mixed, spatially homogeneous models
by using estimates from multiple ensembles of sample paths of different
accuracies. In this thesis, the multi-level method is developed in three
directions: firstly, a nuanced implementation framework is described; secondly,
a reformulated method is applied to stiff reaction systems; and, finally,
different approaches to variance reduction are implemented and compared.
The variance reduction methods that underpin the multi-level method are then
re-purposed to understand how the dynamics of a spatially-extended Markov model
are affected by changes in its input parameters. By exploiting the inherent
dynamics of spatially-extended models, an efficient finite difference scheme is
used to estimate parametric sensitivities robustly.
|
[
{
"created": "Mon, 30 Oct 2017 10:48:02 GMT",
"version": "v1"
}
] |
2017-10-31
|
[
[
"Lester",
"Christopher",
""
]
] |
Discrete-state, continuous-time Markov models are becoming commonplace in the modelling of biochemical processes. The mathematical formulations that such models lead to are opaque, and, due to their complexity, are often considered analytically intractable. As such, a variety of Monte Carlo simulation algorithms have been developed to explore model dynamics empirically. Whilst well-known methods, such as the Gillespie Algorithm, can be implemented to investigate a given model, the computational demands of traditional simulation techniques remain a significant barrier to modern research. In order to further develop and explore biologically relevant stochastic models, new and efficient computational methods are required. In this thesis, high-performance simulation algorithms are developed to estimate summary statistics that characterise a chosen reaction network. The algorithms make use of variance reduction techniques, which exploit statistical properties of the model dynamics, to improve performance. The multi-level method is an example of a variance reduction technique. The method estimates summary statistics of well-mixed, spatially homogeneous models by using estimates from multiple ensembles of sample paths of different accuracies. In this thesis, the multi-level method is developed in three directions: firstly, a nuanced implementation framework is described; secondly, a reformulated method is applied to stiff reaction systems; and, finally, different approaches to variance reduction are implemented and compared. The variance reduction methods that underpin the multi-level method are then re-purposed to understand how the dynamics of a spatially-extended Markov model are affected by changes in its input parameters. By exploiting the inherent dynamics of spatially-extended models, an efficient finite difference scheme is used to estimate parametric sensitivities robustly.
|
2004.03934
|
Ginestra Bianconi
|
Ginestra Bianconi, Pavel L. Krapivsky
|
Epidemics with containment measures
|
(15 pages, 6 figures)
|
Phys. Rev. E 102, 032305 (2020)
|
10.1103/PhysRevE.102.032305
| null |
q-bio.PE cond-mat.dis-nn cond-mat.stat-mech physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We propose a tractable epidemic model that includes containment measures. In
the absence of containment measures, the epidemic spreads exponentially fast
whenever the infectivity rate is positive, $\lambda>0$. The containment
measures are modeled by considering a time-dependent modulation of the bare
infectivity $\lambda$ leading to effective infectivity that decays in time for
each infected individual, mimicking for instance the combined effect of the
asymptomatic onset of the disease, testing policies and quarantine. We consider
a wide range of temporal kernels for effective infectivity and we investigate
the effect of the considered containment measures. We find that not all kernels
are able to push the epidemic dynamics below the epidemic threshold, with some
containment measures only able to reduce the rate of the exponential growth of
newly infected individuals. We also propose a pandemic model caused by a
growing number of separated foci.
|
[
{
"created": "Wed, 8 Apr 2020 11:06:20 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Oct 2020 09:22:46 GMT",
"version": "v2"
}
] |
2020-10-29
|
[
[
"Bianconi",
"Ginestra",
""
],
[
"Krapivsky",
"Pavel L.",
""
]
] |
We propose a tractable epidemic model that includes containment measures. In the absence of containment measures, the epidemic spreads exponentially fast whenever the infectivity rate is positive, $\lambda>0$. The containment measures are modeled by considering a time-dependent modulation of the bare infectivity $\lambda$ leading to effective infectivity that decays in time for each infected individual, mimicking for instance the combined effect of the asymptomatic onset of the disease, testing policies and quarantine. We consider a wide range of temporal kernels for effective infectivity and we investigate the effect of the considered containment measures. We find that not all kernels are able to push the epidemic dynamics below the epidemic threshold, with some containment measures only able to reduce the rate of the exponential growth of newly infected individuals. We also propose a pandemic model caused by a growing number of separated foci.
|
2407.06596
|
Benjamin Morillon
|
J\'er\'emy Giroud, Benjamin Morillon (INS)
|
Beyond acoustics -- capacity limitations of linguistic levels
|
Rhythms of Speech and Language-Table of Contents, In press
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Speech is a multiplexed signal displaying levels of complexity,
organizational principles and perceptual units of analysis at distinct
timescales. This critical acoustic signal for human communication is thus
characterized at distinct representational and temporal scales, related to
distinct linguistic features, from acoustic to supra-lexical. This chapter
presents an overview of experimental work devoted to the characterization of
the speech signal at different timescales, beyond its acoustic properties. The
functional relevance of these different levels of analysis for speech
processing is discussed. We advocate that studying speech perception through
the prism of multi-time scale representations effectively integrates work from
various research areas into a coherent picture and contributes significantly to
increasing our knowledge of the topic. Finally, we discuss how these experimental
results fit with neural data and current dynamical models of speech perception.
|
[
{
"created": "Tue, 9 Jul 2024 06:57:07 GMT",
"version": "v1"
}
] |
2024-07-10
|
[
[
"Giroud",
"Jérémy",
"",
"INS"
],
[
"Morillon",
"Benjamin",
"",
"INS"
]
] |
Speech is a multiplexed signal displaying levels of complexity, organizational principles and perceptual units of analysis at distinct timescales. This critical acoustic signal for human communication is thus characterized at distinct representational and temporal scales, related to distinct linguistic features, from acoustic to supra-lexical. This chapter presents an overview of experimental work devoted to the characterization of the speech signal at different timescales, beyond its acoustic properties. The functional relevance of these different levels of analysis for speech processing is discussed. We advocate that studying speech perception through the prism of multi-time scale representations effectively integrates work from various research areas into a coherent picture and contributes significantly to increasing our knowledge of the topic. Finally, we discuss how these experimental results fit with neural data and current dynamical models of speech perception.
|
1506.02539
|
Christian Matek
|
Christian Matek, Petr \v{S}ulc, Ferdinando Randisi, Jonathan P. K.
Doye, Ard A. Louis
|
Coarse-grained modelling of supercoiled RNA
|
8 pages + 5 pages Supplementary Material
|
J. Chem. Phys. 143, 243122 (2015)
|
10.1063/1.4933066
| null |
q-bio.BM cond-mat.soft physics.bio-ph physics.chem-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the behaviour of double-stranded RNA under twist and tension using
oxRNA, a recently developed coarse-grained model of RNA. Introducing explicit
salt-dependence into the model allows us to directly compare our results to
data from recent single-molecule experiments. The model reproduces extension
curves as a function of twist and stretching force, including the buckling
transition and the behaviour of plectoneme structures. For negative
supercoiling, we predict denaturation bubble formation in plectoneme end-loops,
suggesting preferential plectoneme localisation in weak base sequences. OxRNA
exhibits a positive twist-stretch coupling constant, in agreement with recent
experimental observations.
|
[
{
"created": "Mon, 8 Jun 2015 15:14:42 GMT",
"version": "v1"
}
] |
2016-01-19
|
[
[
"Matek",
"Christian",
""
],
[
"Šulc",
"Petr",
""
],
[
"Randisi",
"Ferdinando",
""
],
[
"Doye",
"Jonathan P. K.",
""
],
[
"Louis",
"Ard A.",
""
]
] |
We study the behaviour of double-stranded RNA under twist and tension using oxRNA, a recently developed coarse-grained model of RNA. Introducing explicit salt-dependence into the model allows us to directly compare our results to data from recent single-molecule experiments. The model reproduces extension curves as a function of twist and stretching force, including the buckling transition and the behaviour of plectoneme structures. For negative supercoiling, we predict denaturation bubble formation in plectoneme end-loops, suggesting preferential plectoneme localisation in weak base sequences. OxRNA exhibits a positive twist-stretch coupling constant, in agreement with recent experimental observations.
|
1612.01150
|
Richard Granger
|
A Rodriguez, R Granger
|
The grammar of mammalian brain capacity
|
18 pages, 2 figures, 2 tables
|
Theoretical Computer Science 633 (2016) 100-111
|
10.1016/j.tcs.2016.03.021
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Uniquely human abilities may arise from special-purpose brain circuitry, or
from concerted general capacity increases due to our outsized brains. We
forward a novel hypothesis of the relation between computational capacity and
brain size, linking mathematical formalisms of grammars with the allometric
increases in cortical-subcortical ratios that arise in large brains. In sum, i)
thalamocortical loops compute formal grammars; ii) successive cortical regions
describe grammar rewrite rules of increasing size; iii) cortical-subcortical
ratios determine the quantity of stacks in single-stack pushdown grammars; iv)
quantitative increase of stacks yields grammars with qualitatively increased
computational power. We arrive at the specific conjecture that human brain
capacity is equivalent to that of indexed grammars, far short of full
Turing-computable (recursively enumerable) systems. The work provides a
candidate explanatory account of a range of existing human and animal data,
addressing longstanding questions of how repeated similar brain algorithms can
be successfully applied to apparently dissimilar computational tasks (e.g.,
perceptual versus cognitive, phonological versus syntactic); and how
quantitative increases to brains can confer qualitative changes to their
computational repertoire.
|
[
{
"created": "Sun, 4 Dec 2016 17:40:44 GMT",
"version": "v1"
}
] |
2016-12-06
|
[
[
"Rodriguez",
"A",
""
],
[
"Granger",
"R",
""
]
] |
Uniquely human abilities may arise from special-purpose brain circuitry, or from concerted general capacity increases due to our outsized brains. We forward a novel hypothesis of the relation between computational capacity and brain size, linking mathematical formalisms of grammars with the allometric increases in cortical-subcortical ratios that arise in large brains. In sum, i) thalamocortical loops compute formal grammars; ii) successive cortical regions describe grammar rewrite rules of increasing size; iii) cortical-subcortical ratios determine the quantity of stacks in single-stack pushdown grammars; iv) quantitative increase of stacks yields grammars with qualitatively increased computational power. We arrive at the specific conjecture that human brain capacity is equivalent to that of indexed grammars, far short of full Turing-computable (recursively enumerable) systems. The work provides a candidate explanatory account of a range of existing human and animal data, addressing longstanding questions of how repeated similar brain algorithms can be successfully applied to apparently dissimilar computational tasks (e.g., perceptual versus cognitive, phonological versus syntactic); and how quantitative increases to brains can confer qualitative changes to their computational repertoire.
|
2406.06985
|
Huiming Xia
|
Huiming Xia, My Hoang, Evelyn Schmidt, Susanna Kiwala, Joshua
McMichael, Zachary L. Skidmore, Bryan Fisk, Jonathan J. Song, Jasreet Hundal,
Thomas Mooney, Jason R. Walker, S. Peter Goedegebuure, Christopher A. Miller,
William E. Gillanders, Obi L. Griffith, Malachi Griffith
|
pVACview: an interactive visualization tool for efficient neoantigen
prioritization and selection
|
Supplemental tables available at 10.5281/zenodo.11534338
| null | null | null |
q-bio.GN
|
http://creativecommons.org/licenses/by/4.0/
|
Neoantigen targeting therapies including personalized vaccines have shown
promise in the treatment of cancers. Accurate identification/prioritization of
neoantigens is highly relevant to designing clinical trials, predicting
treatment response, and understanding mechanisms of resistance. With the advent
of massively parallel sequencing technologies, it is now possible to predict
neoantigens based on patient-specific variant information. However, numerous
factors must be considered when prioritizing neoantigens for use in
personalized therapies. Complexities such as alternative transcript
annotations, various binding, presentation and immunogenicity prediction
algorithms, and variable peptide lengths/registers all potentially impact the
neoantigen selection process. While computational tools generate numerous
algorithmic predictions for neoantigen characterization, results from these
pipelines are difficult to navigate and require extensive knowledge of the
underlying tools for accurate interpretation. Due to the intricate nature and
number of salient neoantigen features, presenting all relevant information to
facilitate candidate selection for downstream applications is a difficult
challenge that current tools fail to address. We have created pVACview, the
first interactive tool designed to aid in the prioritization and selection of
neoantigen candidates for personalized neoantigen therapies. pVACview has a
user-friendly and intuitive interface where users can upload, explore, select
and export their neoantigen candidates. The tool allows users to visualize
candidates using variant, transcript and peptide information. pVACview will
allow researchers to analyze and prioritize neoantigen candidates with greater
efficiency and accuracy in basic and translational settings. The application is
available as part of the pVACtools pipeline at pvactools.org and as an online
server at pvacview.org.
|
[
{
"created": "Tue, 11 Jun 2024 06:28:56 GMT",
"version": "v1"
}
] |
2024-06-12
|
[
[
"Xia",
"Huiming",
""
],
[
"Hoang",
"My",
""
],
[
"Schmidt",
"Evelyn",
""
],
[
"Kiwala",
"Susanna",
""
],
[
"McMichael",
"Joshua",
""
],
[
"Skidmore",
"Zachary L.",
""
],
[
"Fisk",
"Bryan",
""
],
[
"Song",
"Jonathan J.",
""
],
[
"Hundal",
"Jasreet",
""
],
[
"Mooney",
"Thomas",
""
],
[
"Walker",
"Jason R.",
""
],
[
"Goedegebuure",
"S. Peter",
""
],
[
"Miller",
"Christopher A.",
""
],
[
"Gillanders",
"William E.",
""
],
[
"Griffith",
"Obi L.",
""
],
[
"Griffith",
"Malachi",
""
]
] |
Neoantigen targeting therapies including personalized vaccines have shown promise in the treatment of cancers. Accurate identification/prioritization of neoantigens is highly relevant to designing clinical trials, predicting treatment response, and understanding mechanisms of resistance. With the advent of massively parallel sequencing technologies, it is now possible to predict neoantigens based on patient-specific variant information. However, numerous factors must be considered when prioritizing neoantigens for use in personalized therapies. Complexities such as alternative transcript annotations, various binding, presentation and immunogenicity prediction algorithms, and variable peptide lengths/registers all potentially impact the neoantigen selection process. While computational tools generate numerous algorithmic predictions for neoantigen characterization, results from these pipelines are difficult to navigate and require extensive knowledge of the underlying tools for accurate interpretation. Due to the intricate nature and number of salient neoantigen features, presenting all relevant information to facilitate candidate selection for downstream applications is a difficult challenge that current tools fail to address. We have created pVACview, the first interactive tool designed to aid in the prioritization and selection of neoantigen candidates for personalized neoantigen therapies. pVACview has a user-friendly and intuitive interface where users can upload, explore, select and export their neoantigen candidates. The tool allows users to visualize candidates using variant, transcript and peptide information. pVACview will allow researchers to analyze and prioritize neoantigen candidates with greater efficiency and accuracy in basic and translational settings. The application is available as part of the pVACtools pipeline at pvactools.org and as an online server at pvacview.org.
|
q-bio/0612037
|
Le Zhang
|
Le Zhang, Costas G. Strouthos, Zhihui Wang, and Thomas S. Deisboeck
|
Simulating Brain Tumor Heterogeneity with a Multiscale Agent-Based
Model: Linking Molecular Signatures, Phenotypes and Expansion Rate
|
37 pages, 10 figures
|
Mathematical and Computer Modelling 49 (1-2), pp. 307-319, 2009
|
10.1016/j.mcm.2008.05.011
| null |
q-bio.TO q-bio.MN
| null |
We have extended our previously developed 3D multi-scale agent-based brain
tumor model to simulate cancer heterogeneity and to analyze its impact across
the scales of interest. While our algorithm continues to employ an epidermal
growth factor receptor (EGFR) gene-protein interaction network to determine the
cells' phenotype, it now adds an explicit treatment of tumor cell adhesion
related to the model's biochemical microenvironment. We simulate a simplified
tumor progression pathway that leads to the emergence of five distinct glioma
cell clones with different EGFR density and cell 'search precisions'. The in
silico results show that microscopic tumor heterogeneity can impact the tumor
system's multicellular growth patterns. Our findings further confirm that EGFR
density results in the more aggressive clonal populations switching earlier
from proliferation-dominated to a more migratory phenotype. Moreover, analyzing
the dynamic molecular profile that triggers the phenotypic switch between
proliferation and migration, our in silico oncogenomics data display spatial
and temporal diversity in documenting the regional impact of tumorigenesis, and
thus support the added value of multi-site and repeated assessments in vitro
and in vivo. Potential implications from this in silico work for experimental
and computational studies are discussed.
|
[
{
"created": "Mon, 18 Dec 2006 20:20:37 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jan 2007 18:50:02 GMT",
"version": "v2"
},
{
"created": "Mon, 16 Jul 2007 16:47:50 GMT",
"version": "v3"
}
] |
2010-03-23
|
[
[
"Zhang",
"Le",
""
],
[
"Strouthos",
"Costas G.",
""
],
[
"Wang",
"Zhihui",
""
],
[
"Deisboeck",
"Thomas S.",
""
]
] |
We have extended our previously developed 3D multi-scale agent-based brain tumor model to simulate cancer heterogeneity and to analyze its impact across the scales of interest. While our algorithm continues to employ an epidermal growth factor receptor (EGFR) gene-protein interaction network to determine the cells' phenotype, it now adds an explicit treatment of tumor cell adhesion related to the model's biochemical microenvironment. We simulate a simplified tumor progression pathway that leads to the emergence of five distinct glioma cell clones with different EGFR density and cell 'search precisions'. The in silico results show that microscopic tumor heterogeneity can impact the tumor system's multicellular growth patterns. Our findings further confirm that EGFR density results in the more aggressive clonal populations switching earlier from proliferation-dominated to a more migratory phenotype. Moreover, analyzing the dynamic molecular profile that triggers the phenotypic switch between proliferation and migration, our in silico oncogenomics data display spatial and temporal diversity in documenting the regional impact of tumorigenesis, and thus support the added value of multi-site and repeated assessments in vitro and in vivo. Potential implications from this in silico work for experimental and computational studies are discussed.
|
0811.4581
|
Michal Wojciechowski dr
|
M. Wojciechowski and Marek Cieplak
|
Effects of confinement and crowding on folding of model proteins
| null |
Biosystems. 2008 Dec;94(3):248-52
|
10.1016/j.biosystems.2008.06.016
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We perform molecular dynamics simulations for a simple coarse-grained model
of crambin placed inside of a softly repulsive sphere of radius R. The
confinement makes folding at the optimal temperature slower and affects the
folding scenarios, but both effects are not dramatic. The influence of crowding
on folding is studied by placing several identical proteins within the sphere,
denaturing them, and then by monitoring refolding. If the interactions between
the proteins are dominated by the excluded volume effects, the net folding
times are essentially like for a single protein. An introduction of
inter-proteinic attractive contacts hinders folding when the strength of the
attraction exceeds about a half of the value of the strength of the single
protein contacts. The bigger the strength of the attraction, the more likely is
the occurrence of aggregation and misfolding.
|
[
{
"created": "Thu, 27 Nov 2008 16:21:42 GMT",
"version": "v1"
}
] |
2008-12-01
|
[
[
"Wojciechowski",
"M.",
""
],
[
"Cieplak",
"Marek",
""
]
] |
We perform molecular dynamics simulations for a simple coarse-grained model of crambin placed inside of a softly repulsive sphere of radius R. The confinement makes folding at the optimal temperature slower and affects the folding scenarios, but both effects are not dramatic. The influence of crowding on folding is studied by placing several identical proteins within the sphere, denaturing them, and then by monitoring refolding. If the interactions between the proteins are dominated by the excluded volume effects, the net folding times are essentially like for a single protein. An introduction of inter-proteinic attractive contacts hinders folding when the strength of the attraction exceeds about a half of the value of the strength of the single protein contacts. The bigger the strength of the attraction, the more likely is the occurrence of aggregation and misfolding.
|
q-bio/0401040
|
Thorsten Poeschel
|
Thorsten Poeschel, Werner Ebeling, Cornelius Froemmel, Rosa Ramirez
|
Correction algorithm for finite sample statistics
|
11 pages, 9 figures
|
Eur. Phys. J. E, Vol. 12, 531-541 (2003).
| null | null |
q-bio.OT cond-mat.stat-mech
| null |
Assume in a sample of size M one finds M_i representatives of species i with
i=1...N^*. The normalized frequency p^*_i=M_i/M, based on the finite sample,
may deviate considerably from the true probabilities p_i. We propose a method
to infer rank-ordered true probabilities r_i from measured frequencies M_i. We
show that the rank-ordered probabilities provide important information on the
system, e.g., the true number of species, the Shannon- and the Renyi-entropies.
|
[
{
"created": "Wed, 28 Jan 2004 10:11:19 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Poeschel",
"Thorsten",
""
],
[
"Ebeling",
"Werner",
""
],
[
"Froemmel",
"Cornelius",
""
],
[
"Ramirez",
"Rosa",
""
]
] |
Assume in a sample of size M one finds M_i representatives of species i with i=1...N^*. The normalized frequency p^*_i=M_i/M, based on the finite sample, may deviate considerably from the true probabilities p_i. We propose a method to infer rank-ordered true probabilities r_i from measured frequencies M_i. We show that the rank-ordered probabilities provide important information on the system, e.g., the true number of species, the Shannon- and the Renyi-entropies.
|
1501.02402
|
Ariana Anderson
|
Ariana E. Anderson, Wesley T. Kerr, April Thames, Tong Li, Jiayang
Xiao, Mark S. Cohen
|
Electronic health record phenotyping improves detection and screening of
type 2 diabetes in the general United States population: A cross-sectional,
unselected, retrospective study
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Objectives: In the United States, 25% of people with type 2 diabetes are
undiagnosed. Conventional screening models use limited demographic information
to assess risk. We evaluated whether electronic health record (EHR) phenotyping
could improve diabetes screening, even when records are incomplete and data are
not recorded systematically across patients and practice locations. Methods: In
this cross-sectional, retrospective study, data from 9,948 US patients between
2009 and 2012 were used to develop a pre-screening tool to predict current type
2 diabetes, using multivariate logistic regression. We compared (1) a full EHR
model containing prescribed medications, diagnoses, and traditional predictive
information, (2) a restricted EHR model where medication information was
removed, and (3) a conventional model containing only traditional predictive
information (BMI, age, gender, hypertensive and smoking status). We
additionally used a random-forests classification model to judge whether
including additional EHR information could increase the ability to detect
patients with Type 2 diabetes on new patient samples. Results: Using a
patient's full or restricted EHR to detect diabetes was superior to using basic
covariates alone (p<0.001). The random forests model replicated on out-of-bag
data. Migraines and cardiac dysrhythmias were negatively associated with type 2
diabetes, while acute bronchitis and herpes zoster were positively associated,
among other factors. Conclusions: EHR phenotyping resulted in markedly superior
detection of type 2 diabetes in a general US population, could increase the
efficiency and accuracy of disease screening, and are capable of picking up
signals in real-world records.
|
[
{
"created": "Sat, 10 Jan 2015 23:21:23 GMT",
"version": "v1"
}
] |
2015-01-13
|
[
[
"Anderson",
"Ariana E.",
""
],
[
"Kerr",
"Wesley T.",
""
],
[
"Thames",
"April",
""
],
[
"Li",
"Tong",
""
],
[
"Xiao",
"Jiayang",
""
],
[
"Cohen",
"Mark S.",
""
]
] |
Objectives: In the United States, 25% of people with type 2 diabetes are undiagnosed. Conventional screening models use limited demographic information to assess risk. We evaluated whether electronic health record (EHR) phenotyping could improve diabetes screening, even when records are incomplete and data are not recorded systematically across patients and practice locations. Methods: In this cross-sectional, retrospective study, data from 9,948 US patients between 2009 and 2012 were used to develop a pre-screening tool to predict current type 2 diabetes, using multivariate logistic regression. We compared (1) a full EHR model containing prescribed medications, diagnoses, and traditional predictive information, (2) a restricted EHR model where medication information was removed, and (3) a conventional model containing only traditional predictive information (BMI, age, gender, hypertensive and smoking status). We additionally used a random-forests classification model to judge whether including additional EHR information could increase the ability to detect patients with Type 2 diabetes on new patient samples. Results: Using a patient's full or restricted EHR to detect diabetes was superior to using basic covariates alone (p<0.001). The random forests model replicated on out-of-bag data. Migraines and cardiac dysrhythmias were negatively associated with type 2 diabetes, while acute bronchitis and herpes zoster were positively associated, among other factors. Conclusions: EHR phenotyping resulted in markedly superior detection of type 2 diabetes in a general US population, could increase the efficiency and accuracy of disease screening, and is capable of picking up signals in real-world records.
|
2208.09813
|
Sheng Xu
|
Sheng Xu, Junkang Wei, Yu Li
|
Genome-wide nucleotide-resolution model of single-strand break site
reveals species evolutionary hierarchy
| null | null | null | null |
q-bio.GN q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Single-strand breaks (SSBs) are the major DNA damage in the genome arising
spontaneously as the outcome of genotoxins and intermediates of DNA
transactions. SSBs play a crucial role in various biological processes and show
a non-random distribution in the genome. Several SSB detection approaches such
as S1 END-seq and SSiNGLe-ILM emerged to characterize the genomic landscape of
SSB with nucleotide resolution. However, these sequencing-based methods are
costly and unfeasible for large-scale analysis of diverse species. Thus, we
proposed the first computational approach, SSBlazer, which is an explainable
and scalable deep learning framework for genome-wide nucleotide-resolution SSB
site prediction. We demonstrated that SSBlazer can accurately predict SSB sites
and sufficiently alleviate false positives by constructing an imbalanced
dataset to simulate the realistic SSB distribution. The model interpretation
analysis reveals that SSBlazer captures the pattern of individual CpG in
genomic context and the motif of TGCC in the center region as critical
features. Besides, SSBlazer is a lightweight model with robust cross-species
generalization ability in the cross-species evaluation, which enables the
large-scale genome-wide application in diverse species. Strikingly, the
putative SSB genomic landscapes of 216 vertebrates reveal a negative
correlation between SSB frequency and evolutionary hierarchy, suggesting that
the genome tends to maintain integrity during evolution.
|
[
{
"created": "Sun, 21 Aug 2022 06:07:19 GMT",
"version": "v1"
}
] |
2022-08-23
|
[
[
"Xu",
"Sheng",
""
],
[
"Wei",
"Junkang",
""
],
[
"Li",
"Yu",
""
]
] |
Single-strand breaks (SSBs) are the major DNA damage in the genome arising spontaneously as the outcome of genotoxins and intermediates of DNA transactions. SSBs play a crucial role in various biological processes and show a non-random distribution in the genome. Several SSB detection approaches such as S1 END-seq and SSiNGLe-ILM emerged to characterize the genomic landscape of SSB with nucleotide resolution. However, these sequencing-based methods are costly and unfeasible for large-scale analysis of diverse species. Thus, we proposed the first computational approach, SSBlazer, which is an explainable and scalable deep learning framework for genome-wide nucleotide-resolution SSB site prediction. We demonstrated that SSBlazer can accurately predict SSB sites and sufficiently alleviate false positives by constructing an imbalanced dataset to simulate the realistic SSB distribution. The model interpretation analysis reveals that SSBlazer captures the pattern of individual CpG in genomic context and the motif of TGCC in the center region as critical features. Besides, SSBlazer is a lightweight model with robust cross-species generalization ability in the cross-species evaluation, which enables the large-scale genome-wide application in diverse species. Strikingly, the putative SSB genomic landscapes of 216 vertebrates reveal a negative correlation between SSB frequency and evolutionary hierarchy, suggesting that the genome tends to maintain integrity during evolution.
|
q-bio/0702013
|
Jiafu Wang
|
Ting Zeng, Jiafu Wang, Shenbing Kuang
|
Influence of Temperature on Neuronal Excitability in Cochlear Nucleus
|
5 pages, 4 figures
| null | null | null |
q-bio.NC
| null |
The influence of temperature on neuronal excitability is studied by numerical
simulations on the spiking threshold characteristics of bushy cells in the
cochlear nucleus periodically stimulated by synaptic currents. The results
reveal that there is a cut-off frequency for the spiking of the bushy cell in a specific
temperature environment, corresponding to the existence of a critical
temperature for the neuron to respond with real spikes to the synaptic stimulus
of a given frequency, due to the finiteness of spike width. An optimal
temperature range for neuronal spiking is also found for a specific stimulus
frequency, and the temperature range span decreases with increasing stimulus
frequency. These findings imply that there is a physiological temperature range
which is beneficial for information processing in the auditory system.
|
[
{
"created": "Wed, 7 Feb 2007 23:38:42 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Zeng",
"Ting",
""
],
[
"Wang",
"Jiafu",
""
],
[
"Kuang",
"Shenbing",
""
]
] |
The influence of temperature on neuronal excitability is studied by numerical simulations on the spiking threshold characteristics of bushy cells in the cochlear nucleus periodically stimulated by synaptic currents. The results reveal that there is a cut-off frequency for the spiking of the bushy cell in a specific temperature environment, corresponding to the existence of a critical temperature for the neuron to respond with real spikes to the synaptic stimulus of a given frequency, due to the finiteness of spike width. An optimal temperature range for neuronal spiking is also found for a specific stimulus frequency, and the temperature range span decreases with increasing stimulus frequency. These findings imply that there is a physiological temperature range which is beneficial for information processing in the auditory system.
|
2210.15044
|
Bartek Rajwa
|
Abida Sanjana Shemonti, Joshua D. Eisenberg, Robert O. Heuckeroth,
Marthe J. Howard, Alex Pothen and Bartek Rajwa
|
Generative modeling of the enteric nervous system employing point
pattern analysis and graph construction
|
17 pages, 5 figures
| null | null | null |
q-bio.NC cs.CV q-bio.QM stat.OT
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We describe a generative network model of the architecture of the enteric
nervous system (ENS) in the colon employing data from images of human and mouse
tissue samples obtained through confocal microscopy. Our models combine spatial
point pattern analysis with graph generation to characterize the spatial and
topological properties of the ganglia (clusters of neurons and glial cells),
the inter-ganglionic connections, and the neuronal organization within the
ganglia. We employ a hybrid hardcore-Strauss process for spatial patterns and a
planar random graph generation for constructing the spatially embedded network.
We show that our generative model may be helpful in both basic and
translational studies, and it is sufficiently expressive to model the ENS
architecture of individuals who vary in age and health status. Increased
understanding of the ENS connectome will enable the use of neuromodulation
strategies in treatment and clarify anatomic diagnostic criteria for people
with bowel motility disorders.
|
[
{
"created": "Wed, 26 Oct 2022 21:22:41 GMT",
"version": "v1"
}
] |
2022-10-28
|
[
[
"Shemonti",
"Abida Sanjana",
""
],
[
"Eisenberg",
"Joshua D.",
""
],
[
"Heuckeroth",
"Robert O.",
""
],
[
"Howard",
"Marthe J.",
""
],
[
"Pothen",
"Alex",
""
],
[
"Rajwa",
"Bartek",
""
]
] |
We describe a generative network model of the architecture of the enteric nervous system (ENS) in the colon employing data from images of human and mouse tissue samples obtained through confocal microscopy. Our models combine spatial point pattern analysis with graph generation to characterize the spatial and topological properties of the ganglia (clusters of neurons and glial cells), the inter-ganglionic connections, and the neuronal organization within the ganglia. We employ a hybrid hardcore-Strauss process for spatial patterns and a planar random graph generation for constructing the spatially embedded network. We show that our generative model may be helpful in both basic and translational studies, and it is sufficiently expressive to model the ENS architecture of individuals who vary in age and health status. Increased understanding of the ENS connectome will enable the use of neuromodulation strategies in treatment and clarify anatomic diagnostic criteria for people with bowel motility disorders.
|
2403.07902
|
Xiangxin Zhou
|
Jiaqi Guan, Xiangxin Zhou, Yuwei Yang, Yu Bao, Jian Peng, Jianzhu Ma,
Qiang Liu, Liang Wang, Quanquan Gu
|
DecompDiff: Diffusion Models with Decomposed Priors for Structure-Based
Drug Design
|
Accepted to ICML 2023
| null | null | null |
q-bio.BM cs.LG
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Designing 3D ligands within a target binding site is a fundamental task in
drug discovery. Existing structure-based drug design methods treat all ligand
atoms equally, which ignores different roles of atoms in the ligand for drug
design and can be less efficient for exploring the large drug-like molecule
space. In this paper, inspired by the convention in pharmaceutical practice, we
decompose the ligand molecule into two parts, namely arms and scaffold, and
propose a new diffusion model, DecompDiff, with decomposed priors over arms and
scaffold. In order to facilitate the decomposed generation and improve the
properties of the generated molecules, we incorporate both bond diffusion in
the model and additional validity guidance in the sampling phase. Extensive
experiments on CrossDocked2020 show that our approach achieves state-of-the-art
performance in generating high-affinity molecules while maintaining proper
molecular properties and conformational stability, with up to -8.39 Avg. Vina
Dock score and 24.5 Success Rate. The code is provided at
https://github.com/bytedance/DecompDiff
|
[
{
"created": "Mon, 26 Feb 2024 05:21:21 GMT",
"version": "v1"
}
] |
2024-03-14
|
[
[
"Guan",
"Jiaqi",
""
],
[
"Zhou",
"Xiangxin",
""
],
[
"Yang",
"Yuwei",
""
],
[
"Bao",
"Yu",
""
],
[
"Peng",
"Jian",
""
],
[
"Ma",
"Jianzhu",
""
],
[
"Liu",
"Qiang",
""
],
[
"Wang",
"Liang",
""
],
[
"Gu",
"Quanquan",
""
]
] |
Designing 3D ligands within a target binding site is a fundamental task in drug discovery. Existing structure-based drug design methods treat all ligand atoms equally, which ignores different roles of atoms in the ligand for drug design and can be less efficient for exploring the large drug-like molecule space. In this paper, inspired by the convention in pharmaceutical practice, we decompose the ligand molecule into two parts, namely arms and scaffold, and propose a new diffusion model, DecompDiff, with decomposed priors over arms and scaffold. In order to facilitate the decomposed generation and improve the properties of the generated molecules, we incorporate both bond diffusion in the model and additional validity guidance in the sampling phase. Extensive experiments on CrossDocked2020 show that our approach achieves state-of-the-art performance in generating high-affinity molecules while maintaining proper molecular properties and conformational stability, with up to -8.39 Avg. Vina Dock score and 24.5 Success Rate. The code is provided at https://github.com/bytedance/DecompDiff
|
1701.05970
|
Matti Gralka
|
Matti Gralka, Diana Fusco, Stephen Martis, Oskar Hallatschek
|
Convection shapes the trade-off between antibiotic efficacy and the
selection for resistance in spatial gradients
| null | null |
10.1088/1478-3975/aa7bb3
| null |
q-bio.PE physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Since penicillin was discovered about 90 years ago, we have become used to
using drugs to eradicate unwanted pathogenic cells. However, using drugs to
kill bacteria, viruses or cancer cells has the serious side effect of selecting
for mutant types that survive the drug attack. A key question therefore is how
one could eradicate as many cells as possible for a given acceptable risk of
drug resistance evolution. We address this general question in a model of drug
resistance evolution in spatial drug gradients, which recent experiments and
theories have suggested as key drivers of drug resistance. Importantly, our
model takes into account the influence of convection, resulting for instance
from blood flow. Using stochastic simulations, we study the fates of individual
resistance mutations and quantify the trade-off between the killing of
wild-type cells and the rise of resistance mutations: shallow gradients and
convection into the antibiotic region promote wild-type death, at the cost of
increasing the establishment probability of resistance mutations. We can
explain these observed trends by modeling the adaptation process as a branching
random walk. Our analysis reveals that the trade-off between death and
adaptation depends on the relative length scales of the spatial drug gradient
and random dispersal, and the strength of convection. Our results show that
convection can have a momentous effect on the rate of establishment of new
mutations, and may heavily impact the efficiency of antibiotic treatment.
|
[
{
"created": "Sat, 21 Jan 2017 02:38:18 GMT",
"version": "v1"
},
{
"created": "Tue, 2 May 2017 20:34:47 GMT",
"version": "v2"
}
] |
2017-08-02
|
[
[
"Gralka",
"Matti",
""
],
[
"Fusco",
"Diana",
""
],
[
"Martis",
"Stephen",
""
],
[
"Hallatschek",
"Oskar",
""
]
] |
Since penicillin was discovered about 90 years ago, we have become used to using drugs to eradicate unwanted pathogenic cells. However, using drugs to kill bacteria, viruses or cancer cells has the serious side effect of selecting for mutant types that survive the drug attack. A key question therefore is how one could eradicate as many cells as possible for a given acceptable risk of drug resistance evolution. We address this general question in a model of drug resistance evolution in spatial drug gradients, which recent experiments and theories have suggested as key drivers of drug resistance. Importantly, our model takes into account the influence of convection, resulting for instance from blood flow. Using stochastic simulations, we study the fates of individual resistance mutations and quantify the trade-off between the killing of wild-type cells and the rise of resistance mutations: shallow gradients and convection into the antibiotic region promote wild-type death, at the cost of increasing the establishment probability of resistance mutations. We can explain these observed trends by modeling the adaptation process as a branching random walk. Our analysis reveals that the trade-off between death and adaptation depends on the relative length scales of the spatial drug gradient and random dispersal, and the strength of convection. Our results show that convection can have a momentous effect on the rate of establishment of new mutations, and may heavily impact the efficiency of antibiotic treatment.
|
1711.01632
|
Khalid Raza
|
Muniba Faiza, Khushnuma Tanveer, Saman Fatihi, Yonghua Wang, Khalid
Raza
|
Comprehensive overview and assessment of miRNA target prediction tools
in human and drosophila melanogaster
|
26 pages, 9 figures
|
Current Bioinformatics (2019), 14(5): 432-445
|
10.2174/1574893614666190103101033
| null |
q-bio.GN q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
MicroRNAs (miRNAs) are small non-coding RNAs that control gene expression at
the post-transcriptional level through complementary base pairing with the
target mRNA, leading to mRNA degradation and blocking translation process. Any
dysfunctions of these small regulatory molecules have been linked with the
development and progression of several diseases. Therefore, it is necessary to
reliably predict potential miRNA targets. A large number of computational
prediction tools have been developed which provide a faster way to find
putative miRNA targets, but at the same time their results are often
inconsistent. Hence, finding a reliable, functional miRNA target is still a
challenging task. Also, each tool is equipped with different algorithms, and it
is difficult for the biologists to know which tool is the best choice for their
study. This paper briefly describes the fundamentals of miRNA target
prediction algorithms, discusses frequently used prediction tools, and
assesses the performance of these tools using experimentally validated,
high-confidence mature miRNAs and their targets for two organisms, human and
Drosophila melanogaster. Tools supporting each organism were evaluated
separately to identify the best-performing tool for each. In the human
dataset, TargetScan showed the best results among the predictors, followed by
miRmap and microT, whereas in the D. melanogaster dataset, microT showed the
best performance, followed by TargetScan.
|
[
{
"created": "Sun, 5 Nov 2017 18:19:01 GMT",
"version": "v1"
}
] |
2020-05-01
|
[
[
"Faiza",
"Muniba",
""
],
[
"Tanveer",
"Khushnuma",
""
],
[
"Fatihi",
"Saman",
""
],
[
"Wang",
"Yonghua",
""
],
[
"Raza",
"Khalid",
""
]
] |
MicroRNAs (miRNAs) are small non-coding RNAs that control gene expression at the post-transcriptional level through complementary base pairing with the target mRNA, leading to mRNA degradation and blocking translation process. Any dysfunctions of these small regulatory molecules have been linked with the development and progression of several diseases. Therefore, it is necessary to reliably predict potential miRNA targets. A large number of computational prediction tools have been developed which provide a faster way to find putative miRNA targets, but at the same time their results are often inconsistent. Hence, finding a reliable, functional miRNA target is still a challenging task. Also, each tool is equipped with different algorithms, and it is difficult for the biologists to know which tool is the best choice for their study. This paper briefly describes the fundamentals of miRNA target prediction algorithms, discusses frequently used prediction tools, and assesses the performance of these tools using experimentally validated, high-confidence mature miRNAs and their targets for two organisms, human and Drosophila melanogaster. Tools supporting each organism were evaluated separately to identify the best-performing tool for each. In the human dataset, TargetScan showed the best results among the predictors, followed by miRmap and microT, whereas in the D. melanogaster dataset, microT showed the best performance, followed by TargetScan.
|
q-bio/0407011
|
Tonau Nakai
|
Tonau Nakai, Kohji Hizume, Shige. H. Yoshimura, Kunio Takeyasu, and
Kenichi Yoshikawa
|
Phase Transition in Reconstituted Chromatin
|
16 pages, 3 figures
|
Europhysics Letters, Vol. 69, Iss. 6, pp. 1024-1030 (2005)
|
10.1209/epl/i2004-10444-6
| null |
q-bio.SC
| null |
By observing reconstituted chromatin by fluorescence microscopy (FM) and
atomic force microscopy (AFM), we found that the density of nucleosomes
exhibits a bimodal profile, i.e., there is a large transition between the dense
and dispersed states in reconstituted chromatin. Based on an analysis of the
spatial distribution of nucleosome cores, we deduced an effective thermodynamic
potential as a function of the nucleosome-nucleosome distance. This enabled us
to interpret the folding transition of chromatin in terms of a first-order
phase transition. This mechanism for the condensation of chromatin is discussed
in terms of its biological significance.
|
[
{
"created": "Wed, 7 Jul 2004 16:39:37 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Jul 2004 01:24:32 GMT",
"version": "v2"
},
{
"created": "Fri, 10 Sep 2004 06:32:36 GMT",
"version": "v3"
}
] |
2015-06-26
|
[
[
"Nakai",
"Tonau",
""
],
[
"Hizume",
"Kohji",
""
],
[
"Yoshimura",
"Shige. H.",
""
],
[
"Takeyasu",
"Kunio",
""
],
[
"Yoshikawa",
"Kenichi",
""
]
] |
By observing reconstituted chromatin by fluorescence microscopy (FM) and atomic force microscopy (AFM), we found that the density of nucleosomes exhibits a bimodal profile, i.e., there is a large transition between the dense and dispersed states in reconstituted chromatin. Based on an analysis of the spatial distribution of nucleosome cores, we deduced an effective thermodynamic potential as a function of the nucleosome-nucleosome distance. This enabled us to interpret the folding transition of chromatin in terms of a first-order phase transition. This mechanism for the condensation of chromatin is discussed in terms of its biological significance.
|
1309.0853
|
David Schneider Dr
|
David M. Schneider, Ayana B. Martins, Eduardo do Carmo and Marcus A.M.
de Aguiar
|
Evolutionary consequences of assortativeness in haploid genotypes
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study the evolution of allele frequencies in a large population where
random mating is violated in a particular way that is related to recent works
on speciation. Specifically, we consider non-random encounters in haploid
organisms described by biallelic genes at two loci and assume that individuals
whose alleles differ at both loci are incompatible. We show that evolution
under these conditions leads to the disappearance of one of the alleles and
substantially reduces the diversity of the population. The allele that
disappears, and the other allele frequencies at equilibrium, depend only on
their initial values, and so does the time to equilibration. However, certain
combinations of allele frequencies remain constant during the process,
revealing the emergence of strong correlation between the two loci promoted by
the epistatic mechanism of incompatibility. We determine the geometrical
structure of the haplotype frequency space and solve the dynamical equations,
obtaining a simple rule to determine the equilibrium solution from the initial
conditions. We show that our results are equivalent to selection against double
heterozygotes in a population of diploid individuals and discuss the relevance
of our findings to speciation.
|
[
{
"created": "Tue, 3 Sep 2013 22:07:54 GMT",
"version": "v1"
}
] |
2013-09-05
|
[
[
"Schneider",
"David M.",
""
],
[
"Martins",
"Ayana B.",
""
],
[
"Carmo",
"Eduardo do",
""
],
[
"de Aguiar",
"Marcus A. M.",
""
]
] |
We study the evolution of allele frequencies in a large population where random mating is violated in a particular way that is related to recent works on speciation. Specifically, we consider non-random encounters in haploid organisms described by biallelic genes at two loci and assume that individuals whose alleles differ at both loci are incompatible. We show that evolution under these conditions leads to the disappearance of one of the alleles and substantially reduces the diversity of the population. The allele that disappears, and the other allele frequencies at equilibrium, depend only on their initial values, and so does the time to equilibration. However, certain combinations of allele frequencies remain constant during the process, revealing the emergence of strong correlation between the two loci promoted by the epistatic mechanism of incompatibility. We determine the geometrical structure of the haplotype frequency space and solve the dynamical equations, obtaining a simple rule to determine the equilibrium solution from the initial conditions. We show that our results are equivalent to selection against double heterozygotes in a population of diploid individuals and discuss the relevance of our findings to speciation.
|
2205.04464
|
Feng Liang
|
Ngan Nguyen, Feng Liang, Dominik Engel, Ciril Bohak, Peter Wonka, Timo
Ropinski, Ivan Viola
|
Differentiable Electron Microscopy Simulation: Methods and Applications
for Visualization
|
Version 2: Page 10: Fix the rendering problem in Line 12 of
Algorithm 2 Page 12: Table 2: Fix wrong data entries in the table
| null | null | null |
q-bio.QM cs.CV cs.GR cs.LG eess.IV
|
http://creativecommons.org/licenses/by-sa/4.0/
|
We propose a new microscopy simulation system that can depict atomistic
models in a micrograph visual style, similar to results of physical electron
microscopy imaging. This system is scalable, able to represent simulation of
electron microscopy of tens of viral particles and synthesizes the image faster
than previous methods. On top of that, the simulator is differentiable, both
its deterministic as well as stochastic stages that form signal and noise
representations in the micrograph. This notable property has the capability for
solving inverse problems by means of optimization and thus allows for
generation of microscopy simulations using the parameter settings estimated
from real data. We demonstrate this learning capability through two
applications: (1) estimating the parameters of the modulation transfer function
defining the detector properties of the simulated and real micrographs, and (2)
denoising the real data based on parameters trained from the simulated
examples. While current simulators do not support any parameter estimation due
to their forward design, we show that the results obtained using estimated
parameters are very similar to the results of real micrographs. Additionally,
we evaluate the denoising capabilities of our approach and show that the
results showed an improvement over state-of-the-art methods. Denoised
micrographs exhibit less noise in the tilt-series tomography reconstructions,
ultimately reducing the visual dominance of noise in direct volume rendering of
microscopy tomograms.
|
[
{
"created": "Sun, 8 May 2022 12:39:04 GMT",
"version": "v1"
},
{
"created": "Thu, 26 May 2022 13:25:20 GMT",
"version": "v2"
}
] |
2022-05-27
|
[
[
"Nguyen",
"Ngan",
""
],
[
"Liang",
"Feng",
""
],
[
"Engel",
"Dominik",
""
],
[
"Bohak",
"Ciril",
""
],
[
"Wonka",
"Peter",
""
],
[
"Ropinski",
"Timo",
""
],
[
"Viola",
"Ivan",
""
]
] |
We propose a new microscopy simulation system that can depict atomistic models in a micrograph visual style, similar to results of physical electron microscopy imaging. This system is scalable, able to represent simulation of electron microscopy of tens of viral particles and synthesizes the image faster than previous methods. On top of that, the simulator is differentiable, both its deterministic as well as stochastic stages that form signal and noise representations in the micrograph. This notable property has the capability for solving inverse problems by means of optimization and thus allows for generation of microscopy simulations using the parameter settings estimated from real data. We demonstrate this learning capability through two applications: (1) estimating the parameters of the modulation transfer function defining the detector properties of the simulated and real micrographs, and (2) denoising the real data based on parameters trained from the simulated examples. While current simulators do not support any parameter estimation due to their forward design, we show that the results obtained using estimated parameters are very similar to the results of real micrographs. Additionally, we evaluate the denoising capabilities of our approach and show that the results showed an improvement over state-of-the-art methods. Denoised micrographs exhibit less noise in the tilt-series tomography reconstructions, ultimately reducing the visual dominance of noise in direct volume rendering of microscopy tomograms.
|
2303.04215
|
Debayan Chakraborty
|
D. Thirumalai, Abhinaw Kumar, Debayan Chakraborty, John E. Straub,
Mauro L. Mugnai
|
Conformational Fluctuations and Phases in Fused in Sarcoma (FUS)
Low-Complexity Domain
| null | null | null | null |
q-bio.BM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The well known phenomenon of phase separation in synthetic polymers and
proteins has become a major topic in biophysics because it has been invoked as
a mechanism of compartment formation in cells, without the need for membranes.
Most of the coacervates (or condensates) are composed of Intrinsically
Disordered Proteins (IDPs) or regions that are structureless, often in
interaction with RNA and DNA. One of the more intriguing IDPs is the
526-residue RNA binding protein, Fused In Sarcoma (FUS), whose monomer
conformations and condensates exhibit unusual behavior that are sensitive to
solution conditions. By focussing principally on the N-terminus low complexity
domain (FUS-LC comprising residues 1-214) and other truncations, we rationalize
the findings of solid state NMR experiments, which show that FUS-LC adopts a
non-polymorphic fibril (core-1) involving residues 39-95, flanked by fuzzy
coats on both the N- and C- terminal ends. An alternate structure (core-2),
whose free energy is comparable to core-1, emerges only in the truncated
construct (residues 110-214). Both core-1 and core-2 fibrils are stabilized by
a Tyrosine ladder as well as hydrophilic interactions. The morphologies (gels,
fibrils, and glass-like behavior) adopted by FUS seem to vary greatly,
depending on the experimental conditions. The effect of phosphorylation is site
specific and affects the stability of the fibril depending on the sites that
are phosphorylated. Many of the peculiarities associated with FUS may also be
shared by other IDPs, such as TDP43 and hnRNPA2. We outline a number of
problems for which there is no clear molecular understanding.
|
[
{
"created": "Tue, 7 Mar 2023 20:12:06 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Jun 2023 03:24:08 GMT",
"version": "v2"
}
] |
2023-06-06
|
[
[
"Thirumalai",
"D.",
""
],
[
"Kumar",
"Abhinaw",
""
],
[
"Chakraborty",
"Debayan",
""
],
[
"Straub",
"John E.",
""
],
[
"Mugnai",
"Mauro L.",
""
]
] |
The well known phenomenon of phase separation in synthetic polymers and proteins has become a major topic in biophysics because it has been invoked as a mechanism of compartment formation in cells, without the need for membranes. Most of the coacervates (or condensates) are composed of Intrinsically Disordered Proteins (IDPs) or regions that are structureless, often in interaction with RNA and DNA. One of the more intriguing IDPs is the 526-residue RNA binding protein, Fused In Sarcoma (FUS), whose monomer conformations and condensates exhibit unusual behavior that are sensitive to solution conditions. By focussing principally on the N-terminus low complexity domain (FUS-LC comprising residues 1-214) and other truncations, we rationalize the findings of solid state NMR experiments, which show that FUS-LC adopts a non-polymorphic fibril (core-1) involving residues 39-95, flanked by fuzzy coats on both the N- and C- terminal ends. An alternate structure (core-2), whose free energy is comparable to core-1, emerges only in the truncated construct (residues 110-214). Both core-1 and core-2 fibrils are stabilized by a Tyrosine ladder as well as hydrophilic interactions. The morphologies (gels, fibrils, and glass-like behavior) adopted by FUS seem to vary greatly, depending on the experimental conditions. The effect of phosphorylation is site specific and affects the stability of the fibril depending on the sites that are phosphorylated. Many of the peculiarities associated with FUS may also be shared by other IDPs, such as TDP43 and hnRNPA2. We outline a number of problems for which there is no clear molecular understanding.
|
2105.04042
|
Xiaobin Guan
|
Xiaobin Guan, Jing M. Chen, Huanfeng Shen, Xinyao Xie
|
A modified two-leaf light use efficiency model for improving the
simulation of GPP using a radiation scalar
|
40 pages, 9 figures
| null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
A TL-LUE model modified with a radiation scalar (RTL-LUE) is developed in
this paper. The same maximum LUE is used for both sunlit and shaded leaves, and
the difference in LUE between sunlit and shaded leaf groups is determined by
the same radiation scalar. The RTL-LUE model was calibrated and validated at
169 global FLUXNET eddy covariance (EC) sites. Results indicate that although
GPP simulations from the TL-LUE model match well with the EC GPP, the RTL-LUE
model can further improve the simulation, for half-hour, 8-day, and yearly time
scales. The TL-LUE model tends to overestimate GPP under conditions of high
incoming photosynthetically active radiation (PAR), because the
radiation-independent LUE values for both sunlit and shaded leaves are only
suitable for low-medium (e.g. average) incoming PAR conditions. The errors in
the RTL-LUE model show lower sensitivity to PAR, and its GPP simulations can
better track the diurnal and seasonal variations of EC GPP by alleviating the
overestimation at noon and growing seasons associated with the TL-LUE model.
This study demonstrates the necessity of considering a radiation scalar in GPP
simulation in LUE models even if the first-order effect of radiation is already
considered through differentiating sunlit and shaded leaves. The simple RTL-LUE
developed in this study would be a useful alternative to complex process-based
models for global carbon cycle research.
|
[
{
"created": "Sun, 9 May 2021 22:59:25 GMT",
"version": "v1"
}
] |
2021-05-11
|
[
[
"Guan",
"Xiaobin",
""
],
[
"Chen",
"Jing M.",
""
],
[
"Shen",
"Huanfeng",
""
],
[
"Xie",
"Xinyao",
""
]
] |
A TL-LUE model modified with a radiation scalar (RTL-LUE) is developed in this paper. The same maximum LUE is used for both sunlit and shaded leaves, and the difference in LUE between sunlit and shaded leaf groups is determined by the same radiation scalar. The RTL-LUE model was calibrated and validated at 169 global FLUXNET eddy covariance (EC) sites. Results indicate that although GPP simulations from the TL-LUE model match well with the EC GPP, the RTL-LUE model can further improve the simulation, for half-hour, 8-day, and yearly time scales. The TL-LUE model tends to overestimate GPP under conditions of high incoming photosynthetically active radiation (PAR), because the radiation-independent LUE values for both sunlit and shaded leaves are only suitable for low-medium (e.g. average) incoming PAR conditions. The errors in the RTL-LUE model show lower sensitivity to PAR, and its GPP simulations can better track the diurnal and seasonal variations of EC GPP by alleviating the overestimation at noon and growing seasons associated with the TL-LUE model. This study demonstrates the necessity of considering a radiation scalar in GPP simulation in LUE models even if the first-order effect of radiation is already considered through differentiating sunlit and shaded leaves. The simple RTL-LUE developed in this study would be a useful alternative to complex process-based models for global carbon cycle research.
|
0904.3654
|
Yves Jouanneau
|
Luc Schuler (UCL), Sinead M. Ni Chadhain (BCAE), Yves Jouanneau
(LCBM), Christine Meyer (LCBM), Gerben J. Zylstra (BCAE), Pascal Hols (UCL),
Spiros N. Agathos (UCL)
|
Characterization of a novel angular dioxygenase from fluorene-degrading
Sphingomonas sp. strain LB126
| null |
Applied and Environmental Microbiology 74 (2008) 1050-1057
| null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this study, the genes involved in the initial attack on fluorene by
Sphingomonas sp. LB126 were investigated. The alpha and beta subunits of a dioxygenase
complex (FlnA1A2), showing 63% and 51% sequence identity respectively, with the
subunits of an angular dioxygenase from Gram-positive Terrabacter sp. DBF63,
were identified. When overexpressed in E. coli, FlnA1A2 was responsible for the
angular oxidation of fluorene, fluorenol, fluorenone, dibenzofuran and
dibenzo-p-dioxin. Moreover, FlnA1A2 was able to oxidize polycyclic aromatic
hydrocarbons and heteroaromatics, some of which were not oxidized by the
dioxygenase from Terrabacter sp. DBF63. Quantification of resulting oxidation
products showed that fluorene and phenanthrene were preferred substrates.
|
[
{
"created": "Thu, 23 Apr 2009 10:54:24 GMT",
"version": "v1"
}
] |
2009-04-24
|
[
[
"Schuler",
"Luc",
"",
"UCL"
],
[
"Chadhain",
"Sinead M. Ni",
"",
"BCAE"
],
[
"Jouanneau",
"Yves",
"",
"LCBM"
],
[
"Meyer",
"Christine",
"",
"LCBM"
],
[
"Zylstra",
"Gerben J.",
"",
"BCAE"
],
[
"Hols",
"Pascal",
"",
"UCL"
],
[
"Agathos",
"Spiros N.",
"",
"UCL"
]
] |
In this study, the genes involved in the initial attack on fluorene by Sphingomonas sp. LB126 were investigated. The alpha and beta subunits of a dioxygenase complex (FlnA1A2), showing 63% and 51% sequence identity respectively, with the subunits of an angular dioxygenase from Gram-positive Terrabacter sp. DBF63, were identified. When overexpressed in E. coli, FlnA1A2 was responsible for the angular oxidation of fluorene, fluorenol, fluorenone, dibenzofuran and dibenzo-p-dioxin. Moreover, FlnA1A2 was able to oxidize polycyclic aromatic hydrocarbons and heteroaromatics, some of which were not oxidized by the dioxygenase from Terrabacter sp. DBF63. Quantification of resulting oxidation products showed that fluorene and phenanthrene were preferred substrates.
|
1410.2098
|
Philippe Terrier PhD
|
Philippe Terrier and Fabienne Reynard
|
Effect of age on the variability and stability of gait: a
cross-sectional treadmill study in healthy individuals between 20 and 69
years of age
|
Author's version of an article published in Gait & Posture (2014)
| null |
10.1016/j.gaitpost.2014.09.024
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Falls during walking are a major health issue in the elderly population.
Older individuals are usually more cautious, work more slowly, take shorter
steps, and exhibit increased step-to-step variability. They often have impaired
dynamic balance, which explains their increased falling risk. Those locomotor
characteristics might be the result of the neurological/musculoskeletal
degenerative processes typical of advanced age or of a decline that began
earlier in life. In order to help distinguish between the two possibilities, we
analyzed the relationship between age and gait features among 100 individuals
aged 20-69. Trunk acceleration was measured during a 5-min treadmill session
using a 3D accelerometer. The following dependent variables were assessed:
preferred walking speed, walk ratio (step length normalized by step frequency),
gait instability (local dynamic stability, Lyapunov exponent method), and
acceleration variability (root mean square (RMS)). Using age as a predictor,
linear regressions were performed for each dependent variable. The results
indicated that walking speed, walk ratio and trunk acceleration variability
were not dependent on age (R2<2%). However, there was a significant quadratic
association between age and gait instability in the mediolateral direction
(R2=15%). We concluded that most of the typical gait features of older age do
not result from a slow evolution over the life course. On the other hand, gait
instability likely begins to increase at an accelerated rate as early as age
40-50. This finding supports the premise that local dynamic stability is likely
a relevant early indicator of falling risk.
|
[
{
"created": "Tue, 7 Oct 2014 07:27:17 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Oct 2014 13:06:50 GMT",
"version": "v2"
}
] |
2014-10-13
|
[
[
"Terrier",
"Philippe",
""
],
[
"Reynard",
"Fabienne",
""
]
] |
Falls during walking are a major health issue in the elderly population. Older individuals are usually more cautious, work more slowly, take shorter steps, and exhibit increased step-to-step variability. They often have impaired dynamic balance, which explains their increased falling risk. Those locomotor characteristics might be the result of the neurological/musculoskeletal degenerative processes typical of advanced age or of a decline that began earlier in life. In order to help distinguish between the two possibilities, we analyzed the relationship between age and gait features among 100 individuals aged 20-69. Trunk acceleration was measured during a 5-min treadmill session using a 3D accelerometer. The following dependent variables were assessed: preferred walking speed, walk ratio (step length normalized by step frequency), gait instability (local dynamic stability, Lyapunov exponent method), and acceleration variability (root mean square (RMS)). Using age as a predictor, linear regressions were performed for each dependent variable. The results indicated that walking speed, walk ratio and trunk acceleration variability were not dependent on age (R2<2%). However, there was a significant quadratic association between age and gait instability in the mediolateral direction (R2=15%). We concluded that most of the typical gait features of older age do not result from a slow evolution over the life course. On the other hand, gait instability likely begins to increase at an accelerated rate as early as age 40-50. This finding supports the premise that local dynamic stability is likely a relevant early indicator of falling risk.
|
2304.11863
|
Maksim Kitsak
|
Long MA and Piet Van Mieghem and Maksim Kitsak
|
Reporting delays: a widely neglected impact factor in COVID-19 forecasts
|
10 pages, 4 figures
| null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
Epidemic forecasts are only as good as the accuracy of epidemic measurements.
Is epidemic data, particularly COVID-19 epidemic data, clean and devoid of
noise? Common sense implies the negative answer. While we cannot evaluate the
cleanliness of the COVID-19 epidemic data in a holistic fashion, we can assess
the data for the presence of reporting delays. In our work, through the
analysis of the first COVID-19 wave, we find substantial reporting delays in
the published epidemic data. Motivated by the desire to enhance epidemic
forecasts, we develop a statistical framework to detect, uncover, and remove
reporting delays in the infectious, recovered, and deceased epidemic time
series. Our framework can uncover and analyze reporting delays in 8 regions
significantly affected by the first COVID-19 wave. Further, we demonstrate that
removing reporting delays from epidemic data using our statistical framework
may decrease the error in epidemic forecasts. While our statistical framework
can be used in combination with any epidemic forecast method that takes
infectious, recovered, and deceased data as input, to make a basic assessment, we
employed the classical SIRD epidemic model. Our results indicate that the
removal of reporting delays from the epidemic data may decrease the forecast
error by up to 50%. We anticipate that our framework will be indispensable in
the analysis of novel COVID-19 strains and other existing or novel infectious
diseases.
|
[
{
"created": "Mon, 24 Apr 2023 07:18:38 GMT",
"version": "v1"
}
] |
2023-04-25
|
[
[
"MA",
"Long",
""
],
[
"Van Mieghem",
"Piet",
""
],
[
"Kitsak",
"Maksim",
""
]
] |
Epidemic forecasts are only as good as the accuracy of epidemic measurements. Is epidemic data, particularly COVID-19 epidemic data, clean and devoid of noise? Common sense implies the negative answer. While we cannot evaluate the cleanliness of the COVID-19 epidemic data in a holistic fashion, we can assess the data for the presence of reporting delays. In our work, through the analysis of the first COVID-19 wave, we find substantial reporting delays in the published epidemic data. Motivated by the desire to enhance epidemic forecasts, we develop a statistical framework to detect, uncover, and remove reporting delays in the infectious, recovered, and deceased epidemic time series. Our framework can uncover and analyze reporting delays in 8 regions significantly affected by the first COVID-19 wave. Further, we demonstrate that removing reporting delays from epidemic data using our statistical framework may decrease the error in epidemic forecasts. While our statistical framework can be used in combination with any epidemic forecast method that takes infectious, recovered, and deceased data as input, to make a basic assessment, we employed the classical SIRD epidemic model. Our results indicate that the removal of reporting delays from the epidemic data may decrease the forecast error by up to 50%. We anticipate that our framework will be indispensable in the analysis of novel COVID-19 strains and other existing or novel infectious diseases.
|
2301.13387
|
Owen Queen
|
Cai W. John, Owen Queen, Wellington Muchero, and Scott J. Emrich
|
Deep Learning for Reference-Free Geolocation for Poplar Trees
|
Accepted at NeurIPS 2022 AI for Science Workshop
| null | null | null |
q-bio.GN cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
A core task in precision agriculture is the identification of climatic and
ecological conditions that are advantageous for a given crop. The most succinct
approach is geolocation, which is concerned with locating the native region of
a given sample based on its genetic makeup. Here, we investigate genomic
geolocation of Populus trichocarpa, or poplar, which has been identified by the
US Department of Energy as a fast-rotation biofuel crop to be harvested
nationwide. In particular, we approach geolocation from a reference-free
perspective, circumventing the need for compute-intensive processes such as
variant calling and alignment. Our model, MashNet, predicts latitude and
longitude for poplar trees from randomly-sampled, unaligned sequence fragments.
We show that our model performs comparably to Locator, a state-of-the-art
method based on aligned whole-genome sequence data. MashNet achieves an error
of 34.0 km^2 compared to Locator's 22.1 km^2. MashNet allows growers to quickly
and efficiently identify natural varieties that will be most productive in
their growth environment based on genotype. This paper explores geolocation for
precision agriculture while providing a framework and data source for further
development by the machine learning community.
|
[
{
"created": "Tue, 31 Jan 2023 03:37:47 GMT",
"version": "v1"
}
] |
2023-02-01
|
[
[
"John",
"Cai W.",
""
],
[
"Queen",
"Owen",
""
],
[
"Muchero",
"Wellington",
""
],
[
"Emrich",
"Scott J.",
""
]
] |
A core task in precision agriculture is the identification of climatic and ecological conditions that are advantageous for a given crop. The most succinct approach is geolocation, which is concerned with locating the native region of a given sample based on its genetic makeup. Here, we investigate genomic geolocation of Populus trichocarpa, or poplar, which has been identified by the US Department of Energy as a fast-rotation biofuel crop to be harvested nationwide. In particular, we approach geolocation from a reference-free perspective, circumventing the need for compute-intensive processes such as variant calling and alignment. Our model, MashNet, predicts latitude and longitude for poplar trees from randomly-sampled, unaligned sequence fragments. We show that our model performs comparably to Locator, a state-of-the-art method based on aligned whole-genome sequence data. MashNet achieves an error of 34.0 km^2 compared to Locator's 22.1 km^2. MashNet allows growers to quickly and efficiently identify natural varieties that will be most productive in their growth environment based on genotype. This paper explores geolocation for precision agriculture while providing a framework and data source for further development by the machine learning community.
|
q-bio/0401029
|
Laszlo Papp
|
S. Bumble
|
Toy Models and Statistical Mechanics of Subgraphs and Motifs of Genetic
and Protein Networks
|
7 pages, 2 figures
| null | null | null |
q-bio.MN
| null |
Theoretical physics is used for a toy model of molecular biology to assess
conditions that lead to the edge of chaos (EOC) in a network of biomolecules.
Results can enhance our ability to understand complex diseases and their
treatment or cure.
|
[
{
"created": "Thu, 22 Jan 2004 07:47:03 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Bumble",
"S.",
""
]
] |
Theoretical physics is used for a toy model of molecular biology to assess conditions that lead to the edge of chaos (EOC) in a network of biomolecules. Results can enhance our ability to understand complex diseases and their treatment or cure.
|
2112.15252
|
Eleodor Nichita
|
Eleodor Nichita, Mary-Anne Pietrusiak, Fangli Xie, Peter Schwanke and
Anjali Pandya
|
Modeling COVID-19 Transmission using IDSIM, an Epidemiological-Modelling
Desktop App with Multi-Level Immunization Capabilities
|
28 pages, 8 figures
| null |
10.14745/ccdr.v48i10a05
| null |
q-bio.PE
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
The COVID-19 pandemic has placed unprecedented demands on local public health
units in Ontario, Canada, one of which was the need for in-house
epidemiological-modelling capabilities. To address this need, Ontario Tech
University and the Durham Region Health Department developed a native Windows
desktop app that performs epidemiological modelling of infectious diseases. The
app is an implementation of a multi-stratified compartmental epidemiological
model that can accommodate multiple virus variants and levels of vaccination,
as well as public health measures such as physical distancing, contact tracing
followed by quarantine, and testing followed by isolation. This article
presents the epidemiological model and epidemiological-simulation results
obtained using the developed app. The simulations investigate the effects of
different factors on COVID-19 transmission in Durham Region, including
vaccination coverage, vaccine effectiveness, waning of vaccine-induced
immunity, advent of the Omicron variant and effect of COVID-19 booster vaccines
in reducing the number of infections and severe cases. Results indicate that,
for the Delta variant, natural immunity, in addition to vaccination-induced
immunity, is necessary to achieve herd immunity and that waning of
vaccine-induced immunity lengthens the time necessary to reach herd immunity.
In the absence of additional public health measures, a wave driven by the
Omicron variant is predicted to pose significant public health challenges with
infections predicted to peak in approximately two to three months, depending on
the rate of administration of booster doses.
|
[
{
"created": "Fri, 31 Dec 2021 00:32:23 GMT",
"version": "v1"
},
{
"created": "Sat, 5 Mar 2022 02:37:17 GMT",
"version": "v2"
}
] |
2022-11-10
|
[
[
"Nichita",
"Eleodor",
""
],
[
"Pietrusiak",
"Mary-Anne",
""
],
[
"Xie",
"Fangli",
""
],
[
"Schwanke",
"Peter",
""
],
[
"Pandya",
"Anjali",
""
]
] |
The COVID-19 pandemic has placed unprecedented demands on local public health units in Ontario, Canada, one of which was the need for in-house epidemiological-modelling capabilities. To address this need, Ontario Tech University and the Durham Region Health Department developed a native Windows desktop app that performs epidemiological modelling of infectious diseases. The app is an implementation of a multi-stratified compartmental epidemiological model that can accommodate multiple virus variants and levels of vaccination, as well as public health measures such as physical distancing, contact tracing followed by quarantine, and testing followed by isolation. This article presents the epidemiological model and epidemiological-simulation results obtained using the developed app. The simulations investigate the effects of different factors on COVID-19 transmission in Durham Region, including vaccination coverage, vaccine effectiveness, waning of vaccine-induced immunity, advent of the Omicron variant and effect of COVID-19 booster vaccines in reducing the number of infections and severe cases. Results indicate that, for the Delta variant, natural immunity, in addition to vaccination-induced immunity, is necessary to achieve herd immunity and that waning of vaccine-induced immunity lengthens the time necessary to reach herd immunity. In the absence of additional public health measures, a wave driven by the Omicron variant is predicted to pose significant public health challenges with infections predicted to peak in approximately two to three months, depending on the rate of administration of booster doses.
|
1604.02909
|
Benjamin Schott M.Sc.
|
Benjamin Schott, Johannes Stegmaier, Alexandre Arbaud, Markus Reischl,
Ralf Mikut, Francis L\'evi
|
Robust Individual Circadian Parameter Estimation for Biosignal-based
Personalisation of Cancer Chronotherapy
|
Conference Biosig 2016, Berlin
| null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In cancer treatment, chemotherapy is administered according to a constant
schedule. The chronotherapy approach, considering chronobiological drug
delivery, adapts the chemotherapy profile to the circadian rhythms of the human
organism. This reduces toxicity effects and at the same time enhances
efficiency of chemotherapy. To personalize cancer treatment, chemotherapy
profiles have to be further adapted to individual patients. Therefore, we
present a new model to represent cycle phenomena in circadian rhythms. The
model enables a more precise modelling of the underlying circadian rhythms. In
comparison with the standard model, our model delivers better results in all
defined quality indices. The new model can be used to adapt the chemotherapy
profile efficiently to individual patients. The adaptation to individual patients
contributes to the aim of personalizing cancer therapy.
|
[
{
"created": "Mon, 11 Apr 2016 12:14:07 GMT",
"version": "v1"
}
] |
2016-04-12
|
[
[
"Schott",
"Benjamin",
""
],
[
"Stegmaier",
"Johannes",
""
],
[
"Arbaud",
"Alexandre",
""
],
[
"Reischl",
"Markus",
""
],
[
"Mikut",
"Ralf",
""
],
[
"Lévi",
"Francis",
""
]
] |
In cancer treatment, chemotherapy is administered according to a constant schedule. The chronotherapy approach, considering chronobiological drug delivery, adapts the chemotherapy profile to the circadian rhythms of the human organism. This reduces toxicity effects and at the same time enhances efficiency of chemotherapy. To personalize cancer treatment, chemotherapy profiles have to be further adapted to individual patients. Therefore, we present a new model to represent cycle phenomena in circadian rhythms. The model enables a more precise modelling of the underlying circadian rhythms. In comparison with the standard model, our model delivers better results in all defined quality indices. The new model can be used to adapt the chemotherapy profile efficiently to individual patients. The adaptation to individual patients contributes to the aim of personalizing cancer therapy.
|
1910.04824
|
Nassim Versbraegen
|
Pieter Libin, Nassim Versbraegen, Ana B. Abecasis, Perpetua Gomes, Tom
Lenaerts, Ann Now\'e
|
Towards a phylogenetic measure to quantify HIV incidence
|
Accepted at BNAIC 2019 (Benelux AI conference)
| null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by/4.0/
|
One of the cornerstones in combating the HIV pandemic is being able to assess
the current state and evolution of local HIV epidemics. This remains a complex
problem, as many HIV infected individuals remain unaware of their infection
status, leading to parts of HIV epidemics being undiagnosed and under-reported.
To that end, we firstly present a method to learn epidemiological parameters
from phylogenetic trees, using approximate Bayesian computation (ABC). The
epidemiological parameters learned as a result of applying ABC are subsequently
used in epidemiological models that aim to simulate a specific epidemic.
Secondly, we continue by describing the development of a tree statistic, rooted
in coalescent theory, which we use to relate epidemiological parameters to a
phylogenetic tree, by using the simulated epidemics. We show that the presented
tree statistic enables differentiation of epidemiological parameters, while
only relying on phylogenetic trees, thus enabling the construction of new
methods to ascertain the epidemiological state of an HIV epidemic. By using
genetic data to infer epidemic sizes, we expect to enhance understanding of the
portions of the infected population in which diagnosis rates are low.
|
[
{
"created": "Thu, 10 Oct 2019 19:20:48 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Oct 2019 12:46:29 GMT",
"version": "v2"
},
{
"created": "Wed, 23 Oct 2019 16:52:42 GMT",
"version": "v3"
}
] |
2019-10-24
|
[
[
"Libin",
"Pieter",
""
],
[
"Versbraegen",
"Nassim",
""
],
[
"Abecasis",
"Ana B.",
""
],
[
"Gomes",
"Perpetua",
""
],
[
"Lenaerts",
"Tom",
""
],
[
"Nowé",
"Ann",
""
]
] |
One of the cornerstones in combating the HIV pandemic is being able to assess the current state and evolution of local HIV epidemics. This remains a complex problem, as many HIV infected individuals remain unaware of their infection status, leading to parts of HIV epidemics being undiagnosed and under-reported. To that end, we firstly present a method to learn epidemiological parameters from phylogenetic trees, using approximate Bayesian computation (ABC). The epidemiological parameters learned as a result of applying ABC are subsequently used in epidemiological models that aim to simulate a specific epidemic. Secondly, we continue by describing the development of a tree statistic, rooted in coalescent theory, which we use to relate epidemiological parameters to a phylogenetic tree, by using the simulated epidemics. We show that the presented tree statistic enables differentiation of epidemiological parameters, while only relying on phylogenetic trees, thus enabling the construction of new methods to ascertain the epidemiological state of an HIV epidemic. By using genetic data to infer epidemic sizes, we expect to enhance understanding of the portions of the infected population in which diagnosis rates are low.
|
2103.09954
|
Xiaochen Liu
|
Xiaochen Liu, Peter A. Robinson
|
Analytic model for feature maps in the primary visual cortex
|
28 pages, 15 figures
| null | null | null |
q-bio.NC physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A compact analytic model is proposed to describe the combined orientation
preference (OP) and ocular dominance (OD) features of simple cells and their
layout in the primary visual cortex (V1). This model consists of three parts:
(i) an anisotropic Laplacian (AL) operator that represents the local neural
sensitivity to the orientation of visual inputs; (ii) a receptive field (RF)
operator that models the anisotropic spatial RF that projects to a given V1
cell over scales of a few tenths of a millimeter and combines with the AL
operator to give an overall OP operator; and (iii) a map that describes how the
parameters of these operators vary approximately periodically across V1. The
parameters of the proposed model maximize the neural response at a given OP
with an OP tuning curve fitted to experimental results. It is found that the
anisotropy of the AL operator does not significantly affect OP selectivity,
which is dominated by the RF anisotropy, consistent with Hubel and Wiesel's
original conclusions that the orientation tuning width of a V1 simple cell is
inversely related to the elongation of its RF. A simplified OP-OD map is then
constructed to describe the approximately periodic OP-OD structure of V1 in a
compact form. Specifically, the map is approximated by retaining its dominant
spatial Fourier coefficients, which are shown to suffice to reconstruct the
overall structure of the OP-OD map. This representation is a suitable form to
analyze observed maps compactly and to be used in neural field theory of V1.
Application to independently simulated V1 structures shows that observed
irregularities in the map correspond to a spread of dominant coefficients in a
circle in Fourier space.
|
[
{
"created": "Wed, 17 Mar 2021 23:56:15 GMT",
"version": "v1"
}
] |
2021-03-19
|
[
[
"Liu",
"Xiaochen",
""
],
[
"Robinson",
"Peter A.",
""
]
] |
A compact analytic model is proposed to describe the combined orientation preference (OP) and ocular dominance (OD) features of simple cells and their layout in the primary visual cortex (V1). This model consists of three parts: (i) an anisotropic Laplacian (AL) operator that represents the local neural sensitivity to the orientation of visual inputs; (ii) a receptive field (RF) operator that models the anisotropic spatial RF that projects to a given V1 cell over scales of a few tenths of a millimeter and combines with the AL operator to give an overall OP operator; and (iii) a map that describes how the parameters of these operators vary approximately periodically across V1. The parameters of the proposed model maximize the neural response at a given OP with an OP tuning curve fitted to experimental results. It is found that the anisotropy of the AL operator does not significantly affect OP selectivity, which is dominated by the RF anisotropy, consistent with Hubel and Wiesel's original conclusions that the orientation tuning width of a V1 simple cell is inversely related to the elongation of its RF. A simplified OP-OD map is then constructed to describe the approximately periodic OP-OD structure of V1 in a compact form. Specifically, the map is approximated by retaining its dominant spatial Fourier coefficients, which are shown to suffice to reconstruct the overall structure of the OP-OD map. This representation is a suitable form to analyze observed maps compactly and to be used in neural field theory of V1. Application to independently simulated V1 structures shows that observed irregularities in the map correspond to a spread of dominant coefficients in a circle in Fourier space.
|
2204.05109
|
Peng Liu
|
Peng Liu and Yanyan Zheng
|
Temporal and spatial evolution of the distribution related to the number
of COVID-19 pandemic
| null |
Physica A 603, 127837 (2022)
|
10.1016/j.physa.2022.127837
| null |
q-bio.PE physics.data-an physics.soc-ph stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This work systematically conducts a data analysis based on the numbers of
both cumulative and daily confirmed COVID-19 cases and deaths in a time span
from April 2020 to June 2022 for over 200 countries around the world. This
analysis aims to reveal the temporal and spatial evolution of the
country-level distribution observed in the COVID-19 pandemic, and obtains some
interesting results as follows. (1) The distributions of the numbers for
cumulative confirmed cases and deaths obey power-law in early stages of
COVID-19 and stretched exponential function in subsequent course. (2) The
distributions of the numbers for daily confirmed cases and deaths obey
power-law in early and late stages of COVID-19 and stretched exponential
function in middle stages. The crossover region between power-law and stretched
exponential behaviour seems to depend on the evolution of "infection" event and
"death" event. Such observation implies a kind of important symmetry related to
the dynamics process of COVID-19 spreading. (3) The distributions of the
normalized numbers for each metric show a temporal scaling behaviour in 2-year
period, and are well described by stretched exponential function. The
observation of power-law and stretched exponential behaviour in such
country-level distributions suggests underlying intrinsic dynamics of a virus
spreading process in human interconnected society. And thus it is important for
understanding and mathematically modeling the COVID-19 pandemic.
|
[
{
"created": "Fri, 8 Apr 2022 04:51:03 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Aug 2022 12:44:56 GMT",
"version": "v2"
}
] |
2023-03-20
|
[
[
"Liu",
"Peng",
""
],
[
"Zheng",
"Yanyan",
""
]
] |
This work systematically conducts a data analysis based on the numbers of both cumulative and daily confirmed COVID-19 cases and deaths in a time span from April 2020 to June 2022 for over 200 countries around the world. This analysis aims to reveal the temporal and spatial evolution of the country-level distribution observed in the COVID-19 pandemic, and obtains some interesting results as follows. (1) The distributions of the numbers for cumulative confirmed cases and deaths obey power-law in early stages of COVID-19 and stretched exponential function in subsequent course. (2) The distributions of the numbers for daily confirmed cases and deaths obey power-law in early and late stages of COVID-19 and stretched exponential function in middle stages. The crossover region between power-law and stretched exponential behaviour seems to depend on the evolution of "infection" event and "death" event. Such observation implies a kind of important symmetry related to the dynamics process of COVID-19 spreading. (3) The distributions of the normalized numbers for each metric show a temporal scaling behaviour in 2-year period, and are well described by stretched exponential function. The observation of power-law and stretched exponential behaviour in such country-level distributions suggests underlying intrinsic dynamics of a virus spreading process in human interconnected society. And thus it is important for understanding and mathematically modeling the COVID-19 pandemic.
|
2005.06180
|
Marco Piangerelli
|
Andrea De Simone and Marco Piangerelli
|
The impact of undetected cases on tracking epidemics: the case of
COVID-19
|
23 Pages, 10 Figures
|
Chaos, Solitons & Fractals, Volume 140, 2020, 110167
|
10.1016/j.chaos.2020.110167
| null |
q-bio.PE q-bio.QM stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the key indicators used in tracking the evolution of an infectious
disease is the reproduction number. This quantity is usually computed using the
reported number of cases, but ignoring that many more individuals may be
infected (e.g. asymptomatics). We propose a statistical procedure to quantify
the impact of undetected infectious cases on the determination of the
effective reproduction number. Our approach is stochastic, data-driven and not
relying on any compartmental model. It is applied to the COVID-19 case in eight
different countries and all Italian regions, showing that the effect of
undetected cases leads to estimates of the effective reproduction numbers
larger than those obtained only with the reported cases by factors ranging from
two to ten. Our findings urge caution about deciding when and how to relax
containment measures based on the value of the reproduction number.
|
[
{
"created": "Wed, 13 May 2020 06:49:46 GMT",
"version": "v1"
}
] |
2020-09-11
|
[
[
"De Simone",
"Andrea",
""
],
[
"Piangerelli",
"Marco",
""
]
] |
One of the key indicators used in tracking the evolution of an infectious disease is the reproduction number. This quantity is usually computed using the reported number of cases, but ignoring that many more individuals may be infected (e.g. asymptomatics). We propose a statistical procedure to quantify the impact of undetected infectious cases on the determination of the effective reproduction number. Our approach is stochastic, data-driven and not relying on any compartmental model. It is applied to the COVID-19 case in eight different countries and all Italian regions, showing that the effect of undetected cases leads to estimates of the effective reproduction numbers larger than those obtained only with the reported cases by factors ranging from two to ten. Our findings urge caution about deciding when and how to relax containment measures based on the value of the reproduction number.
|
2003.05462
|
Armando G. M. Neves
|
Evandro P. de Souza and Armando G. M. Neves
|
Exact fixation probabilities for the Birth-Death and Death-Birth
frequency-dependent Moran processes on the star graph
|
20 pages, 4 figures
| null | null | null |
q-bio.PE math.PR physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Broom and Rycht\'{a}\v{r} [Proc. R. Soc. A (2008) 464, 2609--2627] found an
exact solution for the fixation probabilities of the Moran process for a
structured population, in which the interaction structure among individuals is
given by the so-called star graph, i.e. one central vertex and $n$ leaves, the
leaves connecting only to the center. We generalize on their solution by
allowing individuals' fitnesses to depend on the population frequency, and also
by allowing a possible change in the order of reproduction and death draws. In
their cited paper, Broom and Rycht\'{a}\v{r} considered the birth-death (BD)
process, in which at each time step an individual is first drawn for
reproduction and then an individual is selected for death. In the death-birth
(DB) process, the order of the draws is reversed. It may be seen that the order
of the draws makes a big difference in the fixation probabilities. Our solution
method applies to both the BD and the DB cases. As expected, the exact formulae
for the fixation probabilities are complicated. We will also illustrate them
with some examples and provide results on the asymptotic behavior of the
fixation probabilities when the number $n$ of leaves in the graph tends to
infinity.
|
[
{
"created": "Wed, 11 Mar 2020 18:01:40 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Mar 2020 13:41:17 GMT",
"version": "v2"
}
] |
2020-03-19
|
[
[
"de Souza",
"Evandro P.",
""
],
[
"Neves",
"Armando G. M.",
""
]
] |
Broom and Rycht\'{a}\v{r} [Proc. R. Soc. A (2008) 464, 2609--2627] found an exact solution for the fixation probabilities of the Moran process for a structured population, in which the interaction structure among individuals is given by the so-called star graph, i.e. one central vertex and $n$ leaves, the leaves connecting only to the center. We generalize on their solution by allowing individuals' fitnesses to depend on the population frequency, and also by allowing a possible change in the order of reproduction and death draws. In their cited paper, Broom and Rycht\'{a}\v{r} considered the birth-death (BD) process, in which at each time step an individual is first drawn for reproduction and then an individual is selected for death. In the death-birth (DB) process, the order of the draws is reversed. It may be seen that the order of the draws makes a big difference in the fixation probabilities. Our solution method applies to both the BD and the DB cases. As expected, the exact formulae for the fixation probabilities are complicated. We will also illustrate them with some examples and provide results on the asymptotic behavior of the fixation probabilities when the number $n$ of leaves in the graph tends to infinity.
|
2008.07417
|
Kuang Liu
|
Kuang Liu, Alison E. Patteson, Edward J. Banigan, J. M. Schwarz
|
Dynamic nuclear structure emerges from chromatin crosslinks and motors
|
18 pages, 21 figures
|
Phys. Rev. Lett. 126, 158101 (2021)
|
10.1103/PhysRevLett.126.158101
| null |
q-bio.SC
|
http://creativecommons.org/licenses/by/4.0/
|
The cell nucleus houses the chromosomes, which are linked to a soft shell of
lamin filaments. Experiments indicate that correlated chromosome dynamics and
nuclear shape fluctuations arise from motor activity. To identify the physical
mechanisms, we develop a model of an active, crosslinked Rouse chain bound to a
polymeric shell. System-sized correlated motions occur but require both motor
activity {\it and} crosslinks. Contractile motors, in particular, enhance
chromosome dynamics by driving anomalous density fluctuations. Nuclear shape
fluctuations depend on motor strength, crosslinking, and chromosome-lamina
binding. Therefore, complex chromatin dynamics and nuclear shape emerge from a
minimal, active chromosome-lamina system.
|
[
{
"created": "Mon, 17 Aug 2020 15:42:01 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Mar 2021 21:10:13 GMT",
"version": "v2"
}
] |
2021-04-21
|
[
[
"Liu",
"Kuang",
""
],
[
"Patteson",
"Alison E.",
""
],
[
"Banigan",
"Edward J.",
""
],
[
"Schwarz",
"J. M.",
""
]
] |
The cell nucleus houses the chromosomes, which are linked to a soft shell of lamin filaments. Experiments indicate that correlated chromosome dynamics and nuclear shape fluctuations arise from motor activity. To identify the physical mechanisms, we develop a model of an active, crosslinked Rouse chain bound to a polymeric shell. System-sized correlated motions occur but require both motor activity {\it and} crosslinks. Contractile motors, in particular, enhance chromosome dynamics by driving anomalous density fluctuations. Nuclear shape fluctuations depend on motor strength, crosslinking, and chromosome-lamina binding. Therefore, complex chromatin dynamics and nuclear shape emerge from a minimal, active chromosome-lamina system.
|
1403.4033
|
Markus Pagitz Dr
|
Markus Pagitz and Remco I. Leine
|
Continuum Model for Pressure Actuated Cellular Structures
|
11 pages, 9 figures
| null | null | null |
q-bio.QM cond-mat.soft physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Previous work introduced a lower-dimensional numerical model for the
geometric nonlinear simulation and optimization of compliant pressure actuated
cellular structures. This model takes into account hinge eccentricities as well
as rotational and axial cell side springs. The aim of this article is twofold.
First, previous work is extended by introducing an associated continuum model.
This model is an exact geometric representation of a cellular structure and the
basis for the spring stiffnesses and eccentricities of the numerical model.
Second, the state variables of the continuum and numerical model are linked via
discontinuous stress constraints on the one hand and spring stiffnesses and
hinge eccentricities on the other hand. An efficient optimization algorithm that
fully couples both sets of variables is presented. The performance of the
proposed approach is demonstrated with the help of an example.
|
[
{
"created": "Mon, 17 Mar 2014 09:16:37 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Jul 2015 16:15:57 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Aug 2015 12:58:31 GMT",
"version": "v3"
},
{
"created": "Fri, 28 Jul 2017 14:07:13 GMT",
"version": "v4"
}
] |
2017-07-31
|
[
[
"Pagitz",
"Markus",
""
],
[
"Leine",
"Remco I.",
""
]
] |
Previous work introduced a lower-dimensional numerical model for the geometric nonlinear simulation and optimization of compliant pressure actuated cellular structures. This model takes into account hinge eccentricities as well as rotational and axial cell side springs. The aim of this article is twofold. First, previous work is extended by introducing an associated continuum model. This model is an exact geometric representation of a cellular structure and the basis for the spring stiffnesses and eccentricities of the numerical model. Second, the state variables of the continuum and numerical model are linked via discontinuous stress constraints on the one hand and spring stiffnesses and hinge eccentricities on the other hand. An efficient optimization algorithm that fully couples both sets of variables is presented. The performance of the proposed approach is demonstrated with the help of an example.
|
1803.11270
|
Melanie Hopkins
|
Melanie J. Hopkins, David W. Bapst, Carl Simpson, Rachel C. M. Warnock
|
The inseparability of sampling and time and its influence on attempts to
unify the molecular and fossil records
|
29 pages, 1 figure. All authors contributed equally to this work
| null | null | null |
q-bio.PE
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The two major approaches to studying macroevolution in deep time are the
fossil record and reconstructed relationships among extant taxa from molecular
data. Results based on one approach sometimes conflict with those based on the
other, with inconsistencies often attributed to inherent flaws of one (or the
other) data source. What is unquestionable is that both the molecular and
fossil records are limited reflections of the same evolutionary history, and
any contradiction between them represents a failure of our existing models to
explain the patterns we observe. Fortunately, the different limitations of each
record provide an opportunity to test or calibrate the other, and new
methodological developments leverage both records simultaneously. However, we
must reckon with the distinct relationships between sampling and time in the
fossil record and molecular phylogenies. These differences impact our
recognition of baselines, and the analytical incorporation of age estimate
uncertainty. These differences in perspective also influence how different
practitioners view the past and evolutionary time itself, bearing important
implications for the generality of methodological advancements, and differences
in the philosophical approach to macroevolutionary theory across fields.
|
[
{
"created": "Thu, 29 Mar 2018 22:08:09 GMT",
"version": "v1"
}
] |
2018-04-02
|
[
[
"Hopkins",
"Melanie J.",
""
],
[
"Bapst",
"David W.",
""
],
[
"Simpson",
"Carl",
""
],
[
"Warnock",
"Rachel C. M.",
""
]
] |
The two major approaches to studying macroevolution in deep time are the fossil record and reconstructed relationships among extant taxa from molecular data. Results based on one approach sometimes conflict with those based on the other, with inconsistencies often attributed to inherent flaws of one (or the other) data source. What is unquestionable is that both the molecular and fossil records are limited reflections of the same evolutionary history, and any contradiction between them represents a failure of our existing models to explain the patterns we observe. Fortunately, the different limitations of each record provide an opportunity to test or calibrate the other, and new methodological developments leverage both records simultaneously. However, we must reckon with the distinct relationships between sampling and time in the fossil record and molecular phylogenies. These differences impact our recognition of baselines, and the analytical incorporation of age estimate uncertainty. These differences in perspective also influence how different practitioners view the past and evolutionary time itself, bearing important implications for the generality of methodological advancements, and differences in the philosophical approach to macroevolutionary theory across fields.
|
1602.00650
|
Romeil Sandhu
|
Romeil Sandhu, Sarah Tannenbaum, Daniel Diolaiti, Alberto
Ambesi-Impiombato, Andrew Kung, and Allen Tannenbaum
|
A Quantitative Analysis of Localized Robustness of MYCN in Neuroblastoma
| null | null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The amplification of the gene MYCN (V-myc myelocytomatosis viral related
oncogene, neuroblastoma derived) has been a well-documented indicator for poor
prognosis in neuroblastoma, a childhood cancer. Unfortunately, there has been
limited success in understanding MYCN functionality in the landscape of
neuroblastoma and more importantly, given that MYCN has been deemed
undruggable, the need to potentially illuminate key opportunities that
indirectly target MYCN is of great interest. To this end, this work employs an
emerging quantitative technique from network science, namely network curvature,
to quantify the biological robustness of MYCN and its surrounding neighborhood.
In particular, when amplified in Stage IV cancer, MYCN exhibits higher
curvature (more robust) than those samples with under expressed MYCN levels.
When examining the surrounding neighborhood, the above argument still holds for
network curvature, but is lost when only analyzing differential expression - a
common technique amongst oncologists and computational/molecular biologists.
This finding points to the problem (and possible solution) of drug targeting in
the context of complexity and indirect cell signaling effects that have often
been obfuscated through traditional techniques.
|
[
{
"created": "Sun, 13 Dec 2015 20:43:21 GMT",
"version": "v1"
}
] |
2016-02-02
|
[
[
"Sandhu",
"Romeil",
""
],
[
"Tannenbaum",
"Sarah",
""
],
[
"Diolaiti",
"Daniel",
""
],
[
"Ambesi-Impiombato",
"Alberto",
""
],
[
"Kung",
"Andrew",
""
],
[
"Tannenbaum",
"Allen",
""
]
] |
The amplification of the gene MYCN (V-myc myelocytomatosis viral related oncogene, neuroblastoma derived) has been a well-documented indicator for poor prognosis in neuroblastoma, a childhood cancer. Unfortunately, there has been limited success in understanding MYCN functionality in the landscape of neuroblastoma and more importantly, given that MYCN has been deemed undruggable, the need to potentially illuminate key opportunities that indirectly target MYCN is of great interest. To this end, this work employs an emerging quantitative technique from network science, namely network curvature, to quantify the biological robustness of MYCN and its surrounding neighborhood. In particular, when amplified in Stage IV cancer, MYCN exhibits higher curvature (more robust) than those samples with under expressed MYCN levels. When examining the surrounding neighborhood, the above argument still holds for network curvature, but is lost when only analyzing differential expression - a common technique amongst oncologists and computational/molecular biologists. This finding points to the problem (and possible solution) of drug targeting in the context of complexity and indirect cell signaling effects that have often been obfuscated through traditional techniques.
|
2010.00541
|
Ines Samengo Dr.
|
Nicol\'as Vattuone (1,2), Thomas Wachtler (1), In\'es Samengo (1) ((1)
Department of Biology II, Ludwig-Maximilians-Universit\"at M\"unchen and
Bernstein Center for Computational Neuroscience, Munich, Germany. (2)
Department of Medical Physics and Instituto Balseiro, Centro At\'omico
Bariloche, Argentina)
|
Perceptual spaces and their symmetries: The geometry of color space
|
(v1) 42 pages, 9 figures, 1 appendix. (v2) 47 pages, 10 figures, 1
appendix. (v3) Text modified after peer-review process. (v4) 34 pages, 1
appendix, 10 figures. Article accepted to be published at Mathematical
Neuroscience and Applications (v5) ISSN added
|
Mathematical Neuroscience and Applications, Volume 1 (July 15,
2021) mna:7108
|
10.46298/mna.7108
| null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
Our sensory systems transform external signals into neural activity, thereby
producing percepts. We are endowed with an intuitive notion of similarity
between percepts, that need not reflect the proximity of the physical
properties of the corresponding external stimuli. The quantitative
characterization of the geometry of percepts is therefore an endeavour that
must be accomplished behaviorally. Here we characterized the geometry of color
space using discrimination and matching experiments. We proposed an
individually tailored metric defined in terms of the minimal chromatic
difference required for each observer to differentiate a stimulus from its
surround. Next, we showed that this perceptual metric was particularly adequate
to describe two additional experiments, since it revealed the natural symmetry
of perceptual computations. In one of the experiments, observers were required
to discriminate two stimuli surrounded by a chromaticity that differed from
that of the tested stimuli. In the perceptual coordinates, the change in
discrimination thresholds induced by the surround followed a simple law that
only depended on the perceptual distance between the surround and each of the
two compared stimuli. In the other experiment, subjects were asked to match the
color of two stimuli surrounded by two different chromaticities. Again, in the
perceptual coordinates the induction effect produced by surrounds followed a
simple, symmetric law. We conclude that the individually-tailored notion of
perceptual distance reveals the symmetry of the laws governing perceptual
computations.
|
[
{
"created": "Thu, 1 Oct 2020 16:52:29 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Jan 2021 22:11:26 GMT",
"version": "v2"
},
{
"created": "Wed, 19 May 2021 14:26:16 GMT",
"version": "v3"
},
{
"created": "Wed, 14 Jul 2021 14:38:57 GMT",
"version": "v4"
},
{
"created": "Fri, 13 Aug 2021 17:46:52 GMT",
"version": "v5"
}
] |
2023-06-22
|
[
[
"Vattuone",
"Nicolás",
""
],
[
"Wachtler",
"Thomas",
""
],
[
"Samengo",
"Inés",
""
]
] |
Our sensory systems transform external signals into neural activity, thereby producing percepts. We are endowed with an intuitive notion of similarity between percepts, that need not reflect the proximity of the physical properties of the corresponding external stimuli. The quantitative characterization of the geometry of percepts is therefore an endeavour that must be accomplished behaviorally. Here we characterized the geometry of color space using discrimination and matching experiments. We proposed an individually tailored metric defined in terms of the minimal chromatic difference required for each observer to differentiate a stimulus from its surround. Next, we showed that this perceptual metric was particularly adequate to describe two additional experiments, since it revealed the natural symmetry of perceptual computations. In one of the experiments, observers were required to discriminate two stimuli surrounded by a chromaticity that differed from that of the tested stimuli. In the perceptual coordinates, the change in discrimination thresholds induced by the surround followed a simple law that only depended on the perceptual distance between the surround and each of the two compared stimuli. In the other experiment, subjects were asked to match the color of two stimuli surrounded by two different chromaticities. Again, in the perceptual coordinates the induction effect produced by surrounds followed a simple, symmetric law. We conclude that the individually-tailored notion of perceptual distance reveals the symmetry of the laws governing perceptual computations.
|
1909.09111
|
Leo Polansky
|
Leo Polansky, Ken B. Newman, Lara Mitchell
|
Improving inference for nonlinear state-space models of animal
population dynamics given biased sequential life stage data
| null | null | null | null |
q-bio.PE q-bio.QM stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
State-space models (SSMs) are a popular tool for modeling animal abundances.
Inference difficulties for simple linear SSMs are well known, particularly in
relation to simultaneous estimation of process and observation variances.
Several remedies to overcome estimation problems have been studied for
relatively simple SSMs, but whether these challenges and proposed remedies
apply for nonlinear stage-structured SSMs, an important class of ecological
models, is less well understood. Here we identify improvements for inference
about nonlinear stage-structured SSMs fit with biased sequential life stage
data. Theoretical analyses indicate parameter identifiability requires
covariates in the state processes. Simulation studies show that plugging in
externally estimated observation variances, as opposed to jointly estimating
them with other parameters, reduces bias and standard error of estimates. In
contrast to previous results for simple linear SSMs, strong confounding between
jointly estimated process and observation variance parameters was not found in
the models explored here. However, when observation variance was also estimated
in the motivating case study, the resulting process variance estimates were
implausibly low (near-zero). As SSMs are used in increasingly complex ways,
understanding when inference can be expected to be successful, and what aids
it, becomes more important. Our study illustrates (i) the need for relevant
process covariates and (ii) the benefits of using externally estimated
observation variances for inference for nonlinear stage-structured SSMs.
|
[
{
"created": "Thu, 19 Sep 2019 17:38:24 GMT",
"version": "v1"
}
] |
2019-09-20
|
[
[
"Polansky",
"Leo",
""
],
[
"Newman",
"Ken B.",
""
],
[
"Mitchell",
"Lara",
""
]
] |
State-space models (SSMs) are a popular tool for modeling animal abundances. Inference difficulties for simple linear SSMs are well known, particularly in relation to simultaneous estimation of process and observation variances. Several remedies to overcome estimation problems have been studied for relatively simple SSMs, but whether these challenges and proposed remedies apply for nonlinear stage-structured SSMs, an important class of ecological models, is less well understood. Here we identify improvements for inference about nonlinear stage-structured SSMs fit with biased sequential life stage data. Theoretical analyses indicate parameter identifiability requires covariates in the state processes. Simulation studies show that plugging in externally estimated observation variances, as opposed to jointly estimating them with other parameters, reduces bias and standard error of estimates. In contrast to previous results for simple linear SSMs, strong confounding between jointly estimated process and observation variance parameters was not found in the models explored here. However, when observation variance was also estimated in the motivating case study, the resulting process variance estimates were implausibly low (near-zero). As SSMs are used in increasingly complex ways, understanding when inference can be expected to be successful, and what aids it, becomes more important. Our study illustrates (i) the need for relevant process covariates and (ii) the benefits of using externally estimated observation variances for inference for nonlinear stage-structured SSMs.
|
2102.03910
|
Rossana Segreto
|
Rossana Segreto (1), Yuri Deigin (2), Kevin McCairn (3), Alejandro
Sousa (4 and 5), Dan Sirotkin (6), Karl Sirotkin (6), Jonathan J. Couey (7),
Adrian Jones (8), Daoyu Zhang (9) ((1) Department of Microbiology, University
of Innsbruck, Austria, (2) Youthereum Genetics Inc., Toronto, Ontario,
Canada, (3) Synaptek - Deep Learning Solutions, Gifu, Japan, (4) Regional
Hospital of Monforte, Lugo, Spain, (5) University of Santiago de Compostela,
Spain, (6) Karl Sirotkin LLC, Lake Mary, FL, USA, (7) University of
Pittsburgh, School of Medicine, USA, (8) Independent bioinformatics
researcher, (9) Independent genetics researcher)
|
An open debate on SARS-CoV-2's proximal origin is long overdue
| null | null | null | null |
q-bio.PE q-bio.GN
|
http://creativecommons.org/licenses/by/4.0/
|
There is a near consensus view that SARS-CoV-2 has a natural zoonotic origin;
however, several characteristics of SARS-CoV-2 taken together are not easily
explained by a natural zoonotic origin hypothesis. These include: a low rate of
evolution in the early phase of transmission; the lack of evidence of
recombination events; a high pre-existing binding to human ACE2; a novel furin
cleavage site insert; a flat glycan binding domain of the spike protein which
conflicts with host evasion survival patterns exhibited by other coronaviruses,
and high human and mouse peptide mimicry. Initial assumptions against a
laboratory origin, by contrast, have remained unsubstantiated. Furthermore,
over a year after the initial outbreak in Wuhan, there is still no clear
evidence of zoonotic transfer from a bat or intermediate species. Given the
immense social and economic impact of this pandemic, identifying the true
origin of SARS-CoV-2 is fundamental to preventing future outbreaks. The search
for SARS-CoV-2's origin should include an open and unbiased inquiry into a
possible laboratory origin.
|
[
{
"created": "Sun, 7 Feb 2021 20:54:08 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Feb 2021 14:22:19 GMT",
"version": "v2"
}
] |
2021-02-10
|
[
[
"Segreto",
"Rossana",
"",
      "1"
],
[
"Deigin",
"Yuri",
"",
      "2"
],
[
"McCairn",
"Kevin",
"",
      "3"
],
[
"Sousa",
"Alejandro",
"",
"4 and 5"
],
[
"Sirotkin",
"Dan",
""
],
[
"Sirotkin",
"Karl",
""
],
[
"Couey",
"Jonathan J.",
""
],
[
"Jones",
"Adrian",
""
],
[
"Zhang",
"Daoyu",
""
]
] |
There is a near consensus view that SARS-CoV-2 has a natural zoonotic origin; however, several characteristics of SARS-CoV-2 taken together are not easily explained by a natural zoonotic origin hypothesis. These include: a low rate of evolution in the early phase of transmission; the lack of evidence of recombination events; a high pre-existing binding to human ACE2; a novel furin cleavage site insert; a flat glycan binding domain of the spike protein which conflicts with host evasion survival patterns exhibited by other coronaviruses, and high human and mouse peptide mimicry. Initial assumptions against a laboratory origin, by contrast, have remained unsubstantiated. Furthermore, over a year after the initial outbreak in Wuhan, there is still no clear evidence of zoonotic transfer from a bat or intermediate species. Given the immense social and economic impact of this pandemic, identifying the true origin of SARS-CoV-2 is fundamental to preventing future outbreaks. The search for SARS-CoV-2's origin should include an open and unbiased inquiry into a possible laboratory origin.
|
2102.02077
|
Pilar Cossio Dr.
|
Julian Giraldo-Barreto, Sebastian Ortiz, Erik H. Thiede, Karen
Palacio-Rodriguez, Bob Carpenter, Alex H. Barnett, Pilar Cossio
|
A Bayesian approach for extracting free energy profiles from
cryo-electron microscopy experiments using a path collective variable
| null | null | null | null |
q-bio.BM physics.bio-ph
|
http://creativecommons.org/licenses/by/4.0/
|
Cryo-electron microscopy (cryo-EM) extracts single-particle density
projections of individual biomolecules. Although cryo-EM is widely used for 3D
reconstruction, due to its single-particle nature, it has the potential to
provide information about the biomolecule's conformational variability and
underlying free energy landscape. However, treating cryo-EM as a
single-molecule technique is challenging because of the low signal-to-noise
ratio (SNR) in the individual particles. In this work, we developed the
cryo-BIFE method, cryo-EM Bayesian Inference of Free Energy profiles, that uses
a path collective variable to extract free energy profiles and their
uncertainties from cryo-EM images. We tested the framework over several
synthetic systems, where we controlled the imaging parameters and conditions.
We found that for realistic cryo-EM environments and relevant biomolecular
systems, it is possible to recover the underlying free energy, with the pose
accuracy and SNR as crucial determinants. Then, we used the method to study the
conformational transitions of a calcium-activated channel with real cryo-EM
particles. Interestingly, we recover the most probable conformation (used to
generate a high resolution reconstruction of the calcium-bound state), and we
find two additional meta-stable states, one which corresponds to the
calcium-unbound conformation. As expected for turnover transitions within the
same sample, the activation barriers are of the order of a couple $k_BT$.
Extracting free energy profiles from cryo-EM will enable a more complete
characterization of the thermodynamic ensemble of biomolecules.
|
[
{
"created": "Wed, 3 Feb 2021 14:22:28 GMT",
"version": "v1"
}
] |
2021-02-04
|
[
[
"Giraldo-Barreto",
"Julian",
""
],
[
"Ortiz",
"Sebastian",
""
],
[
"Thiede",
"Erik H.",
""
],
[
"Palacio-Rodriguez",
"Karen",
""
],
[
"Carpenter",
"Bob",
""
],
[
"Barnett",
"Alex H.",
""
],
[
"Cossio",
"Pilar",
""
]
] |
Cryo-electron microscopy (cryo-EM) extracts single-particle density projections of individual biomolecules. Although cryo-EM is widely used for 3D reconstruction, due to its single-particle nature, it has the potential to provide information about the biomolecule's conformational variability and underlying free energy landscape. However, treating cryo-EM as a single-molecule technique is challenging because of the low signal-to-noise ratio (SNR) in the individual particles. In this work, we developed the cryo-BIFE method, cryo-EM Bayesian Inference of Free Energy profiles, that uses a path collective variable to extract free energy profiles and their uncertainties from cryo-EM images. We tested the framework over several synthetic systems, where we controlled the imaging parameters and conditions. We found that for realistic cryo-EM environments and relevant biomolecular systems, it is possible to recover the underlying free energy, with the pose accuracy and SNR as crucial determinants. Then, we used the method to study the conformational transitions of a calcium-activated channel with real cryo-EM particles. Interestingly, we recover the most probable conformation (used to generate a high resolution reconstruction of the calcium-bound state), and we find two additional meta-stable states, one which corresponds to the calcium-unbound conformation. As expected for turnover transitions within the same sample, the activation barriers are of the order of a couple $k_BT$. Extracting free energy profiles from cryo-EM will enable a more complete characterization of the thermodynamic ensemble of biomolecules.
|
2311.13337
|
Michiel Van Der Vlag
|
Michiel van der Vlag, Lionel Kusch, Alain Destexhe, Viktor Jirsa,
Sandra Diaz-Pier and Jennifer S. Goldman
|
Vast TVB parameter space exploration: A Modular Framework for
Accelerating the Multi-Scale Simulation of Human Brain Dynamics
|
21 pages, 9 figures
| null | null | null |
q-bio.NC cs.DC
|
http://creativecommons.org/licenses/by/4.0/
|
Global neural dynamics emerge from multi-scale brain structures, with neurons
communicating through synapses to form transiently communicating networks.
Network activity arises from intercellular communication that depends on the
structure of connectome tracts and local connection, intracellular signalling
cascades, and the extracellular molecular milieu that regulate cellular
properties. Multi-scale models of brain function have begun to directly link
the emergence of global brain dynamics in conscious and unconscious brain
states to microscopic changes at the level of cells. In particular, AdEx
mean-field models representing statistical properties of local populations of
neurons have been connected following human tractography data to represent
multi-scale neural phenomena in simulations using The Virtual Brain (TVB).
While mean-field models can be run on personal computers for short simulations,
or in parallel on high-performance computing (HPC) architectures for longer
simulations and parameter scans, the computational burden remains high and vast
areas of the parameter space remain unexplored. In this work, we report that
our TVB-HPC framework, a modular set of methods used here to implement the
TVB-AdEx model for GPU and analyze emergent dynamics, notably accelerates
simulations and substantially reduces computational resource requirements. The
framework preserves the stability and robustness of the TVB-AdEx model, thus
facilitating finer resolution exploration of vast parameter spaces as well as
longer simulations previously near impossible to perform. Given that simulation
and analysis toolkits are made public as open-source packages, our framework
serves as a template onto which other models can be easily scripted and
personalized datasets can be used for studies of inter-individual variability
of parameters related to functional brain dynamics.
|
[
{
"created": "Wed, 22 Nov 2023 12:01:33 GMT",
"version": "v1"
}
] |
2023-11-23
|
[
[
"van der Vlag",
"Michiel",
""
],
[
"Kusch",
"Lionel",
""
],
[
"Destexhe",
"Alain",
""
],
[
"Jirsa",
"Viktor",
""
],
[
"Diaz-Pier",
"Sandra",
""
],
[
"Goldman",
"Jennifer S.",
""
]
] |
Global neural dynamics emerge from multi-scale brain structures, with neurons communicating through synapses to form transiently communicating networks. Network activity arises from intercellular communication that depends on the structure of connectome tracts and local connection, intracellular signalling cascades, and the extracellular molecular milieu that regulate cellular properties. Multi-scale models of brain function have begun to directly link the emergence of global brain dynamics in conscious and unconscious brain states to microscopic changes at the level of cells. In particular, AdEx mean-field models representing statistical properties of local populations of neurons have been connected following human tractography data to represent multi-scale neural phenomena in simulations using The Virtual Brain (TVB). While mean-field models can be run on personal computers for short simulations, or in parallel on high-performance computing (HPC) architectures for longer simulations and parameter scans, the computational burden remains high and vast areas of the parameter space remain unexplored. In this work, we report that our TVB-HPC framework, a modular set of methods used here to implement the TVB-AdEx model for GPU and analyze emergent dynamics, notably accelerates simulations and substantially reduces computational resource requirements. The framework preserves the stability and robustness of the TVB-AdEx model, thus facilitating finer resolution exploration of vast parameter spaces as well as longer simulations previously near impossible to perform. Given that simulation and analysis toolkits are made public as open-source packages, our framework serves as a template onto which other models can be easily scripted and personalized datasets can be used for studies of inter-individual variability of parameters related to functional brain dynamics.
|
2004.03251
|
Steffen Eikenberry
|
Steffen E. Eikenberry, Marina Mancuso, Enahoro Iboi, Tin Phan, Keenan
Eikenberry, Yang Kuang, Eric Kostelich, Abba B. Gumel
|
To mask or not to mask: Modeling the potential for face mask use by the
general public to curtail the COVID-19 pandemic
|
20 pages, 9 figures
|
Infectious Disease Modelling. 5 (2020) 248-255
|
10.1016/j.idm.2020.04.001
| null |
q-bio.PE q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Face mask use by the general public for limiting the spread of the COVID-19
pandemic is controversial, though increasingly recommended, and the potential
of this intervention is not well understood. We develop a compartmental model
for assessing the community-wide impact of mask use by the general,
asymptomatic public, a portion of which may be asymptomatically infectious.
Model simulations, using data relevant to COVID-19 dynamics in the US states of
New York and Washington, suggest that broad adoption of even relatively
ineffective face masks may meaningfully reduce community transmission of
COVID-19 and decrease peak hospitalizations and deaths. Moreover, mask use
decreases the effective transmission rate in nearly linear proportion to the
product of mask effectiveness (as a fraction of potentially infectious contacts
blocked) and coverage rate (as a fraction of the general population), while the
impact on epidemiologic outcomes (death, hospitalizations) is highly nonlinear,
indicating masks could synergize with other non-pharmaceutical measures. Masks
are found to be useful with respect to both preventing illness in healthy
persons and preventing asymptomatic transmission. Hypothetical mask adoption
scenarios suggest that immediate near universal (80%) adoption of moderately
(50%) effective masks could prevent on the order of 17--45% of projected deaths
over two months in New York, while decreasing the peak daily death rate by
34--58%, absent other changes in epidemic dynamics. Our results suggest use of
face masks by the general public is potentially of high value in curtailing
community transmission and the burden of the pandemic. The community-wide
benefits are likely to be greatest when face masks are used in conjunction with
other non-pharmaceutical practices (such as social-distancing), and when
adoption is nearly universal (nation-wide) and compliance is high.
|
[
{
"created": "Tue, 7 Apr 2020 10:41:30 GMT",
"version": "v1"
}
] |
2020-05-05
|
[
[
"Eikenberry",
"Steffen E.",
""
],
[
"Mancuso",
"Marina",
""
],
[
"Iboi",
"Enahoro",
""
],
[
"Phan",
"Tin",
""
],
[
"Eikenberry",
"Keenan",
""
],
[
"Kuang",
"Yang",
""
],
[
"Kostelich",
"Eric",
""
],
[
"Gumel",
"Abba B.",
""
]
] |
Face mask use by the general public for limiting the spread of the COVID-19 pandemic is controversial, though increasingly recommended, and the potential of this intervention is not well understood. We develop a compartmental model for assessing the community-wide impact of mask use by the general, asymptomatic public, a portion of which may be asymptomatically infectious. Model simulations, using data relevant to COVID-19 dynamics in the US states of New York and Washington, suggest that broad adoption of even relatively ineffective face masks may meaningfully reduce community transmission of COVID-19 and decrease peak hospitalizations and deaths. Moreover, mask use decreases the effective transmission rate in nearly linear proportion to the product of mask effectiveness (as a fraction of potentially infectious contacts blocked) and coverage rate (as a fraction of the general population), while the impact on epidemiologic outcomes (death, hospitalizations) is highly nonlinear, indicating masks could synergize with other non-pharmaceutical measures. Masks are found to be useful with respect to both preventing illness in healthy persons and preventing asymptomatic transmission. Hypothetical mask adoption scenarios suggest that immediate near universal (80%) adoption of moderately (50%) effective masks could prevent on the order of 17--45% of projected deaths over two months in New York, while decreasing the peak daily death rate by 34--58%, absent other changes in epidemic dynamics. Our results suggest use of face masks by the general public is potentially of high value in curtailing community transmission and the burden of the pandemic. The community-wide benefits are likely to be greatest when face masks are used in conjunction with other non-pharmaceutical practices (such as social-distancing), and when adoption is nearly universal (nation-wide) and compliance is high.
|
1702.05065
|
Gonzalo Hernandez Hernandez
|
Gonzalo Hernandez-Hernandez, Jesse Myers, Enric Alvarez-Lacalle,
Yohannes Shiferaw
|
Nonlinear signaling on biological networks: the role of stochasticity
and spectral clustering
|
30 pages, 12 figures
| null |
10.1103/PhysRevE.95.032313
| null |
q-bio.MN physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Signal transduction within biological cells is governed by networks of
interacting proteins. Communication between these proteins is mediated by
signaling molecules which bind to receptors and induce stochastic transitions
between different conformational states. Signaling is typically a cooperative
process which requires the occurrence of multiple binding events so that
reaction rates have a nonlinear dependence on the amount of signaling molecule.
It is this nonlinearity that endows biological signaling networks with robust
switch-like properties which are critical to their biological function. In this
study, we investigate how the properties of these signaling systems depend on
the network architecture. Our main result is that these nonlinear networks
exhibit bistability where the network activity can switch between states that
correspond to a low and high activity level. We show that this bistable regime
emerges at a critical coupling strength that is determined by the spectral
structure of the network. In particular, the set of nodes that correspond to
large components of the leading eigenvector of the adjacency matrix determines
the onset of bistability. Above this transition, the eigenvectors of the
adjacency matrix determine a hierarchy of clusters, defined by its spectral
properties, which are activated sequentially with increasing network activity.
We argue further that the onset of bistability occurs either continuously or
discontinuously depending upon whether the leading eigenvector is localized or
delocalized. Finally, we show that at low network coupling stochastic
transitions to the active branch are also driven by the set of nodes that
contribute more strongly to the leading eigenvector.
|
[
{
"created": "Thu, 16 Feb 2017 17:48:18 GMT",
"version": "v1"
}
] |
2017-04-05
|
[
[
"Hernandez-Hernandez",
"Gonzalo",
""
],
[
"Myers",
"Jesse",
""
],
[
"Alvarez-Lacalle",
"Enric",
""
],
[
"Shiferaw",
"Yohannes",
""
]
] |
Signal transduction within biological cells is governed by networks of interacting proteins. Communication between these proteins is mediated by signaling molecules which bind to receptors and induce stochastic transitions between different conformational states. Signaling is typically a cooperative process which requires the occurrence of multiple binding events so that reaction rates have a nonlinear dependence on the amount of signaling molecule. It is this nonlinearity that endows biological signaling networks with robust switch-like properties which are critical to their biological function. In this study, we investigate how the properties of these signaling systems depend on the network architecture. Our main result is that these nonlinear networks exhibit bistability where the network activity can switch between states that correspond to a low and high activity level. We show that this bistable regime emerges at a critical coupling strength that is determined by the spectral structure of the network. In particular, the set of nodes that correspond to large components of the leading eigenvector of the adjacency matrix determines the onset of bistability. Above this transition, the eigenvectors of the adjacency matrix determine a hierarchy of clusters, defined by its spectral properties, which are activated sequentially with increasing network activity. We argue further that the onset of bistability occurs either continuously or discontinuously depending upon whether the leading eigenvector is localized or delocalized. Finally, we show that at low network coupling stochastic transitions to the active branch are also driven by the set of nodes that contribute more strongly to the leading eigenvector.
|
1111.0097
|
Michael Famulare
|
Michael Famulare and Adrienne Fairhall
|
Adaptive probabilistic neural coding from deterministic spiking neurons:
analysis from first principles
|
v2: revised/expanded results/discussion regarding contrast gain
control. 51 pages, 12 figures
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A neuron transforms its input into output spikes, and this transformation is
the basic unit of computation in the nervous system. The spiking response of
the neuron to a complex, time-varying input can be predicted from the detailed
biophysical properties of the neuron, modeled as a deterministic nonlinear
dynamical system. In the tradition of neural coding, however, a neuron or
neural system is treated as a black box and statistical techniques are used to
identify functional models of its encoding properties. The goal of this work is
to connect the mechanistic, biophysical approach to neuronal function to a
description in terms of a coding model. Building from preceding work at the
single neuron level, we develop from first principles a mathematical theory
mapping the relationships between two simple but powerful classes of models:
deterministic integrate-and-fire dynamical models and linear-nonlinear coding
models. To do so, we develop an approach for studying a nonlinear dynamical
system by conditioning on an observed linear estimator. We derive asymptotic
closed-form expressions for the linear filter and estimates for the nonlinear
decision function of the linear/nonlinear model. We analytically derive the
dependence of the linear filter on the input statistics and we show how
deterministic nonlinear dynamics can be used to modulate the properties of a
probabilistic code. We demonstrate that integrate-and-fire models without any
additional currents can perform perfect contrast gain control, a sophisticated
adaptive computation, and we identify the general dynamical principles
responsible. We then design from first principles a nonlinear dynamical model
that implements gain control. While we focus on the integrate-and-fire models
for tractability, the framework we propose to relate LN and dynamical models
generalizes naturally to more complex biophysical models.
|
[
{
"created": "Tue, 1 Nov 2011 00:57:39 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Dec 2011 22:13:03 GMT",
"version": "v2"
}
] |
2011-12-19
|
[
[
"Famulare",
"Michael",
""
],
[
"Fairhall",
"Adrienne",
""
]
] |
A neuron transforms its input into output spikes, and this transformation is the basic unit of computation in the nervous system. The spiking response of the neuron to a complex, time-varying input can be predicted from the detailed biophysical properties of the neuron, modeled as a deterministic nonlinear dynamical system. In the tradition of neural coding, however, a neuron or neural system is treated as a black box and statistical techniques are used to identify functional models of its encoding properties. The goal of this work is to connect the mechanistic, biophysical approach to neuronal function to a description in terms of a coding model. Building from preceding work at the single neuron level, we develop from first principles a mathematical theory mapping the relationships between two simple but powerful classes of models: deterministic integrate-and-fire dynamical models and linear-nonlinear coding models. To do so, we develop an approach for studying a nonlinear dynamical system by conditioning on an observed linear estimator. We derive asymptotic closed-form expressions for the linear filter and estimates for the nonlinear decision function of the linear/nonlinear model. We analytically derive the dependence of the linear filter on the input statistics and we show how deterministic nonlinear dynamics can be used to modulate the properties of a probabilistic code. We demonstrate that integrate-and-fire models without any additional currents can perform perfect contrast gain control, a sophisticated adaptive computation, and we identify the general dynamical principles responsible. We then design from first principles a nonlinear dynamical model that implements gain control. While we focus on the integrate-and-fire models for tractability, the framework we propose to relate LN and dynamical models generalizes naturally to more complex biophysical models.
|
2306.10553
|
Seemi Tasnim Alam
|
Bushra Rahman Eipa, Md Riadul Islam, Raquiba Sultana, Seemi Tasnim
Alam, Tanaj Mehjabin, Nohor Noon Haque Bushra, S M Moniruzzaman, Md Ifrat
Hossain, Shamia Naz Rashna, Md Aftab Uddin
|
Determination of the antibiotic susceptibility pattern of Gram positive
bacteria causing UTI in Dhaka Bangladesh
|
16 pages, 4 figures
| null | null |
SUB_MBO_2301
|
q-bio.QM q-bio.BM
|
http://creativecommons.org/licenses/by-sa/4.0/
|
  Urinary tract infection (UTI) is one of the most common infections in
medical settings worldwide, and antimicrobial resistance (AMR) is a global
threat to human health that is associated with many diseases. As antibiotics
are used for the treatment of infectious diseases, the rate of resistance is
increasing day by day. Gram-positive pathogens associated with UTI are
commonly found in urine samples collected from people of different age
groups. The study was conducted in a diagnostic center in Dhaka, Bangladesh,
on a total of 1308 urine samples collected from November 2021 to April 2022.
Gram-positive pathogens were isolated and antimicrobial susceptibility tests
were performed. Among the 121 samples yielding Gram-positive bacteria, the
highest prevalence of UTI was found in the 21-30 year age group.
Enterococcus spp. (33.05%), Staphylococcus aureus (27.27%), Streptococcus
spp. (20.66%), and beta-hemolytic streptococci (19.00%) were the most common
causative agents of UTI. The majority of isolates were identified as
multi-drug resistant (MDR). The highest rates of antibiotic resistance were
found against azithromycin (75%) and cefixime (64.46%). This research
highlights the need for regular surveillance of the antibiotic
susceptibility of Gram-positive bacteria to raise awareness of proper
antibiotic use and thus minimize drug resistance.
|
[
{
"created": "Sun, 18 Jun 2023 13:26:52 GMT",
"version": "v1"
}
] |
2023-06-21
|
[
[
"Eipa",
"Bushra Rahman",
""
],
[
"Islam",
"Md Riadul",
""
],
[
"Sultana",
"Raquiba",
""
],
[
"Alam",
"Seemi Tasnim",
""
],
[
"Mehjabin",
"Tanaj",
""
],
[
"Bushra",
"Nohor Noon Haque",
""
],
[
"Moniruzzaman",
"S M",
""
],
[
"Hossain",
"Md Ifrat",
""
],
[
"Rashna",
"Shamia Naz",
""
],
[
"Uddin",
"Md Aftab",
""
]
] |
Urinary tract infection (UTI) is one of the most common infections in medical settings worldwide, and antimicrobial resistance (AMR) is a global threat to human health that is associated with many diseases. As antibiotics are used for the treatment of infectious diseases, the rate of resistance is increasing day by day. Gram-positive pathogens associated with UTI are commonly found in urine samples collected from people of different age groups. The study was conducted in a diagnostic center in Dhaka, Bangladesh, on a total of 1308 urine samples collected from November 2021 to April 2022. Gram-positive pathogens were isolated and antimicrobial susceptibility tests were performed. Among the 121 samples yielding Gram-positive bacteria, the highest prevalence of UTI was found in the 21-30 year age group. Enterococcus spp. (33.05%), Staphylococcus aureus (27.27%), Streptococcus spp. (20.66%), and beta-hemolytic streptococci (19.00%) were the most common causative agents of UTI. The majority of isolates were identified as multi-drug resistant (MDR). The highest rates of antibiotic resistance were found against azithromycin (75%) and cefixime (64.46%). This research highlights the need for regular surveillance of the antibiotic susceptibility of Gram-positive bacteria to raise awareness of proper antibiotic use and thus minimize drug resistance.
|
0811.3515
|
Noa Sela
|
Galit Lev-Maor, Oren Ram, Eddo Kim, Noa Sela, Amir Goren, Erez Y
Levanon, Gil Ast
|
Intronic Alus Influence Alternative Splicing
| null |
PLoS Genet 2008 4(9): e1000204
| null | null |
q-bio.GN q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Examination of the human transcriptome reveals higher levels of RNA editing
than in any other organism tested to date. This is indicative of extensive
double-stranded RNA (dsRNA) formation within the human transcriptome. Most of
the editing sites are located in the primate-specific retrotransposed element
called Alu. A large fraction of Alus are found in intronic sequences, implying
extensive Alu-Alu dsRNA formation in mRNA precursors. Yet, the effect of these
intronic Alus on splicing of the flanking exons is largely unknown. Here, we
show that more Alus flank alternatively spliced exons than constitutively
spliced ones; this is especially notable for those exons that have changed
their mode of splicing from constitutive to alternative during human evolution.
This implies that Alu insertions may change the mode of splicing of the
flanking exons. Indeed, we demonstrate experimentally that two Alu elements
that were inserted into an intron in opposite orientation undergo base-pairing,
as evident by RNA editing, and affect the splicing patterns of a downstream
exon, shifting it from constitutive to alternative. Our results indicate the
importance of intronic Alus in influencing the splicing of flanking exons,
further emphasizing the role of Alus in shaping the human transcriptome.
|
[
{
"created": "Fri, 21 Nov 2008 11:07:47 GMT",
"version": "v1"
}
] |
2008-11-24
|
[
[
"Lev-Maor",
"Galit",
""
],
[
"Ram",
"Oren",
""
],
[
"Kim",
"Eddo",
""
],
[
"Sela",
"Noa",
""
],
[
"Goren",
"Amir",
""
],
[
"Levanon",
"Erez Y",
""
],
[
"Ast",
"Gil",
""
]
] |
Examination of the human transcriptome reveals higher levels of RNA editing than in any other organism tested to date. This is indicative of extensive double-stranded RNA (dsRNA) formation within the human transcriptome. Most of the editing sites are located in the primate-specific retrotransposed element called Alu. A large fraction of Alus are found in intronic sequences, implying extensive Alu-Alu dsRNA formation in mRNA precursors. Yet, the effect of these intronic Alus on splicing of the flanking exons is largely unknown. Here, we show that more Alus flank alternatively spliced exons than constitutively spliced ones; this is especially notable for those exons that have changed their mode of splicing from constitutive to alternative during human evolution. This implies that Alu insertions may change the mode of splicing of the flanking exons. Indeed, we demonstrate experimentally that two Alu elements that were inserted into an intron in opposite orientation undergo base-pairing, as evident by RNA editing, and affect the splicing patterns of a downstream exon, shifting it from constitutive to alternative. Our results indicate the importance of intronic Alus in influencing the splicing of flanking exons, further emphasizing the role of Alus in shaping the human transcriptome.
|
q-bio/0311003
|
Alexey K. Mazur
|
Dimitri E. Kamashev and Alexey K. Mazur
|
Relaxation of DNA curvature by single stranded breaks: Simulations and
experiments
|
13 two-column pages, 9 integrated eps plates, RevTeX4
| null | null | null |
q-bio.BM cond-mat.soft physics.bio-ph
| null |
The recently proposed compressed backbone theory suggested that the intrinsic
curvature in DNA can result from a geometric mismatch between the specific
backbone length and optimal base stacking orientations. It predicted that the
curvature in A-tract repeats can be relaxed by introducing single stranded
breaks (nicks). This effect has not been tested earlier and it would not be
accounted for by alternative models of DNA bending. Here the curvature in a
specifically designed series of nicked DNA fragments is tested experimentally
by gel mobility assays and, simultaneously, by free molecular dynamics
simulations. Single stranded breaks produce virtually no effect upon the gel
mobility of the random sequence DNA. In contrast, nicked A-tract fragments
reveal a regular modulation of curvature depending upon the position of the
strand break with respect to the overall bend. Maximal relaxation is observed
when nicks occur inside A-tracts. The results are partially reproduced in
simulations. Analysis of computed curved DNA conformations reveals a group of
sugar atoms that exhibit reduced backbone length within A-tracts, which can
correspond to the compression hypothesis.
|
[
{
"created": "Thu, 6 Nov 2003 15:08:55 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Kamashev",
"Dimitri E.",
""
],
[
"Mazur",
"Alexey K.",
""
]
] |
The recently proposed compressed backbone theory suggested that the intrinsic curvature in DNA can result from a geometric mismatch between the specific backbone length and optimal base stacking orientations. It predicted that the curvature in A-tract repeats can be relaxed by introducing single stranded breaks (nicks). This effect has not been tested earlier and it would not be accounted for by alternative models of DNA bending. Here the curvature in a specifically designed series of nicked DNA fragments is tested experimentally by gel mobility assays and, simultaneously, by free molecular dynamics simulations. Single stranded breaks produce virtually no effect upon the gel mobility of the random sequence DNA. In contrast, nicked A-tract fragments reveal a regular modulation of curvature depending upon the position of the strand break with respect to the overall bend. Maximal relaxation is observed when nicks occur inside A-tracts. The results are partially reproduced in simulations. Analysis of computed curved DNA conformations reveals a group of sugar atoms that exhibit reduced backbone length within A-tracts, which can correspond to the compression hypothesis.
|
1606.06668
|
Adam Porter
|
Adam H. Porter, Norman A. Johnson and Alexander Y. Tulchinsky
|
Competitive binding of transcription factors drives Mendelian dominance
in regulatory genetic pathways
|
3.3 Mb file. This revision includes a more thorough analysis of
dominance propagation and the effects of genetic background in the 3-locus
model
| null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We report a new mechanism for allelic dominance in regulatory genetic
interactions that we call binding dominance. We investigated a biophysical
model of gene regulation, where the fractional occupancy of a transcription
factor (TF) on the cis-regulated promoter site it binds to is determined by
binding energy (-{\Delta}G) and TF dosage. Transcription and gene expression
proceed when the TF is bound to the promoter. In diploids, individuals may be
heterozygous at the cis-site, at the TF's coding region, or at the TF's own
promoter, which determines allele-specific dosage. We find that when the TF's
coding region is heterozygous, TF alleles compete for occupancy at the cis
sites and the tighter-binding TF is dominant in proportion to the difference in
binding strength. When the TF's own promoter is heterozygous, the TF produced
at the higher dosage is also dominant. Cis-site heterozygotes have additive
expression and therefore codominant phenotypes. Binding dominance propagates to
affect the expression of downstream loci and it is sensitive in both magnitude
and direction to genetic background, but its detectability often attenuates.
While binding dominance is inevitable at the molecular level, it is difficult
to detect in the phenotype under some biophysical conditions, more so when TF
dosage is high and allele-specific binding affinities are similar. A body of
empirical research on the biophysics of TF binding demonstrates the
plausibility of this mechanism of dominance, but studies of gene expression
under competitive binding in heterozygotes in a diversity of genetic
backgrounds are needed.
|
[
{
"created": "Tue, 21 Jun 2016 17:18:38 GMT",
"version": "v1"
},
{
"created": "Sat, 27 Aug 2016 00:49:53 GMT",
"version": "v2"
}
] |
2016-08-30
|
[
[
"Porter",
"Adam H.",
""
],
[
"Johnson",
"Norman A.",
""
],
[
"Tulchinsky",
"Alexander Y.",
""
]
] |
We report a new mechanism for allelic dominance in regulatory genetic interactions that we call binding dominance. We investigated a biophysical model of gene regulation, where the fractional occupancy of a transcription factor (TF) on the cis-regulated promoter site it binds to is determined by binding energy (-{\Delta}G) and TF dosage. Transcription and gene expression proceed when the TF is bound to the promoter. In diploids, individuals may be heterozygous at the cis-site, at the TF's coding region, or at the TF's own promoter, which determines allele-specific dosage. We find that when the TF's coding region is heterozygous, TF alleles compete for occupancy at the cis sites and the tighter-binding TF is dominant in proportion to the difference in binding strength. When the TF's own promoter is heterozygous, the TF produced at the higher dosage is also dominant. Cis-site heterozygotes have additive expression and therefore codominant phenotypes. Binding dominance propagates to affect the expression of downstream loci and it is sensitive in both magnitude and direction to genetic background, but its detectability often attenuates. While binding dominance is inevitable at the molecular level, it is difficult to detect in the phenotype under some biophysical conditions, more so when TF dosage is high and allele-specific binding affinities are similar. A body of empirical research on the biophysics of TF binding demonstrates the plausibility of this mechanism of dominance, but studies of gene expression under competitive binding in heterozygotes in a diversity of genetic backgrounds are needed.
|
1308.6534
|
David Wick
|
W. David Wick
|
Stopping the SuperSpreader Epidemic: the lessons from SARS (with,
perhaps, applications to MERS)
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
I discuss the so-called SuperSpreader epidemic, for which SARS is the
canonical example (and, perhaps, MERS will be another). I use simulation by an
agent-based model as well as the mathematics of multi-type branching-processes
to illustrate how the SS epidemic differs from the more familiar uniform
epidemic (e.g., caused by influenza). The conclusions may surprise the reader:
(a) The SS epidemic must be described by at least two numbers, such as the mean
reproductive number (of "secondary" cases caused by a "primary case"), R0, and
the variance of same, call it V0; (b) Even if R0 > 1, if V0 >> R0 the
probability that an infection-chain caused by one primary case goes extinct
without intervention may be close to one (e.g., 0.97); (c) The SS epidemic may
have a long "kindling period" in which sporadic cases appear (transmitted from
some unknown host) and generate a cluster of cases, but the chains peter out,
perhaps generating a false sense of security that a pandemic will not occur;
(d) Interventions such as isolation (or contact-tracing and secondary case
isolation) may prove efficacious even without driving R0 below one; (e) The
efficacy of such interventions diminishes, but slowly, with increasing V0 at
fixed R0. From these considerations, I argue that the SS epidemic has dynamics
sufficiently distinct from the uniform case that efficacious public-health
interventions can be designed even in the absence of a vaccine or other form of
treatment.
|
[
{
"created": "Thu, 29 Aug 2013 17:44:47 GMT",
"version": "v1"
}
] |
2013-08-30
|
[
[
"Wick",
"W. David",
""
]
] |
I discuss the so-called SuperSpreader epidemic, for which SARS is the canonical example (and, perhaps, MERS will be another). I use simulation by an agent-based model as well as the mathematics of multi-type branching-processes to illustrate how the SS epidemic differs from the more familiar uniform epidemic (e.g., caused by influenza). The conclusions may surprise the reader: (a) The SS epidemic must be described by at least two numbers, such as the mean reproductive number (of "secondary" cases caused by a "primary case"), R0, and the variance of same, call it V0; (b) Even if R0 > 1, if V0 >> R0 the probability that an infection-chain caused by one primary case goes extinct without intervention may be close to one (e.g., 0.97); (c) The SS epidemic may have a long "kindling period" in which sporadic cases appear (transmitted from some unknown host) and generate a cluster of cases, but the chains peter out, perhaps generating a false sense of security that a pandemic will not occur; (d) Interventions such as isolation (or contact-tracing and secondary case isolation) may prove efficacious even without driving R0 below one; (e) The efficacy of such interventions diminishes, but slowly, with increasing V0 at fixed R0. From these considerations, I argue that the SS epidemic has dynamics sufficiently distinct from the uniform case that efficacious public-health interventions can be designed even in the absence of a vaccine or other form of treatment.
|
1807.08039
|
Chieh Lo
|
Chieh Lo and Radu Marculescu
|
PGLasso: Microbial Community Detection through Phylogenetic Graphical
Lasso
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
  Due to the recent advances in high-throughput sequencing technologies, it
has become possible to directly analyze microbial communities in the human body
and in the environment. Knowledge of how microbes interact with each other and
form functional communities can provide a solid foundation to understand
microbiome related diseases; this can serve as a key step towards precision
medicine. In order to understand how microbes form communities, we propose a
two-step approach: First, we infer the microbial co-occurrence network by
integrating a graph inference algorithm with phylogenetic information obtained
directly from metagenomic data. Next, we utilize a network-based community
detection algorithm to cluster microbes into functional groups where microbes
in each group are highly correlated. We also curate a "gold standard" network
based on the microbe-metabolic relationships which are extracted directly from
the metagenomic data. Utilizing community detection on the resulting microbial
metabolic pathway bipartite graph, the community membership for each microbe
can be viewed as the true label when evaluating against other existing methods.
Overall, our proposed framework Phylogenetic Graphical Lasso (PGLasso)
outperforms existing methods with gains larger than 100% in terms of Adjusted
Rand Index (ARI) which is commonly used to quantify the goodness of
clusterings.
|
[
{
"created": "Fri, 20 Jul 2018 21:36:02 GMT",
"version": "v1"
}
] |
2018-07-24
|
[
[
"Lo",
"Chieh",
""
],
[
"Marculescu",
"Radu",
""
]
] |
Due to the recent advances in high-throughput sequencing technologies, it has become possible to directly analyze microbial communities in the human body and in the environment. Knowledge of how microbes interact with each other and form functional communities can provide a solid foundation to understand microbiome related diseases; this can serve as a key step towards precision medicine. In order to understand how microbes form communities, we propose a two-step approach: First, we infer the microbial co-occurrence network by integrating a graph inference algorithm with phylogenetic information obtained directly from metagenomic data. Next, we utilize a network-based community detection algorithm to cluster microbes into functional groups where microbes in each group are highly correlated. We also curate a "gold standard" network based on the microbe-metabolic relationships which are extracted directly from the metagenomic data. Utilizing community detection on the resulting microbial metabolic pathway bipartite graph, the community membership for each microbe can be viewed as the true label when evaluating against other existing methods. Overall, our proposed framework Phylogenetic Graphical Lasso (PGLasso) outperforms existing methods with gains larger than 100% in terms of Adjusted Rand Index (ARI) which is commonly used to quantify the goodness of clusterings.
|
2107.02962
|
Bin-Guo Wang
|
Bin-Guo Wang, Shunxiang Huang, Yongping Xiong, Ming-Zhen Xin, Jing LI,
Jiangqian Zhang, Zhihui Ma
|
Transmission Dynamics of COVID-19 Pandemic Non-pharmaceutical
Interventions and Vaccination
| null | null | null | null |
q-bio.PE math.DS
|
http://creativecommons.org/licenses/by/4.0/
|
Non-pharmaceutical interventions (NPIs) play an important role in the early
stage control of the COVID-19 pandemic. Vaccination is considered to be the
inevitable course to stop the spread of SARS-CoV-2. Based on this mechanism, an
SVEIR COVID-19 model with vaccination and NPIs is proposed. By means of the
basic reproduction number $\mathscr{R}_{0}$, it is shown that the disease-free
equilibrium is globally attractive if $\mathscr{R}_{0}<1$, and COVID-19 is
uniformly persistent if $\mathscr{R}_{0}>1$. Taking Indian data as an example in
the numerical simulation, we find that our dynamical results fit well with the
statistical data. Consequently, we forecast the spreading trend of the COVID-19
pandemic in India. Furthermore, our results imply that increasing the intensity
of NPIs will greatly reduce the number of confirmed cases. In particular, NPIs
are indispensable even if all the people were vaccinated when the efficiency of
the vaccine is relatively low. By simulating the relationships among the basic
reproduction number $\mathscr{R}_{0}$, the vaccination rate and the efficiency
of the vaccine, we find that it is impossible to achieve herd immunity without
NPIs when the efficiency of the vaccine is lower than $76.9\%$. Therefore, the
herd immunity area is defined by the evolution of the relationship between the
vaccination rate and the efficiency of the vaccine. In the two-patch study, we
give the conditions for travel between India and China to be opened.
Furthermore, an appropriate dispersal of population between India and China is
obtained. A discussion completes the paper.
|
[
{
"created": "Wed, 7 Jul 2021 00:51:09 GMT",
"version": "v1"
}
] |
2021-07-08
|
[
[
"Wang",
"Bin-Guo",
""
],
[
"Huang",
"Shunxiang",
""
],
[
"Xiong",
"Yongping",
""
],
[
"Xin",
"Ming-Zhen",
""
],
[
"LI",
"Jing",
""
],
[
"Zhang",
"Jiangqian",
""
],
[
"Ma",
"Zhihui",
""
]
] |
Non-pharmaceutical interventions (NPIs) play an important role in the early stage control of the COVID-19 pandemic. Vaccination is considered to be the inevitable course to stop the spread of SARS-CoV-2. Based on this mechanism, an SVEIR COVID-19 model with vaccination and NPIs is proposed. By means of the basic reproduction number $\mathscr{R}_{0}$, it is shown that the disease-free equilibrium is globally attractive if $\mathscr{R}_{0}<1$, and COVID-19 is uniformly persistent if $\mathscr{R}_{0}>1$. Taking Indian data as an example in the numerical simulation, we find that our dynamical results fit well with the statistical data. Consequently, we forecast the spreading trend of the COVID-19 pandemic in India. Furthermore, our results imply that increasing the intensity of NPIs will greatly reduce the number of confirmed cases. In particular, NPIs are indispensable even if all the people were vaccinated when the efficiency of the vaccine is relatively low. By simulating the relationships among the basic reproduction number $\mathscr{R}_{0}$, the vaccination rate and the efficiency of the vaccine, we find that it is impossible to achieve herd immunity without NPIs when the efficiency of the vaccine is lower than $76.9\%$. Therefore, the herd immunity area is defined by the evolution of the relationship between the vaccination rate and the efficiency of the vaccine. In the two-patch study, we give the conditions for travel between India and China to be opened. Furthermore, an appropriate dispersal of population between India and China is obtained. A discussion completes the paper.
|
1405.1413
|
Giuseppe Pontrelli
|
Giuseppe Pontrelli, Filippo de Monte
|
A two-phase two-layer model for transdermal drug delivery and
percutaneous absorption
|
Mathematical Biosciences, accepted, 2014
|
Mathematical Biosciences, 257, pp. 96-103, 2014
|
10.1016/j.mbs.2014.05.001
| null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
One of the promising frontiers of bioengineering is the controlled release of
a therapeutic drug from a vehicle across the skin (transdermal drug delivery).
In order to study the complete process, a two-phase mathematical model
describing the dynamics of a substance between two coupled media of different
properties and dimensions is presented. A system of partial differential
equations describes the diffusion and the binding/unbinding processes in both
layers. Additional flux continuity at the interface and clearance conditions
into systemic circulation are imposed. An eigenvalue problem with discontinuous
coefficients is solved and an analytical solution is given in the form of an
infinite series expansion. The model points out the role of the diffusion and
reaction parameters, which control the complex transfer mechanism and the drug
kinetics across the two layers. Drug masses are given and their dependence on
the physical parameters is discussed.
|
[
{
"created": "Tue, 6 May 2014 09:07:48 GMT",
"version": "v1"
}
] |
2016-01-15
|
[
[
"Pontrelli",
"Giuseppe",
""
],
[
"de Monte",
"Filippo",
""
]
] |
One of the promising frontiers of bioengineering is the controlled release of a therapeutic drug from a vehicle across the skin (transdermal drug delivery). In order to study the complete process, a two-phase mathematical model describing the dynamics of a substance between two coupled media of different properties and dimensions is presented. A system of partial differential equations describes the diffusion and the binding/unbinding processes in both layers. Additional flux continuity at the interface and clearance conditions into systemic circulation are imposed. An eigenvalue problem with discontinuous coefficients is solved and an analytical solution is given in the form of an infinite series expansion. The model points out the role of the diffusion and reaction parameters, which control the complex transfer mechanism and the drug kinetics across the two layers. Drug masses are given and their dependence on the physical parameters is discussed.
|
1912.03412
|
Eric Jones
|
Zipeng Wang, Eric W. Jones, Joshua M. Mueller, Jean M. Carlson
|
Control of ecological outcomes through deliberate parameter changes in a
model of the gut microbiome
|
main text 9 pages, 5 figures; supplement 10 pages, 2 figures
|
Phys. Rev. E 101, 052402 (2020)
|
10.1103/PhysRevE.101.052402
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The generalized Lotka-Volterra (gLV) equations are a mathematical proxy for
ecological dynamics. We focus on a gLV model of the gut microbiome, in which
the evolution of the gut microbial state is determined in part by pairwise
inter-species interaction parameters that encode environmentally-mediated
resource competition between microbes. We develop an in silico method that
controls the steady-state outcome of the system by adjusting these interaction
parameters. This approach is confined to a bistable region of the gLV model.
The two steady states of interest are idealized as either a "healthy" or
"diseased" steady state of the gut microbiome. In this method, a dimensionality
reduction technique called steady-state reduction (SSR) is first used to
generate a two-dimensional (2D) gLV model that approximates the
high-dimensional dynamics on the 2D subspace spanned by the two steady states.
Then a bifurcation analysis of the 2D model analytically determines parameter
modifications that drive a disease-prone initial condition to the healthy
steady state. This parameter modification of the reduced 2D model guides
parameter modifications of the original high-dimensional model, resulting in a
change of steady-state outcome in the high-dimensional model. This control
method, called SPARC (SSR-guided parameter change), bypasses the computational
challenge of directly determining parameter modifications in the original
high-dimensional system. SPARC could guide the development of indirect
bacteriotherapies, which seek to change microbial compositions by deliberately
modifying gut environmental variables such as gut acidity or macronutrient
availability.
|
[
{
"created": "Sat, 7 Dec 2019 02:04:40 GMT",
"version": "v1"
},
{
"created": "Sun, 29 Mar 2020 06:02:17 GMT",
"version": "v2"
}
] |
2020-05-13
|
[
[
"Wang",
"Zipeng",
""
],
[
"Jones",
"Eric W.",
""
],
[
"Mueller",
"Joshua M.",
""
],
[
"Carlson",
"Jean M.",
""
]
] |
The generalized Lotka-Volterra (gLV) equations are a mathematical proxy for ecological dynamics. We focus on a gLV model of the gut microbiome, in which the evolution of the gut microbial state is determined in part by pairwise inter-species interaction parameters that encode environmentally-mediated resource competition between microbes. We develop an in silico method that controls the steady-state outcome of the system by adjusting these interaction parameters. This approach is confined to a bistable region of the gLV model. The two steady states of interest are idealized as either a "healthy" or "diseased" steady state of the gut microbiome. In this method, a dimensionality reduction technique called steady-state reduction (SSR) is first used to generate a two-dimensional (2D) gLV model that approximates the high-dimensional dynamics on the 2D subspace spanned by the two steady states. Then a bifurcation analysis of the 2D model analytically determines parameter modifications that drive a disease-prone initial condition to the healthy steady state. This parameter modification of the reduced 2D model guides parameter modifications of the original high-dimensional model, resulting in a change of steady-state outcome in the high-dimensional model. This control method, called SPARC (SSR-guided parameter change), bypasses the computational challenge of directly determining parameter modifications in the original high-dimensional system. SPARC could guide the development of indirect bacteriotherapies, which seek to change microbial compositions by deliberately modifying gut environmental variables such as gut acidity or macronutrient availability.
|
0704.3748
|
Gerald A. Miller
|
Gerald A. Miller, Yi Y. Shi, Hong Qian, and Karol Bomsztyk
|
Clustering Coefficients of Protein-Protein Interaction Networks
|
16 pages, 3 figures, in Press PRE uses pdflatex
|
Phys. Rev. E 75, 051910 (2007)
|
10.1103/PhysRevE.75.051910
| null |
q-bio.QM cond-mat.stat-mech physics.bio-ph q-bio.MN
| null |
The properties of certain networks are determined by hidden variables that
are not explicitly measured. The conditional probability (propagator) that a
vertex with a given value of the hidden variable is connected to k other
vertices determines all measurable properties. We study hidden variable models
and find an averaging approximation that enables us to obtain a general
analytical result for the propagator. Analytic results showing the validity of
the approximation are obtained. We apply hidden variable models to
protein-protein interaction networks (PINs) in which the hidden variable is the
association free-energy, determined by distributions that depend on
biochemistry and evolution. We compute degree distributions as well as
clustering coefficients of several PINs of different species; good agreement
with measured data is obtained. For the human interactome two different
parameter sets give the same degree distributions, but the computed clustering
coefficients differ by a factor of about two. This shows that degree
distributions are not sufficient to determine the properties of PINs.
|
[
{
"created": "Fri, 27 Apr 2007 21:00:20 GMT",
"version": "v1"
}
] |
2009-11-13
|
[
[
"Miller",
"Gerald A.",
""
],
[
"Shi",
"Yi Y.",
""
],
[
"Qian",
"Hong",
""
],
[
"Bomsztyk",
"Karol",
""
]
] |
The properties of certain networks are determined by hidden variables that are not explicitly measured. The conditional probability (propagator) that a vertex with a given value of the hidden variable is connected to k other vertices determines all measurable properties. We study hidden variable models and find an averaging approximation that enables us to obtain a general analytical result for the propagator. Analytic results showing the validity of the approximation are obtained. We apply hidden variable models to protein-protein interaction networks (PINs) in which the hidden variable is the association free-energy, determined by distributions that depend on biochemistry and evolution. We compute degree distributions as well as clustering coefficients of several PINs of different species; good agreement with measured data is obtained. For the human interactome two different parameter sets give the same degree distributions, but the computed clustering coefficients differ by a factor of about two. This shows that degree distributions are not sufficient to determine the properties of PINs.
|
2004.12124
|
Jean-Francois Berret
|
Milad Radiom, Jean-François Berret
|
Common trends in the epidemic of Covid-19 disease
|
15 pages, 5 figures, 2 tables
|
The European Physical Journal Plus 135, 517 (2020)
|
10.1140/epjp/s13360-020-00526-1
| null |
q-bio.PE physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The discovery of SARS-CoV-2, the virus responsible for the Covid-19 epidemic,
has sparked a global health concern with many countries affected. Developing
models that can interpret the epidemic and give common trend parameters is
useful for prediction purposes for other countries that are at an earlier phase
of the epidemic; it is also useful for future planning against viral
respiratory diseases. One model is developed to interpret the fast-growth phase
of the epidemic and another model for an interpretation of the entire data set.
Both models agree reasonably with the data. It is shown by the first model that
during the fast phase, the number of new infected cases depends on the total
number of cases by a power-law relation with a scaling exponent equal to 0.82.
The second model gives a duplication time in the range of 1 to 3 days early in
the epidemic, and another parameter (alpha = 0.1-0.5) that causes the progress
of the epidemic to deviate from exponential growth. Our models may be used for
data interpretation and for guiding predictions regarding this disease, e.g.
the onset of the maximum in the number of new cases.
|
[
{
"created": "Sat, 25 Apr 2020 12:24:27 GMT",
"version": "v1"
},
{
"created": "Wed, 10 Jun 2020 16:06:44 GMT",
"version": "v2"
}
] |
2021-09-21
|
[
[
"Radiom",
"Milad",
""
],
[
"Berret",
"Jean-François",
""
]
] |
The discovery of SARS-CoV-2, the virus responsible for the Covid-19 epidemic, has sparked a global health concern with many countries affected. Developing models that can interpret the epidemic and give common trend parameters is useful for prediction purposes for other countries that are at an earlier phase of the epidemic; it is also useful for future planning against viral respiratory diseases. One model is developed to interpret the fast-growth phase of the epidemic and another model for an interpretation of the entire data set. Both models agree reasonably with the data. It is shown by the first model that during the fast phase, the number of new infected cases depends on the total number of cases by a power-law relation with a scaling exponent equal to 0.82. The second model gives a duplication time in the range of 1 to 3 days early in the epidemic, and another parameter (alpha = 0.1-0.5) that causes the progress of the epidemic to deviate from exponential growth. Our models may be used for data interpretation and for guiding predictions regarding this disease, e.g. the onset of the maximum in the number of new cases.
|
2311.07793
|
Marcela Svarc
|
Fernando A. Najman, Antonio Galves, Marcela Svarc and Claudia D.
Vargas
|
The brain uses renewal points to model random sequences of stimuli
|
11 pages, 5 figures
| null | null | null |
q-bio.NC stat.ME
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
It has been classically conjectured that the brain assigns probabilistic
models to sequences of stimuli. An important issue associated with this
conjecture is the identification of the classes of models used by the brain to
perform this task. We address this issue by using a new clustering procedure
for sets of electroencephalographic (EEG) data recorded from participants
exposed to a sequence of auditory stimuli generated by a stochastic chain. This
clustering procedure indicates that the brain uses renewal points in the
stochastic sequence of auditory stimuli in order to build a model.
|
[
{
"created": "Mon, 13 Nov 2023 23:02:32 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Dec 2023 18:20:58 GMT",
"version": "v2"
}
] |
2023-12-29
|
[
[
"Najman",
"Fernando A.",
""
],
[
"Galves",
"Antonio",
""
],
[
"Svarc",
"Marcela",
""
],
[
"Vargas",
"Claudia D.",
""
]
] |
It has been classically conjectured that the brain assigns probabilistic models to sequences of stimuli. An important issue associated with this conjecture is the identification of the classes of models used by the brain to perform this task. We address this issue by using a new clustering procedure for sets of electroencephalographic (EEG) data recorded from participants exposed to a sequence of auditory stimuli generated by a stochastic chain. This clustering procedure indicates that the brain uses renewal points in the stochastic sequence of auditory stimuli in order to build a model.
|
1612.01409
|
Santosh Tirunagari
|
Norman Poh, Simon Bull, Santosh Tirunagari, Nicholas Cole, Simon de
Lusignan
|
Probabilistic Broken-Stick Model: A Regression Algorithm for Irregularly
Sampled Data with Application to eGFR
|
Preprint submitted to Journal of Biomedical Informatics
| null | null | null |
q-bio.QM stat.AP
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In order for clinicians to manage disease progression and make effective
decisions about drug dosage, treatment regimens or scheduling follow up
appointments, it is necessary to be able to identify both short and long-term
trends in repeated biomedical measurements. However, this is complicated by the
fact that these measurements are irregularly sampled and influenced by both
genuine physiological changes and external factors. In their current forms,
existing regression algorithms often do not fulfil all of a clinician's
requirements for identifying short-term events while still being able to
identify long-term trends in disease progression. Therefore, in order to
balance both short-term interpretability and long-term flexibility, an
extension to broken-stick regression models is proposed in order to make them
more suitable for modelling clinical time series. The proposed probabilistic
broken-stick model can robustly estimate both short-term and long-term trends
simultaneously, while also accommodating the unequal length and irregularly
sampled nature of clinical time series. Moreover, since the model is parametric
and completely generative, its first derivative provides a long-term non-linear
estimate of the annual rate of change in the measurements more reliably than
linear regression. The benefits of the proposed model are illustrated using
estimated glomerular filtration rate as a case study for managing patients with
chronic kidney disease.
|
[
{
"created": "Wed, 30 Nov 2016 17:50:44 GMT",
"version": "v1"
}
] |
2016-12-06
|
[
[
"Poh",
"Norman",
""
],
[
"Bull",
"Simon",
""
],
[
"Tirunagari",
"Santosh",
""
],
[
"Cole",
"Nicholas",
""
],
[
"de Lusignan",
"Simon",
""
]
] |
In order for clinicians to manage disease progression and make effective decisions about drug dosage, treatment regimens or scheduling follow up appointments, it is necessary to be able to identify both short and long-term trends in repeated biomedical measurements. However, this is complicated by the fact that these measurements are irregularly sampled and influenced by both genuine physiological changes and external factors. In their current forms, existing regression algorithms often do not fulfil all of a clinician's requirements for identifying short-term events while still being able to identify long-term trends in disease progression. Therefore, in order to balance both short term interpretability and long term flexibility, an extension to broken-stick regression models is proposed in order to make them more suitable for modelling clinical time series. The proposed probabilistic broken-stick model can robustly estimate both short-term and long-term trends simultaneously, while also accommodating the unequal length and irregularly sampled nature of clinical time series. Moreover, since the model is parametric and completely generative, its first derivative provides a long-term non-linear estimate of the annual rate of change in the measurements more reliably than linear regression. The benefits of the proposed model are illustrated using estimated glomerular filtration rate as a case study for managing patients with chronic kidney disease.
|
q-bio/0506016
|
Bob Eisenberg
|
Bob Eisenberg
|
Ions in Fluctuating Channels: Transistors Alive
|
Revised version of earlier submission, as invited, refereed, and
published by journal
|
Fluctuations and Noise Letters (2012) 11:76-96
|
10.1142/S0219477512400019
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ion channels are proteins with a hole down the middle embedded in cell
membranes. Membranes form insulating structures and the channels through them
allow and control the movement of charged particles, spherical ions, mostly
Na+, K+, Ca++, and Cl-. Membranes contain hundreds or thousands of types of
channels, fluctuating between open conducting and closed insulating states.
Channels control an enormous range of biological function by opening and
closing in response to specific stimuli using mechanisms that are not yet
understood in physical language. Open channels conduct current of charged
particles following laws of Brownian movement of charged spheres rather like
the laws of electrodiffusion of quasi-particles in semiconductors. Open
channels select between similar ions using a combination of electrostatic and
'crowded charge' (Lennard-Jones) forces. The specific location of atoms and the
exact atomic structure of the channel protein seem much less important than
certain properties of the structure, namely the volume accessible to ions and
the effective density of fixed and polarization charge. There is no sign of
other chemical effects like delocalization of electron orbitals between ions
and the channel protein. Channels play a role in biology as important as
transistors in computers, and they use rather similar physics to perform part
of that role. Understanding their fluctuations awaits physical insight into the
source of the variance and mathematical analysis of the coupling of the
fluctuations to the other components and forces of the system.
|
[
{
"created": "Tue, 14 Jun 2005 14:28:58 GMT",
"version": "v1"
},
{
"created": "Sun, 3 Feb 2008 22:32:30 GMT",
"version": "v2"
},
{
"created": "Fri, 10 May 2013 18:40:14 GMT",
"version": "v3"
}
] |
2015-05-11
|
[
[
"Eisenberg",
"Bob",
""
]
] |
Ion channels are proteins with a hole down the middle embedded in cell membranes. Membranes form insulating structures and the channels through them allow and control the movement of charged particles, spherical ions, mostly Na+, K+, Ca++, and Cl-. Membranes contain hundreds or thousands of types of channels, fluctuating between open conducting, and closed insulating states. Channels control an enormous range of biological function by opening and closing in response to specific stimuli using mechanisms that are not yet understood in physical language. Open channels conduct current of charged particles following laws of Brownian movement of charged spheres rather like the laws of electrodiffusion of quasi-particles in semiconductors. Open channels select between similar ions using a combination of electrostatic and 'crowded charge' (Lennard-Jones) forces. The specific location of atoms and the exact atomic structure of the channel protein seems much less important than certain properties of the structure, namely the volume accessible to ions and the effective density of fixed and polarization charge. There is no sign of other chemical effects like delocalization of electron orbitals between ions and the channel protein. Channels play a role in biology as important as transistors in computers, and they use rather similar physics to perform part of that role. Understanding their fluctuations awaits physical insight into the source of the variance and mathematical analysis of the coupling of the fluctuations to the other components and forces of the system.
|
1407.0865
|
Gibin Powathil
|
Gibin G Powathil, Mark AJ Chaplain and Maciej Swat
|
Investigating the development of chemotherapeutic drug resistance in
cancer: A multiscale computational study
| null | null | null | null |
q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Chemotherapy is one of the most important therapeutic options used to treat
human cancers, either alone or in combination with radiation therapy and
surgery. Recent studies have indicated that intra-tumoural heterogeneity has a
significant role in driving resistance to chemotherapy in many human
malignancies. Multiple factors including the internal cell-cycle dynamics and
the external microenvironment contribute to the intra-tumoural heterogeneity.
In this paper we present a hybrid, multiscale, individual-based mathematical
model, incorporating internal cell-cycle dynamics and changes in oxygen
concentration, to study the effects of delivery of several different
chemotherapeutic drugs on the heterogeneous subpopulations of cancer cells with
varying cell-cycle dynamics. The computational simulation results from the
multiscale model are in good agreement with available experimental data and
support the hypothesis that slow-cycling sub-populations of tumour cells within
a growing tumour mass can induce drug resistance to chemotherapy and thus the
use of conventional chemotherapy may actually result in the emergence of
dominant, therapy-resistant, slow-cycling subpopulations of tumour cells. Our
results indicate that the appearance of this chemotherapeutic resistance is
mainly due to the inability of the administered drug to target all cancer cells
irrespective of the stage in the cell-cycle they are in, i.e. most
chemotherapeutic drugs target cells in a particular phase/phases of the
cell-cycle, and hence always spare some cancer cells that are not in the
targeted cell-cycle phase/phases. The results also suggest that this
cell-cycle-mediated drug resistance may be overcome by using multiple doses of
cell-cycle, phase-specific chemotherapy that targets cells in all phases and
its appropriate sequencing and scheduling.
|
[
{
"created": "Thu, 3 Jul 2014 11:03:02 GMT",
"version": "v1"
}
] |
2014-07-04
|
[
[
"Powathil",
"Gibin G",
""
],
[
"Chaplain",
"Mark AJ",
""
],
[
"Swat",
"Maciej",
""
]
] |
Chemotherapy is one of the most important therapeutic options used to treat human cancers, either alone or in combination with radiation therapy and surgery. Recent studies have indicated that intra-tumoural heterogeneity has a significant role in driving resistance to chemotherapy in many human malignancies. Multiple factors including the internal cell-cycle dynamics and the external microenvironment contribute to the intra-tumoural heterogeneity. In this paper we present a hybrid, multiscale, individual-based mathematical model, incorporating internal cell-cycle dynamics and changes in oxygen concentration, to study the effects of delivery of several different chemotherapeutic drugs on the heterogeneous subpopulations of cancer cells with varying cell-cycle dynamics. The computational simulation results from the multiscale model are in good agreement with available experimental data and support the hypothesis that slow-cycling sub-populations of tumour cells within a growing tumour mass can induce drug resistance to chemotherapy and thus the use of conventional chemotherapy may actually result in the emergence of dominant, therapy-resistant, slow-cycling subpopulations of tumour cells. Our results indicate that the appearance of this chemotherapeutic resistance is mainly due to the inability of the administered drug to target all cancer cells irrespective of the stage in the cell-cycle they are in, i.e. most chemotherapeutic drugs target cells in a particular phase/phases of the cell-cycle, and hence always spare some cancer cells that are not in the targeted cell-cycle phase/phases. The results also suggest that this cell-cycle-mediated drug resistance may be overcome by using multiple doses of cell-cycle, phase-specific chemotherapy that targets cells in all phases and its appropriate sequencing and scheduling.
|
2210.09308
|
Nicolas F. Chaves De Plaza
|
Nicolas F. Chaves-de-Plaza, Klaus Hildebrandt, Anna Vilanova
|
ProtoFold Neighborhood Inspector
|
Accepted submission for the Bio+MedVis challenge @ IEEE VIS 2022
| null | null | null |
q-bio.QM cs.GR
|
http://creativecommons.org/licenses/by/4.0/
|
Post-translational modifications (PTMs) affecting a protein's residues (amino
acids) can disturb its function, leading to illness. Whether or not a PTM is
pathogenic depends on its type and the status of neighboring residues. In this
paper, we present the ProtoFold Neighborhood Inspector (PFNI), a visualization
system for analyzing residue neighborhoods. The main contribution is a
visualization idiom, the Residue Constellation (RC), for identifying and
comparing three-dimensional neighborhoods based on per-residue features and
spatial characteristics. The RC leverages two-dimensional representations of
the protein's three-dimensional structure to overcome problems like occlusion,
easing the analysis of neighborhoods that often have complicated spatial
arrangements. Using the PFNI, we explored proteins' structural PTM data, which
allowed us to identify patterns in the distribution and quantity of
per-neighborhood PTMs that might be related to their pathogenic status. In the
following, we define the tasks that guided the development of the PFNI and
describe the data sources we derived and used. Then, we introduce the PFNI and
illustrate its usage through an example of an analysis workflow. We conclude by
reflecting on preliminary findings obtained while using the tool on the
provided data and future directions concerning the development of the PFNI.
|
[
{
"created": "Mon, 17 Oct 2022 09:23:24 GMT",
"version": "v1"
}
] |
2022-10-19
|
[
[
"Chaves-de-Plaza",
"Nicolas F.",
""
],
[
"Hildebrandt",
"Klaus",
""
],
[
"Vilanova",
"Anna",
""
]
] |
Post-translational modifications (PTMs) affecting a protein's residues (amino acids) can disturb its function, leading to illness. Whether or not a PTM is pathogenic depends on its type and the status of neighboring residues. In this paper, we present the ProtoFold Neighborhood Inspector (PFNI), a visualization system for analyzing residue neighborhoods. The main contribution is a visualization idiom, the Residue Constellation (RC), for identifying and comparing three-dimensional neighborhoods based on per-residue features and spatial characteristics. The RC leverages two-dimensional representations of the protein's three-dimensional structure to overcome problems like occlusion, easing the analysis of neighborhoods that often have complicated spatial arrangements. Using the PFNI, we explored proteins' structural PTM data, which allowed us to identify patterns in the distribution and quantity of per-neighborhood PTMs that might be related to their pathogenic status. In the following, we define the tasks that guided the development of the PFNI and describe the data sources we derived and used. Then, we introduce the PFNI and illustrate its usage through an example of an analysis workflow. We conclude by reflecting on preliminary findings obtained while using the tool on the provided data and future directions concerning the development of the PFNI.
|
2004.11242
|
Aurelie Nakamura
|
Aurelie Nakamura (iPLESP), Laura Pryor (iPLESP), Morgane Ballon
(CRESS), Sandrine Lioret (CRESS), Barbara Heude (CRESS), Marie-Aline Charles
(ELFE), Maria Melchior (iPLESP), El-Khoury Lesueur (iPLESP)
|
Maternal education and offspring birthweight for gestational age: the
mediating effect of smoking during pregnancy
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Background Small for gestational age (SGA) birthweight, a risk factor for
infant mortality and delayed child development, is associated with maternal
educational attainment. Maternal tobacco smoking during pregnancy could
contribute to this association. We aimed to quantify the contribution of
maternal smoking during pregnancy to social inequalities in child birthweight
for gestational age (GA). Methods Data come from the French nation-wide ELFE
cohort study, which included 17,155 singletons. Birthweights for GA were
calculated using z-scores. Associations between maternal educational
attainment, tobacco smoking during pregnancy and child birthweight for GA were
ascertained using mediation analysis. Mediation analyses were also stratified
by maternal pre-pregnancy body mass index. Results Low maternal educational
attainment was associated with increased odds of tobacco smoking during
pregnancy (adjusted OR (ORa)=2.58 [95% CI 2.34, 2.84]) as well as a decrease in
child birthweight for GA (RRa=0.94 [95% CI 0.91, 0.98]). Tobacco smoking during
pregnancy was associated with a decrease in offspring birthweight for GA
(RRa=0.73 [95% CI 0.70, 0.76]). Mediation analysis suggests that 39% of the
effect of low maternal educational attainment on offspring birthweight for GA
was mediated by smoking during pregnancy. A larger direct effect of
maternal educational attainment on child birthweight for GA was observed among
underweight women (RRa=0.82 [95% CI 0.72, 0.93]). Conclusions The relationship
between maternal educational attainment and child birthweight for GA is
strongly mediated by smoking during pregnancy. Reducing maternal smoking could
lessen the occurrence of infant SGA and decrease socioeconomic inequalities in
birthweight for GA. Keywords Birthweight, educational attainment, tobacco
smoking, pregnancy, mediation analysis, health inequalities
|
[
{
"created": "Wed, 22 Apr 2020 09:00:35 GMT",
"version": "v1"
}
] |
2020-04-24
|
[
[
"Nakamura",
"Aurelie",
"",
"iPLESP"
],
[
"Pryor",
"Laura",
"",
"iPLESP"
],
[
"Ballon",
"Morgane",
"",
"CRESS"
],
[
"Lioret",
"Sandrine",
"",
"CRESS"
],
[
"Heude",
"Barbara",
"",
"CRESS"
],
[
"Charles",
"Marie-Aline",
"",
"ELFE"
],
[
"Melchior",
"Maria",
"",
"iPLESP"
],
[
"Lesueur",
"El-Khoury",
"",
"iPLESP"
]
] |
Background Small for gestational age (SGA) birthweight, a risk factor for infant mortality and delayed child development, is associated with maternal educational attainment. Maternal tobacco smoking during pregnancy could contribute to this association. We aimed to quantify the contribution of maternal smoking during pregnancy to social inequalities in child birthweight for gestational age (GA). Methods Data come from the French nation-wide ELFE cohort study, which included 17,155 singletons. Birthweights for GA were calculated using z-scores. Associations between maternal educational attainment, tobacco smoking during pregnancy and child birthweight for GA were ascertained using mediation analysis. Mediation analyses were also stratified by maternal pre-pregnancy body mass index. Results Low maternal educational attainment was associated with increased odds of tobacco smoking during pregnancy (adjusted OR (ORa)=2.58 [95% CI 2.34, 2.84]) as well as a decrease in child birthweight for GA (RRa=0.94 [95% CI 0.91, 0.98]). Tobacco smoking during pregnancy was associated with a decrease in offspring birthweight for GA (RRa=0.73 [95% CI 0.70, 0.76]). Mediation analysis suggests that 39% of the effect of low maternal educational attainment on offspring birthweight for GA was mediated by smoking during pregnancy. A larger direct effect of maternal educational attainment on child birthweight for GA was observed among underweight women (RRa=0.82 [95% CI 0.72, 0.93]). Conclusions The relationship between maternal educational attainment and child birthweight for GA is strongly mediated by smoking during pregnancy. Reducing maternal smoking could lessen the occurrence of infant SGA and decrease socioeconomic inequalities in birthweight for GA. Keywords Birthweight, educational attainment, tobacco smoking, pregnancy, mediation analysis, health inequalities
|
1006.2752
|
Eva Gehrmann
|
Eva Gehrmann and Barbara Drossel
|
Boolean versus continuous dynamics on simple two-gene modules
|
8 pages, 10 figures
| null |
10.1103/PhysRevE.82.046120
| null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the dynamical behavior of simple modules composed of two genes
with two or three regulating connections. Continuous dynamics for mRNA and
protein concentrations is compared to a Boolean model for gene activity. Using
a generalized method, we study within a single framework different continuous
models and different types of regulatory functions, and establish conditions
under which the system can display stable oscillations. These conditions
concern the time scales, the degree of cooperativity of the regulating
interactions, and the signs of the interactions. Not all models that show
oscillations under Boolean dynamics can have oscillations under continuous
dynamics, and vice versa.
|
[
{
"created": "Mon, 14 Jun 2010 16:09:22 GMT",
"version": "v1"
}
] |
2013-05-29
|
[
[
"Gehrmann",
"Eva",
""
],
[
"Drossel",
"Barbara",
""
]
] |
We investigate the dynamical behavior of simple modules composed of two genes with two or three regulating connections. Continuous dynamics for mRNA and protein concentrations is compared to a Boolean model for gene activity. Using a generalized method, we study within a single framework different continuous models and different types of regulatory functions, and establish conditions under which the system can display stable oscillations. These conditions concern the time scales, the degree of cooperativity of the regulating interactions, and the signs of the interactions. Not all models that show oscillations under Boolean dynamics can have oscillations under continuous dynamics, and vice versa.
|
1311.6345
|
Pierre-Olivier Amblard
|
Pierre-Olivier Amblard
|
A non-parametric efficient evaluation of Partial Directed Coherence
| null | null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Studying the flow of information between different areas of the brain can be
performed by using the so-called Partial Directed Coherence. This measure is
usually evaluated by first identifying a multivariate autoregressive model, and
then by using Fourier transforms of the impulse responses identified and
applying appropriate normalizations. Here, we present another route to evaluate
the partial directed coherences in multivariate time series. The method
proposed is non-parametric, and utilises the strong spectral factorization of
the inverse of the spectral density matrix of the multivariate process. To
perform the factorization, we have recourse to an algorithm developed by Davis
and his collaborators. We present simulations as well as an application on a
real data set (Local Field Potentials in the sleeping mouse) to illustrate the
methodology. A comparison to the usual approach in terms of complexity is
detailed. For long AR models, the proposed approach is of interest.
|
[
{
"created": "Mon, 25 Nov 2013 16:04:18 GMT",
"version": "v1"
}
] |
2013-11-26
|
[
[
"Amblard",
"Pierre-Olivier",
""
]
] |
Studying the flow of information between different areas of the brain can be performed by using the so-called Partial Directed Coherence. This measure is usually evaluated by first identifying a multivariate autoregressive model, and then by using Fourier transforms of the impulse responses identified and applying appropriate normalizations. Here, we present another route to evaluate the partial directed coherences in multivariate time series. The method proposed is non-parametric, and utilises the strong spectral factorization of the inverse of the spectral density matrix of the multivariate process. To perform the factorization, we have recourse to an algorithm developed by Davis and his collaborators. We present simulations as well as an application on a real data set (Local Field Potentials in the sleeping mouse) to illustrate the methodology. A comparison to the usual approach in terms of complexity is detailed. For long AR models, the proposed approach is of interest.
|
2310.14722
|
bastien chassagnol
|
Bastien Chassagnol (LPSM), Gr\'egory Nuel (LPSM), Etienne Becht
|
An updated State-of-the-Art Overview of transcriptomic Deconvolution
Methods
| null | null | null | null |
q-bio.QM q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although bulk transcriptomic analyses have significantly contributed to an
enhanced comprehension of multifaceted diseases, their exploration capacity is
impeded by the heterogeneous compositions of biological samples. Indeed, by
averaging expression of multiple cell types, RNA-Seq analysis is oblivious to
variations in cellular changes, hindering the identification of the internal
constituents of tissues, involved in disease progression. On the other hand,
single-cell techniques are still time, manpower and resource-consuming
analyses. To address the intrinsic limitations of both bulk and single-cell
methodologies, computational deconvolution techniques have been developed to
estimate the frequencies of cell subtypes within complex tissues. These methods
are especially valuable for dissecting intricate tissue niches, with a
particular focus on tumour microenvironments (TME). In this paper, we offer a
comprehensive overview of deconvolution techniques, classifying them based on
their methodological characteristics, the type of prior knowledge required for
the algorithm, and the statistical constraints they address. Within each
category identified, we delve into the theoretical aspects for implementing the
underlying method, while providing an in-depth discussion of their main
advantages and disadvantages in supplementary materials. Notably, we emphasise
the advantages of cutting-edge deconvolution tools based on probabilistic
models, as they offer robust statistical frameworks that closely align with
biological realities. We anticipate that this review will provide valuable
guidelines for computational bioinformaticians in order to select the
appropriate method in alignment with their statistical and biological
objectives. We ultimately end this review by discussing open challenges that
must be addressed to accurately quantify closely related cell types from RNA
sequencing data, and the complementary role of single-cell RNA-Seq to that
purpose.
|
[
{
"created": "Mon, 23 Oct 2023 09:00:03 GMT",
"version": "v1"
}
] |
2023-10-24
|
[
[
"Chassagnol",
"Bastien",
"",
"LPSM"
],
[
"Nuel",
"Grégory",
"",
"LPSM"
],
[
"Becht",
"Etienne",
""
]
] |
Although bulk transcriptomic analyses have significantly contributed to an enhanced comprehension of multifaceted diseases, their exploration capacity is impeded by the heterogeneous compositions of biological samples. Indeed, by averaging expression of multiple cell types, RNA-Seq analysis is oblivious to variations in cellular changes, hindering the identification of the internal constituents of tissues, involved in disease progression. On the other hand, single-cell techniques are still time, manpower and resource-consuming analyses. To address the intrinsic limitations of both bulk and single-cell methodologies, computational deconvolution techniques have been developed to estimate the frequencies of cell subtypes within complex tissues. These methods are especially valuable for dissecting intricate tissue niches, with a particular focus on tumour microenvironments (TME). In this paper, we offer a comprehensive overview of deconvolution techniques, classifying them based on their methodological characteristics, the type of prior knowledge required for the algorithm, and the statistical constraints they address. Within each category identified, we delve into the theoretical aspects for implementing the underlying method, while providing an in-depth discussion of their main advantages and disadvantages in supplementary materials. Notably, we emphasise the advantages of cutting-edge deconvolution tools based on probabilistic models, as they offer robust statistical frameworks that closely align with biological realities. We anticipate that this review will provide valuable guidelines for computational bioinformaticians in order to select the appropriate method in alignment with their statistical and biological objectives. We ultimately end this review by discussing open challenges that must be addressed to accurately quantify closely related cell types from RNA sequencing data, and the complementary role of single-cell RNA-Seq to that purpose.
|
1906.07511
|
Anne-Sophie Herard
|
Florent Letronne, Geoffroy Laumet, Anne-Marie Ayral, Julien Chapuis,
Florie Demiautte, Mathias Laga, Michel Vandenberghe (LMN), Nicolas Malmanche,
Florence Leroux, Fanny Eysert, Yoann Sottejeau, Linda Chami, Amandine Flaig,
Charlotte Bauer (IPMC), Pierre Dourlen (JPArc - U837 Inserm), Marie Lesaffre,
Charlotte Delay, Ludovic Huot (CIIL), Julie Dumont (EGID), Elisabeth
Werkmeister, Franck Lafont (CIIL), Tiago Mendes (Inserm U1167 - RID-AGE -
Institut Pasteur), Franck Hansmannel (NGERE), Bart Dermaut, Benoit Deprez,
Anne-Sophie Herard (LMN), Marc Dhenain (UGRA / SETA), Nicolas Souedet (LMN),
Florence Pasquier, David Tulasne (IBLI), Claudine Berr (UMRESTTE UMR T9405),
Jean-Jacques Hauw, Yves Lemoine (UPVM), Philippe Amouyel, David Mann, Rebecca
D\'eprez, Fr\'ed\'eric Checler (IPMC), David Hot (CIIL), Thierry Delzescaux
(MIRCEN), Kris Gevaert, Jean-Charles Lambert (DISC)
|
ADAM30 Downregulates APP-Linked Defects Through Cathepsin D Activation
in Alzheimer's Disease
| null |
EBioMedicine, Elsevier, 2016, 9, pp.278-292
|
10.1016/j.ebiom.2016.06.002
| null |
q-bio.NC q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Although several ADAMs (A disintegrin-like and metalloproteases) have been
shown to contribute to the amyloid precursor protein (APP) metabolism, the
full spectrum of metalloproteases involved in this metabolism remains to be
established. Transcriptomic analyses centred on metalloprotease genes unraveled
a 50% decrease in ADAM30 expression that inversely correlates with amyloid load
in Alzheimer's disease brains. Accordingly, in vitro down- or up-regulation of
ADAM30 expression triggered an increase/decrease in A$\beta$ peptide levels
whereas expression of a biologically inactive ADAM30 (ADAM30 mut) did not
affect A$\beta$ secretion. Proteomics/cell-based experiments showed that
ADAM30-dependent regulation of APP metabolism required both cathepsin D (CTSD)
activation and APP sorting to lysosomes. Accordingly, in Alzheimer-like
transgenic mice, neuronal ADAM30 over-expression lowered A$\beta$42 secretion
in neuron primary cultures, soluble A$\beta$42 and amyloid plaque load levels
in the brain and concomitantly enhanced CTSD activity and finally rescued long
term potentiation.
|
[
{
"created": "Tue, 18 Jun 2019 11:56:49 GMT",
"version": "v1"
}
] |
2019-06-19
|
[
[
"Letronne",
"Florent",
"",
"LMN"
],
[
"Laumet",
"Geoffroy",
"",
"LMN"
],
[
"Ayral",
"Anne-Marie",
"",
"LMN"
],
[
"Chapuis",
"Julien",
"",
"LMN"
],
[
"Demiautte",
"Florie",
"",
"LMN"
],
[
"Laga",
"Mathias",
"",
"LMN"
],
[
"Vandenberghe",
"Michel",
"",
"LMN"
],
[
"Malmanche",
"Nicolas",
"",
"IPMC"
],
[
"Leroux",
"Florence",
"",
"IPMC"
],
[
"Eysert",
"Fanny",
"",
"IPMC"
],
[
"Sottejeau",
"Yoann",
"",
"IPMC"
],
[
"Chami",
"Linda",
"",
"IPMC"
],
[
"Flaig",
"Amandine",
"",
"IPMC"
],
[
"Bauer",
"Charlotte",
"",
"IPMC"
],
[
"Dourlen",
"Pierre",
"",
"JPArc - U837 Inserm"
],
[
"Lesaffre",
"Marie",
"",
"CIIL"
],
[
"Delay",
"Charlotte",
"",
"CIIL"
],
[
"Huot",
"Ludovic",
"",
"CIIL"
],
[
"Dumont",
"Julie",
"",
"EGID"
],
[
"Werkmeister",
"Elisabeth",
"",
"CIIL"
],
[
"Lafont",
"Franck",
"",
"CIIL"
],
[
"Mendes",
"Tiago",
"",
"Inserm U1167 - RID-AGE -\n Institut Pasteur"
],
[
"Hansmannel",
"Franck",
"",
"NGERE"
],
[
"Dermaut",
"Bart",
"",
"LMN"
],
[
"Deprez",
"Benoit",
"",
"LMN"
],
[
"Herard",
"Anne-Sophie",
"",
"LMN"
],
[
"Dhenain",
"Marc",
"",
"UGRA / SETA"
],
[
"Souedet",
"Nicolas",
"",
"LMN"
],
[
"Pasquier",
"Florence",
"",
"IBLI"
],
[
"Tulasne",
"David",
"",
"IBLI"
],
[
"Berr",
"Claudine",
"",
"UMRESTTE UMR T9405"
],
[
"Hauw",
"Jean-Jacques",
"",
"UPVM"
],
[
"Lemoine",
"Yves",
"",
"UPVM"
],
[
"Amouyel",
"Philippe",
"",
"IPMC"
],
[
"Mann",
"David",
"",
"IPMC"
],
[
"Déprez",
"Rebecca",
"",
"IPMC"
],
[
"Checler",
"Frédéric",
"",
"IPMC"
],
[
"Hot",
"David",
"",
"CIIL"
],
[
"Delzescaux",
"Thierry",
"",
"MIRCEN"
],
[
"Gevaert",
"Kris",
"",
"DISC"
],
[
"Lambert",
"Jean-Charles",
"",
"DISC"
]
] |
Although several ADAMs (A disintegrin-like and metalloproteases) have been shown to contribute to the amyloid precursor protein (APP) metabolism, the full spectrum of metalloproteases involved in this metabolism remains to be established. Transcriptomic analyses centred on metalloprotease genes unraveled a 50% decrease in ADAM30 expression that inversely correlates with amyloid load in Alzheimer's disease brains. Accordingly, in vitro down- or up-regulation of ADAM30 expression triggered an increase/decrease in A$\beta$ peptide levels whereas expression of a biologically inactive ADAM30 (ADAM30 mut) did not affect A$\beta$ secretion. Proteomics/cell-based experiments showed that ADAM30-dependent regulation of APP metabolism required both cathepsin D (CTSD) activation and APP sorting to lysosomes. Accordingly, in Alzheimer-like transgenic mice, neuronal ADAM30 over-expression lowered A$\beta$42 secretion in neuron primary cultures, soluble A$\beta$42 and amyloid plaque load levels in the brain and concomitantly enhanced CTSD activity and finally rescued long term potentiation.
|
0804.3939
|
Raphael Plasson
|
Raphael Plasson
|
Microreversible recycled chemical systems. Comment on "A Re-Examination
of Reversibility in Reaction Models for the Spontaneous Emergence of
Homochirality"
|
2 pages, 2 figures
|
J. Phys. Chem. B, 2008, 112, 9550-9552
|
10.1021/jp803588z
|
NORDITA-2008-18
|
q-bio.MN physics.chem-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The question of the onset of homochirality on prebiotic Earth still
remains a fundamental question in the quest for the origin of life. Recent
works in this field introduce the concept of recycling, rather than the
traditional open-flow system described by Frank. This approach has been
criticized by Blackmond et al. They claimed that such systems are
thermodynamically impossible, except in the cases where non-microreversible
reactions are introduced, like in photochemical reactions, or under the
influence of physical actions (e.g. by crystal crushing). This point of view
reveals misunderstandings about this model of a recycled system, overlooks the
possibility of energy exchanges that could take place in prebiotic systems, and
leads the authors to unwittingly remove the activation reaction and energy source
from their "non-equilibrium" models. It is especially important to understand
what are the concepts behind the notion of recycled systems, and of activation
reactions. These points are fundamental to comprehending how chemical systems
-- and especially prebiotic chemical systems -- can be maintained in
non-equilibrium steady states, and how free energy can be used and exchanged
between systems. The proposed approach aims to decompose the problem, rather
than embracing the whole system at once.
|
[
{
"created": "Thu, 24 Apr 2008 14:40:23 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Nov 2008 10:21:28 GMT",
"version": "v2"
}
] |
2011-05-23
|
[
[
"Plasson",
"Raphael",
""
]
] |
The question of the onset of homochirality on prebiotic Earth still remains a fundamental question in the quest for the origin of life. Recent works in this field introduce the concept of recycling, rather than the traditional open-flow system described by Frank. This approach has been criticized by Blackmond et al. They claimed that such systems are thermodynamically impossible, except in the cases where non-microreversible reactions are introduced, like in photochemical reactions, or under the influence of physical actions (e.g. by crystal crushing). This point of view reveals misunderstandings about this model of a recycled system, overlooks the possibility of energy exchanges that could take place in prebiotic systems, and leads the authors to unwittingly remove the activation reaction and energy source from their "non-equilibrium" models. It is especially important to understand what are the concepts behind the notion of recycled systems, and of activation reactions. These points are fundamental to comprehending how chemical systems -- and especially prebiotic chemical systems -- can be maintained in non-equilibrium steady states, and how free energy can be used and exchanged between systems. The proposed approach aims to decompose the problem, rather than embracing the whole system at once.
|