id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2301.12491 | Namiko Mitarai | Namiko Mitarai, Anastasios Marantos, and Kim Sneppen | Sustainable Diversity of Phage-Bacteria Systems | Short review. 7 pages, 3 figures. Minor update of the texts | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bacteriophages are central to microbial ecosystems for balancing bacterial
populations and promoting evolution by applying strong selection pressure. Here
we review some of the known aspects that modulate phage-bacteria interaction in
a way that naturally promotes their coexistence. We focus on the modulations
that arise from structural, physical, or physiological constraints. We argue
they should play roles in many phage-bacteria systems providing sustainable
diversity.
| [
{
"created": "Sun, 29 Jan 2023 16:56:21 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Apr 2023 15:27:46 GMT",
"version": "v2"
}
] | 2023-04-12 | [
[
"Mitarai",
"Namiko",
""
],
[
"Marantos",
"Anastasios",
""
],
[
"Sneppen",
"Kim",
""
]
] | Bacteriophages are central to microbial ecosystems for balancing bacterial populations and promoting evolution by applying strong selection pressure. Here we review some of the known aspects that modulate phage-bacteria interaction in a way that naturally promotes their coexistence. We focus on the modulations that arise from structural, physical, or physiological constraints. We argue they should play roles in many phage-bacteria systems providing sustainable diversity. |
1512.05573 | Tom Michoel | Christopher J. Banks, Anagha Joshi, Tom Michoel | Functional transcription factor target discovery via compendia of
binding and expression profiles | 15 pages + 8 pages supplementary material; 6 figures, 6 supplementary
figures, 5 supplementary tables | null | 10.1038/srep20649 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genome-wide experiments to map the DNA-binding locations of
transcription-associated factors (TFs) have shown that the number of genes
bound by a TF far exceeds the number of possible direct target genes.
Distinguishing functional from non-functional binding is therefore a major
challenge in the study of transcriptional regulation. We hypothesized that
functional targets can be discovered by correlating binding and expression
profiles across multiple experimental conditions. To test this hypothesis, we
obtained ChIP-seq and RNA-seq data from matching cell types from the human
ENCODE resource, considered promoter-proximal and distal cumulative regulatory
models to map binding sites to genes, and used a combination of linear and
non-linear measures to correlate binding and expression data. We found that a
high degree of correlation between a gene's TF-binding and expression profiles
was significantly more predictive of the gene being differentially expressed
upon knockdown of that TF, compared to using binding sites in the cell type of
interest only. Remarkably, TF targets predicted from correlation across a
compendium of cell types were also predictive of functional targets in other
cell types. Finally, correlation across a time course of ChIP-seq and RNA-seq
experiments was also predictive of functional TF targets in that tissue.
| [
{
"created": "Thu, 17 Dec 2015 13:21:56 GMT",
"version": "v1"
}
] | 2022-11-29 | [
[
"Banks",
"Christopher J.",
""
],
[
"Joshi",
"Anagha",
""
],
[
"Michoel",
"Tom",
""
]
] | Genome-wide experiments to map the DNA-binding locations of transcription-associated factors (TFs) have shown that the number of genes bound by a TF far exceeds the number of possible direct target genes. Distinguishing functional from non-functional binding is therefore a major challenge in the study of transcriptional regulation. We hypothesized that functional targets can be discovered by correlating binding and expression profiles across multiple experimental conditions. To test this hypothesis, we obtained ChIP-seq and RNA-seq data from matching cell types from the human ENCODE resource, considered promoter-proximal and distal cumulative regulatory models to map binding sites to genes, and used a combination of linear and non-linear measures to correlate binding and expression data. We found that a high degree of correlation between a gene's TF-binding and expression profiles was significantly more predictive of the gene being differentially expressed upon knockdown of that TF, compared to using binding sites in the cell type of interest only. Remarkably, TF targets predicted from correlation across a compendium of cell types were also predictive of functional targets in other cell types. Finally, correlation across a time course of ChIP-seq and RNA-seq experiments was also predictive of functional TF targets in that tissue. |
1109.5488 | Edlira Nano | Beno\^it Valot, Olivier Langella, Edlira Nano, Michel Zivy | MassChroQ: A versatile tool for mass spectrometry quantification | 23 pages, 8 figures | Proteomics, volume 11, issue 17, pages 3572-3577, September 2011 | 10.1002/pmic.201100120 | null | q-bio.QM cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, many software tools have been developed to perform quantification
in LC-MS analyses. However, most of them are specific to either a
quantification strategy (e.g. label-free or isotopic labelling) or a
mass-spectrometry system (e.g. high or low resolution).
In this context, we have developed MassChroQ, a versatile software that
performs LC-MS data alignment and peptide quantification by peak area
integration on extracted ion chromatograms. MassChroQ is suitable for
quantification with or without labelling and is not limited to high resolution
systems. Peptides of interest (for example all the identified peptides) can be
determined automatically or manually by providing targeted m/z and retention
time values. It can handle large experiments that include protein or peptide
fractionation (as SDS-PAGE, 2D-LC). It is fully configurable. Every processing
step is traceable, the produced data are in open standard format and its
modularity allows easy integration into proteomic pipelines. The output results
are ready for use in statistical analyses.
Evaluation of MassChroQ on complex label-free data obtained from low and high
resolution mass spectrometers showed low CVs for technical reproducibility
(1.4%) and high coefficients of correlation to protein quantity (0.98).
MassChroQ is freely available under the GNU General Public Licence v3.0 at
http://pappso.inra.fr/bioinfo/masschroq/.
| [
{
"created": "Mon, 26 Sep 2011 08:52:55 GMT",
"version": "v1"
}
] | 2011-09-27 | [
[
"Valot",
"Benoît",
""
],
[
"Langella",
"Olivier",
""
],
[
"Nano",
"Edlira",
""
],
[
"Zivy",
"Michel",
""
]
] | Recently, many software tools have been developed to perform quantification in LC-MS analyses. However, most of them are specific to either a quantification strategy (e.g. label-free or isotopic labelling) or a mass-spectrometry system (e.g. high or low resolution). In this context, we have developed MassChroQ, a versatile software that performs LC-MS data alignment and peptide quantification by peak area integration on extracted ion chromatograms. MassChroQ is suitable for quantification with or without labelling and is not limited to high resolution systems. Peptides of interest (for example all the identified peptides) can be determined automatically or manually by providing targeted m/z and retention time values. It can handle large experiments that include protein or peptide fractionation (as SDS-PAGE, 2D-LC). It is fully configurable. Every processing step is traceable, the produced data are in open standard format and its modularity allows easy integration into proteomic pipelines. The output results are ready for use in statistical analyses. Evaluation of MassChroQ on complex label-free data obtained from low and high resolution mass spectrometers showed low CVs for technical reproducibility (1.4%) and high coefficients of correlation to protein quantity (0.98). MassChroQ is freely available under the GNU General Public Licence v3.0 at http://pappso.inra.fr/bioinfo/masschroq/. |
q-bio/0407034 | Nils Bl\"uthgen | Nils Bl\"uthgen, Karsten Brand, Branka \v{C}ajavec, Maciej Swat,
Hanspeter Herzel, Dieter Beule | Biological Profiling of Gene Groups utilizing Gene Ontology | supplement http://gossip.gene-groups.net/ | Genome Informatics 2005;16(1):106-15. | null | http://www.jsbi.org/journal/IBSB05/IBSB05F015.pdf | q-bio.GN q-bio.MN | null | Increasingly used high throughput experimental techniques, like DNA or
protein microarrays give as a result groups of interesting, e.g. differentially
regulated genes which require further biological interpretation. With the
systematic functional annotation provided by the Gene Ontology the information
required to automate the interpretation task is now accessible. However, the
determination of statistically significant e.g. molecular functions within these
groups is still an open question. In answering this question, multiple testing
issues must be taken into account to avoid misleading results. Here we present
a statistical framework that tests whether functions, processes or locations
described in the Gene Ontology are significantly enriched within a group of
interesting genes when compared to a reference group. First we define an exact
analytical expression for the expected number of false positives that allows us
to calculate adjusted p-values to control the false discovery rate. Next, we
demonstrate and discuss the capabilities of our approach using publicly
available microarray data on cell-cycle regulated genes. Further, we analyze
the robustness of our framework with respect to the exact gene group
composition and compare the performance with earlier approaches. The software
package GOSSIP implements our method and is made freely available at
http://gossip.gene-groups.net/
| [
{
"created": "Mon, 26 Jul 2004 12:45:10 GMT",
"version": "v1"
},
{
"created": "Thu, 19 May 2005 12:22:12 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Blüthgen",
"Nils",
""
],
[
"Brand",
"Karsten",
""
],
[
"Čajavec",
"Branka",
""
],
[
"Swat",
"Maciej",
""
],
[
"Herzel",
"Hanspeter",
""
],
[
"Beule",
"Dieter",
""
]
] | Increasingly used high throughput experimental techniques, like DNA or protein microarrays give as a result groups of interesting, e.g. differentially regulated genes which require further biological interpretation. With the systematic functional annotation provided by the Gene Ontology the information required to automate the interpretation task is now accessible. However, the determination of statistically significant e.g. molecular functions within these groups is still an open question. In answering this question, multiple testing issues must be taken into account to avoid misleading results. Here we present a statistical framework that tests whether functions, processes or locations described in the Gene Ontology are significantly enriched within a group of interesting genes when compared to a reference group. First we define an exact analytical expression for the expected number of false positives that allows us to calculate adjusted p-values to control the false discovery rate. Next, we demonstrate and discuss the capabilities of our approach using publicly available microarray data on cell-cycle regulated genes. Further, we analyze the robustness of our framework with respect to the exact gene group composition and compare the performance with earlier approaches. The software package GOSSIP implements our method and is made freely available at http://gossip.gene-groups.net/ |
1303.6231 | Michael Hinczewski | Michael Hinczewski, J. Christof M. Gebhardt, Matthias Rief, D.
Thirumalai | From mechanical folding trajectories to intrinsic energy landscapes of
biopolymers | Main text: 10 pages, 5 figures; SI: 23 pages, 7 figures | Proc. Natl. Acad. Sci. 110, 4500 (2013) | 10.1073/pnas.1214051110 | null | q-bio.BM cond-mat.soft cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In single molecule laser optical tweezer (LOT) pulling experiments a protein
or RNA is juxtaposed between DNA handles that are attached to beads in optical
traps. The LOT generates folding trajectories under force in terms of
time-dependent changes in the distance between the beads. How to construct the
full intrinsic folding landscape (without the handles and the beads) from the
measured time series is a major unsolved problem. By using rigorous theoretical
methods---which account for fluctuations of the DNA handles, rotation of the
optical beads, variations in applied tension due to finite trap stiffness, as
well as environmental noise and the limited bandwidth of the apparatus---we
provide a tractable method to derive intrinsic free energy profiles. We
validate the method by showing that the exactly calculable intrinsic free
energy profile for a Generalized Rouse Model, which mimics the two-state
behavior in nucleic acid hairpins, can be accurately extracted from simulated
time series in a LOT setup regardless of the stiffness of the handles. We next
apply the approach to trajectories from coarse grained LOT molecular
simulations of a coiled-coil protein based on the GCN4 leucine zipper, and
obtain a free energy landscape that is in quantitative agreement with
simulations performed without the beads and handles. Finally, we extract the
intrinsic free energy landscape from experimental LOT measurements for the
leucine zipper, which is independent of the trap parameters.
| [
{
"created": "Mon, 25 Mar 2013 17:52:06 GMT",
"version": "v1"
}
] | 2013-03-28 | [
[
"Hinczewski",
"Michael",
""
],
[
"Gebhardt",
"J. Christof M.",
""
],
[
"Rief",
"Matthias",
""
],
[
"Thirumalai",
"D.",
""
]
] | In single molecule laser optical tweezer (LOT) pulling experiments a protein or RNA is juxtaposed between DNA handles that are attached to beads in optical traps. The LOT generates folding trajectories under force in terms of time-dependent changes in the distance between the beads. How to construct the full intrinsic folding landscape (without the handles and the beads) from the measured time series is a major unsolved problem. By using rigorous theoretical methods---which account for fluctuations of the DNA handles, rotation of the optical beads, variations in applied tension due to finite trap stiffness, as well as environmental noise and the limited bandwidth of the apparatus---we provide a tractable method to derive intrinsic free energy profiles. We validate the method by showing that the exactly calculable intrinsic free energy profile for a Generalized Rouse Model, which mimics the two-state behavior in nucleic acid hairpins, can be accurately extracted from simulated time series in a LOT setup regardless of the stiffness of the handles. We next apply the approach to trajectories from coarse grained LOT molecular simulations of a coiled-coil protein based on the GCN4 leucine zipper, and obtain a free energy landscape that is in quantitative agreement with simulations performed without the beads and handles. Finally, we extract the intrinsic free energy landscape from experimental LOT measurements for the leucine zipper, which is independent of the trap parameters. |
1912.07668 | Fabian Filipp | Simar Singh, Raj Shah, Suzie Chen, Fabian V. Filipp | Targeting glutamate metabolism in melanoma | 6 figures | null | null | null | q-bio.QM q-bio.GN q-bio.MN q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | The glutamate metabotropic receptor 1 (GRM1) drives oncogenesis when
aberrantly activated in melanoma and several other cancers. Metabolomics
reveals that patient-derived xenografts with GRM1-positive melanoma tumors
exhibit elevated plasma glutamate levels associated with metastatic melanoma in
vivo. Stable isotope tracing and GCMS analysis determined that cells expressing
GRM1 fuel a substantial fraction of glutamate from glycolytic carbon.
Stimulation of GRM1 by glutamate leads to activation of mitogenic signaling
pathways, which in turn increases the production of glutamate, fueling
autocrine feedback. Implementing a rational drug-targeting strategy, we
critically evaluate metabolic bottlenecks in vitro and in vivo. Combined
inhibition of glutamate secretion and biosynthesis is an effective rational
drug targeting strategy suppressing tumor growth and restricting tumor
bioavailability of glutamate.
| [
{
"created": "Mon, 16 Dec 2019 19:57:53 GMT",
"version": "v1"
}
] | 2019-12-18 | [
[
"Singh",
"Simar",
""
],
[
"Shah",
"Raj",
""
],
[
"Chen",
"Suzie",
""
],
[
"Filipp",
"Fabian V.",
""
]
] | The glutamate metabotropic receptor 1 (GRM1) drives oncogenesis when aberrantly activated in melanoma and several other cancers. Metabolomics reveals that patient-derived xenografts with GRM1-positive melanoma tumors exhibit elevated plasma glutamate levels associated with metastatic melanoma in vivo. Stable isotope tracing and GCMS analysis determined that cells expressing GRM1 fuel a substantial fraction of glutamate from glycolytic carbon. Stimulation of GRM1 by glutamate leads to activation of mitogenic signaling pathways, which in turn increases the production of glutamate, fueling autocrine feedback. Implementing a rational drug-targeting strategy, we critically evaluate metabolic bottlenecks in vitro and in vivo. Combined inhibition of glutamate secretion and biosynthesis is an effective rational drug targeting strategy suppressing tumor growth and restricting tumor bioavailability of glutamate. |
2302.07955 | Yue Wang | Yue Wang | Mathematical models for order of mutation problem in myeloproliferative
neoplasm: non-additivity and non-commutativity | Highly incomplete work that needs extensive revision | null | null | null | q-bio.PE q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | In some patients with myeloproliferative neoplasm, two genetic mutations can be
found: JAK2 V617F and TET2. When one mutation is present or not, the other
mutation has different effects on regulating gene expressions. Besides, when
both mutations are present, the order of occurrence might make a difference. In
this paper, we build nonlinear ordinary differential equation models and Markov
chain models to explain such phenomena.
| [
{
"created": "Thu, 16 Feb 2023 18:06:14 GMT",
"version": "v1"
},
{
"created": "Thu, 16 Mar 2023 20:23:41 GMT",
"version": "v2"
}
] | 2023-03-20 | [
[
"Wang",
"Yue",
""
]
] | In some patients with myeloproliferative neoplasm, two genetic mutations can be found: JAK2 V617F and TET2. When one mutation is present or not, the other mutation has different effects on regulating gene expressions. Besides, when both mutations are present, the order of occurrence might make a difference. In this paper, we build nonlinear ordinary differential equation models and Markov chain models to explain such phenomena. |
1601.03684 | Rui J. Costa | Rui J. Costa and Hilde Wilkinson-Herbots | Efficient Maximum-Likelihood Inference For The
Isolation-With-Initial-Migration Model With Potentially Asymmetric Gene Flow | Computer code in R included in the ancillary files | null | null | null | q-bio.PE q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The isolation-with-migration (IM) model is a common tool to make inferences
about the presence of gene flow during speciation, using polymorphism data.
However, Becquet and Przeworski (2009) report that the parameter estimates
obtained by fitting the IM model are very sensitive to the model's assumptions,
including the assumption of constant gene flow until the present. This paper is
concerned with the isolation-with-initial-migration (IIM) model of
Wilkinson-Herbots (2012), which drops precisely this assumption. In the IIM
model, one ancestral population divides into two descendant subpopulations,
between which there is an initial period of gene flow and a subsequent period
of isolation. We derive a fast method of fitting an extended version of the IIM
model, which allows for asymmetric gene flow and unequal subpopulation sizes.
This is a maximum-likelihood method, applicable to observations on the number
of different sites between pairs of DNA sequences from a large number of
independent loci. In addition to obtaining parameter estimates, our method can
also be used to distinguish between alternative models representing different
evolutionary scenarios, by means of likelihood ratio tests. We illustrate the
procedure on pairs of Drosophila sequences from approximately 30,000 loci. The
computing time needed to fit the most complex version of the model to this data
set is only a couple of minutes. The code to fit the IIM model can be found in
the supplementary files of this paper.
| [
{
"created": "Thu, 14 Jan 2016 18:08:08 GMT",
"version": "v1"
}
] | 2016-01-15 | [
[
"Costa",
"Rui J.",
""
],
[
"Wilkinson-Herbots",
"Hilde",
""
]
] | The isolation-with-migration (IM) model is a common tool to make inferences about the presence of gene flow during speciation, using polymorphism data. However, Becquet and Przeworski (2009) report that the parameter estimates obtained by fitting the IM model are very sensitive to the model's assumptions, including the assumption of constant gene flow until the present. This paper is concerned with the isolation-with-initial-migration (IIM) model of Wilkinson-Herbots (2012), which drops precisely this assumption. In the IIM model, one ancestral population divides into two descendant subpopulations, between which there is an initial period of gene flow and a subsequent period of isolation. We derive a fast method of fitting an extended version of the IIM model, which allows for asymmetric gene flow and unequal subpopulation sizes. This is a maximum-likelihood method, applicable to observations on the number of different sites between pairs of DNA sequences from a large number of independent loci. In addition to obtaining parameter estimates, our method can also be used to distinguish between alternative models representing different evolutionary scenarios, by means of likelihood ratio tests. We illustrate the procedure on pairs of Drosophila sequences from approximately 30,000 loci. The computing time needed to fit the most complex version of the model to this data set is only a couple of minutes. The code to fit the IIM model can be found in the supplementary files of this paper. |
1707.06569 | Axel G. Rossberg | Christopher P. Lynam and Axel G. Rossberg | New univariate characterization of fish community size structure
improves precision beyond the Large Fish Indicator | 7 pages, 3 figures (acknowledgements included) | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The size structure of fish-communities is an emergent high-level property of
marine food webs responsive to changes in structure and function. To measure
this food web property using data arising from routine fisheries surveys, a
simple metric known as Typical Length has been proposed as more suitable than
the Large Fish Indicator, which has been highly engineered to be responsive to
fishing pressure. Typical Length avoids the inherent dependence of the Large
Fish Indicator on a parameter that requires case-by-case adjustments. Using
IBTS survey time series for five spatial subdivisions of the Greater North Sea,
we show that the Typical Length can provide information equivalent to the Large
Fish Indicator when fishing is likely the strongest driver, but differences can
also arise. In this example, Typical Length exhibits smaller random
fluctuations ("noise") than the Large Fish Indicator. Typical Length is also
more adaptable than the Large Fish Indicator and can be easily applied to
monitor pelagic fish in addition to demersal fish, and together with
information on the potential growth of the fish community, a proxy of which can
be derived from the Mean Maximum Length indicator, it is possible to partition
change in community composition from change in size structure. This suggests
that Typical Length is an improvement over the Large Fish Indicator as a food
web indicator with the potential to offer further insight when considered in
conjunction with indicators of community composition.
| [
{
"created": "Thu, 20 Jul 2017 15:25:58 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Sep 2017 15:40:26 GMT",
"version": "v2"
}
] | 2017-09-11 | [
[
"Lynam",
"Christopher P.",
""
],
[
"Rossberg",
"Axel G.",
""
]
] | The size structure of fish-communities is an emergent high-level property of marine food webs responsive to changes in structure and function. To measure this food web property using data arising from routine fisheries surveys, a simple metric known as Typical Length has been proposed as more suitable than the Large Fish Indicator, which has been highly engineered to be responsive to fishing pressure. Typical Length avoids the inherent dependence of the Large Fish Indicator on a parameter that requires case-by-case adjustments. Using IBTS survey time series for five spatial subdivisions of the Greater North Sea, we show that the Typical Length can provide information equivalent to the Large Fish Indicator when fishing is likely the strongest driver, but differences can also arise. In this example, Typical Length exhibits smaller random fluctuations ("noise") than the Large Fish Indicator. Typical Length is also more adaptable than the Large Fish Indicator and can be easily applied to monitor pelagic fish in addition to demersal fish, and together with information on the potential growth of the fish community, a proxy of which can be derived from the Mean Maximum Length indicator, it is possible to partition change in community composition from change in size structure. This suggests that Typical Length is an improvement over the Large Fish Indicator as a food web indicator with the potential to offer further insight when considered in conjunction with indicators of community composition. |
2001.03567 | Pietro Cicuta | Pietro Cicuta | The use of biophysical approaches to understand ciliary beating | a review of 11 pages, Submitted to Biochemical Society Transactions | null | null | null | q-bio.TO cond-mat.soft nlin.AO physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motile cilia are a striking example of functional cellular organelle,
conserved across all the eukaryotic species. Motile cilia allow swimming of
cells and small organisms and transport of liquids across epithelial tissues.
Whilst the molecular structure is now very well understood, the dynamics of
cilia is not well established either at the single cilium level or at the
level of collective beating. Indeed, a full understanding of this requires
connecting together behaviour across various lengthscales, from the molecular
to the organelle, then at cellular level and up to the tissue scale. Aside from
the fundamental interest in this system, understanding beating is important to
elucidate aspects of embryonic development and a variety of health conditions
from fertility to genetic and infectious diseases of the airways.
| [
{
"created": "Fri, 10 Jan 2020 17:18:46 GMT",
"version": "v1"
}
] | 2020-01-13 | [
[
"Cicuta",
"Pietro",
""
]
] | Motile cilia are a striking example of functional cellular organelle, conserved across all the eukaryotic species. Motile cilia allow swimming of cells and small organisms and transport of liquids across epithelial tissues. Whilst the molecular structure is now very well understood, the dynamics of cilia is not well established either at the single cilium level or at the level of collective beating. Indeed, a full understanding of this requires connecting together behaviour across various lengthscales, from the molecular to the organelle, then at cellular level and up to the tissue scale. Aside from the fundamental interest in this system, understanding beating is important to elucidate aspects of embryonic development and a variety of health conditions from fertility to genetic and infectious diseases of the airways. |
2401.05370 | Michael Keiser | Mahdi Ghorbani, Leo Gendelev, Paul Beroza, Michael J. Keiser | Autoregressive fragment-based diffusion for pocket-aware ligand design | Accepted, NeurIPS 2023 Generative AI and Biology Workshop.
OpenReview: https://openreview.net/forum?id=E3HN48zjam | null | null | null | q-bio.BM cs.AI cs.LG physics.chem-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we introduce AutoFragDiff, a fragment-based autoregressive
diffusion model for generating 3D molecular structures conditioned on target
protein structures. We employ geometric vector perceptrons to predict atom
types and spatial coordinates of new molecular fragments conditioned on
molecular scaffolds and protein pockets. Our approach improves the local
geometry of the resulting 3D molecules while maintaining high predicted binding
affinity to protein targets. The model can also perform scaffold extension from
a user-provided starting molecular scaffold.
| [
{
"created": "Fri, 15 Dec 2023 04:03:03 GMT",
"version": "v1"
}
] | 2024-01-12 | [
[
"Ghorbani",
"Mahdi",
""
],
[
"Gendelev",
"Leo",
""
],
[
"Beroza",
"Paul",
""
],
[
"Keiser",
"Michael J.",
""
]
] | In this work, we introduce AutoFragDiff, a fragment-based autoregressive diffusion model for generating 3D molecular structures conditioned on target protein structures. We employ geometric vector perceptrons to predict atom types and spatial coordinates of new molecular fragments conditioned on molecular scaffolds and protein pockets. Our approach improves the local geometry of the resulting 3D molecules while maintaining high predicted binding affinity to protein targets. The model can also perform scaffold extension from a user-provided starting molecular scaffold. |
2012.09660 | Giuseppe de Vito | Giuseppe de Vito, Lapo Turrini, Caroline M\"ullenbroich, Pietro Ricci,
Giuseppe Sancataldo, Giacomo Mazzamuto, Natascia Tiso, Leonardo Sacconi,
Duccio Fanelli, Ludovico Silvestri, Francesco Vanzi, Francesco Saverio Pavone | Fast whole-brain imaging of seizures in zebrafish larvae by two-photon
light-sheet microscopy | Replacement: accepted version of the manuscript, to be published in
Biomedical Optics Express. 36 pages, 15 figures | null | 10.1364/BOE.434146 | null | q-bio.QM q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Light-sheet fluorescence microscopy (LSFM) enables real-time whole-brain
functional imaging in zebrafish larvae. Conventional one photon LSFM can
however induce undesirable visual stimulation due to the use of visible
excitation light. The use of two-photon (2P) excitation, employing
near-infrared invisible light, provides unbiased investigation of neuronal
circuit dynamics. However, due to the low efficiency of the 2P absorption
process, the imaging speed of this technique is typically limited by the
signal-to-noise-ratio. Here, we describe a 2P LSFM setup designed for
non-invasive imaging that enables quintuplicating the state-of-the-art
volumetric acquisition rate of the larval zebrafish brain (5 Hz) while keeping
the laser intensity on the specimen low. We applied our system to the study of
pharmacologically-induced acute seizures, characterizing the spatial-temporal
dynamics of pathological activity and describing for the first time the
appearance of caudo-rostral ictal waves (CRIWs).
| [
{
"created": "Thu, 17 Dec 2020 15:19:00 GMT",
"version": "v1"
},
{
"created": "Fri, 1 Jan 2021 23:13:09 GMT",
"version": "v2"
},
{
"created": "Fri, 25 Jun 2021 22:21:55 GMT",
"version": "v3"
},
{
"created": "Mon, 29 Nov 2021 17:02:10 GMT",
"version": "v4"
}
] | 2021-11-30 | [
[
"de Vito",
"Giuseppe",
""
],
[
"Turrini",
"Lapo",
""
],
[
"Müllenbroich",
"Caroline",
""
],
[
"Ricci",
"Pietro",
""
],
[
"Sancataldo",
"Giuseppe",
""
],
[
"Mazzamuto",
"Giacomo",
""
],
[
"Tiso",
"Natascia",
""
],
[
"Sacconi",
"Leonardo",
""
],
[
"Fanelli",
"Duccio",
""
],
[
"Silvestri",
"Ludovico",
""
],
[
"Vanzi",
"Francesco",
""
],
[
"Pavone",
"Francesco Saverio",
""
]
] | Light-sheet fluorescence microscopy (LSFM) enables real-time whole-brain functional imaging in zebrafish larvae. Conventional one photon LSFM can however induce undesirable visual stimulation due to the use of visible excitation light. The use of two-photon (2P) excitation, employing near-infrared invisible light, provides unbiased investigation of neuronal circuit dynamics. However, due to the low efficiency of the 2P absorption process, the imaging speed of this technique is typically limited by the signal-to-noise-ratio. Here, we describe a 2P LSFM setup designed for non-invasive imaging that enables quintupling the state-of-the-art volumetric acquisition rate of the larval zebrafish brain (5 Hz) while keeping the laser intensity on the specimen low. We applied our system to the study of pharmacologically-induced acute seizures, characterizing the spatial-temporal dynamics of pathological activity and describing for the first time the appearance of caudo-rostral ictal waves (CRIWs). |
2005.03580 | Johannes K\"ohler | Johannes K\"ohler, Lukas Schwenkel, Anne Koch, Julian Berberich,
Patricia Pauli, Frank Allg\"ower | Robust and optimal predictive control of the COVID-19 outbreak | This is the accepted version of the paper in Annual Reviews in
Control, 2020 | Annual Reviews in Control (2020) | 10.1016/j.arcontrol.2020.11.002 | null | q-bio.PE cs.SY eess.SY math.OC physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate adaptive strategies to robustly and optimally control the
COVID-19 pandemic via social distancing measures based on the example of
Germany. Our goal is to minimize the number of fatalities over the course of
two years without inducing excessive social costs. We consider a tailored model
of the German COVID-19 outbreak with different parameter sets to design and
validate our approach. Our analysis reveals that an open-loop optimal control
policy can significantly decrease the number of fatalities when compared to
simpler policies under the assumption of exact model knowledge. In a more
realistic scenario with uncertain data and model mismatch, a feedback strategy
that updates the policy weekly using model predictive control (MPC) leads to a
reliable performance, even when applied to a validation model with deviant
parameters. On top of that, we propose a robust MPC-based feedback policy using
interval arithmetic that adapts the social distancing measures cautiously and
safely, thus leading to a minimum number of fatalities even if measurements are
inaccurate and the infection rates cannot be precisely specified by social
distancing. Our theoretical findings support various recent studies by showing
that 1) adaptive feedback strategies are required to reliably contain the
COVID-19 outbreak, 2) well-designed policies can significantly reduce the
number of fatalities compared to simpler ones while keeping the amount of
social distancing measures on the same level, and 3) imposing stronger social
distancing measures early on is more effective and cheaper in the long run than
opening up too soon and restoring stricter measures at a later time.
| [
{
"created": "Thu, 7 May 2020 16:10:30 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Feb 2021 11:32:48 GMT",
"version": "v2"
}
] | 2021-02-09 | [
[
"Köhler",
"Johannes",
""
],
[
"Schwenkel",
"Lukas",
""
],
[
"Koch",
"Anne",
""
],
[
"Berberich",
"Julian",
""
],
[
"Pauli",
"Patricia",
""
],
[
"Allgöwer",
"Frank",
""
]
] | We investigate adaptive strategies to robustly and optimally control the COVID-19 pandemic via social distancing measures based on the example of Germany. Our goal is to minimize the number of fatalities over the course of two years without inducing excessive social costs. We consider a tailored model of the German COVID-19 outbreak with different parameter sets to design and validate our approach. Our analysis reveals that an open-loop optimal control policy can significantly decrease the number of fatalities when compared to simpler policies under the assumption of exact model knowledge. In a more realistic scenario with uncertain data and model mismatch, a feedback strategy that updates the policy weekly using model predictive control (MPC) leads to a reliable performance, even when applied to a validation model with deviant parameters. On top of that, we propose a robust MPC-based feedback policy using interval arithmetic that adapts the social distancing measures cautiously and safely, thus leading to a minimum number of fatalities even if measurements are inaccurate and the infection rates cannot be precisely specified by social distancing. Our theoretical findings support various recent studies by showing that 1) adaptive feedback strategies are required to reliably contain the COVID-19 outbreak, 2) well-designed policies can significantly reduce the number of fatalities compared to simpler ones while keeping the amount of social distancing measures on the same level, and 3) imposing stronger social distancing measures early on is more effective and cheaper in the long run than opening up too soon and restoring stricter measures at a later time. |
1612.07897 | Mainak Pal | Indrani Bose and Mainak Pal | Criticality in Cell Differentiation | 16 pages | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cell differentiation is an important process in living organisms.
Differentiation is mostly based on binary decisions with the progenitor cells
choosing between two specific lineages. The differentiation dynamics have both
deterministic and stochastic components. Several theoretical studies suggest
that cell differentiation is a bifurcation phenomenon, well-known in dynamical
systems theory. The bifurcation point has the character of a critical point
with the system dynamics exhibiting specific features in its vicinity. These
include the critical slowing down, rising variance and lag-1 autocorrelation
function, strong correlations between the fluctuations of key variables and
non-Gaussianity in the distribution of fluctuations. Recent experimental
studies provide considerable support to the idea of criticality in cell
differentiation and in other biological processes like the development of the
fruit fly embryo. In this Review, an elementary introduction is given to the
concept of criticality in cell differentiation. The correspondence between the
signatures of criticality and experimental observations on blood cell
differentiation in mice is further highlighted.
| [
{
"created": "Fri, 23 Dec 2016 08:04:10 GMT",
"version": "v1"
}
] | 2016-12-26 | [
[
"Bose",
"Indrani",
""
],
[
"Pal",
"Mainak",
""
]
] | Cell differentiation is an important process in living organisms. Differentiation is mostly based on binary decisions with the progenitor cells choosing between two specific lineages. The differentiation dynamics have both deterministic and stochastic components. Several theoretical studies suggest that cell differentiation is a bifurcation phenomenon, well-known in dynamical systems theory. The bifurcation point has the character of a critical point with the system dynamics exhibiting specific features in its vicinity. These include the critical slowing down, rising variance and lag-1 autocorrelation function, strong correlations between the fluctuations of key variables and non-Gaussianity in the distribution of fluctuations. Recent experimental studies provide considerable support to the idea of criticality in cell differentiation and in other biological processes like the development of the fruit fly embryo. In this Review, an elementary introduction is given to the concept of criticality in cell differentiation. The correspondence between the signatures of criticality and experimental observations on blood cell differentiation in mice is further highlighted. |
2204.01087 | Amin Dehghani | Amin Dehghani, Hamid Soltanian-Zadeh, Gholam-Ali Hossein-Zadeh | Neural modulation enhancement using connectivity-based EEG neurofeedback
with simultaneous fMRI for emotion regulation | 12pages, 5 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Emotion regulation plays a key role in human behavior and overall well-being.
Neurofeedback is a non-invasive self-brain training technique used for emotion
regulation to enhance brain function and treatment of mental disorders through
behavioral changes. Previous neurofeedback research often focused on using
activity from a single brain region as measured by fMRI or power from one or
two EEG electrodes. In a new study, we employed connectivity-based EEG
neurofeedback through recalling positive autobiographical memories and
simultaneous fMRI to upregulate positive emotion. In our novel approach, the
feedback was determined by the coherence of EEG electrodes rather than the
power of one or two electrodes. We compared the efficiency of this
connectivity-based neurofeedback to traditional activity-based neurofeedback
through multiple experiments. The results showed that connectivity-based
neurofeedback effectively improved BOLD signal change and connectivity in key
emotion regulation regions such as the amygdala, thalamus, and insula, and
increased EEG frontal asymmetry, which is a biomarker for emotion regulation
and treatment of mental disorders such as PTSD, anxiety, and depression, as
well as coherence among EEG channels. The psychometric evaluations conducted both
before and after the neurofeedback experiments revealed that participants
demonstrated improvements in enhancing positive emotions and reducing negative
emotions when utilizing connectivity-based neurofeedback, as compared to
traditional activity-based and sham neurofeedback approaches. These findings
suggest that connectivity-based neurofeedback may be a superior method for
regulating emotions and could be a useful alternative therapy for mental
disorders, providing individuals with greater control over their brain and
mental functions.
| [
{
"created": "Sun, 3 Apr 2022 15:06:46 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Aug 2023 15:27:49 GMT",
"version": "v2"
},
{
"created": "Fri, 3 Nov 2023 01:59:56 GMT",
"version": "v3"
}
] | 2023-11-06 | [
[
"Dehghani",
"Amin",
""
],
[
"Soltanian-Zadeh",
"Hamid",
""
],
[
"Hossein-Zadeh",
"Gholam-Ali",
""
]
] | Emotion regulation plays a key role in human behavior and overall well-being. Neurofeedback is a non-invasive self-brain training technique used for emotion regulation to enhance brain function and treatment of mental disorders through behavioral changes. Previous neurofeedback research often focused on using activity from a single brain region as measured by fMRI or power from one or two EEG electrodes. In a new study, we employed connectivity-based EEG neurofeedback through recalling positive autobiographical memories and simultaneous fMRI to upregulate positive emotion. In our novel approach, the feedback was determined by the coherence of EEG electrodes rather than the power of one or two electrodes. We compared the efficiency of this connectivity-based neurofeedback to traditional activity-based neurofeedback through multiple experiments. The results showed that connectivity-based neurofeedback effectively improved BOLD signal change and connectivity in key emotion regulation regions such as the amygdala, thalamus, and insula, and increased EEG frontal asymmetry, which is a biomarker for emotion regulation and treatment of mental disorders such as PTSD, anxiety, and depression, as well as coherence among EEG channels. The psychometric evaluations conducted both before and after the neurofeedback experiments revealed that participants demonstrated improvements in enhancing positive emotions and reducing negative emotions when utilizing connectivity-based neurofeedback, as compared to traditional activity-based and sham neurofeedback approaches. These findings suggest that connectivity-based neurofeedback may be a superior method for regulating emotions and could be a useful alternative therapy for mental disorders, providing individuals with greater control over their brain and mental functions. |
2104.04283 | Masataka Kuwamura | Masataka Kuwamura, Hirofumi Izuhara, Shin-ichiro Ei | Oscillations and Bifurcation Structure of Reaction-Diffusion Model for
Cell Polarity Formation | 41 pages, 9 figures | null | 10.1007/s00285-022-01723-5 | null | q-bio.CB math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the oscillatory dynamics and bifurcation structure of a
reaction-diffusion system with bistable nonlinearity and mass conservation,
which was proposed by [Otsuji et al, PLoS Comp. Biol. 3 (2007), e108]. The
system is a useful model for understanding cell polarity formation. We show
that this model exhibits four different spatiotemporal patterns including two
types of oscillatory patterns, which can be regarded as cell polarity
oscillations with the reversal and non-reversal of polarity, respectively. The
trigger causing these patterns is a diffusion-driven (Turing-like) instability.
Moreover, we investigate the effects of extracellular signals on the cell
polarity oscillations.
| [
{
"created": "Fri, 9 Apr 2021 10:06:18 GMT",
"version": "v1"
},
{
"created": "Sat, 29 Jan 2022 00:08:09 GMT",
"version": "v2"
},
{
"created": "Wed, 23 Feb 2022 02:11:55 GMT",
"version": "v3"
}
] | 2022-03-01 | [
[
"Kuwamura",
"Masataka",
""
],
[
"Izuhara",
"Hirofumi",
""
],
[
"Ei",
"Shin-ichiro",
""
]
] | We investigate the oscillatory dynamics and bifurcation structure of a reaction-diffusion system with bistable nonlinearity and mass conservation, which was proposed by [Otsuji et al, PLoS Comp. Biol. 3 (2007), e108]. The system is a useful model for understanding cell polarity formation. We show that this model exhibits four different spatiotemporal patterns including two types of oscillatory patterns, which can be regarded as cell polarity oscillations with the reversal and non-reversal of polarity, respectively. The trigger causing these patterns is a diffusion-driven (Turing-like) instability. Moreover, we investigate the effects of extracellular signals on the cell polarity oscillations. |
2210.08052 | Heng Li | Heng Li | Protein-to-genome alignment with miniprot | 6 pages, 1 table | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Motivation: Protein-to-genome alignment is critical to annotating genes in
non-model organisms. While there are a few tools for this purpose, all of them
were developed over ten years ago and did not incorporate the latest advances
in alignment algorithms. They are inefficient and could not keep up with the
rapid production of new genomes and quickly growing protein databases.
Results: Here we describe miniprot, a new aligner for mapping protein
sequences to a complete genome. Miniprot integrates recent techniques such as
k-mer sketch and SIMD-based dynamic programming. It is tens of times faster
than existing tools while achieving comparable accuracy on real data.
Availability and implementation: https://github.com/lh3/miniprot
| [
{
"created": "Fri, 14 Oct 2022 18:43:47 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Dec 2022 16:13:27 GMT",
"version": "v2"
}
] | 2022-12-29 | [
[
"Li",
"Heng",
""
]
] | Motivation: Protein-to-genome alignment is critical to annotating genes in non-model organisms. While there are a few tools for this purpose, all of them were developed over ten years ago and did not incorporate the latest advances in alignment algorithms. They are inefficient and could not keep up with the rapid production of new genomes and quickly growing protein databases. Results: Here we describe miniprot, a new aligner for mapping protein sequences to a complete genome. Miniprot integrates recent techniques such as k-mer sketch and SIMD-based dynamic programming. It is tens of times faster than existing tools while achieving comparable accuracy on real data. Availability and implementation: https://github.com/lh3/miniprot |
1611.10047 | Alain Destexhe | Claude Bedard, Jean-Marie Gomes, Thierry Bal and Alain Destexhe | A framework to reconcile frequency scaling measurements, from
intracellular recordings, local-field potentials, up to EEG and MEG signals | (in press) | Journal of Integrative Neuroscience 16: 3-18, 2017 | 10.3233/JIN-160001 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this viewpoint article, we discuss the electric properties of the medium
around neurons, which are important to correctly interpret extracellular
potentials or electric field effects in neural tissue. We focus on how these
electric properties shape the frequency scaling of brain signals at different
scales, such as intracellular recordings, the local field potential (LFP), the
electroencephalogram (EEG) or the magnetoencephalogram (MEG). These signals
display frequency-scaling properties which are not consistent with resistive
media. The medium appears to exert a frequency filtering scaling as
$1/\sqrt{f}$, which is the typical frequency scaling of ionic diffusion. Such a
scaling was also found recently by impedance measurements in physiological
conditions. Ionic diffusion appears to be the only possible explanation to
reconcile these measurements and the frequency-scaling properties found in
different brain signals. However, other measurements suggest that the
extracellular medium is essentially resistive. To resolve this discrepancy, we
show new evidence that metal-electrode measurements can be perturbed by shunt
currents going through the surface of the brain. Such a shunt may explain the
contradictory measurements, and together with ionic diffusion, provides a
framework where all observations can be reconciled. Finally, we propose a
method to perform measurements avoiding shunting effects, thus enabling to test
the predictions of this framework.
| [
{
"created": "Wed, 30 Nov 2016 08:28:10 GMT",
"version": "v1"
}
] | 2017-03-02 | [
[
"Bedard",
"Claude",
""
],
[
"Gomes",
"Jean-Marie",
""
],
[
"Bal",
"Thierry",
""
],
[
"Destexhe",
"Alain",
""
]
] | In this viewpoint article, we discuss the electric properties of the medium around neurons, which are important to correctly interpret extracellular potentials or electric field effects in neural tissue. We focus on how these electric properties shape the frequency scaling of brain signals at different scales, such as intracellular recordings, the local field potential (LFP), the electroencephalogram (EEG) or the magnetoencephalogram (MEG). These signals display frequency-scaling properties which are not consistent with resistive media. The medium appears to exert a frequency filtering scaling as $1/\sqrt{f}$, which is the typical frequency scaling of ionic diffusion. Such a scaling was also found recently by impedance measurements in physiological conditions. Ionic diffusion appears to be the only possible explanation to reconcile these measurements and the frequency-scaling properties found in different brain signals. However, other measurements suggest that the extracellular medium is essentially resistive. To resolve this discrepancy, we show new evidence that metal-electrode measurements can be perturbed by shunt currents going through the surface of the brain. Such a shunt may explain the contradictory measurements, and together with ionic diffusion, provides a framework where all observations can be reconciled. Finally, we propose a method to perform measurements avoiding shunting effects, thus enabling to test the predictions of this framework. |
2301.04061 | Cooper Harshbarger | Cooper Lars Harshbarger, Alen Pavlic, Davide Cesare Bernardoni, Amelie
Viol, Jess Gerrit Snedeker, J\"urg Dual, Unai Silv\'an | Measuring and simulating the biophysical basis of the acoustic contrast
factor of biological cells | null | null | null | null | q-bio.CB physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | The acoustic contrast factor (ACF) is calculated from the relative density
and compressibility differences between a fluid and an object in the fluid. To
name but one application, this acoustic contrast can be exploited using
acoustophoretic systems to isolate cancer cells from a liquid biopsy, such as a
blood sample. Knowing the ACF of a cancer cell represents a crucial step in the
design of acoustophoretic systems for this purpose, potentially allowing the
isolation of circulating cancer cells without labels or contact. For biological
cells the static compressibility is different from the high frequency
counterpart relevant for the ACF. In this study, we started by characterizing
the ACF of low vs. high metastatic cell lines with known associated differences
in phenotypic static E-modulus. The change in the static E-modulus, however,
was not reflected in a change of the ACF, prompting a more in depth analysis of
the influences on the ACF. We demonstrate that biological cells whose static
E-modulus is increased through formaldehyde fixation have an increased ACF.
Conversely, biological cells whose static E-modulus is decreased by treatment
with the actin polymerization inhibitor cytochalasin D have a decreased ACF. Complementing
these mechanical tests, a numerical COMSOL model was implemented and used to
parametrically explore the effects of cell density, cell density ratios,
dynamic compressibility and therefore the dynamic bulk modulus. Collectively
the combined laboratory and numerical experiments reveal that a change in the
static E-modulus alone might, but does not automatically lead to a change of
the dynamic ACF for biological cells. This highlights the need for a
multiparametric view of the biophysical basis of the cellular ACF, as well as
the challenges in harnessing acoustophoretic systems to isolate circulating
cells based on their mechanical properties alone.
| [
{
"created": "Tue, 10 Jan 2023 16:27:43 GMT",
"version": "v1"
}
] | 2023-01-11 | [
[
"Harshbarger",
"Cooper Lars",
""
],
[
"Pavlic",
"Alen",
""
],
[
"Bernardoni",
"Davide Cesare",
""
],
[
"Viol",
"Amelie",
""
],
[
"Snedeker",
"Jess Gerrit",
""
],
[
"Dual",
"Jürg",
""
],
[
"Silván",
"Unai",
""
]
] | The acoustic contrast factor (ACF) is calculated from the relative density and compressibility differences between a fluid and an object in the fluid. To name but one application, this acoustic contrast can be exploited using acoustophoretic systems to isolate cancer cells from a liquid biopsy, such as a blood sample. Knowing the ACF of a cancer cell represents a crucial step in the design of acoustophoretic systems for this purpose, potentially allowing the isolation of circulating cancer cells without labels or contact. For biological cells the static compressibility is different from the high frequency counterpart relevant for the ACF. In this study, we started by characterizing the ACF of low vs. high metastatic cell lines with known associated differences in phenotypic static E-modulus. The change in the static E-modulus, however, was not reflected in a change of the ACF, prompting a more in depth analysis of the influences on the ACF. We demonstrate that biological cells whose static E-modulus is increased through formaldehyde fixation have an increased ACF. Conversely, biological cells whose static E-modulus is decreased by treatment with the actin polymerization inhibitor cytochalasin D have a decreased ACF. Complementing these mechanical tests, a numerical COMSOL model was implemented and used to parametrically explore the effects of cell density, cell density ratios, dynamic compressibility and therefore the dynamic bulk modulus. Collectively the combined laboratory and numerical experiments reveal that a change in the static E-modulus alone might, but does not automatically lead to a change of the dynamic ACF for biological cells. This highlights the need for a multiparametric view of the biophysical basis of the cellular ACF, as well as the challenges in harnessing acoustophoretic systems to isolate circulating cells based on their mechanical properties alone. |
1503.01919 | S{\o}ren S{\o}nderby | S{\o}ren Kaae S{\o}nderby, Casper Kaae S{\o}nderby, Henrik Nielsen,
Ole Winther | Convolutional LSTM Networks for Subcellular Localization of Proteins | null | Algorithms for Computational Biology 9199 (2015) 68 | 10.1007/978-3-319-21233-3_6 | null | q-bio.QM cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Machine learning is widely used to analyze biological sequence data.
Non-sequential models such as SVMs or feed-forward neural networks are often
used although they have no natural way of handling sequences of varying length.
Recurrent neural networks such as the long short term memory (LSTM) model on
the other hand are designed to handle sequences. In this study we demonstrate
that LSTM networks predict the subcellular location of proteins given only the
protein sequence with high accuracy (0.902) outperforming current state of the
art algorithms. We further improve the performance by introducing convolutional
filters and experiment with an attention mechanism which lets the LSTM focus on
specific parts of the protein. Lastly we introduce new visualizations of both
the convolutional filters and the attention mechanisms and show how they can be
used to extract biologically relevant knowledge from the LSTM networks.
| [
{
"created": "Fri, 6 Mar 2015 11:21:26 GMT",
"version": "v1"
}
] | 2016-03-14 | [
[
"Sønderby",
"Søren Kaae",
""
],
[
"Sønderby",
"Casper Kaae",
""
],
[
"Nielsen",
"Henrik",
""
],
[
"Winther",
"Ole",
""
]
] | Machine learning is widely used to analyze biological sequence data. Non-sequential models such as SVMs or feed-forward neural networks are often used although they have no natural way of handling sequences of varying length. Recurrent neural networks such as the long short term memory (LSTM) model on the other hand are designed to handle sequences. In this study we demonstrate that LSTM networks predict the subcellular location of proteins given only the protein sequence with high accuracy (0.902) outperforming current state of the art algorithms. We further improve the performance by introducing convolutional filters and experiment with an attention mechanism which lets the LSTM focus on specific parts of the protein. Lastly we introduce new visualizations of both the convolutional filters and the attention mechanisms and show how they can be used to extract biologically relevant knowledge from the LSTM networks. |
1910.01258 | Ellen Kuhl | Mark Alber, Adrian Buganza Tepole, William Cannon, Suvranu De,
Salvador Dura-Bernal, Krishna Garikipati, George Karniadakis, William W.
Lytton, Paris Perdikaris, Linda Petzold, Ellen Kuhl | Integrating Machine Learning and Multiscale Modeling: Perspectives,
Challenges, and Opportunities in the Biological, Biomedical, and Behavioral
Sciences | null | npj Digital Medicine 2 (2019) 115 | 10.1038/s41746-019-0193-y | null | q-bio.QM physics.bio-ph physics.med-ph | http://creativecommons.org/licenses/by/4.0/ | Fueled by breakthrough technology developments, the biological, biomedical,
and behavioral sciences are now collecting more data than ever before. There is
a critical need for time- and cost-efficient strategies to analyze and
interpret these data to advance human health. The recent rise of machine
learning as a powerful technique to integrate multimodality, multifidelity
data, and reveal correlations between intertwined phenomena presents a special
opportunity in this regard. However, classical machine learning techniques
often ignore the fundamental laws of physics and result in ill-posed problems
or non-physical solutions. Multiscale modeling is a successful strategy to
integrate multiscale, multiphysics data and uncover mechanisms that explain the
emergence of function. However, multiscale modeling alone often fails to
efficiently combine large data sets from different sources and different levels
of resolution. We show how machine learning and multiscale modeling can
complement each other to create robust predictive models that integrate the
underlying physics to manage ill-posed problems and explore massive design
spaces. We critically review the current literature, highlight applications and
opportunities, address open questions, and discuss potential challenges and
limitations in four overarching topical areas: ordinary differential equations,
partial differential equations, data-driven approaches, and theory-driven
approaches. Towards these goals, we leverage expertise in applied mathematics,
computer science, computational biology, biophysics, biomechanics, engineering
mechanics, experimentation, and medicine. Our multidisciplinary perspective
suggests that integrating machine learning and multiscale modeling can provide
new insights into disease mechanisms, help identify new targets and treatment
strategies, and inform decision making for the benefit of human health.
| [
{
"created": "Thu, 3 Oct 2019 00:08:14 GMT",
"version": "v1"
},
{
"created": "Thu, 24 Oct 2019 12:15:57 GMT",
"version": "v2"
}
] | 2019-11-28 | [
[
"Alber",
"Mark",
""
],
[
"Tepole",
"Adrian Buganza",
""
],
[
"Cannon",
"William",
""
],
[
"De",
"Suvranu",
""
],
[
"Dura-Bernal",
"Salvador",
""
],
[
"Garikipati",
"Krishna",
""
],
[
"Karniadakis",
"George",
""
],
[
"Lytton",
"William W.",
""
],
[
"Perdikaris",
"Paris",
""
],
[
"Petzold",
"Linda",
""
],
[
"Kuhl",
"Ellen",
""
]
] | Fueled by breakthrough technology developments, the biological, biomedical, and behavioral sciences are now collecting more data than ever before. There is a critical need for time- and cost-efficient strategies to analyze and interpret these data to advance human health. The recent rise of machine learning as a powerful technique to integrate multimodality, multifidelity data, and reveal correlations between intertwined phenomena presents a special opportunity in this regard. However, classical machine learning techniques often ignore the fundamental laws of physics and result in ill-posed problems or non-physical solutions. Multiscale modeling is a successful strategy to integrate multiscale, multiphysics data and uncover mechanisms that explain the emergence of function. However, multiscale modeling alone often fails to efficiently combine large data sets from different sources and different levels of resolution. We show how machine learning and multiscale modeling can complement each other to create robust predictive models that integrate the underlying physics to manage ill-posed problems and explore massive design spaces. We critically review the current literature, highlight applications and opportunities, address open questions, and discuss potential challenges and limitations in four overarching topical areas: ordinary differential equations, partial differential equations, data-driven approaches, and theory-driven approaches. Towards these goals, we leverage expertise in applied mathematics, computer science, computational biology, biophysics, biomechanics, engineering mechanics, experimentation, and medicine. Our multidisciplinary perspective suggests that integrating machine learning and multiscale modeling can provide new insights into disease mechanisms, help identify new targets and treatment strategies, and inform decision making for the benefit of human health. |
2107.05740 | Yury Garcia | Yury E. Garc\'ia, Luis A. Barboza, Fabio Sanchez, Paola V\'asquez, and
Juan G. Calvo | Wavelet Analysis of Dengue Incidence and its Correlation with Weather
and Vegetation Variables in Costa Rica | 46 pages, 6 Figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Dengue represents a serious public health problem in tropical and subtropical
regions worldwide. The number of dengue cases and its geographical expansion
has increased in recent decades, driven mostly by social and
environmental factors. In Costa Rica, it has been endemic since it was first
introduced in 1993. In this article, wavelet analyses (wavelet power spectrum
and wavelet coherence) were performed to detect and quantify dengue periodicity
and describe patterns of synchrony between dengue incidence and climatic and
environmental factors: Normalized Difference Water Index, Enhanced Vegetation
Index, Normalized Difference Vegetation Index, Tropical North Atlantic indices,
Land Surface Temperature, and El Ni\~no Southern Oscillation indices in 32
different cantons, using dengue surveillance from 2000 to 2019. Results showed
that the dengue dominant cycles are in periods of 1, 2, and 3 years. The
wavelet coherence analysis showed that the vegetation indices are correlated
with dengue incidence in places located in the central and Northern Pacific of
the country in the period of 1 year. Climatic variables such as El Ni\~no 3,
3.4, 4, showed a strong correlation with dengue incidence in the period of 3
years and the Tropical North Atlantic is correlated with dengue incidence in
the period of 1 year. Land Surface Temperature showed a strong correlation with
dengue time series in the 32 cantons.
| [
{
"created": "Fri, 2 Jul 2021 03:40:04 GMT",
"version": "v1"
}
] | 2021-07-14 | [
[
"García",
"Yury E.",
""
],
[
"Barboza",
"Luis A.",
""
],
[
"Sanchez",
"Fabio",
""
],
[
"Vásquez",
"Paola",
""
],
[
"Calvo",
"Juan G.",
""
]
] | Dengue represents a serious public health problem in tropical and subtropical regions worldwide. The number of dengue cases and its geographical expansion has increased in recent decades, driven mostly by social and environmental factors. In Costa Rica, it has been endemic since it was first introduced in 1993. In this article, wavelet analyses (wavelet power spectrum and wavelet coherence) were performed to detect and quantify dengue periodicity and describe patterns of synchrony between dengue incidence and climatic and environmental factors: Normalized Difference Water Index, Enhanced Vegetation Index, Normalized Difference Vegetation Index, Tropical North Atlantic indices, Land Surface Temperature, and El Ni\~no Southern Oscillation indices in 32 different cantons, using dengue surveillance from 2000 to 2019. Results showed that the dengue dominant cycles are in periods of 1, 2, and 3 years. The wavelet coherence analysis showed that the vegetation indices are correlated with dengue incidence in places located in the central and Northern Pacific of the country in the period of 1 year. Climatic variables such as El Ni\~no 3, 3.4, 4, showed a strong correlation with dengue incidence in the period of 3 years and the Tropical North Atlantic is correlated with dengue incidence in the period of 1 year. Land Surface Temperature showed a strong correlation with dengue time series in the 32 cantons. |
1406.0213 | Enrico Carlon | R. Frederickx, T. in't Veld, E. Carlon | Anomalous dynamics of DNA hairpin folding | 5 pages, 6 figures; watch video abstract at
http://youtu.be/QLotNZaz76c | Phys. Rev. Lett. 112, 198102 (2014) | 10.1103/PhysRevLett.112.198102 | null | q-bio.BM cond-mat.soft cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | By means of computer simulations of a coarse-grained DNA model we show that
the DNA hairpin zippering dynamics is anomalous, i.e. the characteristic time T
scales non-linearly with N, the hairpin length: T ~ N^a with a>1. This is in
sharp contrast with the prediction of the zipper model for which T ~ N. We show
that the anomalous dynamics originates from an increase in the friction during
zippering due to the tension built in the closing strands. From a simple
polymer model we get a = 1+ nu = 1.59 with nu the Flory exponent, a result
which is in agreement with the simulations. We discuss transition path times
data where such effects should be detected.
| [
{
"created": "Sun, 1 Jun 2014 22:08:04 GMT",
"version": "v1"
}
] | 2014-06-03 | [
[
"Frederickx",
"R.",
""
],
[
"Veld",
"T. in't",
""
],
[
"Carlon",
"E.",
""
]
] | By means of computer simulations of a coarse-grained DNA model we show that the DNA hairpin zippering dynamics is anomalous, i.e. the characteristic time T scales non-linearly with N, the hairpin length: T ~ N^a with a>1. This is in sharp contrast with the prediction of the zipper model for which T ~ N. We show that the anomalous dynamics originates from an increase in the friction during zippering due to the tension built in the closing strands. From a simple polymer model we get a = 1+ nu = 1.59 with nu the Flory exponent, a result which is in agreement with the simulations. We discuss transition path times data where such effects should be detected. |
2103.00690 | Caitlin Kuempel | Caitlin D. Kuempel, Halley E. Froehlich, and Benjamin S. Halpern | An informed thought experiment exploring the potential for a paradigm
shift in aquatic food production | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The Neolithic Revolution began c. 10000 years ago and is characterised by the
ultimate, near complete transition from hunting and gathering to agricultural
food production on land. The Neolithic Revolution is thought to have been
catalysed by a combination of local population pressure, cultural diffusion,
property rights and climate change. We undertake a thought experiment that
examines trends in these key hypothesised catalysts and patterns of today to
explore whether society could be on a path towards another paradigm shift in
food production: away from hunting of wild fish towards a transition to mostly
fish farming. We find similar environmental and cultural pressures have driven
the rapid rise of aquaculture, during a period that has now been coined the
Blue Revolution, providing impetus for such a transition in coming decades to
centuries. We also highlight the interacting and often mutually reinforcing
impacts of 1) technological and scientific advancement, 2) environmental
awareness and collective action and 3) globalisation and trade influencing the
trajectory and momentum of the Blue Revolution. We present two qualitative
narratives that broadly fall within two future trajectories: 1) a ubiquitous
aquaculture transition and 2) commercial aquaculture and fisheries coexistence.
This scenarios approach aims to encourage logical, forward thinking, and
innovative solutions to complex systems dynamics. Scenario-based thought
experiments are useful to explore large scale questions, increase the
accessibility to a wider readership and ideally catalyse discussion around
proactive governance mechanisms. We argue the future is not fixed and society
now has greater foresight and capacity to choose the workable balance between
fisheries and aquaculture that supports economic, environmental, cultural and
social objectives through combined planning, policies and management.
| [
{
"created": "Mon, 1 Mar 2021 01:55:23 GMT",
"version": "v1"
}
] | 2021-03-02 | [
[
"Kuempel",
"Caitlin D.",
""
],
[
"Froehlich",
"Halley E.",
""
],
[
"Halpern",
"Benjamin S.",
""
]
] | The Neolithic Revolution began c. 10000 years ago and is characterised by the ultimate, near complete transition from hunting and gathering to agricultural food production on land. The Neolithic Revolution is thought to have been catalysed by a combination of local population pressure, cultural diffusion, property rights and climate change. We undertake a thought experiment that examines trends in these key hypothesised catalysts and patterns of today to explore whether society could be on a path towards another paradigm shift in food production: away from hunting of wild fish towards a transition to mostly fish farming. We find similar environmental and cultural pressures have driven the rapid rise of aquaculture, during a period that has now been coined the Blue Revolution, providing impetus for such a transition in coming decades to centuries. We also highlight the interacting and often mutually reinforcing impacts of 1) technological and scientific advancement, 2) environmental awareness and collective action and 3) globalisation and trade influencing the trajectory and momentum of the Blue Revolution. We present two qualitative narratives that broadly fall within two future trajectories: 1) a ubiquitous aquaculture transition and 2) commercial aquaculture and fisheries coexistence. This scenarios approach aims to encourage logical, forward thinking, and innovative solutions to complex systems dynamics. Scenario-based thought experiments are useful to explore large scale questions, increase the accessibility to a wider readership and ideally catalyse discussion around proactive governance mechanisms. We argue the future is not fixed and society now has greater foresight and capacity to choose the workable balance between fisheries and aquaculture that supports economic, environmental, cultural and social objectives through combined planning, policies and management. |
1503.05628 | Zhongming Wang | Zijian Dong and Jingzhuo Wang and Zhongming Wang | Accurate Estimation of Quantitative Trait Locus Effects with Epistatic
by Improved Variational Linear Regression | null | null | null | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bayesian approaches to variable selection have been widely used for
quantitative trait locus (QTL) mapping. The Markov chain Monte Carlo (MCMC)
algorithms for that aim are often difficult to implement for
high-dimensional variable selection problems, such as the ones arising in
epistatic analysis. Variational approximation is an alternative to MCMC, and
variational linear regression (VLR) is an effective solution for the variable
selection problems, but lacks accuracy in some QTL mapping problems where there
are many more variables than samples. In this paper, we propose an effective
method aimed at improving the accuracy of VLR in such cases by
dynamically reducing components (variables or markers) with known effects (zero
or fixed). We show that the proposed method can greatly improve the accuracy of
VLR with little increase in computational cost. The method is compared with
several other variational methods used for QTL mapping, and simulation results
show that its performance is higher than those methods when applied in
high-dimensional cases.
| [
{
"created": "Mon, 12 Jan 2015 02:16:01 GMT",
"version": "v1"
}
] | 2015-03-20 | [
[
"Dong",
"Zijian",
""
],
[
"Wang",
"Jingzhuo",
""
],
[
"Wang",
"Zhongming",
""
]
] | Bayesian approaches to variable selection have been widely used for quantitative trait locus (QTL) mapping. The Markov chain Monte Carlo (MCMC) algorithms for that aim are often difficult to implement for high-dimensional variable selection problems, such as the ones arising in epistatic analysis. Variational approximation is an alternative to MCMC, and variational linear regression (VLR) is an effective solution for the variable selection problems, but lacks accuracy in some QTL mapping problems where there are many more variables than samples. In this paper, we propose an effective method aimed at improving the accuracy of VLR in such cases by dynamically reducing components (variables or markers) with known effects (zero or fixed). We show that the proposed method can greatly improve the accuracy of VLR with little increase in computational cost. The method is compared with several other variational methods used for QTL mapping, and simulation results show that its performance is higher than those methods when applied in high-dimensional cases. |
1210.5234 | Lu Xie | Lu Xie | Avoid Internal Loops in Steady State Flux Space Sampling | arXiv admin note: substantial text overlap with arXiv:0711.1193 | null | null | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a widely used method in metabolic network studies, Monte-Carlo sampling in
the steady state flux space is known for its flexibility and convenience of
carrying out different purposes, simply by alternating constraints or objective
functions, or appending post processes. Recently the concept of a non-linear
constraint based on the second thermodynamic law, known as "Loop Law", is
challenging current sampling algorithms, which will inevitably give rise to
internal loops. A generalized method is proposed here to eliminate the
possibility of internal loops appearing during the sampling process. Based
on the Artificial Centered Hit and Run (ACHR) method, each step of the new sampling
process will avoid entering "loop-forming" subspaces. This method has been
applied on the metabolic network of Helicobacter pylori with three different
objective functions: uniform sampling, optimizing biomass synthesis, optimizing
biomass synthesis efficiency over resources ingested. Comparison between
results from the new method and conventional ACHR method shows effective
elimination of loop fluxes without affecting non-loop fluxes.
| [
{
"created": "Thu, 18 Oct 2012 19:53:22 GMT",
"version": "v1"
}
] | 2012-10-19 | [
[
"Xie",
"Lu",
""
]
] | As a widely used method in metabolic network studies, Monte-Carlo sampling in the steady state flux space is known for its flexibility and convenience of carrying out different purposes, simply by alternating constraints or objective functions, or appending post processes. Recently the concept of a non-linear constraint based on the second thermodynamic law, known as "Loop Law", is challenging current sampling algorithms, which will inevitably give rise to internal loops. A generalized method is proposed here to eliminate the possibility of internal loops appearing during the sampling process. Based on the Artificial Centered Hit and Run (ACHR) method, each step of the new sampling process will avoid entering "loop-forming" subspaces. This method has been applied on the metabolic network of Helicobacter pylori with three different objective functions: uniform sampling, optimizing biomass synthesis, optimizing biomass synthesis efficiency over resources ingested. Comparison between results from the new method and conventional ACHR method shows effective elimination of loop fluxes without affecting non-loop fluxes. |
2312.07172 | Suman Kumar Banik | Mintu Nandi, Sudip Chattopadhyay, Somshubhro Bandyopadhyay, and Suman
K Banik | Channel assisted noise propagation in a two-step cascade | Revised version. 12 pages with 4 figures | null | null | null | q-bio.MN physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Signal propagation in biochemical networks is characterized by the inherent
randomness in gene expression and fluctuations of the environmental components,
commonly known as intrinsic and extrinsic noise, respectively. We present a
theoretical framework for noise propagation in a generic two-step cascade
(S$\rightarrow$X$\rightarrow$Y) regarding intrinsic and extrinsic noise. We
identify different channels of noise transmission that regulate the individual
and the overall noise properties of each component. Our analysis shows that the
intrinsic noise of S alleviates the general noise and information transmission
capacity along the cascade. On the other hand, the intrinsic noise of X and Y
acts as a bottleneck of information transmission. We also show a hierarchical
relationship among the intrinsic noise levels of S, X, and Y, with S exhibiting
the highest level of intrinsic noise, followed by X and then Y. This hierarchy
is preserved within the two-step cascade, facilitating the highest information
transmission from S to Y via X.
| [
{
"created": "Tue, 12 Dec 2023 11:16:08 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Aug 2024 06:18:28 GMT",
"version": "v2"
}
] | 2024-08-09 | [
[
"Nandi",
"Mintu",
""
],
[
"Chattopadhyay",
"Sudip",
""
],
[
"Bandyopadhyay",
"Somshubhro",
""
],
[
"Banik",
"Suman K",
""
]
] | Signal propagation in biochemical networks is characterized by the inherent randomness in gene expression and fluctuations of the environmental components, commonly known as intrinsic and extrinsic noise, respectively. We present a theoretical framework for noise propagation in a generic two-step cascade (S$\rightarrow$X$\rightarrow$Y) regarding intrinsic and extrinsic noise. We identify different channels of noise transmission that regulate the individual and the overall noise properties of each component. Our analysis shows that the intrinsic noise of S alleviates the general noise and information transmission capacity along the cascade. On the other hand, the intrinsic noise of X and Y acts as a bottleneck of information transmission. We also show a hierarchical relationship among the intrinsic noise levels of S, X, and Y, with S exhibiting the highest level of intrinsic noise, followed by X and then Y. This hierarchy is preserved within the two-step cascade, facilitating the highest information transmission from S to Y via X. |
0704.0648 | Kaushik Majumdar | Kaushik Majumdar | Behavioral response to strong aversive stimuli: A neurodynamical model | Submitted to journal | null | null | null | q-bio.NC | null | In this paper a theoretical model of functioning of a neural circuit during a
behavioral response has been proposed. A neural circuit can be thought of as a
directed multigraph in which each vertex is a neuron and each edge is a synapse.
It has been assumed in this paper that the behavior of such circuits is
manifested through the collective behavior of neurons belonging to that
circuit. Behavioral information of each neuron is contained in the coefficients
of the fast Fourier transform (FFT) over the output spike train. Those
coefficients form a vector in a multidimensional vector space. Behavioral
dynamics of a neuronal network in response to strong aversive stimuli has been
studied in a vector space in which a suitable pseudometric has been defined.
The neurodynamical model of network behavior has been formulated in terms of
existing memory, synaptic plasticity and feelings. The model has an analogy in
classical electrostatics, by which the notion of force and potential energy has
been introduced. Since the model takes input from each neuron in a network and
produces a behavior as the output, it would be extremely difficult or may even
be impossible to implement. But with the help of the model a possible
explanation for a hitherto unexplained neurological observation in the human brain
has been offered. The model is compatible with a recent model of sequential
behavioral dynamics. The model is based on electrophysiology, but its relevance
to hemodynamics has been outlined.
| [
{
"created": "Wed, 4 Apr 2007 20:04:02 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Majumdar",
"Kaushik",
""
]
] | In this paper a theoretical model of functioning of a neural circuit during a behavioral response has been proposed. A neural circuit can be thought of as a directed multigraph in which each vertex is a neuron and each edge is a synapse. It has been assumed in this paper that the behavior of such circuits is manifested through the collective behavior of neurons belonging to that circuit. Behavioral information of each neuron is contained in the coefficients of the fast Fourier transform (FFT) over the output spike train. Those coefficients form a vector in a multidimensional vector space. Behavioral dynamics of a neuronal network in response to strong aversive stimuli has been studied in a vector space in which a suitable pseudometric has been defined. The neurodynamical model of network behavior has been formulated in terms of existing memory, synaptic plasticity and feelings. The model has an analogy in classical electrostatics, by which the notion of force and potential energy has been introduced. Since the model takes input from each neuron in a network and produces a behavior as the output, it would be extremely difficult or may even be impossible to implement. But with the help of the model a possible explanation for a hitherto unexplained neurological observation in the human brain has been offered. The model is compatible with a recent model of sequential behavioral dynamics. The model is based on electrophysiology, but its relevance to hemodynamics has been outlined. |
2208.10934 | Qiyao Peng | Qiyao Peng, Fred J Vermolen, Daphne Weihs | Physical Confinement and Cell Proximity Increase Cell Migration Rates
and Invasiveness: A Mathematical Model of Cancer Cell Invasion through
Flexible Channels | null | null | 10.1016/j.jmbbm.2023.105843 | null | q-bio.CB cs.NA math.NA | http://creativecommons.org/licenses/by/4.0/ | Cancer cell migration between different body parts is the driving force
behind cancer metastasis, which is the main cause of mortality of patients.
Migration of cancer cells often proceeds by penetration through narrow cavities
in locally stiff, yet flexible tissues. In our previous work, we developed a
model for cell geometry evolution during invasion, which we extend here to
investigate whether leader and follower (cancer) cells that only interact
mechanically can benefit from sequential transmigration through narrow
micro-channels and cavities.
We consider two cases of cells sequentially migrating through a flexible
channel: leader and follower cells being closely adjacent or distant. Using
Wilcoxon's signed-rank test on the data collected from Monte Carlo simulations,
we conclude that the modelled transmigration speed for the follower cell is
significantly larger than for the leader cell when cells are distant, i.e.
follower cells transmigrate after the leader has completed the crossing.
Furthermore, it appears that there exists an optimum with respect to the width
of the channel such that the cell moves fastest. On the other hand, in the case of
closely adjacent cells, effectively performing collective migration, the leader
cell moves $12\%$ faster since the follower cell pushes it. This work shows
that mechanical interactions between cells can increase the net transmigration
speed of cancer cells, resulting in increased invasiveness. In other words,
interaction between cancer cells can accelerate metastatic invasion.
| [
{
"created": "Tue, 23 Aug 2022 13:04:17 GMT",
"version": "v1"
}
] | 2023-05-02 | [
[
"Peng",
"Qiyao",
""
],
[
"Vermolen",
"Fred J",
""
],
[
"Weihs",
"Daphne",
""
]
] | Cancer cell migration between different body parts is the driving force behind cancer metastasis, which is the main cause of mortality of patients. Migration of cancer cells often proceeds by penetration through narrow cavities in locally stiff, yet flexible tissues. In our previous work, we developed a model for cell geometry evolution during invasion, which we extend here to investigate whether leader and follower (cancer) cells that only interact mechanically can benefit from sequential transmigration through narrow micro-channels and cavities. We consider two cases of cells sequentially migrating through a flexible channel: leader and follower cells being closely adjacent or distant. Using Wilcoxon's signed-rank test on the data collected from Monte Carlo simulations, we conclude that the modelled transmigration speed for the follower cell is significantly larger than for the leader cell when cells are distant, i.e. follower cells transmigrate after the leader has completed the crossing. Furthermore, it appears that there exists an optimum with respect to the width of the channel such that the cell moves fastest. On the other hand, in the case of closely adjacent cells, effectively performing collective migration, the leader cell moves $12\%$ faster since the follower cell pushes it. This work shows that mechanical interactions between cells can increase the net transmigration speed of cancer cells, resulting in increased invasiveness. In other words, interaction between cancer cells can accelerate metastatic invasion. |
2109.07933 | Christoph Adami | Christoph Adami and Nitash C G (Michigan State University) | Emergence of functional information from multivariate correlations | 20 pages, 5 figures | Phil. Trans. Roy. Society A 380 (2022) 20210250 | 10.1098/rsta.2021.0250 | null | q-bio.BM cs.IT math.IT physics.bio-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The information content of symbolic sequences (such as nucleic- or amino acid
sequences, but also neuronal firings or strings of letters) can be calculated
from an ensemble of such sequences, but because information cannot be assigned
to single sequences, we cannot correlate information to other observables
attached to the sequence. Here we show that an information score obtained from
multivariate (multiple-variable) correlations within sequences of a "training"
ensemble can be used to predict observables of out-of-sample sequences with an
accuracy that scales with the complexity of correlations, showing that
functional information emerges from a hierarchy of multi-variable correlations.
| [
{
"created": "Thu, 16 Sep 2021 12:29:39 GMT",
"version": "v1"
}
] | 2023-02-24 | [
[
"Adami",
"Christoph",
"",
"Michigan State University"
],
[
"G",
"Nitash C",
"",
"Michigan State University"
]
] | The information content of symbolic sequences (such as nucleic- or amino acid sequences, but also neuronal firings or strings of letters) can be calculated from an ensemble of such sequences, but because information cannot be assigned to single sequences, we cannot correlate information to other observables attached to the sequence. Here we show that an information score obtained from multivariate (multiple-variable) correlations within sequences of a "training" ensemble can be used to predict observables of out-of-sample sequences with an accuracy that scales with the complexity of correlations, showing that functional information emerges from a hierarchy of multi-variable correlations. |
1409.1892 | Ting Zhao | Ting Zhao, Stephen M Plaza | Automatic Neuron Type Identification by Neurite Localization in the
Drosophila Medulla | null | null | null | null | q-bio.NC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mapping the connectivity of neurons in the brain (i.e., connectomics) is a
challenging problem due to both the number of connections in even the smallest
organisms and the nanometer resolution required to resolve them. Because of
this, previous connectomes contain only hundreds of neurons, such as in the
C.elegans connectome. Recent technological advances will unlock the mysteries
of increasingly large connectomes (or partial connectomes). However, the value
of these maps is limited by our ability to reason with this data and understand
any underlying motifs. To aid connectome analysis, we introduce algorithms to
cluster similarly-shaped neurons, where 3D neuronal shapes are represented as
skeletons. In particular, we propose a novel location-sensitive clustering
algorithm. We show clustering results on neurons reconstructed from the
Drosophila medulla that show high-accuracy.
| [
{
"created": "Fri, 5 Sep 2014 18:03:03 GMT",
"version": "v1"
}
] | 2014-09-08 | [
[
"Zhao",
"Ting",
""
],
[
"Plaza",
"Stephen M",
""
]
] | Mapping the connectivity of neurons in the brain (i.e., connectomics) is a challenging problem due to both the number of connections in even the smallest organisms and the nanometer resolution required to resolve them. Because of this, previous connectomes contain only hundreds of neurons, such as in the C.elegans connectome. Recent technological advances will unlock the mysteries of increasingly large connectomes (or partial connectomes). However, the value of these maps is limited by our ability to reason with this data and understand any underlying motifs. To aid connectome analysis, we introduce algorithms to cluster similarly-shaped neurons, where 3D neuronal shapes are represented as skeletons. In particular, we propose a novel location-sensitive clustering algorithm. We show clustering results on neurons reconstructed from the Drosophila medulla that show high-accuracy. |
2302.07005 | Helena Jambor | Christopher Schmied (1), Michael Nelson, Sergiy Avilov, Gert-Jan
Bakker, Cristina Bertocchi, Johanna Bischof, Ulrike Boehm, Jan Brocher,
Mariana Carvalho, Catalin Chiritescu, Jana Christopher, Beth Cimini, Eduardo
Conde-Sousa, Michael Ebner, Rupert Ecker, Kevin Eliceiri, Julia
Fernandez-Rodriguez, Nathalie Gaudreault, Laurent Gelman, David Grunwald,
Tingting Gu, Nadia Halidi, Mathias Hammer, Matthew Hartley, Marie Held,
Florian Jug, Varun Kapoor, Ayse Aslihan Koksoy, Judith Lacoste, Sylvia Le
D\'ev\'edec, Sylvie Le Guyader, Penghuan Liu, Gabriel Martins, Aastha Mathur,
Kota Miura, Paula Montero Llopis, Roland Nitschke, Alison North, Adam
Parslow, Alex Payne-Dwyer, Laure Plantard, Ali Rizwan, Britta Schroth-Diez,
Lucas Sch\"utz, Ryan T. Scott, Arne Seitz, Olaf Selchow, Ved Sharma, Martin
Spitaler, Sathya Srinivasan, Caterina Strambio De Castillia, Douglas Taatjes,
Christian Tischer (2) and Helena Klara Jambor (3) ((1) Fondazione Human
Technopole, Milano, Italy, (2) Centre for Bioimage Analysis, EMBL,
Heidelberg, Germany (3) NCT-UCC, Medizinische Fakult\"at TU Dresden, Dresden,
Germany) | Community-developed checklists for publishing images and image analysis | 28 pages, 8 Figures, 3 Supplmentary Figures, Manuscript, Essential
recommendations for publication of microscopy image data | null | 10.1038/s41592-023-01987-9 | null | q-bio.OT | http://creativecommons.org/licenses/by-sa/4.0/ | Images document scientific discoveries and are prevalent in modern biomedical
research. Microscopy imaging in particular is currently undergoing rapid
technological advancements. However, for scientists wishing to publish the
obtained images and image analysis results, there are to date no unified
guidelines. Consequently, microscopy images and image data in publications may
be unclear or difficult to interpret. Here we present community-developed
checklists for preparing light microscopy images and image analysis for
publications. These checklists offer authors, readers, and publishers key
recommendations for image formatting and annotation, color selection, data
availability, and for reporting image analysis workflows. The goal of our
guidelines is to increase the clarity and reproducibility of image figures and
thereby heighten the quality of microscopy data in publications.
| [
{
"created": "Tue, 14 Feb 2023 12:25:12 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Sep 2023 08:05:07 GMT",
"version": "v2"
}
] | 2024-03-15 | [
[
"Schmied",
"Christopher",
""
],
[
"Nelson",
"Michael",
""
],
[
"Avilov",
"Sergiy",
""
],
[
"Bakker",
"Gert-Jan",
""
],
[
"Bertocchi",
"Cristina",
""
],
[
"Bischof",
"Johanna",
""
],
[
"Boehm",
"Ulrike",
""
],
[
"Brocher",
"Jan",
""
],
[
"Carvalho",
"Mariana",
""
],
[
"Chiritescu",
"Catalin",
""
],
[
"Christopher",
"Jana",
""
],
[
"Cimini",
"Beth",
""
],
[
"Conde-Sousa",
"Eduardo",
""
],
[
"Ebner",
"Michael",
""
],
[
"Ecker",
"Rupert",
""
],
[
"Eliceiri",
"Kevin",
""
],
[
"Fernandez-Rodriguez",
"Julia",
""
],
[
"Gaudreault",
"Nathalie",
""
],
[
"Gelman",
"Laurent",
""
],
[
"Grunwald",
"David",
""
],
[
"Gu",
"Tingting",
""
],
[
"Halidi",
"Nadia",
""
],
[
"Hammer",
"Mathias",
""
],
[
"Hartley",
"Matthew",
""
],
[
"Held",
"Marie",
""
],
[
"Jug",
"Florian",
""
],
[
"Kapoor",
"Varun",
""
],
[
"Koksoy",
"Ayse Aslihan",
""
],
[
"Lacoste",
"Judith",
""
],
[
"Dévédec",
"Sylvia Le",
""
],
[
"Guyader",
"Sylvie Le",
""
],
[
"Liu",
"Penghuan",
""
],
[
"Martins",
"Gabriel",
""
],
[
"Mathur",
"Aastha",
""
],
[
"Miura",
"Kota",
""
],
[
"Llopis",
"Paula Montero",
""
],
[
"Nitschke",
"Roland",
""
],
[
"North",
"Alison",
""
],
[
"Parslow",
"Adam",
""
],
[
"Payne-Dwyer",
"Alex",
""
],
[
"Plantard",
"Laure",
""
],
[
"Rizwan",
"Ali",
""
],
[
"Schroth-Diez",
"Britta",
""
],
[
"Schütz",
"Lucas",
""
],
[
"Scott",
"Ryan T.",
""
],
[
"Seitz",
"Arne",
""
],
[
"Selchow",
"Olaf",
""
],
[
"Sharma",
"Ved",
""
],
[
"Spitaler",
"Martin",
""
],
[
"Srinivasan",
"Sathya",
""
],
[
"De Castillia",
"Caterina Strambio",
""
],
[
"Taatjes",
"Douglas",
""
],
[
"Tischer",
"Christian",
""
],
[
"Jambor",
"Helena Klara",
""
]
] | Images document scientific discoveries and are prevalent in modern biomedical research. Microscopy imaging in particular is currently undergoing rapid technological advancements. However, for scientists wishing to publish the obtained images and image analyses results, there are to date no unified guidelines. Consequently, microscopy images and image data in publications may be unclear or difficult to interpret. Here we present community-developed checklists for preparing light microscopy images and image analysis for publications. These checklists offer authors, readers, and publishers key recommendations for image formatting and annotation, color selection, data availability, and for reporting image analysis workflows. The goal of our guidelines is to increase the clarity and reproducibility of image figures and thereby heighten the quality of microscopy data in publications.
2005.07268 | Francesca Bassi | Francesca Bassi (Department of Statistical Sciences, University of
Padova, Italy), Giuseppe Arbia (Department of Statistical Sciences, Catholic
University of the Sacred Heart, Milano, Italy), Pietro Demetrio Falorsi
(Italian National Statistical Institute) | Observed and estimated prevalence of Covid-19 in Italy: Is it possible
to estimate the total cases from medical swabs data? | 7 pages | null | null | null | q-bio.QM q-bio.PE | http://creativecommons.org/publicdomain/zero/1.0/ | During the current Covid-19 pandemic in Italy, official data are collected
with medical swabs following a pure convenience criterion which, at least in an
early phase, has privileged the exam of patients showing evident symptoms.
However, there is evidence of a very high proportion of asymptomatic patients
(e. g. Aguilar et al., 2020; Chugthai et al, 2020; Li, et al., 2020; Mizumoto
et al., 2020a, 2020b and Yelin et al., 2020). In this situation, in order to
estimate the real number of infected (and to estimate the lethality rate), it
should be necessary to run a properly designed sample survey through which it
would be possible to calculate the probability of inclusion and hence draw
sound probabilistic inference. Some researchers proposed estimates of the total
prevalence based on various approaches, including epidemiologic models, time
series and the analysis of data collected in countries that faced the epidemic
in earlier time (Brogi et al., 2020). In this paper, we propose to estimate the
prevalence of Covid-19 in Italy by reweighting the available official data
published by the Istituto Superiore di Sanit\`a so as to obtain a more
representative sample of the Italian population. Reweighting is a procedure
commonly used to artificially modify the sample composition so as to obtain a
distribution which is more similar to the population (Valliant et al., 2018).
In this paper, we will use post-stratification of the official data, in order
to derive the weights necessary for reweighting them using age and gender as
post-stratification variables thus obtaining more reliable estimation of
prevalence and lethality.
| [
{
"created": "Tue, 12 May 2020 20:26:31 GMT",
"version": "v1"
}
] | 2020-05-18 | [
[
"Bassi",
"Francesca",
"",
"Department of Statistical Sciences, University of\n Padova, Italy"
],
[
"Arbia",
"Giuseppe",
"",
"Department of Statistical Sciences, Catholic\n University of the Sacred Hearth, Milano, Italy"
],
[
"Falorsi",
"Pietro Demetrio",
"",
"Italian National Statistical Institute"
]
] | During the current Covid-19 pandemic in Italy, official data are collected with medical swabs following a pure convenience criterion which, at least in an early phase, has privileged the exam of patients showing evident symptoms. However, there is evidence of a very high proportion of asymptomatic patients (e. g. Aguilar et al., 2020; Chugthai et al, 2020; Li, et al., 2020; Mizumoto et al., 2020a, 2020b and Yelin et al., 2020). In this situation, in order to estimate the real number of infected (and to estimate the lethality rate), it should be necessary to run a properly designed sample survey through which it would be possible to calculate the probability of inclusion and hence draw sound probabilistic inference. Some researchers proposed estimates of the total prevalence based on various approaches, including epidemiologic models, time series and the analysis of data collected in countries that faced the epidemic in earlier time (Brogi et al., 2020). In this paper, we propose to estimate the prevalence of Covid-19 in Italy by reweighting the available official data published by the Istituto Superiore di Sanit\`a so as to obtain a more representative sample of the Italian population. Reweighting is a procedure commonly used to artificially modify the sample composition so as to obtain a distribution which is more similar to the population (Valliant et al., 2018). In this paper, we will use post-stratification of the official data, in order to derive the weights necessary for reweighting them using age and gender as post-stratification variables thus obtaining more reliable estimation of prevalence and lethality.
2103.10915 | Toni Giorgino | Federica Cossu, Luca Sorrentino, Elisa Fagnani, Mattia Zaffaroni,
Mario Milani, Toni Giorgino, and Eloise Mastrangelo | Computational and Experimental Characterization of NF023, A Candidate
Anticancer Compound Inhibiting cIAP2/TRAF2 Assembly | null | J. Chem. Inf. Model. 2020, 60, 10, 5036-5044 | 10.1021/acs.jcim.0c00518 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein-protein interactions are the basis of many important physiological
processes and are currently promising, yet difficult, targets for drug
discovery. In this context, inhibitor of apoptosis proteins (IAPs)-mediated
interactions are pivotal for cancer cell survival; the interaction of the BIR1
domain of cIAP2 with TRAF2 was shown to lead to the recruitment of cIAPs to the
TNF receptor, promoting the activation of the NF-\kappa B survival pathway. In
this work, using a combined in silico-in vitro approach, we identified a
drug-like molecule, NF023, able to disrupt cIAP2 interaction with TRAF2. We
demonstrated in vitro its ability to interfere with the assembly of the
cIAP2-BIR1/TRAF2 complex and performed a thorough characterization of the
compound's mode of action through 248 parallel unbiased molecular dynamics
simulations of 300 ns (totaling almost 75 {\mu}s of all-atom sampling), which
identified multiple binding modes to the BIR1 domain of cIAP2 via clustering
and ensemble docking. NF023 is, thus, a promising protein-protein interaction
disruptor, representing a starting point to develop modulators of NF-\kappa
B-mediated cell survival in cancer. This study represents a model procedure
that shows the use of large-scale molecular dynamics methods to typify
promiscuous interactors.
| [
{
"created": "Fri, 19 Mar 2021 17:21:32 GMT",
"version": "v1"
}
] | 2021-03-22 | [
[
"Cossu",
"Federica",
""
],
[
"Sorrentino",
"Luca",
""
],
[
"Fagnani",
"Elisa",
""
],
[
"Zaffaroni",
"Mattia",
""
],
[
"Milani",
"Mario",
""
],
[
"Giorgino",
"Toni",
""
],
[
"Mastrangelo",
"Eloise",
""
]
] | Protein-protein interactions are the basis of many important physiological processes and are currently promising, yet difficult, targets for drug discovery. In this context, inhibitor of apoptosis proteins (IAPs)-mediated interactions are pivotal for cancer cell survival; the interaction of the BIR1 domain of cIAP2 with TRAF2 was shown to lead to the recruitment of cIAPs to the TNF receptor, promoting the activation of the NF-\kappa B survival pathway. In this work, using a combined in silico-in vitro approach, we identified a drug-like molecule, NF023, able to disrupt cIAP2 interaction with TRAF2. We demonstrated in vitro its ability to interfere with the assembly of the cIAP2-BIR1/TRAF2 complex and performed a thorough characterization of the compound's mode of action through 248 parallel unbiased molecular dynamics simulations of 300 ns (totaling almost 75 {\mu}s of all-atom sampling), which identified multiple binding modes to the BIR1 domain of cIAP2 via clustering and ensemble docking. NF023 is, thus, a promising protein-protein interaction disruptor, representing a starting point to develop modulators of NF-\kappa B-mediated cell survival in cancer. This study represents a model procedure that shows the use of large-scale molecular dynamics methods to typify promiscuous interactors.
q-bio/0703060 | Eugene Shakhnovich | Eric Deeds, Orr Ashenberg, Jaline Gerardine, Eugene Shakhnovich | Robust protein-protein interactions in crowded cellular environments | null | null | 10.1073/pnas.0702766104 | null | q-bio.BM q-bio.MN | null | The capacity of proteins to interact specifically with one another underlies
our conceptual understanding of how living systems function. Systems-level
study of specificity in protein-protein interactions is complicated by the fact
that the cellular environment is crowded and heterogeneous; interaction pairs
may exist at low relative concentrations and thus be presented with many more
opportunities for promiscuous interactions compared to specific interaction
possibilities. Here we address these questions using a simple computational
model that includes specifically designed interacting model proteins immersed
in a mixture containing hundreds of different unrelated ones; all of them
undergo simulated diffusion and interaction. We find that specific complexes
are quite robust to interference from promiscuous interaction partners, but only in
the range of temperatures Tdesign>T>Trand. At T>Tdesign specific complexes
become unstable, while at T<Trand formation of specific complexes is suppressed
by promiscuous interactions. Specific interactions can form only if
Tdesign>Trand. This condition requires an energy gap between binding energy in
a specific complex and set of binding energies between randomly associating
proteins, providing a general physical constraint on evolutionary selection or
design of specific interacting protein interfaces. This work has implications
for our understanding of how the protein repertoire functions and evolves
within the context of cellular systems.
| [
{
"created": "Tue, 27 Mar 2007 17:10:26 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Deeds",
"Eric",
""
],
[
"Ashenberg",
"orr",
""
],
[
"Gerardine",
"Jaline",
""
],
[
"Shakhnovich",
"Eugene",
""
]
] | The capacity of proteins to interact specifically with one another underlies our conceptual understanding of how living systems function. Systems-level study of specificity in protein-protein interactions is complicated by the fact that the cellular environment is crowded and heterogeneous; interaction pairs may exist at low relative concentrations and thus be presented with many more opportunities for promiscuous interactions compared to specific interaction possibilities. Here we address these questions using a simple computational model that includes specifically designed interacting model proteins immersed in a mixture containing hundreds of different unrelated ones; all of them undergo simulated diffusion and interaction. We find that specific complexes are quite robust to interference from promiscuous interaction partners, but only in the range of temperatures Tdesign>T>Trand. At T>Tdesign specific complexes become unstable, while at T<Trand formation of specific complexes is suppressed by promiscuous interactions. Specific interactions can form only if Tdesign>Trand. This condition requires an energy gap between binding energy in a specific complex and set of binding energies between randomly associating proteins, providing a general physical constraint on evolutionary selection or design of specific interacting protein interfaces. This work has implications for our understanding of how the protein repertoire functions and evolves within the context of cellular systems.
0809.2973 | Vahid Shahrezaei | Vahid Shahrezaei, Julien F Ollivier, Peter S Swain | Colored extrinsic fluctuations and stochastic gene expression | 16 pages and 5 figures | Molecular Systems Biology 4:196 (2008) | 10.1038/msb.2008.31 | null | q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochasticity is both exploited and controlled by cells. Although the
intrinsic stochasticity inherent in biochemistry is relatively well understood,
cellular variation, or 'noise', is predominantly generated by interactions of
the system of interest with other stochastic systems in the cell or its
environment. Such extrinsic fluctuations are nonspecific, affecting many system
components, and have a substantial lifetime, comparable to the cell cycle (they
are 'colored'). Here, we extend the standard stochastic simulation algorithm to
include extrinsic fluctuations. We show that these fluctuations affect mean
protein numbers and intrinsic noise, can speed up typical network response
times, and can explain trends in high-throughput measurements of variation. If
extrinsic fluctuations in two components of the network are correlated, they
may combine constructively (amplifying each other) or destructively
(attenuating each other). Consequently, we predict that incoherent feedforward
loops attenuate stochasticity, while coherent feedforwards amplify it. Our
results demonstrate that both the timescales of extrinsic fluctuations and
their nonspecificity substantially affect the function and performance of
biochemical networks.
| [
{
"created": "Wed, 17 Sep 2008 18:06:42 GMT",
"version": "v1"
}
] | 2008-09-18 | [
[
"Shahrezaei",
"Vahid",
""
],
[
"Ollivier",
"Julien F",
""
],
[
"Swain",
"Peter S",
""
]
] | Stochasticity is both exploited and controlled by cells. Although the intrinsic stochasticity inherent in biochemistry is relatively well understood, cellular variation, or 'noise', is predominantly generated by interactions of the system of interest with other stochastic systems in the cell or its environment. Such extrinsic fluctuations are nonspecific, affecting many system components, and have a substantial lifetime, comparable to the cell cycle (they are 'colored'). Here, we extend the standard stochastic simulation algorithm to include extrinsic fluctuations. We show that these fluctuations affect mean protein numbers and intrinsic noise, can speed up typical network response times, and can explain trends in high-throughput measurements of variation. If extrinsic fluctuations in two components of the network are correlated, they may combine constructively (amplifying each other) or destructively (attenuating each other). Consequently, we predict that incoherent feedforward loops attenuate stochasticity, while coherent feedforwards amplify it. Our results demonstrate that both the timescales of extrinsic fluctuations and their nonspecificity substantially affect the function and performance of biochemical networks. |
1407.7801 | Brian Williams Dr | Brian Gerard Williams | Optimizing control of HIV in Kenya | 9 pages | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | UNAIDS has embraced an ambitious global target for the implementation of
treatment for people living with HIV. This 90-90-90 target would mean that, by
2020, 90% of all those living with HIV should know their status, 90% of these
would be on treatment and 90% of these would have fully suppressed plasma viral
loads. To reach this target in the next five years presents a major logistical
challenge. However, the prevalence of HIV varies greatly by risk groups, age,
gender, geography and social conditions. For reasons of effectiveness and
impact the focus must first be on those who are most likely to be infected with
HIV and therefore most likely to infect others. In Kenya the prevalence of HIV
in adults varies by two orders of magnitude among the counties. The effective
implementation of 90-90-90 will depend on first providing ART where the
prevalence of infection is greatest, then to those that are most easily reached
in large numbers and finally to the whole population. Here we use routine data
from ante-natal clinics and national survey data to assess the variation of the
prevalence of HIV among counties in Kenya; we suggest reasons for this
variation, and estimate the effectiveness of targeting the roll-out of ART. The
highest prevalence occurs in some of the counties bordering Lake Victoria and
these are most in need of ART. These districts in Nyanza Province account for
31% of all cases in Kenya but make up 10% of the population and cover 1.8% of
the land-area. The highest concentrations of HIV cases are in Nairobi and
Mombasa which account for a further 18% of all cases in Kenya but make up 12%
of the population and cover 0.1% of the land-area. Providing ART in these two
cities will be relatively straightforward given their small geographical area.
| [
{
"created": "Mon, 28 Jul 2014 16:38:39 GMT",
"version": "v1"
}
] | 2014-07-30 | [
[
"Williams",
"Brian Gerard",
""
]
] | UNAIDS has embraced an ambitious global target for the implementation of treatment for people living with HIV. This 90-90-90 target would mean that, by 2020, 90% of all those living with HIV should know their status, 90% of these would be on treatment and 90% of these would have fully suppressed plasma viral loads. To reach this target in the next five years presents a major logistical challenge. However, the prevalence of HIV varies greatly by risk groups, age, gender, geography and social conditions. For reasons of effectiveness and impact the focus must first be on those who are most likely to be infected with HIV and therefore most likely to infect others. In Kenya the prevalence of HIV in adults varies by two orders of magnitude among the counties. The effective implementation of 90-90-90 will depend on first providing ART where the prevalence of infection is greatest, then to those that are most easily reached in large numbers and finally to the whole population. Here we use routine data from ante-natal clinics and national survey data to assess the variation of the prevalence of HIV among counties in Kenya; we suggest reasons for this variation, and estimate the effectiveness of targeting the roll-out of ART. The highest prevalence occurs in some of the counties bordering Lake Victoria and these are most in need of ART. These districts in Nyanza Province account for 31% of all cases in Kenya but make up 10% of the population and cover 1.8% of the land-area. The highest concentrations of HIV cases are in Nairobi and Mombasa which account for a further 18% of all cases in Kenya but make up 12% of the population and cover 0.1% of the land-area. Providing ART in these two cities will be relatively straightforward given their small geographical area.
2209.13022 | Yeganeh Madadi | Yeganeh Madadi, Aboozar Monavarfeshani, Hao Chen, W. Daniel Stamer,
Robert W. Williams, and Siamak Yousefi | Artificial Intelligence Models for Cell Type and Subtype Identification
Based on Single-Cell RNA Sequencing Data in Vision Science | null | null | null | null | q-bio.QM eess.IV q-bio.BM q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Single-cell RNA sequencing (scRNA-seq) provides a high throughput,
quantitative and unbiased framework for scientists in many research fields to
identify and characterize cell types within heterogeneous cell populations from
various tissues. However, scRNA-seq based identification of discrete cell-types
is still labor intensive and depends on prior molecular knowledge. Artificial
intelligence has provided faster, more accurate, and user-friendly approaches
for cell-type identification. In this review, we discuss recent advances in
cell-type identification methods using artificial intelligence techniques based
on single-cell and single-nucleus RNA sequencing data in vision science.
| [
{
"created": "Mon, 26 Sep 2022 20:48:41 GMT",
"version": "v1"
}
] | 2022-09-28 | [
[
"Madadi",
"Yeganeh",
""
],
[
"Monavarfeshani",
"Aboozar",
""
],
[
"Chen",
"Hao",
""
],
[
"Stamer",
"W. Daniel",
""
],
[
"Williams",
"Robert W.",
""
],
[
"Yousefi",
"Siamak",
""
]
] | Single-cell RNA sequencing (scRNA-seq) provides a high throughput, quantitative and unbiased framework for scientists in many research fields to identify and characterize cell types within heterogeneous cell populations from various tissues. However, scRNA-seq based identification of discrete cell-types is still labor intensive and depends on prior molecular knowledge. Artificial intelligence has provided faster, more accurate, and user-friendly approaches for cell-type identification. In this review, we discuss recent advances in cell-type identification methods using artificial intelligence techniques based on single-cell and single-nucleus RNA sequencing data in vision science. |
2310.01428 | Maitham Yousif | Maitham G. Yousif, Ghizal Fatima, Hector J. Castro, Fadhil G.
Al-Amran, Salman Rawaf | Unraveling Post-COVID-19 Immune Dysregulation Using Machine
Learning-based Immunophenotyping | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | The COVID-19 pandemic has left a significant mark on global healthcare, with
many individuals experiencing lingering symptoms long after recovering from the
acute phase of the disease, a condition often referred to as "long COVID." This
study delves into the intricate realm of immune dysregulation that ensues in
509 post-COVID-19 patients across multiple Iraqi regions during the years 2022
and 2023. Utilizing advanced machine learning techniques for immunophenotyping,
this research aims to shed light on the diverse immune dysregulation patterns
present in long COVID patients. By analyzing a comprehensive dataset
encompassing clinical, immunological, and demographic information, the study
provides valuable insights into the complex interplay of immune responses
following COVID-19 infection. The findings reveal that long COVID is associated
with a spectrum of immune dysregulation phenomena, including persistent
inflammation, altered cytokine profiles, and abnormal immune cell subsets.
These insights highlight the need for personalized interventions and tailored
treatment strategies for individuals suffering from long COVID-19. This
research represents a significant step forward in our understanding of the
post-COVID-19 immune landscape and opens new avenues for targeted therapies and
clinical management of long COVID patients. As the world grapples with the
long-term implications of the pandemic, these findings offer hope for improving
the quality of life for those affected by this enigmatic condition.
| [
{
"created": "Thu, 28 Sep 2023 14:47:53 GMT",
"version": "v1"
}
] | 2023-10-04 | [
[
"Yousif",
"Maitham G.",
""
],
[
"Fatima",
"Ghizal",
""
],
[
"Castro",
"Hector J.",
""
],
[
"Al-Amran",
"Fadhil G.",
""
],
[
"Rawaf",
"Salman",
""
]
] | The COVID-19 pandemic has left a significant mark on global healthcare, with many individuals experiencing lingering symptoms long after recovering from the acute phase of the disease, a condition often referred to as "long COVID." This study delves into the intricate realm of immune dysregulation that ensues in 509 post-COVID-19 patients across multiple Iraqi regions during the years 2022 and 2023. Utilizing advanced machine learning techniques for immunophenotyping, this research aims to shed light on the diverse immune dysregulation patterns present in long COVID patients. By analyzing a comprehensive dataset encompassing clinical, immunological, and demographic information, the study provides valuable insights into the complex interplay of immune responses following COVID-19 infection. The findings reveal that long COVID is associated with a spectrum of immune dysregulation phenomena, including persistent inflammation, altered cytokine profiles, and abnormal immune cell subsets. These insights highlight the need for personalized interventions and tailored treatment strategies for individuals suffering from long COVID-19. This research represents a significant step forward in our understanding of the post-COVID-19 immune landscape and opens new avenues for targeted therapies and clinical management of long COVID patients. As the world grapples with the long-term implications of the pandemic, these findings offer hope for improving the quality of life for those affected by this enigmatic condition. |
2207.07410 | Maarten Alexander Brems | Maarten A. Brems, Robert Runkel, Todd O. Yeates, Peter Virnau | AlphaFold predicts the most complex protein knot and composite protein
knots | This article appeared openly accessible in M. A. Brems et al.,
Protein Science. 2022; 31( 8):e4380 and may be found at
https://doi.org/10.1002/pro.4380 | Protein Science. 2022; 31( 8):e4380 | 10.1002/pro.4380 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The computer artificial intelligence system AlphaFold has recently predicted
previously unknown three-dimensional structures of thousands of proteins.
Focusing on the subset with high-confidence scores, we algorithmically analyze
these predictions for cases where the protein backbone exhibits rare
topological complexity, i.e. knotting. Amongst others, we discovered a
$7_1$-knot, the most topologically complex knot ever found in a protein, as
well several 6-crossing composite knots comprised of two methyltransferase or
carbonic anhydrase domains, each containing a simple trefoil knot. These deeply
embedded composite knots occur evidently by gene duplication and
interconnection of knotted dimers. Finally, we report two new five-crossing
knots including the first $5_1$-knot. Our list of analyzed structures forms the
basis for future experimental studies to confirm these novel knotted topologies
and to explore their complex folding mechanisms.
| [
{
"created": "Fri, 15 Jul 2022 11:38:45 GMT",
"version": "v1"
}
] | 2022-07-18 | [
[
"Brems",
"Maarten A.",
""
],
[
"Runkel",
"Robert",
""
],
[
"Yeates",
"Todd O.",
""
],
[
"Virnau",
"Peter",
""
]
] | The computer artificial intelligence system AlphaFold has recently predicted previously unknown three-dimensional structures of thousands of proteins. Focusing on the subset with high-confidence scores, we algorithmically analyze these predictions for cases where the protein backbone exhibits rare topological complexity, i.e. knotting. Amongst others, we discovered a $7_1$-knot, the most topologically complex knot ever found in a protein, as well as several 6-crossing composite knots comprised of two methyltransferase or carbonic anhydrase domains, each containing a simple trefoil knot. These deeply embedded composite knots occur evidently by gene duplication and interconnection of knotted dimers. Finally, we report two new five-crossing knots including the first $5_1$-knot. Our list of analyzed structures forms the basis for future experimental studies to confirm these novel knotted topologies and to explore their complex folding mechanisms.
1506.00602 | Chris Jewell PhD | Chris Jewell and Richard Brown | Bayesian data assimilation provides rapid decision support for
vector-borne diseases | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Predicting the spread of vector-borne diseases in response to incursions
requires knowledge of both host and vector demographics in advance of an
outbreak. Whereas host population data is typically available, for novel
disease introductions there is a high chance of the pathogen utilising a vector
for which data is unavailable. This presents a barrier to estimating the
parameters of dynamical models representing host-vector-pathogen interaction,
and hence limits their ability to provide quantitative risk forecasts. The
Theileria orientalis (Ikeda) outbreak in New Zealand cattle demonstrates this
problem: even though the vector has received extensive laboratory study, a high
degree of uncertainty persists over its national demographic distribution.
Addressing this, we develop a Bayesian data assimilation approach whereby
indirect observations of vector activity inform a seasonal spatio-temporal risk
surface within a stochastic epidemic model. We provide quantitative predictions
for the future spread of the epidemic, quantifying uncertainty in the model
parameters, case infection times, and the disease status of undetected
infections. Importantly, we demonstrate how our model learns sequentially as
the epidemic unfolds, and provides evidence for changing epidemic dynamics
through time. Our approach therefore provides a significant advance in rapid
decision support for novel vector-borne disease outbreaks.
| [
{
"created": "Mon, 1 Jun 2015 18:47:27 GMT",
"version": "v1"
}
] | 2015-06-02 | [
[
"Jewell",
"Chris",
""
],
[
"Brown",
"Richard",
""
]
] | Predicting the spread of vector-borne diseases in response to incursions requires knowledge of both host and vector demographics in advance of an outbreak. Whereas host population data is typically available, for novel disease introductions there is a high chance of the pathogen utilising a vector for which data is unavailable. This presents a barrier to estimating the parameters of dynamical models representing host-vector-pathogen interaction, and hence limits their ability to provide quantitative risk forecasts. The Theileria orientalis (Ikeda) outbreak in New Zealand cattle demonstrates this problem: even though the vector has received extensive laboratory study, a high degree of uncertainty persists over its national demographic distribution. Addressing this, we develop a Bayesian data assimilation approach whereby indirect observations of vector activity inform a seasonal spatio-temporal risk surface within a stochastic epidemic model. We provide quantitative predictions for the future spread of the epidemic, quantifying uncertainty in the model parameters, case infection times, and the disease status of undetected infections. Importantly, we demonstrate how our model learns sequentially as the epidemic unfolds, and provides evidence for changing epidemic dynamics through time. Our approach therefore provides a significant advance in rapid decision support for novel vector-borne disease outbreaks. |
1111.5334 | Konstantin Klemm | Fakhteh Ghanbarnejad, Konstantin Klemm | Impact of individual nodes in Boolean network dynamics | 6 pages, 3 figures, 3 tables | EPL 99, 58006 (2012) | 10.1209/0295-5075/99/58006 | null | q-bio.MN cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Boolean networks serve as discrete models of regulation and signaling in
biological cells. Identifying the key controllers of such processes is
important for understanding the dynamical systems and planning further
analysis. Here we quantify the dynamical impact of a node as the probability of
damage spreading after switching the node's state. We find that the leading
eigenvector of the adjacency matrix is a good predictor of dynamical impact in
the case of long-term spreading. This so-called eigenvector centrality is also
a good proxy measure of the influence a node's initial state has on the
attractor the system eventually arrives at. Quality of prediction is further
improved when eigenvector centrality is based on the weighted matrix of
activities rather than the unweighted adjacency matrix. Simulations are
performed with ensembles of random Boolean networks and a Boolean model of
signaling in fibroblasts. The findings are supported by analytic arguments from
a linear approximation of damage spreading.
| [
{
"created": "Tue, 22 Nov 2011 21:00:19 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Sep 2012 16:25:22 GMT",
"version": "v2"
}
] | 2012-09-14 | [
[
"Ghanbarnejad",
"Fakhteh",
""
],
[
"Klemm",
"Konstantin",
""
]
] | Boolean networks serve as discrete models of regulation and signaling in biological cells. Identifying the key controllers of such processes is important for understanding the dynamical systems and planning further analysis. Here we quantify the dynamical impact of a node as the probability of damage spreading after switching the node's state. We find that the leading eigenvector of the adjacency matrix is a good predictor of dynamical impact in the case of long-term spreading. This so-called eigenvector centrality is also a good proxy measure of the influence a node's initial state has on the attractor the system eventually arrives at. Quality of prediction is further improved when eigenvector centrality is based on the weighted matrix of activities rather than the unweighted adjacency matrix. Simulations are performed with ensembles of random Boolean networks and a Boolean model of signaling in fibroblasts. The findings are supported by analytic arguments from a linear approximation of damage spreading. |
2404.03516 | Peng Li | Yuanyuan Zhang, Yingdong Wang, Chaoyong Wu, Lingmin Zhana, Aoyi Wang,
Caiping Cheng, Jinzhong Zhao, Wuxia Zhang, Jianxin Chen, Peng Li | Drug-target interaction prediction by integrating heterogeneous
information with mutual attention network | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Identification of drug-target interactions is an indispensable part of drug
discovery. While conventional shallow machine learning and recent deep learning
methods based on chemogenomic properties of drugs and target proteins have
pushed this prediction performance improvement to a new level, these methods
are still difficult to adapt to novel structures. Alternatively, large-scale
biological and pharmacological data provide new ways to accelerate drug-target
interaction prediction. Here, we propose DrugMAN, a deep learning model for
predicting drug-target interaction by integrating multiplex heterogeneous
functional networks with a mutual attention network (MAN). DrugMAN uses a graph
attention network-based integration algorithm to learn network-specific
low-dimensional features for drugs and target proteins by integrating four drug
networks and seven gene/protein networks, respectively. DrugMAN then captures
interaction information between drug and target representations by a mutual
attention network to improve drug-target prediction. DrugMAN achieves the best
prediction performance under four different scenarios, especially in real-world
scenarios. DrugMAN spotlights heterogeneous information to mine drug-target
interactions and can be a powerful tool for drug discovery and drug
repurposing.
| [
{
"created": "Wed, 3 Apr 2024 02:48:22 GMT",
"version": "v1"
}
] | 2024-04-05 | [
[
"Zhang",
"Yuanyuan",
""
],
[
"Wang",
"Yingdong",
""
],
[
"Wu",
"Chaoyong",
""
],
[
"Zhana",
"Lingmin",
""
],
[
"Wang",
"Aoyi",
""
],
[
"Cheng",
"Caiping",
""
],
[
"Zhao",
"Jinzhong",
""
],
[
"Zhang",
"Wuxia",
""
],
[
"Chen",
"Jianxin",
""
],
[
"Li",
"Peng",
""
]
] | Identification of drug-target interactions is an indispensable part of drug discovery. While conventional shallow machine learning and recent deep learning methods based on chemogenomic properties of drugs and target proteins have pushed this prediction performance improvement to a new level, these methods are still difficult to adapt to novel structures. Alternatively, large-scale biological and pharmacological data provide new ways to accelerate drug-target interaction prediction. Here, we propose DrugMAN, a deep learning model for predicting drug-target interaction by integrating multiplex heterogeneous functional networks with a mutual attention network (MAN). DrugMAN uses a graph attention network-based integration algorithm to learn network-specific low-dimensional features for drugs and target proteins by integrating four drug networks and seven gene/protein networks, respectively. DrugMAN then captures interaction information between drug and target representations by a mutual attention network to improve drug-target prediction. DrugMAN achieves the best prediction performance under four different scenarios, especially in real-world scenarios. DrugMAN spotlights heterogeneous information to mine drug-target interactions and can be a powerful tool for drug discovery and drug repurposing. |
2302.06120 | Yiren Jian | Yiren Jian and Chongyang Gao and Chen Zeng and Yunjie Zhao and Soroush
Vosoughi | Knowledge from Large-Scale Protein Contact Prediction Models Can Be
Transferred to the Data-Scarce RNA Contact Prediction Task | The code is available at
https://github.com/yiren-jian/CoT-RNA-Transfer | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RNA, whose functionality is largely determined by its structure, plays an
important role in many biological activities. The prediction of pairwise
structural proximity between each nucleotide of an RNA sequence can
characterize the structural information of the RNA. Historically, this problem
has been tackled by machine learning models using expert-engineered features
and trained on scarce labeled datasets. Here, we find that the knowledge
learned by a protein-coevolution Transformer-based deep neural network can be
transferred to the RNA contact prediction task. As protein datasets are orders
of magnitude larger than those for RNA contact prediction, our findings and the
subsequent framework greatly reduce the data scarcity bottleneck. Experiments
confirm that RNA contact prediction through transfer learning using a publicly
available protein model is greatly improved. Our findings indicate that the
learned structural patterns of proteins can be transferred to RNAs, opening up
potential new avenues for research.
| [
{
"created": "Mon, 13 Feb 2023 06:00:56 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Apr 2023 21:13:21 GMT",
"version": "v2"
},
{
"created": "Fri, 19 Jan 2024 04:13:33 GMT",
"version": "v3"
}
] | 2024-01-22 | [
[
"Jian",
"Yiren",
""
],
[
"Gao",
"Chongyang",
""
],
[
"Zeng",
"Chen",
""
],
[
"Zhao",
"Yunjie",
""
],
[
"Vosoughi",
"Soroush",
""
]
] | RNA, whose functionality is largely determined by its structure, plays an important role in many biological activities. The prediction of pairwise structural proximity between each nucleotide of an RNA sequence can characterize the structural information of the RNA. Historically, this problem has been tackled by machine learning models using expert-engineered features and trained on scarce labeled datasets. Here, we find that the knowledge learned by a protein-coevolution Transformer-based deep neural network can be transferred to the RNA contact prediction task. As protein datasets are orders of magnitude larger than those for RNA contact prediction, our findings and the subsequent framework greatly reduce the data scarcity bottleneck. Experiments confirm that RNA contact prediction through transfer learning using a publicly available protein model is greatly improved. Our findings indicate that the learned structural patterns of proteins can be transferred to RNAs, opening up potential new avenues for research. |
2003.00073 | Yufen Chen | Yufen Chen, Amy A. Herrold, Virginia Gallagher, Brian Vesci, Jeffrey
Mjannes, Leanne R. McCloskey, James L. Reilly, Hans C. Breiter | Preliminary Report: Cerebral blood flow mediates the relationship
between progesterone and perceived stress symptoms among female club athletes
after mild traumatic brain injury | 27pages, 3 figures, 4 tables | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Female athletes are severely understudied in the field of concussion
research, despite higher prevalence for injuries and tendency to have longer
recovery time. Hormonal fluctuations due to normal menstrual cycle (MC) or
hormonal contraceptive (HC) use have been shown to impact both post-injury
symptoms and neuroimaging measures, but have not been accounted for in
concussion studies. In this preliminary study, we compared arterial spin
labeling measured cerebral blood flow (CBF) between concussed female club
athletes 3-10 days post injury (mTBI) and demographic, HC/MC matched controls
(CON). We test whether CBF mediates the relationship between progesterone
levels in blood and post-injury symptoms, which may be evidence for
progesterone's role in neuroprotection. We found a significant three-way
relationship between progesterone, CBF and perceived stress score (PSS) in the
left middle temporal gyrus. Higher progesterone was associated with lower (more
normative) PSS, as well as higher (more normative) CBF. CBF mediates 100% of
the relationship between progesterone and PSS (Sobel's p-value=0.017). These
findings suggest progesterone may have a neuroprotective role after concussion
and highlight the importance of controlling for the effects of sex hormones in
future concussion studies.
| [
{
"created": "Fri, 28 Feb 2020 21:26:02 GMT",
"version": "v1"
}
] | 2020-03-03 | [
[
"Chen",
"Yufen",
""
],
[
"Herrold",
"Amy A.",
""
],
[
"Gallagher",
"Virginia",
""
],
[
"Vesci",
"Brian",
""
],
[
"Mjannes",
"Jeffrey",
""
],
[
"McCloskey",
"Leanne R.",
""
],
[
"Reilly",
"James L.",
""
],
[
"Breiter",
"Hans C.",
""
]
] | Female athletes are severely understudied in the field of concussion research, despite higher prevalence for injuries and tendency to have longer recovery time. Hormonal fluctuations due to normal menstrual cycle (MC) or hormonal contraceptive (HC) use have been shown to impact both post-injury symptoms and neuroimaging measures, but have not been accounted for in concussion studies. In this preliminary study, we compared arterial spin labeling measured cerebral blood flow (CBF) between concussed female club athletes 3-10 days post injury (mTBI) and demographic, HC/MC matched controls (CON). We test whether CBF mediates the relationship between progesterone levels in blood and post-injury symptoms, which may be evidence for progesterone's role in neuroprotection. We found a significant three-way relationship between progesterone, CBF and perceived stress score (PSS) in the left middle temporal gyrus. Higher progesterone was associated with lower (more normative) PSS, as well as higher (more normative) CBF. CBF mediates 100% of the relationship between progesterone and PSS (Sobel's p-value=0.017). These findings suggest progesterone may have a neuroprotective role after concussion and highlight the importance of controlling for the effects of sex hormones in future concussion studies. |
1301.1426 | David A. Kessler | Shlomit Weisman, Nadav M. Shnerb, David A. Kessler | Evolutionarily Stable Density-Dependent Dispersal | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An ab-initio numerical study of the density-dependent, evolutionarily stable
dispersal strategy is presented. The simulations are based on a simple
discrete generation island model with four processes: reproduction, dispersal,
competition and local catastrophe. We do not impose any a priori constraints on
the dispersal schedule, allowing the entire schedule to evolve. We find that
the system converges at long times to a unique nontrivial dispersal schedule
such that the dispersal probability is a monotonically increasing function of
the density. We have explored the dependence of the selected dispersal strategy
on the various system parameters: mean number of offspring, site carrying
capacity, dispersal cost and system size. A few general scaling laws are seen
to emerge from the data.
| [
{
"created": "Tue, 8 Jan 2013 06:53:30 GMT",
"version": "v1"
}
] | 2013-01-09 | [
[
"Weisman",
"Shlomit",
""
],
[
"Shnerb",
"Nadav M.",
""
],
[
"Kessler",
"David A.",
""
]
] | An ab-initio numerical study of the density-dependent, evolutionarily stable dispersal strategy is presented. The simulations are based on a simple discrete generation island model with four processes: reproduction, dispersal, competition and local catastrophe. We do not impose any a priori constraints on the dispersal schedule, allowing the entire schedule to evolve. We find that the system converges at long times to a unique nontrivial dispersal schedule such that the dispersal probability is a monotonically increasing function of the density. We have explored the dependence of the selected dispersal strategy on the various system parameters: mean number of offspring, site carrying capacity, dispersal cost and system size. A few general scaling laws are seen to emerge from the data. |
2001.05078 | Dale Zhou | Dale Zhou, Christopher W. Lynn, Zaixu Cui, Rastko Ciric, Graham L.
Baum, Tyler M. Moore, David R. Roalf, John A. Detre, Ruben C. Gur, Raquel E.
Gur, Theodore D. Satterthwaite, Danielle S. Bassett | Efficient Coding in the Economics of Human Brain Connectomics | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In systems neuroscience, most models posit that brain regions communicate
information under constraints of efficiency. Yet, evidence for efficient
communication in structural brain networks characterized by hierarchical
organization and highly connected hubs remains sparse. The principle of
efficient coding proposes that the brain transmits maximal information in a
metabolically economical or compressed form to improve future behavior. To
determine how structural connectivity supports efficient coding, we develop a
theory specifying minimum rates of message transmission between brain regions
to achieve an expected fidelity, and we test five predictions from the theory
based on random walk communication dynamics. In doing so, we introduce the
metric of compression efficiency, which quantifies the trade-off between lossy
compression and transmission fidelity in structural networks. In a large sample
of youth (n = 1,042; age 8-23 years), we analyze structural networks derived
from diffusion weighted imaging and metabolic expenditure operationalized using
cerebral blood flow. We show that structural networks strike compression
efficiency trade-offs consistent with theoretical predictions. We find that
compression efficiency prioritizes fidelity with development, heightens when
metabolic resources and myelination guide communication, explains advantages of
hierarchical organization, links higher input fidelity to disproportionate
areal expansion, and shows that hubs integrate information by lossy
compression. Lastly, compression efficiency is predictive of behavior--beyond a
conventional metric--for cognitive domains including executive function,
memory, complex reasoning, and social cognition. Our findings elucidate how
macroscale connectivity supports efficient coding, and serve to foreground
communication processes that utilize random walk dynamics constrained by
network connectivity.
| [
{
"created": "Tue, 14 Jan 2020 23:01:06 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Nov 2021 21:19:19 GMT",
"version": "v2"
}
] | 2021-11-04 | [
[
"Zhou",
"Dale",
""
],
[
"Lynn",
"Christopher W.",
""
],
[
"Cui",
"Zaixu",
""
],
[
"Ciric",
"Rastko",
""
],
[
"Baum",
"Graham L.",
""
],
[
"Moore",
"Tyler M.",
""
],
[
"Roalf",
"David R.",
""
],
[
"Detre",
"John A.",
""
],
[
"Gur",
"Ruben C.",
""
],
[
"Gur",
"Raquel E.",
""
],
[
"Satterthwaite",
"Theodore D.",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | In systems neuroscience, most models posit that brain regions communicate information under constraints of efficiency. Yet, evidence for efficient communication in structural brain networks characterized by hierarchical organization and highly connected hubs remains sparse. The principle of efficient coding proposes that the brain transmits maximal information in a metabolically economical or compressed form to improve future behavior. To determine how structural connectivity supports efficient coding, we develop a theory specifying minimum rates of message transmission between brain regions to achieve an expected fidelity, and we test five predictions from the theory based on random walk communication dynamics. In doing so, we introduce the metric of compression efficiency, which quantifies the trade-off between lossy compression and transmission fidelity in structural networks. In a large sample of youth (n = 1,042; age 8-23 years), we analyze structural networks derived from diffusion weighted imaging and metabolic expenditure operationalized using cerebral blood flow. We show that structural networks strike compression efficiency trade-offs consistent with theoretical predictions. We find that compression efficiency prioritizes fidelity with development, heightens when metabolic resources and myelination guide communication, explains advantages of hierarchical organization, links higher input fidelity to disproportionate areal expansion, and shows that hubs integrate information by lossy compression. Lastly, compression efficiency is predictive of behavior--beyond a conventional metric--for cognitive domains including executive function, memory, complex reasoning, and social cognition. Our findings elucidate how macroscale connectivity supports efficient coding, and serve to foreground communication processes that utilize random walk dynamics constrained by network connectivity. |
2103.07061 | Cameron Zachreson | Cameron Zachreson, Sheryl L. Chang, Oliver M. Cliff, Mikhail
Prokopenko | How will mass-vaccination change COVID-19 lockdown requirements in
Australia? | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: To prevent future outbreaks of COVID-19, Australia is pursuing a
mass-vaccination approach in which a targeted group of the population
comprising healthcare workers, aged-care residents and other individuals at
increased risk of exposure will receive a highly effective priority vaccine.
The rest of the population will instead have access to a less effective
vaccine.
Methods: We apply a large-scale agent-based model of COVID-19 in Australia to
investigate the possible implications of this hybrid approach to
mass-vaccination. The model is calibrated to recent epidemiological and
demographic data available in Australia, and accounts for several components of
vaccine efficacy.
Findings: Within a feasible range of vaccine efficacy values, our model
supports the assertion that complete herd immunity due to vaccination is not
likely in the Australian context. For realistic scenarios in which herd
immunity is not achieved, we simulate the effects of mass-vaccination on
epidemic growth rate, and investigate the requirements of lockdown measures
applied to curb subsequent outbreaks. In our simulations, Australia's
vaccination strategy can feasibly reduce required lockdown intensity and
initial epidemic growth rate by 43% and 52%, respectively. The severity of
epidemics, as measured by the peak number of daily new cases, decreases by up
to two orders of magnitude under plausible mass-vaccination and lockdown
strategies.
Interpretation: The study presents a strong argument for a large-scale
vaccination campaign in Australia, which would substantially reduce both the
intensity of future outbreaks and the stringency of non-pharmaceutical
interventions required for their suppression.
| [
{
"created": "Fri, 12 Mar 2021 03:16:10 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Mar 2021 09:28:04 GMT",
"version": "v2"
},
{
"created": "Thu, 1 Apr 2021 03:33:15 GMT",
"version": "v3"
},
{
"created": "Sun, 18 Jul 2021 07:27:18 GMT",
"version": "v4"
},
{
"created": "Fri, 6 Aug 2021 04:13:59 GMT",
"version": "v5"
}
] | 2021-08-09 | [
[
"Zachreson",
"Cameron",
""
],
[
"Chang",
"Sheryl L.",
""
],
[
"Cliff",
"Oliver M.",
""
],
[
"Prokopenko",
"Mikhail",
""
]
] | Background: To prevent future outbreaks of COVID-19, Australia is pursuing a mass-vaccination approach in which a targeted group of the population comprising healthcare workers, aged-care residents and other individuals at increased risk of exposure will receive a highly effective priority vaccine. The rest of the population will instead have access to a less effective vaccine. Methods: We apply a large-scale agent-based model of COVID-19 in Australia to investigate the possible implications of this hybrid approach to mass-vaccination. The model is calibrated to recent epidemiological and demographic data available in Australia, and accounts for several components of vaccine efficacy. Findings: Within a feasible range of vaccine efficacy values, our model supports the assertion that complete herd immunity due to vaccination is not likely in the Australian context. For realistic scenarios in which herd immunity is not achieved, we simulate the effects of mass-vaccination on epidemic growth rate, and investigate the requirements of lockdown measures applied to curb subsequent outbreaks. In our simulations, Australia's vaccination strategy can feasibly reduce required lockdown intensity and initial epidemic growth rate by 43% and 52%, respectively. The severity of epidemics, as measured by the peak number of daily new cases, decreases by up to two orders of magnitude under plausible mass-vaccination and lockdown strategies. Interpretation: The study presents a strong argument for a large-scale vaccination campaign in Australia, which would substantially reduce both the intensity of future outbreaks and the stringency of non-pharmaceutical interventions required for their suppression. |
2208.11518 | Xingyu Li | Anran Liu, Xingyu Li, Hongyi Wu, Bangwei Guo, Jitendra Jonnagaddala,
Hong Zhang, Xu Steven Xu | Prognostic Significance of Tumor-Infiltrating Lymphocytes Using Deep
Learning on Pathology Images in Colorectal Cancers | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Purpose Tumor-infiltrating lymphocytes (TILs) have significant prognostic
values in cancers. However, very few automated, deep-learning-based TIL scoring
algorithms have been developed for colorectal cancers (CRC). Methods We
developed an automated, multiscale LinkNet workflow for quantifying
cellular-level TILs for CRC tumors using H&E-stained images. The predictive
performance of the automatic TIL scores (TIL) for disease progression and
overall survival was evaluated using two international datasets, including 554
CRC patients from The Cancer Genome Atlas (TCGA) and 1130 CRC patients from
Molecular and Cellular Oncology (MCO). Results The LinkNet model provided an
outstanding precision (0.9508), recall (0.9185), and overall F1 score (0.9347).
Clear dose-response relationships were observed: the risk of disease
progression or death decreased with increasing TILs in both TCGA and MCO cohorts. Both
univariate and multivariate Cox regression analyses for the TCGA data
demonstrated that patients with high TILs had significant (approx. 75%)
reduction of risk for disease progression. In both MCO and TCGA studies, the
TIL-high group was significantly associated with improved overall survival in
univariate analysis (30% and 54% reduction in risk, respectively). However,
potential confounding was observed in the MCO dataset. The favorable effects of
high TILs were consistently observed in different subgroups according to known
risk factors. Conclusion A deep-learning workflow for automatic TIL
quantification based on LinkNet was successfully developed.
| [
{
"created": "Tue, 23 Aug 2022 14:57:20 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Sep 2022 15:35:54 GMT",
"version": "v2"
},
{
"created": "Thu, 15 Sep 2022 14:46:04 GMT",
"version": "v3"
}
] | 2022-09-16 | [
[
"Liu",
"Anran",
""
],
[
"Li",
"Xingyu",
""
],
[
"Wu",
"Hongyi",
""
],
[
"Guo",
"Bangwei",
""
],
[
"Jonnagaddala",
"Jitendra",
""
],
[
"Zhang",
"Hong",
""
],
[
"Xu",
"Xu Steven",
""
]
] | Purpose Tumor-infiltrating lymphocytes (TILs) have significant prognostic values in cancers. However, very few automated, deep-learning-based TIL scoring algorithms have been developed for colorectal cancers (CRC). Methods We developed an automated, multiscale LinkNet workflow for quantifying cellular-level TILs for CRC tumors using H&E-stained images. The predictive performance of the automatic TIL scores (TIL) for disease progression and overall survival was evaluated using two international datasets, including 554 CRC patients from The Cancer Genome Atlas (TCGA) and 1130 CRC patients from Molecular and Cellular Oncology (MCO). Results The LinkNet model provided an outstanding precision (0.9508), recall (0.9185), and overall F1 score (0.9347). Clear dose-response relationships were observed: the risk of disease progression or death decreased with increasing TILs in both TCGA and MCO cohorts. Both univariate and multivariate Cox regression analyses for the TCGA data demonstrated that patients with high TILs had significant (approx. 75%) reduction of risk for disease progression. In both MCO and TCGA studies, the TIL-high group was significantly associated with improved overall survival in univariate analysis (30% and 54% reduction in risk, respectively). However, potential confounding was observed in the MCO dataset. The favorable effects of high TILs were consistently observed in different subgroups according to known risk factors. Conclusion A deep-learning workflow for automatic TIL quantification based on LinkNet was successfully developed. |
2004.04144 | Wesley Pegden | Maria Chikina and Wesley Pegden | Modeling strict age-targeted mitigation strategies for COVID-19 | 16 pages, 16 figures, 1 table | null | 10.1371/journal.pone.0236237 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We use a simple SIR-like epidemic model which integrates known age-contact
patterns for the United States to model the effect of age-targeted mitigation
strategies for a COVID-19-like epidemic. We find that, among strategies which
end with population immunity, strict age-targeted mitigation strategies have
the potential to greatly reduce mortalities and ICU utilization for natural
parameter choices.
| [
{
"created": "Wed, 8 Apr 2020 17:56:32 GMT",
"version": "v1"
}
] | 2020-09-09 | [
[
"Chikina",
"Maria",
""
],
[
"Pegden",
"Wesley",
""
]
] | We use a simple SIR-like epidemic model which integrates known age-contact patterns for the United States to model the effect of age-targeted mitigation strategies for a COVID-19-like epidemic. We find that, among strategies which end with population immunity, strict age-targeted mitigation strategies have the potential to greatly reduce mortalities and ICU utilization for natural parameter choices. |
1107.1100 | Laura Hern\'andez | James E. Cresswell and Laura Hernandez | Network structure and phylogenetic signal in an artificially assembled
plant-pollinator community | Key words: chemical ecology, community structure, oligolecty, pollen,
polylecty, social bees, solitary bees, complex networks, mutualist ecosystems | null | null | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Community ecologists are principally occupied with the proposition that
natural assemblages of species exhibit orderliness and with identifying its
causes. Plant-pollinator networks exhibit a variety of orderly properties, one
of which is 'nestedness'. Nestedness has been attributed to various causes, but
we propose a further influence arising from the phylogenetic structure of the
biochemical constraints on the pollen diets of bees. We use an artificial
assemblage as an opportunity to isolate the action of this mechanism. The
properties of the network that we studied are consistent with the proposition
that nestedness is caused by the phylogeny of diet range in bees, but the claim
is preliminary and we propose that valuable progress in understanding
plant-pollinator systems may be made through applying the techniques of
chemical ecology at the community scale.
| [
{
"created": "Wed, 6 Jul 2011 11:23:07 GMT",
"version": "v1"
}
] | 2011-07-07 | [
[
"Cresswell",
"James E.",
""
],
[
"Hernandez",
"Laura",
""
]
] | Community ecologists are principally occupied with the proposition that natural assemblages of species exhibit orderliness and with identifying its causes. Plant-pollinator networks exhibit a variety of orderly properties, one of which is 'nestedness'. Nestedness has been attributed to various causes, but we propose a further influence arising from the phylogenetic structure of the biochemical constraints on the pollen diets of bees. We use an artificial assemblage as an opportunity to isolate the action of this mechanism. The properties of the network that we studied are consistent with the proposition that nestedness is caused by the phylogeny of diet range in bees, but the claim is preliminary and we propose that valuable progress in understanding plant-pollinator systems may be made through applying the techniques of chemical ecology at the community scale. |
2008.13273 | Vince Grolmusz | Balint Varga and Vince Grolmusz | The braingraph.org Database with more than 1000 Robust Human Structural
Connectomes in Five Resolutions | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human brain is the most complex object of study we encounter today.
Mapping the neuronal-level connections between the more than 80 billion neurons
in the brain is a hopeless task for science. By the recent advancement of
magnetic resonance imaging (MRI), we are able to map the macroscopic
connections between about 1000 brain areas. The MRI data acquisition and the
subsequent algorithmic workflow contain several complex steps, where errors can
occur. In the present contribution, we describe and publish 1064 human
connectomes, computed from the public release of the Human Connectome Project.
Each connectome is available in 5 resolutions, with 83, 129, 234, 463, and 1015
anatomically labeled nodes. For error correction, we follow an averaging and
extreme value deleting strategy for each edge and for each connectome. The
resulting 5320 braingraphs can be downloaded from the
\url{https://braingraph.org} site. This dataset makes these graphs accessible
to scientists unfamiliar with neuroimaging- and connectome-related tools:
mathematicians, physicists, and engineers can use their expertise and ideas
in the analysis of the connections of the human
brain. Brain scientists also have a robust and large, multi-resolution set for
connectomical studies.
| [
{
"created": "Sun, 30 Aug 2020 20:55:53 GMT",
"version": "v1"
}
] | 2020-09-01 | [
[
"Varga",
"Balint",
""
],
[
"Grolmusz",
"Vince",
""
]
] | The human brain is the most complex object of study we encounter today. Mapping the neuronal-level connections between the more than 80 billion neurons in the brain is a hopeless task for science. By the recent advancement of magnetic resonance imaging (MRI), we are able to map the macroscopic connections between about 1000 brain areas. The MRI data acquisition and the subsequent algorithmic workflow contain several complex steps, where errors can occur. In the present contribution, we describe and publish 1064 human connectomes, computed from the public release of the Human Connectome Project. Each connectome is available in 5 resolutions, with 83, 129, 234, 463, and 1015 anatomically labeled nodes. For error correction, we follow an averaging and extreme value deleting strategy for each edge and for each connectome. The resulting 5320 braingraphs can be downloaded from the \url{https://braingraph.org} site. This dataset makes these graphs accessible to scientists unfamiliar with neuroimaging- and connectome-related tools: mathematicians, physicists, and engineers can use their expertise and ideas in the analysis of the connections of the human brain. Brain scientists also have a robust and large, multi-resolution set for connectomical studies. |
1001.1399 | Liane Gabora | Liane Gabora and Diederik Aerts | A model of the emergence and evolution of integrated worldviews | null | Gabora, L. & Aerts, D. (2009). A model of the emergence and
evolution of integrated worldviews. Journal of Mathematical Psychology, 53,
434-451 | null | null | q-bio.NC q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is proposed that the ability of humans to flourish in diverse environments
and evolve complex cultures reflects the following two underlying cognitive
transitions. The transition from the coarse-grained associative memory of Homo
habilis to the fine-grained memory of Homo erectus enabled limited
representational redescription of perceptually similar episodes, abstraction,
and analytic thought, the last of which is modeled as the formation of states
and of lattices of properties and contexts for concepts. The transition to the
modern mind of Homo sapiens is proposed to have resulted from onset of the
capacity to spontaneously and temporarily shift to an associative mode of
thought conducive to interaction amongst seemingly disparate concepts, modeled
as the forging of conjunctions resulting in states of entanglement. The fruits
of associative thought became ingredients for analytic thought, and vice versa.
The ratio of associative pathways to concepts surpassed a percolation threshold
resulting in the emergence of a self-modifying, integrated internal model of
the world, or worldview.
| [
{
"created": "Sat, 9 Jan 2010 04:27:17 GMT",
"version": "v1"
}
] | 2013-08-26 | [
[
"Gabora",
"Liane",
""
],
[
"Aerts",
"Diederik",
""
]
] | It is proposed that the ability of humans to flourish in diverse environments and evolve complex cultures reflects the following two underlying cognitive transitions. The transition from the coarse-grained associative memory of Homo habilis to the fine-grained memory of Homo erectus enabled limited representational redescription of perceptually similar episodes, abstraction, and analytic thought, the last of which is modeled as the formation of states and of lattices of properties and contexts for concepts. The transition to the modern mind of Homo sapiens is proposed to have resulted from onset of the capacity to spontaneously and temporarily shift to an associative mode of thought conducive to interaction amongst seemingly disparate concepts, modeled as the forging of conjunctions resulting in states of entanglement. The fruits of associative thought became ingredients for analytic thought, and vice versa. The ratio of associative pathways to concepts surpassed a percolation threshold resulting in the emergence of a self-modifying, integrated internal model of the world, or worldview. |
2401.04815 | Zhening Li | Zhening Li, John Harte | Nestedness Promotes Stability in Maximum-Entropy Bipartite Food Webs | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Food web topology and energy flow rates across food web linkages can
influence ecosystem properties such as stability. Stability predictions from
current models of energy flow are often sensitive to details in their
formulation, and their complexity makes it difficult to elucidate underlying
mechanisms of general phenomena. Here, within the maximum information entropy
inference framework (MaxEnt), we derive a simple formula for the energy flow
carried by each linkage between two adjacent trophic layers. Inputs to the
model are the topological structure of the food web and aggregate energy fluxes
entering or exiting each species node. For ecosystems with interactions
dominated by consumer-resource interactions between two trophic layers, we
construct a model of species dynamics based on the energy flow predictions from
the MaxEnt model. Mathematical analyses and simulations of the model show that
a food web topology with a higher matrix dipole moment promotes stability
against small perturbations in population sizes, where the \textit{matrix
dipole moment} is a simple nestedness metric that we introduce. Since nested
bipartite subnetworks arise naturally in food webs, our result provides an
explanation for the stability of natural communities.
| [
{
"created": "Tue, 9 Jan 2024 20:48:10 GMT",
"version": "v1"
}
] | 2024-01-11 | [
[
"Li",
"Zhening",
""
],
[
"Harte",
"John",
""
]
] | Food web topology and energy flow rates across food web linkages can influence ecosystem properties such as stability. Stability predictions from current models of energy flow are often sensitive to details in their formulation, and their complexity makes it difficult to elucidate underlying mechanisms of general phenomena. Here, within the maximum information entropy inference framework (MaxEnt), we derive a simple formula for the energy flow carried by each linkage between two adjacent trophic layers. Inputs to the model are the topological structure of the food web and aggregate energy fluxes entering or exiting each species node. For ecosystems with interactions dominated by consumer-resource interactions between two trophic layers, we construct a model of species dynamics based on the energy flow predictions from the MaxEnt model. Mathematical analyses and simulations of the model show that a food web topology with a higher matrix dipole moment promotes stability against small perturbations in population sizes, where the \textit{matrix dipole moment} is a simple nestedness metric that we introduce. Since nested bipartite subnetworks arise naturally in food webs, our result provides an explanation for the stability of natural communities. |
2302.00772 | Roy Siegelmann | Roy Siegelmann and Hava Siegelmann | Meta-Analytic Operation of Threshold-independent Filtering (MOTiF)
Reveals Sub-threshold Genomic Robustness in Trisomy | 20 pages including bibliography, tables, appendices, and five figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Trisomy, a form of aneuploidy wherein the cell possesses an additional copy
of a specific chromosome, exhibits a high correlation with cancer. Studies from
across different hosts, cell-lines, and labs into the cellular effects induced
by aneuploidy have conflicted, ranging from small, chaotic global changes to
large instances of either overexpression or underexpression throughout the
trisomic chromosome. We ascertained that conflicting findings may be correct
but miss the overarching ground truth due to careless use of thresholds. To
correct this deficiency, we introduce the Meta-analytic Operation of
Threshold-independent Filtering (MOTiF) method, which begins by providing a
panoramic view of all thresholds, transforms the data to eliminate the effects
accounted for by known mechanisms, and then reconstructs an explanation of the
mechanisms that underlie the difference between the baseline and the
uncharacterized effects observed. As a proof of concept, we applied MOTiF to
human colonic epithelial cells, discovering a uniform decrease in gene
expression levels throughout the genome, which while significant, is beneath
most common thresholds. Using Hi-C data we identified the structural correlate,
wherein the physical genomic architecture condenses, compactifying in a
uniform, genome-wide manner, which we hypothesize is a robustness mechanism
counteracting the addition of a chromosome. We were able to decompose the gene
expression alterations into three overlapping mechanisms: the raw chromosome
content, the genomic compartmentalization, and the global structural
condensation. While further studies must be conducted to corroborate the
hypothesized robustness mechanism, MOTiF presents a useful meta-analytic tool
in the realm of gene expression and beyond.
| [
{
"created": "Wed, 1 Feb 2023 22:03:42 GMT",
"version": "v1"
}
] | 2023-02-03 | [
[
"Siegelmann",
"Roy",
""
],
[
"Siegelmann",
"Hava",
""
]
] | Trisomy, a form of aneuploidy wherein the cell possesses an additional copy of a specific chromosome, exhibits a high correlation with cancer. Studies from across different hosts, cell-lines, and labs into the cellular effects induced by aneuploidy have conflicted, ranging from small, chaotic global changes to large instances of either overexpression or underexpression throughout the trisomic chromosome. We ascertained that conflicting findings may be correct but miss the overarching ground truth due to careless use of thresholds. To correct this deficiency, we introduce the Meta-analytic Operation of Threshold-independent Filtering (MOTiF) method, which begins by providing a panoramic view of all thresholds, transforms the data to eliminate the effects accounted for by known mechanisms, and then reconstructs an explanation of the mechanisms that underlie the difference between the baseline and the uncharacterized effects observed. As a proof of concept, we applied MOTiF to human colonic epithelial cells, discovering a uniform decrease in gene expression levels throughout the genome, which while significant, is beneath most common thresholds. Using Hi-C data we identified the structural correlate, wherein the physical genomic architecture condenses, compactifying in a uniform, genome-wide manner, which we hypothesize is a robustness mechanism counteracting the addition of a chromosome. We were able to decompose the gene expression alterations into three overlapping mechanisms: the raw chromosome content, the genomic compartmentalization, and the global structural condensation. While further studies must be conducted to corroborate the hypothesized robustness mechanism, MOTiF presents a useful meta-analytic tool in the realm of gene expression and beyond. |
1506.08995 | Diego Fasoli | Diego Fasoli, Anna Cattani, Stefano Panzeri | The complexity of dynamics in small neural circuits | 34 pages, 11 figures. Supplementary materials added, colors of
figures 8 and 9 fixed, results unchanged | null | 10.1371/journal.pcbi.1004992 | null | q-bio.NC math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mean-field theory is a powerful tool for studying large neural networks.
However, when the system is composed of a few neurons, macroscopic differences
between the mean-field approximation and the real behavior of the network can
arise. Here we introduce a study of the dynamics of a small firing-rate network
with excitatory and inhibitory populations, in terms of local and global
bifurcations of the neural activity. Our approach is analytically tractable in
many respects, and sheds new light on the finite-size effects of the system. In
particular, we focus on the formation of multiple branching solutions of the
neural equations through spontaneous symmetry-breaking, since this phenomenon
increases considerably the complexity of the dynamical behavior of the network.
For these reasons, branching points may reveal important mechanisms through
which neurons interact and process information, which are not accounted for by
the mean-field approximation.
| [
{
"created": "Tue, 30 Jun 2015 09:04:25 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Jul 2015 09:09:44 GMT",
"version": "v2"
}
] | 2016-09-28 | [
[
"Fasoli",
"Diego",
""
],
[
"Cattani",
"Anna",
""
],
[
"Panzeri",
"Stefano",
""
]
] | Mean-field theory is a powerful tool for studying large neural networks. However, when the system is composed of a few neurons, macroscopic differences between the mean-field approximation and the real behavior of the network can arise. Here we introduce a study of the dynamics of a small firing-rate network with excitatory and inhibitory populations, in terms of local and global bifurcations of the neural activity. Our approach is analytically tractable in many respects, and sheds new light on the finite-size effects of the system. In particular, we focus on the formation of multiple branching solutions of the neural equations through spontaneous symmetry-breaking, since this phenomenon increases considerably the complexity of the dynamical behavior of the network. For these reasons, branching points may reveal important mechanisms through which neurons interact and process information, which are not accounted for by the mean-field approximation. |
q-bio/0506033 | Karin John | Karin John and Markus Baer | Alternative mechanisms of structuring biomembranes: Self-assembly vs.
self-organization | 4 pages, 5 figures | Phys. Rev. Lett. 95 (2005) 198101 | 10.1103/PhysRevLett.95.198101 | null | q-bio.CB nlin.PS physics.bio-ph | null | We study two mechanisms for the formation of protein patterns near membranes
of living cells by mathematical modelling. Self-assembly of protein domains by
electrostatic lipid-protein interactions is contrasted with self-organization
due to a nonequilibrium biochemical reaction cycle of proteins near the
membrane. While both processes lead eventually to quite similar patterns, their
evolution occurs on very different length and time scales. Self-assembly
produces periodic protein patterns on a spatial scale below 0.1 micron in a few
seconds followed by extremely slow coarsening, whereas self-organization
results in a pattern wavelength comparable to the typical cell size of 100
micron within a few minutes suggesting different biological functions for the
two processes.
| [
{
"created": "Wed, 22 Jun 2005 10:18:11 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Nov 2006 13:06:20 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"John",
"Karin",
""
],
[
"Baer",
"Markus",
""
]
] | We study two mechanisms for the formation of protein patterns near membranes of living cells by mathematical modelling. Self-assembly of protein domains by electrostatic lipid-protein interactions is contrasted with self-organization due to a nonequilibrium biochemical reaction cycle of proteins near the membrane. While both processes lead eventually to quite similar patterns, their evolution occurs on very different length and time scales. Self-assembly produces periodic protein patterns on a spatial scale below 0.1 micron in a few seconds followed by extremely slow coarsening, whereas self-organization results in a pattern wavelength comparable to the typical cell size of 100 micron within a few minutes suggesting different biological functions for the two processes. |
1801.09200 | Tomislav Plesa Mr | Tomislav Plesa, Radek Erban, Hans G. Othmer | Noise-induced Mixing and Multimodality in Reaction Networks | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze a class of chemical reaction networks under mass-action kinetics
and involving multiple time-scales, whose deterministic and stochastic models
display qualitative differences. The networks are inspired by gene-regulatory
networks, and consist of a slow-subnetwork, describing conversions among the
different gene states, and fast-subnetworks, describing biochemical
interactions involving the gene products. We show that the long-term dynamics
of such networks can consist of a unique attractor at the deterministic level
(unistability), while the long-term probability distribution at the stochastic
level may display multiple maxima (multimodality). The dynamical differences
stem from a novel phenomenon we call noise-induced mixing, whereby the
probability distribution of the gene products is a linear combination of the
probability distributions of the fast-subnetworks which are `mixed' by the
slow-subnetworks. The results are applied in the context of systems biology,
where noise-induced mixing is shown to play a biochemically important role,
producing phenomena such as stochastic multimodality and oscillations.
| [
{
"created": "Sun, 28 Jan 2018 09:18:30 GMT",
"version": "v1"
}
] | 2018-01-30 | [
[
"Plesa",
"Tomislav",
""
],
[
"Erban",
"Radek",
""
],
[
"Othmer",
"Hans G.",
""
]
] | We analyze a class of chemical reaction networks under mass-action kinetics and involving multiple time-scales, whose deterministic and stochastic models display qualitative differences. The networks are inspired by gene-regulatory networks, and consist of a slow-subnetwork, describing conversions among the different gene states, and fast-subnetworks, describing biochemical interactions involving the gene products. We show that the long-term dynamics of such networks can consist of a unique attractor at the deterministic level (unistability), while the long-term probability distribution at the stochastic level may display multiple maxima (multimodality). The dynamical differences stem from a novel phenomenon we call noise-induced mixing, whereby the probability distribution of the gene products is a linear combination of the probability distributions of the fast-subnetworks which are `mixed' by the slow-subnetworks. The results are applied in the context of systems biology, where noise-induced mixing is shown to play a biochemically important role, producing phenomena such as stochastic multimodality and oscillations. |
1512.01126 | Giulia Menichetti | Giulia Menichetti, Piero Fariselli, Daniel Remondini | Network measures for protein folding state discrimination | null | null | null | null | q-bio.MN q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Proteins fold using a two-state or multi-state kinetic mechanism, but up to
now there isn't a first-principle model to explain this different behaviour. We
exploit the network properties of protein structures by introducing novel
observables to address the problem of classifying the different types of
folding kinetics. These observables display a plain physical meaning, in terms
of vibrational modes, possible configurations compatible with the native
protein structure, and folding cooperativity. The relevance of these
observables is supported by a classification performance up to 90%, even with
simple classifiers such as discriminant analysis.
| [
{
"created": "Thu, 3 Dec 2015 15:52:55 GMT",
"version": "v1"
}
] | 2015-12-04 | [
[
"Menichetti",
"Giulia",
""
],
[
"Fariselli",
"Piero",
""
],
[
"Remondini",
"Daniel",
""
]
] | Proteins fold using a two-state or multi-state kinetic mechanism, but up to now there isn't a first-principle model to explain this different behaviour. We exploit the network properties of protein structures by introducing novel observables to address the problem of classifying the different types of folding kinetics. These observables display a plain physical meaning, in terms of vibrational modes, possible configurations compatible with the native protein structure, and folding cooperativity. The relevance of these observables is supported by a classification performance up to 90%, even with simple classifiers such as discriminant analysis. |
q-bio/0608039 | Pierre Sens | Pierre Sens and Nir Gov | Force balance and membrane shedding at the Red Blood Cell surface | 4 pages, 3 figures | null | 10.1103/PhysRevLett.98.018102 | null | q-bio.CB | null | During the aging of the red-blood cell, or under conditions of extreme
echinocytosis, membrane is shed from the cell plasma membrane in the form of
nano-vesicles. We propose that this process is the result of the
self-adaptation of the membrane surface area to the elastic stress imposed by
the spectrin cytoskeleton, via the local buckling of membrane under increasing
cytoskeleton stiffness. This model introduces the concept of force balance as a
regulatory process at the cell membrane, and quantitatively reproduces the rate
of area loss in aging red-blood cells.
| [
{
"created": "Sat, 26 Aug 2006 12:47:13 GMT",
"version": "v1"
}
] | 2015-06-26 | [
[
"Sens",
"Pierre",
""
],
[
"gov",
"Nir",
""
]
] | During the aging of the red-blood cell, or under conditions of extreme echinocytosis, membrane is shed from the cell plasma membrane in the form of nano-vesicles. We propose that this process is the result of the self-adaptation of the membrane surface area to the elastic stress imposed by the spectrin cytoskeleton, via the local buckling of membrane under increasing cytoskeleton stiffness. This model introduces the concept of force balance as a regulatory process at the cell membrane, and quantitatively reproduces the rate of area loss in aging red-blood cells. |
1904.12481 | Jeremy Frey | Morgane Hamon, Emma Chabani (ICM), Philippe Giraudeau (Potioc) | Towards a Passive BCI to Induce Lucid Dream | null | Journ{\'e}e Jeunes Chercheurs en Interfaces Cerveau-Ordinateur et
Neurofeedback 2019 (JJC-ICON '19), Mar 2019, Villeneuve d'Ascq, France | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lucid dreaming (LD) is a phenomenon during which the person is aware that
he/she is dreaming and is able to control the dream content. Studies have shown
that only 20% of people can experience lucid dreams on a regular basis.
However, LD frequency can be increased through induction techniques. External
stimulation technique relies on the ability to integrate external information
into the dream content. The aim is to remind the sleeper that she/he is
dreaming. If this type of protocol is not fully efficient, it demonstrates how
sensorial stimuli can be easily incorporated into people's dreams. The
objective of our project was to replicate this induction technique using
material less expensive and more portable. This material could simplify
experimental procedures. Participants could bring the material home, then have
a more ecological setting. First, we used the OpenBCI Cyton, a low-cost EEG
signal acquisition board in order to record and manually live-score sleep.
Then, we designed a mask containing two LEDs, connected to a microcontroller to
flash visual stimulation during sleep. We asked two volunteers to sleep for 2
hours in a dedicated room. One of the participants declared having a dream
during which the blue lights diffused by the mask were embedded into the dream
environment. The other participant woke up during the visual stimulation. These
results are congruent with previous studies. This work marked the first step of
a larger project. Our ongoing research includes the use of an online sleep
stage scoring tool and the possibility to automatically send stimuli according
to the sleep stage. We will also investigate other types of stimulus induction
in the future such as vibro-tactile stimulation that showed great potentials.
| [
{
"created": "Mon, 29 Apr 2019 08:00:59 GMT",
"version": "v1"
}
] | 2019-05-20 | [
[
"Hamon",
"Morgane",
"",
"ICM"
],
[
"Chabani",
"Emma",
"",
"ICM"
],
[
"Giraudeau",
"Philippe",
"",
"Potioc"
]
] | Lucid dreaming (LD) is a phenomenon during which the person is aware that he/she is dreaming and is able to control the dream content. Studies have shown that only 20% of people can experience lucid dreams on a regular basis. However, LD frequency can be increased through induction techniques. External stimulation technique relies on the ability to integrate external information into the dream content. The aim is to remind the sleeper that she/he is dreaming. If this type of protocol is not fully efficient, it demonstrates how sensorial stimuli can be easily incorporated into people's dreams. The objective of our project was to replicate this induction technique using material less expensive and more portable. This material could simplify experimental procedures. Participants could bring the material home, then have a more ecological setting. First, we used the OpenBCI Cyton, a low-cost EEG signal acquisition board in order to record and manually live-score sleep. Then, we designed a mask containing two LEDs, connected to a microcontroller to flash visual stimulation during sleep. We asked two volunteers to sleep for 2 hours in a dedicated room. One of the participants declared having a dream during which the blue lights diffused by the mask were embedded into the dream environment. The other participant woke up during the visual stimulation. These results are congruent with previous studies. This work marked the first step of a larger project. Our ongoing research includes the use of an online sleep stage scoring tool and the possibility to automatically send stimuli according to the sleep stage. We will also investigate other types of stimulus induction in the future such as vibro-tactile stimulation that showed great potentials. |
2007.02202 | Jianguo Chen | Jianguo Chen, Kenli Li, Zhaolei Zhang, Keqin Li, Philip S. Yu | A Survey on Applications of Artificial Intelligence in Fighting Against
COVID-19 | This manuscript was submitted to ACM Computing Surveys | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The COVID-19 pandemic caused by the SARS-CoV-2 virus has spread rapidly
worldwide, leading to a global outbreak. Most governments, enterprises, and
scientific research institutions are participating in the COVID-19 struggle to
curb the spread of the pandemic. As a powerful tool against COVID-19,
artificial intelligence (AI) technologies are widely used in combating this
pandemic. In this survey, we investigate the main scope and contributions of AI
in combating COVID-19 from the aspects of disease detection and diagnosis,
virology and pathogenesis, drug and vaccine development, and epidemic and
transmission prediction. In addition, we summarize the available data and
resources that can be used for AI-based COVID-19 research. Finally, the main
challenges and potential directions of AI in fighting against COVID-19 are
discussed. Currently, AI mainly focuses on medical image inspection, genomics,
drug development, and transmission prediction, and thus AI still has great
potential in this field. This survey presents medical and AI researchers with a
comprehensive view of the existing and potential applications of AI technology
in combating COVID-19 with the goal of inspiring researchers to continue to
maximize the advantages of AI and big data to fight COVID-19.
| [
{
"created": "Sat, 4 Jul 2020 22:48:15 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Mar 2021 06:48:43 GMT",
"version": "v2"
}
] | 2021-03-12 | [
[
"Chen",
"Jianguo",
""
],
[
"Li",
"Kenli",
""
],
[
"Zhang",
"Zhaolei",
""
],
[
"Li",
"Keqin",
""
],
[
"Yu",
"Philip S.",
""
]
] | The COVID-19 pandemic caused by the SARS-CoV-2 virus has spread rapidly worldwide, leading to a global outbreak. Most governments, enterprises, and scientific research institutions are participating in the COVID-19 struggle to curb the spread of the pandemic. As a powerful tool against COVID-19, artificial intelligence (AI) technologies are widely used in combating this pandemic. In this survey, we investigate the main scope and contributions of AI in combating COVID-19 from the aspects of disease detection and diagnosis, virology and pathogenesis, drug and vaccine development, and epidemic and transmission prediction. In addition, we summarize the available data and resources that can be used for AI-based COVID-19 research. Finally, the main challenges and potential directions of AI in fighting against COVID-19 are discussed. Currently, AI mainly focuses on medical image inspection, genomics, drug development, and transmission prediction, and thus AI still has great potential in this field. This survey presents medical and AI researchers with a comprehensive view of the existing and potential applications of AI technology in combating COVID-19 with the goal of inspiring researchers to continue to maximize the advantages of AI and big data to fight COVID-19. |
0910.1892 | Lee Altenberg | Lee Altenberg | Proof of the Feldman-Karlin Conjecture on the Maximum Number of
Equilibria in an Evolutionary System | 9 pages, 1 figure; v.4: final minor revisions, corrections,
additions; v.3: expands theorem to cover all cases, obviating v.2 distinction
of reducible/irreducible; details added to: discussion of Lyubich (1992),
example that attains upper bound, and homotopy continuation methods | Theoretical Population Biology, Volume 77, Issue 4, June 2010,
Pages 263-269 | 10.1016/j.tpb.2010.02.007 | null | q-bio.PE math.AG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feldman and Karlin conjectured that the number of isolated fixed points for
deterministic models of viability selection and recombination among n possible
haplotypes has an upper bound of 2^n - 1. Here a proof is provided. The upper
bound of 3^{n-1} obtained by Lyubich et al. (2001) using Bezout's Theorem
(1779) is reduced here to 2^n through a change of representation that reduces
the third-order polynomials to second order. A further reduction to 2^n - 1 is
obtained using the homogeneous representation of the system, which yields
always one solution `at infinity'. While the original conjecture was made for
systems of viability selection and recombination, the results here generalize
to viability selection with any arbitrary system of bi-parental transmission,
which includes recombination and mutation as special cases. An example is
constructed of a mutation-selection system that has 2^n - 1 fixed points given
any n, which shows that 2^n - 1 is the sharpest possible upper bound that can
be found for the general space of selection and transmission coefficients.
| [
{
"created": "Mon, 12 Oct 2009 14:53:09 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Oct 2009 09:13:55 GMT",
"version": "v2"
},
{
"created": "Mon, 15 Feb 2010 11:39:00 GMT",
"version": "v3"
},
{
"created": "Tue, 23 Mar 2010 06:44:24 GMT",
"version": "v4"
}
] | 2013-02-04 | [
[
"Altenberg",
"Lee",
""
]
] | Feldman and Karlin conjectured that the number of isolated fixed points for deterministic models of viability selection and recombination among n possible haplotypes has an upper bound of 2^n - 1. Here a proof is provided. The upper bound of 3^{n-1} obtained by Lyubich et al. (2001) using Bezout's Theorem (1779) is reduced here to 2^n through a change of representation that reduces the third-order polynomials to second order. A further reduction to 2^n - 1 is obtained using the homogeneous representation of the system, which yields always one solution `at infinity'. While the original conjecture was made for systems of viability selection and recombination, the results here generalize to viability selection with any arbitrary system of bi-parental transmission, which includes recombination and mutation as special cases. An example is constructed of a mutation-selection system that has 2^n - 1 fixed points given any n, which shows that 2^n - 1 is the sharpest possible upper bound that can be found for the general space of selection and transmission coefficients. |
2106.00757 | Alice Del Vecchio | Alice Del Vecchio, Andreea Deac, Pietro Li\`o and Petar
Veli\v{c}kovi\'c | Neural message passing for joint paratope-epitope prediction | ICML Workshop on Computational Biology 2021 , 5 pages, 2 figures | null | null | null | q-bio.QM cs.LG q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Antibodies are proteins in the immune system which bind to antigens to detect
and neutralise them. The binding sites in an antibody-antigen interaction are
known as the paratope and epitope, respectively, and the prediction of these
regions is key to vaccine and synthetic antibody development. Contrary to prior
art, we argue that paratope and epitope predictors require asymmetric
treatment, and propose distinct neural message passing architectures that are
geared towards the specific aspects of paratope and epitope prediction,
respectively. We obtain significant improvements on both tasks, setting the new
state-of-the-art and recovering favourable qualitative predictions on antigens
of relevance to COVID-19.
| [
{
"created": "Mon, 31 May 2021 16:37:55 GMT",
"version": "v1"
},
{
"created": "Sun, 25 Jul 2021 16:11:48 GMT",
"version": "v2"
}
] | 2021-07-27 | [
[
"Del Vecchio",
"Alice",
""
],
[
"Deac",
"Andreea",
""
],
[
"Liò",
"Pietro",
""
],
[
"Veličković",
"Petar",
""
]
] | Antibodies are proteins in the immune system which bind to antigens to detect and neutralise them. The binding sites in an antibody-antigen interaction are known as the paratope and epitope, respectively, and the prediction of these regions is key to vaccine and synthetic antibody development. Contrary to prior art, we argue that paratope and epitope predictors require asymmetric treatment, and propose distinct neural message passing architectures that are geared towards the specific aspects of paratope and epitope prediction, respectively. We obtain significant improvements on both tasks, setting the new state-of-the-art and recovering favourable qualitative predictions on antigens of relevance to COVID-19. |
2009.01448 | Supurna Sinha | Joseph Samuel and Supurna Sinha | Optimal Control in Pandemics | 5 pages, 4 figures, revised in light of referee's comments, To appear
as a Letter in PRE | Phys. Rev. E 103, 010301 (2021) | 10.1103/PhysRevE.103.L040301 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | During a pandemic, there are conflicting demands arising from public health
and economic cost. Lockdowns are a common way of containing infections, but
they adversely affect the economy. We study the question of how to minimise the
economic damage of a lockdown while still containing infections. Our analysis
is based on the SIR model, which we analyse using a clock set by the virus.
This use of the "virus time" permits a clean mathematical formulation of our
problem. We optimise the economic cost for a fixed health cost and arrive at a
strategy for navigating the pandemic. This involves adjusting the level of
lockdowns in a controlled manner so as to minimise the economic cost.
| [
{
"created": "Thu, 3 Sep 2020 04:49:40 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Sep 2020 04:52:49 GMT",
"version": "v2"
},
{
"created": "Tue, 5 Jan 2021 23:07:23 GMT",
"version": "v3"
}
] | 2021-04-28 | [
[
"Samuel",
"Joseph",
""
],
[
"Sinha",
"Supurna",
""
]
] | During a pandemic, there are conflicting demands arising from public health and economic cost. Lockdowns are a common way of containing infections, but they adversely affect the economy. We study the question of how to minimise the economic damage of a lockdown while still containing infections. Our analysis is based on the SIR model, which we analyse using a clock set by the virus. This use of the "virus time" permits a clean mathematical formulation of our problem. We optimise the economic cost for a fixed health cost and arrive at a strategy for navigating the pandemic. This involves adjusting the level of lockdowns in a controlled manner so as to minimise the economic cost. |
2205.11084 | Catherine Matias | Blerina Sinaimeri (LUISS, ERABLE), Laura Urbini (ERABLE), Marie-France
Sagot (LBBE, ERABLE), Catherine Matias (LPSM (UMR\_8001)) | Cophylogeny Reconstruction Allowing for Multiple Associations Through
Approximate Bayesian Computation | null | null | null | null | q-bio.QM q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Phylogenetic tree reconciliation is employed for the examination of
coevolution between host and symbiont species. An important concern is the
requirement for dependable cost values when selecting event-based parsimonious
reconciliation. Although certain approaches deduce event probabilities unique
to each pair of host and symbiont trees, which can subsequently be converted
into cost values, a significant limitation lies in their inability to model the
invasion of diverse host species by the same symbiont species (termed as a
spread event), which is believed to occur in symbiotic relationships. Invasions
lead to the observation of multiple associations between symbionts and their
hosts (indicating that a symbiont is no longer exclusive to a single host),
which are incompatible with the existing methods of coevolution. We present
AmoCoala, an enhanced version of the tool Coala, that provides a more realistic
estimation of cophylogeny event probabilities for a given pair of host and
symbiont trees, even in the presence of spread events. We expand the classical
4-event coevolutionary model to include 2 additional spread events (vertical
and horizontal spreads) that lead to multiple associations. By incorporating
spread events, our reconciliation model enables a more accurate consideration
of multiple associations. This improvement enhances the precision of estimated
cost sets, paving the way to a more reliable reconciliation of host and
symbiont trees. Our results showcase that AmoCoala produces biologically
plausible reconciliation scenarios, further emphasizing its effectiveness. The
software is accessible at https://github.com/sinaimeri/AmoCoala
| [
{
"created": "Mon, 23 May 2022 07:01:17 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Oct 2022 08:09:11 GMT",
"version": "v2"
},
{
"created": "Fri, 21 Jul 2023 11:27:48 GMT",
"version": "v3"
},
{
"created": "Tue, 29 Aug 2023 09:54:41 GMT",
"version": "v4"
}
] | 2023-08-30 | [
[
"Sinaimeri",
"Blerina",
"",
"LUISS, ERABLE"
],
[
"Urbini",
"Laura",
"",
"ERABLE"
],
[
"Sagot",
"Marie-France",
"",
"LBBE, ERABLE"
],
[
"Matias",
"Catherine",
"",
"LPSM"
]
] | Phylogenetic tree reconciliation is employed for the examination of coevolution between host and symbiont species. An important concern is the requirement for dependable cost values when selecting event-based parsimonious reconciliation. Although certain approaches deduce event probabilities unique to each pair of host and symbiont trees, which can subsequently be converted into cost values, a significant limitation lies in their inability to model the invasion of diverse host species by the same symbiont species (termed as a spread event), which is believed to occur in symbiotic relationships. Invasions lead to the observation of multiple associations between symbionts and their hosts (indicating that a symbiont is no longer exclusive to a single host), which are incompatible with the existing methods of coevolution. We present AmoCoala, an enhanced version of the tool Coala, that provides a more realistic estimation of cophylogeny event probabilities for a given pair of host and symbiont trees, even in the presence of spread events. We expand the classical 4-event coevolutionary model to include 2 additional spread events (vertical and horizontal spreads) that lead to multiple associations. By incorporating spread events, our reconciliation model enables a more accurate consideration of multiple associations. This improvement enhances the precision of estimated cost sets, paving the way to a more reliable reconciliation of host and symbiont trees. Our results showcase that AmoCoala produces biologically plausible reconciliation scenarios, further emphasizing its effectiveness. The software is accessible at https://github.com/sinaimeri/AmoCoala |
2110.07605 | Patrick Borel | Patrick Borel (C2VN), Romane Troadec (C2VN), Morgane Damiani (C2VN),
Charlotte Halimi (C2VN), Marion Nowicki (C2VN), Philippe Guichard (C2VN),
Marielle Margier (C2VN), Julien Astier (C2VN), Michel Grino (C2VN),
Emmanuelle Reboul (C2VN), Jean-fran\c{c}ois Landrier (C2VN),
Jean-Fran\c{c}ois Landrier | $\beta$-Carotene bioavailability and conversion efficiency are
significantly affected by sex in rats. First observation suggesting a
possible hormetic regulation of vitamin A metabolism in female rats | null | Molecular Nutrition and Food Research, Wiley-VCH Verlag, 2021,
pp.2100650 | 10.1002/mnfr.202100650 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scope: To study the effect of variation in dietary vitamin A (VA) content on
its hepatic and intestinal metabolism. Methods and results: Adult female and
male rats were fed with diets containing 400, 2300, or 9858 IU/kg VA for 31-33
weeks. VA concentrations were measured in plasma and liver. Bioavailability and
intestinal conversion efficiency of $\beta$-carotene to VA were assessed by
measuring postprandial plasma $\beta$-carotene and retinyl palmitate
concentrations after force-feeding rats with $\beta$-carotene. Expression of
genes involved in VA metabolism, together with concentrations of RBP4, BCO1 and
SR-BI proteins, were measured in the intestine and liver of female rats. Plasma
retinol concentrations were lower and hepatic free retinol concentrations were
higher in females than in males. There was no effect of dietary VA content on
$\beta$-carotene bioavailability and its conversion efficiency, but
bioavailability was higher and conversion efficiency was lower in females than
in males. The expression of most genes exhibited a U-shaped dose response curve
depending on VA intake. Main conclusions: $\beta$-Carotene bioavailability and
conversion efficiency to VA are affected by the sex of rats. Results of gene
expression suggest a hormetic regulation of VA metabolism in female rats.
| [
{
"created": "Thu, 14 Oct 2021 08:15:20 GMT",
"version": "v1"
}
] | 2021-10-18 | [
[
"Borel",
"Patrick",
"",
"C2VN"
],
[
"Troadec",
"Romane",
"",
"C2VN"
],
[
"Damiani",
"Morgane",
"",
"C2VN"
],
[
"Halimi",
"Charlotte",
"",
"C2VN"
],
[
"Nowicki",
"Marion",
"",
"C2VN"
],
[
"Guichard",
"Philippe",
"",
"C2VN"
],
[
"Margier",
"Marielle",
"",
"C2VN"
],
[
"Astier",
"Julien",
"",
"C2VN"
],
[
"Grino",
"Michel",
"",
"C2VN"
],
[
"Reboul",
"Emmanuelle",
"",
"C2VN"
],
[
"Landrier",
"Jean-françois",
"",
"C2VN"
],
[
"Landrier",
"Jean-François",
"",
"C2VN"
]
] | Scope: To study the effect of variation in dietary vitamin A (VA) content on its hepatic and intestinal metabolism. Methods and results: Adult female and male rats were fed with diets containing 400, 2300, or 9858 IU/kg VA for 31-33 weeks. VA concentrations were measured in plasma and liver. Bioavailability and intestinal conversion efficiency of $\beta$-carotene to VA were assessed by measuring postprandial plasma $\beta$-carotene and retinyl palmitate concentrations after force-feeding rats with $\beta$-carotene. Expression of genes involved in VA metabolism, together with concentrations of RBP4, BCO1 and SR-BI proteins, were measured in the intestine and liver of female rats. Plasma retinol concentrations were lower and hepatic free retinol concentrations were higher in females than in males. There was no effect of dietary VA content on $\beta$-carotene bioavailability and its conversion efficiency, but bioavailability was higher and conversion efficiency was lower in females than in males. The expression of most genes exhibited a U-shaped dose response curve depending on VA intake. Main conclusions: $\beta$-Carotene bioavailability and conversion efficiency to VA are affected by the sex of rats. Results of gene expression suggest a hormetic regulation of VA metabolism in female rats. |
0710.2808 | Swanand Gore | Swanand Gore and Tom Blundell | Identification of specificity determining residues in enzymes using
environment specific substitution tables | null | null | null | null | q-bio.QM | null | Environment specific substitution tables have been used effectively for
distinguishing structural and functional constraints on proteins and thereby
identify their active sites (Chelliah et al. (2004)). This work explores
whether a similar approach can be used to identify specificity determining
residues (SDRs) responsible for cofactor dependence, substrate specificity or
subtle catalytic variations. We combine structure-sequence information and
functional annotation from various data sources to create structural alignments
for homologous enzymes and functional partitions therein. We develop a scoring
procedure to predict SDRs and assess their accuracy using information from
bound specific ligands and published literature.
| [
{
"created": "Mon, 15 Oct 2007 13:29:32 GMT",
"version": "v1"
}
] | 2007-10-16 | [
[
"Gore",
"Swanand",
""
],
[
"Blundell",
"Tom",
""
]
] | Environment specific substitution tables have been used effectively for distinguishing structural and functional constraints on proteins and thereby identify their active sites (Chelliah et al. (2004)). This work explores whether a similar approach can be used to identify specificity determining residues (SDRs) responsible for cofactor dependence, substrate specificity or subtle catalytic variations. We combine structure-sequence information and functional annotation from various data sources to create structural alignments for homologous enzymes and functional partitions therein. We develop a scoring procedure to predict SDRs and assess their accuracy using information from bound specific ligands and published literature. |
1203.2450 | Alexander Iomin | A. Iomin | A toy model of fractal glioma development under RF electric field
treatment | Submitted for publication in European Physical Journal E | Eur. Phys. J. E (2012) 35: 42 | 10.1140/epje/i2012-12042-9 | null | q-bio.CB cond-mat.soft physics.bio-ph physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A toy model for glioma treatment by a radio frequency electric field is
suggested. This low-intensity, intermediate-frequency alternating electric
field is known as the tumor-treating-field (TTF). In the framework of this
model the efficiency of this TTF is estimated, and the interplay between the
TTF and the migration-proliferation dichotomy of cancer cells is considered.
The model is based on a modification of a comb model for cancer cells, where
the migration-proliferation dichotomy becomes naturally apparent. Considering
glioma cancer as a fractal dielectric composite of cancer cells and normal
tissue cells, a new effective mechanism of glioma treatment is suggested in the
form of a giant enhancement of the TTF. This leads to the irreversible
electroporation that may be an effective non-invasive method of treating brain
cancer.
| [
{
"created": "Mon, 12 Mar 2012 10:31:54 GMT",
"version": "v1"
}
] | 2012-10-09 | [
[
"Iomin",
"A.",
""
]
] | A toy model for glioma treatment by a radio frequency electric field is suggested. This low-intensity, intermediate-frequency alternating electric field is known as the tumor-treating-field (TTF). In the framework of this model the efficiency of this TTF is estimated, and the interplay between the TTF and the migration-proliferation dichotomy of cancer cells is considered. The model is based on a modification of a comb model for cancer cells, where the migration-proliferation dichotomy becomes naturally apparent. Considering glioma cancer as a fractal dielectric composite of cancer cells and normal tissue cells, a new effective mechanism of glioma treatment is suggested in the form of a giant enhancement of the TTF. This leads to the irreversible electroporation that may be an effective non-invasive method of treating brain cancer. |
1012.2090 | Marc Lefranc | Pierre-Emmanuel Morant (PhLAM, IRI), Quentin Thommen (PhLAM, IRI),
Benjamin Pfeuty (PhLAM, IRI), Constant Vandermo\"ere (PhLAM, IRI), Florence
Corellou, Fran\c{c}ois-Yves Bouget, Marc Lefranc (PhLAM, IRI) | A robust two-gene oscillator at the core of Ostreococcus tauri circadian
clock | 13 pages | Chaos 20, 4 (2010) 045108 | 10.1063/1.3530118 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The microscopic green alga Ostreococcus tauri is rapidly emerging as a
promising model organism in the green lineage. In particular, recent results by
Corellou et al. [Plant Cell, 21, 3436 (2009)] and Thommen et al. [PLoS Comput.
Biol. 6, e1000990 (2010)] strongly suggest that its circadian clock is a
simplified version of Arabidopsis thaliana clock, and that it is architectured
so as to be robust to natural daylight fluctuations. In this work, we analyze
time series data from luminescent reporters for the two central clock genes
TOC1 and CCA1 and correlate them with microarray data previously analyzed. Our
mathematical analysis strongly supports both the existence of a simple two-gene
oscillator at the core of Ostreococcus tauri clock and the fact that its
dynamics is not affected by light in normal entrainment conditions, a signature
of its robustness.
| [
{
"created": "Thu, 9 Dec 2010 19:32:38 GMT",
"version": "v1"
}
] | 2011-01-04 | [
[
"Morant",
"Pierre-Emmanuel",
"",
"PhLAM, IRI"
],
[
"Thommen",
"Quentin",
"",
"PhLAM, IRI"
],
[
"Pfeuty",
"Benjamin",
"",
"PhLAM, IRI"
],
[
"Vandermoëre",
"Constant",
"",
"PhLAM, IRI"
],
[
"Corellou",
"Florence",
"",
"PhLAM, IRI"
],
[
"Bouget",
"François-Yves",
"",
"PhLAM, IRI"
],
[
"Lefranc",
"Marc",
"",
"PhLAM, IRI"
]
] | The microscopic green alga Ostreococcus tauri is rapidly emerging as a promising model organism in the green lineage. In particular, recent results by Corellou et al. [Plant Cell, 21, 3436 (2009)] and Thommen et al. [PLoS Comput. Biol. 6, e1000990 (2010)] strongly suggest that its circadian clock is a simplified version of Arabidopsis thaliana clock, and that it is architectured so as to be robust to natural daylight fluctuations. In this work, we analyze time series data from luminescent reporters for the two central clock genes TOC1 and CCA1 and correlate them with microarray data previously analyzed. Our mathematical analysis strongly supports both the existence of a simple two-gene oscillator at the core of Ostreococcus tauri clock and the fact that its dynamics is not affected by light in normal entrainment conditions, a signature of its robustness. |
q-bio/0311014 | Steven Duplij | Diana Duplij and Steven Duplij | Determinative degree and nucleotide sequence analysis by trianders | LaTeX 2e (amsmath,graphicx,cite), 24 pages, 30 figures, for other
formats see also http://www-home.univer.kharkov.ua/duplij or
http://www.math.uni-mannheim.de/~duplij | null | null | null | q-bio.GN q-bio.QM | null | A new version of DNA walks, where nucleotides are regarded unequal in their
contribution to a walk is introduced, which allows us to study thoroughly the
"fine structure" of nucleotide sequences. The approach is based on the
assumption that nucleotides have an inner abstract characteristics, the
determinative degree, which reflects phenomenological properties of genetic
code and is adjusted to nucleotides physical properties. We consider each
position in codon independently, which gives three separate walks being
characterized by different angles and lengths, and such an object is called
triander which reflects the "strength" of branch. A general method of
identification of DNA sequence "by triander", which can be treated as a unique
"genogram", "gene passport" is proposed. The two- and three-dimensional
trianders are considered. The difference of sequence fine structure in genes
and the intergenic space is shown. A clear triplet signal in coding locuses is
found which is absent in the intergenic space and is independent from the
sequence length. The topological classification of trianders is presented which
can allow us to provide a detail working out signatures of functionally
different genomic regions.
| [
{
"created": "Mon, 10 Nov 2003 21:17:07 GMT",
"version": "v1"
},
{
"created": "Sun, 23 Nov 2003 21:49:05 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Duplij",
"Diana",
""
],
[
"Duplij",
"Steven",
""
]
] | A new version of DNA walks, where nucleotides are regarded unequal in their contribution to a walk is introduced, which allows us to study thoroughly the "fine structure" of nucleotide sequences. The approach is based on the assumption that nucleotides have an inner abstract characteristics, the determinative degree, which reflects phenomenological properties of genetic code and is adjusted to nucleotides physical properties. We consider each position in codon independently, which gives three separate walks being characterized by different angles and lengths, and such an object is called triander which reflects the "strength" of branch. A general method of identification of DNA sequence "by triander", which can be treated as a unique "genogram", "gene passport" is proposed. The two- and three-dimensional trianders are considered. The difference of sequence fine structure in genes and the intergenic space is shown. A clear triplet signal in coding locuses is found which is absent in the intergenic space and is independent from the sequence length. The topological classification of trianders is presented which can allow us to provide a detail working out signatures of functionally different genomic regions. |
1512.09178 | Shai Pilosof | Shai Pilosof, Gili Greenbaum, Boris R. Krasnov, Yuval R. Zelnik | Asymmetric disease dynamics in multihost interconnected networks | This is a revision of our previous version, but it is significantly
different | Journal of Theoretical Biology 2017 430:237-244 | 10.1016/j.jtbi.2017.07.020 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epidemic spread in single-host systems strongly depends on the population's
contact network. However, little is known regarding the spread of epidemics
across networks representing populations of multiple hosts. We explored
cross-species transmission in a multilayer network where layers represent
populations of two distinct hosts, and disease can spread across intralayer
(within-host) and interlayer (between-host) edges. We developed an analytic
framework for the SIR epidemic model to examine the effect of (i) source of
infection and (ii) between-host asymmetry in infection probabilities, on
disease risk. We measured risk as outbreak probability and outbreak size in a
focal host, represented by one network layer. Numeric simulations were used to
validate the analytic formulations. We found that outbreak probability is
determined by a complex interaction between source of infection and
between-host infection probabilities, whereas outbreak size is mainly affected
by the non-focal host to focal host infection probability alone. Hence,
inter-specific asymmetry in infection probabilities shapes disease dynamics in
multihost networks. These results expand current theory of monolayer networks,
where outbreak size and probability are considered equal, highlighting the
importance of considering multiple measures of disease risk. Our study advances
understanding of multihost systems and non-biological systems with asymmetric
flow rates.
| [
{
"created": "Wed, 30 Dec 2015 23:00:00 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Jun 2016 21:50:25 GMT",
"version": "v2"
}
] | 2017-12-06 | [
[
"Pilosof",
"Shai",
""
],
[
"Greenbaum",
"Gili",
""
],
[
"Krasnov",
"Boris R.",
""
],
[
"Zelnik",
"Yuval R.",
""
]
] | Epidemic spread in single-host systems strongly depends on the population's contact network. However, little is known regarding the spread of epidemics across networks representing populations of multiple hosts. We explored cross-species transmission in a multilayer network where layers represent populations of two distinct hosts, and disease can spread across intralayer (within-host) and interlayer (between-host) edges. We developed an analytic framework for the SIR epidemic model to examine the effect of (i) source of infection and (ii) between-host asymmetry in infection probabilities, on disease risk. We measured risk as outbreak probability and outbreak size in a focal host, represented by one network layer. Numeric simulations were used to validate the analytic formulations. We found that outbreak probability is determined by a complex interaction between source of infection and between-host infection probabilities, whereas outbreak size is mainly affected by the non-focal host to focal host infection probability alone. Hence, inter-specific asymmetry in infection probabilities shapes disease dynamics in multihost networks. These results expand current theory of monolayer networks, where outbreak size and probability are considered equal, highlighting the importance of considering multiple measures of disease risk. Our study advances understanding of multihost systems and non-biological systems with asymmetric flow rates. |
2008.09398 | Francisco J. Cao-Garcia | Rodrigo Crespo, Javier Jarillo, Francisco J. Cao-Garc\'ia | Dispersal-induced resilience to stochastic environmental fluctuations in
populations with Allee effect | 32 pages, 9 figures, 1 table | Physical Review E 105, 014413 (2022) | 10.1103/PhysRevE.105.014413 | null | q-bio.PE cond-mat.stat-mech | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Many species are unsustainable at small population densities (Allee Effect),
i.e., below a threshold named Allee threshold, the population decreases instead
of growing. In a closed local population, environmental fluctuations always
lead to extinction. Here, we show how, in spatially extended habitats,
dispersal can lead to a sustainable population in a region, provided the
amplitude of environmental fluctuations is below an extinction threshold. We
have identified two types of sustainable populations: high-density and
low-density populations (through a mean-field approximation, valid in the limit
of large dispersal length). Our results show that patches where population is
high, low or extinct, coexist when the population is close to global extinction
(even for homogeneous habitats). The extinction threshold is maximum for
characteristic dispersal distances much larger than the spatial scale of
synchrony of environmental fluctuations. The extinction threshold increases
proportionally to the square root of the dispersal rate and decreases with the
Allee threshold. The low-density population solution can allow understanding
difficulties in recovery after harvesting. This theoretical framework provides
a novel approach to address other factors, such as habitat fragmentation or
harvesting, impacting population resilience to environmental fluctuations.
| [
{
"created": "Fri, 21 Aug 2020 09:59:19 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Nov 2021 19:00:37 GMT",
"version": "v2"
}
] | 2022-01-27 | [
[
"Crespo",
"Rodrigo",
""
],
[
"Jarillo",
"Javier",
""
],
[
"Cao-García",
"Francisco J.",
""
]
] | Many species are unsustainable at small population densities (Allee Effect), i.e., below a threshold named Allee threshold, the population decreases instead of growing. In a closed local population, environmental fluctuations always lead to extinction. Here, we show how, in spatially extended habitats, dispersal can lead to a sustainable population in a region, provided the amplitude of environmental fluctuations is below an extinction threshold. We have identified two types of sustainable populations: high-density and low-density populations (through a mean-field approximation, valid in the limit of large dispersal length). Our results show that patches where population is high, low or extinct, coexist when the population is close to global extinction (even for homogeneous habitats). The extinction threshold is maximum for characteristic dispersal distances much larger than the spatial scale of synchrony of environmental fluctuations. The extinction threshold increases proportionally to the square root of the dispersal rate and decreases with the Allee threshold. The low-density population solution can allow understanding difficulties in recovery after harvesting. This theoretical framework provides a novel approach to address other factors, such as habitat fragmentation or harvesting, impacting population resilience to environmental fluctuations. |
2004.13493 | Pietro Faccioli | Tania Massignan, Alberto Boldrini, Luca Terruzzi, Giovanni Spagnolli,
Andrea Astolfi, Valerio Bonaldo, Francesca Pischedda, Massimo Pizzato,
Graziano Lolli, Maria Letizia Barreca, Emiliano Biasini, Pietro Faccioli, and
Lidia Pieri | Antimalarial Artefenomel Inhibits Human SARS-CoV-2 Replication in Cells
while Suppressing the Receptor ACE2 | Manuscript has extended to include the experimental evidence for
anti-viral effects on cell colonies | null | null | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The steep climbing of victims caused by the new coronavirus disease 2019
(COVID-19) throughout the planet is sparking an unprecedented effort to
identify effective therapeutic regimens to tackle the pandemic. The SARS-CoV-2
virus is known to gain entry into various cell types through the binding of one
of its surface proteins (spike) to the host Angiotensin-Converting Enzyme 2
(ACE2). Thus, spike-ACE2 interaction represents a major target for vaccines and
antiviral drugs. A novel method has been recently described by some of the
authors to pharmacologically downregulate the expression of target proteins at
the post-translational level. This technology builds on computational
advancements in the simulation of folding mechanisms to rationally block
protein expression by targeting folding intermediates, hence hampering the
folding process. Here, we report the all-atom simulations of the entire
sequence of events underlying the folding pathway of ACE2. Our data revealed
the existence of a folding intermediate showing two druggable pockets hidden in
the native conformation. Both pockets were targeted by a virtual screening
repurposing campaign aimed at quickly identifying drugs capable to decrease the
expression of ACE2. We identified four compounds capable of lowering ACE2
expression in Vero cells in a dose-dependent fashion. All these molecules were
found to inhibit the entry into cells of a pseudotyped retrovirus exposing the
SARS-CoV-2 spike protein. Importantly, the antiviral activity has been tested
against live SARS-CoV-2 (MEX-BC2/2020 strain). One of the selected drugs
(Artefenomel) could completely prevent cytopathic effects induced by the
presence of the virus, thus showing antiviral activity against SARS-CoV-2.
Ongoing studies are further evaluating the possibility of repurposing these
drugs for the treatment of COVID-19.
| [
{
"created": "Tue, 28 Apr 2020 13:26:49 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Aug 2020 12:47:17 GMT",
"version": "v2"
},
{
"created": "Tue, 15 Dec 2020 10:50:00 GMT",
"version": "v3"
},
{
"created": "Tue, 5 Jan 2021 13:29:48 GMT",
"version": "v4"
}
] | 2021-01-06 | [
[
"Massignan",
"Tania",
""
],
[
"Boldrini",
"Alberto",
""
],
[
"Terruzzi",
"Luca",
""
],
[
"Spagnolli",
"Giovanni",
""
],
[
"Astolfi",
"Andrea",
""
],
[
"Bonaldo",
"Valerio",
""
],
[
"Pischedda",
"Francesca",
""
],
[
"Pizzato",
"Massimo",
""
],
[
"Lolli",
"Graziano",
""
],
[
"Barreca",
"Maria Letizia",
""
],
[
"Biasini",
"Emiliano",
""
],
[
"Faccioli",
"Pietro",
""
],
[
"Pieri",
"Lidia",
""
]
] | The steep climbing of victims caused by the new coronavirus disease 2019 (COVID-19) throughout the planet is sparking an unprecedented effort to identify effective therapeutic regimens to tackle the pandemic. The SARS-CoV-2 virus is known to gain entry into various cell types through the binding of one of its surface proteins (spike) to the host Angiotensin-Converting Enzyme 2 (ACE2). Thus, spike-ACE2 interaction represents a major target for vaccines and antiviral drugs. A novel method has been recently described by some of the authors to pharmacologically downregulate the expression of target proteins at the post-translational level. This technology builds on computational advancements in the simulation of folding mechanisms to rationally block protein expression by targeting folding intermediates, hence hampering the folding process. Here, we report the all-atom simulations of the entire sequence of events underlying the folding pathway of ACE2. Our data revealed the existence of a folding intermediate showing two druggable pockets hidden in the native conformation. Both pockets were targeted by a virtual screening repurposing campaign aimed at quickly identifying drugs capable to decrease the expression of ACE2. We identified four compounds capable of lowering ACE2 expression in Vero cells in a dose-dependent fashion. All these molecules were found to inhibit the entry into cells of a pseudotyped retrovirus exposing the SARS-CoV-2 spike protein. Importantly, the antiviral activity has been tested against live SARS-CoV-2 (MEX-BC2/2020 strain). One of the selected drugs (Artefenomel) could completely prevent cytopathic effects induced by the presence of the virus, thus showing antiviral activity against SARS-CoV-2. Ongoing studies are further evaluating the possibility of repurposing these drugs for the treatment of COVID-19. |
1210.6599 | Ramon Ferrer i Cancho | Ramon Ferrer-i-Cancho, Antoni Hern\'andez-Fern\'andez, Jaume
Baixeries, {\L}ukasz D\c{e}bowski, J\'an Ma\v{c}utek | When is Menzerath-Altmann law mathematically trivial? A new approach | version improved with a new table, new histograms and a more accurate
statistical analysis; a new interpretation of the results is offered; notation
has undergone minor corrections | null | 10.1515/sagmb-2013-0034 | null | q-bio.GN physics.data-an q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Menzerath's law, the tendency of Z, the mean size of the parts, to decrease
as X, the number of parts, increases is found in language, music and genomes.
Recently, it has been argued that the presence of the law in genomes is an
inevitable consequence of the fact that Z = Y/X, which would imply that Z
scales with X as Z ~ 1/X. That scaling is a very particular case of
Menzerath-Altmann law that has been rejected by means of a correlation test
between X and Y in genomes, where X is the number of chromosomes of a species, Y
its genome size in bases and Z the mean chromosome size. Here we review the
statistical foundations of that test and consider three non-parametric tests
based upon different correlation metrics and one parametric test to evaluate if
Z ~ 1/X in genomes. The most powerful test is a new non-parametric test based upon
the correlation ratio, which is able to reject Z ~ 1/X in nine out of eleven
taxonomic groups and detect a borderline group. Rather than a fact, Z ~ 1/X is
a baseline that real genomes do not meet. The view of Menzerath-Altmann law as
inevitable is seriously flawed.
| [
{
"created": "Wed, 24 Oct 2012 16:42:31 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Feb 2013 11:06:26 GMT",
"version": "v2"
},
{
"created": "Thu, 13 Jun 2013 12:49:57 GMT",
"version": "v3"
},
{
"created": "Fri, 25 Apr 2014 17:27:40 GMT",
"version": "v4"
}
] | 2014-12-15 | [
[
"Ferrer-i-Cancho",
"Ramon",
""
],
[
"Hernández-Fernández",
"Antoni",
""
],
[
"Baixeries",
"Jaume",
""
],
[
"Dȩbowski",
"Łukasz",
""
],
[
"Mačutek",
"Ján",
""
]
] ] | Menzerath's law, the tendency of Z, the mean size of the parts, to decrease as X, the number of parts, increases is found in language, music and genomes. Recently, it has been argued that the presence of the law in genomes is an inevitable consequence of the fact that Z = Y/X, which would imply that Z scales with X as Z ~ 1/X. That scaling is a very particular case of Menzerath-Altmann law that has been rejected by means of a correlation test between X and Y in genomes, where X is the number of chromosomes of a species, Y its genome size in bases and Z the mean chromosome size. Here we review the statistical foundations of that test and consider three non-parametric tests based upon different correlation metrics and one parametric test to evaluate if Z ~ 1/X in genomes. The most powerful test is a new non-parametric test based upon the correlation ratio, which is able to reject Z ~ 1/X in nine out of eleven taxonomic groups and detect a borderline group. Rather than a fact, Z ~ 1/X is a baseline that real genomes do not meet. The view of Menzerath-Altmann law as inevitable is seriously flawed. |
1303.5616 | Thomas Wiecki | Thomas V. Wiecki | Sequential sampling models in computational psychiatry: Bayesian
parameter estimation, model selection and classification | Review. Final prelim paper | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/3.0/ | Current psychiatric research is in crisis. In this review I will describe the
causes of this crisis and highlight recent efforts to overcome current
challenges. One particularly promising approach is the emerging field of
computational psychiatry. By using methods and insights from computational
cognitive neuroscience, computational psychiatry might enable us to move from a
symptom-based description of mental illness to descriptors based on objective
computational multidimensional functional variables. To exemplify this I will
survey recent efforts towards this goal. I will then describe a set of methods
that together form a toolbox of cognitive models to aid this research program.
At the core of this toolbox are sequential sampling models which have been used
to explain diverse cognitive neuroscience phenomena but have so far seen little
adoption in psychiatric research. I will then describe how these models can be
fitted to subject data and highlight how hierarchical Bayesian estimation
provides a rich framework with many desirable properties and benefits compared
to traditional optimization-based approaches. Finally, non-parametric Bayesian
methods provide general solutions to the problem of classifying mental illness
within this framework.
| [
{
"created": "Fri, 22 Mar 2013 13:38:14 GMT",
"version": "v1"
}
] | 2013-03-25 | [
[
"Wiecki",
"Thomas V.",
""
]
] | Current psychiatric research is in crisis. In this review I will describe the causes of this crisis and highlight recent efforts to overcome current challenges. One particularly promising approach is the emerging field of computational psychiatry. By using methods and insights from computational cognitive neuroscience, computational psychiatry might enable us to move from a symptom-based description of mental illness to descriptors based on objective computational multidimensional functional variables. To exemplify this I will survey recent efforts towards this goal. I will then describe a set of methods that together form a toolbox of cognitive models to aid this research program. At the core of this toolbox are sequential sampling models which have been used to explain diverse cognitive neuroscience phenomena but have so far seen little adoption in psychiatric research. I will then describe how these models can be fitted to subject data and highlight how hierarchical Bayesian estimation provides a rich framework with many desirable properties and benefits compared to traditional optimization-based approaches. Finally, non-parametric Bayesian methods provide general solutions to the problem of classifying mental illness within this framework. |
2408.06402 | Guan Jiaojiao | Jiaojiao Guan, Yongxin Ji, Cheng Peng, Wei Zou, Xubo Tang, Jiayu
Shang, Yanni Sun | PhaGO: Protein function annotation for bacteriophages by integrating the
genomic context | 17 pages,6 figures | null | null | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Bacteriophages are viruses that target bacteria, playing a crucial role in
microbial ecology. Phage proteins are important in understanding phage biology,
such as virus infection, replication, and evolution. Although a large number of
new phages have been identified via metagenomic sequencing, many of them have
limited protein function annotation. Accurate function annotation of phage
proteins presents several challenges, including their inherent diversity and
the scarcity of annotated ones. Existing tools have yet to fully leverage the
unique properties of phages in annotating protein functions. In this work, we
propose a new protein function annotation tool for phages by leveraging the
modular genomic structure of phage genomes. By employing embeddings from the
latest protein foundation models and Transformer to capture contextual
information between proteins in phage genomes, PhaGO surpasses state-of-the-art
methods in annotating diverged proteins and proteins with uncommon functions by
6.78% and 13.05% improvement, respectively. PhaGO can annotate proteins lacking
homology search results, which is critical for characterizing the rapidly
accumulating phage genomes. We demonstrate the utility of PhaGO by identifying
688 potential holins in phages, which exhibit high structural conservation with
known holins. The results show the potential of PhaGO to extend our
understanding of newly discovered phages.
| [
{
"created": "Mon, 12 Aug 2024 13:02:38 GMT",
"version": "v1"
}
] | 2024-08-14 | [
[
"Guan",
"Jiaojiao",
""
],
[
"Ji",
"Yongxin",
""
],
[
"Peng",
"Cheng",
""
],
[
"Zou",
"Wei",
""
],
[
"Tang",
"Xubo",
""
],
[
"Shang",
"Jiayu",
""
],
[
"Sun",
"Yanni",
""
]
] | Bacteriophages are viruses that target bacteria, playing a crucial role in microbial ecology. Phage proteins are important in understanding phage biology, such as virus infection, replication, and evolution. Although a large number of new phages have been identified via metagenomic sequencing, many of them have limited protein function annotation. Accurate function annotation of phage proteins presents several challenges, including their inherent diversity and the scarcity of annotated ones. Existing tools have yet to fully leverage the unique properties of phages in annotating protein functions. In this work, we propose a new protein function annotation tool for phages by leveraging the modular genomic structure of phage genomes. By employing embeddings from the latest protein foundation models and Transformer to capture contextual information between proteins in phage genomes, PhaGO surpasses state-of-the-art methods in annotating diverged proteins and proteins with uncommon functions by 6.78% and 13.05% improvement, respectively. PhaGO can annotate proteins lacking homology search results, which is critical for characterizing the rapidly accumulating phage genomes. We demonstrate the utility of PhaGO by identifying 688 potential holins in phages, which exhibit high structural conservation with known holins. The results show the potential of PhaGO to extend our understanding of newly discovered phages. |
1510.00815 | Sriganesh Srihari Dr | Sriganesh Srihari, Jitin Singla, Limsoon Wong, Mark A. Ragan | Inferring synthetic lethal interactions from mutual exclusivity of
genetic events in cancer | 35 pages, 7 figures | Biology Direct 2015, 10:57 | 10.1186/s13062-015-0086-1 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Synthetic lethality (SL) refers to the genetic interaction
between two or more genes where only their co-alteration (e.g. by mutations,
amplifications or deletions) results in cell death. In recent years, SL has
emerged as an attractive therapeutic strategy against cancer: by targeting the
SL partners of altered genes in cancer cells, these cells can be selectively
killed while sparing the normal cells. Consequently, a number of studies have
attempted prediction of SL interactions in human, a majority by extrapolating
SL interactions inferred through large-scale screens in model organisms.
However, these predicted SL interactions either do not hold in human cells or
do not include genes that are (frequently) altered in human cancers, and are
therefore not attractive in the context of cancer therapy.
Results: Here, we develop a computational approach to infer SL interactions
directly from frequently altered genes in human cancers. It is based on the
observation that pairs of genes that are altered in a (significantly) mutually
exclusive manner in cancers are likely to constitute lethal combinations. Using
genomic copy-number and gene-expression data from four cancers, breast,
prostate, ovarian and uterine (total 3980 samples) from The Cancer Genome
Atlas, we identify 718 genes that are frequently amplified or upregulated, and
are likely to be synthetic lethal with six key DNA-damage response (DDR) genes
in these cancers. By comparing with published data on gene essentiality (~16000
genes) from ten DDR-deficient cancer cell lines, we show that our identified
genes are enriched among the top quartile of essential genes in these cell
lines, implying that our inferred genes are highly likely to be (synthetic)
lethal upon knockdown in these cell lines.
| [
{
"created": "Sat, 3 Oct 2015 13:09:36 GMT",
"version": "v1"
}
] | 2015-10-06 | [
[
"Srihari",
"Sriganesh",
""
],
[
"Singla",
"Jitin",
""
],
[
"Wong",
"Limsoon",
""
],
[
"Ragan",
"Mark A.",
""
]
] | Background: Synthetic lethality (SL) refers to the genetic interaction between two or more genes where only their co-alteration (e.g. by mutations, amplifications or deletions) results in cell death. In recent years, SL has emerged as an attractive therapeutic strategy against cancer: by targeting the SL partners of altered genes in cancer cells, these cells can be selectively killed while sparing the normal cells. Consequently, a number of studies have attempted prediction of SL interactions in human, a majority by extrapolating SL interactions inferred through large-scale screens in model organisms. However, these predicted SL interactions either do not hold in human cells or do not include genes that are (frequently) altered in human cancers, and are therefore not attractive in the context of cancer therapy. Results: Here, we develop a computational approach to infer SL interactions directly from frequently altered genes in human cancers. It is based on the observation that pairs of genes that are altered in a (significantly) mutually exclusive manner in cancers are likely to constitute lethal combinations. Using genomic copy-number and gene-expression data from four cancers, breast, prostate, ovarian and uterine (total 3980 samples) from The Cancer Genome Atlas, we identify 718 genes that are frequently amplified or upregulated, and are likely to be synthetic lethal with six key DNA-damage response (DDR) genes in these cancers. By comparing with published data on gene essentiality (~16000 genes) from ten DDR-deficient cancer cell lines, we show that our identified genes are enriched among the top quartile of essential genes in these cell lines, implying that our inferred genes are highly likely to be (synthetic) lethal upon knockdown in these cell lines. |
1612.04295 | Esmaeil Seraj | Esmaeil Seraj | Cerebral Synchrony Assessment Tutorial: A General Review on Cerebral
Signals' Synchronization Estimation Concepts and Methods | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human brain is ultimately responsible for all thoughts and movements that
the body produces. This allows humans to successfully interact with their
environment. If the brain is not functioning properly, many human abilities
can be impaired. The goal of cerebral signal analysis is to learn about brain
function. The idea that distinct areas of the brain are responsible for
specific tasks, the functional segregation, is a key aspect of brain function.
Functional integration is an important feature of brain function: it is the
concordance of multiple segregated brain areas to produce a unified response.
There is an amplified feedback mechanism in the brain called reentry which
requires specific timing relations. This specific timing requires neurons
within an assembly to synchronize their firing rates. This has led to increased
interest and use of phase variables, particularly their synchronization, to
measure connectivity in cerebral signals. Herein, we propose a comprehensive
review on concepts and methods previously presented for assessing cerebral
synchrony, with focus on phase synchronization, as a tool for brain
connectivity evaluation.
| [
{
"created": "Mon, 12 Dec 2016 14:39:42 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Jul 2018 02:13:07 GMT",
"version": "v2"
}
] | 2018-07-09 | [
[
"Seraj",
"Esmaeil",
""
]
] ] | The human brain is ultimately responsible for all thoughts and movements that the body produces. This allows humans to successfully interact with their environment. If the brain is not functioning properly, many human abilities can be impaired. The goal of cerebral signal analysis is to learn about brain function. The idea that distinct areas of the brain are responsible for specific tasks, the functional segregation, is a key aspect of brain function. Functional integration is an important feature of brain function: it is the concordance of multiple segregated brain areas to produce a unified response. There is an amplified feedback mechanism in the brain called reentry which requires specific timing relations. This specific timing requires neurons within an assembly to synchronize their firing rates. This has led to increased interest and use of phase variables, particularly their synchronization, to measure connectivity in cerebral signals. Herein, we propose a comprehensive review on concepts and methods previously presented for assessing cerebral synchrony, with focus on phase synchronization, as a tool for brain connectivity evaluation. |
2405.06649 | Mingyu Jin | Mingyu Jin, Haochen Xue, Zhenting Wang, Boming Kang, Ruosong Ye,
Kaixiong Zhou, Mengnan Du, Yongfeng Zhang | ProLLM: Protein Chain-of-Thoughts Enhanced LLM for Protein-Protein
Interaction Prediction | Accepted by COLM 2024 | null | null | null | q-bio.BM cs.LG q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The prediction of protein-protein interactions (PPIs) is crucial for
understanding biological functions and diseases. Previous machine learning
approaches to PPI prediction mainly focus on direct physical interactions,
ignoring the broader context of nonphysical connections through intermediate
proteins, thus limiting their effectiveness. The emergence of Large Language
Models (LLMs) provides a new opportunity for addressing this complex biological
challenge. By transforming structured data into natural language prompts, we
can map the relationships between proteins into texts. This approach allows
LLMs to identify indirect connections between proteins, tracing the path from
upstream to downstream. Therefore, we propose a novel framework ProLLM that
employs an LLM tailored for PPI for the first time. Specifically, we propose
Protein Chain of Thought (ProCoT), which replicates the biological mechanism of
signaling pathways as natural language prompts. ProCoT considers a signaling
pathway as a protein reasoning process, which starts from upstream proteins and
passes through several intermediate proteins to transmit biological signals to
downstream proteins. Thus, we can use ProCoT to predict the interaction between
upstream proteins and downstream proteins. The training of ProLLM employs the
ProCoT format, which enhances the model's understanding of complex biological
problems. In addition to ProCoT, this paper also contributes to the exploration
of embedding replacement of protein sites in natural language prompts, and
instruction fine-tuning in protein knowledge datasets. We demonstrate the
efficacy of ProLLM through rigorous validation against benchmark datasets,
showing significant improvement over existing methods in terms of prediction
accuracy and generalizability. The code is available at:
https://github.com/MingyuJ666/ProLLM.
| [
{
"created": "Sat, 30 Mar 2024 05:32:42 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Jul 2024 11:38:56 GMT",
"version": "v2"
}
] | 2024-07-15 | [
[
"Jin",
"Mingyu",
""
],
[
"Xue",
"Haochen",
""
],
[
"Wang",
"Zhenting",
""
],
[
"Kang",
"Boming",
""
],
[
"Ye",
"Ruosong",
""
],
[
"Zhou",
"Kaixiong",
""
],
[
"Du",
"Mengnan",
""
],
[
"Zhang",
"Yongfeng",
""
]
] | The prediction of protein-protein interactions (PPIs) is crucial for understanding biological functions and diseases. Previous machine learning approaches to PPI prediction mainly focus on direct physical interactions, ignoring the broader context of nonphysical connections through intermediate proteins, thus limiting their effectiveness. The emergence of Large Language Models (LLMs) provides a new opportunity for addressing this complex biological challenge. By transforming structured data into natural language prompts, we can map the relationships between proteins into texts. This approach allows LLMs to identify indirect connections between proteins, tracing the path from upstream to downstream. Therefore, we propose a novel framework ProLLM that employs an LLM tailored for PPI for the first time. Specifically, we propose Protein Chain of Thought (ProCoT), which replicates the biological mechanism of signaling pathways as natural language prompts. ProCoT considers a signaling pathway as a protein reasoning process, which starts from upstream proteins and passes through several intermediate proteins to transmit biological signals to downstream proteins. Thus, we can use ProCoT to predict the interaction between upstream proteins and downstream proteins. The training of ProLLM employs the ProCoT format, which enhances the model's understanding of complex biological problems. In addition to ProCoT, this paper also contributes to the exploration of embedding replacement of protein sites in natural language prompts, and instruction fine-tuning in protein knowledge datasets. We demonstrate the efficacy of ProLLM through rigorous validation against benchmark datasets, showing significant improvement over existing methods in terms of prediction accuracy and generalizability. The code is available at: https://github.com/MingyuJ666/ProLLM. |
2005.03994 | Xishuang Dong | Xishuang Dong, Shanta Chowdhury, Uboho Victor, Xiangfang Li, Lijun
Qian | Cell Type Identification from Single-Cell Transcriptomic Data via
Semi-supervised Learning | null | null | null | null | q-bio.GN cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cell type identification from single-cell transcriptomic data is a common
goal of single-cell RNA sequencing (scRNAseq) data analysis. Neural networks
have been employed to identify cell types from scRNAseq data with high
performance. However, it requires a large amount of individual cells with
accurate and unbiased annotated types to build the identification models.
Unfortunately, labeling the scRNAseq data is cumbersome and time-consuming as
it involves manual inspection of marker genes. To overcome this challenge, we
propose a semi-supervised learning model to use unlabeled scRNAseq cells and
limited amount of labeled scRNAseq cells to implement cell identification.
Firstly, we transform the scRNAseq cells to "gene sentences", which is inspired
by similarities between the natural language system and the gene system. Then genes in
these sentences are represented as gene embeddings to reduce data sparsity.
With these embeddings, we implement a semi-supervised learning model based on
recurrent convolutional neural networks (RCNN), which includes a shared
network, a supervised network and an unsupervised network. The proposed model
is evaluated on macosko2015, a large scale single-cell transcriptomic dataset
with ground truth of individual cell types. It is observed that the proposed
model is able to achieve encouraging performance by learning on very limited
amount of labeled scRNAseq cells together with a large number of unlabeled
scRNAseq cells.
| [
{
"created": "Wed, 6 May 2020 19:15:43 GMT",
"version": "v1"
}
] | 2020-05-11 | [
[
"Dong",
"Xishuang",
""
],
[
"Chowdhury",
"Shanta",
""
],
[
"Victor",
"Uboho",
""
],
[
"Li",
"Xiangfang",
""
],
[
"Qian",
"Lijun",
""
]
] ] | Cell type identification from single-cell transcriptomic data is a common goal of single-cell RNA sequencing (scRNAseq) data analysis. Neural networks have been employed to identify cell types from scRNAseq data with high performance. However, it requires a large amount of individual cells with accurate and unbiased annotated types to build the identification models. Unfortunately, labeling the scRNAseq data is cumbersome and time-consuming as it involves manual inspection of marker genes. To overcome this challenge, we propose a semi-supervised learning model to use unlabeled scRNAseq cells and limited amount of labeled scRNAseq cells to implement cell identification. Firstly, we transform the scRNAseq cells to "gene sentences", which is inspired by similarities between the natural language system and the gene system. Then genes in these sentences are represented as gene embeddings to reduce data sparsity. With these embeddings, we implement a semi-supervised learning model based on recurrent convolutional neural networks (RCNN), which includes a shared network, a supervised network and an unsupervised network. The proposed model is evaluated on macosko2015, a large scale single-cell transcriptomic dataset with ground truth of individual cell types. It is observed that the proposed model is able to achieve encouraging performance by learning on very limited amount of labeled scRNAseq cells together with a large number of unlabeled scRNAseq cells. |
1507.04055 | Delfim F. M. Torres | Filipa Portugal Rocha, Helena Sofia Rodrigues, M. Teresa T. Monteiro,
Delfim F. M. Torres | Coexistence of two dengue virus serotypes and forecasting for Madeira
island | This is a preprint of a paper whose final and definite form is in
'Operations Research For Health Care', ISSN 2211-6923. Paper submitted
21/Nov/2014; revised 17/May/2015 and 07/Jul/2015; accepted for publication
14/Jul/2015 | Oper. Res. Health Care 7 (2015), 122--131 | 10.1016/j.orhc.2015.07.003 | null | q-bio.PE math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The first outbreak of dengue occurred in Madeira Island in 2012, featuring
one virus serotype. Aedes aegypti was the vector of the disease and it is
unlikely that it will be eliminated from the island. Therefore, a new outbreak
of dengue fever can occur and, if it happens, risk to the population increases
if two serotypes coexist. In this paper, mathematical modeling and numerical
simulations are carried out to forecast what may happen in Madeira Island in
such a scenario.
| [
{
"created": "Wed, 15 Jul 2015 00:16:58 GMT",
"version": "v1"
}
] | 2015-11-20 | [
[
"Rocha",
"Filipa Portugal",
""
],
[
"Rodrigues",
"Helena Sofia",
""
],
[
"Monteiro",
"M. Teresa T.",
""
],
[
"Torres",
"Delfim F. M.",
""
]
] ] | The first outbreak of dengue occurred in Madeira Island in 2012, featuring one virus serotype. Aedes aegypti was the vector of the disease and it is unlikely that it will be eliminated from the island. Therefore, a new outbreak of dengue fever can occur and, if it happens, risk to the population increases if two serotypes coexist. In this paper, mathematical modeling and numerical simulations are carried out to forecast what may happen in Madeira Island in such a scenario. |
1008.0153 | Bhalchandra Thatte | Bhalchandra D. Thatte | Reconstructing pedigrees: some identifiability questions for a
recombination-mutation model | 40 pages, 9 figures | Journal of Mathematical Biology, 66, issue 1-2 (2013) 37-74 | 10.1007/s00285-011-0503-8 | null | q-bio.PE math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pedigrees are directed acyclic graphs that represent ancestral relationships
between individuals in a population. Based on a schematic recombination
process, we describe two simple Markov models for sequences evolving on
pedigrees - Model R (recombinations without mutations) and Model RM
(recombinations with mutations). For these models, we ask an identifiability
question: is it possible to construct a pedigree from the joint probability
distribution of extant sequences? We present partial identifiability results
for general pedigrees: we show that when the crossover probabilities are
sufficiently small, certain spanning subgraph sequences can be counted from the
joint distribution of extant sequences. We demonstrate how pedigrees that
earlier seemed difficult to distinguish are distinguished by counting their
spanning subgraph sequences.
| [
{
"created": "Sun, 1 Aug 2010 07:45:11 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Nov 2010 07:34:02 GMT",
"version": "v2"
},
{
"created": "Tue, 6 Sep 2011 03:17:47 GMT",
"version": "v3"
}
] | 2013-01-18 | [
[
"Thatte",
"Bhalchandra D.",
""
]
] | Pedigrees are directed acyclic graphs that represent ancestral relationships between individuals in a population. Based on a schematic recombination process, we describe two simple Markov models for sequences evolving on pedigrees - Model R (recombinations without mutations) and Model RM (recombinations with mutations). For these models, we ask an identifiability question: is it possible to construct a pedigree from the joint probability distribution of extant sequences? We present partial identifiability results for general pedigrees: we show that when the crossover probabilities are sufficiently small, certain spanning subgraph sequences can be counted from the joint distribution of extant sequences. We demonstrate how pedigrees that earlier seemed difficult to distinguish are distinguished by counting their spanning subgraph sequences. |
1004.4764 | Laurence Wilson | Laurence G. Wilson, Vincent A. Martinez, Jana Schwarz-Linek, J.
Tailleur, Peter N. Pusey, Gary Bryant, Wilson C. K. Poon | Differential Dynamic Microscopy of Bacterial Motility | 4 pages, 4 figures. In this updated version we have added simulations
to support our interpretation, and changed the model for the swimming speed
probability distribution from log-normal to a Schulz distribution. Neither
modification significantly changes our conclusions | Phys. Rev. Lett. 106, 018101 (2011) | 10.1103/PhysRevLett.106.018101 | null | q-bio.QM cond-mat.soft physics.bio-ph q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We demonstrate 'differential dynamic microscopy' (DDM) for the fast, high
throughput characterization of the dynamics of active particles. Specifically,
we characterize the swimming speed distribution and the fraction of motile
cells in suspensions of Escherichia coli bacteria. By averaging over ~10^4
cells, our results are highly accurate compared to conventional tracking. The
diffusivity of non-motile cells is enhanced by an amount proportional to the
concentration of motile cells.
| [
{
"created": "Tue, 27 Apr 2010 11:18:39 GMT",
"version": "v1"
},
{
"created": "Sat, 2 Oct 2010 00:36:46 GMT",
"version": "v2"
}
] | 2011-07-12 | [
[
"Wilson",
"Laurence G.",
""
],
[
"Martinez",
"Vincent A.",
""
],
[
"Schwarz-Linek",
"Jana",
""
],
[
"Tailleur",
"J.",
""
],
[
"Pusey",
"Peter N.",
""
],
[
"Bryant",
"Gary",
""
],
[
"Poon",
"Wilson C. K.",
""
]
] | We demonstrate 'differential dynamic microscopy' (DDM) for the fast, high throughput characterization of the dynamics of active particles. Specifically, we characterize the swimming speed distribution and the fraction of motile cells in suspensions of Escherichia coli bacteria. By averaging over ~10^4 cells, our results are highly accurate compared to conventional tracking. The diffusivity of non-motile cells is enhanced by an amount proportional to the concentration of motile cells. |
1403.8057 | Iain Johnston | Iain G. Johnston | Efficient parametric inference for stochastic biological systems with
measured variability | 11 pages, 4 figs | null | null | null | q-bio.QM stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stochastic systems in biology often exhibit substantial variability within
and between cells. This variability, as well as having dramatic functional
consequences, provides information about the underlying details of the system's
behaviour. It is often desirable to infer properties of the parameters
governing such systems given experimental observations of the mean and variance
of observed quantities. In some circumstances, analytic forms for the
likelihood of these observations allow very efficient inference: we present
these forms and demonstrate their usage. When likelihood functions are
unavailable or difficult to calculate, we show that an implementation of
approximate Bayesian computation (ABC) is a powerful tool for parametric
inference in these systems. However, the calculations required to apply ABC to
these systems can also be computationally expensive, relying on repeated
stochastic simulations. We propose an ABC approach that cheaply eliminates
unimportant regions of parameter space, by addressing computationally simple
mean behaviour before explicitly simulating the more computationally demanding
variance behaviour. We show that this approach leads to a substantial increase
in speed when applied to synthetic and experimental datasets.
| [
{
"created": "Mon, 31 Mar 2014 15:42:01 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Nov 2015 15:01:50 GMT",
"version": "v2"
}
] | 2015-11-09 | [
[
"Johnston",
"Iain G.",
""
]
] | Stochastic systems in biology often exhibit substantial variability within and between cells. This variability, as well as having dramatic functional consequences, provides information about the underlying details of the system's behaviour. It is often desirable to infer properties of the parameters governing such systems given experimental observations of the mean and variance of observed quantities. In some circumstances, analytic forms for the likelihood of these observations allow very efficient inference: we present these forms and demonstrate their usage. When likelihood functions are unavailable or difficult to calculate, we show that an implementation of approximate Bayesian computation (ABC) is a powerful tool for parametric inference in these systems. However, the calculations required to apply ABC to these systems can also be computationally expensive, relying on repeated stochastic simulations. We propose an ABC approach that cheaply eliminates unimportant regions of parameter space, by addressing computationally simple mean behaviour before explicitly simulating the more computationally demanding variance behaviour. We show that this approach leads to a substantial increase in speed when applied to synthetic and experimental datasets. |
1608.04276 | Vitaly Vodyanoy | Vitaly Vodyanoy, Oleg Pustovyy, Ludmila Globa, and Iryna Sorokulova | Evaluation of a New Vasculature by High Resolution Light Microscopy:
Primo Vessel and Node | 14 pages, 7 figures | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The primo vascular system is composed of nodes and vessels. The bundle of
sub-vessels of the primo vessel is laid into an external jacket composed of
endothelial cells. The node is heterogeneous in nature, composed of twisted
sub-vessel bundles that fill up nearly the entire node volume. The enlarged
sub-vessel inside the node harbors microcells that express stem cell and stem
cell niche markers. We conclude that these microcells are progenitors of
multipotent stem cells and the nodes serve as the stem cell niches outside the
bone marrow.
| [
{
"created": "Mon, 15 Aug 2016 14:25:47 GMT",
"version": "v1"
},
{
"created": "Thu, 1 Jun 2017 19:29:33 GMT",
"version": "v2"
}
] | 2017-06-05 | [
[
"Vodyanoy",
"Vitaly",
""
],
[
"Pustovyy",
"Oleg",
""
],
[
"Globa",
"Ludmila",
""
],
[
"Sorokulova",
"Iryna",
""
]
] | The primo vascular system is composed of nodes and vessels. The bundle of sub-vessels of the primo vessel is laid into an external jacket composed of endothelial cells. The node is heterogeneous in nature, composed of twisted sub-vessel bundles that fill up nearly the entire node volume. The enlarged sub-vessel inside the node harbors microcells that express stem cell and stem cell niche markers. We conclude that these microcells are progenitors of multipotent stem cells and the nodes serve as the stem cell niches outside the bone marrow. |
1412.1729 | Jorge Pe\~na | Jorge Pe\~na, Georg N\"oldeke, Laurent Lehmann | Relatedness and synergies of kind and scale in the evolution of helping | 38 pages, 2 figures, 4 tables | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relatedness and synergy affect the selection pressure on cooperation and
altruism. Although early work investigated the effect of these factors
independently of each other, recent efforts have been aimed at exploring their
interplay. Here, we contribute to this ongoing synthesis in two distinct but
complementary ways. First, we integrate models of $n$-player matrix games into
the direct fitness approach of inclusive fitness theory, hence providing a
framework to consider synergistic social interactions between relatives in
family and spatially structured populations. Second, we illustrate the
usefulness of this framework by delineating three distinct types of helping
traits ("whole-group", "nonexpresser-only" and "expresser-only"), which are
characterized by different synergies of kind (arising from differential fitness
effects on individuals expressing or not expressing helping) and can be
subjected to different synergies of scale (arising from economies or
diseconomies of scale). We find that relatedness and synergies of kind and
scale can interact to generate nontrivial evolutionary dynamics, such as cases
of bistable coexistence featuring both a stable equilibrium with a positive
level of helping and an unstable helping threshold. This broadens the
qualitative effects of relatedness (or spatial structure) on the evolution of
helping.
| [
{
"created": "Thu, 4 Dec 2014 17:00:49 GMT",
"version": "v1"
}
] | 2014-12-05 | [
[
"Peña",
"Jorge",
""
],
[
"Nöldeke",
"Georg",
""
],
[
"Lehmann",
"Laurent",
""
]
] | Relatedness and synergy affect the selection pressure on cooperation and altruism. Although early work investigated the effect of these factors independently of each other, recent efforts have been aimed at exploring their interplay. Here, we contribute to this ongoing synthesis in two distinct but complementary ways. First, we integrate models of $n$-player matrix games into the direct fitness approach of inclusive fitness theory, hence providing a framework to consider synergistic social interactions between relatives in family and spatially structured populations. Second, we illustrate the usefulness of this framework by delineating three distinct types of helping traits ("whole-group", "nonexpresser-only" and "expresser-only"), which are characterized by different synergies of kind (arising from differential fitness effects on individuals expressing or not expressing helping) and can be subjected to different synergies of scale (arising from economies or diseconomies of scale). We find that relatedness and synergies of kind and scale can interact to generate nontrivial evolutionary dynamics, such as cases of bistable coexistence featuring both a stable equilibrium with a positive level of helping and an unstable helping threshold. This broadens the qualitative effects of relatedness (or spatial structure) on the evolution of helping. |
1303.6175 | Ramon Ferrer i Cancho | R. Ferrer-i-Cancho, A. Hern\'andez-Fern\'andez, D. Lusseau, G.
Agoramoorthy, M. J. Hsu and S. Semple | Compression as a universal principle of animal behavior | This is the pre-proofed version. The published version will be
available at
http://onlinelibrary.wiley.com/journal/10.1111/%28ISSN%291551-6709 | null | 10.1111/cogs.12061 | null | q-bio.NC cs.CL cs.IT math.IT physics.data-an q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A key aim in biology and psychology is to identify fundamental principles
underpinning the behavior of animals, including humans. Analyses of human
language and the behavior of a range of non-human animal species have provided
evidence for a common pattern underlying diverse behavioral phenomena: words
follow Zipf's law of brevity (the tendency of more frequently used words to be
shorter), and conformity to this general pattern has been seen in the behavior
of a number of other animals. It has been argued that the presence of this law
is a sign of efficient coding in the information theoretic sense. However, no
strong direct connection has been demonstrated between the law and compression,
the information theoretic principle of minimizing the expected length of a
code. Here we show that minimizing the expected code length implies that the
length of a word cannot increase as its frequency increases. Furthermore, we
show that the mean code length or duration is significantly small in human
language, and also in the behavior of other species in all cases where
agreement with the law of brevity has been found. We argue that compression is
a general principle of animal behavior that reflects selection for efficiency
of coding.
| [
{
"created": "Mon, 25 Mar 2013 15:43:48 GMT",
"version": "v1"
}
] | 2014-12-03 | [
[
"Ferrer-i-Cancho",
"R.",
""
],
[
"Hernández-Fernández",
"A.",
""
],
[
"Lusseau",
"D.",
""
],
[
"Agoramoorthy",
"G.",
""
],
[
"Hsu",
"M. J.",
""
],
[
"Semple",
"S.",
""
]
] | A key aim in biology and psychology is to identify fundamental principles underpinning the behavior of animals, including humans. Analyses of human language and the behavior of a range of non-human animal species have provided evidence for a common pattern underlying diverse behavioral phenomena: words follow Zipf's law of brevity (the tendency of more frequently used words to be shorter), and conformity to this general pattern has been seen in the behavior of a number of other animals. It has been argued that the presence of this law is a sign of efficient coding in the information theoretic sense. However, no strong direct connection has been demonstrated between the law and compression, the information theoretic principle of minimizing the expected length of a code. Here we show that minimizing the expected code length implies that the length of a word cannot increase as its frequency increases. Furthermore, we show that the mean code length or duration is significantly small in human language, and also in the behavior of other species in all cases where agreement with the law of brevity has been found. We argue that compression is a general principle of animal behavior, that reflects selection for efficiency of coding. |
1308.2953 | Kelin Xia | Kelin Xia and Guo-Wei Wei | Protein folding tames chaos | null | null | null | null | q-bio.BM physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein folding produces characteristic and functional three-dimensional
structures from unfolded polypeptides or disordered coils. The emergence of
extraordinary complexity in the protein folding process poses astonishing
challenges to theoretical modeling and computer simulations. The present work
introduces molecular nonlinear dynamics (MND), or molecular chaotic dynamics,
as a theoretical framework for describing and analyzing protein folding. We
unveil the existence of intrinsically low dimensional manifolds (ILDMs) in the
chaotic dynamics of folded proteins. Additionally, we reveal that the
transition from disordered to ordered conformations in protein folding
increases the transverse stability of the ILDM. Stated differently, protein
folding reduces the chaoticity of the nonlinear dynamical system, and a folded
protein has the best ability to tame chaos. Additionally, we bring to light the
connection between the ILDM stability and the thermodynamic stability, which
enables us to quantify the disorderliness and relative energies of folded,
misfolded and unfolded protein states. Finally, we exploit chaos for protein
flexibility analysis and develop a robust chaotic algorithm for the prediction
of Debye-Waller factors, or temperature factors, of protein structures.
| [
{
"created": "Tue, 13 Aug 2013 19:32:25 GMT",
"version": "v1"
}
] | 2013-08-14 | [
[
"Xia",
"Kelin",
""
],
[
"Wei",
"Guo-Wei",
""
]
] | Protein folding produces characteristic and functional three-dimensional structures from unfolded polypeptides or disordered coils. The emergence of extraordinary complexity in the protein folding process poses astonishing challenges to theoretical modeling and computer simulations. The present work introduces molecular nonlinear dynamics (MND), or molecular chaotic dynamics, as a theoretical framework for describing and analyzing protein folding. We unveil the existence of intrinsically low dimensional manifolds (ILDMs) in the chaotic dynamics of folded proteins. Additionally, we reveal that the transition from disordered to ordered conformations in protein folding increases the transverse stability of the ILDM. Stated differently, protein folding reduces the chaoticity of the nonlinear dynamical system, and a folded protein has the best ability to tame chaos. Additionally, we bring to light the connection between the ILDM stability and the thermodynamic stability, which enables us to quantify the disorderliness and relative energies of folded, misfolded and unfolded protein states. Finally, we exploit chaos for protein flexibility analysis and develop a robust chaotic algorithm for the prediction of Debye-Waller factors, or temperature factors, of protein structures. |
1708.00385 | Katar\'ina Bo\v{d}ov\'a | Katarina Bodova, Gabriel J. Mitchell, Roy Harpaz, Elad Schneidman,
Gasper Tkacik | Probabilistic models of individual and collective animal behavior | 26 pages, 11 figures | null | 10.1371/journal.pone.0193049 | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent developments in automated tracking allow uninterrupted,
high-resolution recording of animal trajectories, sometimes coupled with the
identification of stereotyped changes of body pose or other behaviors of
interest. Analysis and interpretation of such data represents a challenge: the
timing of animal behaviors may be stochastic and modulated by kinematic
variables, by the interaction with the environment or with the conspecifics
within the animal group, and dependent on internal cognitive or behavioral
state of the individual. Existing models for collective motion typically fail
to incorporate the discrete, stochastic, and internal-state-dependent aspects
of behavior, while models focusing on individual animal behavior typically
ignore the spatial aspects of the problem. Here we propose a probabilistic
modeling framework to address this gap. Each animal can switch stochastically
between different behavioral states, with each state resulting in a possibly
different law of motion through space. Switching rates for behavioral
transitions can depend in a very general way, which we seek to identify from
data, on the effects of the environment as well as the interaction between the
animals. We represent the switching dynamics as a Generalized Linear Model and
show that: (i) forward simulation of multiple interacting animals is possible
using a variant of Gillespie's Stochastic Simulation Algorithm; (ii)
formulated properly, the maximum likelihood inference of switching rate
functions is tractably solvable by gradient descent; (iii) model selection can
be used to identify factors that modulate behavioral state switching and to
appropriately adjust model complexity to data. To illustrate our framework, we
apply it to two synthetic models of animal motion and to real zebrafish
tracking data.
| [
{
"created": "Mon, 31 Jul 2017 14:09:09 GMT",
"version": "v1"
}
] | 2018-07-04 | [
[
"Bodova",
"Katarina",
""
],
[
"Mitchell",
"Gabriel J.",
""
],
[
"Harpaz",
"Roy",
""
],
[
"Schneidman",
"Elad",
""
],
[
"Tkacik",
"Gasper",
""
]
] | Recent developments in automated tracking allow uninterrupted, high-resolution recording of animal trajectories, sometimes coupled with the identification of stereotyped changes of body pose or other behaviors of interest. Analysis and interpretation of such data represents a challenge: the timing of animal behaviors may be stochastic and modulated by kinematic variables, by the interaction with the environment or with the conspecifics within the animal group, and dependent on internal cognitive or behavioral state of the individual. Existing models for collective motion typically fail to incorporate the discrete, stochastic, and internal-state-dependent aspects of behavior, while models focusing on individual animal behavior typically ignore the spatial aspects of the problem. Here we propose a probabilistic modeling framework to address this gap. Each animal can switch stochastically between different behavioral states, with each state resulting in a possibly different law of motion through space. Switching rates for behavioral transitions can depend in a very general way, which we seek to identify from data, on the effects of the environment as well as the interaction between the animals. We represent the switching dynamics as a Generalized Linear Model and show that: (i) forward simulation of multiple interacting animals is possible using a variant of the Gillespie's Stochastic Simulation Algorithm; (ii) formulated properly, the maximum likelihood inference of switching rate functions is tractably solvable by gradient descent; (iii) model selection can be used to identify factors that modulate behavioral state switching and to appropriately adjust model complexity to data. To illustrate our framework, we apply it to two synthetic models of animal motion and to real zebrafish tracking data. |
2010.15191 | Anne Churchland | Joao Couto, Simon Musall, Xiaonan R Sun, Anup Khanal, Steven Gluf,
Shreya Saxena, Ian Kinsella, Taiga Abe, John P. Cunningham, Liam Paninski,
Anne K Churchland | Chronic, cortex-wide imaging of specific cell populations during
behavior | 36 pages, 7 figures, 2 supplementary figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Measurements of neuronal activity across brain areas are important for
understanding the neural correlates of cognitive and motor processes like
attention, decision-making, and action selection. However, techniques that
allow cellular resolution measurements are expensive and require a high degree
of technical expertise, which limits their broad use. Widefield imaging of
genetically encoded indicators is a high throughput, cost effective, and
flexible approach to measure activity of specific cell populations with high
temporal resolution and a cortex-wide field of view. Here we outline our
protocol for assembling a widefield setup, a surgical preparation to image
through the intact skull, and imaging neural activity chronically in behaving,
transgenic mice that express a calcium indicator in specific subpopulations of
cortical neurons. Further, we highlight a processing pipeline that leverages
novel, cloud-based methods to analyze large-scale imaging datasets. The
protocol targets labs that are seeking to build macroscopes, optimize surgical
procedures for long-term chronic imaging, and/or analyze cortex-wide neuronal
recordings.
| [
{
"created": "Wed, 28 Oct 2020 19:21:52 GMT",
"version": "v1"
}
] | 2020-10-30 | [
[
"Couto",
"Joao",
""
],
[
"Musall",
"Simon",
""
],
[
"Sun",
"Xiaonan R",
""
],
[
"Khanal",
"Anup",
""
],
[
"Gluf",
"Steven",
""
],
[
"Saxena",
"Shreya",
""
],
[
"Kinsella",
"Ian",
""
],
[
"Abe",
"Taiga",
""
],
[
"Cunningham",
"John P.",
""
],
[
"Paninski",
"Liam",
""
],
[
"Churchland",
"Anne K",
""
]
] | Measurements of neuronal activity across brain areas are important for understanding the neural correlates of cognitive and motor processes like attention, decision-making, and action selection. However, techniques that allow cellular resolution measurements are expensive and require a high degree of technical expertise, which limits their broad use. Widefield imaging of genetically encoded indicators is a high throughput, cost effective, and flexible approach to measure activity of specific cell populations with high temporal resolution and a cortex-wide field of view. Here we outline our protocol for assembling a widefield setup, a surgical preparation to image through the intact skull, and imaging neural activity chronically in behaving, transgenic mice that express a calcium indicator in specific subpopulations of cortical neurons. Further, we highlight a processing pipeline that leverages novel, cloud-based methods to analyze large-scale imaging datasets. The protocol targets labs that are seeking to build macroscopes, optimize surgical procedures for long-term chronic imaging, and/or analyze cortex-wide neuronal recordings. |
q-bio/0505024 | Keiji Miura | K. Miura and M. Okada and S. Shinomoto | Search for optimal measure for discriminating spike trains with
different randomness | null | null | null | null | q-bio.NC | null | We wish to discriminate spike sequences based on the degree of irregularity.
For this purpose, we search for a rational expression of quadratic functions
of consecutive interspike intervals that efficiently measures spiking
irregularity. Under natural assumptions, the functional form of the coefficient
can be parameterized by a single parameter. The parameter is determined so as
to maximize the mutual information between the distributions of coefficients
computed for spike sequences derived from different renewal point processes. We
find that the local variation of interspike intervals, LV (Neural Comput. Vol.
15, pp. 2823-42, 2003), is nearly optimal for whose intrinsic irregularity is
close to that of experimental data.
| [
{
"created": "Fri, 13 May 2005 10:38:28 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Miura",
"K.",
""
],
[
"Okada",
"M.",
""
],
[
"Shinomoto",
"S.",
""
]
] | We wish to discriminate spike sequences based on the degree of irregularity. For this purpose, we search for a rational expression of quadratic functions of consecutive interspike intervals that efficiently measures spiking irregularity. Under natural assumptions, the functional form of the coefficient can be parameterized by a single parameter. The parameter is determined so as to maximize the mutual information between the distributions of coefficients computed for spike sequences derived from different renewal point processes. We find that the local variation of interspike intervals, LV (Neural Comput. Vol. 15, pp. 2823-42, 2003), is nearly optimal for spike trains whose intrinsic irregularity is close to that of experimental data. |
2006.02961 | James Brunner | James D. Brunner and Nicholas Chia | Confidence in the dynamic spread of epidemics under biased sampling
conditions | 11 figures, 2 tables, 15 pages | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The interpretation of sampling data plays a crucial role in policy response
to the spread of a disease during an epidemic, such as the COVID-19 epidemic of
2020. However, this is a non-trivial endeavor due to the complexity of real
world conditions and limits to the availability of diagnostic tests, which
necessitate a bias in testing favoring symptomatic individuals. A thorough
understanding of sampling confidence and bias is necessary in order to make
accurate conclusions. In this manuscript, we provide a stochastic model of
sampling for assessing confidence in disease metrics such as trend detection,
peak detection, and disease spread estimation. Our model simulates testing for
a disease in an epidemic with known dynamics, allowing us to use Monte-Carlo
sampling to assess metric confidence. This model can provide realistic
simulated data which can be used in the design and calibration of data analysis
and prediction methods. As an example, we use this method to show that trends
in the disease may be identified using under $10000$ biased samples each day,
and an estimate of disease spread can be made with additional $1000-2000$
unbiased samples each day. We also demonstrate that the model can be used to
assess more advanced metrics by finding the precision and recall of a strategy
for finding peaks in the dynamics.
| [
{
"created": "Thu, 4 Jun 2020 15:44:31 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Jul 2020 23:05:13 GMT",
"version": "v2"
}
] | 2020-07-30 | [
[
"Brunner",
"James D.",
""
],
[
"Chia",
"Nicholas",
""
]
] | The interpretation of sampling data plays a crucial role in policy response to the spread of a disease during an epidemic, such as the COVID-19 epidemic of 2020. However, this is a non-trivial endeavor due to the complexity of real world conditions and limits to the availability of diagnostic tests, which necessitate a bias in testing favoring symptomatic individuals. A thorough understanding of sampling confidence and bias is necessary in order to make accurate conclusions. In this manuscript, we provide a stochastic model of sampling for assessing confidence in disease metrics such as trend detection, peak detection, and disease spread estimation. Our model simulates testing for a disease in an epidemic with known dynamics, allowing us to use Monte-Carlo sampling to assess metric confidence. This model can provide realistic simulated data which can be used in the design and calibration of data analysis and prediction methods. As an example, we use this method to show that trends in the disease may be identified using under $10000$ biased samples each day, and an estimate of disease spread can be made with additional $1000-2000$ unbiased samples each day. We also demonstrate that the model can be used to assess more advanced metrics by finding the precision and recall of a strategy for finding peaks in the dynamics. |
1310.0448 | David Schwab | David J. Schwab, Ilya Nemenman, Pankaj Mehta | Zipf's law and criticality in multivariate data without fine-tuning | 5 pages, 3 figures | Phys. Rev. Lett. 113, 068102 (2014) | 10.1103/PhysRevLett.113.068102 | null | q-bio.NC cond-mat.stat-mech q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The joint probability distribution of many degrees of freedom in biological
systems, such as firing patterns in neural networks or antibody sequence
composition in zebrafish, often follows Zipf's law, where a power law is
observed on a rank-frequency plot. This behavior has recently been shown to
imply that these systems reside near to a unique critical point where the
extensive parts of the entropy and energy are exactly equal. Here we show
analytically, and via numerical simulations, that Zipf-like probability
distributions arise naturally if there is an unobserved variable (or variables)
that affects the system, e.g., for neural networks an input stimulus that
causes individual neurons in the network to fire at time-varying rates. In
statistics and machine learning, these models are called latent-variable or
mixture models. Our model shows that no fine-tuning is required, i.e. Zipf's
law arises generically without tuning parameters to a point, and gives insight
into the ubiquity of Zipf's law in a wide range of systems.
| [
{
"created": "Tue, 1 Oct 2013 19:45:10 GMT",
"version": "v1"
},
{
"created": "Sun, 17 Nov 2013 22:25:50 GMT",
"version": "v2"
},
{
"created": "Wed, 18 Jun 2014 22:16:41 GMT",
"version": "v3"
}
] | 2014-08-13 | [
[
"Schwab",
"David J.",
""
],
[
"Nemenman",
"Ilya",
""
],
[
"Mehta",
"Pankaj",
""
]
] | The joint probability distribution of many degrees of freedom in biological systems, such as firing patterns in neural networks or antibody sequence composition in zebrafish, often follows Zipf's law, where a power law is observed on a rank-frequency plot. This behavior has recently been shown to imply that these systems reside near a unique critical point where the extensive parts of the entropy and energy are exactly equal. Here we show analytically, and via numerical simulations, that Zipf-like probability distributions arise naturally if there is an unobserved variable (or variables) that affects the system, e.g., for neural networks an input stimulus that causes individual neurons in the network to fire at time-varying rates. In statistics and machine learning, these models are called latent-variable or mixture models. Our model shows that no fine-tuning is required, i.e. Zipf's law arises generically without tuning parameters to a point, and gives insight into the ubiquity of Zipf's law in a wide range of systems. |
1301.1034 | Richard Clymo | R. S. Clymo (School of Biological and Chemical Sciences, Queen Mary
University of London) | How many of the digits in a mean of 12.3456789012 are worth reporting? | 5 pages, 1 Table, 2 Figures. New simpler index unifies Table and
Figures. Now published. This arXiv-ed version has small amendments to the
published version | BMC Research Notes 2019 12 (148) | 10.1186/s13104-019-4175-6 | null | q-bio.OT stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | OBJECTIVE. A computer program tells me that a mean value is 12.3456789012,
but how many of these digits are significant (the rest being random junk)?
Should I report: 12.3?, 12.3456?, or even 10 (if only the first digit is
significant)? There are several rules-of-thumb but, surprisingly (given that
the problem is so common in science), none seem to be evidence-based. RESULTS.
Here I show how the significance of a digit in a particular decade of a mean
depends on the standard error of the mean (SEM). I define an index, DM, that can
be plotted in graphs. From these a simple evidence-based rule for the number of
significant digits ("sigdigs") is distilled: the last sigdig in the mean is in
the same decade as the first or second non-zero digit in the SEM. As example,
for mean 34.63 (SEM 25.62), with n = 17, the reported value should be 35 (SEM
26). Digits beyond these contain little or no useful information, and should
not be reported lest they damage your credibility.
| [
{
"created": "Sun, 6 Jan 2013 18:00:45 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Jan 2013 11:21:39 GMT",
"version": "v2"
},
{
"created": "Thu, 21 Mar 2019 11:36:02 GMT",
"version": "v3"
},
{
"created": "Thu, 6 May 2021 17:41:51 GMT",
"version": "v4"
}
] | 2021-05-07 | [
[
"Clymo",
"R. S.",
"",
"School of Biological and Chemical Sciences, Queen Mary\n University of London"
]
] | OBJECTIVE. A computer program tells me that a mean value is 12.3456789012, but how many of these digits are significant (the rest being random junk)? Should I report: 12.3?, 12.3456?, or even 10 (if only the first digit is significant)? There are several rules-of-thumb but, surprisingly (given that the problem is so common in science), none seem to be evidence-based. RESULTS. Here I show how the significance of a digit in a particular decade of a mean depends on the standard error of the mean (SEM). I define an index, DM that can be plotted in graphs. From these a simple evidence-based rule for the number of significant digits ("sigdigs") is distilled: the last sigdig in the mean is in the same decade as the first or second non-zero digit in the SEM. As example, for mean 34.63 (SEM 25.62), with n = 17, the reported value should be 35 (SEM 26). Digits beyond these contain little or no useful information, and should not be reported lest they damage your credibility. |
1909.06738 | Fernan Villa | Dayana Jim\'enez, Paola Guti\'errez, Yeisson Guti\'errez, Fern\'an
Villa | Preliminary study of mortality by cause and sociodemographic
characteristics, municipality of San Francisco, Antioquia (Colombia),
2001-2010 | in Spanish | Revista Nacional de Salud P\'ublica, Universidad de Antioquia,
2015 | null | null | q-bio.PE stat.AP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Objective: To determine the structure of mortality by cause and
sociodemographic characteristics in the municipality of San Francisco,
Antioquia, 2001-2010. Methodology: Quantitative descriptive study with
retrospective longitudinal data on death events obtained from a secondary
source, through electronic databases supplied by DANE. Sociodemographic
variables were described by cause-of-death group, and the life table by sex
and the years of potential life lost (YPLL; Spanish acronym APVP) for each age
group and cause of death were calculated. Results: External causes (assaults
and homicides) were the main cause of death during the decade of study,
especially among men, who had higher mortality rates, a higher probability of
dying, and a lower life expectancy during the period. On average, men and
external causes showed the highest number of potential years of life lost over
2001-2010. Conclusions: Men had higher mortality rates over the decade, mainly
from external causes (assaults and homicides), with a greater proportion among
young men. The causes of death accounting for the most potential years of life
lost are external causes. Life expectancy at birth throughout the decade is
greater for women than for men. It is therefore essential that the
municipality analyze the current situation of these causes of death and
implement public policies that contribute to their decline and hence improve
the life expectancy of the population.
| [
{
"created": "Sun, 15 Sep 2019 05:14:15 GMT",
"version": "v1"
}
] | 2019-09-19 | [
[
"Jiménez",
"Dayana",
""
],
[
"Gutiérrez",
"Paola",
""
],
[
"Gutiérrez",
"Yeisson",
""
],
[
"Villa",
"Fernán",
""
]
] | Objective: To determine the structure of mortality by cause and sociodemographic characteristics in the municipality of San Francisco, Antioquia, 2001-2010. Methodology: Quantitative descriptive study with retrospective longitudinal data on death events obtained from a secondary source, through electronic databases supplied by DANE. Sociodemographic variables were described by cause-of-death group, and the life table by sex and the years of potential life lost (YPLL; Spanish acronym APVP) for each age group and cause of death were calculated. Results: External causes (assaults and homicides) were the main cause of death during the decade of study, especially among men, who had higher mortality rates, a higher probability of dying, and a lower life expectancy during the period. On average, men and external causes showed the highest number of potential years of life lost over 2001-2010. Conclusions: Men had higher mortality rates over the decade, mainly from external causes (assaults and homicides), with a greater proportion among young men. The causes of death accounting for the most potential years of life lost are external causes. Life expectancy at birth throughout the decade is greater for women than for men. It is therefore essential that the municipality analyze the current situation of these causes of death and implement public policies that contribute to their decline and hence improve the life expectancy of the population. |
2007.05395 | Matteo Fraschini | Matteo Fraschini, Simone Maurizio La Cava, Luca Didaci and Luigi
Barberini | On the variability of functional connectivity and network measures in
source-reconstructed EEG time-series | null | null | 10.3390/e23010005 | null | q-bio.NC eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The idea to estimate the statistical interdependence among (interacting)
brain regions has motivated numerous researchers to investigate how the
resulting connectivity patterns and networks may organize themselves under any
conceivable scenario. Even though this idea is not in its initial stages, its
practical application is still far from widespread. One contributing cause may
be the proliferation of different approaches that aim to capture the
underlying correlation among the (interacting) units. This issue has probably
helped to hinder comparison among different studies. Not only do all these
approaches go under the same name (functional connectivity), but they have
often been tested and validated using different methods, making it difficult
to understand to what extent they are similar. In this study, we aim to
compare a set of different approaches commonly used to estimate functional
connectivity on a public EEG dataset representing a
possible realistic scenario. As expected, our results show that source-level
EEG connectivity estimates and the derived network measures, even though
pointing in the same direction, may display substantial dependency on the
(often arbitrary) choice of the selected connectivity metric and thresholding
approach. In our opinion, the observed variability reflects ambiguity and
concern that should always be discussed when reporting findings based on any
connectivity metric.
| [
{
"created": "Fri, 10 Jul 2020 14:01:31 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Nov 2020 18:32:50 GMT",
"version": "v2"
}
] | 2021-02-03 | [
[
"Fraschini",
"Matteo",
""
],
[
"La Cava",
"Simone Maurizio",
""
],
[
"Didaci",
"Luca",
""
],
[
"Barberini",
"Luigi",
""
]
] | The idea to estimate the statistical interdependence among (interacting) brain regions has motivated numerous researchers to investigate how the resulting connectivity patterns and networks may organize themselves under any conceivable scenario. Even though this idea is not in its initial stages, its practical application is still far from widespread. One contributing cause may be the proliferation of different approaches that aim to capture the underlying correlation among the (interacting) units. This issue has probably helped to hinder comparison among different studies. Not only do all these approaches go under the same name (functional connectivity), but they have often been tested and validated using different methods, making it difficult to understand to what extent they are similar. In this study, we aim to compare a set of different approaches commonly used to estimate functional connectivity on a public EEG dataset representing a possible realistic scenario. As expected, our results show that source-level EEG connectivity estimates and the derived network measures, even though pointing in the same direction, may display substantial dependency on the (often arbitrary) choice of the selected connectivity metric and thresholding approach. In our opinion, the observed variability reflects ambiguity and concern that should always be discussed when reporting findings based on any connectivity metric. |
2303.15707 | Jiayu Shang | Jiayu Shang and Cheng Peng and Herui Liao and Xubo Tang and Yanni Sun | PhaBOX: A web server for identifying and characterizing phage contigs in
metagenomic data | 7 pages, 3 figures, 1 table | published on Bioinformatics Advances 2023 | null | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Motivation: There is accumulating evidence showing the important roles of
bacteriophages (phages) in regulating the structure and functions of the
microbiome. However, the lack of easy-to-use, integrated phage analysis
software hampers microbiome-related research from incorporating phages into
the analysis. Results: In this work, we developed a web server, PhaBOX, which
can
comprehensively identify and analyze phage contigs in metagenomic data. It
supports integrated phage analysis, including phage contig identification from
the metagenomic assembly, lifestyle prediction, taxonomic classification, and
host prediction. Instead of treating the algorithms as a black box, PhaBOX also
supports visualization of the essential features for making predictions. The
web server is designed with a user-friendly graphical interface that enables
both informatics-trained and non-specialist users to analyze phages in
microbiome data with ease. Availability: The web server of PhaBOX is available
via: https://phage.ee.cityu.edu.hk. The source code of PhaBOX is available at:
https://github.com/KennthShang/PhaBOX Contact: yannisun@cityu.edu.hk
| [
{
"created": "Tue, 28 Mar 2023 03:29:11 GMT",
"version": "v1"
},
{
"created": "Wed, 12 Apr 2023 05:42:45 GMT",
"version": "v2"
},
{
"created": "Thu, 27 Jul 2023 09:39:35 GMT",
"version": "v3"
}
] | 2023-07-28 | [
[
"Shang",
"Jiayu",
""
],
[
"Peng",
"Cheng",
""
],
[
"Liao",
"Herui",
""
],
[
"Tang",
"Xubo",
""
],
[
"Sun",
"Yanni",
""
]
] | Motivation: There is accumulating evidence showing the important roles of bacteriophages (phages) in regulating the structure and functions of the microbiome. However, the lack of easy-to-use, integrated phage analysis software hampers microbiome-related research from incorporating phages into the analysis. Results: In this work, we developed a web server, PhaBOX, which can comprehensively identify and analyze phage contigs in metagenomic data. It supports integrated phage analysis, including phage contig identification from the metagenomic assembly, lifestyle prediction, taxonomic classification, and host prediction. Instead of treating the algorithms as a black box, PhaBOX also supports visualization of the essential features for making predictions. The web server is designed with a user-friendly graphical interface that enables both informatics-trained and non-specialist users to analyze phages in microbiome data with ease. Availability: The web server of PhaBOX is available via: https://phage.ee.cityu.edu.hk. The source code of PhaBOX is available at: https://github.com/KennthShang/PhaBOX Contact: yannisun@cityu.edu.hk |
1606.00795 | J. C. Phillips | Vedant Sachdeva and J. C. Phillips | Hemoglobin Strain Field Waves and Allometric Functionality | 10 pages, 4 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hemoglobin (Hgb) forms tetramers (dimerized dimers), which enhance its
globular stability and may also facilitate small gas molecule transport, as
shown by recent all-atom Newtonian solvated simulations. Hydropathic
bioinformatic scaling reveals many wave-like features of strained Hgb
structures at the coarse-grained amino acid level, while distinguishing between
these features thermodynamically. Strain fields localized near hemes interfere
with extended strain fields associated with dimer interfacial misfit, resulting
in wavelength-dependent dimer correlation function antiresonances.
| [
{
"created": "Mon, 30 May 2016 15:34:33 GMT",
"version": "v1"
}
] | 2016-06-03 | [
[
"Sachdeva",
"Vedant",
""
],
[
"Phillips",
"J. C.",
""
]
] | Hemoglobin (Hgb) forms tetramers (dimerized dimers), which enhance its globular stability and may also facilitate small gas molecule transport, as shown by recent all-atom Newtonian solvated simulations. Hydropathic bioinformatic scaling reveals many wave-like features of strained Hgb structures at the coarse-grained amino acid level, while distinguishing between these features thermodynamically. Strain fields localized near hemes interfere with extended strain fields associated with dimer interfacial misfit, resulting in wave-length dependent dimer correlation function antiresonances. |
1910.03349 | Jessica Dafflon | Jessica Dafflon, Walter H.L Pinaya, Federico Turkheimer, James H.
Cole, Robert Leech, Mathew A. Harris, Simon R. Cox, Heather C. Whalley,
Andrew M. McIntosh, Peter J. Hellyer | Analysis of an Automated Machine Learning Approach in Brain Predictive
Modelling: A data-driven approach to Predict Brain Age from Cortical
Anatomical Measures | null | null | null | null | q-bio.NC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of machine learning (ML) algorithms has significantly increased in
neuroscience. However, among the vast range of possible ML algorithms, which
one is the optimal model to predict the target variable? What are the
hyperparameters for such a model? Given the plethora of possible answers to
these questions, in recent years automated machine learning (autoML) has
been gaining attention. Here, we apply an autoML library called TPOT which uses
a tree-based representation of machine learning pipelines and conducts a
genetic-programming based approach to find the model and its hyperparameters
that most closely predict the subject's true age. To explore autoML and
evaluate its efficacy within neuroimaging datasets, we chose a problem that has
been the focus of previous extensive study: brain age prediction. Without any
prior knowledge, TPOT was able to scan through the model space and create
pipelines that outperformed the state-of-the-art accuracy for Freesurfer-based
models using only thickness and volume information for anatomical structure. In
particular, we compared the performance of TPOT (mean absolute error (MAE):
$4.612 \pm .124$ years) and a Relevance Vector Regression (MAE $5.474 \pm .140$
years). TPOT also suggested interesting combinations of models that do not
match the currently most used models for brain prediction but generalise well
to
unseen data. AutoML showed promising results as a data-driven approach to find
optimal models for neuroimaging applications.
| [
{
"created": "Tue, 8 Oct 2019 11:54:43 GMT",
"version": "v1"
}
] | 2019-10-09 | [
[
"Dafflon",
"Jessica",
""
],
[
"Pinaya",
"Walter H. L",
""
],
[
"Turkheimer",
"Federico",
""
],
[
"Cole",
"James H.",
""
],
[
"Leech",
"Robert",
""
],
[
"Harris",
"Mathew A.",
""
],
[
"Cox",
"Simon R.",
""
],
[
"Whalley",
"Heather C.",
""
],
[
"McIntosh",
"Andrew M.",
""
],
[
"Hellyer",
"Peter J.",
""
]
] | The use of machine learning (ML) algorithms has significantly increased in neuroscience. However, among the vast range of possible ML algorithms, which one is the optimal model to predict the target variable? What are the hyperparameters for such a model? Given the plethora of possible answers to these questions, in recent years automated machine learning (autoML) has been gaining attention. Here, we apply an autoML library called TPOT which uses a tree-based representation of machine learning pipelines and conducts a genetic-programming based approach to find the model and its hyperparameters that most closely predict the subject's true age. To explore autoML and evaluate its efficacy within neuroimaging datasets, we chose a problem that has been the focus of previous extensive study: brain age prediction. Without any prior knowledge, TPOT was able to scan through the model space and create pipelines that outperformed the state-of-the-art accuracy for Freesurfer-based models using only thickness and volume information for anatomical structure. In particular, we compared the performance of TPOT (mean absolute error (MAE): $4.612 \pm .124$ years) and a Relevance Vector Regression (MAE $5.474 \pm .140$ years). TPOT also suggested interesting combinations of models that do not match the currently most used models for brain prediction but generalise well to unseen data. AutoML showed promising results as a data-driven approach to find optimal models for neuroimaging applications. |
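The model comparison in this record ultimately ranks candidate pipelines by mean absolute error (MAE) on held-out data. The sketch below applies that selection criterion to two toy "pipelines" (ordinary least squares vs. a predict-the-mean baseline) on synthetic data; it illustrates only the MAE-based ranking, not TPOT's genetic-programming search over tree-structured pipelines.

```python
import numpy as np

# Synthetic linear "brain age" data: 120 subjects, 4 features.
rng = np.random.default_rng(0)
X = rng.standard_normal((120, 4))
y = X @ np.array([2.0, -1.0, 0.5, 3.0]) + rng.standard_normal(120)
X_tr, X_te, y_tr, y_te = X[:80], X[80:], y[:80], y[80:]

def mae(y_true, y_pred):
    """Mean absolute error: the selection criterion reported in the study."""
    return float(np.mean(np.abs(y_true - y_pred)))

# Two candidate "pipelines": least squares vs. a constant-mean baseline.
coef, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
scores = {
    "ols": mae(y_te, X_te @ coef),
    "mean_baseline": mae(y_te, np.full_like(y_te, y_tr.mean())),
}
best = min(scores, key=scores.get)  # pipeline with the lowest MAE wins
```

An autoML system such as TPOT generalizes this loop: instead of two hand-picked candidates, it searches a large space of preprocessing steps, models, and hyperparameters, still scoring each candidate with a metric like MAE on held-out data.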