id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2203.03840 | Nida Obatake | Nida Obatake, Elise Walker | Newton-Okounkov bodies of chemical reaction systems | 20 pages, 3 figures, 2 tables, 1 appendix | null | null | null | q-bio.MN math.AG | http://creativecommons.org/licenses/by/4.0/ | Despite their noted potential in polynomial-system solving, there are few
concrete examples of Newton-Okounkov bodies arising from applications.
Accordingly, in this paper, we introduce a new application of Newton-Okounkov
body theory to the study of chemical reaction networks, and compute several
examples. An important invariant of a chemical reaction network is its maximum
number of positive steady states, which is realized as the maximum number of
positive real roots of a parametrized polynomial system. Here, we introduce a
new upper bound on this number, namely the `Newton-Okounkov body bound' of a
chemical reaction network. Through explicit examples, we show that the
Newton-Okounkov body bound of a network gives a good upper bound on its maximum
number of positive steady states. We also compare this Newton-Okounkov body
bound to a related upper bound, namely the mixed volume of a chemical reaction
network, and find that it often achieves better bounds.
| [
{
"created": "Tue, 8 Mar 2022 04:20:03 GMT",
"version": "v1"
}
] | 2022-03-09 | [
[
"Obatake",
"Nida",
""
],
[
"Walker",
"Elise",
""
]
] | Despite their noted potential in polynomial-system solving, there are few concrete examples of Newton-Okounkov bodies arising from applications. Accordingly, in this paper, we introduce a new application of Newton-Okounkov body theory to the study of chemical reaction networks, and compute several examples. An important invariant of a chemical reaction network is its maximum number of positive steady states, which is realized as the maximum number of positive real roots of a parametrized polynomial system. Here, we introduce a new upper bound on this number, namely the `Newton-Okounkov body bound' of a chemical reaction network. Through explicit examples, we show that the Newton-Okounkov body bound of a network gives a good upper bound on its maximum number of positive steady states. We also compare this Newton-Okounkov body bound to a related upper bound, namely the mixed volume of a chemical reaction network, and find that it often achieves better bounds. |
1706.04640 | Fernando Antoneli Jr | Luiza Guimar\~aes, Diogo Castro, Bruno Gorzoni, Luiz Mario Ramos
Janini, Fernando Antoneli | Stochastic Modeling and Simulation of Viral Evolution | 34 pages, 10 figures, 4 tables, 1 appendix | Bulletin of Mathematical Biology, Volume 81, April 2019, 1031-1069 | 10.1007/s11538-018-00550-4 | null | q-bio.PE math.PR physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RNA viruses comprise vast populations of closely related, but highly
genetically diverse, entities known as quasispecies. Understanding the
mechanisms by which this extreme diversity is generated and maintained is
fundamental when approaching viral persistence and pathobiology in infected
hosts. In this paper, we access quasispecies theory through a mathematical
model based on the theory of multitype branching processes, to better
understand the roles of mechanisms resulting in viral diversity, persistence
and extinction. We accomplish this understanding by a combination of
computational simulations and the theoretical analysis of the model. In order
to perform the simulations, we have implemented the mathematical model into a
computational platform capable of running simulations and presenting the
results in a graphical format in real time. Among other things, we show that
the establishment of virus populations may display four distinct regimes from
its introduction into new hosts until achieving equilibrium or undergoing
extinction. Also, we were able to simulate different fitness distributions
representing distinct environments within a host which could either be
favorable or hostile to the viral success. We addressed the most used
mechanisms for explaining the extinction of RNA virus populations called lethal
mutagenesis and mutational meltdown. We were able to demonstrate a
correspondence between these two mechanisms implying the existence of a
unifying principle leading to the extinction of RNA viruses.
| [
{
"created": "Wed, 14 Jun 2017 19:12:15 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Nov 2017 16:22:18 GMT",
"version": "v2"
},
{
"created": "Sun, 21 Jun 2020 14:49:27 GMT",
"version": "v3"
}
] | 2021-05-27 | [
[
"Guimarães",
"Luiza",
""
],
[
"Castro",
"Diogo",
""
],
[
"Gorzoni",
"Bruno",
""
],
[
"Janini",
"Luiz Mario Ramos",
""
],
[
"Antoneli",
"Fernando",
""
]
] | RNA viruses comprise vast populations of closely related, but highly genetically diverse, entities known as quasispecies. Understanding the mechanisms by which this extreme diversity is generated and maintained is fundamental when approaching viral persistence and pathobiology in infected hosts. In this paper, we access quasispecies theory through a mathematical model based on the theory of multitype branching processes, to better understand the roles of mechanisms resulting in viral diversity, persistence and extinction. We accomplish this understanding by a combination of computational simulations and the theoretical analysis of the model. In order to perform the simulations, we have implemented the mathematical model into a computational platform capable of running simulations and presenting the results in a graphical format in real time. Among other things, we show that the establishment of virus populations may display four distinct regimes from its introduction into new hosts until achieving equilibrium or undergoing extinction. Also, we were able to simulate different fitness distributions representing distinct environments within a host which could either be favorable or hostile to the viral success. We addressed the most used mechanisms for explaining the extinction of RNA virus populations called lethal mutagenesis and mutational meltdown. We were able to demonstrate a correspondence between these two mechanisms implying the existence of a unifying principle leading to the extinction of RNA viruses. |
2210.16813 | Yongtong Wu | Yongtong Wu, Kejia Hu, Shenquan Liu | Computational model advance deep brain stimulation for Parkinson's
disease | 27 pages, 8 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Deep brain stimulation(DBS)has become an effective intervention for advanced
Parkinson's disease, but the exact mechanism of DBS is still unclear. In this
review, we discuss the history of DBS, the anatomy and internal architecture of
the basal ganglia(BG), the abnormal pathological changes of the BG in
Parkinson's disease, and how computational models can help understand and
advance DBS. We also describe two types of models:mathematical theoretical
models and clinical predictive models. Mathematical theoretical models simulate
neurons or neural networks of BG to shed light on the mechanistic principle
underlying DBS, while clinical predictive models focus more on patients'
outcomes, helping to adapt treatment plans for each patient and advance novel
electrode designs. Finally, we provide insights and an outlook on future
technologies.
| [
{
"created": "Sun, 30 Oct 2022 11:21:38 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Mar 2023 12:06:41 GMT",
"version": "v2"
}
] | 2023-03-24 | [
[
"Wu",
"Yongtong",
""
],
[
"Hu",
"Kejia",
""
],
[
"Liu",
"Shenquan",
""
]
] | Deep brain stimulation (DBS) has become an effective intervention for advanced Parkinson's disease, but the exact mechanism of DBS is still unclear. In this review, we discuss the history of DBS, the anatomy and internal architecture of the basal ganglia (BG), the abnormal pathological changes of the BG in Parkinson's disease, and how computational models can help understand and advance DBS. We also describe two types of models: mathematical theoretical models and clinical predictive models. Mathematical theoretical models simulate neurons or neural networks of BG to shed light on the mechanistic principle underlying DBS, while clinical predictive models focus more on patients' outcomes, helping to adapt treatment plans for each patient and advance novel electrode designs. Finally, we provide insights and an outlook on future technologies. |
1506.00373 | Mariusz Pietruszka PhD | Mariusz Pietruszka and Aleksandra Haduch-Sendecka | Frequency landscape of tip-growing plants | 9 pages, 3 figures | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been interesting that nearly all of the ion activities that have been
analysed thus far have exhibited oscillations that are tightly coupled to
growth. Here, we present discrete Fourier transform (DFT) spectra with a finite
sampling of tip-growing cells and organs that were obtained from voltage
measurements of the elongating coleoptiles of maize in situ. The electromotive
force (EMF) oscillations (~ 0.1 {\mu}V) were measured in a simple but highly
sensitive RL circuit, in which the solenoid was initially placed at the tip of
the specimen and then was moved thus changing its position in relation to
growth (EMF can be measured first at the tip, then at the sub-apical part and
finally at the shank). The influx- and efflux-induced oscillations of
Ca$^{2+}$, along with H$^{+}$, K$^{+}$ and Cl$^{-}$ were densely sampled
(preserving the Nyquist theorem in order to 'grasp the structure' of the
pulse), the logarithmic amplitude of pulse spectrum was calculated, and the
detected frequencies, which displayed a periodic sequence of pulses, were
compared with the literature data. A band of life vital individual pulses was
obtained in a single run of the experiment, which not only allowed the
fundamental frequencies (and intensities of the processes) to be determined but
also permitted the phase relations of the various transport processes in the
plasma membrane and tonoplast to be established. A discrete frequency spectrum
(like the hydrogen spectrum in quantum physics) was achieved for a growing
plant for the first time, while all of the metabolic and enzymatic functions of
the life cell cycle were preserved using this totally non-invasive treatment.
| [
{
"created": "Mon, 1 Jun 2015 07:56:17 GMT",
"version": "v1"
},
{
"created": "Tue, 28 Jul 2015 07:26:43 GMT",
"version": "v2"
}
] | 2015-07-29 | [
[
"Pietruszka",
"Mariusz",
""
],
[
"Haduch-Sendecka",
"Aleksandra",
""
]
] | It has been interesting that nearly all of the ion activities that have been analysed thus far have exhibited oscillations that are tightly coupled to growth. Here, we present discrete Fourier transform (DFT) spectra with a finite sampling of tip-growing cells and organs that were obtained from voltage measurements of the elongating coleoptiles of maize in situ. The electromotive force (EMF) oscillations (~ 0.1 {\mu}V) were measured in a simple but highly sensitive RL circuit, in which the solenoid was initially placed at the tip of the specimen and then was moved thus changing its position in relation to growth (EMF can be measured first at the tip, then at the sub-apical part and finally at the shank). The influx- and efflux-induced oscillations of Ca$^{2+}$, along with H$^{+}$, K$^{+}$ and Cl$^{-}$ were densely sampled (preserving the Nyquist theorem in order to 'grasp the structure' of the pulse), the logarithmic amplitude of pulse spectrum was calculated, and the detected frequencies, which displayed a periodic sequence of pulses, were compared with the literature data. A band of life vital individual pulses was obtained in a single run of the experiment, which not only allowed the fundamental frequencies (and intensities of the processes) to be determined but also permitted the phase relations of the various transport processes in the plasma membrane and tonoplast to be established. A discrete frequency spectrum (like the hydrogen spectrum in quantum physics) was achieved for a growing plant for the first time, while all of the metabolic and enzymatic functions of the life cell cycle were preserved using this totally non-invasive treatment. |
1801.02665 | Wenpo Yao | Wenpo Yao, Wenli Yao and Jun Wang | Symbolic relative entropy in quantifying nonlinear dynamics of
equalities-involved heartbeats | The theory underlying the symbolic relative
entropy on nonlinear dynamics in our manuscript might be somewhat misleading
and needs further analysis and discussion | null | null | null | q-bio.QM nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Symbolic relative entropy, an efficient nonlinear complexity parameter
measuring probabilistic divergences of symbolic sequences, is proposed in our
nonlinear dynamics analysis of heart rates considering equal states. Equalities
are not rare in discrete heartbeats because of the limits of resolution of
signals collection, and more importantly equal states contain underlying
important cardiac regulation information which is neglected by some chaotic
deterministic parameters and temporal asymmetric measurements. The relative
entropy of symbolization associated with equal states has satisfied nonlinear
dynamics complexity detections in heartbeats and shows advantages to some
nonlinear dynamics parameters without considering equalities. Researches on
cardiac activities suggest the highest probabilistic divergence of the healthy
young heart rates and highlight the facts that heart diseases and aging reduce
the nonlinear dynamical complexity of heart rates.
| [
{
"created": "Tue, 2 Jan 2018 12:26:45 GMT",
"version": "v1"
},
{
"created": "Sun, 21 Jan 2018 02:52:22 GMT",
"version": "v2"
},
{
"created": "Mon, 28 Jan 2019 04:32:18 GMT",
"version": "v3"
}
] | 2019-01-29 | [
[
"Yao",
"Wenpo",
""
],
[
"Yao",
"Wenli",
""
],
[
"Wang",
"Jun",
""
]
] | Symbolic relative entropy, an efficient nonlinear complexity parameter measuring probabilistic divergences of symbolic sequences, is proposed in our nonlinear dynamics analysis of heart rates considering equal states. Equalities are not rare in discrete heartbeats because of the limits of resolution of signal collection, and more importantly equal states contain underlying important cardiac regulation information which is neglected by some chaotic deterministic parameters and temporal asymmetric measurements. The relative entropy of symbolization associated with equal states has satisfied nonlinear dynamics complexity detections in heartbeats and shows advantages to some nonlinear dynamics parameters without considering equalities. Research on cardiac activities suggests the highest probabilistic divergence of the healthy young heart rates and highlights the facts that heart diseases and aging reduce the nonlinear dynamical complexity of heart rates. |
2104.11567 | Aditya Sengar | Aditya Sengar, Thomas E. Ouldridge, Oliver Henrich, Lorenzo Rovigatti,
Petr Sulc | A primer on the oxDNA model of DNA: When to use it, how to simulate it
and how to interpret the results | null | Front. Mol. Biosci. 8, 693710 (2021) | 10.3389/fmolb.2021.693710 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | The oxDNA model of DNA has been applied widely to systems in biology,
biophysics and nanotechnology. It is currently available via two independent
open source packages. Here we present a set of clearly-documented exemplar
simulations that simultaneously provide both an introduction to simulating the
model, and a review of the model's fundamental properties. We outline how
simulation results can be interpreted in terms of -- and feed into our
understanding of -- less detailed models that operate at larger length scales,
and provide guidance on whether simulating a system with oxDNA is worthwhile.
| [
{
"created": "Fri, 23 Apr 2021 12:49:59 GMT",
"version": "v1"
}
] | 2022-09-26 | [
[
"Sengar",
"Aditya",
""
],
[
"Ouldridge",
"Thomas E.",
""
],
[
"Henrich",
"Oliver",
""
],
[
"Rovigatti",
"Lorenzo",
""
],
[
"Sulc",
"Petr",
""
]
] | The oxDNA model of DNA has been applied widely to systems in biology, biophysics and nanotechnology. It is currently available via two independent open source packages. Here we present a set of clearly-documented exemplar simulations that simultaneously provide both an introduction to simulating the model, and a review of the model's fundamental properties. We outline how simulation results can be interpreted in terms of -- and feed into our understanding of -- less detailed models that operate at larger length scales, and provide guidance on whether simulating a system with oxDNA is worthwhile. |
2306.15428 | Chris Jones | Chris Jones, Damaris Zurell and Karoline Wiesner | Evaluating The Impact Of Species Specialisation On Ecological Network
Robustness Using Analytic Methods | 22 pages, 12 figures, 2 tables | null | null | null | q-bio.PE math.CO physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | Ecological networks describe the interactions between different species,
informing us of how they rely on one another for food, pollination and
survival. If a species in an ecosystem is under threat of extinction, it can
affect other species in the system and possibly result in their secondary
extinction as well. Understanding how (primary) extinctions cause secondary
extinctions on ecological networks has been considered previously using
computational methods. However, these methods do not provide an explanation for
the properties which make ecological networks robust, and can be
computationally expensive. We develop a new analytic model for predicting
secondary extinctions which requires no non-deterministic computational
simulation. Our model can predict secondary extinctions when primary
extinctions occur at random or due to some targeting based on the number of
links per species or risk of extinction, and can be applied to an ecological
network of any number of layers. Using our model, we consider how false
positives and negatives in network data affect predictions for network
robustness. We have also extended the model to predict scenarios in which
secondary extinctions occur once species lose a certain percentage of
interaction strength, and to model the loss of interactions as opposed to just
species extinction. From our model, it is possible to derive new analytic
results such as how ecological networks are most robust when secondary species
degree variance is minimised. Additionally, we show that both specialisation
and generalisation in distribution of interaction strength can be advantageous
for network robustness, depending upon the extinction scenario being
considered.
| [
{
"created": "Tue, 27 Jun 2023 12:38:35 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jul 2023 13:42:44 GMT",
"version": "v2"
}
] | 2023-07-06 | [
[
"Jones",
"Chris",
""
],
[
"Zurell",
"Damaris",
""
],
[
"Wiesner",
"Karoline",
""
]
] | Ecological networks describe the interactions between different species, informing us of how they rely on one another for food, pollination and survival. If a species in an ecosystem is under threat of extinction, it can affect other species in the system and possibly result in their secondary extinction as well. Understanding how (primary) extinctions cause secondary extinctions on ecological networks has been considered previously using computational methods. However, these methods do not provide an explanation for the properties which make ecological networks robust, and can be computationally expensive. We develop a new analytic model for predicting secondary extinctions which requires no non-deterministic computational simulation. Our model can predict secondary extinctions when primary extinctions occur at random or due to some targeting based on the number of links per species or risk of extinction, and can be applied to an ecological network of any number of layers. Using our model, we consider how false positives and negatives in network data affect predictions for network robustness. We have also extended the model to predict scenarios in which secondary extinctions occur once species lose a certain percentage of interaction strength, and to model the loss of interactions as opposed to just species extinction. From our model, it is possible to derive new analytic results such as how ecological networks are most robust when secondary species degree variance is minimised. Additionally, we show that both specialisation and generalisation in distribution of interaction strength can be advantageous for network robustness, depending upon the extinction scenario being considered. |
1502.02908 | Stefan Engblom | Pavol Bauer, Stefan Engblom, Stefan Widgren | Fast event-based epidemiological simulations on national scales | 27 pages, 5 figures | Int. J. High Perf. Comput. Appl. 30(4):438--453 (2016) | 10.1177/1094342016635723 | null | q-bio.PE cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a computational modeling framework for data-driven simulations and
analysis of infectious disease spread in large populations. For the purpose of
efficient simulations, we devise a parallel solution algorithm targeting
multi-socket shared memory architectures. The model integrates infectious
dynamics as continuous-time Markov chains and available data such as animal
movements or aging are incorporated as externally defined events. To bring out
parallelism and accelerate the computations, we decompose the spatial domain
and optimize cross-boundary communication using dependency-aware task
scheduling. Using registered livestock data at a high spatio-temporal
resolution, we demonstrate that our approach not only is resilient to varying
model configurations, but also scales on all physical cores at realistic work
loads. Finally, we show that these very features enable the solution of inverse
problems on national scales.
| [
{
"created": "Tue, 10 Feb 2015 13:48:46 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Nov 2015 15:48:54 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Jan 2016 13:57:07 GMT",
"version": "v3"
}
] | 2018-02-19 | [
[
"Bauer",
"Pavol",
""
],
[
"Engblom",
"Stefan",
""
],
[
"Widgren",
"Stefan",
""
]
] | We present a computational modeling framework for data-driven simulations and analysis of infectious disease spread in large populations. For the purpose of efficient simulations, we devise a parallel solution algorithm targeting multi-socket shared memory architectures. The model integrates infectious dynamics as continuous-time Markov chains and available data such as animal movements or aging are incorporated as externally defined events. To bring out parallelism and accelerate the computations, we decompose the spatial domain and optimize cross-boundary communication using dependency-aware task scheduling. Using registered livestock data at a high spatio-temporal resolution, we demonstrate that our approach not only is resilient to varying model configurations, but also scales on all physical cores at realistic work loads. Finally, we show that these very features enable the solution of inverse problems on national scales. |
0707.0394 | Davide Cora | Davide Cora, Ferdinando Di Cunto, Michele Caselle, Paolo Provero | Identification of candidate regulatory sequences in mammalian 3' UTRs by
statistical analysis of oligonucleotide distributions | Added two references | BMC Bioinformatics. 2007 May 24;8:174. PMID: 17524134 | null | null | q-bio.GN | null | 3' untranslated regions (3' UTRs) contain binding sites for many regulatory
elements, and in particular for microRNAs (miRNAs). The importance of
miRNA-mediated post-transcriptional regulation has become increasingly clear in
the last few years.
We propose two complementary approaches to the statistical analysis of
oligonucleotide frequencies in mammalian 3' UTRs aimed at the identification of
candidate binding sites for regulatory elements. The first method is based on
the identification of sets of genes characterized by evolutionarily conserved
overrepresentation of an oligonucleotide. The second method is based on the
identification of oligonucleotides showing statistically significant strand
asymmetry in their distribution in 3' UTRs.
Both methods are able to identify many previously known binding sites located
in 3'UTRs, and in particular seed regions of known miRNAs. Many new candidates
are proposed for experimental verification.
| [
{
"created": "Tue, 3 Jul 2007 11:39:13 GMT",
"version": "v1"
},
{
"created": "Mon, 16 Jul 2007 23:30:34 GMT",
"version": "v2"
}
] | 2007-07-17 | [
[
"Cora",
"Davide",
""
],
[
"Di Cunto",
"Ferdinando",
""
],
[
"Caselle",
"Michele",
""
],
[
"Provero",
"Paolo",
""
]
] | 3' untranslated regions (3' UTRs) contain binding sites for many regulatory elements, and in particular for microRNAs (miRNAs). The importance of miRNA-mediated post-transcriptional regulation has become increasingly clear in the last few years. We propose two complementary approaches to the statistical analysis of oligonucleotide frequencies in mammalian 3' UTRs aimed at the identification of candidate binding sites for regulatory elements. The first method is based on the identification of sets of genes characterized by evolutionarily conserved overrepresentation of an oligonucleotide. The second method is based on the identification of oligonucleotides showing statistically significant strand asymmetry in their distribution in 3' UTRs. Both methods are able to identify many previously known binding sites located in 3'UTRs, and in particular seed regions of known miRNAs. Many new candidates are proposed for experimental verification. |
1408.5803 | Pu Tian | Kai Wang, Shiyang Long, Zhiming Zhang, Lanru Liu, Qimeng Wang and Pu
Tian | Ideal gas behavior of rotamerically defined conformers in native
globular proteins | 6 figures in the main text | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein conformational transitions, which are essential for function, may be
driven either by entropy or enthalpy when molecular systems comprising solute
and solvent molecules are the focus. Revealing thermodynamic origin of a given
molecular process is an important but difficult task, and general principles
governing protein conformational distributions remain elusive. Here we
demonstrate that when protein molecules are taken as thermodynamic systems and
solvents being treated as the environment, conformational entropy is an
excellent proxy for free energy and is sufficient to explain protein
conformational distributions. Specifically, by defining each unique combination
of side chain torsional state as a conformer, the population distribution (or
free energy) on an arbitrarily given order parameter is approximately a linear
function of conformational entropy. Additionally, span of various microscopic
potential energy terms is observed to be highly correlated with both
conformational entropy and free energy. Presently widely utilized free energy
proxies, including minimum potential energy, average potential energy terms by
themselves or in combination with vibrational entropy\cite, are found to
correlate with free energy rather poorly. Therefore, our findings provide a
fundamentally new theoretical base for development of significantly more
reliable and efficient next generation computational tools, where the number of
available conformers,rather than poential energy of microscopic configurations,
is the central focus. We anticipate that many related research fields,
including structure based drug design and discovery, protein design, docking
and prediction of general intermolecular interactions involving proteins, are
expected to benefit greatly.
| [
{
"created": "Mon, 25 Aug 2014 15:23:20 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Apr 2015 22:50:28 GMT",
"version": "v2"
}
] | 2015-05-04 | [
[
"Wang",
"Kai",
""
],
[
"Long",
"Shiyang",
""
],
[
"Zhang",
"Zhiming",
""
],
[
"Liu",
"Lanru",
""
],
[
"Wang",
"Qimeng",
""
],
[
"Tian",
"Pu",
""
]
] | Protein conformational transitions, which are essential for function, may be driven either by entropy or enthalpy when molecular systems comprising solute and solvent molecules are the focus. Revealing thermodynamic origin of a given molecular process is an important but difficult task, and general principles governing protein conformational distributions remain elusive. Here we demonstrate that when protein molecules are taken as thermodynamic systems and solvents being treated as the environment, conformational entropy is an excellent proxy for free energy and is sufficient to explain protein conformational distributions. Specifically, by defining each unique combination of side chain torsional state as a conformer, the population distribution (or free energy) on an arbitrarily given order parameter is approximately a linear function of conformational entropy. Additionally, span of various microscopic potential energy terms is observed to be highly correlated with both conformational entropy and free energy. Presently widely utilized free energy proxies, including minimum potential energy, average potential energy terms by themselves or in combination with vibrational entropy, are found to correlate with free energy rather poorly. Therefore, our findings provide a fundamentally new theoretical base for development of significantly more reliable and efficient next generation computational tools, where the number of available conformers, rather than potential energy of microscopic configurations, is the central focus. We anticipate that many related research fields, including structure based drug design and discovery, protein design, docking and prediction of general intermolecular interactions involving proteins, are expected to benefit greatly. |
2112.04266 | Simon Wein | Simon Wein, Alina Sch\"uller, Ana Maria Tom\'e, Wilhelm M. Malloni,
Mark W. Greenlee, Elmar W. Lang | Forecasting Brain Activity Based on Models of Spatio-Temporal Brain
Dynamics: A Comparison of Graph Neural Network Architectures | null | null | null | null | q-bio.NC stat.ML | http://creativecommons.org/licenses/by/4.0/ | Comprehending the interplay between spatial and temporal characteristics of
neural dynamics can contribute to our understanding of information processing
in the human brain. Graph neural networks (GNNs) provide a new possibility to
interpret graph structured signals like those observed in complex brain
networks. In our study we compare different spatio-temporal GNN architectures
and study their ability to model neural activity distributions obtained in
functional MRI (fMRI) studies. We evaluate the performance of the GNN models on
a variety of scenarios in MRI studies and also compare it to a VAR model, which
is currently often used for directed functional connectivity analysis. We show
that by learning localized functional interactions on the anatomical substrate,
GNN based approaches are able to robustly scale to large network studies, even
when available data are scarce. By including anatomical connectivity as the
physical substrate for information propagation, such GNNs also provide a
multi-modal perspective on directed connectivity analysis, offering a novel
possibility to investigate the spatio-temporal dynamics in brain networks.
| [
{
"created": "Wed, 8 Dec 2021 12:57:13 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Apr 2022 12:59:58 GMT",
"version": "v2"
}
] | 2022-04-29 | [
[
"Wein",
"Simon",
""
],
[
"Schüller",
"Alina",
""
],
[
"Tomé",
"Ana Maria",
""
],
[
"Malloni",
"Wilhelm M.",
""
],
[
"Greenlee",
"Mark W.",
""
],
[
"Lang",
"Elmar W.",
""
]
] | Comprehending the interplay between spatial and temporal characteristics of neural dynamics can contribute to our understanding of information processing in the human brain. Graph neural networks (GNNs) provide a new possibility to interpret graph structured signals like those observed in complex brain networks. In our study we compare different spatio-temporal GNN architectures and study their ability to model neural activity distributions obtained in functional MRI (fMRI) studies. We evaluate the performance of the GNN models on a variety of scenarios in MRI studies and also compare it to a VAR model, which is currently often used for directed functional connectivity analysis. We show that by learning localized functional interactions on the anatomical substrate, GNN based approaches are able to robustly scale to large network studies, even when available data are scarce. By including anatomical connectivity as the physical substrate for information propagation, such GNNs also provide a multi-modal perspective on directed connectivity analysis, offering a novel possibility to investigate the spatio-temporal dynamics in brain networks. |
1805.01579 | Iosif Lazaridis | Iosif Lazaridis | The evolutionary history of human populations in Europe | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | I review the evolutionary history of human populations in Europe with an
emphasis on what has been learned in recent years through the study of ancient
DNA. Human populations in Europe ~430-39kya (archaic Europeans) included
Neandertals and their ancestors, who were genetically differentiated from other
archaic Eurasians (such as the Denisovans of Siberia), as well as modern
humans. Modern humans arrived to Europe by ~45kya, and are first genetically
attested by ~39kya when they were still mixing with Neandertals. The first
Europeans who were recognizably genetically related to modern ones appeared in
the genetic record shortly thereafter at ~37kya. At ~15kya a largely
homogeneous set of hunter-gatherers became dominant in most of Europe, but with
some admixture from Siberian hunter-gatherers in the eastern part of the
continent. These hunter-gatherers were joined by migrants from the Near East
beginning at ~8kya: Anatolian farmers settled most of mainland Europe, and
migrants from the Caucasus reached eastern Europe, forming steppe populations.
After ~5kya there was migration from the steppe into mainland Europe and vice
versa. Present-day Europeans (ignoring the long-distance migrations of the
modern era) are largely the product of this Bronze Age collision of steppe
pastoralists with Neolithic farmers.
| [
{
"created": "Fri, 4 May 2018 00:43:47 GMT",
"version": "v1"
}
] | 2018-05-07 | [
[
"Lazaridis",
"Iosif",
""
]
] | I review the evolutionary history of human populations in Europe with an emphasis on what has been learned in recent years through the study of ancient DNA. Human populations in Europe ~430-39kya (archaic Europeans) included Neandertals and their ancestors, who were genetically differentiated from other archaic Eurasians (such as the Denisovans of Siberia), as well as modern humans. Modern humans arrived in Europe by ~45kya, and are first genetically attested by ~39kya when they were still mixing with Neandertals. The first Europeans who were recognizably genetically related to modern ones appeared in the genetic record shortly thereafter at ~37kya. At ~15kya a largely homogeneous set of hunter-gatherers became dominant in most of Europe, but with some admixture from Siberian hunter-gatherers in the eastern part of the continent. These hunter-gatherers were joined by migrants from the Near East beginning at ~8kya: Anatolian farmers settled most of mainland Europe, and migrants from the Caucasus reached eastern Europe, forming steppe populations. After ~5kya there was migration from the steppe into mainland Europe and vice versa. Present-day Europeans (ignoring the long-distance migrations of the modern era) are largely the product of this Bronze Age collision of steppe pastoralists with Neolithic farmers. |
2402.11762 | Tomoharu Suda | Tomoharu Suda | Effective Kinetics of Chemical Reaction Networks | 19 pages, 0 figures. Added a reference and a remark | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chemical reaction network theory is a powerful framework to describe and
analyze chemical systems. While much about the concentration profile in an
equilibrium state can be determined in terms of the graph structure, the
overall reaction's time evolution depends on the network's kinetic rate
function. In this article, we consider the problem of the effective kinetics of
a chemical reaction network regarded as a conversion system from the feeding
species to products. We define the notion of effective kinetics as a partial
solution of a system of non-autonomous ordinary differential equations
determined from a chemical reaction network. Examples of actual calculations
include the Michaelis-Menten mechanism, for which it is confirmed that our
notion of effective kinetics yields the classical formula. Further, we
introduce the notion of straight-line solutions of non-autonomous ordinary
differential equations to formalize the situation where a well-defined reaction
rate exists and consider its relation with the quasi-stationary state
approximation used in microkinetics. Our considerations here give a unified
framework to formulate the reaction rate of chemical reaction networks.
| [
{
"created": "Mon, 19 Feb 2024 01:25:33 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Feb 2024 00:04:22 GMT",
"version": "v2"
}
] | 2024-02-29 | [
[
"Suda",
"Tomoharu",
""
]
] | Chemical reaction network theory is a powerful framework to describe and analyze chemical systems. While much about the concentration profile in an equilibrium state can be determined in terms of the graph structure, the overall reaction's time evolution depends on the network's kinetic rate function. In this article, we consider the problem of the effective kinetics of a chemical reaction network regarded as a conversion system from the feeding species to products. We define the notion of effective kinetics as a partial solution of a system of non-autonomous ordinary differential equations determined from a chemical reaction network. Examples of actual calculations include the Michaelis-Menten mechanism, for which it is confirmed that our notion of effective kinetics yields the classical formula. Further, we introduce the notion of straight-line solutions of non-autonomous ordinary differential equations to formalize the situation where a well-defined reaction rate exists and consider its relation with the quasi-stationary state approximation used in microkinetics. Our considerations here give a unified framework to formulate the reaction rate of chemical reaction networks. |
2003.12936 | Yevgeniy Kovchegov | Evgenia Chunikhina, Paul Logan, Yevgeniy Kovchegov, Anatoly
Yambartsev, Debashis Mondal, Andrey Morgun | The C-SHIFT algorithm for normalizing covariances | null | null | null | null | q-bio.GN q-bio.QM stat.AP stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Omics technologies are powerful tools for analyzing patterns in gene
expression data for thousands of genes. Due to a number of systematic
variations in experiments, the raw gene expression data is often obfuscated by
undesirable technical noises. Various normalization techniques were designed in
an attempt to remove these non-biological errors prior to any statistical
analysis. One of the reasons for normalizing data is the need for recovering
the covariance matrix used in gene network analysis. In this paper, we
introduce a novel normalization technique, called the covariance shift
(C-SHIFT) method. This normalization algorithm uses optimization techniques
together with the blessing of dimensionality philosophy and energy minimization
hypothesis for covariance matrix recovery under additive noise (in biology,
known as the bias). Thus, it is perfectly suited for the analysis of
logarithmic gene expression data. Numerical experiments on synthetic data
demonstrate the method's advantage over the classical normalization techniques.
Namely, the comparison is made with Rank, Quantile, cyclic LOESS (locally
estimated scatterplot smoothing), and MAD (median absolute deviation)
normalization methods. We also evaluate the performance of C-SHIFT algorithm on
real biological data.
| [
{
"created": "Sun, 29 Mar 2020 03:24:51 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Aug 2021 06:53:14 GMT",
"version": "v2"
}
] | 2021-08-06 | [
[
"Chunikhina",
"Evgenia",
""
],
[
"Logan",
"Paul",
""
],
[
"Kovchegov",
"Yevgeniy",
""
],
[
"Yambartsev",
"Anatoly",
""
],
[
"Mondal",
"Debashis",
""
],
[
"Morgun",
"Andrey",
""
]
] | Omics technologies are powerful tools for analyzing patterns in gene expression data for thousands of genes. Due to a number of systematic variations in experiments, the raw gene expression data is often obfuscated by undesirable technical noise. Various normalization techniques were designed in an attempt to remove these non-biological errors prior to any statistical analysis. One of the reasons for normalizing data is the need for recovering the covariance matrix used in gene network analysis. In this paper, we introduce a novel normalization technique, called the covariance shift (C-SHIFT) method. This normalization algorithm uses optimization techniques together with the blessing of dimensionality philosophy and energy minimization hypothesis for covariance matrix recovery under additive noise (in biology, known as the bias). Thus, it is perfectly suited for the analysis of logarithmic gene expression data. Numerical experiments on synthetic data demonstrate the method's advantage over the classical normalization techniques. Namely, the comparison is made with Rank, Quantile, cyclic LOESS (locally estimated scatterplot smoothing), and MAD (median absolute deviation) normalization methods. We also evaluate the performance of the C-SHIFT algorithm on real biological data. |
1301.4247 | Alfredo Rodriguez | R. Martin, R. Godinez, A. O. Rodriguez | Functional Magnetic Resonance Imaging: a study of malnourished rats | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Malnutrition is a main public health problem in developing countries.
Incidence is increasing and the mortality rate is still high. Malnutrition can
lead to major problems that can be irreversible if it is present before brain
development is completed. We used BOLD (blood oxygen level-dependent effect)
Functional Magnetic Resonance Imaging to investigate the regions of brain
activity in malnourished rats. The food competition method was applied to a rat
model to provoke malnutrition during lactation. The weight increase is delayed
even if there is plenty of milk available. To localize those regions of activity
resulting from the trigeminal nerve stimulation, the vibrissae-barrel axis was
employed due to the functional and morphological correlation between the
vibrissae and the barrels. BOLD response changes caused by the trigeminal nerve
stimulation on brain activity of malnourished and control rats were obtained at
7T. Results showed a major neuronal activity in malnourished rats on regions
like cerebellum, somatosensorial cortex, hippocampus, and hypothalamus. This is
the first study in malnourished rats and illustrates BOLD activation in various
brain structures.
| [
{
"created": "Wed, 16 Jan 2013 01:17:44 GMT",
"version": "v1"
}
] | 2013-01-21 | [
[
"Martin",
"R.",
""
],
[
"Godinez",
"R.",
""
],
[
"Rodriguez",
"A. O.",
""
]
] | Malnutrition is a main public health problem in developing countries. Incidence is increasing and the mortality rate is still high. Malnutrition can lead to major problems that can be irreversible if it is present before brain development is completed. We used BOLD (blood oxygen level-dependent effect) Functional Magnetic Resonance Imaging to investigate the regions of brain activity in malnourished rats. The food competition method was applied to a rat model to provoke malnutrition during lactation. The weight increase is delayed even if there is plenty of milk available. To localize those regions of activity resulting from the trigeminal nerve stimulation, the vibrissae-barrel axis was employed due to the functional and morphological correlation between the vibrissae and the barrels. BOLD response changes caused by the trigeminal nerve stimulation on brain activity of malnourished and control rats were obtained at 7T. Results showed major neuronal activity in malnourished rats in regions such as the cerebellum, somatosensory cortex, hippocampus, and hypothalamus. This is the first study in malnourished rats and illustrates BOLD activation in various brain structures. |
1409.4978 | Iain Johnston | Ben P. Williams, Iain G. Johnston, Sarah Covshoff, Julian M. Hibberd | Phenotypic landscape inference reveals multiple evolutionary paths to
C$_4$ photosynthesis | null | eLife 2 e00961 (2013) | 10.7554/eLife.00961 | null | q-bio.PE stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | C$_4$ photosynthesis has independently evolved from the ancestral C$_3$
pathway in at least 60 plant lineages, but, as with other complex traits, how
it evolved is unclear. Here we show that the polyphyletic appearance of C$_4$
photosynthesis is associated with diverse and flexible evolutionary paths that
group into four major trajectories. We conducted a meta-analysis of 18 lineages
containing species that use C$_3$, C$_4$, or intermediate C$_3$-C$_4$ forms of
photosynthesis to parameterise a 16-dimensional phenotypic landscape. We then
developed and experimentally verified a novel Bayesian approach based on a
hidden Markov model that predicts how the C$_4$ phenotype evolved. The
alternative evolutionary histories underlying the appearance of C$_4$
photosynthesis were determined by ancestral lineage and initial phenotypic
alterations unrelated to photosynthesis. We conclude that the order of C$_4$
trait acquisition is flexible and driven by non-photosynthetic drivers. This
flexibility will have facilitated the convergent evolution of this complex
trait.
| [
{
"created": "Wed, 17 Sep 2014 12:59:01 GMT",
"version": "v1"
}
] | 2014-09-18 | [
[
"Williams",
"Ben P.",
""
],
[
"Johnston",
"Iain G.",
""
],
[
"Covshoff",
"Sarah",
""
],
[
"Hibberd",
"Julian M.",
""
]
] | C$_4$ photosynthesis has independently evolved from the ancestral C$_3$ pathway in at least 60 plant lineages, but, as with other complex traits, how it evolved is unclear. Here we show that the polyphyletic appearance of C$_4$ photosynthesis is associated with diverse and flexible evolutionary paths that group into four major trajectories. We conducted a meta-analysis of 18 lineages containing species that use C$_3$, C$_4$, or intermediate C$_3$-C$_4$ forms of photosynthesis to parameterise a 16-dimensional phenotypic landscape. We then developed and experimentally verified a novel Bayesian approach based on a hidden Markov model that predicts how the C$_4$ phenotype evolved. The alternative evolutionary histories underlying the appearance of C$_4$ photosynthesis were determined by ancestral lineage and initial phenotypic alterations unrelated to photosynthesis. We conclude that the order of C$_4$ trait acquisition is flexible and driven by non-photosynthetic drivers. This flexibility will have facilitated the convergent evolution of this complex trait. |
1501.03578 | Liang Liu | Liang Liu, Zhenxiang Xi, Shaoyuan Wu, Charles Davis, Scott V. Edwards | Estimating phylogenetic trees from genome-scale data | 39 pages, 3 figures | null | 10.1111/nyas.12747 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As researchers collect increasingly large molecular data sets to reconstruct
the Tree of Life, the heterogeneity of signals in the genomes of diverse
organisms poses challenges for traditional phylogenetic analysis. A class of
phylogenetic methods known as "species tree methods" have been proposed to
directly address one important source of gene tree heterogeneity, namely the
incomplete lineage sorting or deep coalescence that occurs when evolving
lineages radiate rapidly, resulting in a diversity of gene trees from a single
underlying species tree. Although such methods are gaining in popularity, they
are being adopted with caution in some quarters, in part because of an
increasing number of examples of strong phylogenetic conflict between
concatenation or supermatrix methods and species tree methods. Here we review
theory and empirical examples that help clarify these conflicts. Thinking of
concatenation as a special case of the more general model provided by the
multispecies coalescent can help explain a number of differences in the
behavior of the two methods on phylogenomic data sets. Recent work suggests
that species tree methods are more robust than concatenation approaches to some
of the classic challenges of phylogenetic analysis, including rapidly evolving
sites in DNA sequences, base compositional heterogeneity and long branch
attraction. We show that approaches such as binning, designed to augment the
signal in species tree analyses, can distort the distribution of gene trees and
are inconsistent. Computationally efficient species tree methods that
incorporate biological realism are a key to phylogenetic analysis of whole
genome data.
| [
{
"created": "Thu, 15 Jan 2015 05:21:26 GMT",
"version": "v1"
}
] | 2015-09-11 | [
[
"Liu",
"Liang",
""
],
[
"Xi",
"Zhenxiang",
""
],
[
"Wu",
"Shaoyuan",
""
],
[
"Davis",
"Charles",
""
],
[
"Edwards",
"Scott V.",
""
]
] | As researchers collect increasingly large molecular data sets to reconstruct the Tree of Life, the heterogeneity of signals in the genomes of diverse organisms poses challenges for traditional phylogenetic analysis. A class of phylogenetic methods known as "species tree methods" have been proposed to directly address one important source of gene tree heterogeneity, namely the incomplete lineage sorting or deep coalescence that occurs when evolving lineages radiate rapidly, resulting in a diversity of gene trees from a single underlying species tree. Although such methods are gaining in popularity, they are being adopted with caution in some quarters, in part because of an increasing number of examples of strong phylogenetic conflict between concatenation or supermatrix methods and species tree methods. Here we review theory and empirical examples that help clarify these conflicts. Thinking of concatenation as a special case of the more general model provided by the multispecies coalescent can help explain a number of differences in the behavior of the two methods on phylogenomic data sets. Recent work suggests that species tree methods are more robust than concatenation approaches to some of the classic challenges of phylogenetic analysis, including rapidly evolving sites in DNA sequences, base compositional heterogeneity and long branch attraction. We show that approaches such as binning, designed to augment the signal in species tree analyses, can distort the distribution of gene trees and are inconsistent. Computationally efficient species tree methods that incorporate biological realism are a key to phylogenetic analysis of whole genome data. |
q-bio/0511022 | Eytan Domany | Yuval Tabach, Michael Milyavsky, Igor Shats, Ran Brosh, Or Zuk, Assif
Yitzhaky, Roberto Mantovani, Eytan Domany, Varda Rotter, Yitzhak Pilpel | The promoters of human cell cycle genes integrate signals from two tumor
suppressive pathways during cellular transformation | To appear in Molecular Systems Biology | null | null | null | q-bio.MN q-bio.QM | null | Deciphering regulatory events that drive malignant transformation represents
a major challenge for systems biology. Here we analyzed genome-wide
transcription profiling of an in-vitro transformation process. We focused on a
cluster of genes whose expression levels increased as a function of p53 and
p16INK4A tumor suppressors inactivation. This cluster predominantly consists of
cell cycle genes and constitutes a signature of a diversity of cancers. By
linking expression profiles of the genes in the cluster with the dynamic
behavior of p53 and p16INK4A, we identified a promoter architecture that
integrates signals from the two tumor suppressive channels and that maps their
activity onto distinct levels of expression of the cell cycle genes, which in
turn, correspond to different cellular proliferation rates. Taking components
of the mitotic spindle as an example, we experimentally verified our
predictions that p53-mediated transcriptional repression of several of these
novel targets is dependent on the activities of p21, NFY and E2F. Our study
demonstrates how a well-controlled transformation process allows linking
between gene expression, promoter architecture and activity of upstream
signaling molecules.
| [
{
"created": "Tue, 15 Nov 2005 09:13:10 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Tabach",
"Yuval",
""
],
[
"Milyavsky",
"Michael",
""
],
[
"Shats",
"Igor",
""
],
[
"Brosh",
"Ran",
""
],
[
"Zuk",
"Or",
""
],
[
"Yitzhaky",
"Assif",
""
],
[
"Mantovani",
"Roberto",
""
],
[
"Domany"... | Deciphering regulatory events that drive malignant transformation represents a major challenge for systems biology. Here we analyzed genome-wide transcription profiling of an in-vitro transformation process. We focused on a cluster of genes whose expression levels increased as a function of p53 and p16INK4A tumor suppressors inactivation. This cluster predominantly consists of cell cycle genes and constitutes a signature of a diversity of cancers. By linking expression profiles of the genes in the cluster with the dynamic behavior of p53 and p16INK4A, we identified a promoter architecture that integrates signals from the two tumor suppressive channels and that maps their activity onto distinct levels of expression of the cell cycle genes, which in turn, correspond to different cellular proliferation rates. Taking components of the mitotic spindle as an example, we experimentally verified our predictions that p53-mediated transcriptional repression of several of these novel targets is dependent on the activities of p21, NFY and E2F. Our study demonstrates how a well-controlled transformation process allows linking between gene expression, promoter architecture and activity of upstream signaling molecules. |
1910.08566 | Chiara Villa | Chiara Villa, Mark A. J. Chaplain, Tommaso Lorenzi | Modelling the emergence of phenotypic heterogeneity in vascularised
tumours | 21 pages, 4 figures | null | null | null | q-bio.TO nlin.AO q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a mathematical study of the emergence of phenotypic heterogeneity
in vascularised tumours. Our study is based on formal asymptotic analysis and
numerical simulations of a system of non-local parabolic equations that
describes the phenotypic evolution of tumour cells and their nonlinear dynamic
interactions with the oxygen, which is released from the intratumoural vascular
network. Numerical simulations are carried out both in the case of arbitrary
distributions of intratumour blood vessels and in the case where the
intratumoural vascular network is reconstructed from clinical images obtained
using dynamic optical coherence tomography. The results obtained support a more
in-depth theoretical understanding of the eco-evolutionary process which
underpins the emergence of phenotypic heterogeneity in vascularised tumours. In
particular, our results offer a theoretical basis for empirical evidence
indicating that the phenotypic properties of cancer cells in vascularised
tumours vary with the distance from the blood vessels, and establish a relation
between the degree of tumour tissue vascularisation and the level of
intratumour phenotypic heterogeneity.
| [
{
"created": "Fri, 18 Oct 2019 18:00:23 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Sep 2020 21:53:54 GMT",
"version": "v2"
},
{
"created": "Thu, 1 Oct 2020 08:25:35 GMT",
"version": "v3"
},
{
"created": "Thu, 25 Mar 2021 12:54:17 GMT",
"version": "v4"
}
] | 2021-03-26 | [
[
"Villa",
"Chiara",
""
],
[
"Chaplain",
"Mark A. J.",
""
],
[
"Lorenzi",
"Tommaso",
""
]
] | We present a mathematical study of the emergence of phenotypic heterogeneity in vascularised tumours. Our study is based on formal asymptotic analysis and numerical simulations of a system of non-local parabolic equations that describes the phenotypic evolution of tumour cells and their nonlinear dynamic interactions with the oxygen, which is released from the intratumoural vascular network. Numerical simulations are carried out both in the case of arbitrary distributions of intratumour blood vessels and in the case where the intratumoural vascular network is reconstructed from clinical images obtained using dynamic optical coherence tomography. The results obtained support a more in-depth theoretical understanding of the eco-evolutionary process which underpins the emergence of phenotypic heterogeneity in vascularised tumours. In particular, our results offer a theoretical basis for empirical evidence indicating that the phenotypic properties of cancer cells in vascularised tumours vary with the distance from the blood vessels, and establish a relation between the degree of tumour tissue vascularisation and the level of intratumour phenotypic heterogeneity. |
2008.04849 | Sarath Yasodharan | Shubhada Agrawal, Siddharth Bhandari, Anirban Bhattacharjee, Anand
Deo, Narendra M. Dixit, Prahladh Harsha, Sandeep Juneja, Poonam Kesarwani,
Aditya Krishna Swamy, Preetam Patil, Nihesh Rathod, Ramprasad Saptharishi,
Sharad Shriram, Piyush Srivastava, Rajesh Sundaresan, Nidhin Koshy Vaidhiyan,
Sarath Yasodharan | City-Scale Agent-Based Simulators for the Study of Non-Pharmaceutical
Interventions in the Context of the COVID-19 Epidemic | 56 pages | Journal of the Indian Institute of Science, volume 100, pages
809-847, 2020 | 10.1007/s41745-020-00211-3 | null | q-bio.PE cs.OH physics.soc-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We highlight the usefulness of city-scale agent-based simulators in studying
various non-pharmaceutical interventions to manage an evolving pandemic. We
ground our studies in the context of the COVID-19 pandemic and demonstrate the
power of the simulator via several exploratory case studies in two
metropolises, Bengaluru and Mumbai. Such tools become common-place in any city
administration's tool kit in our march towards digital health.
| [
{
"created": "Tue, 11 Aug 2020 16:49:04 GMT",
"version": "v1"
}
] | 2021-01-05 | [
[
"Agrawal",
"Shubhada",
""
],
[
"Bhandari",
"Siddharth",
""
],
[
"Bhattacharjee",
"Anirban",
""
],
[
"Deo",
"Anand",
""
],
[
"Dixit",
"Narendra M.",
""
],
[
"Harsha",
"Prahladh",
""
],
[
"Juneja",
"Sandeep",
    ... | We highlight the usefulness of city-scale agent-based simulators in studying various non-pharmaceutical interventions to manage an evolving pandemic. We ground our studies in the context of the COVID-19 pandemic and demonstrate the power of the simulator via several exploratory case studies in two metropolises, Bengaluru and Mumbai. Such tools become commonplace in any city administration's toolkit in our march towards digital health. |
1506.07314 | Steven Frank | Steven A. Frank | d'Alembert's direct and inertial forces acting on populations: the Price
equation and the fundamental theorem of natural selection | version 2: New Methods section, revised throughout for minor
corrections and clarity, version 3: minor editing, publication information | Entropy 17:7087-7100 (2015) | 10.3390/e17107087 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | I develop a framework for interpreting the forces that act on any population
described by frequencies. The conservation of total frequency, or total
probability, shapes the characteristics of force. I begin with Fisher's
fundamental theorem of natural selection. That theorem partitions the total
evolutionary change of a population into two components. The first component is
the partial change caused by the direct force of natural selection, holding
constant all aspects of the environment. The second component is the partial
change caused by the changing environment. I demonstrate that Fisher's
partition of total change into the direct force of selection and the forces
from the changing environmental frame of reference is identical to d'Alembert's
principle of mechanics, which separates the work done by the direct forces from
the work done by the inertial forces associated with the changing frame of
reference. In d'Alembert's principle, there exist inertial forces from a change
in the frame of reference that exactly balance the direct forces. I show that
the conservation of total probability strongly shapes the form of the balance
between the direct and inertial forces. I then use the strong results for
conserved probability to obtain general results for the change in any system
quantity, such as biological fitness or energy. Those general results derive
from simple coordinate changes between frequencies and system quantities.
Ultimately, d'Alembert's separation of direct and inertial forces provides deep
conceptual insight into the interpretation of forces and the unification of
disparate fields of study.
| [
{
"created": "Wed, 24 Jun 2015 10:43:17 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Sep 2015 16:28:02 GMT",
"version": "v2"
},
{
"created": "Tue, 20 Oct 2015 18:37:25 GMT",
"version": "v3"
}
] | 2015-10-21 | [
[
"Frank",
"Steven A.",
""
]
] | I develop a framework for interpreting the forces that act on any population described by frequencies. The conservation of total frequency, or total probability, shapes the characteristics of force. I begin with Fisher's fundamental theorem of natural selection. That theorem partitions the total evolutionary change of a population into two components. The first component is the partial change caused by the direct force of natural selection, holding constant all aspects of the environment. The second component is the partial change caused by the changing environment. I demonstrate that Fisher's partition of total change into the direct force of selection and the forces from the changing environmental frame of reference is identical to d'Alembert's principle of mechanics, which separates the work done by the direct forces from the work done by the inertial forces associated with the changing frame of reference. In d'Alembert's principle, there exist inertial forces from a change in the frame of reference that exactly balance the direct forces. I show that the conservation of total probability strongly shapes the form of the balance between the direct and inertial forces. I then use the strong results for conserved probability to obtain general results for the change in any system quantity, such as biological fitness or energy. Those general results derive from simple coordinate changes between frequencies and system quantities. Ultimately, d'Alembert's separation of direct and inertial forces provides deep conceptual insight into the interpretation of forces and the unification of disparate fields of study. |
2204.08525 | Yasser Roudi | Iv\'an A. Davidovich, Yasser Roudi | Bayesian interpolation for power laws in neural data analysis | null | null | null | null | q-bio.NC q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Power laws arise in a variety of phenomena ranging from matter undergoing
phase transition to the distribution of word frequencies in the English
language. Usually, their presence is only apparent when data is abundant, and
accurately determining their exponents often requires even larger amounts of
data. As the scale of recordings in neuroscience becomes larger, an increasing
number of studies attempt to characterise potential power-law relationships in
neural data. In this paper, we aim to discuss the potential pitfalls that one
faces in such efforts and to promote a Bayesian interpolation framework for
this purpose. We apply this framework to synthetic data and to data from a
recent study of large-scale recordings in mouse primary visual cortex (V1),
where the exponent of a power-law scaling in the data played an important role:
its value was argued to determine whether the population's stimulus-response
relationship is smooth, and experimental data was provided to confirm that this
is indeed so. Our analysis shows that with such data types and sizes as we
consider here, the best-fit values found for the parameters of the power law
and the uncertainty for these estimates are heavily dependent on the noise
model assumed for the estimation, the range of the data chosen, and (with all
other things being equal) the particular recordings. It is thus challenging to
offer a reliable statement about the exponents of the power law. Our analysis,
however, shows that this does not affect the conclusions regarding the
smoothness of the population response to low-dimensional stimuli but casts
doubt on those to natural images. We discuss the implications of this result
for the neural code in the V1 and offer the approach discussed here as a
framework that future studies, perhaps exploring larger ranges of data, can
employ as their starting point to examine power-law scalings in neural data.
| [
{
"created": "Mon, 18 Apr 2022 19:14:27 GMT",
"version": "v1"
}
] | 2022-04-20 | [
[
"Davidovich",
"Iván A.",
""
],
[
"Roudi",
"Yasser",
""
]
] | Power laws arise in a variety of phenomena ranging from matter undergoing phase transition to the distribution of word frequencies in the English language. Usually, their presence is only apparent when data is abundant, and accurately determining their exponents often requires even larger amounts of data. As the scale of recordings in neuroscience becomes larger, an increasing number of studies attempt to characterise potential power-law relationships in neural data. In this paper, we aim to discuss the potential pitfalls that one faces in such efforts and to promote a Bayesian interpolation framework for this purpose. We apply this framework to synthetic data and to data from a recent study of large-scale recordings in mouse primary visual cortex (V1), where the exponent of a power-law scaling in the data played an important role: its value was argued to determine whether the population's stimulus-response relationship is smooth, and experimental data was provided to confirm that this is indeed so. Our analysis shows that with such data types and sizes as we consider here, the best-fit values found for the parameters of the power law and the uncertainty for these estimates are heavily dependent on the noise model assumed for the estimation, the range of the data chosen, and (with all other things being equal) the particular recordings. It is thus challenging to offer a reliable statement about the exponents of the power law. Our analysis, however, shows that this does not affect the conclusions regarding the smoothness of the population response to low-dimensional stimuli but casts doubt on those to natural images. We discuss the implications of this result for the neural code in the V1 and offer the approach discussed here as a framework that future studies, perhaps exploring larger ranges of data, can employ as their starting point to examine power-law scalings in neural data. |
1303.5986 | Rory Donovan | Rory M. Donovan, Andrew J. Sedgewick, James R. Faeder, Daniel M.
Zuckerman | Efficient Stochastic Simulation of Chemical Kinetics Networks using a
Weighted Ensemble of Trajectories | 11 pages, 6 figures, 4 supplemental files | J. Chem. Phys. 139, 115105 (2013) | 10.1063/1.4821167 | null | q-bio.MN physics.bio-ph physics.chem-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We apply the "weighted ensemble" (WE) simulation strategy, previously
employed in the context of molecular dynamics simulations, to a series of
systems-biology models that range in complexity from one-dimensional to a
system with 354 species and 3680 reactions. WE is relatively easy to implement,
does not require extensive hand-tuning of parameters, does not depend on the
details of the simulation algorithm, and can facilitate the simulation of
extremely rare events.
For the coupled stochastic reaction systems we study, WE is able to produce
accurate and efficient approximations of the joint probability distribution for
all chemical species for all time t. WE is also able to efficiently extract
mean first passage times for the systems, via the construction of a
steady-state condition with feedback. In all cases studied here, WE results
agree with independent calculations, but significantly enhance the precision
with which rare or slow processes can be characterized. Speedups over
"brute-force" in sampling rare events via the Gillespie direct Stochastic
Simulation Algorithm range from ~10^12 to ~10^20 for rare states in a
distribution, and ~10^2 to ~10^4 for finding mean first passage times.
| [
{
"created": "Sun, 24 Mar 2013 20:04:18 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Mar 2013 20:48:20 GMT",
"version": "v2"
}
] | 2015-03-11 | [
[
"Donovan",
"Rory M.",
""
],
[
"Sedgewick",
"Andrew J.",
""
],
[
"Faeder",
"James R.",
""
],
[
"Zuckerman",
"Daniel M.",
""
]
] | We apply the "weighted ensemble" (WE) simulation strategy, previously employed in the context of molecular dynamics simulations, to a series of systems-biology models that range in complexity from one-dimensional to a system with 354 species and 3680 reactions. WE is relatively easy to implement, does not require extensive hand-tuning of parameters, does not depend on the details of the simulation algorithm, and can facilitate the simulation of extremely rare events. For the coupled stochastic reaction systems we study, WE is able to produce accurate and efficient approximations of the joint probability distribution for all chemical species for all time t. WE is also able to efficiently extract mean first passage times for the systems, via the construction of a steady-state condition with feedback. In all cases studied here, WE results agree with independent calculations, but significantly enhance the precision with which rare or slow processes can be characterized. Speedups over "brute-force" in sampling rare events via the Gillespie direct Stochastic Simulation Algorithm range from ~10^12 to ~10^20 for rare states in a distribution, and ~10^2 to ~10^4 for finding mean first passage times. |
1705.09347 | Michael Margaliot | Yoram Zarai and Michael Margaliot and Tamir Tuller | Ribosome Flow Model with Extended Objects | null | null | null | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a deterministic mechanistic model for the flow of ribosomes along
the mRNA molecule, called the ribosome flow model with extended objects
(RFMEO). This model encapsulates many realistic features of translation
including non-homogeneous transition rates along the mRNA, the fact that every
ribosome covers several codons, and the fact that ribosomes cannot overtake one
another.
The RFMEO is a mean-field approximation of an important model from
statistical mechanics called the totally asymmetric simple exclusion process
with extended objects (TASEPEO). We demonstrate that the RFMEO describes
biophysical aspects of translation better than previous mean-field
approximations, and that its predictions correlate well with those of TASEPEO.
However, unlike TASEPEO, the RFMEO is amenable to rigorous analysis using tools
from systems and control theory. We show that the ribosome density profile
along the mRNA in the RFMEO converges to a unique steady-state density that
depends on the length of the mRNA, the transition rates along it, and the
number of codons covered by every ribosome, but not on the initial density of
ribosomes along the mRNA. In particular, the protein production rate also
converges to a unique steady-state. Furthermore, if the transition rates along
the mRNA are periodic with a common period T then the ribosome density along
the mRNA and the protein production rate converge to a unique periodic pattern
with period T, that is, the model entrains to periodic excitations in the
transition rates.
We believe that the RFMEO could be useful for modeling, understanding, and
re-engineering translation as well as other important biological processes.
| [
{
"created": "Thu, 25 May 2017 20:12:50 GMT",
"version": "v1"
}
] | 2017-05-29 | [
[
"Zarai",
"Yoram",
""
],
[
"Margaliot",
"Michael",
""
],
[
"Tuller",
"Tamir",
""
]
] | We study a deterministic mechanistic model for the flow of ribosomes along the mRNA molecule, called the ribosome flow model with extended objects (RFMEO). This model encapsulates many realistic features of translation including non-homogeneous transition rates along the mRNA, the fact that every ribosome covers several codons, and the fact that ribosomes cannot overtake one another. The RFMEO is a mean-field approximation of an important model from statistical mechanics called the totally asymmetric simple exclusion process with extended objects (TASEPEO). We demonstrate that the RFMEO describes biophysical aspects of translation better than previous mean-field approximations, and that its predictions correlate well with those of TASEPEO. However, unlike TASEPEO, the RFMEO is amenable to rigorous analysis using tools from systems and control theory. We show that the ribosome density profile along the mRNA in the RFMEO converges to a unique steady-state density that depends on the length of the mRNA, the transition rates along it, and the number of codons covered by every ribosome, but not on the initial density of ribosomes along the mRNA. In particular, the protein production rate also converges to a unique steady-state. Furthermore, if the transition rates along the mRNA are periodic with a common period T then the ribosome density along the mRNA and the protein production rate converge to a unique periodic pattern with period T, that is, the model entrains to periodic excitations in the transition rates. We believe that the RFMEO could be useful for modeling, understanding, and re-engineering translation as well as other important biological processes. |
1801.07130 | Yifei Qi | Jingxue Wang, Huali Cao, and John Z.H. Zhang, and Yifei Qi | Computational Protein Design with Deep Learning Neural Networks | 16 pages, 5 figures, 3 tables | Scientific Reports 8: 6349 (2018) | 10.1038/s41598-018-24760-x | null | q-bio.QM q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational protein design has a wide variety of applications. Despite its
remarkable success, designing a protein for a given structure and function is
still a challenging task. On the other hand, the number of solved protein
structures is rapidly increasing while the number of unique protein folds has
reached a steady number, suggesting more structural information is being
accumulated on each fold. Deep learning neural networks are a powerful method
for learning such big data sets and have shown superior performance in many
machine learning fields. In this study, we applied the deep learning neural network
approach to computational protein design for predicting the probability of 20
natural amino acids on each residue in a protein. A large set of protein
structures was collected and a multi-layer neural network was constructed. A
number of structural properties were extracted as input features and the best
network achieved an accuracy of 38.3%. Using the network output as residue type
restraints was able to improve the average sequence identity in designing three
natural proteins using Rosetta. Moreover, the predictions from our network show
~3% higher sequence identity than a previous method. Results from this study
may benefit further development of computational protein design methods.
| [
{
"created": "Mon, 22 Jan 2018 14:59:18 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Feb 2018 03:15:40 GMT",
"version": "v2"
}
] | 2018-04-26 | [
[
"Wang",
"Jingxue",
""
],
[
"Cao",
"Huali",
""
],
[
"Zhang",
"John Z. H.",
""
],
[
"Qi",
"Yifei",
""
]
] | Computational protein design has a wide variety of applications. Despite its remarkable success, designing a protein for a given structure and function is still a challenging task. On the other hand, the number of solved protein structures is rapidly increasing while the number of unique protein folds has reached a steady number, suggesting more structural information is being accumulated on each fold. Deep learning neural networks are a powerful method for learning such big data sets and have shown superior performance in many machine learning fields. In this study, we applied the deep learning neural network approach to computational protein design for predicting the probability of 20 natural amino acids on each residue in a protein. A large set of protein structures was collected and a multi-layer neural network was constructed. A number of structural properties were extracted as input features and the best network achieved an accuracy of 38.3%. Using the network output as residue type restraints was able to improve the average sequence identity in designing three natural proteins using Rosetta. Moreover, the predictions from our network show ~3% higher sequence identity than a previous method. Results from this study may benefit further development of computational protein design methods. |
1804.00153 | Cristiano Capone | Cristiano Capone, Guido Gigante, Paolo Del Giudice | Spontaneous activity emerging from an inferred network model captures
complex temporal dynamics of spiking data | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The combination of new recording techniques in neuroscience and powerful
inference methods recently held the promise to recover useful effective models,
at the single neuron or network level, directly from observed data. The value
of a model of course should critically depend on its ability to reproduce the
dynamical behavior of the modeled system; however, few attempts have been made
to inquire into the dynamics of inferred models in neuroscience, and none, to
our knowledge, at the network level. Here we introduce a principled
modification of a widely used generalized linear model (GLM), and learn its
structural and dynamic parameters from ex-vivo spiking data. We show that the
new model is able to capture the most prominent features of the highly
non-stationary and non-linear dynamics displayed by the biological network,
where the reference GLM largely fails. Two ingredients turn out to be key for
success. The first one is a bounded transfer function that makes the single
neuron able to respond to its input in a saturating fashion; beyond its
biological plausibility such property, by limiting the capacity of the neuron
to transfer information, makes the coding more robust in the face of the highly
variable network activity, and noise. The second ingredient is a super-Poisson
spikes generative probabilistic mechanism; this feature, that accounts for the
fact that observations largely undersample the network, allows the model neuron
to more flexibly incorporate the observed activity fluctuations. Taken
together, the two ingredients, without increasing complexity, allow the model
to capture the key dynamic elements. When left free to generate its spontaneous
activity, the inferred model proved able to reproduce not only the
non-stationary population dynamics of the network, but also part of the
fine-grained structure of the dynamics at the single neuron level.
| [
{
"created": "Sat, 31 Mar 2018 10:48:34 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Apr 2018 09:06:53 GMT",
"version": "v2"
}
] | 2018-04-09 | [
[
"Capone",
"Cristiano",
""
],
[
"Gigante",
"Guido",
""
],
[
"Del Giudice",
"Paolo",
""
]
] | The combination of new recording techniques in neuroscience and powerful inference methods recently held the promise to recover useful effective models, at the single neuron or network level, directly from observed data. The value of a model of course should critically depend on its ability to reproduce the dynamical behavior of the modeled system; however, few attempts have been made to inquire into the dynamics of inferred models in neuroscience, and none, to our knowledge, at the network level. Here we introduce a principled modification of a widely used generalized linear model (GLM), and learn its structural and dynamic parameters from ex-vivo spiking data. We show that the new model is able to capture the most prominent features of the highly non-stationary and non-linear dynamics displayed by the biological network, where the reference GLM largely fails. Two ingredients turn out to be key for success. The first one is a bounded transfer function that makes the single neuron able to respond to its input in a saturating fashion; beyond its biological plausibility such property, by limiting the capacity of the neuron to transfer information, makes the coding more robust in the face of the highly variable network activity, and noise. The second ingredient is a super-Poisson spikes generative probabilistic mechanism; this feature, that accounts for the fact that observations largely undersample the network, allows the model neuron to more flexibly incorporate the observed activity fluctuations. Taken together, the two ingredients, without increasing complexity, allow the model to capture the key dynamic elements. When left free to generate its spontaneous activity, the inferred model proved able to reproduce not only the non-stationary population dynamics of the network, but also part of the fine-grained structure of the dynamics at the single neuron level. |
1310.5910 | Michael Courtney | Joshua M. Courtney and Michael W. Courtney | Comments on "Analysis of permanent magnets as elasmobranch bycatch
reduction devices in hook-and-line and longline trials" | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recent study (Fish. Bull. 109:394-401 (2011)) purportedly tests two
hypotheses: 1. that the capture of elasmobranchs would be reduced with hooks
containing magnets in comparison with control hooks in hook-and-line and
longline studies. 2. that the presence of permanent magnets on hooks would not
alter teleost capture because teleosts lack the ampullary organ. Review of this
paper shows some inconsistencies in the data supporting the first hypothesis
and insufficient data and poor experimental design to adequately test the
second hypothesis. Further, since several orders of teleosts are known to
possess ampullary organs and demonstrate electroreception, grouping all
teleosts in a study design or data analysis of magnetic hook catch rates is not
warranted. Adequate tests of the hypothesis that permanent magnets or
magnetized hooks do not alter teleost capture require a more careful study
design and much larger sample sizes than O'Connell et al. (Fish. Bull.
109:394-401 (2011)).
| [
{
"created": "Tue, 22 Oct 2013 13:25:56 GMT",
"version": "v1"
}
] | 2013-10-23 | [
[
"Courtney",
"Joshua M.",
""
],
[
"Courtney",
"Michael W.",
""
]
] | A recent study (Fish. Bull. 109:394-401 (2011)) purportedly tests two hypotheses: 1. that the capture of elasmobranchs would be reduced with hooks containing magnets in comparison with control hooks in hook-and-line and longline studies. 2. that the presence of permanent magnets on hooks would not alter teleost capture because teleosts lack the ampullary organ. Review of this paper shows some inconsistencies in the data supporting the first hypothesis and insufficient data and poor experimental design to adequately test the second hypothesis. Further, since several orders of teleosts are known to possess ampullary organs and demonstrate electroreception, grouping all teleosts in a study design or data analysis of magnetic hook catch rates is not warranted. Adequate tests of the hypothesis that permanent magnets or magnetized hooks do not alter teleost capture require a more careful study design and much larger sample sizes than O'Connell et al. (Fish. Bull. 109:394-401 (2011)). |
1508.03354 | Andre Chalom | Andr\'e Chalom and Paulo In\'acio de Knegt L\'opez de Prado | Uncertainty analysis and composite hypothesis under the likelihood
paradigm | null | null | null | null | q-bio.QM stat.ME | http://creativecommons.org/licenses/by-sa/4.0/ | The correct use and interpretation of models depends on several steps, two of
which are the calibration by parameter estimation and the analysis of
uncertainty. In the biological literature, these steps are seldom discussed
together, but they can be seen as fitting pieces of the same puzzle. In
particular, analytical procedures for uncertainty estimation may be masking a
high degree of uncertainty coming from a model with a stable structure, but
insufficient data.
Under a likelihoodist approach, the problem of uncertainty estimation is
closely related to the problem of composite hypothesis. In this paper, we
present a brief historical background on the statistical school of
Likelihoodism, and examine the complex relations between the law of likelihood
and the problem of composite hypothesis, together with the existing proposals
for coping with it. Then, we propose a new integrative methodology for the
uncertainty estimation of models using the information in the collected data.
We argue that this methodology is intuitively appealing under a likelihood
paradigm.
| [
{
"created": "Thu, 13 Aug 2015 20:40:58 GMT",
"version": "v1"
}
] | 2015-08-17 | [
[
"Chalom",
"André",
""
],
[
"de Prado",
"Paulo Inácio de Knegt López",
""
]
] | The correct use and interpretation of models depends on several steps, two of which are the calibration by parameter estimation and the analysis of uncertainty. In the biological literature, these steps are seldom discussed together, but they can be seen as fitting pieces of the same puzzle. In particular, analytical procedures for uncertainty estimation may be masking a high degree of uncertainty coming from a model with a stable structure, but insufficient data. Under a likelihoodist approach, the problem of uncertainty estimation is closely related to the problem of composite hypothesis. In this paper, we present a brief historical background on the statistical school of Likelihoodism, and examine the complex relations between the law of likelihood and the problem of composite hypothesis, together with the existing proposals for coping with it. Then, we propose a new integrative methodology for the uncertainty estimation of models using the information in the collected data. We argue that this methodology is intuitively appealing under a likelihood paradigm. |
1911.00755 | Fernando Racimo | Fernando Racimo, Martin Sikora, Hannes Schroeder, Carles Lalueza-Fox | Beyond broad strokes: sociocultural insights from the study of ancient
genomes | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The amount of sequence data obtained from ancient samples has dramatically
expanded in the last decade, and so have the types of questions that can now be
addressed using ancient DNA. In the field of human history, while ancient DNA
has provided answers to long-standing debates about major movements of people,
it has also recently begun to inform on other important facets of the human
experience. The field is now moving from focusing only on large-scale
supra-regional studies to also taking a more local perspective, shedding light
on socioeconomic processes, inheritance rules, marriage practices and
technological diffusion. In this review, we summarize recent studies showcasing
these types of insights, focusing on the methods used to infer sociocultural
aspects of human behaviour. This work often involves working across disciplines
that have, until recently, evolved in separation. We argue that
multidisciplinary dialogue is crucial for a more integrated and richer
reconstruction of human history, as it can yield extraordinary insights about
past societies, reproductive behaviours and even lifestyle habits that would
not have been possible to obtain otherwise.
| [
{
"created": "Sat, 2 Nov 2019 17:20:32 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Jan 2020 14:26:26 GMT",
"version": "v2"
}
] | 2020-01-08 | [
[
"Racimo",
"Fernando",
""
],
[
"Sikora",
"Martin",
""
],
[
"Schroeder",
"Hannes",
""
],
[
"Lalueza-Fox",
"Carles",
""
]
] | The amount of sequence data obtained from ancient samples has dramatically expanded in the last decade, and so have the types of questions that can now be addressed using ancient DNA. In the field of human history, while ancient DNA has provided answers to long-standing debates about major movements of people, it has also recently begun to inform on other important facets of the human experience. The field is now moving from focusing only on large-scale supra-regional studies to also taking a more local perspective, shedding light on socioeconomic processes, inheritance rules, marriage practices and technological diffusion. In this review, we summarize recent studies showcasing these types of insights, focusing on the methods used to infer sociocultural aspects of human behaviour. This work often involves working across disciplines that have, until recently, evolved in separation. We argue that multidisciplinary dialogue is crucial for a more integrated and richer reconstruction of human history, as it can yield extraordinary insights about past societies, reproductive behaviours and even lifestyle habits that would not have been possible to obtain otherwise. |
1505.00116 | David Steinsaltz | David Steinsaltz and Shripad Tuljapurkar | Stochastic growth rates for life histories with rare migration or
diapause | 32 pages, 4 figures | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The growth of a population divided among spatial sites, with migration
between the sites, is sometimes modelled by a product of random matrices, with
each diagonal element representing the growth rate in a given time period, and
off-diagonal elements the migration rate. If the sites are reinterpreted as age
classes, the same model may apply to a single population with age-dependent
mortality and reproduction.
We consider the case where the off-diagonal elements are small, representing
a situation where there is little migration or, alternatively, where a
deterministic life-history has been slightly disrupted, for example by
introducing a rare delay in development. We examine the asymptotic behaviour of
the long-term growth rate. We show that when the highest growth rate is
attained at two different sites in the absence of migration (which is always
the case when modelling a single age-structured population) the increase in
stochastic growth rate due to a migration rate $\epsilon$ is like $(\log
\epsilon^{-1})^{-1}$ as $\epsilon\downarrow 0$, under fairly generic
conditions. When there is a single site with the highest growth rate the
behavior is more delicate, depending on the tails of the growth rates. For the
case when the log growth rates have Gaussian-like tails we show that the
behavior near zero is like a power of $\epsilon$, and derive upper and lower
bounds for the power in terms of the difference in the growth rates and the
distance between the sites.
| [
{
"created": "Fri, 1 May 2015 08:17:45 GMT",
"version": "v1"
}
] | 2015-05-04 | [
[
"Steinsaltz",
"David",
""
],
[
"Tuljapurkar",
"Shripad",
""
]
] | The growth of a population divided among spatial sites, with migration between the sites, is sometimes modelled by a product of random matrices, with each diagonal element representing the growth rate in a given time period, and off-diagonal elements the migration rate. If the sites are reinterpreted as age classes, the same model may apply to a single population with age-dependent mortality and reproduction. We consider the case where the off-diagonal elements are small, representing a situation where there is little migration or, alternatively, where a deterministic life-history has been slightly disrupted, for example by introducing a rare delay in development. We examine the asymptotic behaviour of the long-term growth rate. We show that when the highest growth rate is attained at two different sites in the absence of migration (which is always the case when modelling a single age-structured population) the increase in stochastic growth rate due to a migration rate $\epsilon$ is like $(\log \epsilon^{-1})^{-1}$ as $\epsilon\downarrow 0$, under fairly generic conditions. When there is a single site with the highest growth rate the behavior is more delicate, depending on the tails of the growth rates. For the case when the log growth rates have Gaussian-like tails we show that the behavior near zero is like a power of $\epsilon$, and derive upper and lower bounds for the power in terms of the difference in the growth rates and the distance between the sites. |
1208.5350 | Man Yi Yim | Man Yi Yim, Ad Aertsen, Stefan Rotter | Impact of intrinsic biophysical diversity on the activity of spiking
neurons | 4 pages, 5 figures | Phys. Rev. E 87, 032710 (2013) | 10.1103/PhysRevE.87.032710 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the effect of intrinsic heterogeneity on the activity of a
population of leaky integrate-and-fire neurons. By rescaling the dynamical
equation, we derive mathematical relations between multiple neuronal parameters
and a fluctuating input noise. To this end, common input to heterogeneous
neurons is conceived as an identical noise with neuron-specific mean and
variance. As a consequence, the neuronal output rates can differ considerably,
and their relative spike timing becomes desynchronized. This theory can
quantitatively explain some recent experimental findings.
| [
{
"created": "Mon, 27 Aug 2012 10:06:12 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Jan 2013 10:27:55 GMT",
"version": "v2"
},
{
"created": "Thu, 31 Jan 2013 14:42:54 GMT",
"version": "v3"
},
{
"created": "Tue, 19 Feb 2013 17:08:00 GMT",
"version": "v4"
}
] | 2013-08-21 | [
[
"Yim",
"Man Yi",
""
],
[
"Aertsen",
"Ad",
""
],
[
"Rotter",
"Stefan",
""
]
] | We study the effect of intrinsic heterogeneity on the activity of a population of leaky integrate-and-fire neurons. By rescaling the dynamical equation, we derive mathematical relations between multiple neuronal parameters and a fluctuating input noise. To this end, common input to heterogeneous neurons is conceived as an identical noise with neuron-specific mean and variance. As a consequence, the neuronal output rates can differ considerably, and their relative spike timing becomes desynchronized. This theory can quantitatively explain some recent experimental findings. |
2304.07273 | Stefano Grasso | Nicol\`o Cangiotti and Stefano Grasso | Genus Comparisons in the Topological Analysis of RNA Structures | 11 pages, 9 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RNA folding prediction remains challenging, but can be also studied using a
topological mathematical approach. In the present paper, the mathematical
method to compute the topological classification of RNA structures and based on
matrix field theory is shortly reviewed, as well as a computational software,
McGenus, used for topological and folding predictions. Additionally, two types
of analysis are performed: the prediction results from McGenus are compared
with topological information extracted from experimentally-determined RNA
structures, and the topology of RNA structures is investigated for biological
significance, in both evolutionary and functional terms. Lastly, we advocate
for more research efforts to be performed at the intersection of
physics-mathematics and biology, and in particular about the possible
contributions that topology can provide to the study of RNA folding and
structure.
| [
{
"created": "Fri, 14 Apr 2023 17:35:44 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Apr 2023 19:41:36 GMT",
"version": "v2"
},
{
"created": "Thu, 11 May 2023 16:14:43 GMT",
"version": "v3"
}
] | 2023-05-12 | [
[
"Cangiotti",
"Nicolò",
""
],
[
"Grasso",
"Stefano",
""
]
] | RNA folding prediction remains challenging, but can also be studied using a topological mathematical approach. In the present paper, the mathematical method, based on matrix field theory, to compute the topological classification of RNA structures is briefly reviewed, as well as a computational software, McGenus, used for topological and folding predictions. Additionally, two types of analysis are performed: the prediction results from McGenus are compared with topological information extracted from experimentally-determined RNA structures, and the topology of RNA structures is investigated for biological significance, in both evolutionary and functional terms. Lastly, we advocate for more research efforts to be performed at the intersection of physics-mathematics and biology, and in particular about the possible contributions that topology can provide to the study of RNA folding and structure.
2007.08374 | Markus D Schirmer | Markus D. Schirmer, Kathleen L. Donahue, Marco J. Nardin, Adrian V.
Dalca, Anne-Katrin Giese, Mark R. Etherton, Steven J. T. Mocking, Elissa C.
McIntosh, John W. Cole, Lukas Holmegaard, Katarina Jood, Jordi Jimenez-Conde,
Steven J. Kittner, Robin Lemmens, James F. Meschia, Jonathan Rosand, Jaume
Roquer, Tatjana Rundek, Ralph L. Sacco MD, Reinhold Schmidt, Pankaj Sharma,
Agnieszka Slowik, Tara M. Stanne, Achala Vagal, Johan Wasselius, Daniel Woo,
Stephen Bevan, Laura Heitsch, Chia-Ling Phuah, Daniel Strbian MD, Turgut
Tatlisumak, Christopher R. Levi, John Attia, Patrick F. McArdle, Bradford B.
Worrall, Ona Wu, Christina Jern, Arne Lindgren, Jane Maguire, Vincent Thijs,
Natalia S. Rost | Brain volume: An important determinant of functional outcome after acute
ischemic stroke | null | Mayo Clinic Proceedings. Vol. 95. No. 5. Elsevier, 2020 | 10.1016/j.mayocp.2020.01.027 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: To determine whether brain volume is associated with functional
outcome after acute ischemic stroke (AIS).
Methods: We analyzed cross-sectional data of the multi-site, international
hospital-based MRI-GENetics Interface Exploration (MRI-GENIE) study (July 1,
2014- March 16, 2019) with clinical brain magnetic resonance imaging (MRI)
obtained on admission for index stroke and functional outcome assessment.
Post-stroke outcome was determined using the modified Rankin Scale (mRS) score
(0-6; 0: asymptomatic; 6 death) recorded between 60-190 days after stroke.
Demographics and other clinical variables including acute stroke severity
(measured as National Institutes of Health Stroke Scale score), vascular risk
factors, and etiologic stroke subtypes (Causative Classification of Stroke)
were recorded during index admission.
Results: Utilizing the data from 912 acute ischemic stroke (AIS) patients
(65+/-15 years of age, 58% male, 57% history of smoking, and 65% hypertensive)
in a generalized linear model, brain volume (per 155.1cm^3 ) was associated
with age (beta -0.3 (per 14.4 years)), male sex (beta 1.0) and prior stroke
(beta -0.2). In the multivariable outcome model, brain volume was an
independent predictor of mRS (beta -0.233), with reduced odds of worse
long-term functional outcomes (OR: 0.8, 95% CI 0.7-0.9) in those with larger
brain volumes.
Conclusions: Larger brain volume quantified on clinical MRI of AIS patients
at time of stroke purports a protective mechanism. The role of brain volume as
a prognostic, protective biomarker has the potential to forge new areas of
research and advance current knowledge of mechanisms of post-stroke recovery.
| [
{
"created": "Thu, 16 Jul 2020 14:51:40 GMT",
"version": "v1"
}
] | 2020-07-17 | [
[
"Schirmer",
"Markus D.",
""
],
[
"Donahue",
"Kathleen L.",
""
],
[
"Nardin",
"Marco J.",
""
],
[
"Dalca",
"Adrian V.",
""
],
[
"Giese",
"Anne-Katrin",
""
],
[
"Etherton",
"Mark R.",
""
],
[
"Mocking",
"Steven J... | Objective: To determine whether brain volume is associated with functional outcome after acute ischemic stroke (AIS). Methods: We analyzed cross-sectional data of the multi-site, international hospital-based MRI-GENetics Interface Exploration (MRI-GENIE) study (July 1, 2014- March 16, 2019) with clinical brain magnetic resonance imaging (MRI) obtained on admission for index stroke and functional outcome assessment. Post-stroke outcome was determined using the modified Rankin Scale (mRS) score (0-6; 0: asymptomatic; 6 death) recorded between 60-190 days after stroke. Demographics and other clinical variables including acute stroke severity (measured as National Institutes of Health Stroke Scale score), vascular risk factors, and etiologic stroke subtypes (Causative Classification of Stroke) were recorded during index admission. Results: Utilizing the data from 912 acute ischemic stroke (AIS) patients (65+/-15 years of age, 58% male, 57% history of smoking, and 65% hypertensive) in a generalized linear model, brain volume (per 155.1cm^3 ) was associated with age (beta -0.3 (per 14.4 years)), male sex (beta 1.0) and prior stroke (beta -0.2). In the multivariable outcome model, brain volume was an independent predictor of mRS (beta -0.233), with reduced odds of worse long-term functional outcomes (OR: 0.8, 95% CI 0.7-0.9) in those with larger brain volumes. Conclusions: Larger brain volume quantified on clinical MRI of AIS patients at time of stroke purports a protective mechanism. The role of brain volume as a prognostic, protective biomarker has the potential to forge new areas of research and advance current knowledge of mechanisms of post-stroke recovery. |
1807.01828 | Dirson Jian Li | Dirson Jian Li | Observations and perspectives on the variation of biodiversity | 39 pages, 8 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Based on statistical analysis of the complete genome sequences, a remote
relationship has been observed between the evolution of the genetic code and
the three domain tree of life. The existence of such a remote relationship needs
to be explained. The unity of the living system throughout the history of life
relies on the common features of life: the homochirality, the genetic code and
the universal genome format. The universal genome format has been observed in
the genomic codon distributions as a common feature of life at the sequence
level. A main aim of this article is to reconstruct and to explain the
Phanerozoic biodiversity curve. It has been observed that the exponential
growth rate of the Phanerozoic biodiversity curve is about equal to the
exponential growth rate of genome size evolution. Hence it is strongly
indicated that the expansion of genomes causes the exponential trend of the
Phanerozoic biodiversity curve, where the conservative property during the
evolution of life is guaranteed by the universal genome format at the sequence
level. In addition, a consensus curve based on the climatic and eustatic data
is obtained to explain the fluctuations of the Phanerozoic biodiversity curve.
Thus, the reconstructed biodiversity curve based on genomic, climatic and
eustatic data agrees with Sepkoski's curve based on fossil data. The five mass
extinctions can be discerned in this reconstructed biodiversity curve, which
indicates a tectonic cause of the mass extinctions. And the declining
origination rate and extinction rate throughout the Phanerozoic eon might be
due to the growth trend in genome size evolution.
| [
{
"created": "Thu, 5 Jul 2018 02:04:56 GMT",
"version": "v1"
}
] | 2018-07-06 | [
[
"Li",
"Dirson Jian",
""
]
] | Based on statistical analysis of the complete genome sequences, a remote relationship has been observed between the evolution of the genetic code and the three domain tree of life. The existence of such a remote relationship needs to be explained. The unity of the living system throughout the history of life relies on the common features of life: the homochirality, the genetic code and the universal genome format. The universal genome format has been observed in the genomic codon distributions as a common feature of life at the sequence level. A main aim of this article is to reconstruct and to explain the Phanerozoic biodiversity curve. It has been observed that the exponential growth rate of the Phanerozoic biodiversity curve is about equal to the exponential growth rate of genome size evolution. Hence it is strongly indicated that the expansion of genomes causes the exponential trend of the Phanerozoic biodiversity curve, where the conservative property during the evolution of life is guaranteed by the universal genome format at the sequence level. In addition, a consensus curve based on the climatic and eustatic data is obtained to explain the fluctuations of the Phanerozoic biodiversity curve. Thus, the reconstructed biodiversity curve based on genomic, climatic and eustatic data agrees with Sepkoski's curve based on fossil data. The five mass extinctions can be discerned in this reconstructed biodiversity curve, which indicates a tectonic cause of the mass extinctions. And the declining origination rate and extinction rate throughout the Phanerozoic eon might be due to the growth trend in genome size evolution.
1903.05979 | Vince Grolmusz | Mate Fellner and Balint Varga and Vince Grolmusz | The Frequent Complete Subgraphs in the Human Connectome | null | null | 10.1371/journal.pone.0236883 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While it is still not possible to describe the neural-level connections of
the human brain, we can map the human connectome with several hundred vertices,
by the application of diffusion-MRI based techniques. In these graphs, the
nodes correspond to anatomically identified gray matter areas of the brain,
while the edges correspond to the axonal fibers, connecting these areas. In our
previous contributions, we have described numerous graph-theoretical phenomena
of the human connectomes. Here we map the frequent complete subgraphs of the
human brain networks: in these subgraphs, every pair of vertices is connected
by an edge. We also examine sex differences in the results. The mapping of the
frequent subgraphs gives robust substructures in the graph: if a subgraph is
present in 80% of the graphs, then, most probably, it could not be an
artifact of the measurement or the data processing workflow. We list here the
frequent complete subgraphs of the human braingraphs of 414 subjects, each with
463 nodes, with a frequency threshold of 80%, and identify 812 complete
subgraphs, which are more frequent in male and 224 complete subgraphs, which
are more frequent in female connectomes.
| [
{
"created": "Thu, 14 Mar 2019 13:26:38 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Mar 2019 18:31:13 GMT",
"version": "v2"
}
] | 2020-09-09 | [
[
"Fellner",
"Mate",
""
],
[
"Varga",
"Balint",
""
],
[
"Grolmusz",
"Vince",
""
]
] | While it is still not possible to describe the neural-level connections of the human brain, we can map the human connectome with several hundred vertices, by the application of diffusion-MRI based techniques. In these graphs, the nodes correspond to anatomically identified gray matter areas of the brain, while the edges correspond to the axonal fibers, connecting these areas. In our previous contributions, we have described numerous graph-theoretical phenomena of the human connectomes. Here we map the frequent complete subgraphs of the human brain networks: in these subgraphs, every pair of vertices is connected by an edge. We also examine sex differences in the results. The mapping of the frequent subgraphs gives robust substructures in the graph: if a subgraph is present in 80% of the graphs, then, most probably, it could not be an artifact of the measurement or the data processing workflow. We list here the frequent complete subgraphs of the human braingraphs of 414 subjects, each with 463 nodes, with a frequency threshold of 80%, and identify 812 complete subgraphs, which are more frequent in male and 224 complete subgraphs, which are more frequent in female connectomes.
1707.08914 | Anatoly Zhokhin | A.S. Zhokhin, V.P. Gachok | Transition to Chaos in the Kinetic Model of Cellulose Hydrolysis Under
Enzyme Biosynthesis Control | 3 pages, 7 figures | null | null | null | q-bio.CB nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the paper the kinetic model of the biochemical process of cellulose
hydrolysis with cell application is presented. The model includes enzyme
biosynthesis control and in open conditions it represents the dynamical system
in the preturbulent regime. The limit cycle and its five consecutive
bifurcations of the doubling-period type are found. Also the limit regime of
the system - the strange attractor - is presented.
| [
{
"created": "Thu, 27 Jul 2017 15:52:54 GMT",
"version": "v1"
}
] | 2020-07-09 | [
[
"Zhokhin",
"A. S.",
""
],
[
"Gachok",
"V. P.",
""
]
] | In the paper the kinetic model of the biochemical process of cellulose hydrolysis with cell application is presented. The model includes enzyme biosynthesis control and in open conditions it represents the dynamical system in the preturbulent regime. The limit cycle and its five consecutive bifurcations of the doubling-period type are found. Also the limit regime of the system - the strange attractor - is presented.
1510.07882 | Mareike Fischer | Mareike Fischer, Volkmar Liebscher | On the Balance of Unrooted Trees | 16 pages, 8 figures | null | null | null | q-bio.PE math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We solve a class of optimization problems for (phylogenetic) $X$-trees or
their shapes. These problems have recently appeared in different contexts, e.g.
in the context of the impact of tree shapes on the size of TBR neighborhoods,
but so far these problems have not been characterized and solved in a
systematic way. In this work we generalize the concept and also present several
applications. Moreover, our results give rise to a nice notion of balance for
trees. Unsurprisingly, so-called caterpillars are the most unbalanced tree
shapes, but it turns out that balanced tree shapes cannot be described so
easily as they need not even be unique.
| [
{
"created": "Tue, 27 Oct 2015 12:31:44 GMT",
"version": "v1"
}
] | 2015-10-28 | [
[
"Fischer",
"Mareike",
""
],
[
"Liebscher",
"Volkmar",
""
]
] | We solve a class of optimization problems for (phylogenetic) $X$-trees or their shapes. These problems have recently appeared in different contexts, e.g. in the context of the impact of tree shapes on the size of TBR neighborhoods, but so far these problems have not been characterized and solved in a systematic way. In this work we generalize the concept and also present several applications. Moreover, our results give rise to a nice notion of balance for trees. Unsurprisingly, so-called caterpillars are the most unbalanced tree shapes, but it turns out that balanced tree shapes cannot be described so easily as they need not even be unique. |
1308.2273 | Naoki Masuda Dr. | Yohei Nakajima and Naoki Masuda | Evolutionary dynamics in finite populations with zealots | 1 table, 4 figures | Journal of Mathematical Biology, 70, 465-484 (2015) | 10.1007/s00285-014-0770-2 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate evolutionary dynamics of two-strategy matrix games with
zealots in finite populations. Zealots are assumed to take either strategy
regardless of the fitness. When the strategy selected by the zealots is the
same, the fixation of the strategy selected by the zealots is a trivial
outcome. We study fixation time in this scenario. We show that the fixation
time is divided into three main regimes, in one of which the fixation time is
short, and in the other two the fixation time is exponentially long in terms of
the population size. Different from the case without zealots, there is a
threshold selection intensity below which the fixation is fast for an arbitrary
payoff matrix. We illustrate our results with examples of various social
dilemma games.
| [
{
"created": "Sat, 10 Aug 2013 03:59:11 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Mar 2016 10:41:31 GMT",
"version": "v2"
}
] | 2016-03-18 | [
[
"Nakajima",
"Yohei",
""
],
[
"Masuda",
"Naoki",
""
]
] | We investigate evolutionary dynamics of two-strategy matrix games with zealots in finite populations. Zealots are assumed to take either strategy regardless of the fitness. When the strategy selected by the zealots is the same, the fixation of the strategy selected by the zealots is a trivial outcome. We study fixation time in this scenario. We show that the fixation time is divided into three main regimes, in one of which the fixation time is short, and in the other two the fixation time is exponentially long in terms of the population size. Different from the case without zealots, there is a threshold selection intensity below which the fixation is fast for an arbitrary payoff matrix. We illustrate our results with examples of various social dilemma games. |
1003.5828 | Arne Traulsen | Chaitanya S. Gokhale, Yoh Iwasa, Martin A. Nowak and Arne Traulsen | The pace of evolution across fitness valleys | null | Journal of Theoretical Biology 259 (2009) 613-620 | 10.1016/j.jtbi.2009.04.011 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How fast does a population evolve from one fitness peak to another? We study
the dynamics of evolving, asexually reproducing populations in which a certain
number of mutations jointly confer a fitness advantage. We consider the time
until a population has evolved from one fitness peak to another one with a
higher fitness. The order of mutations can either be fixed or random. If the
order of mutations is fixed, then the population follows a metaphorical ridge,
a single path. If the order of mutations is arbitrary, then there are many ways
to evolve to the higher fitness state. We address the time required for
fixation in such scenarios and study how it is affected by the order of
mutations, the population size, the fitness values and the mutation rate.
| [
{
"created": "Tue, 30 Mar 2010 14:10:51 GMT",
"version": "v1"
}
] | 2010-03-31 | [
[
"Gokhale",
"Chaitanya S.",
""
],
[
"Iwasa",
"Yoh",
""
],
[
"Nowak",
"Martin A.",
""
],
[
"Traulsen",
"Arne",
""
]
] | How fast does a population evolve from one fitness peak to another? We study the dynamics of evolving, asexually reproducing populations in which a certain number of mutations jointly confer a fitness advantage. We consider the time until a population has evolved from one fitness peak to another one with a higher fitness. The order of mutations can either be fixed or random. If the order of mutations is fixed, then the population follows a metaphorical ridge, a single path. If the order of mutations is arbitrary, then there are many ways to evolve to the higher fitness state. We address the time required for fixation in such scenarios and study how it is affected by the order of mutations, the population size, the fitness values and the mutation rate. |
1006.0019 | Teruhiko Yoneyama | Teruhiko Yoneyama and Mukkai S. Krishnamoorthy | Simulating the Spread of Influenza Pandemic of 1918-1919 Considering the
Effect of the First World War | null | null | null | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by/3.0/ | The Influenza Pandemic of 1918-1919, also called Spanish Flu Pandemic, was
one of the severest pandemics in history. It is thought that the First World
War much influenced the spread of the pandemic. In this paper, we model the
pandemic considering both civil and military traffic. We propose a hybrid model
to determine how the pandemic spread through the world. Our approach considers
both the SEIR-based model for local areas and the network model for global
connection between countries. First, we reproduce the situation in 12
countries. Then, we simulate another scenario: there was no military traffic
during the pandemic, to determine the influence of the war on
the pandemic. By considering the simulation results, we find that the influence
of the war varies in countries; in countries which were deeply involved in the
war, the infections were much influenced by the war, while in countries which
were not much engaged in the war, the infections were not influenced by the
war.
| [
{
"created": "Tue, 11 May 2010 22:04:52 GMT",
"version": "v1"
}
] | 2010-06-02 | [
[
"Yoneyama",
"Teruhiko",
""
],
[
"Krishnamoorthy",
"Mukkai S.",
""
]
] | The Influenza Pandemic of 1918-1919, also called Spanish Flu Pandemic, was one of the severest pandemics in history. It is thought that the First World War much influenced the spread of the pandemic. In this paper, we model the pandemic considering both civil and military traffic. We propose a hybrid model to determine how the pandemic spread through the world. Our approach considers both the SEIR-based model for local areas and the network model for global connection between countries. First, we reproduce the situation in 12 countries. Then, we simulate another scenario: there was no military traffic during the pandemic, to determine the influence of the war on the pandemic. By considering the simulation results, we find that the influence of the war varies in countries; in countries which were deeply involved in the war, the infections were much influenced by the war, while in countries which were not much engaged in the war, the infections were not influenced by the war.
1103.4347 | Ian Holmes | Oscar Westesson, Gerton Lunter, Benedict Paten, Ian Holmes | Phylogenetic automata, pruning, and multiple alignment | 96 pages: background, informal tutorial, and formal definitions | null | null | null | q-bio.PE q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an extension of Felsenstein's algorithm to indel models defined on
entire sequences, without the need to condition on one multiple alignment. The
algorithm makes use of a generalization from probabilistic substitution
matrices to weighted finite-state transducers. Our approach may equivalently be
viewed as a probabilistic formulation of progressive multiple sequence
alignment, using partial-order graphs to represent ensemble profiles of
ancestral sequences. We present a hierarchical stochastic approximation
technique which makes this algorithm tractable for alignment analyses of
reasonable size.
| [
{
"created": "Tue, 22 Mar 2011 19:01:39 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Feb 2012 20:59:16 GMT",
"version": "v2"
},
{
"created": "Thu, 23 Oct 2014 07:25:39 GMT",
"version": "v3"
}
] | 2014-10-24 | [
[
"Westesson",
"Oscar",
""
],
[
"Lunter",
"Gerton",
""
],
[
"Paten",
"Benedict",
""
],
[
"Holmes",
"Ian",
""
]
] | We present an extension of Felsenstein's algorithm to indel models defined on entire sequences, without the need to condition on one multiple alignment. The algorithm makes use of a generalization from probabilistic substitution matrices to weighted finite-state transducers. Our approach may equivalently be viewed as a probabilistic formulation of progressive multiple sequence alignment, using partial-order graphs to represent ensemble profiles of ancestral sequences. We present a hierarchical stochastic approximation technique which makes this algorithm tractable for alignment analyses of reasonable size. |
1901.11418 | Weisi Guo | Zhuangkun Wei, Bin Li, Weisi Guo, Wenxiu Hu, Chenglin Zhao | Sequential Bayesian Detection of Spike Activities from Fluorescence
Observations | null | null | null | null | q-bio.NC cs.LG eess.SP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extracting and detecting spike activities from the fluorescence observations
is an important step in understanding how neuron systems work. The main
challenge lies in that the combination of the ambient noise with dynamic
baseline fluctuation, often contaminates the observations, thereby
deteriorating the reliability of spike detection. This may be even worse in the
face of the nonlinear biological process, the coupling interactions between
spikes and baseline, and the unknown critical parameters of an underlying
physiological model, in which erroneous estimations of parameters will affect
the detection of spikes causing further error propagation. In this paper, we
propose a random finite set (RFS) based Bayesian approach. The dynamic
behaviors of spike sequence, fluctuated baseline and unknown parameters are
formulated as one RFS. This RFS state is capable of distinguishing the hidden
active/silent states induced by spike and non-spike activities respectively,
thereby \emph{negating the interaction role} played by spikes and other
factors. Then, premised on the RFS states, a Bayesian inference scheme is
designed to simultaneously estimate the model parameters, baseline, and crucial
spike activities. Our results demonstrate that the proposed scheme can gain an
extra $12\%$ detection accuracy in comparison with the state-of-the-art MLSpike
method.
| [
{
"created": "Thu, 31 Jan 2019 15:14:28 GMT",
"version": "v1"
}
] | 2019-02-01 | [
[
"Wei",
"Zhuangkun",
""
],
[
"Li",
"Bin",
""
],
[
"Guo",
"Weisi",
""
],
[
"Hu",
"Wenxiu",
""
],
[
"Zhao",
"Chenglin",
""
]
] | Extracting and detecting spike activities from the fluorescence observations is an important step in understanding how neuron systems work. The main challenge lies in that the combination of the ambient noise with dynamic baseline fluctuation, often contaminates the observations, thereby deteriorating the reliability of spike detection. This may be even worse in the face of the nonlinear biological process, the coupling interactions between spikes and baseline, and the unknown critical parameters of an underlying physiological model, in which erroneous estimations of parameters will affect the detection of spikes causing further error propagation. In this paper, we propose a random finite set (RFS) based Bayesian approach. The dynamic behaviors of spike sequence, fluctuated baseline and unknown parameters are formulated as one RFS. This RFS state is capable of distinguishing the hidden active/silent states induced by spike and non-spike activities respectively, thereby \emph{negating the interaction role} played by spikes and other factors. Then, premised on the RFS states, a Bayesian inference scheme is designed to simultaneously estimate the model parameters, baseline, and crucial spike activities. Our results demonstrate that the proposed scheme can gain an extra $12\%$ detection accuracy in comparison with the state-of-the-art MLSpike method. |
2106.14509 | Herv\'e Turlier | Mathieu Le-Verge-Serandour, Herv\'e Turlier | Blastocoel morphogenesis: a biophysics perspective | 26 pages, 6 figures, review | null | null | null | q-bio.CB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The blastocoel is a fluid-filled cavity characteristic of animal embryos at
the blastula stage. Its emergence is commonly described as the result of
cleavage patterning, but this historical view conceals a large diversity of
mechanisms and overlooks many unsolved questions from a biophysics perspective.
In this review, we describe generic mechanisms for blastocoel morphogenesis,
rooted in biological literature and simple physical principles. We propose
novel directions of study and emphasize the importance of studying blastocoel
morphogenesis as an evolutionary and physical continuum.
| [
{
"created": "Mon, 28 Jun 2021 09:51:03 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Oct 2021 12:33:43 GMT",
"version": "v2"
}
] | 2021-10-12 | [
[
"Le-Verge-Serandour",
"Mathieu",
""
],
[
"Turlier",
"Hervé",
""
]
] | The blastocoel is a fluid-filled cavity characteristic of animal embryos at the blastula stage. Its emergence is commonly described as the result of cleavage patterning, but this historical view conceals a large diversity of mechanisms and overlooks many unsolved questions from a biophysics perspective. In this review, we describe generic mechanisms for blastocoel morphogenesis, rooted in biological literature and simple physical principles. We propose novel directions of study and emphasize the importance of studying blastocoel morphogenesis as an evolutionary and physical continuum.
1006.4397 | Ruriko Yoshida | Kai Xu and Ruriko Yoshida | Statistical analysis on detecting recombination sites in DNA-beta
satellites associated with the old world geminiviruses | 8 figures and 2 tables. To appear in Frontiers in Systems Biology | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although an exchange of genetic information by recombination plays an
important role in the evolution of viruses, it is not clear how it generates
diversity. {\it Geminiviruses} are plant viruses which have ambisense
single-stranded circular DNA genomes and are one of the most economically
important plant viruses in agricultural production. Small circular single-stranded DNA
satellites, termed DNA-$\beta$, have recently been found associated with some
geminivirus infections. In this paper we analyze a satellite molecule
DNA-$\beta$ of geminiviruses for recombination events using phylogenetic and
statistical analysis and we find that one strain from ToLCMaB has a
recombination pattern and is possibly a recombinant molecule between two strains
from two species, PaLCuB-[IN:Chi:05] (major parent) and ToLCB-[IN:CP:04] (minor
parent).
| [
{
"created": "Wed, 23 Jun 2010 00:15:33 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Sep 2010 03:13:42 GMT",
"version": "v2"
}
] | 2010-09-14 | [
[
"Xu",
"Kai",
""
],
[
"Yoshida",
"Ruriko",
""
]
] | Although an exchange of genetic information by recombination plays an important role in the evolution of viruses, it is not clear how it generates diversity. {\it Geminiviruses} are plant viruses which have ambisense single-stranded circular DNA genomes and are one of the most economically important plant viruses in agricultural production. Small circular single-stranded DNA satellites, termed DNA-$\beta$, have recently been found associated with some geminivirus infections. In this paper we analyze a satellite molecule DNA-$\beta$ of geminiviruses for recombination events using phylogenetic and statistical analysis and we find that one strain from ToLCMaB has a recombination pattern and is possibly a recombinant molecule between two strains from two species, PaLCuB-[IN:Chi:05] (major parent) and ToLCB-[IN:CP:04] (minor parent).
2309.00061 | Avigail Taylor | Avigail Taylor, Valentine M Macaulay, Anand K Maurya, Matthieu J
Miossec and Francesca M Buffa | GeneFEAST: the pivotal, gene-centric step in functional enrichment
analysis interpretation | Main text: 3 pages, 1 figure. Supplementary Information: 16 pages, 3
figures, 2 tables, 4 boxes | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Summary: GeneFEAST, implemented in Python, is a gene-centric functional
enrichment analysis summarisation and visualisation tool that can be applied to
large functional enrichment analysis (FEA) results arising from upstream FEA
pipelines. It produces a systematic, navigable HTML report, making it easy to
identify sets of genes putatively driving multiple enrichments and to explore
gene-level quantitative data first used to identify input genes. Further,
GeneFEAST can compare FEA results from multiple studies, making it possible,
for example, to highlight patterns of gene expression amongst genes commonly
differentially expressed in two sets of conditions, and giving rise to shared
enrichments under those conditions. GeneFEAST offers a novel, effective way to
address the complexities of linking up many overlapping FEA results to their
underlying genes and data, advancing gene-centric hypotheses, and providing
pivotal information for downstream validation experiments.
Availability: GeneFEAST is available at
https://github.com/avigailtaylor/GeneFEAST
Contact: avigail.taylor@well.ox.ac.uk
| [
{
"created": "Thu, 31 Aug 2023 18:05:20 GMT",
"version": "v1"
}
] | 2023-09-04 | [
[
"Taylor",
"Avigail",
""
],
[
"Macaulay",
"Valentine M",
""
],
[
"Maurya",
"Anand K",
""
],
[
"Miossec",
"Matthieu J",
""
],
[
"Buffa",
"Francesca M",
""
]
] | Summary: GeneFEAST, implemented in Python, is a gene-centric functional enrichment analysis summarisation and visualisation tool that can be applied to large functional enrichment analysis (FEA) results arising from upstream FEA pipelines. It produces a systematic, navigable HTML report, making it easy to identify sets of genes putatively driving multiple enrichments and to explore gene-level quantitative data first used to identify input genes. Further, GeneFEAST can compare FEA results from multiple studies, making it possible, for example, to highlight patterns of gene expression amongst genes commonly differentially expressed in two sets of conditions, and giving rise to shared enrichments under those conditions. GeneFEAST offers a novel, effective way to address the complexities of linking up many overlapping FEA results to their underlying genes and data, advancing gene-centric hypotheses, and providing pivotal information for downstream validation experiments. Availability: GeneFEAST is available at https://github.com/avigailtaylor/GeneFEAST Contact: avigail.taylor@well.ox.ac.uk |
1712.00407 | K. Anton Feenstra | Sanne Abeln, Jaap Heringa, K. Anton Feenstra | Introduction to Protein Structure Prediction | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | This chapter gives a gentle introduction to the problem of protein three-
dimensional structure prediction, and focuses on how to make structural sense
out of a single input sequence with unknown structure, the 'query' or 'target'
sequence. We give an overview of the different classes of modelling techniques,
notably template-based and template-free. We also discuss the way in which
structural predictions are validated within the global community, and
elaborate on the extent to which predicted structures may be trusted and used
in practice. Finally, we discuss whether the concept of a single fold
pertaining to a protein structure is sustainable given recent insights. In
short, we conclude that the general protein three-dimensional structure
prediction problem remains unsolved, especially if we desire quantitative
predictions. However, if a homologous structural template is available in the
PDB, a model of reasonable to high accuracy may be generated.
| [
{
"created": "Fri, 1 Dec 2017 17:09:40 GMT",
"version": "v1"
}
] | 2017-12-04 | [
[
"Abeln",
"Sanne",
""
],
[
"Heringa",
"Jaap",
""
],
[
"Feenstra",
"K. Anton",
""
]
]
] | This chapter gives a gentle introduction to the problem of protein three-dimensional structure prediction, and focuses on how to make structural sense out of a single input sequence with unknown structure, the 'query' or 'target' sequence. We give an overview of the different classes of modelling techniques, notably template-based and template-free. We also discuss the way in which structural predictions are validated within the global community, and elaborate on the extent to which predicted structures may be trusted and used in practice. Finally, we discuss whether the concept of a single fold pertaining to a protein structure is sustainable given recent insights. In short, we conclude that the general protein three-dimensional structure prediction problem remains unsolved, especially if we desire quantitative predictions. However, if a homologous structural template is available in the PDB, a model of reasonable to high accuracy may be generated.
2401.05470 | Joaquim Estopinan | Joaquim Estopinan, Pierre Bonnet, Maximilien Servajean, Fran\c{c}ois
Munoz, Alexis Joly | Modelling Species Distributions with Deep Learning to Predict Plant
Extinction Risk and Assess Climate Change Impacts | 18 pages, 5 figures. Code and data:
https://github.com/estopinj/IUCN_classification | null | null | null | q-bio.PE cs.LG stat.AP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The post-2020 global biodiversity framework needs ambitious, research-based
targets. Estimating the accelerated extinction risk due to climate change is
critical. The International Union for Conservation of Nature (IUCN) measures
the extinction risk of species. Automatic methods have been developed to
provide information on the IUCN status of under-assessed taxa. However, these
compensatory methods are based on current species characteristics, mainly
geographical, which precludes their use in future projections. Here, we
evaluate a novel method for classifying the IUCN status of species benefiting
from the generalisation power of species distribution models based on deep
learning. Our method matches state-of-the-art classification performance while
relying on flexible SDM-based features that capture species' environmental
preferences. Cross-validation yields average accuracies of 0.61 for status
classification and 0.78 for binary classification. Climate change will reshape
future species distributions. Under the species-environment equilibrium
hypothesis, SDM projections approximate plausible future outcomes. Two extremes
of species dispersal capacity are considered: unlimited or null. The projected
species distributions are translated into features feeding our IUCN
classification method. Finally, trends in threatened species are analysed over
time and i) by continent and as a function of average ii) latitude or iii)
altitude. The proportion of threatened species is increasing globally, with
critical rates in Africa, Asia and South America. Furthermore, the proportion
of threatened species is predicted to peak around the two Tropics, at the
Equator, in the lowlands and at altitudes of 800-1,500 m.
| [
{
"created": "Wed, 10 Jan 2024 15:24:27 GMT",
"version": "v1"
}
] | 2024-01-12 | [
[
"Estopinan",
"Joaquim",
""
],
[
"Bonnet",
"Pierre",
""
],
[
"Servajean",
"Maximilien",
""
],
[
"Munoz",
"François",
""
],
[
"Joly",
"Alexis",
""
]
] | The post-2020 global biodiversity framework needs ambitious, research-based targets. Estimating the accelerated extinction risk due to climate change is critical. The International Union for Conservation of Nature (IUCN) measures the extinction risk of species. Automatic methods have been developed to provide information on the IUCN status of under-assessed taxa. However, these compensatory methods are based on current species characteristics, mainly geographical, which precludes their use in future projections. Here, we evaluate a novel method for classifying the IUCN status of species benefiting from the generalisation power of species distribution models based on deep learning. Our method matches state-of-the-art classification performance while relying on flexible SDM-based features that capture species' environmental preferences. Cross-validation yields average accuracies of 0.61 for status classification and 0.78 for binary classification. Climate change will reshape future species distributions. Under the species-environment equilibrium hypothesis, SDM projections approximate plausible future outcomes. Two extremes of species dispersal capacity are considered: unlimited or null. The projected species distributions are translated into features feeding our IUCN classification method. Finally, trends in threatened species are analysed over time and i) by continent and as a function of average ii) latitude or iii) altitude. The proportion of threatened species is increasing globally, with critical rates in Africa, Asia and South America. Furthermore, the proportion of threatened species is predicted to peak around the two Tropics, at the Equator, in the lowlands and at altitudes of 800-1,500 m. |
2205.05451 | Ganesh Bagler Dr | Nishant Grover, Mansi Goel, Devansh Batra, Neelansh Garg, Rudraksh
Tuwani, Apuroop Sethupathy and Ganesh Bagler | FlavorDB2: An Updated Database of Flavor Molecules | 5 pages, 2 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Flavor is expressed through interaction of molecules via gustatory and
olfactory mechanisms. Knowing the utility of flavor molecules in food and
fragrances, it is valuable to add a comprehensive repository of flavor
compounds characterizing their flavor profile, chemical properties, regulatory
status, consumption statistics, taste/aroma threshold values, reported uses in
food categories, and synthesis. FlavorDB2
(https://cosylab.iiitd.edu.in/flavordb2/) is an updated database of flavor
molecules with a user-friendly interface. This repository simplifies the
search for flavor molecules and their attributes, and offers a range of
applications including food pairing. FlavorDB2 serves as a standard repository
of flavor compounds.
| [
{
"created": "Tue, 10 May 2022 12:12:41 GMT",
"version": "v1"
}
] | 2022-05-12 | [
[
"Grover",
"Nishant",
""
],
[
"Goel",
"Mansi",
""
],
[
"Batra",
"Devansh",
""
],
[
"Garg",
"Neelansh",
""
],
[
"Tuwani",
"Rudraksh",
""
],
[
"Sethupathy",
"Apuroop",
""
],
[
"Bagler",
"Ganesh",
""
]
]
] | Flavor is expressed through interaction of molecules via gustatory and olfactory mechanisms. Knowing the utility of flavor molecules in food and fragrances, it is valuable to have a comprehensive repository of flavor compounds characterizing their flavor profile, chemical properties, regulatory status, consumption statistics, taste/aroma threshold values, reported uses in food categories, and synthesis. FlavorDB2 (https://cosylab.iiitd.edu.in/flavordb2/) is an updated database of flavor molecules with a user-friendly interface. This repository simplifies the search for flavor molecules and their attributes, and offers a range of applications including food pairing. FlavorDB2 serves as a standard repository of flavor compounds.
2007.14375 | Ramachandran Vijayan | Ramachandran Vijayan, Samudrala Gourinath | Structure-based inhibitor screening of natural products against NSP15 of
SARS- CoV-2 revealed Thymopentin and Oleuropein as potent inhibitors | 21 pages, 7 figures | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Coronaviruses are enveloped, non-segmented positive-sense RNA viruses that
have the largest genome among RNA viruses. The genome contains a large
replicase ORF that encodes nonstructural proteins (NSPs), structural and accessory
genes. NSP15 is a nidoviral RNA uridylate-specific endoribonuclease (NendoU)
with a C-terminal catalytic domain. The endoribonuclease activity of NSP15
interferes with the innate immune response of the host. Here, we screened
the Selleckchem Natural Product database of compounds against NSP15;
Thymopentin and Oleuropein showed the highest binding energies. The binding of
these molecules was further validated by molecular dynamics simulation and found
very stable complexes. These drugs might serve as effective counter molecules
in the reduction of virulence of this virus. Future validation of both these
inhibitors is worth consideration for patients being treated for COVID-19.
| [
{
"created": "Tue, 28 Jul 2020 17:37:58 GMT",
"version": "v1"
}
] | 2020-07-29 | [
[
"Vijayan",
"Ramachandran",
""
],
[
"Gourinath",
"Samudrala",
""
]
]
] | Coronaviruses are enveloped, non-segmented positive-sense RNA viruses that have the largest genome among RNA viruses. The genome contains a large replicase ORF that encodes nonstructural proteins (NSPs), structural and accessory genes. NSP15 is a nidoviral RNA uridylate-specific endoribonuclease (NendoU) with a C-terminal catalytic domain. The endoribonuclease activity of NSP15 interferes with the innate immune response of the host. Here, we screened the Selleckchem Natural Product database of compounds against NSP15; Thymopentin and Oleuropein showed the highest binding energies. The binding of these molecules was further validated by molecular dynamics simulation and found very stable complexes. These drugs might serve as effective counter molecules in the reduction of virulence of this virus. Future validation of both these inhibitors is worth consideration for patients being treated for COVID-19.
2407.08812 | Joan Carles Pons Mayol | Joan Carles Pons, Pau Vives L\'opez, Yukihiro Murakami, Leo Van Iersel | Fence decompositions and cherry covers in non-binary phylogenetic
networks | 16 pages | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reticulate evolution can be modelled using phylogenetic networks. Tree-based
networks, which are one of the more general classes of phylogenetic networks,
have recently gained eminence for its ability to represent evolutionary
histories with an underlying tree structure. To better understand tree-based
networks, numerous characterizations have been proposed, based on tree
embeddings, matchings, and arc partitions. Here, we build a bridge between two
arc partition characterizations, namely maximal fence decompositions and cherry
covers. Results on cherry covers have been found for general phylogenetic
networks. We first show that the number of cherry covers is the same as the
number of support trees (underlying tree structure of tree-based networks) for
a given semi-binary network. Maximal fence decompositions have only been defined
thus far for binary networks (constraints on vertex degrees). We remedy this by
generalizing fence decompositions to non-binary networks, and using this, we
characterize semi-binary tree-based networks in terms of forbidden structures.
Furthermore, we give an explicit enumeration of cherry covers of semi-binary
networks, by studying their fence decompositions. Finally, we prove that it is
possible to characterize semi-binary tree-child networks, a subclass of
tree-based networks, in terms of the number of their cherry covers.
| [
{
"created": "Thu, 11 Jul 2024 18:50:53 GMT",
"version": "v1"
}
] | 2024-07-15 | [
[
"Pons",
"Joan Carles",
""
],
[
"López",
"Pau Vives",
""
],
[
"Murakami",
"Yukihiro",
""
],
[
"Van Iersel",
"Leo",
""
]
]
] | Reticulate evolution can be modelled using phylogenetic networks. Tree-based networks, which are one of the more general classes of phylogenetic networks, have recently gained prominence for their ability to represent evolutionary histories with an underlying tree structure. To better understand tree-based networks, numerous characterizations have been proposed, based on tree embeddings, matchings, and arc partitions. Here, we build a bridge between two arc partition characterizations, namely maximal fence decompositions and cherry covers. Results on cherry covers have been found for general phylogenetic networks. We first show that the number of cherry covers is the same as the number of support trees (underlying tree structure of tree-based networks) for a given semi-binary network. Maximal fence decompositions have only been defined thus far for binary networks (constraints on vertex degrees). We remedy this by generalizing fence decompositions to non-binary networks, and using this, we characterize semi-binary tree-based networks in terms of forbidden structures. Furthermore, we give an explicit enumeration of cherry covers of semi-binary networks, by studying their fence decompositions. Finally, we prove that it is possible to characterize semi-binary tree-child networks, a subclass of tree-based networks, in terms of the number of their cherry covers.
2310.15263 | Chengrui Li | Chengrui Li, Soon Ho Kim, Chris Rodgers, Hannah Choi, Anqi Wu | One-hot Generalized Linear Model for Switching Brain State Discovery | null | null | null | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Exposing meaningful and interpretable neural interactions is critical to
understanding neural circuits. Inferred neural interactions from neural signals
primarily reflect functional interactions. In a long experiment, subject
animals may experience different stages defined by the experiment, stimuli, or
behavioral states, and hence functional interactions can change over time. To
model dynamically changing functional interactions, prior work employs
state-switching generalized linear models with hidden Markov models (i.e.,
HMM-GLMs). However, we argue they lack biological plausibility, as functional
interactions are shaped and confined by the underlying anatomical connectome.
Here, we propose a novel prior-informed state-switching GLM. We introduce both
a Gaussian prior and a one-hot prior over the GLM in each state. The priors are
learnable. We will show that the learned prior should capture the
state-constant interaction, shedding light on the underlying anatomical
connectome and revealing more likely physical neuron interactions. The
state-dependent interaction modeled by each GLM offers traceability to capture
functional variations across multiple brain states. Our methods effectively
recover true interaction structures in simulated data, achieve the highest
predictive likelihood with real neural datasets, and render interaction
structures and hidden states more interpretable when applied to real neural
data.
| [
{
"created": "Mon, 23 Oct 2023 18:10:22 GMT",
"version": "v1"
}
] | 2023-10-25 | [
[
"Li",
"Chengrui",
""
],
[
"Kim",
"Soon Ho",
""
],
[
"Rodgers",
"Chris",
""
],
[
"Choi",
"Hannah",
""
],
[
"Wu",
"Anqi",
""
]
] | Exposing meaningful and interpretable neural interactions is critical to understanding neural circuits. Inferred neural interactions from neural signals primarily reflect functional interactions. In a long experiment, subject animals may experience different stages defined by the experiment, stimuli, or behavioral states, and hence functional interactions can change over time. To model dynamically changing functional interactions, prior work employs state-switching generalized linear models with hidden Markov models (i.e., HMM-GLMs). However, we argue they lack biological plausibility, as functional interactions are shaped and confined by the underlying anatomical connectome. Here, we propose a novel prior-informed state-switching GLM. We introduce both a Gaussian prior and a one-hot prior over the GLM in each state. The priors are learnable. We will show that the learned prior should capture the state-constant interaction, shedding light on the underlying anatomical connectome and revealing more likely physical neuron interactions. The state-dependent interaction modeled by each GLM offers traceability to capture functional variations across multiple brain states. Our methods effectively recover true interaction structures in simulated data, achieve the highest predictive likelihood with real neural datasets, and render interaction structures and hidden states more interpretable when applied to real neural data. |
1011.0370 | Sylvain Tollis | Sylvain Tollis, Anna E. Dart, George Tzircotis, and Robert G. Endres | The zipper mechanism in phagocytosis: energetic requirements and
variability in phagocytic cup shape | Accepted for publication in BMC Systems Biology. 17 pages, 6 Figures,
+ supplementary information | null | null | null | q-bio.CB physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phagocytosis is the fundamental cellular process by which eukaryotic cells
bind and engulf particles by their cell membrane. Particle engulfment involves
particle recognition by cell-surface receptors, signaling and remodeling of the
actin cytoskeleton to guide the membrane around the particle in a zipper-like
fashion. Despite the signaling complexity, phagocytosis also depends strongly
on biophysical parameters, such as particle shape, and the need for
actin-driven force generation remains poorly understood. Here, we propose a
novel, three-dimensional and stochastic biophysical model of phagocytosis, and
study the engulfment of particles of various sizes and shapes, including spiral
and rod-shaped particles reminiscent of bacteria. Highly curved shapes are not
taken up, in line with recent experimental results. Furthermore, we
surprisingly find that even without actin-driven force generation, engulfment
proceeds in a large regime of parameter values, albeit more slowly and with
highly variable phagocytic cups. We experimentally confirm these predictions
using fibroblasts, transfected with immunoreceptor FcyRIIa for engulfment of
immunoglobulin G-opsonized particles. Specifically, we compare the wild-type
receptor with a mutant receptor, unable to signal to the actin cytoskeleton.
Based on the reconstruction of phagocytic cups from imaging data, we indeed
show that cells are able to engulf small particles even without support from
biological actin-driven processes. This suggests that biochemical pathways
render the evolutionary ancient process of phagocytic highly robust, allowing
cells to engulf even very large particles. The particle-shape dependence of
phagocytosis makes a systematic investigation of host-pathogen interactions and
an efficient design of a vehicle for drug delivery possible.
| [
{
"created": "Mon, 1 Nov 2010 16:28:18 GMT",
"version": "v1"
}
] | 2010-11-02 | [
[
"Tollis",
"Sylvain",
""
],
[
"Dart",
"Anna E.",
""
],
[
"Tzircotis",
"George",
""
],
[
"Endres",
"Robert G.",
""
]
]
] | Phagocytosis is the fundamental cellular process by which eukaryotic cells bind and engulf particles by their cell membrane. Particle engulfment involves particle recognition by cell-surface receptors, signaling and remodeling of the actin cytoskeleton to guide the membrane around the particle in a zipper-like fashion. Despite the signaling complexity, phagocytosis also depends strongly on biophysical parameters, such as particle shape, and the need for actin-driven force generation remains poorly understood. Here, we propose a novel, three-dimensional and stochastic biophysical model of phagocytosis, and study the engulfment of particles of various sizes and shapes, including spiral and rod-shaped particles reminiscent of bacteria. Highly curved shapes are not taken up, in line with recent experimental results. Furthermore, we surprisingly find that even without actin-driven force generation, engulfment proceeds in a large regime of parameter values, albeit more slowly and with highly variable phagocytic cups. We experimentally confirm these predictions using fibroblasts, transfected with immunoreceptor FcyRIIa for engulfment of immunoglobulin G-opsonized particles. Specifically, we compare the wild-type receptor with a mutant receptor, unable to signal to the actin cytoskeleton. Based on the reconstruction of phagocytic cups from imaging data, we indeed show that cells are able to engulf small particles even without support from biological actin-driven processes. This suggests that biochemical pathways render the evolutionarily ancient process of phagocytosis highly robust, allowing cells to engulf even very large particles. The particle-shape dependence of phagocytosis makes a systematic investigation of host-pathogen interactions and an efficient design of a vehicle for drug delivery possible.
2107.05388 | Adrian Buganza Tepole | Vahidullah Tac, Vivek D. Sree, Manuel K. Rausch, Adrian B. Tepole | Data-driven Modeling of the Mechanical Behavior of Anisotropic Soft
Biological Tissue | 19 pages, 10 figures | null | null | null | q-bio.QM cond-mat.soft cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Constitutive models that describe the mechanical behavior of soft tissues
have advanced greatly over the past few decades. These expert models are
generalizable and require the calibration of a number of parameters to fit
experimental data. However, inherent pitfalls stemming from the restriction to
a specific functional form include poor fits to the data, non-uniqueness of
fit, and high sensitivity to parameters. In this study we design and train
fully connected neural networks as material models to replace or augment expert
models. To guarantee objectivity, the neural network takes isochoric strain
invariants as inputs, and outputs the value of strain energy and its
derivatives with respect to the invariants. Convexity of the material model is
enforced through the loss function. Direct prediction of the derivative
functions -- rather than just predicting the energy -- serves two purposes: it
provides flexibility during training, and it enables the calculation of the
elasticity tensor through back-propagation. We showcase the ability of the
neural network to learn the mechanical behavior of porcine and murine skin from
biaxial test data. Crucially, we show that a multi-fidelity scheme which
combines high fidelity experimental data with low fidelity analytical data
yields the best performance. The neural network material model can then be
interpreted as the best extension of an expert model: it learns the features
that an expert has encoded in the analytical model while fitting the
experimental data better. Finally, we implemented a general user material
subroutine (UMAT) for the finite element software Abaqus and thereby make our
advances available to the broader computational community. We expect that the
methods and software generated in this work will broaden the use of data-driven
constitutive models in biomedical applications.
| [
{
"created": "Thu, 8 Jul 2021 01:58:05 GMT",
"version": "v1"
}
] | 2021-07-13 | [
[
"Tac",
"Vahidullah",
""
],
[
"Sree",
"Vivek D.",
""
],
[
"Rausch",
"Manuel K.",
""
],
[
"Tepole",
"Adrian B.",
""
]
] | Constitutive models that describe the mechanical behavior of soft tissues have advanced greatly over the past few decades. These expert models are generalizable and require the calibration of a number of parameters to fit experimental data. However, inherent pitfalls stemming from the restriction to a specific functional form include poor fits to the data, non-uniqueness of fit, and high sensitivity to parameters. In this study we design and train fully connected neural networks as material models to replace or augment expert models. To guarantee objectivity, the neural network takes isochoric strain invariants as inputs, and outputs the value of strain energy and its derivatives with respect to the invariants. Convexity of the material model is enforced through the loss function. Direct prediction of the derivative functions -- rather than just predicting the energy -- serves two purposes: it provides flexibility during training, and it enables the calculation of the elasticity tensor through back-propagation. We showcase the ability of the neural network to learn the mechanical behavior of porcine and murine skin from biaxial test data. Crucially, we show that a multi-fidelity scheme which combines high fidelity experimental data with low fidelity analytical data yields the best performance. The neural network material model can then be interpreted as the best extension of an expert model: it learns the features that an expert has encoded in the analytical model while fitting the experimental data better. Finally, we implemented a general user material subroutine (UMAT) for the finite element software Abaqus and thereby make our advances available to the broader computational community. We expect that the methods and software generated in this work will broaden the use of data-driven constitutive models in biomedical applications. |
1105.0592 | Joachim Krug | Johannes Neidhart and Joachim Krug | Adaptive walks and extreme value theory | 4 pages, 2 figures; final version, to appear in Physical Review
Letters | Physical Review Letters 107, 178102 (2011) | 10.1103/PhysRevLett.107.178102 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study biological evolution in a high-dimensional genotype space in the
regime of rare mutations and strong selection. The population performs an
uphill walk which terminates at local fitness maxima. Assigning fitness
randomly to genotypes, we show that the mean walk length is logarithmic in the
number of initially available beneficial mutations, with a prefactor determined
by the tail of the fitness distribution. This result is derived analytically in
a simplified setting where the mutational neighborhood is fixed during the
adaptive process, and confirmed by numerical simulations.
| [
{
"created": "Tue, 3 May 2011 14:20:58 GMT",
"version": "v1"
},
{
"created": "Tue, 20 Sep 2011 11:26:56 GMT",
"version": "v2"
}
] | 2015-05-28 | [
[
"Neidhart",
"Johannes",
""
],
[
"Krug",
"Joachim",
""
]
] | We study biological evolution in a high-dimensional genotype space in the regime of rare mutations and strong selection. The population performs an uphill walk which terminates at local fitness maxima. Assigning fitness randomly to genotypes, we show that the mean walk length is logarithmic in the number of initially available beneficial mutations, with a prefactor determined by the tail of the fitness distribution. This result is derived analytically in a simplified setting where the mutational neighborhood is fixed during the adaptive process, and confirmed by numerical simulations. |
2110.13951 | Kirti Prakash | Johannes Hohlbein, Benedict Diederich, Barbora Marsikova, Emmanuel G.
Reynaud, Seamus Holden, Wiebke Jahr, Robert Haase, and Kirti Prakash | Open microscopy in the life sciences: Quo Vadis? | 28 pages, 2 figures | null | null | null | q-bio.OT physics.bio-ph physics.data-an physics.ins-det physics.optics | http://creativecommons.org/licenses/by-sa/4.0/ | Light microscopy allows observing cellular features and objects with
sub-micrometer resolution. As such, light microscopy has been playing a
fundamental role in the life sciences for more than a hundred years. Fueled by
the availability of mass-produced electronics and hardware, publicly shared
documentation and building instructions, open-source software, wide access to
rapid prototyping and 3D printing, and the enthusiasm of contributors and users
involved, the concept of open microscopy has been gaining incredible momentum,
bringing new sophisticated tools to an expanding user base. Here, we will first
discuss the ideas behind open science and open microscopy before highlighting
recent projects and developments in open microscopy. We argue that the
availability of well-designed open hardware and software solutions targeting
broad user groups or even non-experts, will increasingly be relevant to cope
with the increasing complexity of cutting-edge imaging technologies. We will
then extensively discuss the current and future challenges of open microscopy.
| [
{
"created": "Tue, 26 Oct 2021 18:31:34 GMT",
"version": "v1"
}
] | 2021-10-28 | [
[
"Hohlbein",
"Johannes",
""
],
[
"Diederich",
"Benedict",
""
],
[
"Marsikova",
"Barbora",
""
],
[
"Reynaud",
"Emmanuel G.",
""
],
[
"Holden",
"Seamus",
""
],
[
"Jahr",
"Wiebke",
""
],
[
"Haase",
"Robert",
""... | Light microscopy allows observing cellular features and objects with sub-micrometer resolution. As such, light microscopy has been playing a fundamental role in the life sciences for more than a hundred years. Fueled by the availability of mass-produced electronics and hardware, publicly shared documentation and building instructions, open-source software, wide access to rapid prototyping and 3D printing, and the enthusiasm of contributors and users involved, the concept of open microscopy has been gaining incredible momentum, bringing new sophisticated tools to an expanding user base. Here, we will first discuss the ideas behind open science and open microscopy before highlighting recent projects and developments in open microscopy. We argue that the availability of well-designed open hardware and software solutions targeting broad user groups or even non-experts, will increasingly be relevant to cope with the increasing complexity of cutting-edge imaging technologies. We will then extensively discuss the current and future challenges of open microscopy. |
1305.5803 | Vladimir Privman | Sergii Domanskyi, Vladimir Privman | Design of Digital Response in Enzyme-Based Bioanalytical Systems for
Information Processing Applications | null | J. Phys. Chem. B 116, 13690-13695 (2012) | 10.1021/jp309001j | VP-250 | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate performance and optimization of the "digital" bioanalytical
response. Specifically, we consider the recently introduced approach of a
partial input conversion into inactive compounds, resulting in the "branch
point effect" similar to that encountered in biological systems. This
corresponds to an "intensity filter," which can yield a binary-type
sigmoid-response output signal of interest in information and signal processing
and in biosensing applications. We define measures for optimizing the response
for information processing applications based on the kinetic modeling of the
enzymatic reactions involved, and apply the developed approach to the recently
published data for glucose detection.
| [
{
"created": "Fri, 24 May 2013 17:11:36 GMT",
"version": "v1"
}
] | 2013-05-27 | [
[
"Domanskyi",
"Sergii",
""
],
[
"Privman",
"Vladimir",
""
]
] | We investigate performance and optimization of the "digital" bioanalytical response. Specifically, we consider the recently introduced approach of a partial input conversion into inactive compounds, resulting in the "branch point effect" similar to that encountered in biological systems. This corresponds to an "intensity filter," which can yield a binary-type sigmoid-response output signal of interest in information and signal processing and in biosensing applications. We define measures for optimizing the response for information processing applications based on the kinetic modeling of the enzymatic reactions involved, and apply the developed approach to the recently published data for glucose detection. |
1406.5641 | Cristiano Nisoli | Cristiano Nisoli and A. R. Bishop | Thermomechanical Stability and Mechanochemical Response of DNA: a
Minimal Mesoscale Model | 18 pages, 11 figures | The Journal of Chemical Physics 141 (11), 115101 (2014) | 10.1063/1.4895724 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We show that a mesoscale model, with a minimal number of parameters, can well
describe the thermomechanical and mechanochemical behavior of homogeneous DNA
at thermal equilibrium under tension and torque. We predict critical
temperatures for denaturation under torque and stretch, phase diagrams for
stable DNA, probe/response profiles under mechanical loads, and the density of
dsDNA as a function of stretch and twist. We compare our predictions with
available single molecule manipulation experiments and find strong agreement.
In particular we elucidate the difference between angularly constrained and
unconstrained overstretching. We propose that the smoothness of the angularly
constrained overstreching transition is a consequence of the molecule being in
the vicinity of criticality for a broad range of values of applied tension.
| [
{
"created": "Sat, 21 Jun 2014 18:17:52 GMT",
"version": "v1"
},
{
"created": "Sat, 18 Oct 2014 14:45:47 GMT",
"version": "v2"
}
] | 2015-06-22 | [
[
"Nisoli",
"Cristiano",
""
],
[
"Bishop",
"A. R.",
""
]
] | We show that a mesoscale model, with a minimal number of parameters, can well describe the thermomechanical and mechanochemical behavior of homogeneous DNA at thermal equilibrium under tension and torque. We predict critical temperatures for denaturation under torque and stretch, phase diagrams for stable DNA, probe/response profiles under mechanical loads, and the density of dsDNA as a function of stretch and twist. We compare our predictions with available single molecule manipulation experiments and find strong agreement. In particular we elucidate the difference between angularly constrained and unconstrained overstretching. We propose that the smoothness of the angularly constrained overstretching transition is a consequence of the molecule being in the vicinity of criticality for a broad range of values of applied tension.
1404.7591 | Randall O'Reilly | Randall C. O'Reilly, Thomas E. Hazy, Jessica Mollick, Prescott Mackie,
Seth Herd | Goal-Driven Cognition in the Brain: A Computational Framework | 62 pages, 11 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current theoretical and computational models of dopamine-based reinforcement
learning are largely rooted in the classical behaviorist tradition, and
envision the organism as a purely reactive recipient of rewards and
punishments, with resulting behavior that essentially reflects the sum of this
reinforcement history. This framework is missing some fundamental features of
the affective nervous system, most importantly, the central role of goals in
driving and organizing behavior in a teleological manner. Even when
goal-directed behaviors are considered in current frameworks, they are
typically conceived of as arising in reaction to the environment, rather than
being in place from the start. We hypothesize that goal-driven cognition is
primary, and organized into two discrete phases: goal selection and goal
engaged, which each have a substantially different effective value function.
This dichotomy can potentially explain a wide range of phenomena, playing a
central role in many clinical disorders, such as depression, OCD, ADHD, and
PTSD, and providing a sensible account of the detailed biology and function of
the dopamine system and larger limbic system, including critical ventral and
medial prefrontal cortex. Computationally, reasoning backward from active goals
to action selection is more tractable than projecting alternative action
choices forward to compute possible outcomes. An explicit computational model
of these brain areas and their function in this goal-driven framework is
described, as are numerous testable predictions from this framework.
| [
{
"created": "Wed, 30 Apr 2014 05:04:13 GMT",
"version": "v1"
}
] | 2014-05-01 | [
[
"O'Reilly",
"Randall C.",
""
],
[
"Hazy",
"Thomas E.",
""
],
[
"Mollick",
"Jessica",
""
],
[
"Mackie",
"Prescott",
""
],
[
"Herd",
"Seth",
""
]
] | Current theoretical and computational models of dopamine-based reinforcement learning are largely rooted in the classical behaviorist tradition, and envision the organism as a purely reactive recipient of rewards and punishments, with resulting behavior that essentially reflects the sum of this reinforcement history. This framework is missing some fundamental features of the affective nervous system, most importantly, the central role of goals in driving and organizing behavior in a teleological manner. Even when goal-directed behaviors are considered in current frameworks, they are typically conceived of as arising in reaction to the environment, rather than being in place from the start. We hypothesize that goal-driven cognition is primary, and organized into two discrete phases: goal selection and goal engaged, which each have a substantially different effective value function. This dichotomy can potentially explain a wide range of phenomena, playing a central role in many clinical disorders, such as depression, OCD, ADHD, and PTSD, and providing a sensible account of the detailed biology and function of the dopamine system and larger limbic system, including critical ventral and medial prefrontal cortex. Computationally, reasoning backward from active goals to action selection is more tractable than projecting alternative action choices forward to compute possible outcomes. An explicit computational model of these brain areas and their function in this goal-driven framework is described, as are numerous testable predictions from this framework. |
2003.11376 | Jagadish Kumar Dr. | Jagadish Kumar, K. P. S. S. Hembram | Epidemiological study of novel coronavirus (COVID-19) | 9 pages, 8 figures | International Journal of Community Medicine and Public Health,
Volume 8, Issue 3, Pages 1369, 2021 | 10.18203/2394-6040.ijcmph20210828 | null | q-bio.PE physics.app-ph physics.bio-ph physics.med-ph | http://creativecommons.org/publicdomain/zero/1.0/ | We report a statistical analysis of some highly infected countries by the
novel coronavirus (COVID-19). The cumulative infected data were fitted with
various growth models (e.g. Logistic equation, Weibull equation and Hill
equation) and obtained the power index of top ten highly infected countries.
The newly infected data were fitted with a Gaussian distribution with the peak at
~40 days for the countries whose infection curves seem to be saturated. The
similarity in growth kinetics of infected people of different countries
provides first-hand guidelines to take proper precautions to minimize human
damage.
| [
{
"created": "Wed, 25 Mar 2020 12:54:40 GMT",
"version": "v1"
}
] | 2021-04-16 | [
[
"Kumar",
"Jagadish",
""
],
[
"Hembram",
"K. P. S. S.",
""
]
] | We report a statistical analysis of some highly infected countries by the novel coronavirus (COVID-19). The cumulative infected data were fitted with various growth models (e.g. Logistic equation, Weibull equation and Hill equation) and obtained the power index of top ten highly infected countries. The newly infected data were fitted with a Gaussian distribution with the peak at ~40 days for the countries whose infection curves seem to be saturated. The similarity in growth kinetics of infected people of different countries provides first-hand guidelines to take proper precautions to minimize human damage.
1301.5277 | Franck Rapaport | Franck Rapaport, Raya Khanin, Yupu Liang, Azra Krek, Paul Zumbo,
Christopher E. Mason, Nicholas D. Socci, Doron Betel | Comprehensive evaluation of differential expression analysis methods for
RNA-seq data | Manuscript includes supplementary figures | null | null | null | q-bio.GN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-throughput sequencing of RNA transcripts (RNA-seq) has become the method
of choice for detection of differential expression (DE). Concurrent with the
growing popularity of this technology there has been a significant research
effort devoted towards understanding the statistical properties of this data
and the development of analysis methods. We report on a comprehensive
evaluation of the commonly used DE methods using the SEQC benchmark data set.
We evaluate a number of key features including: assessment of normalization,
accuracy of DE detection, modeling of genes expressed in only one condition,
and the impact of sequencing depth and number of replications on identifying DE
genes. We find significant differences among the methods with no single method
consistently outperforming the others. Furthermore, the performance of
the array-based approach is comparable to methods customized for RNA-seq data.
Perhaps most importantly, our results demonstrate that increasing the number of
replicate samples provides significantly more detection power than increased
sequencing depth.
| [
{
"created": "Tue, 22 Jan 2013 19:06:32 GMT",
"version": "v1"
},
{
"created": "Wed, 23 Jan 2013 03:26:19 GMT",
"version": "v2"
}
] | 2013-01-24 | [
[
"Rapaport",
"Franck",
""
],
[
"Khanin",
"Raya",
""
],
[
"Liang",
"Yupu",
""
],
[
"Krek",
"Azra",
""
],
[
"Zumbo",
"Paul",
""
],
[
"Mason",
"Christopher E.",
""
],
[
"Socci",
"Nicholas D.",
""
],
[
"... | High-throughput sequencing of RNA transcripts (RNA-seq) has become the method of choice for detection of differential expression (DE). Concurrent with the growing popularity of this technology there has been a significant research effort devoted towards understanding the statistical properties of this data and the development of analysis methods. We report on a comprehensive evaluation of the commonly used DE methods using the SEQC benchmark data set. We evaluate a number of key features including: assessment of normalization, accuracy of DE detection, modeling of genes expressed in only one condition, and the impact of sequencing depth and number of replications on identifying DE genes. We find significant differences among the methods with no single method consistently outperforming the others. Furthermore, the performance of the array-based approach is comparable to methods customized for RNA-seq data. Perhaps most importantly, our results demonstrate that increasing the number of replicate samples provides significantly more detection power than increased sequencing depth.
q-bio/0607036 | Andrew Tan | Andrew Y. Y. Tan, Craig A. Atencio, Daniel B. Polley, Michael M.
Merzenich and Christoph E. Schreiner | Unbalanced synaptic inhibition can create intensity-tuned auditory
cortex neurons | 22 pages, 5 figures | Neuroscience 146: 449-462 (2007) | 10.1016/j.neuroscience.2007.01.019 | null | q-bio.NC | null | Intensity-tuned auditory cortex neurons may be formed by intensity-tuned
synaptic excitation. Synaptic inhibition has also been shown to enhance, and
possibly even create intensity-tuned neurons. Here we show, using in vivo whole
cell recordings in pentobarbital-anesthetized rats, that some intensity-tuned
neurons are indeed created solely through disproportionally large inhibition at
high intensities, without any intensity-tuned excitation. Since inhibition is
essentially cortical in origin, these neurons provide examples of auditory
feature-selectivity arising de novo at the cortex.
| [
{
"created": "Fri, 21 Jul 2006 18:44:32 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Tan",
"Andrew Y. Y.",
""
],
[
"Atencio",
"Craig A.",
""
],
[
"Polley",
"Daniel B.",
""
],
[
"Merzenich",
"Michael M.",
""
],
[
"Schreiner",
"Christoph E.",
""
]
] | Intensity-tuned auditory cortex neurons may be formed by intensity-tuned synaptic excitation. Synaptic inhibition has also been shown to enhance, and possibly even create intensity-tuned neurons. Here we show, using in vivo whole cell recordings in pentobarbital-anesthetized rats, that some intensity-tuned neurons are indeed created solely through disproportionally large inhibition at high intensities, without any intensity-tuned excitation. Since inhibition is essentially cortical in origin, these neurons provide examples of auditory feature-selectivity arising de novo at the cortex. |
2308.03198 | Shuyang Bian | Shuyang Bian, Yuanyuan Xie, Flora Zhang | Re-imagining the Future of Forest Management -- An Age-Dependent
Approach towards Harvesting | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-sa/4.0/ | Facing the drastic climate changes, current strategies for enhancing carbon
dioxide stocks need to be thoroughly honed. To address the problem, we first
built a carbon sequestration growth model driven by growth rate dependency
(GRDM). We abstracted the carbon cycling system into the process of
photosynthesis, the humidity fluctuation, and the original storage of carbon in
the trees. In the photosynthesis model, we considered various factors,
including transition rate of absorption and organic matter production. We also
designed an Economic Return Evaluation Model (EREM) to estimate the optimal
distribution of trees in the forest based on the utility function. Maximizing
the utility brought by the amount of carbon storage, we derived the equation
for profit optimization with the constraints of total economic expenses
allowed. To assess its performance, we took an object-oriented approach,
simulated an ideal forest by placing instances of trees and plotted a
time-dependent forest composition graph. After proper normalization of climate
and economic data, we also make predictions for 169 worldwide forest-covered
countries. Our model further suggests high sensitivity and robustness with a
similar trend of overall utility when environmental aridity or proportion of
harvested woods are varied. Finally, we apply the model to Georgia temperate
deciduous forest, and we evaluate the carbon storage ability to adjust the Red
Spruce based on available biological literature research. We recognize that
while the model is preliminary in its failure to identify a diverse array of
variables, it has encapsulated key features of idealized forests.
| [
{
"created": "Sun, 6 Aug 2023 19:52:41 GMT",
"version": "v1"
}
] | 2023-08-08 | [
[
"Bian",
"Shuyang",
""
],
[
"Xie",
"Yuanyuan",
""
],
[
"Zhang",
"Flora",
""
]
] | Facing the drastic climate changes, current strategies for enhancing carbon dioxide stocks need to be thoroughly honed. To address the problem, we first built a carbon sequestration growth model driven by growth rate dependency (GRDM). We abstracted the carbon cycling system into the process of photosynthesis, the humidity fluctuation, and the original storage of carbon in the trees. In the photosynthesis model, we considered various factors, including transition rate of absorption and organic matter production. We also designed an Economic Return Evaluation Model (EREM) to estimate the optimal distribution of trees in the forest based on the utility function. Maximizing the utility brought by the amount of carbon storage, we derived the equation for profit optimization with the constraints of total economic expenses allowed. To assess its performance, we took an object-oriented approach, simulated an ideal forest by placing instances of trees and plotted a time-dependent forest composition graph. After proper normalization of climate and economic data, we also make predictions for 169 worldwide forest-covered countries. Our model further suggests high sensitivity and robustness with a similar trend of overall utility when environmental aridity or proportion of harvested woods are varied. Finally, we apply the model to Georgia temperate deciduous forest, and we evaluate the carbon storage ability to adjust the Red Spruce based on available biological literature research. We recognize that while the model is preliminary in its failure to identify a diverse array of variables, it has encapsulated key features of idealized forests. |
1903.07533 | Konstantinos Michmizos | Ioannis E. Polykretis, Vladimir A. Ivanov, Konstantinos P. Michmizos | Computational Astrocyence: Astrocytes encode inhibitory activity into
the frequency and spatial extent of their calcium elevations | 4 pages, 3 figures, IEEE-EMBS International Conference on Biomedical
and Health Informatics (BHI '19) | null | null | null | q-bio.CB cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deciphering the complex interactions between neurotransmission and astrocytic
$Ca^{2+}$ elevations is a target promising a comprehensive understanding of
brain function. While the astrocytic response to excitatory synaptic activity
has been extensively studied, how inhibitory activity results in intracellular
$Ca^{2+}$ waves remains elusive. In this study, we developed a compartmental
astrocytic model that exhibits distinct levels of responsiveness to inhibitory
activity. Our model suggested that the astrocytic coverage of inhibitory
terminals defines the spatial and temporal scale of their $Ca^{2+}$ elevations.
Understanding the interplay between the synaptic pathways and the astrocytic
responses will help us identify how astrocytes work independently and
cooperatively with neurons, in health and disease.
| [
{
"created": "Mon, 18 Mar 2019 16:21:55 GMT",
"version": "v1"
}
] | 2019-03-19 | [
[
"Polykretis",
"Ioannis E.",
""
],
[
"Ivanov",
"Vladimir A.",
""
],
[
"Michmizos",
"Konstantinos P.",
""
]
] | Deciphering the complex interactions between neurotransmission and astrocytic $Ca^{2+}$ elevations is a target promising a comprehensive understanding of brain function. While the astrocytic response to excitatory synaptic activity has been extensively studied, how inhibitory activity results in intracellular $Ca^{2+}$ waves remains elusive. In this study, we developed a compartmental astrocytic model that exhibits distinct levels of responsiveness to inhibitory activity. Our model suggested that the astrocytic coverage of inhibitory terminals defines the spatial and temporal scale of their $Ca^{2+}$ elevations. Understanding the interplay between the synaptic pathways and the astrocytic responses will help us identify how astrocytes work independently and cooperatively with neurons, in health and disease.
1401.5130 | Aaron Darling | David Coil and Guillaume Jospin and Aaron E. Darling | A5-miseq: an updated pipeline to assemble microbial genomes from
Illumina MiSeq data | This is a revision of a manuscript submitted to Bioinformatics as an
application note | null | null | null | q-bio.QM q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Open-source bacterial genome assembly remains inaccessible to
many biologists due to its complexity. Few software solutions exist that are
capable of automating all steps in the process of de novo genome assembly from
Illumina data.
Results: A5-miseq can produce high quality microbial genome assemblies from
as little as 20-fold sequence data coverage on a laptop computer without any
parameter tuning. A5-miseq does this by automating the process of adapter
trimming, quality filtering, error correction, contig and scaffold generation,
and detection of misassemblies. Unlike the original A5 pipeline, A5-miseq can
use long reads from the Illumina MiSeq, use read pairing information during
contig generation, and includes several improvements to read trimming. Together
these changes result in substantially improved assemblies that recover a more
complete set of reference genes than previous methods.
Availability: A5-miseq is licensed under the GPL open source license. Source
code and precompiled binaries for Mac OS X 10.6+ and Linux 2.6.15+ are
available from http://sourceforge.net/projects/ngopt
| [
{
"created": "Tue, 21 Jan 2014 00:34:24 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Jun 2014 12:21:24 GMT",
"version": "v2"
}
] | 2014-06-05 | [
[
"Coil",
"David",
""
],
[
"Jospin",
"Guillaume",
""
],
[
"Darling",
"Aaron E.",
""
]
] | Motivation: Open-source bacterial genome assembly remains inaccessible to many biologists due to its complexity. Few software solutions exist that are capable of automating all steps in the process of de novo genome assembly from Illumina data. Results: A5-miseq can produce high quality microbial genome assemblies from as little as 20-fold sequence data coverage on a laptop computer without any parameter tuning. A5-miseq does this by automating the process of adapter trimming, quality filtering, error correction, contig and scaffold generation, and detection of misassemblies. Unlike the original A5 pipeline, A5-miseq can use long reads from the Illumina MiSeq, use read pairing information during contig generation, and includes several improvements to read trimming. Together these changes result in substantially improved assemblies that recover a more complete set of reference genes than previous methods. Availability: A5-miseq is licensed under the GPL open source license. Source code and precompiled binaries for Mac OS X 10.6+ and Linux 2.6.15+ are available from http://sourceforge.net/projects/ngopt |
2011.11742 | Tatiana Yakushkina S. | Igor Samokhin and Tatiana Yakushkina and Alexander S. Bratus | Open Quasispecies Systems: New Approach to Evolutionary Adaptation | null | null | null | null | q-bio.PE math.DS nlin.AO | http://creativecommons.org/licenses/by/4.0/ | Consider a mathematical model of evolutionary adaptation of fitness landscape
and mutation matrix as a reaction to population changes. As a basis, we use an
open quasispecies model, which is modified to include explicit death flow. We
assume that evolutionary parameters of mutation and selection processes vary in
a way to maximize the mean fitness of the system. From this standpoint,
Fisher's theorem of natural selection is being rethought and discussed. Another
assumption is that system dynamics has two significant timescales. According to
our central hypothesis, major evolutionary transitions happen in the
steady-state of the corresponding dynamical system, so the evolutionary time is
much slower than the one of internal dynamics. For the specific cases of
quasispecies systems, we show how our premises form the fitness landscape
adaptation process.
| [
{
"created": "Mon, 23 Nov 2020 21:36:15 GMT",
"version": "v1"
}
] | 2020-11-25 | [
[
"Samokhin",
"Igor",
""
],
[
"Yakushkina",
"Tatiana",
""
],
[
"Bratus",
"Alexander S.",
""
]
] | Consider a mathematical model of evolutionary adaptation of fitness landscape and mutation matrix as a reaction to population changes. As a basis, we use an open quasispecies model, which is modified to include explicit death flow. We assume that evolutionary parameters of mutation and selection processes vary in a way to maximize the mean fitness of the system. From this standpoint, Fisher's theorem of natural selection is being rethought and discussed. Another assumption is that system dynamics has two significant timescales. According to our central hypothesis, major evolutionary transitions happen in the steady-state of the corresponding dynamical system, so the evolutionary time is much slower than the one of internal dynamics. For the specific cases of quasispecies systems, we show how our premises form the fitness landscape adaptation process. |
2211.08503 | Erin Ellefsen | Erin Ellefsen and Nancy Rodriguez | Nonlocal Mechanistic Models in Ecology: Numerical Methods and Parameter
Inferencing | 29 pages, 21 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Animals use various processes to inform themselves about their environment
and make decisions about how to move and form their territory. In some cases,
populations inform themselves of competing groups through observations at
distances, scent markings, or memories of locations where an individual has
encountered competing populations. As the process of gathering this information
is inherently nonlocal, mechanistic models that include nonlocal terms have
been proposed to investigate the movement of species. Naturally, these models
present analytical and computational challenges. In this work we study a
multi-species model with nonlocal advection. We introduce an efficient
numerical scheme using spectral methods to compute solutions of a nonlocal
reaction-advection-diffusion system for a large number of interacting species.
Moreover, we investigate the effects that the parameters and interaction
potentials have on the population densities. Finally, we propose a method using
maximum likelihood estimation to determine the most important factors driving
species' movements and test this method using synthetic data.
| [
{
"created": "Tue, 15 Nov 2022 21:01:19 GMT",
"version": "v1"
}
] | 2022-11-17 | [
[
"Ellefsen",
"Erin",
""
],
[
"Rodriguez",
"Nancy",
""
]
] | Animals use various processes to inform themselves about their environment and make decisions about how to move and form their territory. In some cases, populations inform themselves of competing groups through observations at distances, scent markings, or memories of locations where an individual has encountered competing populations. As the process of gathering this information is inherently nonlocal, mechanistic models that include nonlocal terms have been proposed to investigate the movement of species. Naturally, these models present analytical and computational challenges. In this work we study a multi-species model with nonlocal advection. We introduce an efficient numerical scheme using spectral methods to compute solutions of a nonlocal reaction-advection-diffusion system for a large number of interacting species. Moreover, we investigate the effects that the parameters and interaction potentials have on the population densities. Finally, we propose a method using maximum likelihood estimation to determine the most important factors driving species' movements and test this method using synthetic data. |
2209.08267 | Te Wu | Te Wu, Feng Fu, and Long Wang | Evolutionary games and spatial periodicity | 35 pages, 10 figures, and supplementary information | null | null | null | q-bio.PE nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We establish a theoretical framework to address evolutionary dynamics of
spatial games under strong selection. As the selection intensity tends to
infinity, strategy competition unfolds in the deterministic way of winners
taking all. We rigorously prove that the evolutionary process soon or later
either enters a cycle and from then on repeats the cycle periodically, or
stabilizes at some state almost everywhere. This conclusion holds for any
population graph and a large class of finite games. This framework suffices to
reveal the underlying mathematical rationale for the kaleidoscopic cooperation
of Nowak and May's pioneering work on spatial games: highly symmetric starting
configuration causes a very long transient phase covering a large number of
extremely beautiful spatial patterns. For all starting configurations, spatial
patterns transit definitely over generations, so cooperators and defectors
persist definitely. This framework can be extended to explore games including
the snowdrift game, the public goods games (with or without loner, punishment),
and repeated games on graphs. Aspiration dynamics can also be fully addressed
when players deterministically switch strategy for unmet aspirations by virtue
of our framework. Our results have potential implications for exploring the
dynamics of a large variety of spatially extended systems in biology and
physics.
| [
{
"created": "Sat, 17 Sep 2022 07:06:56 GMT",
"version": "v1"
}
] | 2022-09-20 | [
[
"Wu",
"Te",
""
],
[
"Fu",
"Feng",
""
],
[
"Wang",
"Long",
""
]
] | We establish a theoretical framework to address evolutionary dynamics of spatial games under strong selection. As the selection intensity tends to infinity, strategy competition unfolds in the deterministic way of winners taking all. We rigorously prove that the evolutionary process sooner or later either enters a cycle and from then on repeats the cycle periodically, or stabilizes at some state almost everywhere. This conclusion holds for any population graph and a large class of finite games. This framework suffices to reveal the underlying mathematical rationale for the kaleidoscopic cooperation of Nowak and May's pioneering work on spatial games: highly symmetric starting configuration causes a very long transient phase covering a large number of extremely beautiful spatial patterns. For all starting configurations, spatial patterns transit definitely over generations, so cooperators and defectors persist definitely. This framework can be extended to explore games including the snowdrift game, the public goods games (with or without loner, punishment), and repeated games on graphs. Aspiration dynamics can also be fully addressed when players deterministically switch strategy for unmet aspirations by virtue of our framework. Our results have potential implications for exploring the dynamics of a large variety of spatially extended systems in biology and physics.
2109.00824 | Tsvi Tlusty | Somya Mani and Tsvi Tlusty | de novo gene birth as an inevitable consequence of adaptive evolution | null | null | null | null | q-bio.PE physics.bio-ph q-bio.GN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Phylostratigraphy suggests that new genes are continually born de novo from
non-genic sequences, and the genes that persist found new lineages,
contributing to the adaptive evolution of organisms. While recent evidence
supports the view that de novo gene birth is frequent and widespread, the
mechanisms underlying this process are yet to be discovered. Here we
hypothesize and examine a potential general mechanism of gene birth driven by
the accumulation of beneficial mutations at non-genic loci. To demonstrate this
possibility, we model this mechanism within the boundaries set by current
knowledge on mutation effects. Estimates from this analysis are in line with
observations of recurrent and extensive gene birth in genomics studies. Thus,
we propose that, rather than being inactive and silent, non-genic regions are
likely to be dynamic storehouses of potential genes.
| [
{
"created": "Thu, 2 Sep 2021 10:09:41 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Sep 2021 10:34:54 GMT",
"version": "v2"
},
{
"created": "Wed, 6 Oct 2021 04:40:30 GMT",
"version": "v3"
}
] | 2021-10-07 | [
[
"Mani",
"Somya",
""
],
[
"Tlusty",
"Tsvi",
""
]
] | Phylostratigraphy suggests that new genes are continually born de novo from non-genic sequences, and the genes that persist found new lineages, contributing to the adaptive evolution of organisms. While recent evidence supports the view that de novo gene birth is frequent and widespread, the mechanisms underlying this process are yet to be discovered. Here we hypothesize and examine a potential general mechanism of gene birth driven by the accumulation of beneficial mutations at non-genic loci. To demonstrate this possibility, we model this mechanism within the boundaries set by current knowledge on mutation effects. Estimates from this analysis are in line with observations of recurrent and extensive gene birth in genomics studies. Thus, we propose that, rather than being inactive and silent, non-genic regions are likely to be dynamic storehouses of potential genes. |
0906.5173 | Bob Eisenberg | Bob Eisenberg | Self-organized Models of Selectivity in Ca and Na Channels | Version of
http://www.ima.umn.edu/2008-2009/W12.8-12.08/abstracts.html, talk given at
the Institute for Mathematics and its Applications, University of Minnesota,
November 19, 2008. Abstract published in Biophysical Journal, Volume 96,
Issue 3, 253a | null | 10.1016/j.bpj.2008.12.1247 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A simple pillbox model with two adjustable parameters accounts for the
selectivity of both DEEA Ca channels and DEKA Na channels in many ionic
solutions of different composition and concentration. Only the side chains are
different in the model of the Ca and Na channels. Parameters are the same for
both channels in all solutions. 'Pauling' radii are used for ions. No
information from crystal structures is used in the model. Side chains are
grossly approximated as spheres. The predicted properties of the Na and Ca
channels are very different. How can such a simple model give such powerful
results when chemical intuition says that selectivity depends on the precise
relation of ions and side chains? We use Monte Carlo simulations of this model
that determine the most stable-lowest free energy-structure of the ions and
side chains. Structure is the computed consequence of the forces in this model.
The relationship of ions and side chains varies with ionic solution and is very
different in simulations of the Na and Ca channels. Selectivity is a
consequence of the 'induced fit' of side chains to ions and depends on the
flexibility (entropy) of the side chains as well as their location. The model
captures the relation of side chains and ions well enough to account for
selectivity of both Na channels and Ca channels in the wide range of conditions
measured in experiments. Evidently, the structures in the real Na and Ca
channels responsible for selectivity are self-organized, at their free energy
minimum. Oversimplified models are enough to account for selectivity if the
models calculate the 'most stable' structure as it changes from solution to
solution, and mutation to mutation.
| [
{
"created": "Sun, 28 Jun 2009 22:17:44 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Eisenberg",
"Bob",
""
]
] | A simple pillbox model with two adjustable parameters accounts for the selectivity of both DEEA Ca channels and DEKA Na channels in many ionic solutions of different composition and concentration. Only the side chains are different in the model of the Ca and Na channels. Parameters are the same for both channels in all solutions. 'Pauling' radii are used for ions. No information from crystal structures is used in the model. Side chains are grossly approximated as spheres. The predicted properties of the Na and Ca channels are very different. How can such a simple model give such powerful results when chemical intuition says that selectivity depends on the precise relation of ions and side chains? We use Monte Carlo simulations of this model that determine the most stable-lowest free energy-structure of the ions and side chains. Structure is the computed consequence of the forces in this model. The relationship of ions and side chains varies with ionic solution and is very different in simulations of the Na and Ca channels. Selectivity is a consequence of the 'induced fit' of side chains to ions and depends on the flexibility (entropy) of the side chains as well as their location. The model captures the relation of side chains and ions well enough to account for selectivity of both Na channels and Ca channels in the wide range of conditions measured in experiments. Evidently, the structures in the real Na and Ca channels responsible for selectivity are self-organized, at their free energy minimum. Oversimplified models are enough to account for selectivity if the models calculate the 'most stable' structure as it changes from solution to solution, and mutation to mutation. |
0909.3691 | Yaniv Erlich | Yaniv Erlich, Assaf Gordon, Michael Brand, Gregory J. Hannon and
Partha P. Mitra | Compressed Genotyping | Submitted to IEEE Transaction on Information Theory - Special Issue
on Molecular Biology and Neuroscience | null | 10.1109/TIT.2009.2037043 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Significant volumes of knowledge have been accumulated in recent years
linking subtle genetic variations to a wide variety of medical disorders from
Cystic Fibrosis to mental retardation. Nevertheless, there are still great
challenges in applying this knowledge routinely in the clinic, largely due to
the relatively tedious and expensive process of DNA sequencing. Since the
genetic polymorphisms that underlie these disorders are relatively rare in the
human population, the presence or absence of a disease-linked polymorphism can
be thought of as a sparse signal. Using methods and ideas from compressed
sensing and group testing, we have developed a cost-effective genotyping
protocol. In particular, we have adapted our scheme to a recently developed
class of high throughput DNA sequencing technologies, and assembled a
mathematical framework that has some important distinctions from 'traditional'
compressed sensing ideas in order to address different biological and technical
constraints.
| [
{
"created": "Mon, 21 Sep 2009 05:53:20 GMT",
"version": "v1"
}
] | 2016-11-17 | [
[
"Erlich",
"Yaniv",
""
],
[
"Gordon",
"Assaf",
""
],
[
"Brand",
"Michael",
""
],
[
"Hannon",
"Gregory J.",
""
],
[
"Mitra",
"Partha P.",
""
]
] | Significant volumes of knowledge have been accumulated in recent years linking subtle genetic variations to a wide variety of medical disorders from Cystic Fibrosis to mental retardation. Nevertheless, there are still great challenges in applying this knowledge routinely in the clinic, largely due to the relatively tedious and expensive process of DNA sequencing. Since the genetic polymorphisms that underlie these disorders are relatively rare in the human population, the presence or absence of a disease-linked polymorphism can be thought of as a sparse signal. Using methods and ideas from compressed sensing and group testing, we have developed a cost-effective genotyping protocol. In particular, we have adapted our scheme to a recently developed class of high throughput DNA sequencing technologies, and assembled a mathematical framework that has some important distinctions from 'traditional' compressed sensing ideas in order to address different biological and technical constraints. |
2205.14747 | Valeriy Ginzburg | Anne V. Ginzburg (Michigan State University), Valeriy V. Ginzburg
(Michigan State University and VVG Consulting LLC), Julia O. Ginzburg (VVG
Physics Consulting LLC), Ana Garcia Arias (Central Michigan University), and
Leela Rakesh (Central Michigan University) | Modeling the Dynamics of the Coronavirus SARS-CoV-2 Pandemic using
Modified SIR Model with the 'Damped-Oscillator' Dynamics of the Effective
Reproduction Number | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The COVID-19 pandemic has been a great catastrophe that upended human lives
and caused millions of deaths all over the world. The rapid spread of the
virus, with its early-stage exponential growth and subsequent 'waves', caught
many medical professionals and decision-makers unprepared. Even though
epidemiological models have been known for almost a century (since the 'Spanish
Influenza' pandemic of 1918-20), the real-life spread of the SARS-CoV-2 virus
often confounded the modelers. While the general framework of epidemiological
models like SEIR (susceptible-exposed-infected-recovered) or SIR
(susceptible-infected-recovered) was not in question, the behavior of model
parameters turned out to be unpredictable and complicated. In particular, while
the 'basic' reproduction number, R0, can be considered a constant (for the
original SARS-CoV-2 virus, prior to the emergence of variants, R0 is between
2.5 and 3.0), the 'effective' reproduction number, R(t), was a complex function
of time, influenced by human behavior in response to the pandemic (e.g.,
masking, lockdowns, transition to remote work, etc.) To better understand these
phenomena, we model the first year of the pandemic (between February 2020 and
February 2021) for a number of localities (fifty US states, as well as several
countries) using a simple SIR model. We show that the evolution of the pandemic
can be described quite successfully by assuming that R(t) behaves in a
'viscoelastic' manner, as a sum of two or three 'damped oscillators' with
different natural frequencies and damping coefficients. These oscillators
likely correspond to different sub-populations having different reactions to
proposed mitigation measures. The proposed approach can offer future data
modelers new ways to fit the reproduction number evolution with time (as
compared to the purely data-driven approaches most prevalent today).
| [
{
"created": "Sun, 29 May 2022 19:42:23 GMT",
"version": "v1"
}
] | 2022-05-31 | [
[
"Ginzburg",
"Anne V.",
"",
"Michigan State University"
],
[
"Ginzburg",
"Valeriy V.",
"",
"Michigan State University and VVG Consulting LLC"
],
[
"Ginzburg",
"Julia O.",
"",
"VVG\n Physics Consulting LLC"
],
[
"Arias",
"Ana Garcia",
"",
... | The COVID-19 pandemic has been a great catastrophe that upended human lives and caused millions of deaths all over the world. The rapid spread of the virus, with its early-stage exponential growth and subsequent 'waves', caught many medical professionals and decision-makers unprepared. Even though epidemiological models have been known for almost a century (since the 'Spanish Influenza' pandemic of 1918-20), the real-life spread of the SARS-CoV-2 virus often confounded the modelers. While the general framework of epidemiological models like SEIR (susceptible-exposed-infected-recovered) or SIR (susceptible-infected-recovered) was not in question, the behavior of model parameters turned out to be unpredictable and complicated. In particular, while the 'basic' reproduction number, R0, can be considered a constant (for the original SARS-CoV-2 virus, prior to the emergence of variants, R0 is between 2.5 and 3.0), the 'effective' reproduction number, R(t), was a complex function of time, influenced by human behavior in response to the pandemic (e.g., masking, lockdowns, transition to remote work, etc.) To better understand these phenomena, we model the first year of the pandemic (between February 2020 and February 2021) for a number of localities (fifty US states, as well as several countries) using a simple SIR model. We show that the evolution of the pandemic can be described quite successfully by assuming that R(t) behaves in a 'viscoelastic' manner, as a sum of two or three 'damped oscillators' with different natural frequencies and damping coefficients. These oscillators likely correspond to different sub-populations having different reactions to proposed mitigation measures. The proposed approach can offer future data modelers new ways to fit the reproduction number evolution with time (as compared to the purely data-driven approaches most prevalent today). |
1902.09345 | Thomas McGrath | Thomas M McGrath, Eleanor Spreckley, Aina Fernandez Rodriguez, Carlo
Viscomi, Amin Alamshah, Elina Akalestou, Kevin G Murphy, Nick S Jones | The homeostatic dynamics of feeding behaviour identify novel mechanisms
of anorectic agents | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Better understanding of feeding behaviour will be vital in reducing obesity
and metabolic syndrome, but we lack a standard model that captures the
complexity of feeding behaviour. We construct an accurate stochastic model of
rodent feeding at the bout level in order to perform quantitative behavioural
analysis. Analysing the different effects on feeding behaviour of PYY 3-36,
lithium chloride, GLP-1 and leptin shows the precise behavioural changes caused
by each anorectic agent, and demonstrates that these changes do not mimic
satiety. In the ad libitum fed state during the light period, meal initiation
is governed by complete stomach emptying, whereas in all other conditions there
is a graduated response. We show how robust homeostatic control of feeding
thwarts attempts to reduce food intake, and how this might be overcome. In
silico experiments suggest that introducing a minimum intermeal interval or
modulating gastric emptying can be as effective as anorectic drug
administration.
| [
{
"created": "Fri, 22 Feb 2019 11:57:59 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Mar 2019 16:47:53 GMT",
"version": "v2"
},
{
"created": "Thu, 9 May 2019 09:16:59 GMT",
"version": "v3"
}
] | 2019-05-10 | [
[
"McGrath",
"Thomas M",
""
],
[
"Spreckley",
"Eleanor",
""
],
[
"Rodriguez",
"Aina Fernandez",
""
],
[
"Viscomi",
"Carlo",
""
],
[
"Alamshah",
"Amin",
""
],
[
"Akalestou",
"Elina",
""
],
[
"Murphy",
"Kevin G",
... | Better understanding of feeding behaviour will be vital in reducing obesity and metabolic syndrome, but we lack a standard model that captures the complexity of feeding behaviour. We construct an accurate stochastic model of rodent feeding at the bout level in order to perform quantitative behavioural analysis. Analysing the different effects on feeding behaviour of PYY 3-36, lithium chloride, GLP-1 and leptin shows the precise behavioural changes caused by each anorectic agent, and demonstrates that these changes do not mimic satiety. In the ad libitum fed state during the light period, meal initiation is governed by complete stomach emptying, whereas in all other conditions there is a graduated response. We show how robust homeostatic control of feeding thwarts attempts to reduce food intake, and how this might be overcome. In silico experiments suggest that introducing a minimum intermeal interval or modulating gastric emptying can be as effective as anorectic drug administration. |
2007.09780 | Anurag Mishra | Abhishek Srivastava, Anurag Mishra, Trusha Jayant Parekh, Sampreeti
Jena | Implementing Stepped Pooled Testing for Rapid COVID-19 Detection | 6 pages, including three tables and four figures | null | null | null | q-bio.PE stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | COVID-19, a viral respiratory pandemic, has rapidly spread throughout the
globe. Large scale and rapid testing of the population is required to contain
the disease, but such testing is prohibitive in terms of resources, cost and
time. Recently RT-PCR based pooled testing has emerged as a promising way to
boost testing efficiency. We introduce a stepped pooled testing strategy, a
probability driven approach which significantly reduces the number of tests
required to identify infected individuals in a large population. Our
comprehensive methodology incorporates the effect of false negative and
positive rates to accurately determine not only the efficiency of pooling but
also its accuracy. Under various plausible scenarios, we show that this
approach significantly reduces the cost of testing and also reduces the
effective false positive rate of tests when compared to a strategy of testing
every individual of a population. We also outline an optimization strategy to
obtain the pool size that maximizes the efficiency of pooling given the
diagnostic protocol parameters and local infection conditions.
| [
{
"created": "Sun, 19 Jul 2020 20:47:02 GMT",
"version": "v1"
}
] | 2020-07-21 | [
[
"Srivastava",
"Abhishek",
""
],
[
"Mishra",
"Anurag",
""
],
[
"Parekh",
"Trusha Jayant",
""
],
[
"Jena",
"Sampreeti",
""
]
] | COVID-19, a viral respiratory pandemic, has rapidly spread throughout the globe. Large scale and rapid testing of the population is required to contain the disease, but such testing is prohibitive in terms of resources, cost and time. Recently RT-PCR based pooled testing has emerged as a promising way to boost testing efficiency. We introduce a stepped pooled testing strategy, a probability driven approach which significantly reduces the number of tests required to identify infected individuals in a large population. Our comprehensive methodology incorporates the effect of false negative and positive rates to accurately determine not only the efficiency of pooling but also its accuracy. Under various plausible scenarios, we show that this approach significantly reduces the cost of testing and also reduces the effective false positive rate of tests when compared to a strategy of testing every individual of a population. We also outline an optimization strategy to obtain the pool size that maximizes the efficiency of pooling given the diagnostic protocol parameters and local infection conditions. |
0811.3315 | Farshid Mohammad-Rafiee | Davood Norouzi, Farshid Mohammad-Rafiee, Ramin Golestanian | Effect of Bending Anisotropy on the 3D Conformation of Short DNA Loops | null | Phys. Rev. Lett. 101, 168103 (2008) | 10.1103/PhysRevLett.101.168103 | null | q-bio.BM cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The equilibrium three dimensional shape of relatively short loops of DNA is
studied using an elastic model that takes into account anisotropy in bending
rigidities. Using a reasonable estimate for the anisotropy, it is found that
cyclized DNA with lengths that are not integer multiples of the pitch take on
nontrivial shapes that involve bending out of planes and formation of kinks.
The effect of sequence inhomogeneity on the shape of DNA is addressed, and
shown to enhance the geometrical features. These findings could shed some light
on the role of DNA conformation in protein--DNA interactions.
| [
{
"created": "Thu, 20 Nov 2008 11:40:43 GMT",
"version": "v1"
}
] | 2008-11-24 | [
[
"Norouzi",
"Davood",
""
],
[
"Mohammad-Rafiee",
"Farshid",
""
],
[
"Golestanian",
"Ramin",
""
]
] | The equilibrium three dimensional shape of relatively short loops of DNA is studied using an elastic model that takes into account anisotropy in bending rigidities. Using a reasonable estimate for the anisotropy, it is found that cyclized DNA with lengths that are not integer multiples of the pitch take on nontrivial shapes that involve bending out of planes and formation of kinks. The effect of sequence inhomogeneity on the shape of DNA is addressed, and shown to enhance the geometrical features. These findings could shed some light on the role of DNA conformation in protein--DNA interactions. |
1907.00118 | Colin Targonski | Colin Targonski, Benjamin T. Shealy, Melissa C. Smith, F. Alex Feltus | Cellular State Transformations using Generative Adversarial Networks | 11 pages, 5 figures | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel method to unite deep learning with biology by which
generative adversarial networks (GANs) generate transcriptome perturbations and
reveal condition-defining gene expression patterns. We find that a generator
conditioned to perturb any input gene expression profile simulates a realistic
transition between source and target RNA expression states. The perturbed
samples follow a similar distribution to original samples from the dataset,
also suggesting these are biologically meaningful perturbations. Finally, we
show that it is possible to identify the genes most positively and negatively
perturbed by the generator and that the enriched biological function of the
perturbed genes are realistic. We call the framework the Transcriptome State
Perturbation Generator (TSPG), which is open source software available at
https://github.com/ctargon/TSPG.
| [
{
"created": "Fri, 28 Jun 2019 23:59:57 GMT",
"version": "v1"
}
] | 2019-07-02 | [
[
"Targonski",
"Colin",
""
],
[
"Shealy",
"Benjamin T.",
""
],
[
"Smith",
"Melissa C.",
""
],
[
"Feltus",
"F. Alex",
""
]
] | We introduce a novel method to unite deep learning with biology by which generative adversarial networks (GANs) generate transcriptome perturbations and reveal condition-defining gene expression patterns. We find that a generator conditioned to perturb any input gene expression profile simulates a realistic transition between source and target RNA expression states. The perturbed samples follow a similar distribution to original samples from the dataset, also suggesting these are biologically meaningful perturbations. Finally, we show that it is possible to identify the genes most positively and negatively perturbed by the generator and that the enriched biological function of the perturbed genes are realistic. We call the framework the Transcriptome State Perturbation Generator (TSPG), which is open source software available at https://github.com/ctargon/TSPG. |
1805.05696 | Luka Ribar | Luka Ribar, Rodolphe Sepulchre | Neuromodulation of Neuromorphic Circuits | null | IEEE Transactions on Circuits and Systems I: Regular Papers, vol.
66, no. 8, pp. 3028-3040, Aug. 2019 | 10.1109/TCSI.2019.2907113 | null | q-bio.NC cs.NE cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel methodology to enable control of a neuromorphic circuit in
close analogy with the physiological neuromodulation of a single neuron. The
methodology is general in that it only relies on a parallel interconnection of
elementary voltage-controlled current sources. In contrast to controlling a
nonlinear circuit through the parameter tuning of a state-space model, our
approach is purely input-output. The circuit elements are controlled and
interconnected to shape the current-voltage characteristics (I-V curves) of the
circuit in prescribed timescales. In turn, shaping those I-V curves determines
the excitability properties of the circuit. We show that this methodology
enables both robust and accurate control of the circuit behavior and resembles
the biophysical mechanisms of neuromodulation. As a proof of concept, we
simulate a SPICE model composed of MOSFET transconductance amplifiers operating
in the weak inversion regime.
| [
{
"created": "Tue, 15 May 2018 10:43:18 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Oct 2018 14:13:15 GMT",
"version": "v2"
},
{
"created": "Thu, 21 Mar 2019 19:49:20 GMT",
"version": "v3"
}
] | 2020-11-19 | [
[
"Ribar",
"Luka",
""
],
[
"Sepulchre",
"Rodolphe",
""
]
] | We present a novel methodology to enable control of a neuromorphic circuit in close analogy with the physiological neuromodulation of a single neuron. The methodology is general in that it only relies on a parallel interconnection of elementary voltage-controlled current sources. In contrast to controlling a nonlinear circuit through the parameter tuning of a state-space model, our approach is purely input-output. The circuit elements are controlled and interconnected to shape the current-voltage characteristics (I-V curves) of the circuit in prescribed timescales. In turn, shaping those I-V curves determines the excitability properties of the circuit. We show that this methodology enables both robust and accurate control of the circuit behavior and resembles the biophysical mechanisms of neuromodulation. As a proof of concept, we simulate a SPICE model composed of MOSFET transconductance amplifiers operating in the weak inversion regime. |
1512.00843 | Sheng Wang | Sheng Wang, Jian Peng, Jianzhu Ma and Jinbo Xu | Protein secondary structure prediction using deep convolutional neural
fields | null | null | null | null | q-bio.BM cs.LG q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein secondary structure (SS) prediction is important for studying protein
structure and function. When only the sequence (profile) information is used as
input feature, currently the best predictors can obtain ~80% Q3 accuracy, which
has not been improved in the past decade. Here we present DeepCNF (Deep
Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep
Learning extension of Conditional Neural Fields (CNF), which is an integration
of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can
model not only complex sequence-structure relationship by a deep hierarchical
architecture, but also interdependency between adjacent SS labels, so it is
much more powerful than CNF. Experimental results show that DeepCNF can obtain
~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the
CASP and CAMEO test proteins, greatly outperforming currently popular
predictors. As a general framework, DeepCNF can be used to predict other
protein structure properties such as contact number, disorder regions, and
solvent accessibility.
| [
{
"created": "Wed, 2 Dec 2015 01:53:57 GMT",
"version": "v1"
},
{
"created": "Thu, 10 Dec 2015 17:01:59 GMT",
"version": "v2"
},
{
"created": "Fri, 11 Dec 2015 03:16:55 GMT",
"version": "v3"
}
] | 2015-12-14 | [
[
"Wang",
"Sheng",
""
],
[
"Peng",
"Jian",
""
],
[
"Ma",
"Jianzhu",
""
],
[
"Xu",
"Jinbo",
""
]
] | Protein secondary structure (SS) prediction is important for studying protein structure and function. When only the sequence (profile) information is used as input feature, currently the best predictors can obtain ~80% Q3 accuracy, which has not been improved in the past decade. Here we present DeepCNF (Deep Convolutional Neural Fields) for protein SS prediction. DeepCNF is a Deep Learning extension of Conditional Neural Fields (CNF), which is an integration of Conditional Random Fields (CRF) and shallow neural networks. DeepCNF can model not only complex sequence-structure relationship by a deep hierarchical architecture, but also interdependency between adjacent SS labels, so it is much more powerful than CNF. Experimental results show that DeepCNF can obtain ~84% Q3 accuracy, ~85% SOV score, and ~72% Q8 accuracy, respectively, on the CASP and CAMEO test proteins, greatly outperforming currently popular predictors. As a general framework, DeepCNF can be used to predict other protein structure properties such as contact number, disorder regions, and solvent accessibility. |
q-bio/0507043 | Heiko Rieger | D.-S. Lee, H. Rieger, K. Bartha | Flow correlated percolation during vascular network formation in tumors | 4 pages, 3 figures (higher resolution at
http://www.uni-saarland.de/fak7/rieger/HOMEPAGE/flow.eps) | Phys. Rev. Lett. 96, 058104 (2006) | 10.1103/PhysRevLett.96.058104 | null | q-bio.TO | null | A theoretical model based on the molecular interactions between a growing
tumor and a dynamically evolving blood vessel network describes the
transformation of the regular vasculature in normal tissues into a highly
inhomogeneous tumor specific capillary network. The emerging morphology,
characterized by the compartmentalization of the tumor into several regions
differing in vessel density, diameter and necrosis, is in accordance with
experimental data for human melanoma. Vessel collapse due to a combination of
severely reduced blood flow and solid stress exerted by the tumor leads to a
correlated percolation process that is driven towards criticality by the
mechanism of hydrodynamic vessel stabilization.
| [
{
"created": "Thu, 28 Jul 2005 13:53:44 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Lee",
"D. -S.",
""
],
[
"Rieger",
"H.",
""
],
[
"Bartha",
"K.",
""
]
] | A theoretical model based on the molecular interactions between a growing tumor and a dynamically evolving blood vessel network describes the transformation of the regular vasculature in normal tissues into a highly inhomogeneous tumor specific capillary network. The emerging morphology, characterized by the compartmentalization of the tumor into several regions differing in vessel density, diameter and necrosis, is in accordance with experimental data for human melanoma. Vessel collapse due to a combination of severely reduced blood flow and solid stress exerted by the tumor, leads to a correlated percolation process that is driven towards criticality by the mechanism of hydrodynamic vessel stabilization. |
1001.4972 | Elke Markert | Elke K. Markert, Nils Baas, Arnold J. Levine, Alexei Vazquez | Higher Order Boolean networks as models of cell state dynamics | 26 pages, 5 figures | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The regulation of the cell state is a complex process involving several
components. These complex dynamics can be modeled using Boolean networks,
allowing us to explain the existence of different cell states and the
transition between them. Boolean models have been introduced both as specific
examples and as ensemble or distribution network models. However, current
ensemble Boolean network models do not make a systematic distinction between
different cell components such as epigenetic factors, gene and transcription
factors. Consequently, we still do not understand their relative contributions
in controlling the cell fate. In this work we introduce and study higher order
Boolean networks, which feature an explicit distinction between the different
cell components and the types of interactions between them. We show that the
stability of the cell state dynamics can be determined by solving the eigenvalue
problem of a matrix representing the regulatory interactions and their
strengths. The qualitative analysis of this problem indicates that, in addition
to the classification into stable and chaotic regimes, the cell state can be
simple or complex depending on whether it can be deduced from the independent
study of its components or not. Finally, we illustrate how the model can be
expanded considering higher levels and higher order dynamics.
| [
{
"created": "Wed, 27 Jan 2010 15:29:27 GMT",
"version": "v1"
}
] | 2010-01-28 | [
[
"Markert",
"Elke K.",
""
],
[
"Baas",
"Nils",
""
],
[
"Levine",
"Arnold J.",
""
],
[
"Vazquez",
"Alexei",
""
]
] | The regulation of the cell state is a complex process involving several components. These complex dynamics can be modeled using Boolean networks, allowing us to explain the existence of different cell states and the transition between them. Boolean models have been introduced both as specific examples and as ensemble or distribution network models. However, current ensemble Boolean network models do not make a systematic distinction between different cell components such as epigenetic factors, gene and transcription factors. Consequently, we still do not understand their relative contributions in controlling the cell fate. In this work we introduce and study higher order Boolean networks, which feature an explicit distinction between the different cell components and the types of interactions between them. We show that the stability of the cell state dynamics can be determined solving the eigenvalue problem of a matrix representing the regulatory interactions and their strengths. The qualitative analysis of this problem indicates that, in addition to the classification into stable and chaotic regimes, the cell state can be simple or complex depending on whether it can be deduced from the independent study of its components or not. Finally, we illustrate how the model can be expanded considering higher levels and higher order dynamics. |
1701.07940 | Pavel Sumazin | Matteo Manica, Hyunjae Ryan Kim, Roland Mathis, Philippe Chouvarine,
Dorothea Rutishauser, Laura De Vargas Roditi, Bence Szalai, Ulrich Wagner,
Kathrin Oehl, Karim Saba, Arati Pati, Julio Saez-Rodriguez, Angshumoy Roy,
Donald W. Parsons, Peter J. Wild, Mar\'ia Rodr\'iguez Mart\'inez, Pavel
Sumazin | Inferring clonal composition from multiple tumor biopsies | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Explicit accounting for copy number alterations can dramatically improve
mutation frequency estimates, leading to more accurate phylogeny
reconstructions and subclone characterizations.
| [
{
"created": "Fri, 27 Jan 2017 04:32:45 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Nov 2019 19:22:19 GMT",
"version": "v2"
}
] | 2019-11-26 | [
[
"Manica",
"Matteo",
""
],
[
"Kim",
"Hyunjae Ryan",
""
],
[
"Mathis",
"Roland",
""
],
[
"Chouvarine",
"Philippe",
""
],
[
"Rutishauser",
"Dorothea",
""
],
[
"Roditi",
"Laura De Vargas",
""
],
[
"Szalai",
"Bence"... | Explicit accounting for copy number alterations can dramatically improve mutation frequency estimates, leading to more accurate phylogeny reconstructions and subclone characterizations. |
2008.09597 | Nicole Eikmeier | Riti Bahl, Nicole Eikmeier, Alexandra Fraser, Matthew Junge, Felicia
Keesing, Kukai Nakahata, and Lily Z. Wang | Modeling COVID-19 Spread in Small Colleges | 17 pages, 8 figures, 5 tables | null | 10.1371/journal.pone.0255654 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop an agent-based model on a network meant to capture features unique
to COVID-19 spread through a small residential college. We find that a safe
reopening requires strong policy from administrators combined with cautious
behavior from students. Strong policy includes weekly screening tests with
quick turnaround and halving the campus population. Cautious behavior from
students means wearing facemasks, socializing less, and showing up for COVID-19
testing. We also find that comprehensive testing and facemasks are the most
effective single interventions, building closures can lead to infection spikes
in other areas depending on student behavior, and faster return of test results
significantly reduces total infections.
| [
{
"created": "Fri, 21 Aug 2020 17:52:27 GMT",
"version": "v1"
}
] | 2021-09-15 | [
[
"Bahl",
"Riti",
""
],
[
"Eikmeier",
"Nicole",
""
],
[
"Fraser",
"Alexandra",
""
],
[
"Junge",
"Matthew",
""
],
[
"Keesing",
"Felicia",
""
],
[
"Nakahata",
"Kukai",
""
],
[
"Wang",
"Lily Z.",
""
]
] | We develop an agent-based model on a network meant to capture features unique to COVID-19 spread through a small residential college. We find that a safe reopening requires strong policy from administrators combined with cautious behavior from students. Strong policy includes weekly screening tests with quick turnaround and halving the campus population. Cautious behavior from students means wearing facemasks, socializing less, and showing up for COVID-19 testing. We also find that comprehensive testing and facemasks are the most effective single interventions, building closures can lead to infection spikes in other areas depending on student behavior, and faster return of test results significantly reduces total infections. |
1104.4235 | Pau Ru\'e | Pau Ru\'e, G\"urol M. S\"uel and Jordi Garcia-Ojalvo | Optimizing periodicity and polymodality in noise-induced genetic
oscillators | 9 pages, 6 figures | Phys. Rev. E 83, 061904 (2011) | 10.1103/PhysRevE.83.061904 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many cellular functions are based on the rhythmic organization of biological
processes into self-repeating cascades of events. Some of these periodic
processes, such as the cell cycles of several species, exhibit conspicuous
irregularities in the form of period skippings, which lead to polymodal
distributions of cycle lengths. A recently proposed mechanism that accounts for
this quantized behavior is the stabilization of a Hopf-unstable state by
molecular noise. Here we investigate the effect of varying noise in a model
system, namely an excitable activator-repressor genetic circuit, that displays
this noise-induced stabilization effect. Our results show that an optimal noise
level enhances the regularity (coherence) of the cycles, in a form of coherence
resonance. Similar noise levels also optimize the multimodal nature of the
cycle lengths. Together, these results illustrate how molecular noise within a
minimal gene regulatory motif confers robust generation of polymodal patterns
of periodicity.
| [
{
"created": "Thu, 21 Apr 2011 11:36:19 GMT",
"version": "v1"
}
] | 2011-09-30 | [
[
"Rué",
"Pau",
""
],
[
"Süel",
"Gürol M.",
""
],
[
"Garcia-Ojalvo",
"Jordi",
""
]
] | Many cellular functions are based on the rhythmic organization of biological processes into self-repeating cascades of events. Some of these periodic processes, such as the cell cycles of several species, exhibit conspicuous irregularities in the form of period skippings, which lead to polymodal distributions of cycle lengths. A recently proposed mechanism that accounts for this quantized behavior is the stabilization of a Hopf-unstable state by molecular noise. Here we investigate the effect of varying noise in a model system, namely an excitable activator-repressor genetic circuit, that displays this noise-induced stabilization effect. Our results show that an optimal noise level enhances the regularity (coherence) of the cycles, in a form of coherence resonance. Similar noise levels also optimize the multimodal nature of the cycle lengths. Together, these results illustrate how molecular noise within a minimal gene regulatory motif confers robust generation of polymodal patterns of periodicity. |
1212.3888 | Jeremy Sumner | Jeremy G. Sumner, Peter D. Jarvis, and Barbara R. Holland | A tensorial approach to the inversion of group-based phylogenetic models | 24 pages, 2 figures | null | null | null | q-bio.PE math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using a tensorial approach, we show how to construct a one-one correspondence
between pattern probabilities and edge parameters for any group-based model.
This is a generalisation of the "Hadamard conjugation" and is equivalent to
standard results that use Fourier analysis. In our derivation we focus on the
connections to group representation theory and emphasize that the inversion is
possible because, under their usual definition, group-based models are defined
for abelian groups only. We also argue that our approach is elementary in the
sense that it can be understood as simple matrix multiplication where matrices
are rectangular and indexed by ordered-partitions of varying sizes.
| [
{
"created": "Mon, 17 Dec 2012 05:09:34 GMT",
"version": "v1"
}
] | 2012-12-18 | [
[
"Sumner",
"Jeremy G.",
""
],
[
"Jarvis",
"Peter D.",
""
],
[
"Holland",
"Barbara R.",
""
]
] | Using a tensorial approach, we show how to construct a one-one correspondence between pattern probabilities and edge parameters for any group-based model. This is a generalisation of the "Hadamard conjugation" and is equivalent to standard results that use Fourier analysis. In our derivation we focus on the connections to group representation theory and emphasize that the inversion is possible because, under their usual definition, group-based models are defined for abelian groups only. We also argue that our approach is elementary in the sense that it can be understood as simple matrix multiplication where matrices are rectangular and indexed by ordered-partitions of varying sizes. |
1408.3940 | David A. Kessler | David A. Kessler and Nadav M. Shnerb | A generalized model of island biodiversity | null | null | 10.1103/PhysRevE.91.042705 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dynamics of a local community of competing species with weak immigration
from a static regional pool is studied. Implementing the generalized
competitive Lotka-Volterra model with demographic noise, a rich dynamics
structure with four qualitatively distinct phases is unfolded. When the overall
interspecies competition is weak, the island species are a sample of the
mainland species. For higher values of the competition parameter the system
still admits an equilibrium community, but now some of the mainland species are
absent on the island. Further increase in competition leads to an intermittent
"chaotic" phase, where the dynamics is controlled by invadable combinations of
species and the turnover rate is governed by the migration. Finally, the strong
competition phase is glassy, dominated by uninvadable states and noise-induced
transitions. Our model contains, as a special case, the celebrated neutral
island theories of Wilson-MacArthur and Hubbell. Moreover, we show that slight
deviations from perfect neutrality may lead to each of the phases, as the
Hubbell point appears to be quadracritical.
| [
{
"created": "Mon, 18 Aug 2014 09:03:57 GMT",
"version": "v1"
}
] | 2015-06-22 | [
[
"Kessler",
"David A.",
""
],
[
"Shnerb",
"Nadav M.",
""
]
] | The dynamics of a local community of competing species with weak immigration from a static regional pool is studied. Implementing the generalized competitive Lotka-Volterra model with demographic noise, a rich dynamics structure with four qualitatively distinct phases is unfolded. When the overall interspecies competition is weak, the island species are a sample of the mainland species. For higher values of the competition parameter the system still admits an equilibrium community, but now some of the mainland species are absent on the island. Further increase in competition leads to an intermittent "chaotic" phase, where the dynamics is controlled by invadable combinations of species and the turnover rate is governed by the migration. Finally, the strong competition phase is glassy, dominated by uninvadable states and noise-induced transitions. Our model contains, as a special case, the celebrated neutral island theories of Wilson-MacArthur and Hubbell. Moreover, we show that slight deviations from perfect neutrality may lead to each of the phases, as the Hubbell point appears to be quadracritical. |
1411.1528 | Roger Guimera | Roger Guimera, Marta Sales-Pardo | A network inference method for large-scale unsupervised identification
of novel drug-drug interactions | null | PLoS Comput Biol 9(12): e1003374 (2013) | 10.1371/journal.pcbi.1003374 | null | q-bio.MN cond-mat.dis-nn physics.bio-ph physics.data-an | http://creativecommons.org/licenses/by/3.0/ | Characterizing interactions between drugs is important to avoid potentially
harmful combinations, to reduce off-target effects of treatments and to fight
antibiotic resistant pathogens, among others. Here we present a network
inference algorithm to predict uncharacterized drug-drug interactions. Our
algorithm takes, as its only input, sets of previously reported interactions,
and does not require any pharmacological or biochemical information about the
drugs, their targets or their mechanisms of action. Because the models we use
are abstract, our approach can deal with adverse interactions,
synergistic/antagonistic/suppressing interactions, or any other type of drug
interaction. We show that our method is able to accurately predict
interactions, both in exhaustive pairwise interaction data between small sets
of drugs, and in large-scale databases. We also demonstrate that our algorithm
can be used efficiently to discover interactions of new drugs as part of the
drug discovery process.
| [
{
"created": "Thu, 6 Nov 2014 08:32:47 GMT",
"version": "v1"
}
] | 2014-11-07 | [
[
"Guimera",
"Roger",
""
],
[
"Sales-Pardo",
"Marta",
""
]
] | Characterizing interactions between drugs is important to avoid potentially harmful combinations, to reduce off-target effects of treatments and to fight antibiotic resistant pathogens, among others. Here we present a network inference algorithm to predict uncharacterized drug-drug interactions. Our algorithm takes, as its only input, sets of previously reported interactions, and does not require any pharmacological or biochemical information about the drugs, their targets or their mechanisms of action. Because the models we use are abstract, our approach can deal with adverse interactions, synergistic/antagonistic/suppressing interactions, or any other type of drug interaction. We show that our method is able to accurately predict interactions, both in exhaustive pairwise interaction data between small sets of drugs, and in large-scale databases. We also demonstrate that our algorithm can be used efficiently to discover interactions of new drugs as part of the drug discovery process. |
2108.04951 | Farzaneh Esmaili | Farzaneh Esmaili, Mahdi Pourmirzaei, Shahin Ramazi, Elham Yavari | A Brief Review of Machine Learning Techniques for Protein
Phosphorylation Sites Prediction | null | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Post-translational modifications (PTMs) have vital roles in extending the
functional diversity of proteins and as a result, regulating diverse cellular
processes in prokaryotic and eukaryotic organisms. Phosphorylation modification
is a vital PTM that occurs in most proteins and plays significant roles in many
biological processes. Disorders in the phosphorylation process lead to multiple
diseases including neurological disorders and cancers. At first, this study
comprehensively reviewed all databases related to phosphorylation sites
(p-sites). Secondly, we introduced all steps regarding dataset creation, data
preprocessing and method evaluation in p-sites prediction. Next, we
investigated p-sites prediction methods, which fall into two groups:
computational and Machine Learning (ML). Additionally, it was shown that there
are basically two main approaches for p-sites prediction by ML, conventional
and End-to-End learning, and an overview of both is given. Moreover,
this study introduced the most important feature extraction techniques which
have mostly been used in ML approaches. Finally, we created three test sets
from new proteins related to the 2022 release of the dbPTM database
based on general and human species. After evaluating available online tools on
the test sets, results showed that the performance of online tools for p-sites
prediction is quite weak on newly reported phospho-proteins.
| [
{
"created": "Tue, 10 Aug 2021 22:23:30 GMT",
"version": "v1"
},
{
"created": "Sun, 6 Feb 2022 21:33:46 GMT",
"version": "v2"
}
] | 2022-02-08 | [
[
"Esmaili",
"Farzaneh",
""
],
[
"Pourmirzaei",
"Mahdi",
""
],
[
"Ramazi",
"Shahin",
""
],
[
"Yavari",
"Elham",
""
]
] | Post-translational modifications (PTMs) have vital roles in extending the functional diversity of proteins and as a result, regulating diverse cellular processes in prokaryotic and eukaryotic organisms. Phosphorylation modification is a vital PTM that occurs in most proteins and plays significant roles in many biological processes. Disorders in the phosphorylation process lead to multiple diseases including neurological disorders and cancers. At first, this study comprehensively reviewed all databases related to phosphorylation sites (p-sites). Secondly, we introduced all steps regarding dataset creation, data preprocessing and method evaluation in p-sites prediction. Next, we investigated p-sites prediction methods, which fall into two groups: computational and Machine Learning (ML). Additionally, it was shown that there are basically two main approaches for p-sites prediction by ML, conventional and End-to-End learning, and an overview of both is given. Moreover, this study introduced the most important feature extraction techniques which have mostly been used in ML approaches. Finally, we created three test sets from new proteins related to the 2022 release of the dbPTM database based on general and human species. After evaluating available online tools on the test sets, results showed that the performance of online tools for p-sites prediction is quite weak on newly reported phospho-proteins. |
2210.15207 | Lucie Pellissier | Anil Annamneedi (PRC, BIOS), Caroline Gora (PRC, BIOS), Ana Dudas
(PRC, BIOS), Xavier Leray (PRC, BIOS), V\'eronique Bozon (PRC, BIOS), Pascale
Crepieux (PRC, BIOS), Lucie P. Pellissier (PRC, BIOS) | Towards the convergent therapeutic potential of GPCRs in autism spectrum
disorders | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Changes in genetic and/or environmental factors to developing neural circuits
and subsequent synaptic functions are known to be a causative underlying the
varied socio-emotional behavioural patterns associated with autism spectrum
disorders (ASD). Seven transmembrane G protein-coupled receptors (GPCRs)
comprising the largest family of cell-surface receptors, mediate the transfer
of extracellular signals to downstream cellular responses. Disruption of GPCR
and their signalling have been implicated as a convergent pathologic mechanism
of ASD. Here, we aim to review the literature about the 23 GPCRs that are
genetically associated to ASD pathology according to Simons Foundation Autism
Research Initiative (SFARI) database such as oxytocin (OXTR) and vasopressin
(V1A, V1B) receptors, metabotropic glutamate (mGlu5, mGlu7) and
gamma-aminobutyric acid (GABAB) receptors, dopamine (D1, D2), serotoninergic
(5-HT1B and additionally included the 5-HT2A, 5-HT7 receptors for their strong
relevance to ASD), adrenergic ($\beta$2) and cholinergic (M3) receptors,
adenosine (A2A, A3) receptors, angiotensin (AT2) receptors, cannabinoid (CB1)
receptors, chemokine (CX3CR1) receptors, orphan (GPR37, GPR85) and olfactory
(OR1C1, OR2M4, OR2T10, OR52M1) receptors. We discuss the genetic variants,
their relation to core ASD behavioural deficits, and provide an update on
pharmacological compounds targeting these 23 GPCRs. Of these, OXTR, V1A,
mGlu5, D2, 5-HT2A, CB1, and GPR37 serve as the best therapeutic targets with
potential towards core domains of ASD pathology. Given the functional
crosstalk between different GPCRs and their converging pharmacological
responses, there is an urgent need to develop novel therapeutic strategies
based on multiple GPCRs to reduce the socioeconomic burden associated with
ASD, and we strongly emphasize the need to prioritize increased clinical
trials targeting multiple GPCRs.
| [
{
"created": "Thu, 27 Oct 2022 06:40:50 GMT",
"version": "v1"
}
] | 2022-10-28 | [
[
"Annamneedi",
"Anil",
"",
"PRC, BIOS"
],
[
"Gora",
"Caroline",
"",
"PRC, BIOS"
],
[
"Dudas",
"Ana",
"",
"PRC, BIOS"
],
[
"Leray",
"Xavier",
"",
"PRC, BIOS"
],
[
"Bozon",
"Véronique",
"",
"PRC, BIOS"
],
[
"Crepi... | Changes in genetic and/or environmental factors to developing neural circuits and subsequent synaptic functions are known to be a causative underlying the varied socio-emotional behavioural patterns associated with autism spectrum disorders (ASD). Seven transmembrane G protein-coupled receptors (GPCRs) comprising the largest family of cell-surface receptors, mediate the transfer of extracellular signals to downstream cellular responses. Disruption of GPCR and their signalling have been implicated as a convergent pathologic mechanism of ASD. Here, we aim to review the literature about the 23 GPCRs that are genetically associated to ASD pathology according to Simons Foundation Autism Research Initiative (SFARI) database such as oxytocin (OXTR) and vasopressin (V1A, V1B) receptors, metabotropic glutamate (mGlu5, mGlu7) and gamma-aminobutyric acid (GABAB) receptors, dopamine (D1, D2), serotoninergic (5-HT1B and additionally included the 5-HT2A, 5-HT7 receptors for their strong relevance to ASD), adrenergic ($\beta$2) and cholinergic (M3) receptors, adenosine (A2A, A3) receptors, angiotensin (AT2) receptors, cannabinoid (CB1) receptors, chemokine (CX3CR1) receptors, orphan (GPR37, GPR85) and olfactory (OR1C1, OR2M4, OR2T10, OR52M1) receptors. We discussed the genetic variants, relation to core ASD behavioural deficits and update on pharmacological compounds targeting these 23 GPCRs. Of these OTR, V1A, mGlu5, D2, 5-HT2A, CB1, and GPR37 serve as the best therapeutic targets and have potential towards core domains of ASD pathology. With a functional crosstalk between different GPCRs and converging pharmacological responses, there is an urge to develop novel therapeutic strategies based on multiple GPCRs to reduce the socioeconomic burden associated with ASD and we strongly emphasize the need to prioritize the increased clinical trials targeting the multiple GPCRs. |
1305.4333 | Nachol Chaiyaratana PhD | Anunchai Assawamakin, Nachol Chaiyaratana, Chanin Limwongse, Saravudh
Sinsomros, Pa-thai Yenchitsomanus, Prakarnkiat Youngkong | Variable-length haplotype construction for gene-gene interaction studies | 7 pages, 2 figures | IEEE Engineering in Medicine and Biology Magazine, 28(4), 25-31 | 10.1109/MEMB.2009.932902 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a non-parametric classification technique for identifying
a candidate bi-allelic genetic marker set that best describes disease
susceptibility in gene-gene interaction studies. The developed technique
functions by creating a mapping between inferred haplotypes and case/control
status. The technique cycles through all possible marker combination models
generated from the available marker set where the best interaction model is
determined from prediction accuracy and two auxiliary criteria including
low-to-high order haplotype propagation capability and model parsimony. Since
variable-length haplotypes are created during the best model identification,
the developed technique is referred to as a variable-length haplotype
construction for gene-gene interaction (VarHAP) technique. VarHAP has been
benchmarked against a multifactor dimensionality reduction (MDR) program and a
haplotype interaction technique embedded in a FAMHAP program in various
two-locus interaction problems. The results reveal that VarHAP is suitable for
all interaction situations with the presence of weak and strong linkage
disequilibrium among genetic markers.
| [
{
"created": "Sun, 19 May 2013 07:11:22 GMT",
"version": "v1"
}
] | 2013-05-21 | [
[
"Assawamakin",
"Anunchai",
""
],
[
"Chaiyaratana",
"Nachol",
""
],
[
"Limwongse",
"Chanin",
""
],
[
"Sinsomros",
"Saravudh",
""
],
[
"Yenchitsomanus",
"Pa-thai",
""
],
[
"Youngkong",
"Prakarnkiat",
""
]
] | This paper presents a non-parametric classification technique for identifying a candidate bi-allelic genetic marker set that best describes disease susceptibility in gene-gene interaction studies. The developed technique functions by creating a mapping between inferred haplotypes and case/control status. The technique cycles through all possible marker combination models generated from the available marker set where the best interaction model is determined from prediction accuracy and two auxiliary criteria including low-to-high order haplotype propagation capability and model parsimony. Since variable-length haplotypes are created during the best model identification, the developed technique is referred to as a variable-length haplotype construction for gene-gene interaction (VarHAP) technique. VarHAP has been benchmarked against a multifactor dimensionality reduction (MDR) program and a haplotype interaction technique embedded in a FAMHAP program in various two-locus interaction problems. The results reveal that VarHAP is suitable for all interaction situations with the presence of weak and strong linkage disequilibrium among genetic markers. |
2208.05204 | Jana Massing | Jana C. Massing, Ashkaan Fahimipour, Carina Bunse, Jarone Pinhassi,
Thilo Gross | Quantification of metabolic niche occupancy dynamics in a Baltic Sea
bacterial community | 20 pages, 7 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Progress in molecular methods has enabled the monitoring of bacterial
populations in time. Nevertheless, understanding community dynamics and its
links with ecosystem functioning remains challenging due to the tremendous
diversity of microorganisms. Conceptual frameworks that make sense of
time-series of taxonomically-rich bacterial communities, regarding their
potential ecological function, are needed. A key concept for organizing
ecological functions is the niche, the set of strategies that enable a
population to persist and define its impacts on the surroundings. Here we
present a framework based on manifold learning, to organize genomic information
into potentially occupied bacterial metabolic niches over time. We apply the
method to re-construct the dynamics of putatively occupied metabolic niches
using a long-term bacterial time-series from the Baltic Sea, the Linnaeus
Microbial Observatory (LMO). The results reveal a relatively low-dimensional
space of occupied metabolic niches comprising groups of taxa with similar
functional capabilities. Time patterns of occupied niches were strongly driven
by seasonality. Some metabolic niches were dominated by one bacterial taxon
whereas others were occupied by multiple taxa, and this depended on season.
These results illustrate the power of manifold learning approaches to advance
our understanding of the links between community composition and functioning in
microbial systems.
| [
{
"created": "Wed, 10 Aug 2022 07:57:49 GMT",
"version": "v1"
}
] | 2022-08-11 | [
[
"Massing",
"Jana C.",
""
],
[
"Fahimipour",
"Ashkaan",
""
],
[
"Bunse",
"Carina",
""
],
[
"Pinhassi",
"Jarone",
""
],
[
"Gross",
"Thilo",
""
]
] | Progress in molecular methods has enabled the monitoring of bacterial populations in time. Nevertheless, understanding community dynamics and its links with ecosystem functioning remains challenging due to the tremendous diversity of microorganisms. Conceptual frameworks that make sense of time-series of taxonomically-rich bacterial communities, regarding their potential ecological function, are needed. A key concept for organizing ecological functions is the niche, the set of strategies that enable a population to persist and define its impacts on the surroundings. Here we present a framework based on manifold learning, to organize genomic information into potentially occupied bacterial metabolic niches over time. We apply the method to re-construct the dynamics of putatively occupied metabolic niches using a long-term bacterial time-series from the Baltic Sea, the Linnaeus Microbial Observatory (LMO). The results reveal a relatively low-dimensional space of occupied metabolic niches comprising groups of taxa with similar functional capabilities. Time patterns of occupied niches were strongly driven by seasonality. Some metabolic niches were dominated by one bacterial taxon whereas others were occupied by multiple taxa, and this depended on season. These results illustrate the power of manifold learning approaches to advance our understanding of the links between community composition and functioning in microbial systems. |
1610.03427 | Nicholas Battista | Nicholas A. Battista, Andrea N. Lane, Laura A. Miller | On the dynamic suction pumping of blood cells in tubular hearts | 21 pages, 19 figures | null | 10.1007/978-3-319-60304-9 | null | q-bio.TO physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Around the third week after gestation in embryonic development, the human
heart consists only of a valveless tube, unlike a fully developed adult heart,
which is multi-chambered. At this stage in development, the heart valves have
not formed and so net flow of blood through the heart must be driven by a
different mechanism. It is hypothesized that there are two possible mechanisms
that drive blood flow at this stage - Liebau pumping (dynamic suction pumping
or valveless pumping) and peristaltic pumping. We implement the immersed
boundary method with adaptive mesh refinement (IBAMR) to numerically study the
effect of hematocrit on the circulation around a valveless tube. Both peristalsis
and dynamic suction pumping are considered. In the case of dynamic suction
pumping, the heart and circulatory system are simplified as a flexible tube
attached to a relatively rigid racetrack. For some Womersley number (Wo)
regimes, there is significant net flow around the racetrack. We find that the
addition of flexible blood cells does not significantly affect flow rates
within the tube for Wo $\leq$ 10. On the other hand, peristalsis consistently
drives blood around the racetrack for all Wo and for all hematocrit considered.
| [
{
"created": "Tue, 11 Oct 2016 17:08:46 GMT",
"version": "v1"
}
] | 2018-09-19 | [
[
"Battista",
"Nicholas A.",
""
],
[
"Lane",
"Andrea N.",
""
],
[
"Miller",
"Laura A.",
""
]
] | Around the third week after gestation in embryonic development, the human heart consists only of a valveless tube, unlike a fully developed adult heart, which is multi-chambered. At this stage in development, the heart valves have not formed and so net flow of blood through the heart must be driven by a different mechanism. It is hypothesized that there are two possible mechanisms that drive blood flow at this stage - Liebau pumping (dynamic suction pumping or valveless pumping) and peristaltic pumping. We implement the immersed boundary method with adaptive mesh refinement (IBAMR) to numerically study the effect of hematocrit on the circulation around a valveless tube. Both peristalsis and dynamic suction pumping are considered. In the case of dynamic suction pumping, the heart and circulatory system are simplified as a flexible tube attached to a relatively rigid racetrack. For some Womersley number (Wo) regimes, there is significant net flow around the racetrack. We find that the addition of flexible blood cells does not significantly affect flow rates within the tube for Wo $\leq$ 10. On the other hand, peristalsis consistently drives blood around the racetrack for all Wo and for all hematocrit considered. |
2004.00217 | Arun Sharma Dr | Arun Dev Sharma and Inderjeet Kaur | Molecular docking studies on Jensenone from eucalyptus essential oil as
a potential inhibitor of COVID 19 corona virus infection | 10 p. Research and reviews in biotechnology and biosciences
2020,7,59-66 | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | COVID-19, a member of corona virus family is spreading its tentacles across
the world due to lack of drugs at present. However, the main viral proteinase
(Mpro/3CLpro) has recently been regarded as a suitable target for drug design
against SARS infection due to its vital role in polyproteins processing
necessary for coronavirus reproduction. The present in silico study was
designed to evaluate the effect of Jensenone, an essential oil component from
eucalyptus oil, on Mpro by a docking study. In the present study, molecular
docking studies were conducted by using 1-click dock and swiss dock tools.
Protein interaction mode was calculated by the Protein Interactions Calculator. The
calculated parameters such as binding energy, and binding site similarity
indicated effective binding of Jensenone to COVID-19 proteinase. Active site
prediction further validated the role of active site residues in ligand
binding. PIC results indicated that Mpro/Jensenone complexes form
hydrophobic interactions, hydrogen bond interactions and strong ionic
interactions. Therefore, Jensenone may have therapeutic potential as a
COVID-19 Mpro inhibitor. However, further research is necessary to
investigate its potential medicinal use.
| [
{
"created": "Wed, 1 Apr 2020 03:50:53 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Apr 2020 21:22:11 GMT",
"version": "v2"
}
] | 2020-04-21 | [
[
"Sharma",
"Arun Dev",
""
],
[
"Kaur",
"Inderjeet",
""
]
] | COVID-19, a member of the coronavirus family, is spreading its tentacles across the world due to the lack of drugs at present. However, the main viral proteinase (Mpro/3CLpro) has recently been regarded as a suitable target for drug design against SARS infection due to its vital role in polyprotein processing necessary for coronavirus reproduction. The present in silico study was designed to evaluate the effect of Jensenone, an essential oil component from eucalyptus oil, on Mpro by docking study. In the present study, molecular docking studies were conducted using the 1-click dock and SwissDock tools. Protein interaction mode was calculated by the Protein Interactions Calculator (PIC). The calculated parameters, such as binding energy and binding site similarity, indicated effective binding of Jensenone to the COVID-19 proteinase. Active site prediction further validated the role of active site residues in ligand binding. PIC results indicated that Mpro/Jensenone complexes form hydrophobic interactions, hydrogen bond interactions, and strong ionic interactions. Therefore, Jensenone may have potential to act as a COVID-19 Mpro inhibitor. However, further research is necessary to investigate its potential medicinal use. |
q-bio/0506006 | Olga Issaeva | O.G. Isaeva and V.A. Osipov | Modeling of anti-tumor immune response: immunocorrective effect of weak
centimeter electromagnetic waves | 23 pages, 7 figures. This is a final version of our e-print that was
accepted for publication in the Journal of Computational and Mathematical
Methods in Medicine on July 24,2008 | null | null | null | q-bio.CB q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We formulate the dynamical model for the anti-tumor immune response based on
intercellular cytokine-mediated interactions with the interleukin-2 (IL-2)
taken into account. The analysis shows that the expression level of tumor
antigens on antigen presenting cells has a distinct influence on the tumor
dynamics. At low antigen presentation a progressive tumor growth takes place to
the highest possible value. At high antigen presentation there is a decrease in
tumor size to some value when the dynamical equilibrium between the tumor and
the immune system is reached. In the case of the medium antigen presentation
both these regimes can be realized depending on the initial tumor size and the
condition of the immune system. A pronounced immunomodulating effect (the
suppression of tumor growth and the normalization of IL-2 concentration) is
established by considering the influence of low-intensity electromagnetic
microwaves as a parametric perturbation of the dynamical system. This finding
is in qualitative agreement with the recent experimental results on
immunocorrective effects of centimeter electromagnetic waves in tumor-bearing
mice.
| [
{
"created": "Tue, 7 Jun 2005 14:04:44 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Oct 2005 13:44:16 GMT",
"version": "v2"
},
{
"created": "Tue, 17 Jul 2007 21:28:59 GMT",
"version": "v3"
},
{
"created": "Sat, 26 Jul 2008 11:37:13 GMT",
"version": "v4"
}
] | 2008-07-26 | [
[
"Isaeva",
"O. G.",
""
],
[
"Osipov",
"V. A.",
""
]
] | We formulate the dynamical model for the anti-tumor immune response based on intercellular cytokine-mediated interactions with the interleukin-2 (IL-2) taken into account. The analysis shows that the expression level of tumor antigens on antigen presenting cells has a distinct influence on the tumor dynamics. At low antigen presentation a progressive tumor growth takes place to the highest possible value. At high antigen presentation there is a decrease in tumor size to some value when the dynamical equilibrium between the tumor and the immune system is reached. In the case of the medium antigen presentation both these regimes can be realized depending on the initial tumor size and the condition of the immune system. A pronounced immunomodulating effect (the suppression of tumor growth and the normalization of IL-2 concentration) is established by considering the influence of low-intensity electromagnetic microwaves as a parametric perturbation of the dynamical system. This finding is in qualitative agreement with the recent experimental results on immunocorrective effects of centimeter electromagnetic waves in tumor-bearing mice. |
2011.01069 | Natasha Savage Dr | Natasha S. Savage | Describing the movement of molecules in reduced-dimension models | Main Text. Methods. Supplementary Text. References | null | 10.1038/s42003-021-02200-3 | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | When addressing spatial biological questions using mathematical models,
symmetries within the system are often exploited to simplify the problem by
reducing its physical dimension. In a reduced-dimension model molecular
movement is restricted to the reduced dimension, changing the nature of
molecular movement. This change in molecular movement can lead to
quantitatively and even qualitatively different results in the full and reduced
systems. Within this manuscript we discuss the condition under which restricted
molecular movement in reduced-dimension models accurately approximates
molecular movement in the full system. For those systems which do not satisfy
the condition, we present a general method for approximating unrestricted
molecular movement in reduced-dimension models. We will derive a mathematically
robust, finite difference method for solving the 2D diffusion equation within a
1D reduced-dimension model. The methods described here can be used to improve
the accuracy of many reduced-dimension models while retaining benefits of
system simplification.
| [
{
"created": "Mon, 2 Nov 2020 15:55:56 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Feb 2021 15:27:00 GMT",
"version": "v2"
},
{
"created": "Tue, 16 Feb 2021 10:16:43 GMT",
"version": "v3"
}
] | 2022-08-24 | [
[
"Savage",
"Natasha S.",
""
]
] | When addressing spatial biological questions using mathematical models, symmetries within the system are often exploited to simplify the problem by reducing its physical dimension. In a reduced-dimension model molecular movement is restricted to the reduced dimension, changing the nature of molecular movement. This change in molecular movement can lead to quantitatively and even qualitatively different results in the full and reduced systems. Within this manuscript we discuss the condition under which restricted molecular movement in reduced-dimension models accurately approximates molecular movement in the full system. For those systems which do not satisfy the condition, we present a general method for approximating unrestricted molecular movement in reduced-dimension models. We will derive a mathematically robust, finite difference method for solving the 2D diffusion equation within a 1D reduced-dimension model. The methods described here can be used to improve the accuracy of many reduced-dimension models while retaining benefits of system simplification. |
2011.05862 | Caner Ercan | Yetkin Agackiran, Funda Aksu, Nalan Akyurek, Caner Ercan, Mustafa
Demiroz, Kurtulus Aksu | Programmed death ligand 1 expression levels, clinicopathologic features,
and survival in surgically resected sarcomatoid lung carcinoma | null | Asia Pac J Clin Oncol 2020 1 9 | 10.1111/ajco.13460 | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Aim: To determine the programmed death ligand-1 (PD-L1) expression rates in
sarcomatoid lung carcinomas and to compare clinicopathologic features and
survival rates of PD-L1-positive and negative patients. Methods: PD-L1
expression was evaluated in 65 surgically resected sarcomatoid carcinomas. The
clinicopathologic features of cases with PD-L1-positive and negative tumors
were compared. Kaplan-Meier survival analysis was performed. Multiple Cox
proportional hazard regression analysis was performed to determine independent
predictors of overall survival. Results: PD-L1 antibody positivity was found in
72.3% of surgically resected sarcomatoid lung carcinomas. Regarding
histopathologic subtypes, PD-L1 expression was positive in 80.4% of pleomorphic
carcinomas, 62.5% of spindle- and/or giant-cell carcinomas, and 16.7% of
carcinosarcomas. Pleural invasion was observed in 68.1% of PD-L1-positive cases
and 27.8% of PD-L1-negative cases (p = 0.008). No difference in survival was
found between PD-L1-positive and negative tumors. The only factor significantly
associated with poor survival was the pathological stage of the tumor.
Conclusions: This study reveals a high rate of PD-L1 positivity in a large
number of sarcomatoid lung carcinoma cases with pleomorphic carcinoma, spindle-
and/or giant-cell carcinoma, and carcinosarcoma subtypes. The only significantly
different clinicopathologic feature in PD-L1-positive cases is pleural
invasion. PD-L1 positivity is not a significant predictor of survival in
sarcomatoid lung carcinomas.
| [
{
"created": "Sun, 8 Nov 2020 18:59:24 GMT",
"version": "v1"
}
] | 2020-11-12 | [
[
"Agackiran",
"Yetkin",
""
],
[
"Aksu",
"Funda",
""
],
[
"Akyurek",
"Nalan",
""
],
[
"Ercan",
"Caner",
""
],
[
"Demiroz",
"Mustafa",
""
],
[
"Aksu",
"Kurtulus",
""
]
] | Aim: To determine the programmed death ligand-1 (PD-L1) expression rates in sarcomatoid lung carcinomas and to compare clinicopathologic features and survival rates of PD-L1-positive and negative patients. Methods: PD-L1 expression was evaluated in 65 surgically resected sarcomatoid carcinomas. The clinicopathologic features of cases with PD-L1-positive and negative tumors were compared. Kaplan-Meier survival analysis was performed. Multiple Cox proportional hazard regression analysis was performed to determine independent predictors of overall survival. Results: PD-L1 antibody positivity was found in 72.3% of surgically resected sarcomatoid lung carcinomas. Regarding histopathologic subtypes, PD-L1 expression was positive in 80.4% of pleomorphic carcinomas, 62.5% of spindle- and/or giant-cell carcinomas, and 16.7% of carcinosarcomas. Pleural invasion was observed in 68.1% of PD-L1-positive cases and 27.8% of PD-L1-negative cases (p = 0.008). No difference in survival was found between PD-L1-positive and negative tumors. The only factor significantly associated with poor survival was the pathological stage of the tumor. Conclusions: This study reveals a high rate of PD-L1 positivity in a large number of sarcomatoid lung carcinoma cases with pleomorphic carcinoma, spindle- and/or giant-cell carcinoma, and carcinosarcoma subtypes. The only significantly different clinicopathologic feature in PD-L1-positive cases is pleural invasion. PD-L1 positivity is not a significant predictor of survival in sarcomatoid lung carcinomas. |
1511.01958 | Thierry Mora | Thierry Mora, Aleksandra M. Walczak, Lorenzo Del Castello, Francesco
Ginelli, Stefania Melillo, Leonardo Parisi, Massimiliano Viale, Andrea
Cavagna, and Irene Giardina | Local equilibrium in bird flocks | null | Nature Physics 12, 1153-1157 (2016) | 10.1038/nphys3846 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The correlated motion of flocks is an instance of global order emerging from
local interactions. An essential difference with analogous ferromagnetic
systems is that flocks are active: animals move relative to each other,
dynamically rearranging their interaction network. The effect of this
off-equilibrium element is well studied theoretically, but its impact on actual
biological groups deserves more experimental attention. Here, we introduce a
novel dynamical inference technique, based on the principle of maximum entropy,
which accommodates network rearrangements and overcomes the problem of slow
experimental sampling rates. We use this method to infer the strength and range
of alignment forces from data of starling flocks. We find that local bird
alignment happens on a much faster timescale than neighbour rearrangement.
Accordingly, equilibrium inference, which assumes a fixed interaction network,
gives results consistent with dynamical inference. We conclude that bird
orientations are in a state of local quasi-equilibrium over the interaction
length scale, providing firm ground for the applicability of statistical
physics in certain active systems.
| [
{
"created": "Fri, 6 Nov 2015 00:00:52 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Jun 2016 22:15:11 GMT",
"version": "v2"
}
] | 2016-12-26 | [
[
"Mora",
"Thierry",
""
],
[
"Walczak",
"Aleksandra M.",
""
],
[
"Del Castello",
"Lorenzo",
""
],
[
"Ginelli",
"Francesco",
""
],
[
"Melillo",
"Stefania",
""
],
[
"Parisi",
"Leonardo",
""
],
[
"Viale",
"Massimili... | The correlated motion of flocks is an instance of global order emerging from local interactions. An essential difference with analogous ferromagnetic systems is that flocks are active: animals move relative to each other, dynamically rearranging their interaction network. The effect of this off-equilibrium element is well studied theoretically, but its impact on actual biological groups deserves more experimental attention. Here, we introduce a novel dynamical inference technique, based on the principle of maximum entropy, which accommodates network rearrangements and overcomes the problem of slow experimental sampling rates. We use this method to infer the strength and range of alignment forces from data of starling flocks. We find that local bird alignment happens on a much faster timescale than neighbour rearrangement. Accordingly, equilibrium inference, which assumes a fixed interaction network, gives results consistent with dynamical inference. We conclude that bird orientations are in a state of local quasi-equilibrium over the interaction length scale, providing firm ground for the applicability of statistical physics in certain active systems. |
1605.05371 | Ronan M.T. Fleming Dr | Hulda S. Haraldsd\'ottir and Ronan M. T. Fleming | Identification of conserved moieties in metabolic networks by graph
theoretical analysis of atom transition networks | 28 pages, 11 figures | null | 10.1371/journal.pcbi.1004999 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Conserved moieties are groups of atoms that remain intact in all reactions of
a metabolic network. Identification of conserved moieties gives insight into
the structure and function of metabolic networks and facilitates metabolic
modelling. All moiety conservation relations can be represented as nonnegative
integer vectors in the left null space of the stoichiometric matrix
corresponding to a biochemical network. Algorithms exist to compute such
vectors based only on reaction stoichiometry but their computational complexity
has limited their application to relatively small metabolic networks. Moreover,
the vectors returned by existing algorithms do not, in general, represent
conservation of a specific moiety with a defined atomic structure. Here, we
show that identification of conserved moieties requires data on reaction atom
mappings in addition to stoichiometry. We present a novel method to identify
conserved moieties in metabolic networks by graph theoretical analysis of their
underlying atom transition networks. Our method returns the exact group of
atoms belonging to each conserved moiety as well as the corresponding vector in
the left null space of the stoichiometric matrix. It can be implemented as a
pipeline of polynomial time algorithms. Our implementation completes in under
five minutes on a metabolic network with more than 4,000 mass balanced
reactions. The scalability of the method enables extension of existing
applications for moiety conservation relations to genome-scale metabolic
networks. We also give examples of new applications made possible by
elucidating the atomic structure of conserved moieties.
| [
{
"created": "Tue, 17 May 2016 21:30:54 GMT",
"version": "v1"
}
] | 2017-02-08 | [
[
"Haraldsdóttir",
"Hulda S.",
""
],
[
"Fleming",
"Ronan M. T.",
""
]
] | Conserved moieties are groups of atoms that remain intact in all reactions of a metabolic network. Identification of conserved moieties gives insight into the structure and function of metabolic networks and facilitates metabolic modelling. All moiety conservation relations can be represented as nonnegative integer vectors in the left null space of the stoichiometric matrix corresponding to a biochemical network. Algorithms exist to compute such vectors based only on reaction stoichiometry but their computational complexity has limited their application to relatively small metabolic networks. Moreover, the vectors returned by existing algorithms do not, in general, represent conservation of a specific moiety with a defined atomic structure. Here, we show that identification of conserved moieties requires data on reaction atom mappings in addition to stoichiometry. We present a novel method to identify conserved moieties in metabolic networks by graph theoretical analysis of their underlying atom transition networks. Our method returns the exact group of atoms belonging to each conserved moiety as well as the corresponding vector in the left null space of the stoichiometric matrix. It can be implemented as a pipeline of polynomial time algorithms. Our implementation completes in under five minutes on a metabolic network with more than 4,000 mass balanced reactions. The scalability of the method enables extension of existing applications for moiety conservation relations to genome-scale metabolic networks. We also give examples of new applications made possible by elucidating the atomic structure of conserved moieties. |
2302.13753 | Leroy Cronin Prof | Michael Jirasek, Abhishek Sharma, Jessica R. Bame, S. Hessam M. Mehr,
Nicola Bell, Stuart M. Marshall, Cole Mathis, Alasdair Macleod, Geoffrey J.
T. Cooper, Marcel Swart, Rosa Mollfulleda, Leroy Cronin | Determining Molecular Complexity using Assembly Theory and Spectroscopy | 27 pages, 7 figures plus supplementary data | null | null | null | q-bio.QM physics.bio-ph physics.chem-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Determining the complexity of molecules has important applications from
molecular design to understanding the history of the process that led to the
formation of the molecule. Currently, it is not possible to experimentally
determine, without full structure elucidation, how complex a molecule is.
Assembly Theory has been developed to quantify the complexity of a molecule by
finding the shortest path to construct the molecule from building blocks,
revealing its molecular assembly index (MA). In this study, we present an
approach to rapidly and exhaustively calculate the MA of molecules from the
spectroscopic measurements. We demonstrate that molecular complexity (MA) can
be experimentally estimated using three independent techniques: nuclear
magnetic resonance (NMR), tandem mass spectrometry (MS/MS), and infrared
spectroscopy (IR), and these give consistent results with good correlations
with the theoretically determined values from assembly theory. By identifying
and analysing the number of absorbances in IR spectra, carbon resonances in
NMR, or molecular fragments in tandem MS, the molecular assembly of an unknown
molecule can be reliably estimated from experimental data. This represents the
first experimentally quantifiable approach to defining molecular assembly, a
reliable metric for complexity, as an intrinsic property of molecules and can
also be performed on complex mixtures. This paves the way to use spectroscopic
and spectrometric techniques to unambiguously detect alien life in the solar
system, and beyond on exoplanets.
| [
{
"created": "Fri, 24 Feb 2023 12:05:57 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Nov 2023 13:42:37 GMT",
"version": "v2"
}
] | 2023-11-08 | [
[
"Jirasek",
"Michael",
""
],
[
"Sharma",
"Abhishek",
""
],
[
"Bame",
"Jessica R.",
""
],
[
"Mehr",
"S. Hessam M.",
""
],
[
"Bell",
"Nicola",
""
],
[
"Marshall",
"Stuart M.",
""
],
[
"Mathis",
"Cole",
""
],... | Determining the complexity of molecules has important applications from molecular design to understanding the history of the process that led to the formation of the molecule. Currently, it is not possible to experimentally determine, without full structure elucidation, how complex a molecule is. Assembly Theory has been developed to quantify the complexity of a molecule by finding the shortest path to construct the molecule from building blocks, revealing its molecular assembly index (MA). In this study, we present an approach to rapidly and exhaustively calculate the MA of molecules from the spectroscopic measurements. We demonstrate that molecular complexity (MA) can be experimentally estimated using three independent techniques: nuclear magnetic resonance (NMR), tandem mass spectrometry (MS/MS), and infrared spectroscopy (IR), and these give consistent results with good correlations with the theoretically determined values from assembly theory. By identifying and analysing the number of absorbances in IR spectra, carbon resonances in NMR, or molecular fragments in tandem MS, the molecular assembly of an unknown molecule can be reliably estimated from experimental data. This represents the first experimentally quantifiable approach to defining molecular assembly, a reliable metric for complexity, as an intrinsic property of molecules and can also be performed on complex mixtures. This paves the way to use spectroscopic and spectrometric techniques to unambiguously detect alien life in the solar system, and beyond on exoplanets. |
1106.3791 | Shanika Kuruppu Ms | Shanika Kuruppu, Simon Puglisi and Justin Zobel | Reference Sequence Construction for Relative Compression of Genomes | 12 pages, 2 figures, to appear in the Proceedings of SPIRE2011 as a
short paper | null | null | null | q-bio.QM cs.CE cs.IT math.IT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Relative compression, where a set of similar strings are compressed with
respect to a reference string, is a very effective method of compressing DNA
datasets containing multiple similar sequences. Relative compression is fast to
perform and also supports rapid random access to the underlying data. The main
difficulty of relative compression is in selecting an appropriate reference
sequence. In this paper, we explore using the dictionary of repeats generated
by Comrad, Re-pair and Dna-x algorithms as reference sequences for relative
compression. We show this technique allows better compression and supports
random access just as well. The technique also allows more general repetitive
datasets to be compressed using relative compression.
| [
{
"created": "Mon, 20 Jun 2011 01:10:01 GMT",
"version": "v1"
}
] | 2011-06-21 | [
[
"Kuruppu",
"Shanika",
""
],
[
"Puglisi",
"Simon",
""
],
[
"Zobel",
"Justin",
""
]
] | Relative compression, where a set of similar strings are compressed with respect to a reference string, is a very effective method of compressing DNA datasets containing multiple similar sequences. Relative compression is fast to perform and also supports rapid random access to the underlying data. The main difficulty of relative compression is in selecting an appropriate reference sequence. In this paper, we explore using the dictionary of repeats generated by Comrad, Re-pair and Dna-x algorithms as reference sequences for relative compression. We show this technique allows better compression and supports random access just as well. The technique also allows more general repetitive datasets to be compressed using relative compression. |
2401.03095 | Andrew Baumgartner | Andrew Baumgartner, Sui Huang, Jennifer Hadlock, and Cory Funk | Dimensional reduction of gradient-like stochastic systems with
multiplicative noise via Fokker-Planck diffusion maps | null | null | null | null | q-bio.QM physics.bio-ph q-bio.CB q-bio.GN | http://creativecommons.org/licenses/by-sa/4.0/ | Dimensional reduction techniques have long been used to visualize the
structure and geometry of high dimensional data. However, most widely used
techniques are difficult to interpret due to nonlinearities and opaque
optimization processes. Here we present a specific graph based construction for
dimensionally reducing continuous stochastic systems with multiplicative noise
moving under the influence of a potential. To achieve this, we present a
specific graph construction which generates the Fokker-Planck equation of the
stochastic system in the continuum limit. The eigenvectors and eigenvalues of
the normalized graph Laplacian are used as a basis for the dimensional
reduction and yield a low dimensional representation of the dynamics which can
be used for downstream analysis such as spectral clustering. We focus on the
use case of single cell RNA sequencing data and show how current diffusion map
implementations popular in the single cell literature fit into this framework.
| [
{
"created": "Fri, 5 Jan 2024 23:59:07 GMT",
"version": "v1"
}
] | 2024-01-09 | [
[
"Baumgartner",
"Andrew",
""
],
[
"Huang",
"Sui",
""
],
[
"Hadlock",
"Jennifer",
""
],
[
"Funk",
"Cory",
""
]
] | Dimensional reduction techniques have long been used to visualize the structure and geometry of high dimensional data. However, most widely used techniques are difficult to interpret due to nonlinearities and opaque optimization processes. Here we present a specific graph based construction for dimensionally reducing continuous stochastic systems with multiplicative noise moving under the influence of a potential. To achieve this, we present a specific graph construction which generates the Fokker-Planck equation of the stochastic system in the continuum limit. The eigenvectors and eigenvalues of the normalized graph Laplacian are used as a basis for the dimensional reduction and yield a low dimensional representation of the dynamics which can be used for downstream analysis such as spectral clustering. We focus on the use case of single cell RNA sequencing data and show how current diffusion map implementations popular in the single cell literature fit into this framework. |
2207.00330 | Alexander Gorban | A.N. Gorban, T.A. Tyukina, L.I. Pokidysheva, E.V. Smirnova | It is useful to analyze correlation graphs | Mini-review, 9 pages, 62 bibliography | Physics of Life Reviews, Volume 40, March 2022, Pages 15-23 | 10.1016/j.plrev.2021.10.002 | null | q-bio.TO | http://creativecommons.org/licenses/by-sa/4.0/ | In 1987, we analyzed the changes in correlation graphs between various
features of the organism during stress and adaptation. After 33 years of
research of many authors, discoveries and rediscoveries, we can say with
complete confidence: It is useful to analyze correlation graphs. In addition,
we should add that the concept of adaptability ('adaptation energy') introduced
by Selye is useful, especially if it is supplemented by 'adaptation entropy'
and free energy, as well as an analysis of limiting factors. Our review of
these topics, "Dynamic and Thermodynamic Adaptation Models" (Phys Life Rev,
2021, arXiv:2103.01959 [q-bio.OT]), attracted many comments from leading
experts, with new ideas and new problems, from the dynamics of aging and the
training of athletes to single-cell omics. Methodological backgrounds, like
free energy analysis, were also discussed in depth. In this article, we provide
an analytical overview of twelve commenting papers and some related
publications.
| [
{
"created": "Fri, 1 Jul 2022 10:48:18 GMT",
"version": "v1"
}
] | 2022-07-04 | [
[
"Gorban",
"A. N.",
""
],
[
"Tyukina",
"T. A.",
""
],
[
"Pokidysheva",
"L. I.",
""
],
[
"Smirnova",
"E. V.",
""
]
] | In 1987, we analyzed the changes in correlation graphs between various features of the organism during stress and adaptation. After 33 years of research of many authors, discoveries and rediscoveries, we can say with complete confidence: It is useful to analyze correlation graphs. In addition, we should add that the concept of adaptability ('adaptation energy') introduced by Selye is useful, especially if it is supplemented by 'adaptation entropy' and free energy, as well as an analysis of limiting factors. Our review of these topics, "Dynamic and Thermodynamic Adaptation Models" (Phys Life Rev, 2021, arXiv:2103.01959 [q-bio.OT]), attracted many comments from leading experts, with new ideas and new problems, from the dynamics of aging and the training of athletes to single-cell omics. Methodological backgrounds, like free energy analysis, were also discussed in depth. In this article, we provide an analytical overview of twelve commenting papers and some related publications. |