id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2105.03323 | Anna Weber | Anna Weber, Jannis Born and María Rodríguez Martínez | TITAN: T Cell Receptor Specificity Prediction with Bimodal Attention
Networks | 9 pages, 5 figures, to be published in ISMB 2021 conference
proceedings | Bioinformatics 37 (2021): i237-i244 | 10.1093/bioinformatics/btab294 | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Motivation: The activity of the adaptive immune system is governed by T-cells
and their specific T-cell receptors (TCR), which selectively recognize foreign
antigens. Recent advances in experimental techniques have enabled sequencing of
TCRs and their antigenic targets (epitopes), allowing to research the missing
link between TCR sequence and epitope binding specificity. Scarcity of data and
a large sequence space make this task challenging, and to date only models
limited to a small set of epitopes have achieved good performance. Here, we
establish a k-nearest-neighbor (K-NN) classifier as a strong baseline and then
propose TITAN (Tcr epITope bimodal Attention Networks), a bimodal neural
network that explicitly encodes both TCR sequences and epitopes to enable the
independent study of generalization capabilities to unseen TCRs and/or
epitopes. Results: By encoding epitopes at the atomic level with SMILES
sequences, we leverage transfer learning and data augmentation to enrich the
input data space and boost performance. TITAN achieves high performance in the
prediction of specificity of unseen TCRs (ROC-AUC 0.87 in 10-fold CV) and
surpasses the results of the current state-of-the-art (ImRex) by a large
margin. Notably, our Levenshtein-distance-based K-NN classifier also exhibits
competitive performance on unseen TCRs. While the generalization to unseen
epitopes remains challenging, we report two major breakthroughs. First, by
dissecting the attention heatmaps, we demonstrate that the sparsity of
available epitope data favors an implicit treatment of epitopes as classes.
This may be a general problem that limits unseen epitope performance for
sufficiently complex models. Second, we show that TITAN nevertheless exhibits
significantly improved performance on unseen epitopes and is capable of
focusing attention on chemically meaningful molecular structures.
| [
{
"created": "Wed, 21 Apr 2021 09:25:14 GMT",
"version": "v1"
}
] | 2023-08-28 | [
[
"Weber",
"Anna",
""
],
[
"Born",
"Jannis",
""
],
[
"Martínez",
"María Rodríguez",
""
]
] | Motivation: The activity of the adaptive immune system is governed by T-cells and their specific T-cell receptors (TCR), which selectively recognize foreign antigens. Recent advances in experimental techniques have enabled sequencing of TCRs and their antigenic targets (epitopes), allowing to research the missing link between TCR sequence and epitope binding specificity. Scarcity of data and a large sequence space make this task challenging, and to date only models limited to a small set of epitopes have achieved good performance. Here, we establish a k-nearest-neighbor (K-NN) classifier as a strong baseline and then propose TITAN (Tcr epITope bimodal Attention Networks), a bimodal neural network that explicitly encodes both TCR sequences and epitopes to enable the independent study of generalization capabilities to unseen TCRs and/or epitopes. Results: By encoding epitopes at the atomic level with SMILES sequences, we leverage transfer learning and data augmentation to enrich the input data space and boost performance. TITAN achieves high performance in the prediction of specificity of unseen TCRs (ROC-AUC 0.87 in 10-fold CV) and surpasses the results of the current state-of-the-art (ImRex) by a large margin. Notably, our Levenshtein-distance-based K-NN classifier also exhibits competitive performance on unseen TCRs. While the generalization to unseen epitopes remains challenging, we report two major breakthroughs. First, by dissecting the attention heatmaps, we demonstrate that the sparsity of available epitope data favors an implicit treatment of epitopes as classes. This may be a general problem that limits unseen epitope performance for sufficiently complex models. Second, we show that TITAN nevertheless exhibits significantly improved performance on unseen epitopes and is capable of focusing attention on chemically meaningful molecular structures. |
1506.00678 | Dan Gusfield | Dan Gusfield | Persistent Phylogeny: A Galled-Tree and Integer Linear Programming
Approach | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Persistent-Phylogeny Model is an extension of the widely studied
Perfect-Phylogeny Model, encompassing a broader range of evolutionary
phenomena. Biological and algorithmic questions concerning persistent phylogeny
have been intensely investigated in recent years. In this paper, we explore two
alternative approaches to the persistent-phylogeny problem that grow out of our
previous work on perfect phylogeny, and on galled trees. We develop an integer
programming solution to the Persistent-Phylogeny Problem; empirically explore
its efficiency; and empirically explore the utility of using fast algorithms
that recognize galled trees, to recognize persistent phylogeny. The empirical
results identify parameter ranges where persistent phylogeny are galled trees
with high frequency, and show that the integer programming approach can
efficiently identify persistent phylogeny of much larger size than has been
previously reported.
| [
{
"created": "Mon, 1 Jun 2015 21:27:40 GMT",
"version": "v1"
}
] | 2015-06-03 | [
[
"Gusfield",
"Dan",
""
]
] | The Persistent-Phylogeny Model is an extension of the widely studied Perfect-Phylogeny Model, encompassing a broader range of evolutionary phenomena. Biological and algorithmic questions concerning persistent phylogeny have been intensely investigated in recent years. In this paper, we explore two alternative approaches to the persistent-phylogeny problem that grow out of our previous work on perfect phylogeny, and on galled trees. We develop an integer programming solution to the Persistent-Phylogeny Problem; empirically explore its efficiency; and empirically explore the utility of using fast algorithms that recognize galled trees, to recognize persistent phylogeny. The empirical results identify parameter ranges where persistent phylogeny are galled trees with high frequency, and show that the integer programming approach can efficiently identify persistent phylogeny of much larger size than has been previously reported. |
1810.05543 | Sandip Bankar Prof. | Manisha A. Khedkar, Pranhita R. Nimbalkar, Sanjay P. Kamble, Shashank
G. Gaikwad, Prakash V. Chavan, Sandip B. Bankar | Process intensification strategies for enhanced holocellulose
solubilization: Beneficiation of pineapple peel waste for cleaner butanol
production | null | Journal of Cleaner Production 199 (2018) 937-947 | 10.1016/j.jclepro.2018.07.205 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biorefinery sector has become a serious dispute for cleaner and sustainable
development in recent years. In the present study, pretreatment of pineapple
peel waste was carried out in high pressure reactor using various
pretreatment-enhancers. The type and concentration effect of each enhancer on
hemicellulose solubilization was systematically investigated. The binary acid
(phenol + sulfuric acid) at 180 °C was found to be superior amongst other
studied enhancers, giving 81.17% (w/v) hemicellulose solubilization in
liquid-fraction under optimized conditions. Solid residue thus obtained was
subjected to enzymatic hydrolysis that resulted into 24.50% (w/v) cellulose
breakdown. Treated solid residue was further characterized by scanning electron
microscopy and Fourier transform infrared spectroscopy to elucidate structural
changes. The pooled fractions (acid treated and enzymatically hydrolyzed) were
fermented using Clostridium acetobutylicum NRRL B 527 which resulted in butanol
production of 5.18 g/L with yield of 0.13 g butanol/g sugar consumed.
Therefore, pretreatment of pineapple peel waste evaluated in this study can be
considered as milestone in utilization of low cost feedstock, for bioenergy
production.
| [
{
"created": "Fri, 12 Oct 2018 14:20:41 GMT",
"version": "v1"
}
] | 2018-10-15 | [
[
"Khedkar",
"Manisha A.",
""
],
[
"Nimbalkar",
"Pranhita R.",
""
],
[
"Kamble",
"Sanjay P.",
""
],
[
"Gaikwad",
"Shashank G.",
""
],
[
"Chavan",
"Prakash V.",
""
],
[
"Bankar",
"Sandip B.",
""
]
] | Biorefinery sector has become a serious dispute for cleaner and sustainable development in recent years. In the present study, pretreatment of pineapple peel waste was carried out in high pressure reactor using various pretreatment-enhancers. The type and concentration effect of each enhancer on hemicellulose solubilization was systematically investigated. The binary acid (phenol + sulfuric acid) at 180 °C was found to be superior amongst other studied enhancers, giving 81.17% (w/v) hemicellulose solubilization in liquid-fraction under optimized conditions. Solid residue thus obtained was subjected to enzymatic hydrolysis that resulted into 24.50% (w/v) cellulose breakdown. Treated solid residue was further characterized by scanning electron microscopy and Fourier transform infrared spectroscopy to elucidate structural changes. The pooled fractions (acid treated and enzymatically hydrolyzed) were fermented using Clostridium acetobutylicum NRRL B 527 which resulted in butanol production of 5.18 g/L with yield of 0.13 g butanol/g sugar consumed. Therefore, pretreatment of pineapple peel waste evaluated in this study can be considered as milestone in utilization of low cost feedstock, for bioenergy production. |
1111.2759 | Matthew Macauley | Lori Layne, Elena Dimitrova, Matthew Macauley | Nested canalyzing depth and network stability | 13 pages, 2 figures | Bull. Math. Biol. 74(2) (2012), 422-433 | null | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce the nested canalyzing depth of a function, which measures the
extent to which it retains a nested canalyzing structure. We characterize the
structure of functions with a given depth and compute the expected activities
and sensitivities of the variables. This analysis quantifies how canalyzation
leads to higher stability in Boolean networks. It generalizes the notion of
nested canalyzing functions (NCFs), which are precisely the functions with
maximum depth. NCFs have been proposed as gene regulatory network models, but
their structure is frequently too restrictive and they are extremely sparse. We
find that functions become decreasingly sensitive to input perturbations as the
canalyzing depth increases, but exhibit rapidly diminishing returns in
stability. Additionally, we show that as depth increases, the dynamics of
networks using these functions quickly approach the critical regime, suggesting
that real networks exhibit some degree of canalyzing depth, and that NCFs are
not significantly better than functions of sufficient depth for many
applications of the modeling and reverse engineering of biological networks.
| [
{
"created": "Fri, 11 Nov 2011 14:48:27 GMT",
"version": "v1"
}
] | 2012-03-01 | [
[
"Layne",
"Lori",
""
],
[
"Dimitrova",
"Elena",
""
],
[
"Macauley",
"Matthew",
""
]
] | We introduce the nested canalyzing depth of a function, which measures the extent to which it retains a nested canalyzing structure. We characterize the structure of functions with a given depth and compute the expected activities and sensitivities of the variables. This analysis quantifies how canalyzation leads to higher stability in Boolean networks. It generalizes the notion of nested canalyzing functions (NCFs), which are precisely the functions with maximum depth. NCFs have been proposed as gene regulatory network models, but their structure is frequently too restrictive and they are extremely sparse. We find that functions become decreasingly sensitive to input perturbations as the canalyzing depth increases, but exhibit rapidly diminishing returns in stability. Additionally, we show that as depth increases, the dynamics of networks using these functions quickly approach the critical regime, suggesting that real networks exhibit some degree of canalyzing depth, and that NCFs are not significantly better than functions of sufficient depth for many applications of the modeling and reverse engineering of biological networks. |
1403.5878 | Sabine Fischer | Sabine C. Fischer, Guy B. Blanchard, Julia Duque, Richard J. Adams,
Alfonso Martinez Arias, Simon D. Guest and Nicole Gorfinkiel | Contractile and mechanical properties in epithelia with perturbed
actomyosin dynamics | null | null | null | null | q-bio.CB q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mechanics has an important role during morphogenesis, both in the generation
of forces driving cell shape changes and in determining the effective material
properties of cells and tissues. Drosophila dorsal closure (DC) has emerged as
a model system for studying the interplay between tissue mechanics and cellular
activity. Thereby, the amnioserosa (AS) generates one of the major forces that
drive DC through the apical contraction of its constituent cells. We combined
quantitation of live data, genetic and mechanical perturbation and cell
biology, to investigate how mechanical properties and contraction rate emerge
from cytoskeletal activity. We found that a decrease in Myosin phosphorylation
induces a fluidization of AS cells which become more compliant. Conversely, an
increase in Myosin phosphorylation and an increase in actin linear
polymerization induce a solidification of cells. Contrary to expectation, these
two perturbations have an opposite effect on the strain rate of cells during
DC. While an increase in actin polymerization increases the contraction rate of
AS cells, an increase in Myosin phosphorylation gives rise to cells that
contract very slowly. The quantification of how the perturbation induced by
laser ablation decays throughout the tissue revealed that the tissue in these
two mutant backgrounds reacts very differently. We suggest that the differences
in the strain rate of cells in situations where Myosin activity or actin
polymerization is increased arise from changes in how the contractile forces
are transmitted and coordinated across the tissue through ECadherin mediated
adhesion. Our results show that there is an optimal level of Myosin activity to
generate efficient contraction and suggest that the architecture of the actin
cytoskeleton and the dynamics of adhesion complexes are important parameters
for the emergence of coordinated activity throughout the tissue.
| [
{
"created": "Mon, 24 Mar 2014 08:34:13 GMT",
"version": "v1"
}
] | 2014-03-25 | [
[
"Fischer",
"Sabine C.",
""
],
[
"Blanchard",
"Guy B.",
""
],
[
"Duque",
"Julia",
""
],
[
"Adams",
"Richard J.",
""
],
[
"Arias",
"Alfonso Martinez",
""
],
[
"Guest",
"Simon D.",
""
],
[
"Gorfinkiel",
"Nicole",
""
]
] | Mechanics has an important role during morphogenesis, both in the generation of forces driving cell shape changes and in determining the effective material properties of cells and tissues. Drosophila dorsal closure (DC) has emerged as a model system for studying the interplay between tissue mechanics and cellular activity. Thereby, the amnioserosa (AS) generates one of the major forces that drive DC through the apical contraction of its constituent cells. We combined quantitation of live data, genetic and mechanical perturbation and cell biology, to investigate how mechanical properties and contraction rate emerge from cytoskeletal activity. We found that a decrease in Myosin phosphorylation induces a fluidization of AS cells which become more compliant. Conversely, an increase in Myosin phosphorylation and an increase in actin linear polymerization induce a solidification of cells. Contrary to expectation, these two perturbations have an opposite effect on the strain rate of cells during DC. While an increase in actin polymerization increases the contraction rate of AS cells, an increase in Myosin phosphorylation gives rise to cells that contract very slowly. The quantification of how the perturbation induced by laser ablation decays throughout the tissue revealed that the tissue in these two mutant backgrounds reacts very differently. We suggest that the differences in the strain rate of cells in situations where Myosin activity or actin polymerization is increased arise from changes in how the contractile forces are transmitted and coordinated across the tissue through ECadherin mediated adhesion. Our results show that there is an optimal level of Myosin activity to generate efficient contraction and suggest that the architecture of the actin cytoskeleton and the dynamics of adhesion complexes are important parameters for the emergence of coordinated activity throughout the tissue. |
1510.05850 | Joris Paijmans | Joris Paijmans, Mark Bosman, Pieter Rein ten Wolde and David K.
Lubensky | Discrete gene replication events drive coupling between the cell cycle
and circadian clocks | 21 pages, 5 figures | null | 10.1073/pnas.1507291113 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many organisms possess both a cell cycle to control DNA replication and a
circadian clock to anticipate changes between day and night. In some cases,
these two rhythmic systems are known to be coupled by specific,
cross-regulatory interactions. Here, we use mathematical modeling to show that,
additionally, the cell cycle generically influences circadian clocks in a
non-specific fashion: The regular, discrete jumps in gene-copy number arising
from DNA replication during the cell cycle cause a periodic driving of the
circadian clock, which can dramatically alter its behavior and impair its
function. A clock built on negative transcriptional feedback either phase locks
to the cell cycle, so that the clock period tracks the cell division time, or
exhibits erratic behavior. We argue that the cyanobacterium Synechococcus
elongatus has evolved two features that protect its clock from such
disturbances, both of which are needed to fully insulate it from the cell cycle
and give it its observed robustness: a phosphorylation-based protein
modification oscillator, together with its accompanying push-pull read-out
circuit that responds primarily to the ratios of the different phosphoforms,
makes the clock less susceptible to perturbations in protein synthesis; and the
presence of multiple, asynchronously replicating copies of the same chromosome
diminishes the effect of replicating any single copy of a gene.
| [
{
"created": "Tue, 20 Oct 2015 12:12:31 GMT",
"version": "v1"
}
] | 2016-04-12 | [
[
"Paijmans",
"Joris",
""
],
[
"Bosman",
"Mark",
""
],
[
"Wolde",
"Pieter Rein ten",
""
],
[
"Lubensky",
"David K.",
""
]
] | Many organisms possess both a cell cycle to control DNA replication and a circadian clock to anticipate changes between day and night. In some cases, these two rhythmic systems are known to be coupled by specific, cross-regulatory interactions. Here, we use mathematical modeling to show that, additionally, the cell cycle generically influences circadian clocks in a non-specific fashion: The regular, discrete jumps in gene-copy number arising from DNA replication during the cell cycle cause a periodic driving of the circadian clock, which can dramatically alter its behavior and impair its function. A clock built on negative transcriptional feedback either phase locks to the cell cycle, so that the clock period tracks the cell division time, or exhibits erratic behavior. We argue that the cyanobacterium Synechococcus elongatus has evolved two features that protect its clock from such disturbances, both of which are needed to fully insulate it from the cell cycle and give it its observed robustness: a phosphorylation-based protein modification oscillator, together with its accompanying push-pull read-out circuit that responds primarily to the ratios of the different phosphoforms, makes the clock less susceptible to perturbations in protein synthesis; and the presence of multiple, asynchronously replicating copies of the same chromosome diminishes the effect of replicating any single copy of a gene. |
1912.10810 | Mehrzad Saremi | Mehrzad Saremi and Maryam Amirmazlaghani | Reconstruction of Gene Regulatory Networks using Multiple Datasets | 17 pages, 7 figures, 7 tables | null | null | null | q-bio.GN cs.LG q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Motivation: Laboratory gene regulatory data for a species are sporadic.
Despite the abundance of gene regulatory network algorithms that employ single
data sets, few algorithms can combine the vast but disperse sources of data and
extract the potential information. With a motivation to compensate for this
shortage, we developed an algorithm called GENEREF that can accumulate
information from multiple types of data sets in an iterative manner, with each
iteration boosting the performance of the prediction results.
Results: The algorithm is examined extensively on data extracted from the
quintuple DREAM4 networks and DREAM5's Escherichia coli and Saccharomyces
cerevisiae networks and sub-networks. Many single-dataset and multi-dataset
algorithms were compared to test the performance of the algorithm. Results show
that GENEREF surpasses non-ensemble state-of-the-art multi-perturbation
algorithms on the selected networks and is competitive to present
multiple-dataset algorithms. Specifically, it outperforms dynGENIE3 and is on
par with iRafNet. Also, we argued that a scoring method solely based on the
AUPR criterion would be more trustworthy than the traditional score.
Availability: The Python implementation along with the data sets and results
can be downloaded from github.com/msaremi/GENEREF
| [
{
"created": "Thu, 19 Dec 2019 22:54:59 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Aug 2020 19:32:05 GMT",
"version": "v2"
}
] | 2020-08-17 | [
[
"Saremi",
"Mehrzad",
""
],
[
"Amirmazlaghani",
"Maryam",
""
]
] | Motivation: Laboratory gene regulatory data for a species are sporadic. Despite the abundance of gene regulatory network algorithms that employ single data sets, few algorithms can combine the vast but disperse sources of data and extract the potential information. With a motivation to compensate for this shortage, we developed an algorithm called GENEREF that can accumulate information from multiple types of data sets in an iterative manner, with each iteration boosting the performance of the prediction results. Results: The algorithm is examined extensively on data extracted from the quintuple DREAM4 networks and DREAM5's Escherichia coli and Saccharomyces cerevisiae networks and sub-networks. Many single-dataset and multi-dataset algorithms were compared to test the performance of the algorithm. Results show that GENEREF surpasses non-ensemble state-of-the-art multi-perturbation algorithms on the selected networks and is competitive to present multiple-dataset algorithms. Specifically, it outperforms dynGENIE3 and is on par with iRafNet. Also, we argued that a scoring method solely based on the AUPR criterion would be more trustworthy than the traditional score. Availability: The Python implementation along with the data sets and results can be downloaded from github.com/msaremi/GENEREF |
2208.04841 | Adaora Nwosu | P. Murali Doraiswamy, Terry E. Goldberg, Min Qian, Alexandra R.
Linares, Adaora Nwosu, Izael Nino, Jessica D'Antonio, Julia Phillips, Charlie
Ndouli, Caroline Hellegers, Andrew M. Michael, Jeffrey R. Petrella, Howards
Andrews, Joel Sneed, Davangere P. Devanand | Validity of Web-based, Self-directed, NeuroCognitive Performance Test in
MCI | 17 Pages | J Alzheimers Dis. 2022;86(3):1131-1136. PMID: 35180109 | 10.3233/JAD-220015 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Digital cognitive tests offer several potential advantages over established
paper-pencil tests but have not yet been fully evaluated for the clinical
evaluation of mild cognitive impairment. The NeuroCognitive Performance Test
(NCPT) is a web-based, self-directed, modular battery intended for repeated
assessments of multiple cognitive domains. Our objective was to examine its
relationship with the ADAS-Cog and MMSE as well as with established
paper-pencil tests of cognition and daily functioning in MCI. We used Spearman
correlations, regressions and principal components analysis followed by a
factor analysis (varimax rotated) to examine our objectives. In MCI subjects,
the NCPT composite is significantly correlated with both a composite measure of
established tests (r=0.78, p<0.0001) as well as with the ADAS-Cog (r=0.55,
p<0.0001). Both NCPT and paper-pencil test batteries had a similar factor
structure that included a large g component with a high eigenvalue. The
correlation for the analogous tests (e.g. Trails A and B, learning memory
tests) were significant (p<0.0001). Further, both the NCPT and established
tests significantly (p< 0.01) predicted the University of California San Diego
Performance-Based Skills Assessment and Functional Activities Questionnaire,
measures of daily functioning. The NCPT, a web-based, self-directed,
computerized test, shows high concurrent validity with established tests and
hence offers promise for use as a research or clinical tool in MCI. Despite
limitations such as a relatively small sample, absence of control group and
cross-sectional nature, these findings are consistent with the growing
literature on the promise of self-directed, web-based cognitive assessments for
MCI.
| [
{
"created": "Mon, 11 Jul 2022 17:01:33 GMT",
"version": "v1"
}
] | 2022-08-10 | [
[
"Doraiswamy",
"P. Murali",
""
],
[
"Goldberg",
"Terry E.",
""
],
[
"Qian",
"Min",
""
],
[
"Linares",
"Alexandra R.",
""
],
[
"Nwosu",
"Adaora",
""
],
[
"Nino",
"Izael",
""
],
[
"D'Antonio",
"Jessica",
""
],
[
"Phillips",
"Julia",
""
],
[
"Ndouli",
"Charlie",
""
],
[
"Hellegers",
"Caroline",
""
],
[
"Michael",
"Andrew M.",
""
],
[
"Petrella",
"Jeffrey R.",
""
],
[
"Andrews",
"Howards",
""
],
[
"Sneed",
"Joel",
""
],
[
"Devanand",
"Davangere P.",
""
]
] | Digital cognitive tests offer several potential advantages over established paper-pencil tests but have not yet been fully evaluated for the clinical evaluation of mild cognitive impairment. The NeuroCognitive Performance Test (NCPT) is a web-based, self-directed, modular battery intended for repeated assessments of multiple cognitive domains. Our objective was to examine its relationship with the ADAS-Cog and MMSE as well as with established paper-pencil tests of cognition and daily functioning in MCI. We used Spearman correlations, regressions and principal components analysis followed by a factor analysis (varimax rotated) to examine our objectives. In MCI subjects, the NCPT composite is significantly correlated with both a composite measure of established tests (r=0.78, p<0.0001) as well as with the ADAS-Cog (r=0.55, p<0.0001). Both NCPT and paper-pencil test batteries had a similar factor structure that included a large g component with a high eigenvalue. The correlation for the analogous tests (e.g. Trails A and B, learning memory tests) were significant (p<0.0001). Further, both the NCPT and established tests significantly (p< 0.01) predicted the University of California San Diego Performance-Based Skills Assessment and Functional Activities Questionnaire, measures of daily functioning. The NCPT, a web-based, self-directed, computerized test, shows high concurrent validity with established tests and hence offers promise for use as a research or clinical tool in MCI. Despite limitations such as a relatively small sample, absence of control group and cross-sectional nature, these findings are consistent with the growing literature on the promise of self-directed, web-based cognitive assessments for MCI. |
1710.11227 | Ivan Yu. Tyukin | Ivan Y. Tyukin, Alexander N. Gorban, Carlos Calvo, Julia Makarova,
Valeri A. Makarov | High-dimensional brain. A tool for encoding and rapid learning of
memories by single neurons | null | Bulletin of mathematical biology, 81(11), 4856-4888, 2019 | 10.1007/s11538-018-0415-5 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Codifying memories is one of the fundamental problems of modern Neuroscience.
The functional mechanisms behind this phenomenon remain largely unknown.
Experimental evidence suggests that some of the memory functions are performed
by stratified brain structures such as, e.g., the hippocampus. In this
particular case, single neurons in the CA1 region receive a highly
multidimensional input from the CA3 area, which is a hub for information
processing. We thus assess the implication of the abundance of neuronal
signalling routes converging onto single cells on the information processing.
We show that single neurons can selectively detect and learn arbitrary
information items, given that they operate in high dimensions. The argument is
based on Stochastic Separation Theorems and the concentration of measure
phenomena. We demonstrate that a simple enough functional neuronal model is
capable of explaining: i) the extreme selectivity of single neurons to the
information content, ii) simultaneous separation of several uncorrelated
stimuli or informational items from a large set, and iii) dynamic learning of
new items by associating them with already "known" ones. These results
constitute a basis for organization of complex memories in ensembles of single
neurons. Moreover, they show that no a priori assumptions on the structural
organization of neuronal ensembles are necessary for explaining basic concepts
of static and dynamic memories.
| [
{
"created": "Mon, 30 Oct 2017 20:30:00 GMT",
"version": "v1"
},
{
"created": "Sat, 27 Jan 2018 11:28:38 GMT",
"version": "v2"
}
] | 2022-05-17 | [
[
"Tyukin",
"Ivan Y.",
""
],
[
"Gorban",
"Alexander N.",
""
],
[
"Calvo",
"Carlos",
""
],
[
"Makarova",
"Julia",
""
],
[
"Makarov",
"Valeri A.",
""
]
] | Codifying memories is one of the fundamental problems of modern Neuroscience. The functional mechanisms behind this phenomenon remain largely unknown. Experimental evidence suggests that some of the memory functions are performed by stratified brain structures such as, e.g., the hippocampus. In this particular case, single neurons in the CA1 region receive a highly multidimensional input from the CA3 area, which is a hub for information processing. We thus assess the implication of the abundance of neuronal signalling routes converging onto single cells on the information processing. We show that single neurons can selectively detect and learn arbitrary information items, given that they operate in high dimensions. The argument is based on Stochastic Separation Theorems and the concentration of measure phenomena. We demonstrate that a simple enough functional neuronal model is capable of explaining: i) the extreme selectivity of single neurons to the information content, ii) simultaneous separation of several uncorrelated stimuli or informational items from a large set, and iii) dynamic learning of new items by associating them with already "known" ones. These results constitute a basis for organization of complex memories in ensembles of single neurons. Moreover, they show that no a priori assumptions on the structural organization of neuronal ensembles are necessary for explaining basic concepts of static and dynamic memories. |
2004.04157 | Theerawit Wilaiprasitporn | Nannapas Banluesombatkul, Pichayoot Ouppaphan, Pitshaporn Leelaarporn,
Payongkit Lakhan, Busarakum Chaitusaney, Nattapong Jaimchariyatam, Ekapol
Chuangsuwanich, Wei Chen, Huy Phan, Nat Dilokthanakul and Theerawit
Wilaiprasitporn | MetaSleepLearner: A Pilot Study on Fast Adaptation of Bio-signals-Based
Sleep Stage Classifier to New Individual Subject Using Meta-Learning | IEEE Journal of Biomedical and Health Informatics (Accepted) (source
code is available at https://github.com/IoBT-VISTEC/MetaSleepLearner) | IEEE Journal of Biomedical and Health Informatics (2020) | 10.1109/JBHI.2020.3037693 | null | q-bio.NC cs.LG eess.SP | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Identifying bio-signals-based sleep stages requires time-consuming and
tedious labor of skilled clinicians. Deep learning approaches have been
introduced in order to challenge the automatic sleep stage classification
conundrum. However, replacing the clinicians with an automatic system is
difficult because individual bio-signals differ in many aspects, causing
inconsistent performance of the model on every incoming individual. Thus, we
aim to explore the feasibility of
using a novel approach, capable of assisting the clinicians and lessening the
workload. We propose the transfer learning framework, entitled
MetaSleepLearner, based on Model Agnostic Meta-Learning (MAML), in order to
transfer the acquired sleep staging knowledge from a large dataset to new
individual subjects. The framework was demonstrated to require the labelling of
only a few sleep epochs by the clinicians and allow the remainder to be handled
by the system. Layer-wise Relevance Propagation (LRP) was also applied to
understand the learning course of our approach. In all acquired datasets, in
comparison to the conventional approach, MetaSleepLearner achieved
improvements ranging from 5.4\% to 17.7\%, with a statistically significant
difference between the means of the two approaches. The illustration of the
model interpretation after the adaptation
to each subject also confirmed that the performance was directed towards
reasonable learning. MetaSleepLearner outperformed the conventional approaches
as a result of fine-tuning using the recordings of both healthy subjects
and patients. This is the first work that investigated a non-conventional
pre-training method, MAML, resulting in a possibility for human-machine
collaboration in sleep stage classification and easing the burden of the
clinicians in labelling the sleep stages through only several epochs rather
than an entire recording.
| [
{
"created": "Wed, 8 Apr 2020 16:31:03 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Apr 2020 15:58:42 GMT",
"version": "v2"
},
{
"created": "Sun, 6 Sep 2020 16:14:44 GMT",
"version": "v3"
},
{
"created": "Tue, 10 Nov 2020 17:08:12 GMT",
"version": "v4"
}
] | 2021-02-02 | [
[
"Banluesombatkul",
"Nannapas",
""
],
[
"Ouppaphan",
"Pichayoot",
""
],
[
"Leelaarporn",
"Pitshaporn",
""
],
[
"Lakhan",
"Payongkit",
""
],
[
"Chaitusaney",
"Busarakum",
""
],
[
"Jaimchariyatam",
"Nattapong",
""
],
[
"Chuangsuwanich",
"Ekapol",
""
],
[
"Chen",
"Wei",
""
],
[
"Phan",
"Huy",
""
],
[
"Dilokthanakul",
"Nat",
""
],
[
"Wilaiprasitporn",
"Theerawit",
""
]
] | Identifying bio-signals based-sleep stages requires time-consuming and tedious labor of skilled clinicians. Deep learning approaches have been introduced in order to challenge the automatic sleep stage classification conundrum. However, the difficulties can be posed in replacing the clinicians with the automatic system due to the differences in many aspects found in individual bio-signals, causing the inconsistency in the performance of the model on every incoming individual. Thus, we aim to explore the feasibility of using a novel approach, capable of assisting the clinicians and lessening the workload. We propose the transfer learning framework, entitled MetaSleepLearner, based on Model Agnostic Meta-Learning (MAML), in order to transfer the acquired sleep staging knowledge from a large dataset to new individual subjects. The framework was demonstrated to require the labelling of only a few sleep epochs by the clinicians and allow the remainder to be handled by the system. Layer-wise Relevance Propagation (LRP) was also applied to understand the learning course of our approach. In all acquired datasets, in comparison to the conventional approach, MetaSleepLearner achieved a range of 5.4\% to 17.7\% improvement with statistical difference in the mean of both approaches. The illustration of the model interpretation after the adaptation to each subject also confirmed that the performance was directed towards reasonable learning. MetaSleepLearner outperformed the conventional approaches as a result from the fine-tuning using the recordings of both healthy subjects and patients. This is the first work that investigated a non-conventional pre-training method, MAML, resulting in a possibility for human-machine collaboration in sleep stage classification and easing the burden of the clinicians in labelling the sleep stages through only several epochs rather than an entire recording. |
1911.10964 | Lukas Eigentler | Lukas Eigentler and Jonathan A. Sherratt | An integrodifference model for vegetation patterns in semi-arid
environments with seasonality | null | null | 10.1007/s00285-020-01530-w | null | q-bio.PE nlin.PS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vegetation patterns are a characteristic feature of semi-deserts occurring on
all continents except Antarctica. In some semi-arid regions, the climate is
characterised by seasonality, which yields a synchronisation of seed dispersal
with the dry season or the beginning of the wet season. We reformulate the
Klausmeier model, a reaction-advection-diffusion system that describes the
plant-water dynamics in semi-arid environments, as an integrodifference model
to account for the temporal separation of plant growth processes during the wet
season and seed dispersal processes during the dry season. The model further
accounts for nonlocal processes involved in the dispersal of seeds. Our
analysis focusses on the onset of spatial patterns. The Klausmeier partial
differential equations (PDE) model is linked to the integrodifference model in
an appropriate limit, which yields a control parameter for the temporal
separation of seed dispersal events. We find that the conditions for pattern
onset in the integrodifference model are equivalent to those for the continuous
PDE model and hence independent of the time between seed dispersal events. We
thus conclude that in the context of seed dispersal, a PDE model provides a
sufficiently accurate description, even if the environment is seasonal. This
emphasises the validity of results that have previously been obtained for the
PDE model. Further, we numerically investigate the effects of changes to seed
dispersal behaviour on the onset of patterns. We find that long-range seed
dispersal inhibits the formation of spatial patterns and that the seed
dispersal kernel's decay at infinity is a significant regulator of patterning.
| [
{
"created": "Mon, 25 Nov 2019 15:07:49 GMT",
"version": "v1"
},
{
"created": "Wed, 4 Mar 2020 13:23:50 GMT",
"version": "v2"
}
] | 2020-09-08 | [
[
"Eigentler",
"Lukas",
""
],
[
"Sherratt",
"Jonathan A.",
""
]
] | Vegetation patterns are a characteristic feature of semi-deserts occurring on all continents except Antarctica. In some semi-arid regions, the climate is characterised by seasonality, which yields a synchronisation of seed dispersal with the dry season or the beginning of the wet season. We reformulate the Klausmeier model, a reaction-advection-diffusion system that describes the plant-water dynamics in semi-arid environments, as an integrodifference model to account for the temporal separation of plant growth processes during the wet season and seed dispersal processes during the dry season. The model further accounts for nonlocal processes involved in the dispersal of seeds. Our analysis focusses on the onset of spatial patterns. The Klausmeier partial differential equations (PDE) model is linked to the integrodifference model in an appropriate limit, which yields a control parameter for the temporal separation of seed dispersal events. We find that the conditions for pattern onset in the integrodifference model are equivalent to those for the continuous PDE model and hence independent of the time between seed dispersal events. We thus conclude that in the context of seed dispersal, a PDE model provides a sufficiently accurate description, even if the environment is seasonal. This emphasises the validity of results that have previously been obtained for the PDE model. Further, we numerically investigate the effects of changes to seed dispersal behaviour on the onset of patterns. We find that long-range seed dispersal inhibits the formation of spatial patterns and that the seed dispersal kernel's decay at infinity is a significant regulator of patterning. |
1309.5209 | Oriol G\"uell | Oriol G\"uell and Francesc Sagu\'es and M. \'Angeles Serrano | Essential plasticity and redundancy of metabolism unveiled by synthetic
lethality analysis | 22 pages, 4 figures | null | 10.1371/journal.pcbi.1003637 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We unravel how functional plasticity and redundancy are essential mechanisms
underlying the ability of metabolic networks to survive. We perform an
exhaustive computational screening of synthetic lethal reaction pairs in
Escherichia coli in a minimal medium and we find that synthetic lethal pairs
divide into two different groups depending on whether the synthetic lethal
interaction works as a backup or as a parallel use mechanism, the first
corresponding to essential plasticity and the second to essential redundancy.
In E. coli, the analysis of pathways entanglement through essential redundancy
supports the view that synthetic lethality affects preferentially a single
function or pathway. In contrast, essential plasticity, the dominant class,
tends to be inter-pathway but strongly localized and unveils Cell Envelope
Biosynthesis as an essential backup for Membrane Lipid Metabolism. When
comparing E. coli and Mycoplasma pneumoniae, we find that the metabolic
networks of the two organisms exhibit a large difference in the relative
importance of plasticity and redundancy which is consistent with the conjecture
that plasticity is a sophisticated mechanism that requires a complex
organization. Finally, coessential reaction pairs are explored in different
environmental conditions to uncover the interplay between the two mechanisms.
We find that synthetic lethal interactions and their classification in
plasticity and redundancy are basically insensitive to medium composition, and
are highly conserved even when the environment is enriched with nonessential
compounds or overconstrained to decrease maximum biomass formation.
| [
{
"created": "Fri, 20 Sep 2013 08:46:57 GMT",
"version": "v1"
},
{
"created": "Wed, 25 Sep 2013 15:01:31 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Nov 2013 15:06:33 GMT",
"version": "v3"
},
{
"created": "Tue, 1 Apr 2014 11:21:58 GMT",
"version": "v4"
}
] | 2015-06-17 | [
[
"Güell",
"Oriol",
""
],
[
"Sagués",
"Francesc",
""
],
[
"Serrano",
"M. Ángeles",
""
]
] | We unravel how functional plasticity and redundancy are essential mechanisms underlying the ability to survive of metabolic networks. We perform an exhaustive computational screening of synthetic lethal reaction pairs in Escherichia coli in a minimal medium and we find that synthetic lethal pairs divide in two different groups depending on whether the synthetic lethal interaction works as a backup or as a parallel use mechanism, the first corresponding to essential plasticity and the second to essential redundancy. In E. coli, the analysis of pathways entanglement through essential redundancy supports the view that synthetic lethality affects preferentially a single function or pathway. In contrast, essential plasticity, the dominant class, tends to be inter-pathway but strongly localized and unveils Cell Envelope Biosynthesis as an essential backup for Membrane Lipid Metabolism. When comparing E. coli and Mycoplasma pneumoniae, we find that the metabolic networks of the two organisms exhibit a large difference in the relative importance of plasticity and redundancy which is consistent with the conjecture that plasticity is a sophisticated mechanism that requires a complex organization. Finally, coessential reaction pairs are explored in different environmental conditions to uncover the interplay between the two mechanisms. We find that synthetic lethal interactions and their classification in plasticity and redundancy are basically insensitive to medium composition, and are highly conserved even when the environment is enriched with nonessential compounds or overconstrained to decrease maximum biomass formation. |
1611.09824 | Yuri Shestopaloff | Yuri K. Shestopaloff | Interspecific allometric scaling of unicellular organisms as an
evolutionary process of food chain creation | 12 pages, 2 figures, 1 table | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metabolism of living organisms is a foundation of life. The metabolic rate
(energy production per unit time) increases slower than organisms' mass. When
this phenomenon is considered across different species, it is called
interspecific allometric scaling, whose causes are unknown. We argue that the
cause of interspecific allometric scaling is the total effect of physiological
and adaptation mechanisms inherent to organisms composing a food chain.
Together, the workings of these mechanisms are united by a primary goal of any
living creature - its successful reproduction. This primary necessity of each
organism and of the entire food chain is that common denominator, to which all
organisms adjust their metabolic rates. In this article, we consider
unicellular organisms, while the second paper studies multicellular organisms
and the entire concept in more detail. Here, using the proposed concepts and
experimentally verified growth models of five different unicellular organisms,
we obtain allometric exponent values close to experimental findings:
0.757 for the end of growth and 0.853 for the beginning of growth. These
results comply with experimental observations and prove our theory that the
requirement of successful reproduction within the food chain is an important
factor shaping interspecific allometric scaling.
| [
{
"created": "Tue, 29 Nov 2016 20:18:29 GMT",
"version": "v1"
}
] | 2016-11-30 | [
[
"Shestopaloff",
"Yuri K.",
""
]
] | Metabolism of living organisms is a foundation of life. The metabolic rate (energy production per unit time) increases slower than organisms' mass. When this phenomenon is considered across different species, it is called interspecific allometric scaling, whose causes are unknown. We argue that the cause of interspecific allometric scaling is the total effect of physiological and adaptation mechanisms inherent to organisms composing a food chain. Together, the workings of these mechanisms are united by a primary goal of any living creature - its successful reproduction. This primary necessity of each organism and of the entire food chain is that common denominator, to which all organisms adjust their metabolic rates. In this article, we consider unicellular organisms, while the second paper studies multicellular organisms and the entire concept in more detail. Here, using the proposed concepts and experimentally verified growth models of five different unicellular organisms, we obtain close to experimental findings values of allometric exponents of 0.757 for the end of growth and 0.853 for the beginning of growth. These results comply with experimental observations and prove our theory that the requirement of successful reproduction within the food chain is an important factor shaping interspecific allometric scaling. |
1706.09083 | Benjamin Schubert | Benjamin Schubert, Charlotta Sch\"arfe, Pierre D\"onnes, Thomas Hopf,
Debora Marks, and Oliver Kohlbacher | Population-specific design of de-immunized protein biotherapeutics | 28 pages, 6 figures, 2 tables, journal pre-submission | null | 10.1371/journal.pcbi.1005983 | null | q-bio.QM cs.DM q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Immunogenicity is a major problem during the development of biotherapeutics
since it can lead to rapid clearance of the drug and adverse reactions. The
challenge for biotherapeutic design is therefore to identify mutants of the
protein sequence that minimize immunogenicity in a target population whilst
retaining pharmaceutical activity and protein function. Current approaches are
moderately successful in designing sequences with reduced immunogenicity, but
do not account for the varying frequencies of different human leucocyte antigen
alleles in a specific population; in addition, since many designs are
non-functional, they require costly experimental post-screening. Here we report a
new method for de-immunization design using multi-objective combinatorial
optimization that simultaneously optimizes the likelihood of a functional
protein sequence at the same time as minimizing its immunogenicity tailored to
a target population. We bypass the need for three-dimensional protein structure
or molecular simulations to identify functional designs by automatically
generating sequences using probabilistic models that have been used previously
for mutation effect prediction and structure prediction. As proof-of-principle
we designed sequences of the C2 domain of Factor VIII and tested them
experimentally, resulting in a good correlation with the predicted
immunogenicity of our model.
| [
{
"created": "Wed, 28 Jun 2017 00:18:01 GMT",
"version": "v1"
}
] | 2018-07-04 | [
[
"Schubert",
"Benjamin",
""
],
[
"Schärfe",
"Charlotta",
""
],
[
"Dönnes",
"Pierre",
""
],
[
"Hopf",
"Thomas",
""
],
[
"Marks",
"Debora",
""
],
[
"Kohlbacher",
"Oliver",
""
]
] | Immunogenicity is a major problem during the development of biotherapeutics since it can lead to rapid clearance of the drug and adverse reactions. The challenge for biotherapeutic design is therefore to identify mutants of the protein sequence that minimize immunogenicity in a target population whilst retaining pharmaceutical activity and protein function. Current approaches are moderately successful in designing sequences with reduced immunogenicity, but do not account for the varying frequencies of different human leucocyte antigen alleles in a specific population and in addition, since many designs are non-functional, require costly experimental post-screening. Here we report a new method for de-immunization design using multi-objective combinatorial optimization that simultaneously optimizes the likelihood of a functional protein sequence at the same time as minimizing its immunogenicity tailored to a target population. We bypass the need for three-dimensional protein structure or molecular simulations to identify functional designs by automatically generating sequences using probabilistic models that have been used previously for mutation effect prediction and structure prediction. As proof-of-principle we designed sequences of the C2 domain of Factor VIII and tested them experimentally, resulting in a good correlation with the predicted immunogenicity of our model. |
1612.05660 | Luis David Garcia Puente | Rebecca Garcia, Luis David Garc\'ia Puente, Ryan Kruse, Jessica Liu,
Dane Miyata, Ethan Petersen, Kaitlyn Phillipson, and Anne Shiu | Gr\"obner Bases of Neural Ideals | 13 pages, 2 figures, 1 table | null | null | null | q-bio.NC math.AC math.AG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The brain processes information about the environment via neural codes. The
neural ideal was introduced recently as an algebraic object that can be used to
better understand the combinatorial structure of neural codes. Every neural
ideal has a particular generating set, called the canonical form, that directly
encodes a minimal description of the receptive field structure intrinsic to the
neural code. On the other hand, for a given monomial order, any polynomial
ideal is also generated by its unique (reduced) Gr\"obner basis with respect to
that monomial order. How are these two types of generating sets -- canonical
forms and Gr\"obner bases -- related? Our main result states that if the
canonical form of a neural ideal is a Gr\"obner basis, then it is the universal
Gr\"obner basis (that is, the union of all reduced Gr\"obner bases).
Furthermore, we prove that this situation -- when the canonical form is a
Gr\"obner basis -- occurs precisely when the universal Gr\"obner basis contains
only pseudo-monomials (certain generalizations of monomials). Our results
motivate two questions: (1)~When is the canonical form a Gr\"obner basis?
(2)~When the universal Gr\"obner basis of a neural ideal is {\em not} a
canonical form, what can the non-pseudo-monomial elements in the basis tell us
about the receptive fields of the code? We give partial answers to both
questions. Along the way, we develop a representation of pseudo-monomials as
hypercubes in a Boolean lattice.
| [
{
"created": "Fri, 16 Dec 2016 21:23:02 GMT",
"version": "v1"
},
{
"created": "Tue, 4 Apr 2017 21:23:41 GMT",
"version": "v2"
},
{
"created": "Fri, 20 Apr 2018 20:27:30 GMT",
"version": "v3"
}
] | 2018-04-24 | [
[
"Garcia",
"Rebecca",
""
],
[
"Puente",
"Luis David García",
""
],
[
"Kruse",
"Ryan",
""
],
[
"Liu",
"Jessica",
""
],
[
"Miyata",
"Dane",
""
],
[
"Petersen",
"Ethan",
""
],
[
"Phillipson",
"Kaitlyn",
""
],
[
"Shiu",
"Anne",
""
]
] | The brain processes information about the environment via neural codes. The neural ideal was introduced recently as an algebraic object that can be used to better understand the combinatorial structure of neural codes. Every neural ideal has a particular generating set, called the canonical form, that directly encodes a minimal description of the receptive field structure intrinsic to the neural code. On the other hand, for a given monomial order, any polynomial ideal is also generated by its unique (reduced) Gr\"obner basis with respect to that monomial order. How are these two types of generating sets -- canonical forms and Gr\"obner bases -- related? Our main result states that if the canonical form of a neural ideal is a Gr\"obner basis, then it is the universal Gr\"obner basis (that is, the union of all reduced Gr\"obner bases). Furthermore, we prove that this situation -- when the canonical form is a Gr\"obner basis -- occurs precisely when the universal Gr\"obner basis contains only pseudo-monomials (certain generalizations of monomials). Our results motivate two questions: (1)~When is the canonical form a Gr\"obner basis? (2)~When the universal Gr\"obner basis of a neural ideal is {\em not} a canonical form, what can the non-pseudo-monomial elements in the basis tell us about the receptive fields of the code? We give partial answers to both questions. Along the way, we develop a representation of pseudo-monomials as hypercubes in a Boolean lattice. |
1302.5142 | Mike Taylor | Michael P. Taylor | Aspects of the history, anatomy, taxonomy and palaeobiology of sauropod
dinosaurs | Ph.D dissertation, University of Portsmouth, UK, 2009. Consists of
five chapters each subsequently published as journal articles or book
chapters. 285 pages | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/3.0/ | Although the sauropod dinosaurs have been recognised for more than a hundred
and sixty years, much remains to be discovered and understood about their
functional anatomy and palaeobiology. Older taxa require revision and new taxa
await description. The characteristic long necks of sauropods are mechanically
perplexing and their evolution is obscure. All these issues are addressed
herein.
The genus Brachiosaurus is represented by the American type species B.
altithorax and the better known African species B. brancai. However,
examination of the overlapping material shows 26 differences between the
species. B. brancai must be removed from Brachiosaurus and referred to the
genus Giraffatitan, which has been previously proposed for it.
Xenoposeidon proneneukos is a new neosauropod from the Lower Cretaceous
Hastings Bed Group, known from a single partial dorsal vertebra. The excellent
preservation of the Xenoposeidon holotype reveals six unique characters. The
distinctive morphology suggests that Xenoposeidon may represent a new sauropod
family, extending sauropod disparity as well as bringing to four the number of
sauropod families known from the Wealden.
A second new Early Cretaceous genus is also described, a titanosauriform from
the Ruby Ranch Member of the Cedar Mountain Formation, in Utah. This taxon is
known from at least two individuals, an adult and a juvenile, which together
provide vertebrae, ribs, a scapula, sternal plates and an ilium. The ilium is
particularly unusual, exhibiting five unique features.
The longest sauropod necks were five times as long as those of the next
longest terrestrial animals, and four separate lineages evolved necks exceeding
10 m. Elongation was enabled by sheer size, vertebral pneumaticity and the
relative smallness of sauropod heads, despite aspects of cervical osteology
that are difficult to understand on mechanical first principles.
| [
{
"created": "Wed, 20 Feb 2013 23:12:21 GMT",
"version": "v1"
}
] | 2013-02-22 | [
[
"Taylor",
"Michael P.",
""
]
] | Although the sauropod dinosaurs have been recognised for more than a hundred and sixty years, much remains to be discovered and understood about their functional anatomy and palaeobiology. Older taxa require revision and new taxa await description. The characteristic long necks of sauropods are mechanically perplexing and their evolution is obscure. All these issues are addressed herein. The genus Brachiosaurus is represented by the American type species B. altithorax and the better known African species B. brancai. However, examination of the overlapping material shows 26 differences between the species. B. brancai must be removed from Brachiosaurus and referred to the genus Giraffatitan, which has been previously proposed for it. Xenoposeidon proneneukos is a new neosauropod from the Lower Cretaceous Hastings Bed Group, known from a single partial dorsal vertebra. The excellent preservation of the Xenoposeidon holotype reveals six unique characters. The distinctive morphology suggests that Xenoposeidon may represent a new sauropod family, extending sauropod disparity as well as bringing to four the number of sauropod families known from the Wealden. A second new Early Cretaceous genus is also described, a titanosauriform from the Ruby Ranch Member of the Cedar Mountain Formation, in Utah. This taxon is known from at least two individuals, an adult and a juvenile, which together provide vertebrae, ribs, a scapula, sternal plates and an ilium. The ilium is particularly unusual, exhibiting five unique features. The longest sauropod necks were five times as long as those of the next longest terrestrial animals, and four separate lineages evolved necks exceeding 10 m. Elongation was enabled by sheer size, vertebral pneumaticity and the relative smallness of sauropod heads, despite aspects of cervical osteology that are difficult to understand on mechanical first principles. |
1901.04990 | Sheikh Muhammad Asher Iqbal | Sheikh Muhammad Asher Iqbal and Nauman Zaffar Butt | Design and Analysis of Microfluidic Cell Counter using Spice Simulation | 18 pages,10 figures | null | null | null | q-bio.CB eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Microfluidic cytometers based on coulter principle have recently shown a
great potential as point-of-care biosensors for medical diagnostics. In this
study, the design of Coulter-based microfluidic cytometers is investigated by
using electrical circuit simulations. We explore the effects of physical
dimensions of the microelectrodes, the measurement volume, size/morphology of
the targeted cells, electrical properties of the reagents in the measurement
volume, and the impedance of the external readout circuit on the sensitivity of
the sensor. We show that the effect of microelectrode's surface area and the
dielectric properties of the suspension medium should be carefully considered
when characterizing the output response of the sensor. In particular, the area
of microelectrodes can have a significant effect on cells' electrical opacity (the
ratio of the cell impedance at high to low frequency), which is commonly used to
distinguish between sub-populations of the target cells (e.g. lymphocytes vs
monocytes when counting white blood cells). Moreover, we highlight that the
opacity response vs frequency can significantly vary depending upon whether the
absolute cell impedance or the differential output impedance is used in the
calculation. These insights can provide valuable guidelines for the design and
characterization of Coulter-based microfluidic sensors.
| [
{
"created": "Tue, 15 Jan 2019 18:59:56 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Jan 2019 15:55:04 GMT",
"version": "v2"
},
{
"created": "Wed, 6 Mar 2019 10:33:56 GMT",
"version": "v3"
},
{
"created": "Fri, 22 Mar 2019 06:45:29 GMT",
"version": "v4"
},
{
"created": "Sun, 27 Oct 2019 23:14:49 GMT",
"version": "v5"
}
] | 2019-10-29 | [
[
"Iqbal",
"Sheikh Muhammad Asher",
""
],
[
"Butt",
"Nauman Zaffar",
""
]
] | Microfluidic cytometers based on coulter principle have recently shown a great potential for point of care biosensors for medical diagnostics. In this study, the design of coulter based microfluidic cytometer is investigated by using electrical circuit simulations. We explore the effects of physical dimensions of the microelectrodes, the measurement volume, size/morphology of the targeted cells, electrical properties of the reagents in the measurement volume, and, the impedance of external readout circuit, on the sensitivity of the sensor. We show that the effect of microelectrode's surface area and the dielectric properties of the suspension medium should be carefully considered when characterizing the output response of the sensor. In particular, the area of microelectrodes can have significant effect on cells electrical opacity( the ratio of the cell impedance at high to low frequency) which is commonly used to distinguish between sub-population of the target cells( e.g. lymphocytes vs monocytes when counting white blood cells).Moreover, we highlight that the opacity response vs frequency can significantly vary depending upon whether the absolute cell impedance or the differential output impedance is used in the calculation. These insights can provide valuable guidelines for the design and characterization of coulter based microfluidic sensors. |
2208.11488 | Laura Gwilliams | Laura Gwilliams, Graham Flick, Alec Marantz, Liina Pylkkanen, David
Poeppel and Jean-Remi King | MEG-MASC: a high-quality magneto-encephalography dataset for evaluating
natural speech processing | 11 pages, 4 figures | null | null | null | q-bio.QM cs.CL eess.AS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The "MEG-MASC" dataset provides a curated set of raw magnetoencephalography
(MEG) recordings of 27 English speakers who listened to two hours of
naturalistic stories. Each participant performed two identical sessions,
involving listening to four fictional stories from the Manually Annotated
Sub-Corpus (MASC) intermixed with random word lists and comprehension
questions. We time-stamp the onset and offset of each word and phoneme in the
metadata of the recording, and organize the dataset according to the 'Brain
Imaging Data Structure' (BIDS). This data collection provides a suitable
benchmark for large-scale encoding and decoding analyses of temporally-resolved
brain responses to speech. We provide the Python code to replicate several
validation analyses of the MEG event-related fields, such as the temporal
decoding of phonetic features and word frequency. All code and MEG, audio and
text data are publicly available, in keeping with best practices in transparent and
reproducible research.
| [
{
"created": "Tue, 26 Jul 2022 19:17:01 GMT",
"version": "v1"
}
] | 2022-08-25 | [
[
"Gwilliams",
"Laura",
""
],
[
"Flick",
"Graham",
""
],
[
"Marantz",
"Alec",
""
],
[
"Pylkkanen",
"Liina",
""
],
[
"Poeppel",
"David",
""
],
[
"King",
"Jean-Remi",
""
]
] | The "MEG-MASC" dataset provides a curated set of raw magnetoencephalography (MEG) recordings of 27 English speakers who listened to two hours of naturalistic stories. Each participant performed two identical sessions, involving listening to four fictional stories from the Manually Annotated Sub-Corpus (MASC) intermixed with random word lists and comprehension questions. We time-stamp the onset and offset of each word and phoneme in the metadata of the recording, and organize the dataset according to the 'Brain Imaging Data Structure' (BIDS). This data collection provides a suitable benchmark for large-scale encoding and decoding analyses of temporally-resolved brain responses to speech. We provide the Python code to replicate several validation analyses of the MEG event-related fields, such as the temporal decoding of phonetic features and word frequency. All code and MEG, audio and text data are publicly available, in keeping with best practices in transparent and reproducible research. |
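As a sketch of what such time-stamped word-onset metadata enables (synthetic stand-ins for the signal and the events table, not the released dataset or its validation code):

```python
# Minimal epoching sketch: cut fixed windows around word onsets, as one
# would do with the BIDS events.tsv metadata described above. The signal,
# sampling rate and onsets here are synthetic stand-ins, not MEG-MASC data.
import numpy as np

sfreq = 1000                                # Hz, assumed sampling rate
signal = np.random.randn(10, 60 * sfreq)    # 10 channels, 60 s of fake MEG
word_onsets = [1.2, 3.5, 7.8, 12.0]         # seconds, stand-in for events.tsv

def epoch(signal, onsets, sfreq, tmin=-0.2, tmax=0.6):
    """Cut a (channels x time) window around each onset and stack them."""
    n0, n1 = int(tmin * sfreq), int(tmax * sfreq)
    return np.stack([signal[:, int(t * sfreq) + n0 : int(t * sfreq) + n1]
                     for t in onsets])

epochs = epoch(signal, word_onsets, sfreq)
print(epochs.shape)   # (n_words, n_channels, n_times)
```

The stacked epochs array is the usual input to the temporal decoding analyses the abstract mentions (e.g. fitting a classifier at each time sample).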
1502.01902 | Adam MacLean Dr | Adam L. MacLean, Heather A. Harrington, Michael P.H. Stumpf, Helen M.
Byrne | Mathematical and Statistical Techniques for Systems Medicine: The Wnt
Signaling Pathway as a Case Study | Submitted to 'Systems Medicine' as a book chapter | null | null | null | q-bio.QM math.DS q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The last decade has seen an explosion in models that describe phenomena in
systems medicine. Such models are especially useful for studying signaling
pathways, such as the Wnt pathway. In this chapter we use the Wnt pathway to
showcase current mathematical and statistical techniques that enable modelers
to gain insight into (models of) gene regulation, and generate testable
predictions. We introduce a range of modeling frameworks, but focus on ordinary
differential equation (ODE) models since they remain the most widely used
approach in systems biology and medicine and continue to offer great potential.
We present methods for the analysis of a single model, comprising applications
of standard dynamical systems approaches such as nondimensionalization, steady
state, asymptotic and sensitivity analysis, and more recent statistical and
algebraic approaches to compare models with data. We present parameter
estimation and model comparison techniques, focusing on Bayesian analysis and
coplanarity via algebraic geometry. Our intention is that this (non-exhaustive)
review may serve as a useful starting point for the analysis of models in
systems medicine.
| [
{
"created": "Fri, 6 Feb 2015 14:40:01 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Jul 2015 17:28:59 GMT",
"version": "v2"
}
] | 2015-07-31 | [
[
"MacLean",
"Adam L.",
""
],
[
"Harrington",
"Heather A.",
""
],
[
"Stumpf",
"Michael P. H.",
""
],
[
"Byrne",
"Helen M.",
""
]
] | The last decade has seen an explosion in models that describe phenomena in systems medicine. Such models are especially useful for studying signaling pathways, such as the Wnt pathway. In this chapter we use the Wnt pathway to showcase current mathematical and statistical techniques that enable modelers to gain insight into (models of) gene regulation, and generate testable predictions. We introduce a range of modeling frameworks, but focus on ordinary differential equation (ODE) models since they remain the most widely used approach in systems biology and medicine and continue to offer great potential. We present methods for the analysis of a single model, comprising applications of standard dynamical systems approaches such as nondimensionalization, steady state, asymptotic and sensitivity analysis, and more recent statistical and algebraic approaches to compare models with data. We present parameter estimation and model comparison techniques, focusing on Bayesian analysis and coplanarity via algebraic geometry. Our intention is that this (non exhaustive) review may serve as a useful starting point for the analysis of models in systems medicine. |
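For readers new to the framework, the simplest instance of such an ODE model is a one-variable production-degradation equation; a hedged sketch with arbitrary rate constants (not a model from the chapter), showing the steady-state analysis the abstract refers to:

```python
# Illustrative sketch only: dx/dt = k1 - k2*x, the kind of building block
# larger signalling-pathway ODE models are assembled from. Setting
# dx/dt = 0 gives the steady state x_ss = k1/k2. Rates are assumptions.
import numpy as np

k1, k2 = 2.0, 0.5          # production and degradation rates (assumed)
x_ss = k1 / k2             # analytical steady state

def integrate(x0=0.0, dt=1e-3, t_end=20.0):
    """Forward-Euler integration; returns the final state."""
    x = x0
    for _ in range(int(t_end / dt)):
        x += dt * (k1 - k2 * x)
    return x

print(integrate(), x_ss)   # the trajectory relaxes to k1/k2 = 4.0
```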
0711.3503 | Jeremy Sumner | J. G. Sumner, M. A. Charleston, L. S. Jermiin, and P. D. Jarvis | Markov invariants, plethysms, and phylogenetics (the long version) | 39 pages, 10 figures, 2 tables, 3 appendices. Long arxiv version
includes extended introduction, subsection on mixed-weight invariants, 3rd
appendix on K3ST model and a more relaxed pace with additional discussion
throughout. "Short version" is to appear in Journal of Theoretical Biology.
v4: Sequence length in simulation was corrected from N=1000 to N=10000 | J. Theor. Biol., 253:601--615, 2008 | null | null | q-bio.PE math-ph math.MP q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore model based techniques of phylogenetic tree inference exercising
Markov invariants. Markov invariants are group invariant polynomials and are
distinct from what is known in the literature as phylogenetic invariants,
although we establish a commonality in some special cases. We show that the
simplest Markov invariant forms the foundation of the Log-Det distance measure.
We take as our primary tool group representation theory, and show that it
provides a general framework for analysing Markov processes on trees. From this
algebraic perspective, the inherent symmetries of these processes become
apparent, and focusing on plethysms, we are able to define Markov invariants
and give existence proofs. We give an explicit technique for constructing the
invariants, valid for any number of character states and taxa. For phylogenetic
trees with three and four leaves, we demonstrate that the corresponding Markov
invariants can be fruitfully exploited in applied phylogenetic studies.
| [
{
"created": "Thu, 22 Nov 2007 05:09:12 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Nov 2007 10:14:59 GMT",
"version": "v2"
},
{
"created": "Tue, 8 Jul 2008 07:24:02 GMT",
"version": "v3"
},
{
"created": "Tue, 22 Jul 2008 23:00:29 GMT",
"version": "v4"
}
] | 2012-04-24 | [
[
"Sumner",
"J. G.",
""
],
[
"Charleston",
"M. A.",
""
],
[
"Jermiin",
"L. S.",
""
],
[
"Jarvis",
"P. D.",
""
]
] | We explore model based techniques of phylogenetic tree inference exercising Markov invariants. Markov invariants are group invariant polynomials and are distinct from what is known in the literature as phylogenetic invariants, although we establish a commonality in some special cases. We show that the simplest Markov invariant forms the foundation of the Log-Det distance measure. We take as our primary tool group representation theory, and show that it provides a general framework for analysing Markov processes on trees. From this algebraic perspective, the inherent symmetries of these processes become apparent, and focusing on plethysms, we are able to define Markov invariants and give existence proofs. We give an explicit technique for constructing the invariants, valid for any number of character states and taxa. For phylogenetic trees with three and four leaves, we demonstrate that the corresponding Markov invariants can be fruitfully exploited in applied phylogenetic studies. |
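As a minimal numerical illustration of the Log-Det idea mentioned above (normalisation conventions vary between references and none is applied here, so identical sequences do not give exactly zero):

```python
# Hedged sketch of a Log-Det-style quantity: for two aligned sequences,
# build the 4x4 joint frequency matrix F and take -log det F. The proper
# Log-Det distance subtracts base-frequency terms, omitted here.
import numpy as np

def logdet_distance(seq1, seq2, alphabet="ACGT"):
    idx = {c: i for i, c in enumerate(alphabet)}
    F = np.zeros((4, 4))
    for a, b in zip(seq1, seq2):
        F[idx[a], idx[b]] += 1
    F /= F.sum()
    return -np.log(np.linalg.det(F))

print(logdet_distance("ACGTACGTACGTACGT", "ACGTACGTACGTACGA"))
```

Substitutions move mass off the diagonal of F, shrinking its determinant and increasing the value, which is the behaviour the distance measure exploits.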
1701.00523 | BingKan Xue | BingKan Xue and Stanislas Leibler | Bet-hedging against demographic fluctuations | minor revisions | Phys. Rev. Lett. 119, 108103 (2017) | 10.1103/PhysRevLett.119.108103 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biological organisms have to cope with stochastic variations in both the
external environment and the internal population dynamics. Theoretical studies
and laboratory experiments suggest that population diversification could be an
effective bet-hedging strategy for adaptation to varying environments. Here we
show that bet-hedging can also be effective against demographic fluctuations
that pose a trade-off between growth and survival for populations even in a
constant environment. A species can maximize its overall abundance in the long
term by diversifying into coexisting subpopulations of both "fast-growing" and
"better-surviving" individuals. Our model generalizes statistical physics
models of birth-death processes to incorporate dispersal, during which new
populations are founded, and can further incorporate variations of local
environments. In this way we unify different bet-hedging strategies against
demographic and environmental variations as a general means of adaptation to
both types of uncertainties in population growth.
| [
{
"created": "Mon, 2 Jan 2017 21:10:54 GMT",
"version": "v1"
},
{
"created": "Thu, 11 May 2017 05:58:12 GMT",
"version": "v2"
},
{
"created": "Wed, 16 Aug 2017 16:12:35 GMT",
"version": "v3"
}
] | 2017-09-13 | [
[
"Xue",
"BingKan",
""
],
[
"Leibler",
"Stanislas",
""
]
] | Biological organisms have to cope with stochastic variations in both the external environment and the internal population dynamics. Theoretical studies and laboratory experiments suggest that population diversification could be an effective bet-hedging strategy for adaptation to varying environments. Here we show that bet-hedging can also be effective against demographic fluctuations that pose a trade-off between growth and survival for populations even in a constant environment. A species can maximize its overall abundance in the long term by diversifying into coexisting subpopulations of both "fast-growing" and "better-surviving" individuals. Our model generalizes statistical physics models of birth-death processes to incorporate dispersal, during which new populations are founded, and can further incorporate variations of local environments. In this way we unify different bet-hedging strategies against demographic and environmental variations as a general means of adaptation to both types of uncertainties in population growth. |
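A toy numerical illustration of the growth-survival trade-off (a simple branching process with made-up offspring distributions, not the paper's birth-death model):

```python
# Toy Galton-Watson sketch: the "fast" phenotype has a larger mean
# offspring number but a higher chance of leaving none. All probabilities
# are illustrative assumptions.
import random

def extinct_fraction(p_zero, n_offspring, trials=2000, generations=15, seed=1):
    """Fraction of lineages (from a single founder) that die out."""
    rng = random.Random(seed)
    extinct = 0
    for _ in range(trials):
        n = 1
        for _ in range(generations):
            n = sum(0 if rng.random() < p_zero else n_offspring
                    for _ in range(n))
            if n == 0 or n > 500:   # established lineages are safe; stop early
                break
        if n == 0:
            extinct += 1
    return extinct / trials

fast = extinct_fraction(p_zero=0.5, n_offspring=5)   # mean 2.5 offspring
safe = extinct_fraction(p_zero=0.2, n_offspring=2)   # mean 1.6 offspring
print(fast, safe)
```

With these numbers the fast phenotype grows quicker on average (2.5 vs 1.6 offspring) yet loses more lineages to demographic fluctuations, which is the trade-off that makes a diversified population advantageous.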
1409.4178 | Leonardo L. Gollo | Leonardo L. Gollo and Michael Breakspear | The frustrated brain: From dynamics on motifs to communities and
networks | 17 pages, 7 figures | Phil. Trans. R. Soc. B 369: 20130532 (2014) | 10.1098/rstb.2013.0532 | null | q-bio.NC cond-mat.dis-nn nlin.PS physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cognitive function depends on an adaptive balance between flexible dynamics
and integrative processes in distributed cortical networks. Patterns of
zero-lag synchrony likely underpin numerous perceptual and cognitive functions.
Synchronization fulfils integration by reducing entropy, whilst adaptive
function mandates that a broad variety of stable states be readily accessible.
Here, we elucidate two complementary influences on patterns of zero-lag
synchrony that derive from basic properties of brain networks. First, mutually
coupled pairs of neuronal subsystems -- resonance pairs -- promote stable
zero-lag synchrony amongst the small motifs in which they are embedded, and
whose effects can propagate along connected chains. Second, frustrated
closed-loop motifs disrupt synchronous dynamics, enabling metastable
configurations of zero-lag synchrony to coexist. We document these two
complementary influences in small motifs and illustrate how these effects
underpin stable versus metastable phase-synchronization patterns in
prototypical modular networks and in large-scale cortical networks of the
macaque (CoCoMac). We find that the variability of synchronization patterns
depends on the inter-node time delay, increases with the network size, and is
maximized for intermediate coupling strengths. We hypothesize that the
dialectic influences of resonance versus frustration may form a dynamic
substrate for flexible neuronal integration, an essential platform across
diverse cognitive processes.
| [
{
"created": "Mon, 15 Sep 2014 08:27:34 GMT",
"version": "v1"
}
] | 2014-09-16 | [
[
"Gollo",
"Leonardo L.",
""
],
[
"Breakspear",
"Michael",
""
]
] | Cognitive function depends on an adaptive balance between flexible dynamics and integrative processes in distributed cortical networks. Patterns of zero-lag synchrony likely underpin numerous perceptual and cognitive functions. Synchronization fulfils integration by reducing entropy, whilst adaptive function mandates that a broad variety of stable states be readily accessible. Here, we elucidate two complementary influences on patterns of zero-lag synchrony that derive from basic properties of brain networks. First, mutually coupled pairs of neuronal subsystems -- resonance pairs -- promote stable zero-lag synchrony amongst the small motifs in which they are embedded, and whose effects can propagate along connected chains. Second, frustrated closed-loop motifs disrupt synchronous dynamics, enabling metastable configurations of zero-lag synchrony to coexist. We document these two complementary influences in small motifs and illustrate how these effects underpin stable versus metastable phase-synchronization patterns in prototypical modular networks and in large-scale cortical networks of the macaque (CoCoMac). We find that the variability of synchronization patterns depends on the inter-node time delay, increases with the network size, and is maximized for intermediate coupling strengths. We hypothesize that the dialectic influences of resonance versus frustration may form a dynamic substrate for flexible neuronal integration, an essential platform across diverse cognitive processes. |
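The relay mechanism behind such zero-lag states can be sketched with three delay-coupled Kuramoto phase oscillators in a chain (parameters are illustrative assumptions, not values from the paper's neuronal model):

```python
# Sketch (assumed parameters): three delay-coupled Kuramoto oscillators in
# a chain 1 -- 2 -- 3. The outer pair, relayed through the hub, settles
# into near zero-lag synchrony even though every link carries a delay.
import numpy as np

def simulate(k=1.0, delay=0.1, dt=1e-3, t_end=50.0, omega=2 * np.pi):
    d, n = int(delay / dt), int(t_end / dt)
    theta = np.zeros((n + d, 3))
    theta[:d] = np.random.default_rng(0).uniform(0, 2 * np.pi, 3)
    adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])   # chain adjacency
    for t in range(d, n + d - 1):
        # each node sees its neighbours' phases one delay in the past
        drive = (adj * np.sin(theta[t - d][None, :] - theta[t][:, None])).sum(1)
        theta[t + 1] = theta[t] + dt * (omega + k * drive)
    return theta

theta = simulate()
lag = np.angle(np.exp(1j * (theta[-1, 0] - theta[-1, 2])))  # outer-pair lag
print(abs(lag))
```

The two outer nodes receive statistically identical delayed input from the hub, so their phase difference decays toward zero, a minimal "resonance pair via relay" effect of the kind the abstract discusses.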
1602.08079 | Vicente M. Reyes Ph.D. | James DeFelice and Vicente M. Reyes | Spherical Distance Metrics Applied to Protein Structure Classification | 48 pages, 8 figures, 6 tables | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Structural relationships among proteins are important in the study of their
evolution as well as in drug design and development. The protein 3D structure
has been shown to be effective with respect to classifying proteins. Prior work
has shown that the Double Centroid Reduced Representation (DCRR) model is a
useful geometric representation for protein structure with respect to visual
models, reducing the quantity of modeled information for each amino acid, yet
retaining the most important geometrical and chemical features of each: the
centroids of the backbone and of the side-chain. DCRR has not yet been applied
in the calculation of geometric structural similarity. Meanwhile,
multi-dimensional indexing (MDI) of protein structure combines protein
structural analysis with distance metrics to facilitate structural similarity
queries and is also used for clustering protein structures into related groups.
In this respect, the combination of geometric models with MDI has been shown to
be effective. Prior work, notably Distance and Density based Protein Indexing
(DDPIn), applies MDI to protein models based on the geometry of the CA
backbone. DDPIn distance metrics are based on radial and density functions that
incorporate spherical-based metrics, and the indices are built from metric tree
(M-tree) structures. This work combines DCRR with DDPIn for the development of
new DCRR centroid-based metrics: spherical binning distance and inter-centroid
spherical distance. The use of DCRR models will provide additional significant
structural information via the inclusion of side-chain centroids. Additionally,
the newly developed distance metric functions combined with DCRR and M-tree
indexing attempt to improve upon the performance of prior work (DDPIn), given
the same data set, with respect to both individual k-nearest neighbor (kNN)
search queries as well as clustering all proteins in the index.
| [
{
"created": "Fri, 18 Dec 2015 07:19:37 GMT",
"version": "v1"
}
] | 2016-02-26 | [
[
"DeFelice",
"James",
""
],
[
"Reyes",
"Vicente M.",
""
]
] | Structural relationships among proteins are important in the study of their evolution as well as in drug design and development. The protein 3D structure has been shown to be effective with respect to classifying proteins. Prior work has shown that the Double Centroid Reduced Representation (DCRR) model is a useful geometric representation for protein structure with respect to visual models, reducing the quantity of modeled information for each amino acid, yet retaining the most important geometrical and chemical features of each: the centroids of the backbone and of the side-chain. DCRR has not yet been applied in the calculation of geometric structural similarity. Meanwhile, multi-dimensional indexing (MDI) of protein structure combines protein structural analysis with distance metrics to facilitate structural similarity queries and is also used for clustering protein structures into related groups. In this respect, the combination of geometric models with MDI has been shown to be effective. Prior work, notably Distance and Density based Protein Indexing (DDPIn), applies MDI to protein models based on the geometry of the CA backbone. DDPIn distance metrics are based on radial and density functions that incorporate spherical-based metrics, and the indices are built from metric tree (M-tree) structures. This work combines DCRR with DDPIn for the development of new DCRR centroid-based metrics: spherical binning distance and inter-centroid spherical distance. The use of DCRR models will provide additional significant structural information via the inclusion of side-chain centroids. Additionally, the newly developed distance metric functions combined with DCRR and M-tree indexing attempt to improve upon the performance of prior work (DDPIn), given the same data set, with respect to both individual k-nearest neighbor (kNN) search queries as well as clustering all proteins in the index. |
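As a hypothetical, much-simplified cousin of the spherical metrics above (the paper's actual spherical binning and inter-centroid distance definitions are richer than this):

```python
# Hypothetical sketch (not the paper's exact metric): bin a structure's
# centroid coordinates into spherical shells around the geometric centre
# and compare two structures by the L1 distance between shell histograms.
import numpy as np

def shell_histogram(points, n_shells=8, r_max=30.0):
    centered = points - points.mean(axis=0)
    radii = np.linalg.norm(centered, axis=1)
    hist, _ = np.histogram(radii, bins=n_shells, range=(0.0, r_max))
    return hist / len(points)

def shell_distance(pts_a, pts_b):
    return np.abs(shell_histogram(pts_a) - shell_histogram(pts_b)).sum()

rng = np.random.default_rng(0)
a = rng.normal(scale=5.0, size=(100, 3))   # stand-in centroid coordinates
b = rng.normal(scale=9.0, size=(100, 3))   # a more "spread out" structure
print(shell_distance(a, a), shell_distance(a, b))
```

A vector of shell counts like this is exactly the kind of fixed-length key that can be stored in an M-tree index for kNN queries over many structures.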
2102.03253 | Pietro Hiram Guzzi | Jayanta Kumar Das, Swarup Roy, Pietro Hiram Guzzi | Analyzing Host-Viral Interactome of SARS-CoV-2 for Identifying
Vulnerable Host Proteins during COVID-19 Pathogenesis | null | null | 10.1016/j.meegid.2021.104921 | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The development of therapeutic targets for COVID-19 treatment is based on the
understanding of the molecular mechanism of pathogenesis. The identification of
genes and proteins involved in the infection mechanism is key to shedding light
on the complex molecular mechanisms. The combined effort of many laboratories
distributed throughout the world has produced the accumulation of both protein
and genetic interactions. In this work we integrate these available results and
obtain a host protein-protein interaction network composed of 1432 human
proteins. We calculate network centrality measures to identify key proteins.
Then we perform functional enrichment of central proteins. We observed that the
identified proteins are mostly associated with several crucial pathways,
including cellular processes, signal transduction, and neurodegenerative
disease. Finally, we focused on proteins involved in causing disease in the
human respiratory tract. We conclude that COVID-19 is a complex disease, and we
highlighted many potential therapeutic targets including RBX1, HSPA5, ITCH,
RAB7A, RAB5A, RAB8A, PSMC5, CAPZB, CANX, IGF2R, HSPA1A, which are central and
also associated with multiple diseases.
| [
{
"created": "Fri, 5 Feb 2021 15:57:48 GMT",
"version": "v1"
}
] | 2021-05-19 | [
[
"Das",
"Jayanta Kumar",
""
],
[
"Roy",
"Swarup",
""
],
[
"Guzzi",
"Pietro Hiram",
""
]
] | The development of therapeutic targets for COVID-19 treatment is based on the understanding of the molecular mechanism of pathogenesis. The identification of genes and proteins involved in the infection mechanism is key to shedding light on the complex molecular mechanisms. The combined effort of many laboratories distributed throughout the world has produced the accumulation of both protein and genetic interactions. In this work we integrate these available results and obtain a host protein-protein interaction network composed of 1432 human proteins. We calculate network centrality measures to identify key proteins. Then we perform functional enrichment of central proteins. We observed that the identified proteins are mostly associated with several crucial pathways, including cellular processes, signal transduction, and neurodegenerative disease. Finally, we focused on proteins involved in causing disease in the human respiratory tract. We conclude that COVID-19 is a complex disease, and we highlighted many potential therapeutic targets including RBX1, HSPA5, ITCH, RAB7A, RAB5A, RAB8A, PSMC5, CAPZB, CANX, IGF2R, HSPA1A, which are central and also associated with multiple diseases. |
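The simplest of the network centrality measures used in such an analysis, degree centrality, can be sketched on a toy graph (the edges below are illustrative only, not edges from the SARS-CoV-2 interactome):

```python
# Small self-contained sketch (toy graph, not the paper's 1432-protein
# network): rank proteins by degree, the simplest centrality measure.
from collections import defaultdict

edges = [("RAB7A", "HSPA5"), ("RAB7A", "ITCH"), ("RAB7A", "RBX1"),
         ("HSPA5", "ITCH"), ("CANX", "HSPA5")]   # illustrative edges only

degree = defaultdict(int)
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

ranking = sorted(degree.items(), key=lambda kv: kv[1], reverse=True)
print(ranking)   # hubs of this toy graph come first
```

On a real interactome one would typically also compute betweenness and closeness centrality and intersect the top-ranked proteins, as the abstract describes.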
q-bio/0505017 | Edgardo Brigatti | E. Brigatti, J.S. Sa' Martins, I.Roditi | Evolution of polymorphism and sympatric speciation through competition
in a unimodal distribution of resources | 7 pages, 6 figures, changed content | Physica A 376 (2007) 378 | 10.1016/j.physa.2006.10.031 | null | q-bio.PE | null | A microscopic agent dynamical model for diploid age-structured populations is
used to study evolution of polymorphism and sympatric speciation. The
underlying ecology is represented by a unimodal distribution of resources of
some width. Competition among individuals is also described by a similar
distribution, and its strength is maximum for individuals with the same
phenotype and decreases with distance in phenotype space as a Gaussian, with
some width. These two widths define the model's phase space, in which we
identify the regions where an autonomous emergence of stable polymorphism or
speciation is more likely.
| [
{
"created": "Mon, 9 May 2005 17:33:22 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Feb 2006 20:23:49 GMT",
"version": "v2"
}
] | 2015-06-26 | [
[
"Brigatti",
"E.",
""
],
[
"Martins",
"J. S. Sa'",
""
],
[
"Roditi",
"I.",
""
]
] | A microscopic agent dynamical model for diploid age-structured populations is used to study evolution of polymorphism and sympatric speciation. The underlying ecology is represented by a unimodal distribution of resources of some width. Competition among individuals is also described by a similar distribution, and its strength is maximum for individuals with the same phenotype and decreases with distance in phenotype space as a Gaussian, with some width. These two widths define the model's phase space, in which we identify the regions where an autonomous emergence of stable polymorphism or speciation is more likely. |
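The two Gaussian kernels whose widths span the model's phase space can be written down directly; a minimal sketch with assumed width values:

```python
# Sketch of the two kernels described above (widths are assumed values):
# competition is maximal for identical phenotypes and falls off as a
# Gaussian with phenotypic distance; resources are unimodal in phenotype.
import math

def competition(x1, x2, sigma_c=0.3):
    """Competition strength between phenotypes x1 and x2."""
    return math.exp(-((x1 - x2) ** 2) / (2 * sigma_c ** 2))

def resource(x, sigma_k=1.0):
    """Unimodal (Gaussian) resource distribution over phenotype space."""
    return math.exp(-(x ** 2) / (2 * sigma_k ** 2))

print(competition(0.2, 0.2), competition(0.2, 0.8))
```

When sigma_c is small relative to sigma_k, distant phenotypes barely compete while resources remain plentiful away from the optimum, which is the regime where stable polymorphism can emerge.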
1209.0674 | Taha Sochi | Taha Sochi | Accounting for the Use of Different Length Scale Factors in x, y and z
Directions | 5 pages | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This short article presents a mathematical formula required for metric
corrections in image extraction and processing when using different length
scale factors in three-dimensional space which is normally encountered in
cryomicrotome image construction techniques.
| [
{
"created": "Tue, 4 Sep 2012 15:37:51 GMT",
"version": "v1"
}
] | 2012-09-05 | [
[
"Sochi",
"Taha",
""
]
] | This short article presents a mathematical formula required for metric corrections in image extraction and processing when using different length scale factors in three-dimensional space which is normally encountered in cryomicrotome image construction techniques. |
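The correction described above reduces, in its simplest form, to weighting each axis of index space by its own scale factor before taking a Euclidean distance; a sketch with assumed (e.g. mm-per-voxel) values:

```python
# Minimal form of the metric correction: physical distance between two
# voxel-index points when x, y and z carry different length scale
# factors. The scale values here are illustrative assumptions.
import math

def scaled_distance(p, q, scale=(0.5, 0.5, 2.0)):
    """Physical distance between index-space points p and q."""
    return math.sqrt(sum((s * (a - b)) ** 2
                         for s, a, b in zip(scale, p, q)))

print(scaled_distance((0, 0, 0), (2, 0, 1)))   # sqrt(1^2 + 2^2) ~= 2.236
```

Anisotropic slice spacing of exactly this kind arises in cryomicrotome stacks, where in-plane resolution differs from the slice thickness.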
2102.13471 | Stephane Victor | Alain Oustaloup, Fran\c{c}ois Levron, St\'ephane Victor, Luc Dugard | Non-integer (or fractional) power model of a viral spreading:
application to the COVID-19 | null | Annual Reviews in Control, 2021 | 10.1016/j.arcontrol.2021.09.003 | null | q-bio.PE physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes a very simple deterministic mathematical model, which, by
using a power-law, is a \emph{non-integer power model} (or \emph{fractional
power model (FPM)}). Such a model, in a non-integer power of time, namely $t^m$
up to constants, represents with good precision the totality of contaminated
individuals on each day. Despite being enriched with knowledge
through an internal structure based on a geometric sequence "with variable
ratio", the model (in its non-integer representation) has only three
parameters, among which the non-integer power, $m$, that determines on its own,
according to its value, an aggravation or an improvement of the viral
spreading. Its simplicity comes from the power-law, $t^m$, which simply
expresses the singular dynamics of the operator of non-integer differentiation
or integration, of high parametric compactness, that governs diffusion
phenomena and, as shown in this paper, the spreading phenomena by
contamination. The proposed model is indeed validated with the official data of
the Ministry of Health on the COVID-19 spreading. Used in prediction, it
justifies the choice of a lockdown, without which the spreading would have
worsened considerably. The comparison of this model in $t^m$ with two known
models having the same number of parameters shows that its representation of
the real data is better or more general. Finally, in a more
fundamental context and particularly in terms of complexity and simplicity, a
self-filtering action enables showing the compatibility between the
\emph{internal complexity} that the internal structure and its stochastic
behavior present, and the \emph{global simplicity} that the model in $t^m$
offers in a deterministic manner: the non-integer power of a power-law is
indeed a marker of complexity.
| [
{
"created": "Wed, 24 Feb 2021 09:22:29 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Nov 2021 10:42:51 GMT",
"version": "v2"
}
] | 2022-11-28 | [
[
"Oustaloup",
"Alain",
""
],
[
"Levron",
"François",
""
],
[
"Victor",
"Stéphane",
""
],
[
"Dugard",
"Luc",
""
]
] | This paper proposes a very simple deterministic mathematical model, which, by using a power-law, is a \emph{non-integer power model} (or \emph{fractional power model (FPM)}). Such a model, in a non-integer power of time, namely $t^m$ up to constants, represents with good precision the totality of contaminated individuals on each day. Despite being enriched with knowledge through an internal structure based on a geometric sequence "with variable ratio", the model (in its non-integer representation) has only three parameters, among which the non-integer power, $m$, that determines on its own, according to its value, an aggravation or an improvement of the viral spreading. Its simplicity comes from the power-law, $t^m$, which simply expresses the singular dynamics of the operator of non-integer differentiation or integration, of high parametric compactness, that governs diffusion phenomena and, as shown in this paper, the spreading phenomena by contamination. The proposed model is indeed validated with the official data of the Ministry of Health on the COVID-19 spreading. Used in prediction, it justifies the choice of a lockdown, without which the spreading would have worsened considerably. The comparison of this model in $t^m$ with two known models having the same number of parameters shows that its representation of the real data is better or more general. Finally, in a more fundamental context and particularly in terms of complexity and simplicity, a self-filtering action enables showing the compatibility between the \emph{internal complexity} that the internal structure and its stochastic behavior present, and the \emph{global simplicity} that the model in $t^m$ offers in a deterministic manner: the non-integer power of a power-law is indeed a marker of complexity. |
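Fitting a power-law $y = C\,t^m$ of this kind is linear in log-log coordinates, where the exponent $m$ is the slope; a sketch on synthetic data (not the Ministry of Health figures):

```python
# Illustrative sketch only (synthetic, noiseless data): fitting the
# fractional power model y = C * t**m by least squares in log-log
# coordinates. The true values m = 1.7, C = 3.0 are arbitrary assumptions.
import numpy as np

t = np.arange(1, 31, dtype=float)   # days since outbreak start
y = 3.0 * t ** 1.7                  # synthetic cumulative case counts

slope, intercept = np.polyfit(np.log(t), np.log(y), 1)
m_hat, c_hat = slope, np.exp(intercept)
print(m_hat, c_hat)                 # recovers m = 1.7 and C = 3.0
```

On real data the estimated exponent plays the diagnostic role described above: m rising signals an aggravation of the spreading, m falling an improvement.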
q-bio/0502024 | Stefan Klumpp | Stefan Klumpp, Theo M. Nieuwenhuizen, and Reinhard Lipowsky | Self-organized density patterns of molecular motors in arrays of
cytoskeletal filaments | 48 pages, 8 figures | Biophys. J. 88, 3118-3132 (2005) | 10.1529/biophysj.104.056127 | null | q-bio.SC cond-mat.stat-mech | null | The stationary states of systems with many molecular motors are studied
theoretically for uniaxial and centered (aster-like) arrangements of
cytoskeletal filaments using Monte Carlo simulations and a two-state model.
Mutual exclusion of motors from binding sites of the filaments is taken into
account. For small overall motor concentration, the density profiles are
exponential and algebraic in uniaxial and centered filament systems,
respectively. For uniaxial systems, exclusion leads to the coexistence of
regions of high and low densities of bound motors corresponding to motor
traffic jams, which grow upon increasing the overall motor concentration. These
jams are insensitive to the motor behavior at the end of the filament. In
centered systems, traffic jams remain small and an increase in the motor
concentration leads to a flattening of the profile, if the motors move inwards,
and to the build-up of a concentration maximum in the center of the aster if
motors move outwards. In addition to motor density patterns, we also determine
the corresponding patterns of the motor current.
| [
{
"created": "Tue, 22 Feb 2005 11:21:13 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Klumpp",
"Stefan",
""
],
[
"Nieuwenhuizen",
"Theo M.",
""
],
[
"Lipowsky",
"Reinhard",
""
]
] | The stationary states of systems with many molecular motors are studied theoretically for uniaxial and centered (aster-like) arrangements of cytoskeletal filaments using Monte Carlo simulations and a two-state model. Mutual exclusion of motors from binding sites of the filaments is taken into account. For small overall motor concentration, the density profiles are exponential and algebraic in uniaxial and centered filament systems, respectively. For uniaxial systems, exclusion leads to the coexistence of regions of high and low densities of bound motors corresponding to motor traffic jams, which grow upon increasing the overall motor concentration. These jams are insensitive to the motor behavior at the end of the filament. In centered systems, traffic jams remain small and an increase in the motor concentration leads to a flattening of the profile, if the motors move inwards, and to the build-up of a concentration maximum in the center of the aster if motors move outwards. In addition to motor density patterns, we also determine the corresponding patterns of the motor current. |
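A rough Monte Carlo sketch in the spirit of the uniaxial case: a TASEP-like lattice gas with mutual exclusion, where slow unbinding at the filament end produces a motor traffic jam (all rates are illustrative assumptions, not the paper's two-state model):

```python
# TASEP-like sketch: motors bind at the minus end (rate alpha), hop
# forward under mutual exclusion, and unbind at the plus end (rate beta).
# With alpha >> beta the filament jams at high density.
import random

def density_profile(L=50, alpha=0.8, beta=0.2, steps=100000, seed=2):
    rng = random.Random(seed)
    site = [0] * L
    occupancy = [0.0] * L
    samples = 0
    for s in range(steps):
        i = rng.randrange(-1, L)                 # pick entry, a bond, or exit
        if i == -1:                              # binding at the minus end
            if site[0] == 0 and rng.random() < alpha:
                site[0] = 1
        elif i == L - 1:                         # unbinding at the plus end
            if site[i] == 1 and rng.random() < beta:
                site[i] = 0
        elif site[i] == 1 and site[i + 1] == 0:  # forward hop if site is free
            site[i], site[i + 1] = 0, 1
        if s % 100 == 0:                         # sample the density profile
            samples += 1
            for j in range(L):
                occupancy[j] += site[j]
    return [o / samples for o in occupancy]

rho = density_profile()
print(sum(rho) / len(rho))   # mean bound-motor density, jammed near 1 - beta
```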
1512.01595 | Andrew Dean | Andrew Dean, Ewan Minter, Megan Sorenson, Christopher Lowe, Duncan
Cameron, Michael Brockurst, A. Jamie Wood | Host control and nutrient trading in a photosynthetic symbiosis | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Photosymbiosis is one of the most important evolutionary trajectories,
resulting in the chloroplast and the subsequent development of all complex
photosynthetic organisms. The ciliate Paramecium bursaria and the alga
Chlorella have a well-established and well-studied light-dependent
endosymbiotic relationship. Despite its prominence there remain many unanswered
questions regarding the exact mechanisms of the photosymbiosis. Of particular
interest is how a host maintains and manages its symbiont load in response to
the allocation of nutrients between itself and its symbionts. Here we construct
a detailed mathematical model, parameterised from the literature, that
explicitly incorporates nutrient trading within a deterministic model of both
partners. The model demonstrates how the symbiotic relationship can manifest as
parasitism of the host by the symbionts, mutualism, wherein both partners
benefit, or exploitation of the symbionts by the hosts. We show that the
precise nature of the photosymbiosis is determined by both environmental
conditions (how much light is available for photosynthesis) and the level of
control a host has over its symbiont load. Our model provides a framework
within which it is possible to pose detailed questions regarding the
evolutionary behaviour of this important example of an established light
dependent endosymbiosis; we focus on one question in particular, namely the
evolution of host control, and show using an adaptive dynamics approach that a
moderate level of host control may evolve provided the associated costs are not
prohibitive.
| [
{
"created": "Fri, 4 Dec 2015 23:53:09 GMT",
"version": "v1"
}
] | 2015-12-08 | [
[
"Dean",
"Andrew",
""
],
[
"Minter",
"Ewan",
""
],
[
"Sorenson",
"Megan",
""
],
[
"Lowe",
"Christopher",
""
],
[
"Cameron",
"Duncan",
""
],
[
"Brockurst",
"Michael",
""
],
[
"Wood",
"A. Jamie",
""
]
] | Photosymbiosis is one of the most important evolutionary trajectories, resulting in the chloroplast and the subsequent development of all complex photosynthetic organisms. The ciliate Paramecium bursaria and the alga Chlorella have a well established and well studied light dependent endosymbiotic relationship. Despite its prominence there remain many unanswered questions regarding the exact mechanisms of the photosymbiosis. Of particular interest is how a host maintains and manages its symbiont load in response to the allocation of nutrients between itself and its symbionts. Here we construct a detailed mathematical model, parameterised from the literature, that explicitly incorporates nutrient trading within a deterministic model of both partners. The model demonstrates how the symbiotic relationship can manifest as parasitism of the host by the symbionts, mutualism, wherein both partners benefit, or exploitation of the symbionts by the hosts. We show that the precise nature of the photosymbiosis is determined by both environmental conditions (how much light is available for photosynthesis) and the level of control a host has over its symbiont load. Our model provides a framework within which it is possible to pose detailed questions regarding the evolutionary behaviour of this important example of an established light dependent endosymbiosis; we focus on one question in particular, namely the evolution of host control, and show using an adaptive dynamics approach that a moderate level of host control may evolve provided the associated costs are not prohibitive. |
2307.15170 | Nishant Sinha | Ryan S Gallagher, Nishant Sinha, Akash R Pattnaik, William K.S.
Ojemann, Alfredo Lucas, Joshua J. LaRocque, John M Bernabei, Adam S
Greenblatt, Elizabeth M Sweeney, H Isaac Chen, Kathryn A Davis, Erin C
Conrad, Brian Litt | Quantifying interictal intracranial EEG to predict focal epilepsy | 25 pages, 4 Figures, 1 table | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Intracranial EEG (IEEG) is used for 2 main purposes, to determine: (1) if
epileptic networks are amenable to focal treatment and (2) where to intervene.
Currently these questions are answered qualitatively and sometimes differently
across centers. There is a need for objective, standardized methods to guide
surgical decision making and to enable large scale data analysis across centers
and prospective clinical trials.
We analyzed interictal data from 101 patients with drug resistant epilepsy
who underwent presurgical evaluation with IEEG. We chose interictal data
because of its potential to reduce the morbidity and cost associated with ictal
recording. 65 patients had unifocal seizure onset on IEEG, and 36 were
non-focal or multi-focal. We quantified the spatial dispersion of implanted
electrodes and interictal IEEG abnormalities for each patient. We compared
these measures against the 5 Sense Score (5SS), a pre-implant estimate of the
likelihood of focal seizure onset, and assessed their ability to predict the
clinician's choice of therapeutic intervention and the patient outcome.
The spatial dispersion of IEEG electrodes predicted network focality with
precision similar to the 5SS (AUC = 0.67), indicating that electrode placement
accurately reflected pre-implant information. A cross-validated model combining
the 5SS and the spatial dispersion of interictal IEEG abnormalities
significantly improved this prediction (AUC = 0.79; p<0.05). The combined model
predicted ultimate treatment strategy (surgery vs. device) with an AUC of 0.81
and post-surgical outcome at 2 years with an AUC of 0.70. The 5SS, interictal
IEEG, and electrode placement were not correlated and provided complementary
information.
Quantitative, interictal IEEG significantly improved upon pre-implant
estimates of network focality and predicted treatment with precision
approaching that of clinical experts.
| [
{
"created": "Thu, 27 Jul 2023 19:59:45 GMT",
"version": "v1"
}
] | 2023-07-31 | [
[
"Gallagher",
"Ryan S",
""
],
[
"Sinha",
"Nishant",
""
],
[
"Pattnaik",
"Akash R",
""
],
[
"Ojemann",
"William K. S.",
""
],
[
"Lucas",
"Alfredo",
""
],
[
"LaRocque",
"Joshua J.",
""
],
[
"Bernabei",
"John M",
""
],
[
"Greenblatt",
"Adam S",
""
],
[
"Sweeney",
"Elizabeth M",
""
],
[
"Chen",
"H Isaac",
""
],
[
"Davis",
"Kathryn A",
""
],
[
"Conrad",
"Erin C",
""
],
[
"Litt",
"Brian",
""
]
] | Intracranial EEG (IEEG) is used for 2 main purposes, to determine: (1) if epileptic networks are amenable to focal treatment and (2) where to intervene. Currently these questions are answered qualitatively and sometimes differently across centers. There is a need for objective, standardized methods to guide surgical decision making and to enable large scale data analysis across centers and prospective clinical trials. We analyzed interictal data from 101 patients with drug resistant epilepsy who underwent presurgical evaluation with IEEG. We chose interictal data because of its potential to reduce the morbidity and cost associated with ictal recording. 65 patients had unifocal seizure onset on IEEG, and 36 were non-focal or multi-focal. We quantified the spatial dispersion of implanted electrodes and interictal IEEG abnormalities for each patient. We compared these measures against the 5 Sense Score (5SS), a pre-implant estimate of the likelihood of focal seizure onset, and assessed their ability to predict the clinicians choice of therapeutic intervention and the patient outcome. The spatial dispersion of IEEG electrodes predicted network focality with precision similar to the 5SS (AUC = 0.67), indicating that electrode placement accurately reflected pre-implant information. A cross-validated model combining the 5SS and the spatial dispersion of interictal IEEG abnormalities significantly improved this prediction (AUC = 0.79; p<0.05). The combined model predicted ultimate treatment strategy (surgery vs. device) with an AUC of 0.81 and post-surgical outcome at 2 years with an AUC of 0.70. The 5SS, interictal IEEG, and electrode placement were not correlated and provided complementary information. Quantitative, interictal IEEG significantly improved upon pre-implant estimates of network focality and predicted treatment with precision approaching that of clinical experts. |
1609.02981 | Masamichi Sato | Masamichi Sato | Renormalization Group Transformation for Hamiltonian Dynamical Systems
in Biological Networks | null | null | null | null | q-bio.OT cond-mat.dis-nn math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We apply the renormalization group theory to the dynamical systems with the
simplest example of basic biological motifs. This includes the interpretation
of complex networks as the perturbation to simple network. This is the first
step to build our original framework to infer the properties of biological
networks, and the basis work to see its effectiveness to actual complex
systems.
| [
{
"created": "Sat, 10 Sep 2016 00:46:40 GMT",
"version": "v1"
}
] | 2016-09-13 | [
[
"Sato",
"Masamichi",
""
]
] | We apply the renormalization group theory to the dynamical systems with the simplest example of basic biological motifs. This includes the interpretation of complex networks as the perturbation to simple network. This is the first step to build our original framework to infer the properties of biological networks, and the basis work to see its effectiveness to actual complex systems. |
2004.03539 | Cesar Castilho | C\'esar Castilho, Jo\~ao A. M. Gondim, Marcelo Marchesin and Mehran
Sabeti | Assessing the Efficiency of Different Control Strategies for the
Coronavirus (COVID-19) Epidemic | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The goal of this work is to analyse the effects of control policies for the
coronavirus (COVID-19) epidemic in Brazil. This is done by considering an
age-structured SEIR model with a quarantine class and two types of controls.
The first one studies the sensitivity with regard to the parameters of the
basic reproductive number R0 which is calculated by the next generation method.
The second one evaluates different quarantine strategies by comparing their
relative total number of deaths.
| [
{
"created": "Tue, 7 Apr 2020 17:02:00 GMT",
"version": "v1"
}
] | 2020-04-08 | [
[
"Castilho",
"César",
""
],
[
"Gondim",
"João A. M.",
""
],
[
"Marchesin",
"Marcelo",
""
],
[
"Sabeti",
"Mehran",
""
]
] | The goal of this work is to analyse the effects of control policies for the coronavirus (COVID-19) epidemic in Brazil. This is done by considering an age-structured SEIR model with a quarantine class and two types of controls. The first one studies the sensitivity with regard to the parameters of the basic reproductive number R0 which is calculated by the next generation method. The second one evaluates different quarantine strategies by comparing their relative total number of deaths. |
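The control comparison in this record can be illustrated with a toy (non-age-structured) SEIR model extended by a quarantine class. This is a hedged sketch: all rates and the quarantine mechanism below are invented for illustration, not taken from the paper.

```python
def seirq(beta=0.5, sigma=0.2, gamma=0.1, q=0.05, days=200, dt=0.1):
    """Forward-Euler integration of a toy SEIR model with a quarantine class Q:
    susceptibles S are removed into Q at control rate q, E->I at rate sigma,
    I->R at rate gamma. States are fractions of a normalised population."""
    S, E, I, R, Q = 0.99, 0.0, 0.01, 0.0, 0.0
    for _ in range(int(days / dt)):
        new_inf = beta * S * I
        dS = -new_inf - q * S
        dE = new_inf - sigma * E
        dI = sigma * E - gamma * I
        dR = gamma * I
        dQ = q * S
        S += dS * dt; E += dE * dt; I += dI * dt; R += dR * dt; Q += dQ * dt
    return S, E, I, R, Q
```

For this toy model the next-generation calculation reduces to R0 = beta/gamma at q = 0; raising the quarantine rate q shrinks the final epidemic size (here proxied by the recovered fraction R).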
1210.3154 | Subhomoi Borkotoky | Subhomoi Borkotoky | Docking Studies on HIV Integrase Inhibitors Based On Potential Ligand
Binding Sites | 9 pages, 6 tables | International Journal on Bioinformatics & Biosciences, vol. 2, pp.
21-29, 2012 | 10.5121/ijbb.2012.2303 | null | q-bio.BM q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | HIV integrase is a 32 kDa protein produced from the C-terminal portion of the
Pol gene product, and is an attractive target for new anti-HIV drugs. Integrase
is an enzyme produced by a retrovirus (such as HIV) that enables its genetic
material to be integrated into the DNA of the infected cell. Raltegravir and
Elvitegravir are two important drugs against integrase.
| [
{
"created": "Thu, 11 Oct 2012 09:00:25 GMT",
"version": "v1"
}
] | 2012-10-12 | [
[
"Borkotoky",
"Subhomoi",
""
]
] | HIV integrase is a 32 kDa protein produced from the C-terminal portion of the Pol gene product, and is an attractive target for new anti-HIV drugs. Integrase is an enzyme produced by a retrovirus (such as HIV) that enables its genetic material to be integrated into the DNA of the infected cell. Raltegravir and Elvitegravir are two important drugs against integrase. |
2308.01364 | Daniel P\'erez-Palau | Marc Jorba-Cusc\'o (1) and Ruth I. Oliva-Z\'uniga (2) and Josep
Sardany\'es (1) and Daniel P\'erez-Palau (3) ((1) Centre de Recerca
Matem\`atica, (2) Universidad Nacional Aut\'onoma de Honduras, (3)
Universidad Internacional de la Rioja) | Dispersal-enhanced resilience in two-patch metapopulations: origin's
instability type matters | submitted to International Journal of Bifurcation and Chaos | null | 10.1007/s12064-023-00411-2 | null | q-bio.PE math.DS | http://creativecommons.org/licenses/by/4.0/ | Many populations of animals or plants, exhibit a metapopulation structure
with close, spatially-separated subpopulations. The field of metapopulation
theory has made significant advancements since the influential Levins model.
Various modeling approaches have provided valuable insights to theoretical
Ecology. Despite extensive research on metapopulation models, there are still
challenging questions that are difficult to answer from ecological
metapopulational data or multi-patch models. Low-dimension mathematical models
offer a promising avenue to address these questions, especially for global
dynamics which have been scarcely investigated. In this study, we investigate a
two-patch metapopulation model with logistic growth and diffusion between
patches. By using analytical and numerical methods, we thoroughly analyze the
impact of diffusion on the dynamics of the metapopulation. We identify the
equilibrium points and assess their local and global stability. Furthermore, we
analytically derive the optimal diffusion rate that leads to the highest
metapopulation values. Our findings demonstrate that increased diffusion plays
a crucial role in the preservation of both subpopulations and the full
metapopulation, especially under the presence of stochastic perturbations.
Specifically, at low diffusion values, the origin is a repeller, causing orbits
starting around it to travel closely parallel to the axes. This configuration
makes the metapopulation less resilient and thus more susceptible to local and
global extinctions. However, as diffusion increases, the repeller transitions
to a saddle point, and orbits starting near the origin rapidly converge to the
unstable manifold of the saddle. This phenomenon reduces the likelihood of
stochastic extinctions and the metapopulation becomes more resilient due to
these changes in the vector field of the phase space.
| [
{
"created": "Wed, 2 Aug 2023 18:11:00 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Dec 2023 15:15:37 GMT",
"version": "v2"
}
] | 2024-03-08 | [
[
"Jorba-Cuscó",
"Marc",
""
],
[
"Oliva-Zúniga",
"Ruth I.",
""
],
[
"Sardanyés",
"Josep",
""
],
[
"Pérez-Palau",
"Daniel",
""
]
] | Many populations of animals or plants, exhibit a metapopulation structure with close, spatially-separated subpopulations. The field of metapopulation theory has made significant advancements since the influential Levins model. Various modeling approaches have provided valuable insights to theoretical Ecology. Despite extensive research on metapopulation models, there are still challenging questions that are difficult to answer from ecological metapopulational data or multi-patch models. Low-dimension mathematical models offer a promising avenue to address these questions, especially for global dynamics which have been scarcely investigated. In this study, we investigate a two-patch metapopulation model with logistic growth and diffusion between patches. By using analytical and numerical methods, we thoroughly analyze the impact of diffusion on the dynamics of the metapopulation. We identify the equilibrium points and assess their local and global stability. Furthermore, we analytically derive the optimal diffusion rate that leads to the highest metapopulation values. Our findings demonstrate that increased diffusion plays a crucial role in the preservation of both subpopulations and the full metapopulation, especially under the presence of stochastic perturbations. Specifically, at low diffusion values, the origin is a repeller, causing orbits starting around it to travel closely parallel to the axes. This configuration makes the metapopulation less resilient and thus more susceptible to local and global extinctions. However, as diffusion increases, the repeller transitions to a saddle point, and orbits starting near the origin rapidly converge to the unstable manifold of the saddle. This phenomenon reduces the likelihood of stochastic extinctions and the metapopulation becomes more resilient due to these changes in the vector field of the phase space. |
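The deterministic core of such a two-patch model can be sketched in a few lines; the logistic-plus-diffusion form and all parameter values below are illustrative assumptions, not the paper's exact equations.

```python
def two_patch(r1=1.0, r2=0.8, K=1.0, D=0.2, x0=0.05, y0=0.5, T=200.0, dt=0.01):
    """Toy two-patch metapopulation: logistic growth in each patch plus
    linear diffusive coupling D*(other - self). Returns the final (x, y)."""
    x, y = x0, y0
    for _ in range(int(T / dt)):
        dx = r1 * x * (1 - x / K) + D * (y - x)
        dy = r2 * y * (1 - y / K) + D * (x - y)
        x += dx * dt
        y += dy * dt
    return x, y
```

Linearizing at the origin gives the matrix [[r1 - D, D], [D, r2 - D]]: for small D both eigenvalues are positive (repeller), while for D > r1*r2/(r1 + r2) the determinant turns negative and the origin becomes a saddle, the transition in instability type that the abstract describes.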
1703.03030 | Gustav Markkula | Gustav Markkula, Erwin Boer, Richard Romano, Natasha Merat | Sustained sensorimotor control as intermittent decisions about
prediction errors: Computational framework and application to ground vehicle
steering | null | null | 10.1007/s00422-017-0743-9 | null | q-bio.NC cs.CE cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A conceptual and computational framework is proposed for modelling of human
sensorimotor control, and is exemplified for the sensorimotor task of steering
a car. The framework emphasises control intermittency, and extends on existing
models by suggesting that the nervous system implements intermittent control
using a combination of (1) motor primitives, (2) prediction of sensory outcomes
of motor actions, and (3) evidence accumulation of prediction errors. It is
shown that approximate but useful sensory predictions in the intermittent
control context can be constructed without detailed forward models, as a
superposition of simple prediction primitives, resembling neurobiologically
observed corollary discharges. The proposed mathematical framework allows
straightforward extension to intermittent behaviour from existing
one-dimensional continuous models in the linear control and ecological
psychology traditions. Empirical observations from a driving simulator provide
support for some of the framework assumptions: It is shown that human steering
control, in routine lane-keeping and in a demanding near-limit task, is better
described as a sequence of discrete stepwise steering adjustments, than as
continuous control. Furthermore, the amplitudes of individual steering
adjustments are well predicted by a compound visual cue signalling steering
error, and even better so if also adjusting for predictions of how the same cue
is affected by previous control. Finally, evidence accumulation is shown to
explain observed covariability between inter-adjustment durations and
adjustment amplitudes, seemingly better so than the type of threshold
mechanisms that are typically assumed in existing models of intermittent
control.
| [
{
"created": "Wed, 8 Mar 2017 20:56:04 GMT",
"version": "v1"
}
] | 2018-10-31 | [
[
"Markkula",
"Gustav",
""
],
[
"Boer",
"Erwin",
""
],
[
"Romano",
"Richard",
""
],
[
"Merat",
"Natasha",
""
]
] | A conceptual and computational framework is proposed for modelling of human sensorimotor control, and is exemplified for the sensorimotor task of steering a car. The framework emphasises control intermittency, and extends on existing models by suggesting that the nervous system implements intermittent control using a combination of (1) motor primitives, (2) prediction of sensory outcomes of motor actions, and (3) evidence accumulation of prediction errors. It is shown that approximate but useful sensory predictions in the intermittent control context can be constructed without detailed forward models, as a superposition of simple prediction primitives, resembling neurobiologically observed corollary discharges. The proposed mathematical framework allows straightforward extension to intermittent behaviour from existing one-dimensional continuous models in the linear control and ecological psychology traditions. Empirical observations from a driving simulator provide support for some of the framework assumptions: It is shown that human steering control, in routine lane-keeping and in a demanding near-limit task, is better described as a sequence of discrete stepwise steering adjustments, than as continuous control. Furthermore, the amplitudes of individual steering adjustments are well predicted by a compound visual cue signalling steering error, and even better so if also adjusting for predictions of how the same cue is affected by previous control. Finally, evidence accumulation is shown to explain observed covariability between inter-adjustment durations and adjustment amplitudes, seemingly better so than the type of threshold mechanisms that are typically assumed in existing models of intermittent control. |
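The accumulate-then-adjust loop described here can be caricatured in a few lines; the linear accumulator, fixed threshold, and gain below are placeholders, not the paper's fitted model.

```python
def intermittent_control(error_signal, gain=1.0, threshold=1.0, k=0.5):
    """Toy intermittent controller: accumulate the prediction error until the
    accumulator crosses a threshold, then issue one discrete adjustment whose
    amplitude is proportional to the current error, and reset the accumulator."""
    acc = 0.0
    adjustments = []          # (time_index, amplitude) of each stepwise action
    for t, err in enumerate(error_signal):
        acc += gain * err
        if abs(acc) >= threshold:
            adjustments.append((t, k * err))
            acc = 0.0
    return adjustments
```

With a constant error, the interval between adjustments scales inversely with the error magnitude: doubling the error halves the inter-adjustment duration in this sketch.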
q-bio/0402029 | Ilya M. Nemenman | Ilya Nemenman | Fluctuation-dissipation theorem and models of learning | 23 pages, 1 figure; manuscript restructured following reviewers'
suggestions; references added; misprints corrected | Neural Comp. 17 (9): 2006-2033 SEP 2005 | null | NSF-KITP-04-20 | q-bio.NC cs.LG nlin.AO physics.data-an | null | Advances in statistical learning theory have resulted in a multitude of
different designs of learning machines. But which ones are implemented by
brains and other biological information processors? We analyze how various
abstract Bayesian learners perform on different data and argue that it is
difficult to determine which learning-theoretic computation is performed by a
particular organism using just its performance in learning a stationary target
(learning curve). Based on the fluctuation-dissipation relation in statistical
physics, we then discuss a different experimental setup that might be able to
solve the problem.
| [
{
"created": "Thu, 12 Feb 2004 22:36:01 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Oct 2004 16:21:17 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Nemenman",
"Ilya",
""
]
] | Advances in statistical learning theory have resulted in a multitude of different designs of learning machines. But which ones are implemented by brains and other biological information processors? We analyze how various abstract Bayesian learners perform on different data and argue that it is difficult to determine which learning-theoretic computation is performed by a particular organism using just its performance in learning a stationary target (learning curve). Basing on the fluctuation-dissipation relation in statistical physics, we then discuss a different experimental setup that might be able to solve the problem. |
1804.01958 | Alain Goriely | Johannes Weickenmeier, Ellen Kuhl, Alain Goriely | The multiphysics of prion-like diseases: progression and atrophy | 5 pages/5 figures | Phys. Rev. Lett. 121, 158101 (2018) | 10.1103/PhysRevLett.121.158101 | null | q-bio.NC q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many neurodegenerative diseases are related to the propagation and
accumulation of toxic proteins throughout the brain. The lesions created by
aggregates of these toxic proteins further lead to cell death and accelerated
tissue atrophy. A striking feature of some of these diseases is their
characteristic pattern and evolution, leading to well-codified disease stages
visible to neuropathology and associated with various cognitive deficits and
pathologies. Here, we simulate the anisotropic propagation and accumulation of
toxic proteins in full brain geometry. We show that the same model with
different initial seeding zones reproduces the characteristic evolution of
different prion-like diseases. We also recover the expected evolution of the
total toxic protein load. Finally, we couple our transport model to a
mechanical atrophy model to obtain the typical degeneration patterns found in
neurodegenerative diseases.
| [
{
"created": "Thu, 5 Apr 2018 17:12:17 GMT",
"version": "v1"
}
] | 2018-10-17 | [
[
"Weickenmeier",
"Johannes",
""
],
[
"Kuhl",
"Ellen",
""
],
[
"Goriely",
"Alain",
""
]
] | Many neurodegenerative diseases are related to the propagation and accumulation of toxic proteins throughout the brain. The lesions created by aggregates of these toxic proteins further lead to cell death and accelerated tissue atrophy. A striking feature of some of these diseases is their characteristic pattern and evolution, leading to well-codified disease stages visible to neuropathology and associated with various cognitive deficits and pathologies. Here, we simulate the anisotropic propagation and accumulation of toxic proteins in full brain geometry. We show that the same model with different initial seeding zones reproduces the characteristic evolution of different prion-like diseases. We also recover the expected evolution of the total toxic protein load. Finally, we couple our transport model to a mechanical atrophy model to obtain the typical degeneration patterns found in neurodegenerative diseases. |
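The propagation part of such models is often written as reaction-diffusion dynamics on a brain network; below is a minimal Fisher-KPP sketch on a four-node chain, with the graph, seed, and rates all invented for illustration (the mechanical atrophy coupling is omitted).

```python
def spread_on_graph(edges, n_nodes, seed_node, alpha=1.0, kappa=0.3, T=40.0, dt=0.01):
    """Toy Fisher-KPP dynamics on a graph: toxic-protein concentration c_i obeys
    dc_i/dt = -kappa * (L c)_i + alpha * c_i * (1 - c_i), where L is the graph
    Laplacian, starting from a small seed at one node."""
    # build the graph Laplacian L = Degree - Adjacency
    L = [[0.0] * n_nodes for _ in range(n_nodes)]
    for i, j in edges:
        L[i][i] += 1.0; L[j][j] += 1.0
        L[i][j] -= 1.0; L[j][i] -= 1.0
    c = [0.0] * n_nodes
    c[seed_node] = 0.01
    for _ in range(int(T / dt)):
        lap = [sum(L[i][j] * c[j] for j in range(n_nodes)) for i in range(n_nodes)]
        c = [c[i] + dt * (-kappa * lap[i] + alpha * c[i] * (1 - c[i]))
             for i in range(n_nodes)]
    return c
```

Different seed nodes on the same graph produce different arrival orderings, which is the mechanism behind the claim that one model with different seeding zones reproduces different disease patterns.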
1212.2822 | Anirban Banerji | Charudatta Navare, Anirban Banerji | Residue mobility has decreased during protein evolution | 22 pages, 3 figures, 7 tables | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Upon studying the B-Factors of all the atoms of all non-redundant proteins
belonging to 76 most commonly found structural domains of all four major
structural classes, it was found that the residue mobility has decreased during
the course of evolution. Though increased residue-flexibility was preferred in
the early stages of protein structure evolution, less flexibility is preferred
in the medieval and recent stages. GLU is found to be the most flexible residue
while VAL is recorded to have the least flexibility. General trends in decrement
of B-Factors conformed to the general trend in the order of emergence of
protein structural domains. Decrement of B-Factor is observed to be most
decisive (monotonic and uniform) for VAL, while evolution of CYS and LYS
flexibility is found to be most skewed. Barring CYS, flexibility of all the
residues is found to have increased during evolution of alpha by beta folds,
however flexibility of all the residues (barring CYS) is found to have
decreased during evolution of all beta folds. Only in alpha by beta folds the
tendency of preferring higher residue mobility could be observed, neither alpha
plus beta, nor all alpha nor all beta folds were found to support higher
residue-mobility. In all the structural classes, the effect of evolutionary
constraint on polar residues is found to follow an exactly identical trend as
that on hydrophobic residues, only the extent of these effects are found to be
different. Though protein size is found to be decreasing during evolution,
residue mobility of proteins belonging to ancient and old structural domains
showed strong positive dependency upon protein size, however for medieval and
recent domains such dependency vanished. It is found that to optimize residue
fluctuations, alpha by beta class of proteins are subjected to more stringent
evolutionary constraints.
| [
{
"created": "Wed, 12 Dec 2012 14:01:37 GMT",
"version": "v1"
}
] | 2012-12-13 | [
[
"Navare",
"Charudatta",
""
],
[
"Banerji",
"Anirban",
""
]
] | Upon studying the B-Factors of all the atoms of all non-redundant proteins belonging to 76 most commonly found structural domains of all four major structural classes, it was found that the residue mobility has decreased during the course of evolution. Though increased residue-flexibility was preferred in the early stages of protein structure evolution, less flexibility is preferred in the medieval and recent stages. GLU is found to be the most flexible residue while VAL recorded to have the least flexibility. General trends in decrement of B-Factors conformed to the general trend in the order of emergence of protein structural domains. Decrement of B-Factor is observed to be most decisive (monotonic and uniform) for VAL, while evolution of CYS and LYS flexibility is found to be most skewed. Barring CYS, flexibility of all the residues is found to have increased during evolution of alpha by beta folds, however flexibility of all the residues (barring CYS) is found to have decreased during evolution of all beta folds. Only in alpha by beta folds the tendency of preferring higher residue mobility could be observed, neither alpha plus beta, nor all alpha nor all beta folds were found to support higher residue-mobility. In all the structural classes, the effect of evolutionary constraint on polar residues is found to follow an exactly identical trend as that on hydrophobic residues, only the extent of these effects are found to be different. Though protein size is found to be decreasing during evolution, residue mobility of proteins belonging to ancient and old structural domains showed strong positive dependency upon protein size, however for medieval and recent domains such dependency vanished. It is found that to optimize residue fluctuations, alpha by beta class of proteins are subjected to more stringent evolutionary constraints. |
2010.09601 | Carl Nelson | Carl J. Nelson and Stephen Bonner | Neuronal graphs: a graph theory primer for microscopic, functional
networks of neurons recorded by Calcium imaging | 28 pages, 11 figures, 1 table, 2 boxes | null | null | null | q-bio.NC q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Connected networks are a fundamental structure of neurobiology. Understanding
these networks will help us elucidate the neural mechanisms of computation.
Mathematically speaking these networks are `graphs' - structures containing
objects that are connected. In neuroscience, the objects could be regions of
the brain, e.g. fMRI data, or be individual neurons, e.g. calcium imaging with
fluorescence microscopy. The formal study of graphs, graph theory, can provide
neuroscientists with a large bank of algorithms for exploring networks. Graph
theory has already been applied in a variety of ways to fMRI data but, more
recently, has begun to be applied at the scales of neurons, e.g. from
functional calcium imaging. In this primer we explain the basics of graph
theory and relate them to features of microscopic functional networks of
neurons from calcium imaging - neuronal graphs. We explore recent examples of
graph theory applied to calcium imaging and we highlight some areas where
researchers new to the field could go awry.
| [
{
"created": "Mon, 19 Oct 2020 15:33:34 GMT",
"version": "v1"
}
] | 2020-10-20 | [
[
"Nelson",
"Carl J.",
""
],
[
"Bonner",
"Stephen",
""
]
] | Connected networks are a fundamental structure of neurobiology. Understanding these networks will help us elucidate the neural mechanisms of computation. Mathematically speaking these networks are `graphs' - structures containing objects that are connected. In neuroscience, the objects could be regions of the brain, e.g. fMRI data, or be individual neurons, e.g. calcium imaging with fluorescence microscopy. The formal study of graphs, graph theory, can provide neuroscientists with a large bank of algorithms for exploring networks. Graph theory has already been applied in a variety of ways to fMRI data but, more recently, has begun to be applied at the scales of neurons, e.g. from functional calcium imaging. In this primer we explain the basics of graph theory and relate them to features of microscopic functional networks of neurons from calcium imaging - neuronal graphs. We explore recent examples of graph theory applied to calcium imaging and we highlight some areas where researchers new to the field could go awry. |
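Two of the basic measures such a primer covers, node degree and local clustering, can be computed from an edge list in plain Python. A self-contained sketch (real analyses would typically use a library such as NetworkX):

```python
def degree_and_clustering(edges):
    """Degree and local clustering coefficient for an undirected graph given
    as a list of (u, v) edges with integer node labels."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    degree = {n: len(nb) for n, nb in adj.items()}
    clustering = {}
    for n, nb in adj.items():
        k = len(nb)
        if k < 2:
            clustering[n] = 0.0
            continue
        # count edges among the neighbours of n (each pair once)
        links = sum(1 for a in nb for b in nb if a < b and b in adj[a])
        clustering[n] = 2.0 * links / (k * (k - 1))
    return degree, clustering
```

For a triangle 0-1-2 with a pendant node 3 attached to 2, nodes 0 and 1 have clustering 1.0, node 2 has 1/3, and the pendant node has 0.0.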
1307.3628 | Mahashweta Basu | Mahashweta Basu, Nitai P. Bhattacharyya, Pradeep K. Mohanty | Comparison of Modules of Wild Type and Mutant Huntingtin and TP53
Protein Interaction Networks: Implications in Biological Processes and
Functions | 35 pages, 10 eps figures, (Supplementary material and Datasets are
available on request) | PLoS ONE 8(5): e64838 (2013) | 10.1371/journal.pone.0064838 | null | q-bio.MN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Disease-causing mutations usually change the interacting partners of mutant
proteins. In this article, we propose that the biological consequences of
mutation are directly related to the alteration of corresponding protein
protein interaction networks (PPIN). Mutation of Huntingtin (HTT) which causes
Huntington's disease (HD) and mutations to TP53 which is associated with
different cancers are studied as two example cases. We construct the PPIN of
wild type and mutant proteins separately and identify the structural modules of
each of the networks. The functional role of these modules are then assessed by
Gene Ontology (GO) enrichment analysis for biological processes (BPs). We find
that a large number of significantly enriched (p<0.0001) GO terms in mutant
PPIN were absent in the wild type PPIN indicating the gain of BPs due to
mutation. Similarly some of the GO terms enriched in wild type PPIN cease to
exist in the modules of mutant PPIN, representing the loss. GO terms common in
modules of mutant and wild type networks indicate both loss and gain of BPs. We
further assign relevant biological function(s) to each module by classifying
the enriched GO terms associated with it. It turns out that most of these
biological functions in HTT networks are already known to be altered in HD and
those of TP53 networks are altered in cancers. We argue that gain of BPs, and
the corresponding biological functions, are due to new interacting partners
acquired by mutant proteins. The methodology we adopt here could be applied to
genetic diseases where mutations alter the ability of the protein to interact
with other proteins.
| [
{
"created": "Sat, 13 Jul 2013 08:02:25 GMT",
"version": "v1"
}
] | 2015-06-16 | [
[
"Basu",
"Mahashweta",
""
],
[
"Bhattacharyya",
"Nitai P.",
""
],
[
"Mohanty",
"Pradeep K.",
""
]
] | Disease-causing mutations usually change the interacting partners of mutant proteins. In this article, we propose that the biological consequences of mutation are directly related to the alteration of corresponding protein protein interaction networks (PPIN). Mutation of Huntingtin (HTT) which causes Huntington's disease (HD) and mutations to TP53 which is associated with different cancers are studied as two example cases. We construct the PPIN of wild type and mutant proteins separately and identify the structural modules of each of the networks. The functional roles of these modules are then assessed by Gene Ontology (GO) enrichment analysis for biological processes (BPs). We find that a large number of significantly enriched (p<0.0001) GO terms in mutant PPIN were absent in the wild type PPIN indicating the gain of BPs due to mutation. Similarly some of the GO terms enriched in wild type PPIN cease to exist in the modules of mutant PPIN, representing the loss. GO terms common in modules of mutant and wild type networks indicate both loss and gain of BPs. We further assign relevant biological function(s) to each module by classifying the enriched GO terms associated with it. It turns out that most of these biological functions in HTT networks are already known to be altered in HD and those of TP53 networks are altered in cancers. We argue that gain of BPs, and the corresponding biological functions, are due to new interacting partners acquired by mutant proteins. The methodology we adopt here could be applied to genetic diseases where mutations alter the ability of the protein to interact with other proteins. |
2401.03727 | Kohitij Kar | Hamidreza Ramezanpour, Christopher Giverin, and Kohitij Kar | Low-cost, portable, easy-to-use kiosks to facilitate home-cage testing
of non-human primates during vision-based behavioral tasks | Another earlier version available at
https://osf.io/preprints/osf/csdzv | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Non-human primates (NHPs), especially rhesus macaques, have played a
significant role in our current understanding of the neural computations
underlying human vision. Apart from the established homologies in the visual
brain areas between these two species, and our extended abilities to probe
detailed neural mechanisms in monkeys at multiple scales, one major factor that
makes NHPs an extremely appealing animal model of human vision is their ability
to perform human-like visual behavior. Traditionally, such behavioral studies
have been conducted in controlled laboratory settings. Such in-lab studies
offer the experimenter a tight control over many experimental variables like
overall luminance, eye movements (via eye tracking), auditory interference etc.
However, there are several constraints related to such experiments. These
include, 1) limited total experimental time, 2) requirement of dedicated human
experimenters for the NHPs, 3) requirement of additional lab-space for the
experiments, 4) NHPs often need to undergo invasive surgeries for a head-post
implant, 5) additional time and training required for chairing and head
restraints of monkeys. To overcome these limitations, many laboratories are now
adopting home-cage behavioral training and testing of NHPs. Home-cage
behavioral testing enables the administering of many vision-based behavioral
tasks simultaneously across multiple monkeys with much reduced human personnel
requirements, no NHP head restraint, and provides NHPs access to the experiments
without specific time constraints. To enable more open-source development of
this technology, here we provide the details of operating and building a
portable, easy-to-use kiosk for conducting home-cage vision-based behavioral
tasks in NHPs.
| [
{
"created": "Mon, 8 Jan 2024 08:22:05 GMT",
"version": "v1"
}
] | 2024-01-09 | [
[
"Ramezanpour",
"Hamidreza",
""
],
[
"Giverin",
"Christopher",
""
],
[
"Kar",
"Kohitij",
""
]
] | Non-human primates (NHPs), especially rhesus macaques, have played a significant role in our current understanding of the neural computations underlying human vision. Apart from the established homologies in the visual brain areas between these two species, and our extended abilities to probe detailed neural mechanisms in monkeys at multiple scales, one major factor that makes NHPs an extremely appealing animal model of human vision is their ability to perform human-like visual behavior. Traditionally, such behavioral studies have been conducted in controlled laboratory settings. Such in-lab studies offer the experimenter a tight control over many experimental variables like overall luminance, eye movements (via eye tracking), auditory interference etc. However, there are several constraints related to such experiments. These include, 1) limited total experimental time, 2) requirement of dedicated human experimenters for the NHPs, 3) requirement of additional lab-space for the experiments, 4) NHPs often need to undergo invasive surgeries for a head-post implant, 5) additional time and training required for chairing and head restraints of monkeys. To overcome these limitations, many laboratories are now adopting home-cage behavioral training and testing of NHPs. Home-cage behavioral testing enables the administering of many vision-based behavioral tasks simultaneously across multiple monkeys with much reduced human personnel requirements, no NHP head restraint, and provides NHPs access to the experiments without specific time constraints. To enable more open-source development of this technology, here we provide the details of operating and building a portable, easy-to-use kiosk for conducting home-cage vision-based behavioral tasks in NHPs. |
1404.1029 | Johannes Dr\"oge | J. Dr\"oge, I. Gregor and A. C. McHardy | Taxator-tk: Fast and Precise Taxonomic Assignment of Metagenomes by
Approximating Evolutionary Neighborhoods | 18 pages, 5 figures, 24 supplementary figures | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metagenomics characterizes microbial communities by random shotgun sequencing
of DNA isolated directly from an environment of interest. An essential step in
computational metagenome analysis is taxonomic sequence assignment, which
allows us to identify the sequenced community members and to reconstruct
taxonomic bins with sequence data for the individual taxa. We describe an
algorithm and the accompanying software, taxator-tk, which performs taxonomic
sequence assignments by fast approximate determination of evolutionary
neighbors from sequence similarities. Taxator-tk was precise in its taxonomic
assignment across all ranks and taxa for a range of evolutionary distances and
for short sequences. In addition to the taxonomic binning of metagenomes, it is
well suited for profiling microbial communities from metagenome samples
because it identifies bacterial, archaeal and eukaryotic community members
without being affected by varying primer binding strengths, as in marker gene
amplification, or copy number variations of marker genes across different taxa.
Taxator-tk has an efficient, parallelized implementation that allows the
assignment of 6 Gb of sequence data per day on a standard multiprocessor system
with ten CPU cores and microbial RefSeq as the genomic reference data.
| [
{
"created": "Thu, 3 Apr 2014 18:14:35 GMT",
"version": "v1"
}
] | 2014-04-04 | [
[
"Dröge",
"J.",
""
],
[
"Gregor",
"I.",
""
],
[
"McHardy",
"A. C.",
""
]
] | Metagenomics characterizes microbial communities by random shotgun sequencing of DNA isolated directly from an environment of interest. An essential step in computational metagenome analysis is taxonomic sequence assignment, which allows us to identify the sequenced community members and to reconstruct taxonomic bins with sequence data for the individual taxa. We describe an algorithm and the accompanying software, taxator-tk, which performs taxonomic sequence assignments by fast approximate determination of evolutionary neighbors from sequence similarities. Taxator-tk was precise in its taxonomic assignment across all ranks and taxa for a range of evolutionary distances and for short sequences. In addition to the taxonomic binning of metagenomes, it is well suited for profiling microbial communities from metagenome samples because it identifies bacterial, archaeal and eukaryotic community members without being affected by varying primer binding strengths, as in marker gene amplification, or copy number variations of marker genes across different taxa. Taxator-tk has an efficient, parallelized implementation that allows the assignment of 6 Gb of sequence data per day on a standard multiprocessor system with ten CPU cores and microbial RefSeq as the genomic reference data. |
2007.04921 | Andrew White | Zhiheng Li, Geemi P. Wellawatte, Maghesree Chakraborty, Heta A.
Gandhi, Chenliang Xu, Andrew D. White | Graph Neural Network Based Coarse-Grained Mapping Prediction | null | null | 10.1039/D0SC02458A | null | q-bio.QM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The selection of coarse-grained (CG) mapping operators is a critical step for
CG molecular dynamics (MD) simulation. What constitutes an optimal choice remains
an open question, and there is a need for theory. The current state-of-the-art
method is manual selection of mapping operators by experts. In
this work, we demonstrate an automated approach by viewing this problem as
supervised learning where we seek to reproduce the mapping operators produced
by experts. We present a graph neural network based CG mapping predictor called
DEEP SUPERVISED GRAPH PARTITIONING MODEL (DSGPM) that treats mapping operators
as a graph segmentation problem. DSGPM is trained on a novel dataset,
Human-annotated Mappings (HAM), consisting of 1,206 molecules with expert
annotated mapping operators. HAM can be used to facilitate further research in
this area. Our model uses a novel metric learning objective to produce
high-quality atomic features that are used in spectral clustering. The results
show that the DSGPM outperforms state-of-the-art methods in the field of graph
segmentation. Finally, we find that predicted CG mapping operators indeed
result in good CG MD models when used in simulation.
| [
{
"created": "Wed, 24 Jun 2020 15:05:39 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Jul 2020 20:05:13 GMT",
"version": "v2"
},
{
"created": "Thu, 19 Aug 2021 16:45:31 GMT",
"version": "v3"
}
] | 2022-04-05 | [
[
"Li",
"Zhiheng",
""
],
[
"Wellawatte",
"Geemi P.",
""
],
[
"Chakraborty",
"Maghesree",
""
],
[
"Gandhi",
"Heta A.",
""
],
[
"Xu",
"Chenliang",
""
],
[
"White",
"Andrew D.",
""
]
] | The selection of coarse-grained (CG) mapping operators is a critical step for CG molecular dynamics (MD) simulation. What constitutes an optimal choice remains an open question, and there is a need for theory. The current state-of-the-art method is manual selection of mapping operators by experts. In this work, we demonstrate an automated approach by viewing this problem as supervised learning where we seek to reproduce the mapping operators produced by experts. We present a graph neural network based CG mapping predictor called DEEP SUPERVISED GRAPH PARTITIONING MODEL (DSGPM) that treats mapping operators as a graph segmentation problem. DSGPM is trained on a novel dataset, Human-annotated Mappings (HAM), consisting of 1,206 molecules with expert annotated mapping operators. HAM can be used to facilitate further research in this area. Our model uses a novel metric learning objective to produce high-quality atomic features that are used in spectral clustering. The results show that the DSGPM outperforms state-of-the-art methods in the field of graph segmentation. Finally, we find that predicted CG mapping operators indeed result in good CG MD models when used in simulation. |
1605.08373 | Ulrich S. Schwarz | Jerome R. Soine, Nils Hersch, Georg Dreissen, Nico Hampe, Bernd
Hoffmann, Rudolf Merkel and Ulrich S. Schwarz | Measuring cellular traction forces on non-planar substrates | 34 pages, 9 figures | null | 10.1098/rsfs.2016.0024 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Animal cells use traction forces to sense the mechanics and geometry of their
environment. Measuring these traction forces requires a workflow combining cell
experiments, image processing and force reconstruction based on elasticity
theory. Such procedures have been established before mainly for planar
substrates, in which case one can use the Green's function formalism. Here we
introduce a workflow to measure traction forces of cardiac myofibroblasts on
non-planar elastic substrates. Soft elastic substrates with a wave-like
topology were micromolded from polydimethylsiloxane (PDMS) and fluorescent
marker beads were distributed homogeneously in the substrate. Using feature
vector based tracking of these marker beads, we first constructed a hexahedral
mesh for the substrate. We then solved the direct elastic boundary volume
problem on this mesh using the finite element method (FEM). Using data
simulations, we show that the traction forces can be reconstructed from the
substrate deformations by solving the corresponding inverse problem with an
L1-norm for the residue and an L2-norm for 0th-order Tikhonov regularization.
Applying this procedure to the experimental data, we find that cardiac
myofibroblast cells tend to align both their shapes and their forces with the
long axis of the deformable wavy substrate.
| [
{
"created": "Thu, 26 May 2016 17:30:05 GMT",
"version": "v1"
}
] | 2016-08-24 | [
[
"Soine",
"Jerome R.",
""
],
[
"Hersch",
"Nils",
""
],
[
"Dreissen",
"Georg",
""
],
[
"Hampe",
"Nico",
""
],
[
"Hoffmann",
"Bernd",
""
],
[
"Merkel",
"Rudolf",
""
],
[
"Schwarz",
"Ulrich S.",
""
]
] | Animal cells use traction forces to sense the mechanics and geometry of their environment. Measuring these traction forces requires a workflow combining cell experiments, image processing and force reconstruction based on elasticity theory. Such procedures have been established before mainly for planar substrates, in which case one can use the Green's function formalism. Here we introduce a workflow to measure traction forces of cardiac myofibroblasts on non-planar elastic substrates. Soft elastic substrates with a wave-like topology were micromolded from polydimethylsiloxane (PDMS) and fluorescent marker beads were distributed homogeneously in the substrate. Using feature vector based tracking of these marker beads, we first constructed a hexahedral mesh for the substrate. We then solved the direct elastic boundary volume problem on this mesh using the finite element method (FEM). Using data simulations, we show that the traction forces can be reconstructed from the substrate deformations by solving the corresponding inverse problem with an L1-norm for the residue and an L2-norm for 0th-order Tikhonov regularization. Applying this procedure to the experimental data, we find that cardiac myofibroblast cells tend to align both their shapes and their forces with the long axis of the deformable wavy substrate. |
1911.00185 | Ziru Liu | Guangyan Zhang, Ziru Liu, Jichen Dai, Zilan Yu, Shuai Liu, and Wen
Zhang | ItLnc-BXE: a Bagging-XGBoost-ensemble method with multiple features for
identification of plant lncRNAs | 7 pages, 3 figures, 4 tables | null | null | null | q-bio.GN cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Since long non-coding RNAs (lncRNAs) are involved in a wide
range of functions in cellular and developmental processes, an increasing
number of methods have been proposed for distinguishing lncRNAs from coding
RNAs. However, most of the existing methods are designed for lncRNAs in animal
systems, and only a few methods focus on plant lncRNA identification.
Different from lncRNAs in animal systems, plant lncRNAs have distinct
characteristics. It is desirable to develop a computational method for accurate
and robust identification of plant lncRNAs. Results: Herein, we present a plant
lncRNA identification method ItLnc-BXE, which utilizes multiple features and
the ensemble learning strategy. First, a diversity of lncRNA features is
collected and filtered by feature selection to represent RNA transcripts. Then,
several base learners are trained and further combined into a single
meta-learner by ensemble learning, and thus an ItLnc-BXE model is constructed.
ItLnc-BXE models are evaluated on datasets of six plant species; the results
show that ItLnc-BXE outperforms other state-of-the-art plant lncRNA
identification methods, achieving better and more robust performance (AUC>95.91%).
We also perform some experiments about cross-species lncRNA identification, and
the results indicate that dicots-based and monocots-based models can be used to
accurately identify lncRNAs in lower plant species, such as mosses and algae.
Availability: source codes are available at
https://github.com/BioMedicalBigDataMiningLab/ItLnc-BXE. Contact:
zhangwen@mail.hzau.edu.cn (or) zhangwen@whu.edu.cn Supplementary information:
Supplementary data are available at Bioinformatics online.
| [
{
"created": "Fri, 1 Nov 2019 02:28:19 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Jan 2020 09:04:28 GMT",
"version": "v2"
}
] | 2020-01-27 | [
[
"Zhang",
"Guangyan",
""
],
[
"Liu",
"Ziru",
""
],
[
"Dai",
"Jichen",
""
],
[
"Yu",
"Zilan",
""
],
[
"Liu",
"Shuai",
""
],
[
"Zhang",
"Wen",
""
]
] | Motivation: Since long non-coding RNAs (lncRNAs) are involved in a wide range of functions in cellular and developmental processes, an increasing number of methods have been proposed for distinguishing lncRNAs from coding RNAs. However, most of the existing methods are designed for lncRNAs in animal systems, and only a few methods focus on plant lncRNA identification. Different from lncRNAs in animal systems, plant lncRNAs have distinct characteristics. It is desirable to develop a computational method for accurate and robust identification of plant lncRNAs. Results: Herein, we present a plant lncRNA identification method ItLnc-BXE, which utilizes multiple features and the ensemble learning strategy. First, a diversity of lncRNA features is collected and filtered by feature selection to represent RNA transcripts. Then, several base learners are trained and further combined into a single meta-learner by ensemble learning, and thus an ItLnc-BXE model is constructed. ItLnc-BXE models are evaluated on datasets of six plant species; the results show that ItLnc-BXE outperforms other state-of-the-art plant lncRNA identification methods, achieving better and more robust performance (AUC>95.91%). We also perform some experiments about cross-species lncRNA identification, and the results indicate that dicots-based and monocots-based models can be used to accurately identify lncRNAs in lower plant species, such as mosses and algae. Availability: source codes are available at https://github.com/BioMedicalBigDataMiningLab/ItLnc-BXE. Contact: zhangwen@mail.hzau.edu.cn (or) zhangwen@whu.edu.cn Supplementary information: Supplementary data are available at Bioinformatics online. |
2305.00009 | Christopher Marcotte | Christopher D. Marcotte, Matthew J. Hoffman, Flavio H. Fenton,
Elizabeth M. Cherry | Reconstructing Cardiac Electrical Excitations from Optical Mapping
Recordings | main text: 18 pages, 10 figures; supplement: 5 pages, 9 figures, 2
movies | null | null | null | q-bio.QM cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reconstruction of electrical excitation patterns through the unobserved
depth of the tissue is essential to realizing the potential of computational
models in cardiac medicine. We have utilized experimental optical-mapping
recordings of cardiac electrical excitation on the epicardial and endocardial
surfaces of a canine ventricle as observations directing a local ensemble
transform Kalman Filter (LETKF) data assimilation scheme. We demonstrate that
the inclusion of explicit information about the stimulation protocol can
marginally improve the confidence of the ensemble reconstruction and the
reliability of the assimilation over time. Likewise, we consider the efficacy
of stochastic modeling additions to the assimilation scheme in the context of
experimentally derived observation sets. Approximation error is addressed at
both the observation and modeling stages, through the uncertainty of
observations and the specification of the model used in the assimilation
ensemble. We find that perturbative modifications to the observations have
marginal to deleterious effects on the accuracy and robustness of the state
reconstruction. Further, we find that incorporating additional information from
the observations into the model itself (in the case of stimulus and stochastic
currents) has a marginal improvement on the reconstruction accuracy over a
fully autonomous model, while complicating the model itself and thus
introducing potential for new types of model error. That the inclusion of
explicit modeling information has negligible to negative effects on the
reconstruction implies the need for new avenues for optimization of data
assimilation schemes applied to cardiac electrical excitation.
| [
{
"created": "Fri, 28 Apr 2023 12:53:19 GMT",
"version": "v1"
},
{
"created": "Mon, 3 Jul 2023 10:39:48 GMT",
"version": "v2"
},
{
"created": "Tue, 5 Sep 2023 11:16:18 GMT",
"version": "v3"
}
] | 2023-09-06 | [
[
"Marcotte",
"Christopher D.",
""
],
[
"Hoffman",
"Matthew J.",
""
],
[
"Fenton",
"Flavio H.",
""
],
[
"Cherry",
"Elizabeth M.",
""
]
] | The reconstruction of electrical excitation patterns through the unobserved depth of the tissue is essential to realizing the potential of computational models in cardiac medicine. We have utilized experimental optical-mapping recordings of cardiac electrical excitation on the epicardial and endocardial surfaces of a canine ventricle as observations directing a local ensemble transform Kalman Filter (LETKF) data assimilation scheme. We demonstrate that the inclusion of explicit information about the stimulation protocol can marginally improve the confidence of the ensemble reconstruction and the reliability of the assimilation over time. Likewise, we consider the efficacy of stochastic modeling additions to the assimilation scheme in the context of experimentally derived observation sets. Approximation error is addressed at both the observation and modeling stages, through the uncertainty of observations and the specification of the model used in the assimilation ensemble. We find that perturbative modifications to the observations have marginal to deleterious effects on the accuracy and robustness of the state reconstruction. Further, we find that incorporating additional information from the observations into the model itself (in the case of stimulus and stochastic currents) has a marginal improvement on the reconstruction accuracy over a fully autonomous model, while complicating the model itself and thus introducing potential for new types of model error. That the inclusion of explicit modeling information has negligible to negative effects on the reconstruction implies the need for new avenues for optimization of data assimilation schemes applied to cardiac electrical excitation. |
2310.08179 | Katharina Waury | Katharina Waury | Decision Tree for Protein Biomarker Selection for Clinical Applications | null | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-sa/4.0/ | Discovery of novel protein biomarkers for clinical applications is an active
research field across a manifold of diseases. Despite some successes and
progress, the biomarker development pipeline still frequently ends in failure
as biomarker candidates cannot be validated or translated to immunoassays.
Selection of strong disease biomarker candidates that further constitute
suitable targets for antibody binding in immunoassays is thus important. This
essential selection step can be supported and rationalized using bioinformatics
tools such as protein databases. Here, I present a workflow in the form of
decision trees to computationally investigate biomarker candidates and their
available affinity reagents in depth. This analysis can identify the most
promising biomarker candidates for assay development while requiring minimal
time and effort.
| [
{
"created": "Thu, 12 Oct 2023 10:11:02 GMT",
"version": "v1"
}
] | 2023-10-13 | [
[
"Waury",
"Katharina",
""
]
] | Discovery of novel protein biomarkers for clinical applications is an active research field across a manifold of diseases. Despite some successes and progress, the biomarker development pipeline still frequently ends in failure as biomarker candidates cannot be validated or translated to immunoassays. Selection of strong disease biomarker candidates that further constitute suitable targets for antibody binding in immunoassays is thus important. This essential selection step can be supported and rationalized using bioinformatics tools such as protein databases. Here, I present a workflow in the form of decision trees to computationally investigate biomarker candidates and their available affinity reagents in depth. This analysis can identify the most promising biomarker candidates for assay development while requiring minimal time and effort. |
1305.2132 | Donald Forsdyke Dr. | Donald R. Forsdyke | Role of HIV RNA structure in recombination and speciation: romping in
purine A, keeps HTLV away | Initially submitted to the Journal of Theoretical Biology 27th
November 2012 | Microbes and Infection (2014) 16, 96-103 | 10.1016/j.micinf.2013.10.017 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extreme enrichment of the human immunodeficiency virus (HIV-1) RNA genome for
the purine A parallels the mild purine-loading of the RNAs of most organisms.
This should militate against loop-loop "kissing" interactions between the
structured viral genome and structured host RNAs, which can generate segments
of double-stranded RNA sufficient to trigger intracellular alarms. However,
human T cell leukaemia virus (HTLV-1), with the potential to invade the same
host cell, shows extreme enrichment for the pyrimidine C. Assuming the low GC%
HIV and the high GC% HTLV-1 to share a common ancestor, it was postulated that
differences in GC% arose to prevent homologous recombination between these
emerging lentiviral species. Sympatrically isolated by this intracellular
reproductive barrier, prototypic HIV-1 seized the AU-rich (low GC%) high ground
(thus committing to purine A rather than purine G). Prototypic HTLV-1 forwent
this advantage and evolved an independent evolutionary strategy. Evidence
supporting this hypothesis since its elaboration in the 1990s is growing. The
conflict between the needs to encode accurately both a protein, and nucleic
acid structure, is often resolved in favour of the nucleic acid because, apart
from regulatory roles, structure is critical for recombination. However, above
a sequence difference threshold, structure (and hence recombination) is
impaired. New species can then arise.
| [
{
"created": "Thu, 9 May 2013 16:11:04 GMT",
"version": "v1"
}
] | 2014-06-05 | [
[
"Forsdyke",
"Donald R.",
""
]
] | Extreme enrichment of the human immunodeficiency virus (HIV-1) RNA genome for the purine A parallels the mild purine-loading of the RNAs of most organisms. This should militate against loop-loop "kissing" interactions between the structured viral genome and structured host RNAs, which can generate segments of double-stranded RNA sufficient to trigger intracellular alarms. However, human T cell leukaemia virus (HTLV-1), with the potential to invade the same host cell, shows extreme enrichment for the pyrimidine C. Assuming the low GC% HIV and the high GC% HTLV-1 to share a common ancestor, it was postulated that differences in GC% arose to prevent homologous recombination between these emerging lentiviral species. Sympatrically isolated by this intracellular reproductive barrier, prototypic HIV-1 seized the AU-rich (low GC%) high ground (thus committing to purine A rather than purine G). Prototypic HTLV-1 forwent this advantage and evolved an independent evolutionary strategy. Evidence supporting this hypothesis since its elaboration in the 1990s is growing. The conflict between the needs to encode accurately both a protein, and nucleic acid structure, is often resolved in favour of the nucleic acid because, apart from regulatory roles, structure is critical for recombination. However, above a sequence difference threshold, structure (and hence recombination) is impaired. New species can then arise. |
1801.00065 | Alvaro Ulloa Cerna | Alvaro Ulloa, Anna Basile, Gregory J. Wehner, Linyuan Jing, Marylyn D.
Ritchie, Brett Beaulieu-Jones, Christopher M. Haggerty, Brandon K. Fornwalt | An Unsupervised Homogenization Pipeline for Clustering Similar Patients
using Electronic Health Record Data | conference | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electronic health records (EHR) contain a large variety of information on the
clinical history of patients such as vital signs, demographics, diagnostic
codes and imaging data. The enormous potential for discovery in this rich
dataset is hampered by its complexity and heterogeneity.
We present the first study to assess unsupervised homogenization pipelines
designed for EHR clustering. To identify the optimal pipeline, we tested
accuracy on simulated data with varying amounts of redundancy, heterogeneity,
and missingness. We identified two optimal pipelines: 1) Multiple Imputation by
Chained Equations (MICE) combined with Local Linear Embedding; and 2) MICE,
Z-scoring, and Deep Autoencoders.
| [
{
"created": "Sat, 30 Dec 2017 01:06:14 GMT",
"version": "v1"
},
{
"created": "Wed, 21 Mar 2018 14:46:41 GMT",
"version": "v2"
}
] | 2018-03-22 | [
[
"Ulloa",
"Alvaro",
""
],
[
"Basile",
"Anna",
""
],
[
"Wehner",
"Gregory J.",
""
],
[
"Jing",
"Linyuan",
""
],
[
"Ritchie",
"Marylyn D.",
""
],
[
"Beaulieu-Jones",
"Brett",
""
],
[
"Haggerty",
"Christopher M.",
""
],
[
"Fornwalt",
"Brandon K.",
""
]
] | Electronic health records (EHR) contain a large variety of information on the clinical history of patients such as vital signs, demographics, diagnostic codes and imaging data. The enormous potential for discovery in this rich dataset is hampered by its complexity and heterogeneity. We present the first study to assess unsupervised homogenization pipelines designed for EHR clustering. To identify the optimal pipeline, we tested accuracy on simulated data with varying amounts of redundancy, heterogeneity, and missingness. We identified two optimal pipelines: 1) Multiple Imputation by Chained Equations (MICE) combined with Local Linear Embedding; and 2) MICE, Z-scoring, and Deep Autoencoders. |
q-bio/0502001 | Caterina Guiot | Caterina Guiot, Pier Paolo Delsanto, Alberto Carpinteri, Nicola Pugno,
Yuri Mansury, and Thomas S. Deisboeck | The Dynamic Evolution of the Power Exponent in a Universal Growth Model
of Tumors | null | null | null | null | q-bio.QM q-bio.TO | null | We have previously reported that a universal growth law, as proposed by West
and collaborators for all living organisms, appears to be able to describe the
growth of tumors in vivo as well. In contrast to the assumption of a fixed power
exponent p (assumed by West et al. to be equal to 3/4), we show in this paper
the dynamic evolution of p from 2/3 to 1, using experimental data from the
cancer literature and in analogy with results obtained by applying scaling laws
to the study of fragmentation of solids. The dynamic behaviour of p is related
to the evolution of the fractal topology of neoplastic vascular systems and
might be applied for diagnostic purposes to mark the emergence of a
functionally sufficient (or effective) neo-angiogenetic structure.
| [
{
"created": "Tue, 1 Feb 2005 07:42:42 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Guiot",
"Caterina",
""
],
[
"Delsanto",
"Pier Paolo",
""
],
[
"Carpinteri",
"Alberto",
""
],
[
"Pugno",
"Nicola",
""
],
[
"Mansury",
"Yuri",
""
],
[
"Deisboeck",
"Thomas S.",
""
]
] | We have previously reported that a universal growth law, as proposed by West and collaborators for all living organisms, appears to be able to describe the growth of tumors in vivo as well. In contrast to the assumption of a fixed power exponent p (assumed by West et al. to be equal to 3/4), we show in this paper the dynamic evolution of p from 2/3 to 1, using experimental data from the cancer literature and in analogy with results obtained by applying scaling laws to the study of fragmentation of solids. The dynamic behaviour of p is related to the evolution of the fractal topology of neoplastic vascular systems and might be applied for diagnostic purposes to mark the emergence of a functionally sufficient (or effective) neo-angiogenetic structure.
1604.06938 | Tatsuya Sasaki | Tatsuya Sasaki, Isamu Okada, Yutaka Nakai | Indirect reciprocity can overcome free-rider problems on costly moral
assessment | 19 pages (incl. supplementary materials), 1 table, and 1 figure | Biology Letters 12: 20160341 (published 6 July 2016) | 10.1098/rsbl.2016.0341 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Indirect reciprocity is one of the major mechanisms of the evolution of
cooperation. Because constant monitoring and accurate evaluation in moral
assessments tend to be costly, indirect reciprocity can be exploited by cost
evaders. A recent study crucially showed that a cooperative state achieved by
indirect reciprocators is easily destabilized by cost evaders in the case with
no supportive mechanism. Here, we present a simple and widely applicable
solution that considers pre-assessment of cost evaders. In the pre-assessment,
those who fail to pay for costly assessment systems are assigned a nasty image
that leads to being rejected by discriminators. We demonstrate that considering
the pre-assessment can crucially stabilize reciprocal cooperation for a broad
range of indirect reciprocity models. In particular, for the leading social
norms, we analyse the conditions under which a prosocial state becomes locally
stable.
| [
{
"created": "Sat, 23 Apr 2016 19:11:41 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Jul 2016 11:34:03 GMT",
"version": "v2"
}
] | 2016-07-07 | [
[
"Sasaki",
"Tatsuya",
""
],
[
"Okada",
"Isamu",
""
],
[
"Nakai",
"Yutaka",
""
]
] | Indirect reciprocity is one of the major mechanisms of the evolution of cooperation. Because constant monitoring and accurate evaluation in moral assessments tend to be costly, indirect reciprocity can be exploited by cost evaders. A recent study crucially showed that a cooperative state achieved by indirect reciprocators is easily destabilized by cost evaders in the case with no supportive mechanism. Here, we present a simple and widely applicable solution that considers pre-assessment of cost evaders. In the pre-assessment, those who fail to pay for costly assessment systems are assigned a nasty image that leads to being rejected by discriminators. We demonstrate that considering the pre-assessment can crucially stabilize reciprocal cooperation for a broad range of indirect reciprocity models. In particular, for the leading social norms, we analyse the conditions under which a prosocial state becomes locally stable.
2302.09076 | Christopher Overton | Christopher E. Overton, Sam Abbott, Rachel Christie, Fergus Cumming,
Julie Day, Owen Jones, Rob Paton, Charlie Turner and Thomas Ward | Nowcasting the 2022 mpox outbreak in England | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | In May 2022, a cluster of mpox cases were detected in the UK that could not
be traced to recent travel history from an endemic region. Over the coming
months, the outbreak grew, with over 3000 total cases reported in the UK, and
similar outbreaks occurring worldwide. These outbreaks appeared linked to
sexual contact networks between gay, bisexual and other men who have sex with
men. Following the COVID-19 pandemic, local health systems were strained, and
therefore effective surveillance for mpox was essential for managing public
health policy. However, the mpox outbreak in the UK was characterised by
substantial delays in the reporting of the symptom onset date and specimen
collection date for confirmed positive cases. These delays led to substantial
backfilling in the epidemic curve, making it challenging to interpret the
epidemic trajectory in real-time. Many nowcasting models exist to tackle this
challenge in epidemiological data, but these lacked sufficient flexibility. We
have developed a novel nowcasting model using generalised additive models to
correct the mpox epidemic curve in England, and provide real-time
characteristics of the state of the epidemic, including the real-time growth
rate. This model benefited from close collaboration with individuals involved
in collecting and processing the data, enabling temporal changes in the
reporting structure to be built into the model, which improved the robustness
of the nowcasts generated.
| [
{
"created": "Fri, 17 Feb 2023 16:09:59 GMT",
"version": "v1"
}
] | 2023-02-21 | [
[
"Overton",
"Christopher E.",
""
],
[
"Abbott",
"Sam",
""
],
[
"Christie",
"Rachel",
""
],
[
"Cumming",
"Fergus",
""
],
[
"Day",
"Julie",
""
],
[
"Jones",
"Owen",
""
],
[
"Paton",
"Rob",
""
],
[
"Turner",
"Charlie",
""
],
[
"Ward",
"Thomas",
""
]
] | In May 2022, a cluster of mpox cases were detected in the UK that could not be traced to recent travel history from an endemic region. Over the coming months, the outbreak grew, with over 3000 total cases reported in the UK, and similar outbreaks occurring worldwide. These outbreaks appeared linked to sexual contact networks between gay, bisexual and other men who have sex with men. Following the COVID-19 pandemic, local health systems were strained, and therefore effective surveillance for mpox was essential for managing public health policy. However, the mpox outbreak in the UK was characterised by substantial delays in the reporting of the symptom onset date and specimen collection date for confirmed positive cases. These delays led to substantial backfilling in the epidemic curve, making it challenging to interpret the epidemic trajectory in real-time. Many nowcasting models exist to tackle this challenge in epidemiological data, but these lacked sufficient flexibility. We have developed a novel nowcasting model using generalised additive models to correct the mpox epidemic curve in England, and provide real-time characteristics of the state of the epidemic, including the real-time growth rate. This model benefited from close collaboration with individuals involved in collecting and processing the data, enabling temporal changes in the reporting structure to be built into the model, which improved the robustness of the nowcasts generated. |
1507.00125 | Stephanie Elizabeth Palmer | Jared Salisbury, Stephanie E. Palmer | Optimal prediction and natural scene statistics in the retina | 10 pages, 2 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Almost all neural computations involve making predictions. Whether an
organism is trying to catch prey, avoid predators, or simply move through a
complex environment, the data it collects through its senses can guide its
actions only to the extent that it can extract from these data information
about the future state of the world. An essential aspect of the problem in all
these forms is that not all features of the past carry predictive power. Since
there are costs associated with representing and transmitting information, a
natural hypothesis is that sensory systems have developed coding strategies
that are optimized to minimize these costs, keeping only a limited number of
bits of information about the past and ensuring that these bits are maximally
informative about the future. Another important feature of the prediction
problem is that the physics of the world is diverse enough to contain a wide
range of possible statistical ensembles, yet not all motion is probable. Thus,
the brain might not be a generalized predictive machine; it might have evolved
to specifically solve the prediction problems most common in the natural
environment. This paper reviews recent results on predictive coding and optimal
predictive information in the retina and suggests approaches for quantifying
prediction in response to natural motion.
| [
{
"created": "Wed, 1 Jul 2015 06:59:14 GMT",
"version": "v1"
}
] | 2015-07-02 | [
[
"Salisbury",
"Jared",
""
],
[
"Palmer",
"Stephanie E.",
""
]
] | Almost all neural computations involve making predictions. Whether an organism is trying to catch prey, avoid predators, or simply move through a complex environment, the data it collects through its senses can guide its actions only to the extent that it can extract from these data information about the future state of the world. An essential aspect of the problem in all these forms is that not all features of the past carry predictive power. Since there are costs associated with representing and transmitting information, a natural hypothesis is that sensory systems have developed coding strategies that are optimized to minimize these costs, keeping only a limited number of bits of information about the past and ensuring that these bits are maximally informative about the future. Another important feature of the prediction problem is that the physics of the world is diverse enough to contain a wide range of possible statistical ensembles, yet not all motion is probable. Thus, the brain might not be a generalized predictive machine; it might have evolved to specifically solve the prediction problems most common in the natural environment. This paper reviews recent results on predictive coding and optimal predictive information in the retina and suggests approaches for quantifying prediction in response to natural motion. |
2003.12128 | Ruben van Bergen | Ruben S. van Bergen, Nikolaus Kriegeskorte | Going in circles is the way forward: the role of recurrence in visual
inference | null | null | 10.1016/j.conb.2020.11.009 | null | q-bio.NC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biological visual systems exhibit abundant recurrent connectivity.
State-of-the-art neural network models for visual recognition, by contrast,
rely heavily or exclusively on feedforward computation. Any finite-time
recurrent neural network (RNN) can be unrolled along time to yield an
equivalent feedforward neural network (FNN). This important insight suggests
that computational neuroscientists may not need to engage recurrent
computation, and that computer-vision engineers may be limiting themselves to a
special case of FNN if they build recurrent models. Here we argue, to the
contrary, that FNNs are a special case of RNNs and that computational
neuroscientists and engineers should engage recurrence to understand how brains
and machines can (1) achieve greater and more flexible computational depth, (2)
compress complex computations into limited hardware, (3) integrate priors and
priorities into visual inference through expectation and attention, (4) exploit
sequential dependencies in their data for better inference and prediction, and
(5) leverage the power of iterative computation.
| [
{
"created": "Thu, 26 Mar 2020 19:53:05 GMT",
"version": "v1"
},
{
"created": "Thu, 6 Aug 2020 19:27:18 GMT",
"version": "v2"
},
{
"created": "Mon, 16 Nov 2020 13:33:25 GMT",
"version": "v3"
}
] | 2020-12-09 | [
[
"van Bergen",
"Ruben S.",
""
],
[
"Kriegeskorte",
"Nikolaus",
""
]
] | Biological visual systems exhibit abundant recurrent connectivity. State-of-the-art neural network models for visual recognition, by contrast, rely heavily or exclusively on feedforward computation. Any finite-time recurrent neural network (RNN) can be unrolled along time to yield an equivalent feedforward neural network (FNN). This important insight suggests that computational neuroscientists may not need to engage recurrent computation, and that computer-vision engineers may be limiting themselves to a special case of FNN if they build recurrent models. Here we argue, to the contrary, that FNNs are a special case of RNNs and that computational neuroscientists and engineers should engage recurrence to understand how brains and machines can (1) achieve greater and more flexible computational depth, (2) compress complex computations into limited hardware, (3) integrate priors and priorities into visual inference through expectation and attention, (4) exploit sequential dependencies in their data for better inference and prediction, and (5) leverage the power of iterative computation. |
1304.3796 | Peter Csermely | Gabor I. Simko, Peter Csermely | Nodes having a major influence to break cooperation define a novel
centrality measure: game centrality | 18 pages, 2 figures, 3 Tables + a supplement containing 8 pages, 1
figure, 2 Tables and the pseudo-code of the algorithm, the NetworGame
algorithm is downloadable from here: http://www.NetworGame.linkgroup.hu | PLoS ONE (2013) 8: e67159 | 10.1371/journal.pone.0067159 | null | q-bio.MN cs.GT cs.SI nlin.AO physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cooperation played a significant role in the self-organization and evolution
of living organisms. Both network topology and the initial position of
cooperators heavily affect the cooperation of social dilemma games. We
developed a novel simulation program package, called 'NetworGame', which is
able to simulate any type of social dilemma games on any model, or real world
networks with any assignment of initial cooperation or defection strategies to
network nodes. The ability of initially defecting single nodes to break overall
cooperation was called 'game centrality'. The efficiency of this measure was
verified on well-known social networks, and was extended to 'protein games',
i.e. the simulation of cooperation between proteins, or their amino acids. Hubs
and, in particular, party hubs of yeast protein-protein interaction networks had
a large influence in converting the cooperation of other nodes to defection.
Simulations on the methionyl-tRNA synthetase protein structure network
indicated an increased influence of nodes belonging to intra-protein signaling
pathways on breaking cooperation. The efficiency of single, initially defecting
nodes to convert the cooperation of other nodes to defection in social dilemma
games may be an important measure to predict the importance of nodes in the
integration and regulation of complex systems. Game centrality may help to
design more efficient interventions to cellular networks (in the form of
drugs), to ecosystems and social networks. The NetworGame algorithm is
downloadable from here:
www.NetworGame.linkgroup.hu
| [
{
"created": "Sat, 13 Apr 2013 10:04:53 GMT",
"version": "v1"
},
{
"created": "Sun, 30 Jun 2013 14:58:58 GMT",
"version": "v2"
}
] | 2013-07-02 | [
[
"Simko",
"Gabor I.",
""
],
[
"Csermely",
"Peter",
""
]
] | Cooperation played a significant role in the self-organization and evolution of living organisms. Both network topology and the initial position of cooperators heavily affect the cooperation of social dilemma games. We developed a novel simulation program package, called 'NetworGame', which is able to simulate any type of social dilemma games on any model, or real world networks with any assignment of initial cooperation or defection strategies to network nodes. The ability of initially defecting single nodes to break overall cooperation was called 'game centrality'. The efficiency of this measure was verified on well-known social networks, and was extended to 'protein games', i.e. the simulation of cooperation between proteins, or their amino acids. Hubs and, in particular, party hubs of yeast protein-protein interaction networks had a large influence in converting the cooperation of other nodes to defection. Simulations on the methionyl-tRNA synthetase protein structure network indicated an increased influence of nodes belonging to intra-protein signaling pathways on breaking cooperation. The efficiency of single, initially defecting nodes to convert the cooperation of other nodes to defection in social dilemma games may be an important measure to predict the importance of nodes in the integration and regulation of complex systems. Game centrality may help to design more efficient interventions to cellular networks (in the form of drugs), to ecosystems and social networks. The NetworGame algorithm is downloadable from here: www.NetworGame.linkgroup.hu
1906.09054 | Rajesh Karmakar | Rajesh Karmakar | Control of noise in gene expression by transcriptional reinitiation | Accepted for publication in JSTAT | null | 10.1088/1742-5468/ab8382 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gene expression is a random or noisy process. The process consists of several
random events among which the reinitiation of transcription by RNAP is an
important one. The RNAP molecules can bind the gene only after the promoter
gets activated by transcription factors. Several transcription factors bind the
promoter to put the gene in the active state. The gene turns into the inactive
state as the bound transcription factors leave the promoter. During the active
period of the gene, many RNAP molecules transcribe the gene to synthesize the
mRNAs. The binding event of RNAP to the active state of the gene is a
probabilistic process and therefore introduces noise or fluctuations in the
mRNA and protein levels. In this paper, we analytically calculate the Fano
factor in mRNA and protein levels and also the probability distribution of mRNA
numbers exactly with the binding event of RNAPs in the gene transcription
process. The analytically calculated expression of the Fano factor of proteins
shows excellent agreement with an experimental result. Then we show that the
Fano factor in mRNA levels can be sub-Poissonian due to the reinitiation of
transcription by RNAP and the mean mRNA level can be increased without
increasing the Fano factor. Our study shows that the Fano factor can also be
reduced while keeping mRNA levels fixed. We find that the reinitiation of
transcription can behave as a fine-tuned control process to regulate the
mRNA/protein level in the cell.
| [
{
"created": "Fri, 21 Jun 2019 10:42:56 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Jul 2019 09:52:18 GMT",
"version": "v2"
},
{
"created": "Thu, 30 Jan 2020 07:00:08 GMT",
"version": "v3"
},
{
"created": "Mon, 18 May 2020 11:33:00 GMT",
"version": "v4"
}
] | 2020-08-26 | [
[
"Karmakar",
"Rajesh",
""
]
] | Gene expression is a random or noisy process. The process consists of several random events among which the reinitiation of transcription by RNAP is an important one. The RNAP molecules can bind the gene only after the promoter gets activated by transcription factors. Several transcription factors bind the promoter to put the gene in the active state. The gene turns into the inactive state as the bound transcription factors leave the promoter. During the active period of the gene, many RNAP molecules transcribe the gene to synthesize the mRNAs. The binding event of RNAP to the active state of the gene is a probabilistic process and therefore introduces noise or fluctuations in the mRNA and protein levels. In this paper, we analytically calculate the Fano factor in mRNA and protein levels and also the probability distribution of mRNA numbers exactly with the binding event of RNAPs in the gene transcription process. The analytically calculated expression of the Fano factor of proteins shows excellent agreement with an experimental result. Then we show that the Fano factor in mRNA levels can be sub-Poissonian due to the reinitiation of transcription by RNAP and the mean mRNA level can be increased without increasing the Fano factor. Our study shows that the Fano factor can also be reduced while keeping mRNA levels fixed. We find that the reinitiation of transcription can behave as a fine-tuned control process to regulate the mRNA/protein level in the cell.
2110.03688 | Marco Baity-Jesi | Jimeng Wu, Simone D'Ambrosi, Lorenz Ammann, Julita Stadnicka-Michalak,
Kristin Schirmer, Marco Baity-Jesi | Predicting Chemical Hazard across Taxa through Machine Learning | null | Environment International 163 (2022) 107184 | 10.1016/j.envint.2022.107184 | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We applied machine learning methods to predict chemical hazards focusing on
fish acute toxicity across taxa. We analyzed the relevance of taxonomy and
experimental setup, showing that taking them into account can lead to
considerable improvements in the classification performance. We quantified the
gain obtained through the introduction of taxonomic and experimental
information, compared to classification based on chemical information alone. We
used our approach with standard machine learning models (K-nearest neighbors,
random forests and deep neural networks), as well as the recently proposed
Read-Across Structure Activity Relationship (RASAR) models, which were very
successful in predicting chemical hazards to mammals based on chemical
similarity. We were able to obtain accuracies of over 93% on datasets where,
due to noise in the data, the maximum achievable accuracy was expected to be
below 96%. The best performances were obtained by random forests and RASAR
models. We analyzed metrics to compare our results with animal test
reproducibility, and although most of our models "outperform animal test
reproducibility" as measured through recently proposed metrics, we showed that
the comparison between machine learning performance and animal test
reproducibility should be addressed with particular care. While we focused on
fish mortality, our approach, provided that the right data is available, is
valid for any combination of chemicals, effects and taxa.
| [
{
"created": "Thu, 7 Oct 2021 15:33:58 GMT",
"version": "v1"
},
{
"created": "Fri, 11 Mar 2022 15:01:49 GMT",
"version": "v2"
},
{
"created": "Fri, 6 May 2022 15:24:59 GMT",
"version": "v3"
}
] | 2022-05-09 | [
[
"Wu",
"Jimeng",
""
],
[
"D'Ambrosi",
"Simone",
""
],
[
"Ammann",
"Lorenz",
""
],
[
"Stadnicka-Michalak",
"Julita",
""
],
[
"Schirmer",
"Kristin",
""
],
[
"Baity-Jesi",
"Marco",
""
]
] | We applied machine learning methods to predict chemical hazards focusing on fish acute toxicity across taxa. We analyzed the relevance of taxonomy and experimental setup, showing that taking them into account can lead to considerable improvements in the classification performance. We quantified the gain obtained through the introduction of taxonomic and experimental information, compared to classification based on chemical information alone. We used our approach with standard machine learning models (K-nearest neighbors, random forests and deep neural networks), as well as the recently proposed Read-Across Structure Activity Relationship (RASAR) models, which were very successful in predicting chemical hazards to mammals based on chemical similarity. We were able to obtain accuracies of over 93% on datasets where, due to noise in the data, the maximum achievable accuracy was expected to be below 96%. The best performances were obtained by random forests and RASAR models. We analyzed metrics to compare our results with animal test reproducibility, and although most of our models "outperform animal test reproducibility" as measured through recently proposed metrics, we showed that the comparison between machine learning performance and animal test reproducibility should be addressed with particular care. While we focused on fish mortality, our approach, provided that the right data is available, is valid for any combination of chemicals, effects and taxa.
1109.3211 | Ulrich Gerland | Thomas Sch\"otz, Richard A. Neher, and Ulrich Gerland | Target search on a dynamic DNA molecule | manuscript and supplementary material combined into a single document | null | 10.1103/PhysRevE.84.051911 | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a protein-DNA target search model with explicit DNA dynamics
applicable to in vitro experiments. We show that the DNA dynamics plays a
crucial role for the effectiveness of protein "jumps" between sites distant
along the DNA contour but close in 3D space. A strongly binding protein that
searches by 1D sliding and jumping alone explores the search space less
redundantly when the DNA dynamics is fast on the timescale of protein jumps
than in the opposite "frozen DNA" limit. We characterize the crossover between
these limits using simulations and scaling theory. We also rationalize the slow
exploration in the frozen limit as a subtle interplay between long jumps and
long trapping times of the protein in "islands" within random DNA
configurations in solution.
| [
{
"created": "Wed, 14 Sep 2011 21:19:52 GMT",
"version": "v1"
}
] | 2015-05-30 | [
[
"Schötz",
"Thomas",
""
],
[
"Neher",
"Richard A.",
""
],
[
"Gerland",
"Ulrich",
""
]
] | We study a protein-DNA target search model with explicit DNA dynamics applicable to in vitro experiments. We show that the DNA dynamics plays a crucial role for the effectiveness of protein "jumps" between sites distant along the DNA contour but close in 3D space. A strongly binding protein that searches by 1D sliding and jumping alone explores the search space less redundantly when the DNA dynamics is fast on the timescale of protein jumps than in the opposite "frozen DNA" limit. We characterize the crossover between these limits using simulations and scaling theory. We also rationalize the slow exploration in the frozen limit as a subtle interplay between long jumps and long trapping times of the protein in "islands" within random DNA configurations in solution.
2008.00590 | Spencer Farrell | Spencer Farrell, Garrett Stubbings, Kenneth Rockwood, Arnold
Mitnitski, Andrew Rutenberg | The potential for complex computational models of aging | null | null | 10.1016/j.mad.2020.111403 | null | q-bio.QM physics.bio-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The gradual accumulation of damage and dysregulation during the aging of
living organisms can be quantified. Even so, the aging process is complex and
has multiple interacting physiological scales -- from the molecular to cellular
to whole tissues. In the face of this complexity, we can significantly advance
our understanding of aging with the use of computational models that simulate
realistic individual trajectories of health as well as mortality. To do so,
they must be systems-level models that incorporate interactions between
measurable aspects of age-associated changes. To incorporate individual
variability in the aging process, models must be stochastic. To be useful they
should also be predictive, and so must be fit or parameterized by data from
large populations of aging individuals. In this perspective, we outline where
we have been, where we are, and where we hope to go with such computational
models of aging. Our focus is on data-driven systems-level models, and on their
great potential in aging research.
| [
{
"created": "Mon, 3 Aug 2020 00:09:51 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Oct 2020 22:37:53 GMT",
"version": "v2"
}
] | 2021-05-06 | [
[
"Farrell",
"Spencer",
""
],
[
"Stubbings",
"Garrett",
""
],
[
"Rockwood",
"Kenneth",
""
],
[
"Mitnitski",
"Arnold",
""
],
[
"Rutenberg",
"Andrew",
""
]
] | The gradual accumulation of damage and dysregulation during the aging of living organisms can be quantified. Even so, the aging process is complex and has multiple interacting physiological scales -- from the molecular to cellular to whole tissues. In the face of this complexity, we can significantly advance our understanding of aging with the use of computational models that simulate realistic individual trajectories of health as well as mortality. To do so, they must be systems-level models that incorporate interactions between measurable aspects of age-associated changes. To incorporate individual variability in the aging process, models must be stochastic. To be useful they should also be predictive, and so must be fit or parameterized by data from large populations of aging individuals. In this perspective, we outline where we have been, where we are, and where we hope to go with such computational models of aging. Our focus is on data-driven systems-level models, and on their great potential in aging research. |
1011.4382 | Tsvi Tlusty | Yonatan Savir and Tsvi Tlusty | RecA-mediated homology search as a nearly optimal signal detection
system | www.weizmann.ac.il/complex/tlusty/papers/MolCell2010.pdf | Molecular Cell 40(3) 388-396 (2010) | 10.1016/j.molcel.2010.10.020 | null | q-bio.BM physics.bio-ph q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Homologous recombination facilitates the exchange of genetic material between
homologous DNA molecules. This crucial process requires detecting a specific
homologous DNA sequence within a huge variety of heterologous sequences. The
detection is mediated by RecA in E. coli, or members of its superfamily in
other organisms. Here we examine how well the RecA-DNA interaction is adjusted
to its task. By formulating the DNA recognition process as a signal detection
problem, we find the optimal value of binding energy that maximizes the ability
to detect homologous sequences. We show that the experimentally observed
binding energy is nearly optimal. This implies that the RecA-induced
deformation and the binding energetics are fine-tuned to ensure optimal
sequence detection. Our analysis suggests a possible role for DNA extension by
RecA, in which deformation enhances detection. The present signal detection
approach provides a general recipe for testing the optimality of other
molecular recognition systems.
| [
{
"created": "Fri, 19 Nov 2010 10:24:36 GMT",
"version": "v1"
}
] | 2010-11-22 | [
[
"Savir",
"Yonatan",
""
],
[
"Tlusty",
"Tsvi",
""
]
] | Homologous recombination facilitates the exchange of genetic material between homologous DNA molecules. This crucial process requires detecting a specific homologous DNA sequence within a huge variety of heterologous sequences. The detection is mediated by RecA in E. coli, or members of its superfamily in other organisms. Here we examine how well the RecA-DNA interaction is adjusted to its task. By formulating the DNA recognition process as a signal detection problem, we find the optimal value of binding energy that maximizes the ability to detect homologous sequences. We show that the experimentally observed binding energy is nearly optimal. This implies that the RecA-induced deformation and the binding energetics are fine-tuned to ensure optimal sequence detection. Our analysis suggests a possible role for DNA extension by RecA, in which deformation enhances detection. The present signal detection approach provides a general recipe for testing the optimality of other molecular recognition systems.
1210.3229 | Kseniia Kravchuk | Kseniia Kravchuk and Alexander Vidybida | Firing statistics of inhibitory neuron with delayed feedback. II.
Non-Markovian behavior | The paper was presented at the BIOCOMP2012 meeting at Vietri sul
Mare, Italy. Paper contains 33 pages, including 7 figures | BioSystems 112 (2013) 233-248 | 10.1016/j.biosystems.2013.02.002 | null | q-bio.NC math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The instantaneous state of a neural network consists of both the degree of
excitation of each neuron the network is composed of and positions of impulses
in communication lines between the neurons. In neurophysiological experiments,
the neuronal firing moments are registered, but not the state of communication
lines. But future spiking moments depend essentially on the past positions of
impulses in the lines. This suggests that the sequence of intervals between
firing moments (inter-spike intervals, ISIs) in the network could be
non-Markovian.
In this paper, we address this question for the simplest possible neural "net",
namely, a single inhibitory neuron with delayed feedback. The neuron receives
excitatory input from the driving Poisson stream and inhibitory impulses from
its own output through the feedback line. We obtain analytic expressions for
conditional probability density P(t_{n+1}| t_n,...,t_1,t_0), which gives the
probability to get an output ISI of duration t_{n+1} provided the previous
(n+1) output ISIs had durations t_n,...,t_1,t_0. It is proven exactly, that
P(t_{n+1}| t_n,...,t_1,t_0) does not reduce to P(t_{n+1}| t_n,...,t_1) for any
n>=0. This means that the output ISI stream cannot be represented as a Markov
chain of any finite order.
| [
{
"created": "Thu, 11 Oct 2012 13:28:23 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Sep 2013 12:04:02 GMT",
"version": "v2"
}
] | 2013-09-10 | [
[
"Kravchuk",
"Kseniia",
""
],
[
"Vidybida",
"Alexander",
""
]
] | The instantaneous state of a neural network consists of both the degree of excitation of each neuron the network is composed of and positions of impulses in communication lines between the neurons. In neurophysiological experiments, the neuronal firing moments are registered, but not the state of communication lines. But future spiking moments depend essentially on the past positions of impulses in the lines. This suggests, that the sequence of intervals between firing moments (inter-spike intervals, ISIs) in the network could be non-Markovian. In this paper, we address this question for a simplest possible neural "net", namely, a single inhibitory neuron with delayed feedback. The neuron receives excitatory input from the driving Poisson stream and inhibitory impulses from its own output through the feedback line. We obtain analytic expressions for conditional probability density P(t_{n+1}| t_n,...,t_1,t_0), which gives the probability to get an output ISI of duration t_{n+1} provided the previous (n+1) output ISIs had durations t_n,...,t_1,t_0. It is proven exactly, that P(t_{n+1}| t_n,...,t_1,t_0) does not reduce to P(t_{n+1}| t_n,...,t_1) for any n>=0. This means that the output ISIs stream cannot be represented as a Markov chain of any finite order. |
1703.00853 | Daniele De Martino | Daniele De Martino | The free lunch of a scale-free metabolism | Comments are welcome | Phys. Rev. E 95, 062419 (2017) | 10.1103/PhysRevE.95.062419 | null | q-bio.MN cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work it is shown that scale free tails in metabolic flux
distributions inferred from realistic large scale models can be simply an
artefact due to reactions involved in thermodynamically unfeasible cycles, which
are unbounded by physical constraints and would be able to perform work without
expenditure of free energy. After correcting for thermodynamics, the metabolic
space scales meaningfully with the physical limiting factors, acquiring in turn
a richer multimodal structure potentially leading to symmetry breaking while
optimizing for objective functions.
| [
{
"created": "Thu, 2 Mar 2017 16:50:33 GMT",
"version": "v1"
}
] | 2017-07-05 | [
[
"De Martino",
"Daniele",
""
]
] | In this work it is shown that scale free tails in metabolic flux distributions inferred from realistic large scale models can be simply an artefact due to reactions involved in thermodynamically unfeasible cycles, that are unbounded by physical constraints and would be able to perform work without expenditure of free energy. After correcting for thermodynamics, the metabolic space scales meaningfully with the physical limiting factors, acquiring in turn a richer multimodal structure potentially leading to symmetry breaking while optimizing for objective functions. |
0803.0195 | Louxin Zhang | G. L. Li, M. Steel and L. X. Zhang | More Taxa Are Not Necessarily Better for the Reconstruction of Ancestral
Character States | 21 pages | null | null | null | q-bio.PE q-bio.QM | http://creativecommons.org/licenses/by/3.0/ | We show that the accuracy of reconstructing an ancestral state is not an
increasing function of the size of taxon sampling.
| [
{
"created": "Mon, 3 Mar 2008 08:57:12 GMT",
"version": "v1"
}
] | 2008-03-04 | [
[
"Li",
"G. L.",
""
],
[
"Steel",
"M.",
""
],
[
"Zhang",
"L. X.",
""
]
] | We show that the accuracy of reconstructing an ancestral state is not an increasing function of the size of taxon sampling. |
1503.01997 | Robert Endres | Robert G. Endres | Bistability: Requirements on Cell-Volume, Protein Diffusion, and
Thermodynamics | 23 pages, 8 figures | null | 10.1371/journal.pone.0121681 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bistability is considered widespread among bacteria and eukaryotic cells,
useful e.g. for enzyme induction, bet hedging, and epigenetic switching.
However, this phenomenon has mostly been described with deterministic dynamic
or well-mixed stochastic models. Here, we map known biological bistable systems
onto the well-characterized biochemical Schloegl model, using analytical
calculations and stochastic spatio-temporal simulations. In addition to network
architecture and strong thermodynamic driving away from equilibrium, we show
that bistability requires fine-tuning towards small cell volumes (or
compartments) and fast protein diffusion (well mixing). Bistability is thus
fragile and hence may be restricted to small bacteria and eukaryotic nuclei,
with switching triggered by volume changes during the cell cycle. For large
volumes, single cells generally lose their ability for bistable switching and
instead undergo a first-order phase transition.
| [
{
"created": "Fri, 6 Mar 2015 15:58:22 GMT",
"version": "v1"
}
] | 2017-02-08 | [
[
"Endres",
"Robert G.",
""
]
] | Bistability is considered widespread among bacteria and eukaryotic cells, useful e.g. for enzyme induction, bet hedging, and epigenetic switching. However, this phenomenon has mostly been described with deterministic dynamic or well-mixed stochastic models. Here, we map known biological bistable systems onto the well-characterized biochemical Schloegl model, using analytical calculations and stochastic spatio-temporal simulations. In addition to network architecture and strong thermodynamic driving away from equilibrium, we show that bistability requires fine-tuning towards small cell volumes (or compartments) and fast protein diffusion (well mixing). Bistability is thus fragile and hence may be restricted to small bacteria and eukaryotic nuclei, with switching triggered by volume changes during the cell cycle. For large volumes, single cells generally lose their ability for bistable switching and instead undergo a first-order phase transition. |
2104.06275 | Justinas \v{C}esonis | Justinas \v{C}esonis, David W. Franklin (Technical University of
Munich, Germany) | Mixed-horizon optimal feedback control as a model of human movement | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Computational optimal feedback control (OFC) models in the sensorimotor
control literature span a vast range of different implementations. Among the
popular algorithms, finite-horizon, receding-horizon or infinite-horizon
linear-quadratic regulators (LQR) have been broadly used to model human
reaching movements. While these different implementations have their unique
merits, all three have limitations in simulating the temporal evolution of
visuomotor feedback responses. Here we propose a novel approach - a
mixed-horizon OFC - by combining the strengths of the traditional
finite-horizon and the infinite-horizon controllers to address their individual
limitations. Specifically, we use the infinite-horizon OFC to generate
durations of the movements, which are then fed into the finite-horizon
controller to generate control gains. We then demonstrate the stability of our
model by performing extensive sensitivity analysis of both re-optimisation and
different cost functions. Finally, we use our model to provide a fresh look to
previously published studies by reinforcing the previous results, providing
alternative explanations to previous studies, or generating new predictive
results for prior experiments.
| [
{
"created": "Tue, 13 Apr 2021 15:07:54 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Oct 2021 10:43:51 GMT",
"version": "v2"
}
] | 2021-10-11 | [
[
"Česonis",
"Justinas",
"",
"Technical University of\n Munich, Germany"
],
[
"Franklin",
"David W.",
"",
"Technical University of\n Munich, Germany"
]
] | Computational optimal feedback control (OFC) models in the sensorimotor control literature span a vast range of different implementations. Among the popular algorithms, finite-horizon, receding-horizon or infinite-horizon linear-quadratic regulators (LQR) have been broadly used to model human reaching movements. While these different implementations have their unique merits, all three have limitations in simulating the temporal evolution of visuomotor feedback responses. Here we propose a novel approach - a mixed-horizon OFC - by combining the strengths of the traditional finite-horizon and the infinite-horizon controllers to address their individual limitations. Specifically, we use the infinite-horizon OFC to generate durations of the movements, which are then fed into the finite-horizon controller to generate control gains. We then demonstrate the stability of our model by performing extensive sensitivity analysis of both re-optimisation and different cost functions. Finally, we use our model to provide a fresh look to previously published studies by reinforcing the previous results, providing alternative explanations to previous studies, or generating new predictive results for prior experiments. |
1204.5655 | Anna Melbinger | Anna Melbinger, Louis Reese, Erwin Frey | Microtubule Length-Regulation by Molecular Motors | 7 pages (5 p. letter, 3 p. supplementary information), 4 figures (3
f. letter, 1 f. supplementary information) | Phys. Rev. Lett. 108, 258104 (2012) | 10.1103/PhysRevLett.108.258104 | LMU-ASC 14/12 | q-bio.SC cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Length-regulation of microtubules (MTs) is essential for many cellular
processes. Molecular motors like kinesin 8, which move along MTs and also act
as depolymerases, are known as key players in MT dynamics. However, the
regulatory mechanisms of length control remain elusive. Here, we investigate a
stochastic model accounting for the interplay between polymerization kinetics
and motor-induced depolymerization. We determine the dependence of MT length
and variance on rate constants and motor concentration. Moreover, our analyses
reveal how collective phenomena lead to a well-defined MT length.
| [
{
"created": "Wed, 25 Apr 2012 13:40:49 GMT",
"version": "v1"
}
] | 2012-10-18 | [
[
"Melbinger",
"Anna",
""
],
[
"Reese",
"Louis",
""
],
[
"Frey",
"Erwin",
""
]
] | Length-regulation of microtubules (MTs) is essential for many cellular processes. Molecular motors like kinesin 8, which move along MTs and also act as depolymerases, are known as key players in MT dynamics. However, the regulatory mechanisms of length control remain elusive. Here, we investigate a stochastic model accounting for the interplay between polymerization kinetics and motor-induced depolymerization. We determine the dependence of MT length and variance on rate constants and motor concentration. Moreover, our analyses reveal how collective phenomena lead to a well-defined MT length. |
1812.03048 | Georgy Karev | Georgy Karev, Faina Berezovskaya | Struggle for Existence: the models for Darwinian and non-Darwinian
selection | 47 pages, 12 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classical understanding of the outcome of the struggle for existence results
in the Darwinian survival of the fittest. Here we show that the situation may
be different, more complex and arguably more interesting. Specifically, we show
that different versions of non-homogeneous logistic-like models with a
distributed Malthusian parameter imply non-Darwinian survival of everybody. In
contrast, the non-homogeneous logistic equation with distributed carrying
capacity shows Darwinian survival of the fittest. We also consider a
non-homogeneous birth-and-death equation and give a simple proof that this
equation results in the survival of the fittest. In addition to this known
result, we find an exact limit distribution of the parameters of this equation.
We also consider frequency-dependent non-homogeneous models and show that
although some of these models show Darwinian survival of the fittest, there is
not enough time for selection of the fittest species. We discuss the well-known
Gause competitive exclusion principle, which states that complete competitors
cannot coexist. While this principle is often considered as a direct
consequence of the Darwinian survival of the fittest, we show that from the
point of view of developed mathematical theory complete competitors can in fact
coexist indefinitely.
| [
{
"created": "Fri, 7 Dec 2018 14:52:01 GMT",
"version": "v1"
}
] | 2018-12-10 | [
[
"Karev",
"Georgy",
""
],
[
"Berezovskaya",
"Faina",
""
]
] | Classical understanding of the outcome of the struggle for existence results in the Darwinian survival of the fittest. Here we show that the situation may be different, more complex and arguably more interesting. Specifically, we show that different versions of non-homogeneous logistic-like models with a distributed Malthusian parameter imply non-Darwinian survival of everybody. In contrast, the non-homogeneous logistic equation with distributed carrying capacity shows Darwinian survival of the fittest. We also consider a non-homogeneous birth-and-death equation and give a simple proof that this equation results in the survival of the fittest. In addition to this known result, we find an exact limit distribution of the parameters of this equation. We also consider frequency-dependent non-homogeneous models and show that although some of these models show Darwinian survival of the fittest, there is not enough time for selection of the fittest species. We discuss the well-known Gause competitive exclusion principle, which states that complete competitors cannot coexist. While this principle is often considered as a direct consequence of the Darwinian survival of the fittest, we show that from the point of view of developed mathematical theory complete competitors can in fact coexist indefinitely. |
1407.7600 | Benjamin Machta | Ahmed El Hady and Benjamin B. Machta | Mechanical Surface Waves Accompany Action Potential Propagation | 6 pages 3 figures + 2 page supplement | null | 10.1038/ncomms7697 | null | q-bio.NC cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many studies have shown that a mechanical displacement of the axonal membrane
accompanies the electrical pulse defining the Action Potential (AP). Despite a
large and diverse body of experimental evidence, there is no theoretical
consensus either for the physical basis of this mechanical wave nor its
interdependence with the electrical signal. In this manuscript we present a
model for these mechanical displacements as arising from the driving of surface
wave modes in which potential energy is stored in elastic properties of the
neuronal membrane and cytoskeleton while kinetic energy is carried by the
axoplasmic fluid. In our model these surface waves are driven by the traveling
wave of electrical depolarization that characterizes the AP, altering the
compressive electrostatic forces across the membrane as it passes. This driving
leads to co-propagating mechanical displacements, which we term Action Waves
(AWs). Our model for these AWs allows us to predict, in terms of elastic
constants, axon radius and axoplasmic density and viscosity, the shape of the
AW that should accompany any traveling wave of voltage, including the AP
predicted by the Hodgkin and Huxley (HH) equations. We show that our model
makes predictions that are in agreement with results in experimental systems
including the garfish olfactory nerve and the squid giant axon. We expect our
model to serve as a framework for understanding the physical origins and
possible functional roles of these AWs in neurobiology.
| [
{
"created": "Mon, 28 Jul 2014 23:46:21 GMT",
"version": "v1"
},
{
"created": "Sun, 5 Oct 2014 17:28:18 GMT",
"version": "v2"
}
] | 2015-06-22 | [
[
"Hady",
"Ahmed El",
""
],
[
"Machta",
"Benjamin B.",
""
]
] | Many studies have shown that a mechanical displacement of the axonal membrane accompanies the electrical pulse defining the Action Potential (AP). Despite a large and diverse body of experimental evidence, there is no theoretical consensus either for the physical basis of this mechanical wave nor its interdependence with the electrical signal. In this manuscript we present a model for these mechanical displacements as arising from the driving of surface wave modes in which potential energy is stored in elastic properties of the neuronal membrane and cytoskeleton while kinetic energy is carried by the axoplasmic fluid. In our model these surface waves are driven by the traveling wave of electrical depolarization that characterizes the AP, altering the compressive electrostatic forces across the membrane as it passes. This driving leads to co-propagating mechanical displacements, which we term Action Waves (AWs). Our model for these AWs allows us to predict, in terms of elastic constants, axon radius and axoplasmic density and viscosity, the shape of the AW that should accompany any traveling wave of voltage, including the AP predicted by the Hodgkin and Huxley (HH) equations. We show that our model makes predictions that are in agreement with results in experimental systems including the garfish olfactory nerve and the squid giant axon. We expect our model to serve as a framework for understanding the physical origins and possible functional roles of these AWs in neurobiology. |
1509.06304 | Fabrizio Cleri | Maxime Tomezak, Corinne Abbadie, Eric Lartigau, Fabrizio Cleri | A biophysical model of cell evolution after cytotoxic treatments:
damage, repair and cell response | 17 pages, 9 figures, 1 summary table. Submitted to Journal of
Theoretical Biology | null | null | null | q-bio.CB cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a theoretical agent-based model of cell evolution under the action
of cytotoxic treatments, such as radiotherapy or chemotherapy. The major
features of cell cycle and proliferation, cell damage and repair, and chemical
diffusion are included. Cell evolution is based on a discrete Markov chain,
with cells stepping along a sequence of discrete internal states from 'normal'
to 'inactive'. Probabilistic laws are introduced for each type of event a cell
can undergo during its life cycle: duplication, arrest, apoptosis, senescence,
damage, healing. We adjust the model parameters on a series of cell irradiation
experiments, carried out in a clinical LINAC at 20 MV, in which the damage and
repair kinetics of single- and double-strand breaks are followed. Two showcase
applications of the model are then presented. In the first one, we reconstruct
the cell survival curves from a number of published low- and high-dose
irradiation experiments. We reobtain a very good description of the data
without assuming the well-known linear-quadratic model, but instead including a
variable DSB repair probability, which is found to spontaneously saturate with
an exponential decay at increasingly high doses. As a second test, we attempt
to simulate the two extreme possibilities of the so-called 'bystander' effect
in radiotherapy: the 'local' effect versus a 'global' effect, respectively
activated by the short-range or long-range diffusion of some factor, presumably
secreted by the irradiated cells. Even with an oversimplified simulation, we
could demonstrate a sizeable difference in the proliferation rate of
non-irradiated cells, the proliferation acceleration being much larger for the
global than the local effect, for relatively small fractions of irradiated
cells in the colony.
| [
{
"created": "Mon, 21 Sep 2015 17:11:16 GMT",
"version": "v1"
}
] | 2015-09-22 | [
[
"Tomezak",
"Maxime",
""
],
[
"Abbadie",
"Corinne",
""
],
[
"Lartigau",
"Eric",
""
],
[
"Cleri",
"Fabrizio",
""
]
] | We present a theoretical agent-based model of cell evolution under the action of cytotoxic treatments, such as radiotherapy or chemotherapy. The major features of cell cycle and proliferation, cell damage and repair, and chemical diffusion are included. Cell evolution is based on a discrete Markov chain, with cells stepping along a sequence of discrete internal states from 'normal' to 'inactive'. Probabilistic laws are introduced for each type of event a cell can undergo during its life cycle: duplication, arrest, apoptosis, senescence, damage, healing. We adjust the model parameters on a series of cell irradiation experiments, carried out in a clinical LINAC at 20 MV, in which the damage and repair kinetics of single- and double-strand breaks are followed. Two showcase applications of the model are then presented. In the first one, we reconstruct the cell survival curves from a number of published low- and high-dose irradiation experiments. We reobtain a very good description of the data without assuming the well-known linear-quadratic model, but instead including a variable DSB repair probability, which is found to spontaneously saturate with an exponential decay at increasingly high doses. As a second test, we attempt to simulate the two extreme possibilities of the so-called 'bystander' effect in radiotherapy: the 'local' effect versus a 'global' effect, respectively activated by the short-range or long-range diffusion of some factor, presumably secreted by the irradiated cells. Even with an oversimplified simulation, we could demonstrate a sizeable difference in the proliferation rate of non-irradiated cells, the proliferation acceleration being much larger for the global than the local effect, for relatively small fractions of irradiated cells in the colony. |
1310.8357 | Daniele Marinazzo | Guorong Wu, Enzo Tagliazucchi, Dante R. Chialvo, Daniele Marinazzo | Point-process deconvolution of fMRI reveals effective connectivity
alterations in chronic pain patients | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is now recognized that important information can be extracted from the
brain's spontaneous activity, as exposed by recent analyses using a repertoire of
computational methods. In this context a novel method, based on a blind
deconvolution technique, is used in this paper to analyze potential changes due
to chronic pain in the brain pain matrix's effective connectivity. The approach
is able to deconvolve the hemodynamic response function to spontaneous neural
events, i.e., in the absence of explicit onset timings, and to evaluate
information transfer between two regions as a joint probability of the
occurrence of such spontaneous events. The method revealed that the chronic
pain patients exhibit important changes in the Insula's effective connectivity
which can be relevant to understand the overall impact of chronic pain on brain
function.
| [
{
"created": "Thu, 31 Oct 2013 01:12:51 GMT",
"version": "v1"
}
] | 2013-11-01 | [
[
"Wu",
"Guorong",
""
],
[
"Tagliazucchi",
"Enzo",
""
],
[
"Chialvo",
"Dante R.",
""
],
[
"Marinazzo",
"Daniele",
""
]
] | It is now recognized that important information can be extracted from the brain spontaneous activity, as exposed by recent analysis using a repertoire of computational methods. In this context a novel method, based on a blind deconvolution technique, is used in this paper to analyze potential changes due to chronic pain in the brain pain matrix's effective connectivity. The approach is able to deconvolve the hemodynamic response function to spontaneous neural events, i.e., in the absence of explicit onset timings, and to evaluate information transfer between two regions as a joint probability of the occurrence of such spontaneous events. The method revealed that the chronic pain patients exhibit important changes in the Insula's effective connectivity which can be relevant to understand the overall impact of chronic pain on brain function. |
1312.5528 | Juan A. Bonachela | Juan A. Bonachela and Simon A. Levin | Evolutionary comparison between viral lysis rate and latent period | to appear in J. Theor. Biol | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Marine viruses shape the structure of the microbial community. They are,
thus, a key determinant of the most important biogeochemical cycles on the
planet. Therefore, a correct description of the ecological and evolutionary
behavior of these viruses is essential to make reliable predictions about their
role in marine ecosystems. The infection cycle, for example, is indistinctly
modeled in two very different ways. In one representation, the process is
described including explicitly a fixed delay between infection and offspring
release. In the other, the offspring are released at exponentially distributed
times according to a fixed release rate. By considering obvious quantitative
differences pointed out in the past, the latter description is widely used as a
simplification of the former. However, it is still unclear how the dichotomy
"delay versus rate description" affects long-term predictions of host-virus
interaction models. Here, we study the ecological and evolutionary implications
of using one or the other approaches, applied to marine microbes. To this end,
we use mathematical and eco-evolutionary computational analysis. We show that
the rate model exhibits improved competitive abilities from both ecological and
evolutionary perspectives in steady environments. However, rate-based
descriptions can fail to describe properly long-term microbe-virus
interactions. Moreover, additional information about trade-offs between
life-history traits is needed in order to choose the most reliable
representation for oceanic bacteriophage dynamics. This result deeply affects
most of the marine ecosystem models that include viruses, especially when used
to answer evolutionary questions.
| [
{
"created": "Thu, 19 Dec 2013 13:08:41 GMT",
"version": "v1"
}
] | 2013-12-20 | [
[
"Bonachela",
"Juan A.",
""
],
[
"Levin",
"Simon A.",
""
]
] | Marine viruses shape the structure of the microbial community. They are, thus, a key determinant of the most important biogeochemical cycles in the planet. Therefore, a correct description of the ecological and evolutionary behavior of these viruses is essential to make reliable predictions about their role in marine ecosystems. The infection cycle, for example, is indistinctly modeled in two very different ways. In one representation, the process is described including explicitly a fixed delay between infection and offspring release. In the other, the offspring are released at exponentially distributed times according to a fixed release rate. By considering obvious quantitative differences pointed out in the past, the latter description is widely used as a simplification of the former. However, it is still unclear how the dichotomy "delay versus rate description" affects long-term predictions of host-virus interaction models. Here, we study the ecological and evolutionary implications of using one or the other approaches, applied to marine microbes. To this end, we use mathematical and eco-evolutionary computational analysis. We show that the rate model exhibits improved competitive abilities from both ecological and evolutionary perspectives in steady environments. However, rate-based descriptions can fail to describe properly long-term microbe-virus interactions. Moreover, additional information about trade-offs between life-history traits is needed in order to choose the most reliable representation for oceanic bacteriophage dynamics. This result affects deeply most of the marine ecosystem models that include viruses, especially when used to answer evolutionary questions. |
2211.15963 | Wenlian Lu | Wenlian Lu, Qibao Zheng, Ningsheng Xu, Jianfeng Feng, DTB Consortium | The human digital twin brain in the resting state and in action | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We simulate the human brain at the scale of up to 86 billion neurons, i.e.,
digital twin brain (DTB), which mimics certain aspects of its biological
counterpart both in the resting state and in action. A novel routing
communication layout between 10,000 GPUs to implement simulations and a
hierarchical mesoscale data assimilation method capable of achieving more than
a trillion parameters from the estimated hyperparameters are developed.
The constructed DTB is able to track its resting-state biological counterpart
with a very high correlation (0.9). The DTB provides a testbed for various
"dry" experiments in neuroscience and medicine, as illustrated in two examples:
exploring the information flow in our brain and testing deep brain stimulation
mechanisms. Finally, we enable the DTB to interact with environments by
demonstrating some possible applications in vision and auditory tasks and
validate the power of DTB with achieving significant correlation with the
experimental counterparts.
| [
{
"created": "Tue, 29 Nov 2022 06:44:57 GMT",
"version": "v1"
}
] | 2022-11-30 | [
[
"Lu",
"Wenlian",
""
],
[
"Zheng",
"Qibao",
""
],
[
"Xu",
"Ningsheng",
""
],
[
"Feng",
"Jianfeng",
""
],
[
"Consortium",
"DTB",
""
]
] | We simulate the human brain at the scale of up to 86 billion neurons, i.e., digital twin brain (DTB), which mimics certain aspects of its biological counterpart both in the resting state and in action. A novel routing communication layout between 10,000 GPUs to implement simulations and a hierarchical mesoscale data assimilation method to be capable to achieve more than trillions of parameters from the estimated hyperparameters are developed. The constructed DTB is able to track its resting-state biological counterpart with a very high correlation (0.9). The DTB provides a testbed for various "dry" experiments in neuroscience and medicine and illustrated in two examples: exploring the information flow in our brain and testing deep brain stimulation mechanisms. Finally, we enable the DTB to interact with environments by demonstrating some possible applications in vision and auditory tasks and validate the power of DTB with achieving significant correlation with the experimental counterparts. |
1911.04420 | Mohammad Nami | Ali-Mohammad Kamali, Mohammad Reza Hossein Tehrani, Seyedeh-Saeedeh
Yahyavi, Siavash Baneshi, Zahra Kheradmand-Saadi, Masoume Nazeri, Maryam
Poursadeghfard and Mohammad Nami | Transcranial direct current stimulation to remediate myasthenia gravis
symptoms | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Our findings indicated that brain stimulation exerted no effect on the
patients' cognitive functions. The study outcome suggests that tDCS over primary
motor cortex may be considered as a potential nonpharmacological treatment
add-on in MG. Larger-sized studies need to evaluate the significance of this
approach in real-life practice.
| [
{
"created": "Mon, 11 Nov 2019 17:49:35 GMT",
"version": "v1"
}
] | 2019-11-12 | [
[
"Kamali",
"Ali-Mohammad",
""
],
[
"Tehrani",
"Mohammad Reza Hossein",
""
],
[
"Yahyavi",
"Seyedeh-Saeedeh",
""
],
[
"Baneshi",
"Siavash",
""
],
[
"Kheradmand-Saadi",
"Zahra",
""
],
[
"Nazeri",
"Masoume",
""
],
[
"Poursadeghfard",
"Maryam",
""
],
[
"Nami",
"Mohammad",
""
]
] | Our findings indicated that brain stimulation exerted no effect on the patients' cognitive functions. The study outcome suggests that tDCS over primary motor cortex may be considered as a potential nonpharmacological treatment add-on in MG. Larger-sized studies need to evaluate the significance of this approach in real-life practice. |
q-bio/0601037 | Mauro Copelli | Osame Kinouchi and Mauro Copelli | Optimal Dynamical Range of Excitable Networks at Criticality | 2 figures, 6 pages | Nature Physics, 2, 348-351 (2006) | 10.1038/nphys289 | null | q-bio.NC cond-mat.dis-nn nlin.CG physics.bio-ph | null | A recurrent idea in the study of complex systems is that optimal information
processing is to be found near bifurcation points or phase transitions.
However, this heuristic hypothesis has few (if any) concrete realizations where
a standard and biologically relevant quantity is optimized at criticality. Here
we give a clear example of such a phenomenon: a network of excitable elements
has its sensitivity and dynamic range maximized at the critical point of a
non-equilibrium phase transition. Our results are compatible with the essential
role of gap junctions in olfactory glomeruli and retinal ganglion cell
output. Synchronization and global oscillations also appear in the network
dynamics. We propose that the main functional role of electrical coupling is to
provide an enhancement of dynamic range, therefore allowing the coding of
information spanning several orders of magnitude. The mechanism could provide a
microscopic neural basis for psychophysical laws.
| [
{
"created": "Mon, 23 Jan 2006 13:47:28 GMT",
"version": "v1"
},
{
"created": "Tue, 15 May 2007 15:10:39 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Kinouchi",
"Osame",
""
],
[
"Copelli",
"Mauro",
""
]
] | A recurrent idea in the study of complex systems is that optimal information processing is to be found near bifurcation points or phase transitions. However, this heuristic hypothesis has few (if any) concrete realizations where a standard and biologically relevant quantity is optimized at criticality. Here we give a clear example of such a phenomenon: a network of excitable elements has its sensitivity and dynamic range maximized at the critical point of a non-equilibrium phase transition. Our results are compatible with the essential role of gap junctions in olfactory glomeruli and retinal ganglion cell output. Synchronization and global oscillations also appear in the network dynamics. We propose that the main functional role of electrical coupling is to provide an enhancement of dynamic range, therefore allowing the coding of information spanning several orders of magnitude. The mechanism could provide a microscopic neural basis for psychophysical laws. |
1301.3077 | Robert Aboukhalil | Robert Aboukhalil, Bernard Fendler and Gurinder S. Atwal | Kerfuffle: a web tool for multi-species gene colocalization analysis | BMC Bioinformatics, In press | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The evolutionary pressures that underlie the large-scale functional
organization of the genome are not well understood in eukaryotes. Recent
evidence suggests that functionally similar genes may colocalize (cluster) in
the eukaryotic genome, suggesting the role of chromatin-level gene regulation
in shaping the physical distribution of coordinated genes. However, few of the
bioinformatic tools currently available allow for a systematic study of gene
colocalization across several, evolutionarily distant species. Kerfuffle is a
web tool designed to help discover, visualize, and quantify the physical
organization of genomes by identifying significant gene colocalization and
conservation across the assembled genomes of available species (currently up to
47, from humans to worms). Kerfuffle only requires the user to specify a list
of human genes and the names of other species of interest. Without further
input from the user, the software queries the e!Ensembl BioMart server to
obtain positional information and discovers homology relations in all genes and
species specified. Using this information, Kerfuffle performs a multi-species
clustering analysis, presents downloadable lists of clustered genes, performs
Monte Carlo statistical significance calculations, estimates how conserved gene
clusters are across species, plots histograms and interactive graphs, allows
users to save their queries, and generates a downloadable visualization of the
clusters using the Circos software. These analyses may be used to further
explore the functional roles of gene clusters by interrogating the enriched
molecular pathways associated with each cluster.
| [
{
"created": "Mon, 14 Jan 2013 17:52:37 GMT",
"version": "v1"
}
] | 2013-01-15 | [
[
"Aboukhalil",
"Robert",
""
],
[
"Fendler",
"Bernard",
""
],
[
"Atwal",
"Gurinder S.",
""
]
] | The evolutionary pressures that underlie the large-scale functional organization of the genome are not well understood in eukaryotes. Recent evidence suggests that functionally similar genes may colocalize (cluster) in the eukaryotic genome, suggesting the role of chromatin-level gene regulation in shaping the physical distribution of coordinated genes. However, few of the bioinformatic tools currently available allow for a systematic study of gene colocalization across several, evolutionarily distant species. Kerfuffle is a web tool designed to help discover, visualize, and quantify the physical organization of genomes by identifying significant gene colocalization and conservation across the assembled genomes of available species (currently up to 47, from humans to worms). Kerfuffle only requires the user to specify a list of human genes and the names of other species of interest. Without further input from the user, the software queries the e!Ensembl BioMart server to obtain positional information and discovers homology relations in all genes and species specified. Using this information, Kerfuffle performs a multi-species clustering analysis, presents downloadable lists of clustered genes, performs Monte Carlo statistical significance calculations, estimates how conserved gene clusters are across species, plots histograms and interactive graphs, allows users to save their queries, and generates a downloadable visualization of the clusters using the Circos software. These analyses may be used to further explore the functional roles of gene clusters by interrogating the enriched molecular pathways associated with each cluster. |
2006.02757 | Beatriz Seoane | Beatriz Seoane | A scaling approach to estimate the COVID-19 infection fatality ratio
from incomplete data | 28 pages, 10 figures, 3 tables. Version judged scientifically
suitable for publication in PLOS ONE | PLoS ONE 16(2): e0246831 (2021) | 10.1371/journal.pone.0246831 | null | q-bio.PE physics.med-ph physics.soc-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | SARS-CoV-2 has disrupted the life of billions of people around the world
since the first outbreak was officially declared in China at the beginning of
2020. Yet, important questions such as how deadly it is or its degree of spread
within different countries remain unanswered. In this work, we exploit the
`universal' growth of the mortality rate with age observed in different
countries since the beginning of their respective outbreaks, combined with the
results of the antibody prevalence tests in the population of Spain, to unveil
both unknowns. We validate these results with an analogous antibody rate survey
in the canton of Geneva, Switzerland. We also argue that the official number of
deaths over 70 years old is substantially underestimated in most of the
countries, and we use the comparison between the official records and the
number of deaths mentioning COVID-19 in the death certificates to quantify by
how much. Using this information, we estimate the infection fatality ratio
(IFR) for the different age segments and the fraction of the population
infected in different countries assuming a uniform exposure to the virus in all
age segments. We also give estimations for the non-uniform IFR using the
sero-epidemiological results of Spain, showing a very similar growth of the
fatality ratio with age. Only for Spain, we estimate the probability (if
infected) of being identified as a case, being hospitalized or admitted to the
intensive care units as a function of age. In general, we observe a nearly
exponential growth of the fatality ratio with age, which anticipates large
differences in total IFR in countries with different demographic distributions,
with numbers that range from 1.82\% in Italy, to 0.62\% in China or even 0.14\%
in middle Africa.
| [
{
"created": "Thu, 4 Jun 2020 10:33:53 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Jan 2021 16:49:16 GMT",
"version": "v2"
}
] | 2021-03-01 | [
[
"Seoane",
"Beatriz",
""
]
] | SARS-CoV-2 has disrupted the life of billions of people around the world since the first outbreak was officially declared in China at the beginning of 2020. Yet, important questions such as how deadly it is or its degree of spread within different countries remain unanswered. In this work, we exploit the `universal' growth of the mortality rate with age observed in different countries since the beginning of their respective outbreaks, combined with the results of the antibody prevalence tests in the population of Spain, to unveil both unknowns. We validate these results with an analogous antibody rate survey in the canton of Geneva, Switzerland. We also argue that the official number of deaths over 70 years old is substantially underestimated in most of the countries, and we use the comparison between the official records and the number of deaths mentioning COVID-19 in the death certificates to quantify by how much. Using this information, we estimate the infection fatality ratio (IFR) for the different age segments and the fraction of the population infected in different countries assuming a uniform exposure to the virus in all age segments. We also give estimations for the non-uniform IFR using the sero-epidemiological results of Spain, showing a very similar growth of the fatality ratio with age. Only for Spain, we estimate the probability (if infected) of being identified as a case, being hospitalized or admitted to the intensive care units as a function of age. In general, we observe a nearly exponential growth of the fatality ratio with age, which anticipates large differences in total IFR in countries with different demographic distributions, with numbers that range from 1.82\% in Italy, to 0.62\% in China or even 0.14\% in middle Africa. |
2008.05579 | Zhuolin Qu | Zhuolin Qu, Asma Azizi, Norine Schmidt, Megan Clare Craig-Kuhn,
Charles Stoecker, James M Hyman, Patricia J Kissinger | Modelling the Impact of Screening Men for Chlamydia Trachomatis on the
Prevalence in Women | null | BMJ Open 2021;11:e040789 | 10.1136/bmjopen-2020-040789 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chlamydia trachomatis is the most commonly reported infectious disease in the
United States and causes important reproductive morbidity in women. The Centers
for Disease Control and Prevention have recommended routine screening of
sexually active women under age 25 but have not recommended screening among
men. Consequently, untested and untreated men may serve as a reservoir of
infection in women. Despite three decades of screening women, the chlamydia
prevalence has continued to increase. Moreover, chlamydia is five times more
common in African American (AA) youth compared to Whites, constituting an
important health disparity. The Check It program is a bundled Ct intervention
targeting AA men aged 15-24 who have sex with women. We created an
individual-based network model to simulate a realistic chlamydia epidemic on
sexual contact networks for the target population. Based on the practice in
Check It, we quantified the impact of screening young AA men on the chlamydia
prevalence in women. We used sensitivity analysis to quantify the relative
importance of each Check It intervention component, and the significance ranked
from high to low was venue-based screening, expedited index treatment,
expedited partner treatment, and rescreening. We estimated that by annually
screening 7.5% of the target male population, the chlamydia prevalence would be
reduced by 8.1% and 8.8% in men and women, respectively. The findings suggested
that male-screening has the potential to significantly reduce the prevalence
among women.
| [
{
"created": "Wed, 12 Aug 2020 21:41:39 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Jan 2021 17:34:47 GMT",
"version": "v2"
}
] | 2021-02-01 | [
[
"Qu",
"Zhuolin",
""
],
[
"Azizi",
"Asma",
""
],
[
"Schmidt",
"Norine",
""
],
[
"Craig-Kuhn",
"Megan Clare",
""
],
[
"Stoecker",
"Charles",
""
],
[
"Hyman",
"James M",
""
],
[
"Kissinger",
"Patricia J",
""
]
] | Chlamydia trachomatis is the most commonly reported infectious disease in the United States and causes important reproductive morbidity in women. The Centers for Disease Control and Prevention have recommended routine screening of sexually active women under age 25 but have not recommended screening among men. Consequently, untested and untreated men may serve as a reservoir of infection in women. Despite three decades of screening women, the chlamydia prevalence has continued to increase. Moreover, chlamydia is five times more common in African American (AA) youth compared to Whites, constituting an important health disparity. The Check It program is a bundled Ct intervention targeting AA men aged 15-24 who have sex with women. We created an individual-based network model to simulate a realistic chlamydia epidemic on sexual contact networks for the target population. Based on the practice in Check It, we quantified the impact of screening young AA men on the chlamydia prevalence in women. We used sensitivity analysis to quantify the relative importance of each Check It intervention component, and the significance ranked from high to low was venue-based screening, expedited index treatment, expedited partner treatment, and rescreening. We estimated that by annually screening 7.5% of the target male population, the chlamydia prevalence would be reduced by 8.1% and 8.8% in men and women, respectively. The findings suggested that male-screening has the potential to significantly reduce the prevalence among women. |
2307.16252 | Joan Carles Pons Mayol | Gabriel Cardona, Gerard Ribas, Joan Carles Pons | Generation of orchard and tree-child networks | 13 pages, 4 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic networks are an extension of phylogenetic trees that allow for
the representation of reticulate evolution events. One of the classes of
networks that has gained the attention of the scientific community in recent
years is the class of orchard networks, which generalizes tree-child
networks, one of the most studied classes of networks.
In this paper we focus on the combinatorial and algorithmic problem of the
generation of orchard networks, and also of tree-child networks. To this end,
we use the fact that these networks are defined as those that can be recovered
by reversing a certain reduction process. Then, we show how to choose a
``minimum'' reduction process among all that can be applied to a network, and
hence we get a unique representation of the network that, in fact, can be given
in terms of sequences of pairs of integers, whose length is related to the
number of leaves and reticulations of the network. Therefore, the generation of
networks is reduced to the generation of such sequences of pairs. Our main
result is a recursive method for the efficient generation of all minimum
sequences, and hence of all orchard (or tree-child) networks with a given
number of leaves and reticulations.
An implementation in C of the algorithms described in this paper, along with
some computational experiments, can be downloaded from the public repository
https://github.com/gerardet46/OrchardGenerator. Using this implementation, we
have computed the number of orchard networks with at most 6 leaves and 8
reticulations.
| [
{
"created": "Sun, 30 Jul 2023 15:05:04 GMT",
"version": "v1"
}
] | 2023-08-01 | [
[
"Cardona",
"Gabriel",
""
],
[
"Ribas",
"Gerard",
""
],
[
"Pons",
"Joan Carles",
""
]
] | Phylogenetic networks are an extension of phylogenetic trees that allow for the representation of reticulate evolution events. One of the classes of networks that has gained the attention of the scientific community in recent years is the class of orchard networks, which generalizes tree-child networks, one of the most studied classes of networks. In this paper we focus on the combinatorial and algorithmic problem of the generation of orchard networks, and also of tree-child networks. To this end, we use the fact that these networks are defined as those that can be recovered by reversing a certain reduction process. Then, we show how to choose a ``minimum'' reduction process among all that can be applied to a network, and hence we get a unique representation of the network that, in fact, can be given in terms of sequences of pairs of integers, whose length is related to the number of leaves and reticulations of the network. Therefore, the generation of networks is reduced to the generation of such sequences of pairs. Our main result is a recursive method for the efficient generation of all minimum sequences, and hence of all orchard (or tree-child) networks with a given number of leaves and reticulations. An implementation in C of the algorithms described in this paper, along with some computational experiments, can be downloaded from the public repository https://github.com/gerardet46/OrchardGenerator. Using this implementation, we have computed the number of orchard networks with at most 6 leaves and 8 reticulations. |
1107.5426 | Alan D. Rendall | Alan D. Rendall | Multiple steady states in a mathematical model for interactions between
T cells and macrophages | 16 pages, 1 figure | null | null | AEI-2011-047 | q-bio.CB math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to prove results about the existence and stability
of multiple steady states in a system of ordinary differential equations
introduced by R. Lev Bar-Or to model the interactions between T cells and
macrophages. Previous results showed that for certain values of the parameters
these equations have three stationary solutions, two of which are stable. Here
it is shown that there are values of the parameters for which the number of
stationary solutions is at least seven and the number of stable stationary
solutions at least four. This requires approaches different to those used in
existing work on this subject. In addition, a rather explicit characterization
is obtained of regions of parameter space for which the system has a given
number of stationary solutions.
| [
{
"created": "Wed, 27 Jul 2011 09:51:26 GMT",
"version": "v1"
}
] | 2011-07-28 | [
[
"Rendall",
"Alan D.",
""
]
] | The aim of this paper is to prove results about the existence and stability of multiple steady states in a system of ordinary differential equations introduced by R. Lev Bar-Or to model the interactions between T cells and macrophages. Previous results showed that for certain values of the parameters these equations have three stationary solutions, two of which are stable. Here it is shown that there are values of the parameters for which the number of stationary solutions is at least seven and the number of stable stationary solutions at least four. This requires approaches different to those used in existing work on this subject. In addition, a rather explicit characterization is obtained of regions of parameter space for which the system has a given number of stationary solutions. |
2403.12113 | Santiago Forero | Santiago Forero (PatriNat, PatriNat) | Evaluation cartographique du niveau de potentialit{\'e}s {\'e}cologiques
des sites des partenaires (CARPO). Cadre m{\'e}thodologique V0 | in French language | null | null | null | q-bio.PE physics.geo-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Cartographic Assessment of Ecological Potential (CARPO) allows users to
characterize the ecological context of a group of sites at several scales (its
perimeter and surrounding area) and on the basis of 4 themes: biodiversity
zonings, land use (major types of environment, natural character and degree of
permeability of territories), ecological connectivity (biodiversity reservoirs
and corridors, non-fragmentation of the sector studied), as well as the
richness of the territory in terms of heritage species. It is based on a
battery of 8 indicators linked to these themes and a system of evaluation
thresholds, used to define 3 levels of ecological potential (average, strong
and very strong) for each indicator and at each scale of study. The level of
potentiality reflects the importance that an ecological element takes on in a
study area, and therefore the degree of contribution and responsibility of a
site in maintaining this element favorable to biodiversity. It also translates
into an alert level for biodiversity that needs to be taken into account at a
site. The ecological diagnosis is materialized by means of a cartographic atlas
containing a set of commented maps and figures, as well as evaluation grids and
radars for the various indicators. CARPO is a decision-making tool, as it
enables users to compare the ecological potential within a site with that of
its surroundings, in order to define preservation, management or restoration
actions; and to compare the extent of potential between the different sites in
the group, with a goal of prioritizing action. This guide presents the
methodological framework for applying CARPO to a group of sites.
| [
{
"created": "Mon, 18 Mar 2024 10:48:21 GMT",
"version": "v1"
}
] | 2024-03-20 | [
[
"Forero",
"Santiago",
"",
"PatriNat, PatriNat"
]
] | The Cartographic Assessment of Ecological Potential (CARPO) allows users to characterize the ecological context of a group of sites at several scales (its perimeter and surrounding area) and on the basis of 4 themes: biodiversity zonings, land use (major types of environment, natural character and degree of permeability of territories), ecological connectivity (biodiversity reservoirs and corridors, non-fragmentation of the sector studied), as well as the richness of the territory in terms of heritage species. It is based on a battery of 8 indicators linked to these themes and a system of evaluation thresholds, used to define 3 levels of ecological potential (average, strong and very strong) for each indicator and at each scale of study. The level of potentiality reflects the importance that an ecological element takes on in a study area, and therefore the degree of contribution and responsibility of a site in maintaining this element favorable to biodiversity. It also translates into an alert level for biodiversity that needs to be taken into account at a site. The ecological diagnosis is materialized by means of a cartographic atlas containing a set of commented maps and figures, as well as evaluation grids and radars for the various indicators. CARPO is a decision-making tool, as it enables users to compare the ecological potential within a site with that of its surroundings, in order to define preservation, management or restoration actions; and to compare the extent of potential between the different sites in the group, with a goal of prioritizing action. This guide presents the methodological framework for applying CARPO to a group of sites. |
1307.7817 | Aaron Darling | Rolf Backofen, Markus Fricke, Manja Marz, Jing Qin, and Peter F.
Stadler | Distribution of graph-distances in Boltzmann ensembles of RNA secondary
structures | Peer-reviewed and presented as part of the 13th Workshop on
Algorithms in Bioinformatics (WABI2013) | null | null | null | q-bio.QM q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large RNA molecules often carry multiple functional domains whose spatial
arrangement is an important determinant of their function. Pre-mRNA splicing,
furthermore, relies on the spatial proximity of the splice junctions that can
be separated by very long introns. Similar effects appear in the processing of
RNA virus genomes. Albeit a crude measure, the distribution of spatial
distances in thermodynamic equilibrium therefore provides useful information on
the overall shape of the molecule and can provide insights into the interplay of
its functional domains. Spatial distance can be approximated by the
graph-distance in RNA secondary structure. We show here that the equilibrium
distribution of graph-distances between arbitrary nucleotides can be computed
in polynomial time by means of dynamic programming. A naive implementation
would yield recursions with a very high time complexity of O(n^11). Although we
were able to reduce this to O(n^6) for many practical applications, a further
reduction seems difficult. We conclude, therefore, that sampling approaches,
which are much easier to implement, are also theoretically favorable for most
real-life applications, in particular since these primarily concern long-range
interactions in very large RNA molecules.
| [
{
"created": "Tue, 30 Jul 2013 05:03:11 GMT",
"version": "v1"
}
] | 2013-07-31 | [
[
"Backofen",
"Rolf",
""
],
[
"Fricke",
"Markus",
""
],
[
"Marz",
"Manja",
""
],
[
"Qin",
"Jing",
""
],
[
"Stadler",
"Peter F.",
""
]
] | Large RNA molecules often carry multiple functional domains whose spatial arrangement is an important determinant of their function. Pre-mRNA splicing, furthermore, relies on the spatial proximity of the splice junctions that can be separated by very long introns. Similar effects appear in the processing of RNA virus genomes. Albeit a crude measure, the distribution of spatial distances in thermodynamic equilibrium therefore provides useful information on the overall shape of the molecule and can provide insights into the interplay of its functional domains. Spatial distance can be approximated by the graph-distance in RNA secondary structure. We show here that the equilibrium distribution of graph-distances between arbitrary nucleotides can be computed in polynomial time by means of dynamic programming. A naive implementation would yield recursions with a very high time complexity of O(n^11). Although we were able to reduce this to O(n^6) for many practical applications, a further reduction seems difficult. We conclude, therefore, that sampling approaches, which are much easier to implement, are also theoretically favorable for most real-life applications, in particular since these primarily concern long-range interactions in very large RNA molecules. |
q-bio/0511036 | Sara Cuenda | Sara Cuenda and Angel Sanchez | On the discrete Peyrard-Bishop model of DNA: stationary solutions and
stability | 15 pages, 12 figures | null | 10.1063/1.2194468 | null | q-bio.OT cond-mat.soft nlin.PS | null | As a first step in the search for an analytical study of mechanical
denaturation of DNA in terms of the sequence, we study stable, stationary
solutions in the discrete, finite and homogeneous Peyrard-Bishop DNA model. We
find and classify all the stationary solutions of the model, as well as
analytic approximations of them, both in the continuum and in the discrete
limits. Our results explain the structure of the solutions reported by
Theodorakopoulos {\em et al.} [Phys. Rev. Lett. {\bf 93}, 258101 (2004)] and
provide a way to proceed to the analysis of the generalized version of the
model incorporating the genetic information.
| [
{
"created": "Tue, 22 Nov 2005 11:30:24 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Cuenda",
"Sara",
""
],
[
"Sanchez",
"Angel",
""
]
] | As a first step in the search for an analytical study of mechanical denaturation of DNA in terms of the sequence, we study stable, stationary solutions in the discrete, finite and homogeneous Peyrard-Bishop DNA model. We find and classify all the stationary solutions of the model, as well as analytic approximations of them, both in the continuum and in the discrete limits. Our results explain the structure of the solutions reported by Theodorakopoulos {\em et al.} [Phys. Rev. Lett. {\bf 93}, 258101 (2004)] and provide a way to proceed to the analysis of the generalized version of the model incorporating the genetic information. |
1311.0050 | Sandeep Choubey | Sandeep Choubey, Jane Kondev, Alvaro Sanchez | Deciphering transcriptional dynamics in vivo by counting nascent RNA
molecules | 28 pages including SI | null | 10.1016/j.bpj.2013.11.2123 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transcription of genes is the focus of most forms of regulation of gene
expression. Even though careful biochemical experimentation has revealed the
molecular mechanisms of transcription initiation for a number of different
promoters in vitro, the dynamics of this process in cells is still poorly
understood. One approach has been to measure the transcriptional output
(fluorescently labeled messenger RNAs or proteins) from single cells in a
genetically identical population, which could then be compared to predictions
from models that incorporate different molecular mechanisms of transcription
initiation. However, this approach suffers from the problem that processes
downstream from transcription can affect the measured output and therefore mask
the signature of stochastic transcription initiation on the cell-to-cell
variability of the transcriptional outputs. Here we show theoretically that
measurements of the cell-to-cell variability in the number of nascent RNAs
provide a more direct test of the mechanism of transcription initiation. We
derive exact expressions for the first two moments of the distribution of
nascent RNA molecules and apply our theory to published data for a collection
of constitutively expressed yeast genes. We find that the measured nascent RNA
distributions are inconsistent with transcription initiation proceeding via one
rate-limiting step, which has been generally inferred from measurements of
cytoplasmic messenger RNA. Instead, we propose a two-step mechanism of
initiation, which is consistent with the available data. These findings for the
yeast promoters highlight the utility of our theory for deciphering
transcriptional dynamics in vivo from experiments that count nascent RNA
molecules in single cells.
| [
{
"created": "Thu, 31 Oct 2013 22:15:53 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Jan 2014 11:37:08 GMT",
"version": "v2"
}
] | 2018-10-17 | [
[
"Choubey",
"Sandeep",
""
],
[
"Kondev",
"Jane",
""
],
[
"Sanchez",
"Alvaro",
""
]
] | Transcription of genes is the focus of most forms of regulation of gene expression. Even though careful biochemical experimentation has revealed the molecular mechanisms of transcription initiation for a number of different promoters in vitro, the dynamics of this process in cells is still poorly understood. One approach has been to measure the transcriptional output (fluorescently labeled messenger RNAs or proteins) from single cells in a genetically identical population, which could then be compared to predictions from models that incorporate different molecular mechanisms of transcription initiation. However, this approach suffers from the problem that processes downstream from transcription can affect the measured output and therefore mask the signature of stochastic transcription initiation on the cell-to-cell variability of the transcriptional outputs. Here we show theoretically that measurements of the cell-to-cell variability in the number of nascent RNAs provide a more direct test of the mechanism of transcription initiation. We derive exact expressions for the first two moments of the distribution of nascent RNA molecules and apply our theory to published data for a collection of constitutively expressed yeast genes. We find that the measured nascent RNA distributions are inconsistent with transcription initiation proceeding via one rate-limiting step, which has been generally inferred from measurements of cytoplasmic messenger RNA. Instead, we propose a two-step mechanism of initiation, which is consistent with the available data. These findings for the yeast promoters highlight the utility of our theory for deciphering transcriptional dynamics in vivo from experiments that count nascent RNA molecules in single cells. |
1705.05647 | Diego Fasoli | Diego Fasoli, Stefano Panzeri | Optimized brute-force algorithms for the bifurcation analysis of a
spin-glass-like neural network model | 22 pages, 5 figures, 4 Python scripts | Phys. Rev. E 99, 012316 (2019) | 10.1103/PhysRevE.99.012316 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bifurcation theory is a powerful tool for studying how the dynamics of a
neural network model depends on its underlying neurophysiological parameters.
However, bifurcation theory has been developed mostly for smooth dynamical
systems and for continuous-time non-smooth models, which prevents us from
understanding the changes of dynamics in some widely used classes of artificial
neural network models. This article is an attempt to fill this gap, through the
introduction of algorithms that perform a semi-analytical bifurcation analysis
of a spin-glass-like neural network model with binary firing rates and
discrete-time evolution. Our approach is based on a numerical brute-force
search of the stationary and oscillatory solutions of the spin-glass model,
from which we derive analytical expressions of its bifurcation structure by
means of the state-to-state transition probability matrix. The algorithms
determine how the network parameters affect the degree of multistability, the
emergence and the period of the neural oscillations, and the formation of
symmetry-breaking in the neural populations. While this technique can be
applied to networks with arbitrary (generally asymmetric) connectivity
matrices, in particular we introduce a highly efficient algorithm for the
bifurcation analysis of sparse networks. We also provide some examples of the
obtained bifurcation diagrams and a Python implementation of the algorithms.
| [
{
"created": "Tue, 16 May 2017 11:17:21 GMT",
"version": "v1"
}
] | 2019-01-16 | [
[
"Fasoli",
"Diego",
""
],
[
"Panzeri",
"Stefano",
""
]
] | Bifurcation theory is a powerful tool for studying how the dynamics of a neural network model depends on its underlying neurophysiological parameters. However, bifurcation theory has been developed mostly for smooth dynamical systems and for continuous-time non-smooth models, which prevents us from understanding the changes of dynamics in some widely used classes of artificial neural network models. This article is an attempt to fill this gap, through the introduction of algorithms that perform a semi-analytical bifurcation analysis of a spin-glass-like neural network model with binary firing rates and discrete-time evolution. Our approach is based on a numerical brute-force search of the stationary and oscillatory solutions of the spin-glass model, from which we derive analytical expressions of its bifurcation structure by means of the state-to-state transition probability matrix. The algorithms determine how the network parameters affect the degree of multistability, the emergence and the period of the neural oscillations, and the formation of symmetry-breaking in the neural populations. While this technique can be applied to networks with arbitrary (generally asymmetric) connectivity matrices, in particular we introduce a highly efficient algorithm for the bifurcation analysis of sparse networks. We also provide some examples of the obtained bifurcation diagrams and a Python implementation of the algorithms. |
2106.10306 | Aritro Sinharoy | Aritro Sinha Roy | An Automated Global Method for Extraction of Distance Distributions from
Electron Spin Resonance Pulsed Dipolar Signals | 4 pages, 3 figures | null | null | null | q-bio.QM physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Electron spin resonance (ESR) pulsed dipolar spectroscopy (PDS) is used
effectively in measuring nano-meter range distances for protein structure
prediction. The current global approach in extracting the distance distribution
from time domain PDS signal has multiple limitations. We present a parameter
free global method, which is more efficacious and less sensitive to signal
noise compared to the current method.
| [
{
"created": "Fri, 18 Jun 2021 18:30:36 GMT",
"version": "v1"
}
] | 2021-06-22 | [
[
"Roy",
"Aritro Sinha",
""
]
] | Electron spin resonance (ESR) pulsed dipolar spectroscopy (PDS) is used effectively in measuring nano-meter range distances for protein structure prediction. The current global approach in extracting the distance distribution from time domain PDS signal has multiple limitations. We present a parameter free global method, which is more efficacious and less sensitive to signal noise compared to the current method. |
1502.06406 | Armita Nourmohammad | Armita Nourmohammad, Joachim Rambeau, Torsten Held, Johannes Berg,
Michael Lassig | Pervasive adaptation of gene expression in Drosophila | minor changes in evaluation of the dataset | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gene expression levels are important molecular quantitative traits that link
genotypes to molecular functions and fitness. In Drosophila, population-genetic
studies in recent years have revealed substantial adaptive evolution at the
genomic level. However, the evolutionary modes of gene expression have remained
controversial. Here we present evidence that adaptation dominates the evolution
of gene expression levels in flies. We show that 63% of the observed expression
divergence across seven Drosophila species are adaptive changes driven by
directional selection. Our results are derived from the variation of expression
within species and the time-resolved divergence across a family of related
species, using a new inference method for selection. We identify functional
classes of adaptively regulated genes, as well as sex-specific adaptation
occurring predominantly in males. Our analysis opens a new avenue to map
system-wide selection on molecular quantitative traits independently of their
genetic basis.
| [
{
"created": "Mon, 23 Feb 2015 12:51:54 GMT",
"version": "v1"
},
{
"created": "Fri, 3 Apr 2015 01:27:46 GMT",
"version": "v2"
}
] | 2015-04-06 | [
[
"Nourmohammad",
"Armita",
""
],
[
"Rambeau",
"Joachim",
""
],
[
"Held",
"Torsten",
""
],
[
"Berg",
"Johannes",
""
],
[
"Lassig",
"Michael",
""
]
] | Gene expression levels are important molecular quantitative traits that link genotypes to molecular functions and fitness. In Drosophila, population-genetic studies in recent years have revealed substantial adaptive evolution at the genomic level. However, the evolutionary modes of gene expression have remained controversial. Here we present evidence that adaptation dominates the evolution of gene expression levels in flies. We show that 63% of the observed expression divergence across seven Drosophila species are adaptive changes driven by directional selection. Our results are derived from the variation of expression within species and the time-resolved divergence across a family of related species, using a new inference method for selection. We identify functional classes of adaptively regulated genes, as well as sex-specific adaptation occurring predominantly in males. Our analysis opens a new avenue to map system-wide selection on molecular quantitative traits independently of their genetic basis. |
1109.2648 | Simon DeDeo | Simon DeDeo and David C. Krakauer | Dynamics and Processing in Finite Self-Similar Networks | 31 pages, 8 figures, to appear in J. Roy. Soc. Interface | null | 10.1098/rsif.2011.0840 | SFI Working Paper #12-03-003 | q-bio.MN cond-mat.stat-mech q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A common feature of biological networks is the geometric property of
self-similarity. Molecular regulatory networks through to circulatory systems,
nervous systems, social systems and ecological trophic networks, show
self-similar connectivity at multiple scales. We analyze the relationship
between topology and signaling in contrasting classes of such topologies. We
find that networks differ in their ability to contain or propagate signals
between arbitrary nodes in a network depending on whether they possess
branching or loop-like features. Networks also differ in how they respond to
noise, such that one allows for greater integration at high noise, and this
performance is reversed at low noise. Surprisingly, small-world topologies,
with diameters logarithmic in system size, have slower dynamical timescales,
and may be less integrated (more modular) than networks with longer path
lengths. All of these phenomena are essentially mesoscopic, vanishing in the
infinite limit but producing strong effects at sizes and timescales relevant to
biology.
| [
{
"created": "Mon, 12 Sep 2011 23:38:20 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Feb 2012 18:00:20 GMT",
"version": "v2"
}
] | 2012-03-09 | [
[
"DeDeo",
"Simon",
""
],
[
"Krakauer",
"David C.",
""
]
] | A common feature of biological networks is the geometric property of self-similarity. Molecular regulatory networks through to circulatory systems, nervous systems, social systems and ecological trophic networks, show self-similar connectivity at multiple scales. We analyze the relationship between topology and signaling in contrasting classes of such topologies. We find that networks differ in their ability to contain or propagate signals between arbitrary nodes in a network depending on whether they possess branching or loop-like features. Networks also differ in how they respond to noise, such that one allows for greater integration at high noise, and this performance is reversed at low noise. Surprisingly, small-world topologies, with diameters logarithmic in system size, have slower dynamical timescales, and may be less integrated (more modular) than networks with longer path lengths. All of these phenomena are essentially mesoscopic, vanishing in the infinite limit but producing strong effects at sizes and timescales relevant to biology. |
1410.6526 | Alexey Zaikin | Russell Bates, Oleg Blyuss, Ahmed Alsaedi, and Alexey Zaikin | Effect of noise in intelligent cellular decision making | null | null | 10.1371/journal.pone.0125079 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Similar to intelligent multicellular neural networks controlling human
brains, even single cells surprisingly are able to make intelligent decisions
to classify several external stimuli or to associate them. This happens because
of the fact that gene regulatory networks can perform as perceptrons, simple
intelligent schemes known from studies on Artificial Intelligence. We study the
role of genetic noise in intelligent decision making at the genetic level and
show that noise can play a constructive role helping cells to make a proper
decision. We show this using the example of a simple genetic classifier able to
classify two external stimuli.
| [
{
"created": "Thu, 23 Oct 2014 23:25:56 GMT",
"version": "v1"
}
] | 2018-11-21 | [
[
"Bates",
"Russell",
""
],
[
"Blyuss",
"Oleg",
""
],
[
"Alsaedi",
"Ahmed",
""
],
[
"Zaikin",
"Alexey",
""
]
] | Similar to intelligent multicellular neural networks controlling human brains, even single cells surprisingly are able to make intelligent decisions to classify several external stimuli or to associate them. This happens because of the fact that gene regulatory networks can perform as perceptrons, simple intelligent schemes known from studies on Artificial Intelligence. We study the role of genetic noise in intelligent decision making at the genetic level and show that noise can play a constructive role helping cells to make a proper decision. We show this using the example of a simple genetic classifier able to classify two external stimuli. |
1503.05216 | Pierre-Alexandre Jacques Bliman | Pierre-Alexandre J. Bliman, M. Soledad Aronna, Fl\'avio C. Coelho,
Moacyr A.H.B. da Silva | Ensuring successful introduction of Wolbachia in natural populations of
Aedes aegypti by means of feedback control | 24 pages, 5 figures | Journal of Mathematical Biology, 76(5):1269-1300, 2018 | 10.1007/s00285-017-1174-x | null | q-bio.QM math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The control of the spread of dengue fever by introduction of the
intracellular parasitic bacterium Wolbachia in populations of the vector Aedes
aegypti, is presently one of the most promising tools for eliminating dengue,
in the absence of an efficient vaccine. The success of this operation requires
locally careful planning to determine the adequate number of individuals
carrying the Wolbachia parasite that need to be introduced into the natural
population. The introduced mosquitoes are expected to eventually replace the
Wolbachia-free population and guarantee permanent protection against the
transmission of dengue to humans.
In this study, we propose and analyze a model describing the fundamental
aspects of the competition between mosquitoes carrying Wolbachia and mosquitoes
free of the parasite. We then use feedback control techniques to devise an
introduction protocol which is proved to guarantee that the population
converges to a stable equilibrium where the totality of mosquitoes carry
Wolbachia.
| [
{
"created": "Tue, 17 Mar 2015 20:59:20 GMT",
"version": "v1"
}
] | 2020-10-02 | [
[
"Bliman",
"Pierre-Alexandre J.",
""
],
[
"Aronna",
"M. Soledad",
""
],
[
"Coelho",
"Flávio C.",
""
],
[
"da Silva",
"Moacyr A. H. B.",
""
]
] | The control of the spread of dengue fever by introduction of the intracellular parasitic bacterium Wolbachia in populations of the vector Aedes aegypti, is presently one of the most promising tools for eliminating dengue, in the absence of an efficient vaccine. The success of this operation requires locally careful planning to determine the adequate number of individuals carrying the Wolbachia parasite that need to be introduced into the natural population. The introduced mosquitoes are expected to eventually replace the Wolbachia-free population and guarantee permanent protection against the transmission of dengue to human. In this study, we propose and analyze a model describing the fundamental aspects of the competition between mosquitoes carrying Wolbachia and mosquitoes free of the parasite. We then use feedback control techniques to devise an introduction protocol which is proved to guarantee that the population converges to a stable equilibrium where the totality of mosquitoes carry Wolbachia. |
q-bio/0504028 | Timothy Newman | T. J. Newman | Modeling multi-cellular systems using sub-cellular elements | 20 pages, 4 figures | null | null | null | q-bio.QM | null | We introduce a model for describing the dynamics of large numbers of
interacting cells. The fundamental dynamical variables in the model are
sub-cellular elements, which interact with each other through phenomenological
intra- and inter-cellular potentials. Advantages of the model include i)
adaptive cell-shape dynamics, ii) flexible accommodation of additional
intra-cellular biology, and iii) the absence of an underlying grid. We present
here a detailed description of the model, and use successive mean-field
approximations to connect it to more coarse-grained approaches, such as
discrete cell-based algorithms and coupled partial differential equations. We
also discuss efficient algorithms for encoding the model, and give an example
of a simulation of an epithelial sheet. Given the biological flexibility of the
model, we propose that it can be used effectively for modeling a range of
multi-cellular processes, such as tumor dynamics and embryogenesis.
| [
{
"created": "Wed, 20 Apr 2005 19:17:01 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Newman",
"T. J.",
""
]
] | We introduce a model for describing the dynamics of large numbers of interacting cells. The fundamental dynamical variables in the model are sub-cellular elements, which interact with each other through phenomenological intra- and inter-cellular potentials. Advantages of the model include i) adaptive cell-shape dynamics, ii) flexible accommodation of additional intra-cellular biology, and iii) the absence of an underlying grid. We present here a detailed description of the model, and use successive mean-field approximations to connect it to more coarse-grained approaches, such as discrete cell-based algorithms and coupled partial differential equations. We also discuss efficient algorithms for encoding the model, and give an example of a simulation of an epithelial sheet. Given the biological flexibility of the model, we propose that it can be used effectively for modeling a range of multi-cellular processes, such as tumor dynamics and embryogenesis. |
1708.04457 | Cameron Smith | Cameron A. Smith and Christian A. Yates | The auxiliary region method: A hybrid method for coupling PDE- and
Brownian-based dynamics for reaction-diffusion systems | 29 pages, 14 figures, 2 tables | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reaction-diffusion systems are used to represent many biological and physical
phenomena. They model the random motion of particles (diffusion) and
interactions between them (reactions). Such systems can be modelled at multiple
scales with varying degrees of accuracy and computational efficiency. When
representing genuinely multiscale phenomena, fine-scale models can be
prohibitively expensive, whereas coarser models, although cheaper, often lack
sufficient detail to accurately represent the phenomenon at hand. Spatial
hybrid methods couple two or more of these representations in order to improve
efficiency without compromising accuracy.
In this paper, we present a novel spatial hybrid method, which we call the
auxiliary region method (ARM), which couples PDE and Brownian-based
representations of reaction-diffusion systems. Numerical PDE solutions on one
side of an interface are coupled to Brownian-based dynamics on the other side
using compartment-based "auxiliary regions". We demonstrate that the hybrid
method is able to simulate reaction-diffusion dynamics for a number of
different test problems with high accuracy. Further, we undertake error
analysis on the ARM which demonstrates that it is robust to changes in the free
parameters in the model, where previous coupling algorithms are not. In
particular, we envisage that the method will be applicable for a wide range of
spatial multiscale problems including filopodial dynamics, intracellular
signalling, embryogenesis and travelling wave phenomena.
| [
{
"created": "Tue, 15 Aug 2017 10:48:44 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Jun 2018 13:51:07 GMT",
"version": "v2"
}
] | 2018-06-08 | [
[
"Smith",
"Cameron A.",
""
],
[
"Yates",
"Christian A.",
""
]
] | Reaction-diffusion systems are used to represent many biological and physical phenomena. They model the random motion of particles (diffusion) and interactions between them (reactions). Such systems can be modelled at multiple scales with varying degrees of accuracy and computational efficiency. When representing genuinely multiscale phenomena, fine-scale models can be prohibitively expensive, whereas coarser models, although cheaper, often lack sufficient detail to accurately represent the phenomenon at hand. Spatial hybrid methods couple two or more of these representations in order to improve efficiency without compromising accuracy. In this paper, we present a novel spatial hybrid method, which we call the auxiliary region method (ARM), which couples PDE and Brownian-based representations of reaction-diffusion systems. Numerical PDE solutions on one side of an interface are coupled to Brownian-based dynamics on the other side using compartment-based "auxiliary regions". We demonstrate that the hybrid method is able to simulate reaction-diffusion dynamics for a number of different test problems with high accuracy. Further, we undertake error analysis on the ARM which demonstrates that it is robust to changes in the free parameters in the model, where previous coupling algorithms are not. In particular, we envisage that the method will be applicable for a wide range of spatial multi-scales problems including, filopodial dynamics, intracellular signalling, embryogenesis and travelling wave phenomena. |
2301.01103 | Andrew Schulz | Andrew Schulz (1 and 2), Cassie Shriver (3), Suzanne Stathatos (4),
Benjamin Seleb (3), Emily Weigel (3), Young-Hui Chang (3), M. Saad Bhamla
(5), David Hu (1 and 3), Joseph R. Mendelson III (3 and 6). ((1) School of
Mechanical Engineering Georgia Tech, (2) Max Planck Institute for Intelligent
Systems, (3) School of Biological Sciences Georgia Tech, (4) School of
Computing and Mathematical Sciences California Institute of Technology, (5)
School of Chemical and Biomolecular Engineering Georgia Tech, (6) Zoo
Atlanta) | Conservation Tools: The Next Generation of Engineering--Biology
Collaborations | null | null | 10.1098/rsif.2023.0232 | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The recent increase in public and academic interest in preserving
biodiversity has led to the growth of the field of conservation technology.
This field involves designing and constructing tools that utilize technology to
aid in the conservation of wildlife. In this article, we will use case studies
to demonstrate the importance of designing conservation tools with
human-wildlife interaction in mind and provide a framework for creating
successful tools. These case studies include a range of complexities, from
simple cat collars to machine learning and game theory methodologies. Our goal
is to introduce and inform current and future researchers in the field of
conservation technology and provide references for educating the next
generation of conservation technologists. Conservation technology not only has
the potential to benefit biodiversity but also has broader impacts on fields
such as sustainability and environmental protection. By using innovative
technologies to address conservation challenges, we can find more effective and
efficient solutions to protect and preserve our planet's resources.
| [
{
"created": "Tue, 3 Jan 2023 13:58:31 GMT",
"version": "v1"
}
] | 2023-08-17 | [
[
"Schulz",
"Andrew",
"",
"1 and 2"
],
[
"Shriver",
"Cassie",
"",
"3"
],
[
"Stathatos",
"Suzanne",
"",
"4"
],
[
"Seleb",
"Benjamin",
"",
"3"
],
[
"Weigel",
"Emily",
"",
"3"
],
[
"Chang",
"Young-Hui",
"",
"3"
],
[
"Bhamla",
"M. Saad",
"",
"5"
],
[
"Hu",
"David",
"",
"1 and 3"
],
[
"Mendelson",
"Joseph R.",
"III",
"3 and 6"
]
] | The recent increase in public and academic interest in preserving biodiversity has led to the growth of the field of conservation technology. This field involves designing and constructing tools that utilize technology to aid in the conservation of wildlife. In this article, we will use case studies to demonstrate the importance of designing conservation tools with human-wildlife interaction in mind and provide a framework for creating successful tools. These case studies include a range of complexities, from simple cat collars to machine learning and game theory methodologies. Our goal is to introduce and inform current and future researchers in the field of conservation technology and provide references for educating the next generation of conservation technologists. Conservation technology not only has the potential to benefit biodiversity but also has broader impacts on fields such as sustainability and environmental protection. By using innovative technologies to address conservation challenges, we can find more effective and efficient solutions to protect and preserve our planet's resources. |
2309.00646 | Igor \v{S}evo | Igor \v{S}evo | Intelligence as a Measure of Consciousness | 10 pages | null | null | null | q-bio.NC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evaluating artificial systems for signs of consciousness is increasingly
becoming a pressing concern, and a rigorous psychometric measurement framework
may be of crucial importance in evaluating large language models in this
regard. Most prominent theories of consciousness, both scientific and
metaphysical, argue for different kinds of information coupling as a necessary
component of human-like consciousness. By comparing information coupling in
human and animal brains, human cognitive development, emergent abilities, and
mental representation development to analogous phenomena in large language
models, I argue that psychometric measures of intelligence, such as the
g-factor or IQ, indirectly approximate the extent of conscious experience.
Based on a broader source of both scientific and metaphysical theories of
consciousness, I argue that all systems possess a degree of consciousness
ascertainable psychometrically and that psychometric measures of intelligence
may be used to gauge relative similarities of conscious experiences across
disparate systems, be they artificial or human.
| [
{
"created": "Wed, 30 Aug 2023 17:15:04 GMT",
"version": "v1"
},
{
"created": "Wed, 6 Sep 2023 18:17:15 GMT",
"version": "v2"
}
] | 2023-09-08 | [
[
"Ševo",
"Igor",
""
]
] | Evaluating artificial systems for signs of consciousness is increasingly becoming a pressing concern, and a rigorous psychometric measurement framework may be of crucial importance in evaluating large language models in this regard. Most prominent theories of consciousness, both scientific and metaphysical, argue for different kinds of information coupling as a necessary component of human-like consciousness. By comparing information coupling in human and animal brains, human cognitive development, emergent abilities, and mental representation development to analogous phenomena in large language models, I argue that psychometric measures of intelligence, such as the g-factor or IQ, indirectly approximate the extent of conscious experience. Based on a broader source of both scientific and metaphysical theories of consciousness, I argue that all systems possess a degree of consciousness ascertainable psychometrically and that psychometric measures of intelligence may be used to gauge relative similarities of conscious experiences across disparate systems, be they artificial or human. |
1701.00707 | Kushal Shah | Shubham Kundal, Raunak Lohiya and Kushal Shah | iCorr: Complex correlation method to detect origin of replication in
prokaryotic and eukaryotic genomes | null | null | null | null | q-bio.GN physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Computational prediction of origin of replication (ORI) has been of great
interest in bioinformatics and several methods including GC Skew, Z curve,
auto-correlation etc. have been explored in the past. In this paper, we have
extended the auto-correlation method to predict ORI location with much higher
resolution for prokaryotes. The proposed complex correlation method (iCorr)
converts the genome sequence into a sequence of complex numbers by mapping the
nucleotides to {+1,-1,+i,-i} instead of {+1,-1} used in the auto-correlation
method (here, 'i' is square root of -1). Thus, the iCorr method uses
information about the positions of all the four nucleotides unlike the earlier
auto-correlation method which uses the positional information of only one
nucleotide. Also, this earlier method required visual inspection of the
obtained graphs to identify the location of origin of replication. The proposed
iCorr method does away with this need and is able to identify the origin
location simply by picking the peak in the iCorr graph. The iCorr method also
works for a much smaller segment size compared to the earlier auto-correlation
method, which can be very helpful in experimental validation of the
computational predictions. We have also developed a variant of the iCorr method
to predict ORI location in eukaryotes and have tested it with the
experimentally known origin locations of S. cerevisiae with an average accuracy
of 71.76%.
| [
{
"created": "Sat, 31 Dec 2016 02:07:16 GMT",
"version": "v1"
},
{
"created": "Sun, 19 Feb 2017 06:53:39 GMT",
"version": "v2"
}
] | 2017-02-21 | [
[
"Kundal",
"Shubham",
""
],
[
"Lohiya",
"Raunak",
""
],
[
"Shah",
"Kushal",
""
]
] | Computational prediction of origin of replication (ORI) has been of great interest in bioinformatics and several methods including GC Skew, Z curve, auto-correlation etc. have been explored in the past. In this paper, we have extended the auto-correlation method to predict ORI location with much higher resolution for prokaryotes. The proposed complex correlation method (iCorr) converts the genome sequence into a sequence of complex numbers by mapping the nucleotides to {+1,-1,+i,-i} instead of {+1,-1} used in the auto-correlation method (here, 'i' is square root of -1). Thus, the iCorr method uses information about the positions of all the four nucleotides unlike the earlier auto-correlation method which uses the positional information of only one nucleotide. Also, this earlier method required visual inspection of the obtained graphs to identify the location of origin of replication. The proposed iCorr method does away with this need and is able to identify the origin location simply by picking the peak in the iCorr graph. The iCorr method also works for a much smaller segment size compared to the earlier auto-correlation method, which can be very helpful in experimental validation of the computational predictions. We have also developed a variant of the iCorr method to predict ORI location in eukaryotes and have tested it with the experimentally known origin locations of S. cerevisiae with an average accuracy of 71.76%. |
1108.4795 | Primoz Ziherl | A. Hocevar and P. Ziherl | Collective mechanics of embryogenesis: Formation of ventral furrow in
Drosophila | 4 pages, 3 figures | null | null | null | q-bio.CB cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a 2D mechanical model of the ventral furrow formation in
Drosophila that is based on undifferentiated epithelial cells of identical
properties whose energy resides in their membrane. Depending on the relative
tensions of the apical, basal, and lateral sides, the minimal-energy states of
the embryo cross-section include circular, elliptical, biconcave, and buckled
furrow shapes. We discuss the possible shape transformation consistent with
reported experimental observations, arguing that generic collective mechanics
may play an important role in the embryonic development in Drosophila.
| [
{
"created": "Wed, 24 Aug 2011 09:48:51 GMT",
"version": "v1"
}
] | 2011-08-25 | [
[
"Hocevar",
"A.",
""
],
[
"Ziherl",
"P.",
""
]
] | We propose a 2D mechanical model of the ventral furrow formation in Drosophila that is based on undifferentiated epithelial cells of identical properties whose energy resides in their membrane. Depending on the relative tensions of the apical, basal, and lateral sides, the minimal-energy states of the embryo cross-section includes circular, elliptical, biconcave, and buckled furrow shapes. We discuss the possible shape transformation consistent with reported experimental observations, arguing that generic collective mechanics may play an important role in the embryonic development in Drosophila. |
0806.2694 | William Bialek | Greg J Stephens, Thierry Mora, Gasper Tkacik and William Bialek | Thermodynamics of natural images | null | null | 10.1103/PhysRevLett.110.018701 | null | q-bio.NC cond-mat.stat-mech q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The scale invariance of natural images suggests an analogy to the statistical
mechanics of physical systems at a critical point. Here we examine the
distribution of pixels in small image patches and show how to construct the
corresponding thermodynamics. We find evidence for criticality in a diverging
specific heat, which corresponds to large fluctuations in how "surprising" we
find individual images, and in the quantitative form of the entropy vs. energy.
The energy landscape derived from our thermodynamic framework identifies
special image configurations that have intrinsic error correcting properties,
and neurons which could detect these features have a strong resemblance to the
cells found in primary visual cortex.
| [
{
"created": "Tue, 17 Jun 2008 02:25:52 GMT",
"version": "v1"
}
] | 2013-05-30 | [
[
"Stephens",
"Greg J",
""
],
[
"Mora",
"Thierry",
""
],
[
"Tkacik",
"Gasper",
""
],
[
"Bialek",
"William",
""
]
] | The scale invariance of natural images suggests an analogy to the statistical mechanics of physical systems at a critical point. Here we examine the distribution of pixels in small image patches and show how to construct the corresponding thermodynamics. We find evidence for criticality in a diverging specific heat, which corresponds to large fluctuations in how "surprising" we find individual images, and in the quantitative form of the entropy vs. energy. The energy landscape derived from our thermodynamic framework identifies special image configurations that have intrinsic error correcting properties, and neurons which could detect these features have a strong resemblance to the cells found in primary visual cortex. |
2311.03569 | Robert Noble | Blair Colyer, Maciej Bak, David Basanta, Robert Noble | A seven-step guide to spatial, agent-based modelling of tumour evolution | 19 pages, 4 figures | null | null | null | q-bio.QM q-bio.PE q-bio.TO | http://creativecommons.org/licenses/by-sa/4.0/ | Spatial agent-based models are increasingly used to investigate the evolution
of solid tumours subject to localised cell-cell interactions and
microenvironmental heterogeneity. Here we present a non-technical step by step
guide to developing such a model from first principles, aimed at both aspiring
modellers and other biologists and oncologists who wish to understand the
assumptions and limitations of this approach. Stressing the importance of
tailoring the model structure to that of the biological system, we describe
methods of increasing complexity, from the basic Eden growth model up to
off-lattice simulations with diffusible factors. We examine choices that
unavoidably arise in model design, such as implementation, parameterisation,
visualisation, and reproducibility. Each topic is illustrated with examples
drawn from recent research studies and state of the art modelling platforms. We
emphasise the benefits of simpler models that aim to match the complexity of
the phenomena of interest, rather than that of the entire biological system.
| [
{
"created": "Mon, 6 Nov 2023 22:12:54 GMT",
"version": "v1"
}
] | 2023-11-08 | [
[
"Colyer",
"Blair",
""
],
[
"Bak",
"Maciej",
""
],
[
"Basanta",
"David",
""
],
[
"Noble",
"Robert",
""
]
] | Spatial agent-based models are increasingly used to investigate the evolution of solid tumours subject to localised cell-cell interactions and microenvironmental heterogeneity. Here we present a non-technical step by step guide to developing such a model from first principles, aimed at both aspiring modellers and other biologists and oncologists who wish to understand the assumptions and limitations of this approach. Stressing the importance of tailoring the model structure to that of the biological system, we describe methods of increasing complexity, from the basic Eden growth model up to off-lattice simulations with diffusible factors. We examine choices that unavoidably arise in model design, such as implementation, parameterisation, visualisation, and reproducibility. Each topic is illustrated with examples drawn from recent research studies and state of the art modelling platforms. We emphasise the benefits of simpler models that aim to match the complexity of the phenomena of interest, rather than that of the entire biological system. |
1001.3101 | Edgar Delgado-Eckert PhD | Edgar Delgado-Eckert, Samuel Ojosnegros, Niko Beerenwinkel | The evolution of virulence in RNA viruses under a
competition-colonization trade-off | Submitted to peer-reviewed journal | Bull Math Biol, 2010 | 10.1007/s11538-010-9596-2 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RNA viruses exist in large intra-host populations which display great
genotypic and phenotypic diversity. We analyze a model of viral competition
between two different viral strains infecting a constantly replenished cell
pool, in which we assume a trade-off between the virus' colonization skills
(cell killing ability or virulence) and its local competition skills
(replication performance within coinfected cells). We characterize the
conditions that allow for viral spread by means of the basic reproductive
number and show that a local coexistence equilibrium exists, which is
asymptotically stable. At this equilibrium, the less virulent competitor has a
reproductive advantage over the more virulent colonizer. The equilibria at
which one strain outcompetes the other one are unstable, i.e., a second viral
strain is always able to permanently invade. One generalization of the model is
to consider multiple viral strains, each one displaying a different virulence.
However, to account for the large phenotypic diversity in viral populations, we
consider a continuous spectrum of virulences and present a continuum limit of
this multiple viral strains model that describes the time evolution of an
initial continuous distribution of virulence. We provide a proof of the
existence of solutions of the model's equations and present numerical
approximations of solutions for different initial distributions. Our
simulations suggest that initial continuous distributions of virulence evolve
towards a stationary distribution that is extremely skewed in favor of
competitors. Consequently, collective virulence attenuation takes place. This
finding may contribute to understanding the phenomenon of virulence
attenuation, which has been reported in previous experimental studies.
| [
{
"created": "Mon, 18 Jan 2010 16:43:19 GMT",
"version": "v1"
}
] | 2011-01-18 | [
[
"Delgado-Eckert",
"Edgar",
""
],
[
"Ojosnegros",
"Samuel",
""
],
[
"Beerenwinkel",
"Niko",
""
]
] | RNA viruses exist in large intra-host populations which display great genotypic and phenotypic diversity. We analyze a model of viral competition between two different viral strains infecting a constantly replenished cell pool, in which we assume a trade-off between the virus' colonization skills (cell killing ability or virulence) and its local competition skills (replication performance within coinfected cells). We characterize the conditions that allow for viral spread by means of the basic reproductive number and show that a local coexistence equilibrium exists, which is asymptotically stable. At this equilibrium, the less virulent competitor has a reproductive advantage over the more virulent colonizer. The equilibria at which one strain outcompetes the other one are unstable, i.e., a second viral strain is always able to permanently invade. One generalization of the model is to consider multiple viral strains, each one displaying a different virulence. However, to account for the large phenotypic diversity in viral populations, we consider a continuous spectrum of virulences and present a continuum limit of this multiple viral strains model that describes the time evolution of an initial continuous distribution of virulence. We provide a proof of the existence of solutions of the model's equations and present numerical approximations of solutions for different initial distributions. Our simulations suggest that initial continuous distributions of virulence evolve towards a stationary distribution that is extremely skewed in favor of competitors. Consequently, collective virulence attenuation takes place. This finding may contribute to understanding the phenomenon of virulence attenuation, which has been reported in previous experimental studies. |
1403.0379 | Marcin Skwark | Christoph Feinauer, Marcin J. Skwark, Andrea Pagnani and Erik Aurell | Improving contact prediction along three dimensions | 19 pages, 8 figures in main text; 7 pages, 6 figures in supporting
information | null | 10.1371/journal.pcbi.1003847 | null | q-bio.BM cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Correlation patterns in multiple sequence alignments of homologous proteins
can be exploited to infer information on the three-dimensional structure of
their members. The typical pipeline to address this task, which we in this
paper refer to as the three dimensions of contact prediction, is to: (i) filter
and align the raw sequence data representing the evolutionarily related
proteins; (ii) choose a predictive model to describe a sequence alignment;
(iii) infer the model parameters and interpret them in terms of structural
properties, such as an accurate contact map. We show here that all three
dimensions are important for overall prediction success. In particular, we show
that it is possible to improve significantly along the second dimension by
going beyond the pair-wise Potts models from statistical physics, which have
hitherto been the focus of the field. These (simple) extensions are motivated
by multiple sequence alignments often containing long stretches of gaps which,
as a data feature, would be rather untypical for independent samples drawn from
a Potts model. Using a large test set of proteins we show that the combined
improvements along the three dimensions are as large as any reported to date.
| [
{
"created": "Mon, 3 Mar 2014 10:46:01 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Mar 2014 10:02:31 GMT",
"version": "v2"
}
] | 2015-06-18 | [
[
"Feinauer",
"Christoph",
""
],
[
"Skwark",
"Marcin J.",
""
],
[
"Pagnani",
"Andrea",
""
],
[
"Aurell",
"Erik",
""
]
] | Correlation patterns in multiple sequence alignments of homologous proteins can be exploited to infer information on the three-dimensional structure of their members. The typical pipeline to address this task, which we in this paper refer to as the three dimensions of contact prediction, is to: (i) filter and align the raw sequence data representing the evolutionarily related proteins; (ii) choose a predictive model to describe a sequence alignment; (iii) infer the model parameters and interpret them in terms of structural properties, such as an accurate contact map. We show here that all three dimensions are important for overall prediction success. In particular, we show that it is possible to improve significantly along the second dimension by going beyond the pair-wise Potts models from statistical physics, which have hitherto been the focus of the field. These (simple) extensions are motivated by multiple sequence alignments often containing long stretches of gaps which, as a data feature, would be rather untypical for independent samples drawn from a Potts model. Using a large test set of proteins we show that the combined improvements along the three dimensions are as large as any reported to date. |
0905.4542 | John Rhodes | Elizabeth S. Allman, Mark T. Holder, John A. Rhodes | Estimating Trees from Filtered Data: Identifiability of Models for
Morphological Phylogenetics | 31 pages, 4 figures; minor changes to reflect version to be published | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As an alternative to parsimony analyses, stochastic models have been proposed
(Lewis, 2001), (Nylander, et al., 2004) for morphological characters, so that
maximum likelihood or Bayesian analyses may be used for phylogenetic inference.
A key feature of these models is that they account for ascertainment bias, in
that only varying, or parsimony-informative characters are observed. However,
statistical consistency of such model-based inference requires that the model
parameters be identifiable from the joint distribution they entail, and this
issue has not been addressed.
Here we prove that parameters for several such models, with finite state
spaces of arbitrary size, are identifiable, provided the tree has at least 8
leaves. If the tree topology is already known, then 7 leaves suffice for
identifiability of the numerical parameters. The method of proof involves first
inferring a full distribution of both parsimony-informative and non-informative
pattern joint probabilities from the parsimony-informative ones, using
phylogenetic invariants. The failure of identifiability of the tree parameter
for 4-taxon trees is also investigated.
| [
{
"created": "Thu, 28 May 2009 00:41:37 GMT",
"version": "v1"
},
{
"created": "Sun, 20 Dec 2009 20:54:47 GMT",
"version": "v2"
}
] | 2009-12-20 | [
[
"Allman",
"Elizabeth S.",
""
],
[
"Holder",
"Mark T.",
""
],
[
"Rhodes",
"John A.",
""
]
] | As an alternative to parsimony analyses, stochastic models have been proposed (Lewis, 2001), (Nylander, et al., 2004) for morphological characters, so that maximum likelihood or Bayesian analyses may be used for phylogenetic inference. A key feature of these models is that they account for ascertainment bias, in that only varying, or parsimony-informative characters are observed. However, statistical consistency of such model-based inference requires that the model parameters be identifiable from the joint distribution they entail, and this issue has not been addressed. Here we prove that parameters for several such models, with finite state spaces of arbitrary size, are identifiable, provided the tree has at least 8 leaves. If the tree topology is already known, then 7 leaves suffice for identifiability of the numerical parameters. The method of proof involves first inferring a full distribution of both parsimony-informative and non-informative pattern joint probabilities from the parsimony-informative ones, using phylogenetic invariants. The failure of identifiability of the tree parameter for 4-taxon trees is also investigated. |