| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1111.6495 | Taiki Takahashi | Taiki Takahashi | A neuroeconomic theory of bidirectional synaptic plasticity and
addiction | null | null | null | null | q-bio.NC q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuronal mechanisms underlying addiction have been attracting attention in
neurobiology, economics, neuropsychiatry, and neuroeconomics. This paper
proposes a possible link between the economic theory of addiction (Becker and
Murphy, 1988) and the neurobiological theory of bidirectional synaptic plasticity
(Bienenstock, Cooper, Munro, 1982) based on recent findings in neuroeconomics
and neurobiology of addiction. Furthermore, it is suggested that several
neurobiological substrates such as cortisol (a stress hormone), NMDA and AMPA
receptors/subunits and intracellular calcium in the postsynaptic neurons are
critical factors determining parameters in Becker and Murphy's economic theory
of addiction. Future directions in the application of the theory to studies in
neuroeconomics and neuropsychiatry of addiction and its relation to stress at
the molecular level are discussed.
| [
{
"created": "Tue, 22 Nov 2011 22:45:23 GMT",
"version": "v1"
}
] | 2011-11-29 | [
[
"Takahashi",
"Taiki",
""
]
] | Neuronal mechanisms underlying addiction have been attracting attention in neurobiology, economics, neuropsychiatry, and neuroeconomics. This paper proposes a possible link between the economic theory of addiction (Becker and Murphy, 1988) and the neurobiological theory of bidirectional synaptic plasticity (Bienenstock, Cooper, Munro, 1982) based on recent findings in neuroeconomics and neurobiology of addiction. Furthermore, it is suggested that several neurobiological substrates such as cortisol (a stress hormone), NMDA and AMPA receptors/subunits and intracellular calcium in the postsynaptic neurons are critical factors determining parameters in Becker and Murphy's economic theory of addiction. Future directions in the application of the theory to studies in neuroeconomics and neuropsychiatry of addiction and its relation to stress at the molecular level are discussed. |
q-bio/0504008 | Ulrich S. Schwarz | Ulrich S. Schwarz (1) and Ronen Alon (2) ((1) MPI Colloids and
Interfaces, (2) Weizmann Institute) | L-selectin mediated leukocyte tethering in shear flow is controlled by
multiple contacts and cytoskeletal anchorage facilitating fast rebinding
events | 9 pages, Revtex, 4 Postscript figures included | PNAS 101: 6940-6945 (2004) | 10.1073/pnas.0305822101 | null | q-bio.SC | null | L-selectin mediated tethers result in leukocyte rolling only above a
threshold in shear. Here we present biophysical modeling based on recently
published data from flow chamber experiments (Dwir et al., J. Cell Biol. 163:
649-659, 2003) which supports the interpretation that L-selectin mediated
tethers below the shear threshold correspond to single L-selectin carbohydrate
bonds dissociating on the time scale of milliseconds, whereas L-selectin
mediated tethers above the shear threshold are stabilized by multiple bonds and
fast rebinding of broken bonds, resulting in tether lifetimes on the timescale
of $10^{-1}$ seconds. Our calculations for cluster dissociation suggest that
the single molecule rebinding rate is of the order of $10^4$ Hz. A similar
estimate results if increased tether dissociation for tail-truncated L-selectin
mutants above the shear threshold is modeled as diffusive escape of single
receptors from the rebinding region due to increased mobility. Using computer
simulations, we show that our model yields first order dissociation kinetics
and exponential dependence of tether dissociation rates on shear stress. Our
results suggest that multiple contacts, cytoskeletal anchorage of L-selectin
and local rebinding of ligand play important roles in L-selectin tether
stabilization and progression of tethers into persistent rolling on endothelial
surfaces.
| [
{
"created": "Wed, 6 Apr 2005 02:13:15 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Schwarz",
"Ulrich S.",
""
],
[
"Alon",
"Ronen",
""
]
] | L-selectin mediated tethers result in leukocyte rolling only above a threshold in shear. Here we present biophysical modeling based on recently published data from flow chamber experiments (Dwir et al., J. Cell Biol. 163: 649-659, 2003) which supports the interpretation that L-selectin mediated tethers below the shear threshold correspond to single L-selectin carbohydrate bonds dissociating on the time scale of milliseconds, whereas L-selectin mediated tethers above the shear threshold are stabilized by multiple bonds and fast rebinding of broken bonds, resulting in tether lifetimes on the timescale of $10^{-1}$ seconds. Our calculations for cluster dissociation suggest that the single molecule rebinding rate is of the order of $10^4$ Hz. A similar estimate results if increased tether dissociation for tail-truncated L-selectin mutants above the shear threshold is modeled as diffusive escape of single receptors from the rebinding region due to increased mobility. Using computer simulations, we show that our model yields first order dissociation kinetics and exponential dependence of tether dissociation rates on shear stress. Our results suggest that multiple contacts, cytoskeletal anchorage of L-selectin and local rebinding of ligand play important roles in L-selectin tether stabilization and progression of tethers into persistent rolling on endothelial surfaces. |
2202.01516 | Shu Guo | Tao Liu, Shu Guo, Hao Liu, Rui Kang, Mingyang Bai, Jiyang Jiang, Wei
Wen, Xing Pan, Jun Tai, Jianxin Li, Jian Cheng, Jing Jing, Zhenzhou Wu,
Haijun Niu, Haogang Zhu, Zixiao Li, Yongjun Wang, Henry Brodaty, Perminder
Sachdev, Daqing Li | Network resilience in the aging brain | 24 pages, 6 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Degeneration and adaptation are two competing sides of the same coin called
resilience in the progressive processes of brain aging or disease.
Degeneration accumulates during brain aging and other cerebral activities,
causing structural atrophy and dysfunction. At the same time, adaptation
allows the brain network to reorganize and compensate for structural loss,
maintaining cognitive function. Although the hidden resilience mechanism is
critical and fundamental to uncovering the laws of brain aging, it remains
essentially unknown, due to the lack of datasets and appropriate methodology,
how these two processes interact dynamically across brain networks. To
quantitatively investigate this complex process, we analyze aging brains based
on a 6-year follow-up multimodal neuroimaging database of 63 persons. We
reveal a critical mechanism of network resilience: various perturbations may
cause fast structural atrophy of the brain, after which the brain can
reorganize its functional layout to lower its operational efficiency, which
helps slow down the structural atrophy and finally restore its functional
efficiency equilibrium. This empirical finding can be explained by our
theoretical model, suggesting a universal resilience dynamics function. This
resilience is achieved in the brain functional network through evolving
percolation and rich-club features. Our findings can help us understand the
brain aging process and design possible mitigation methods that adjust the
interaction between degeneration and adaptation from a resilience viewpoint.
| [
{
"created": "Thu, 3 Feb 2022 10:53:00 GMT",
"version": "v1"
}
] | 2022-02-04 | [
[
"Liu",
"Tao",
""
],
[
"Guo",
"Shu",
""
],
[
"Liu",
"Hao",
""
],
[
"Kang",
"Rui",
""
],
[
"Bai",
"Mingyang",
""
],
[
"Jiang",
"Jiyang",
""
],
[
"Wen",
"Wei",
""
],
[
"Pan",
"Xing",
""
],
[
"Tai",
"Jun",
""
],
[
"Li",
"Jianxin",
""
],
[
"Cheng",
"Jian",
""
],
[
"Jing",
"Jing",
""
],
[
"Wu",
"Zhenzhou",
""
],
[
"Niu",
"Haijun",
""
],
[
"Zhu",
"Haogang",
""
],
[
"Li",
"Zixiao",
""
],
[
"Wang",
"Yongjun",
""
],
[
"Brodaty",
"Henry",
""
],
[
"Sachdev",
"Perminder",
""
],
[
"Li",
"Daqing",
""
]
] | Degeneration and adaptation are two competing sides of the same coin called resilience in the progressive processes of brain aging or disease. Degeneration accumulates during brain aging and other cerebral activities, causing structural atrophy and dysfunction. At the same time, adaptation allows the brain network to reorganize and compensate for structural loss, maintaining cognitive function. Although the hidden resilience mechanism is critical and fundamental to uncovering the laws of brain aging, it remains essentially unknown, due to the lack of datasets and appropriate methodology, how these two processes interact dynamically across brain networks. To quantitatively investigate this complex process, we analyze aging brains based on a 6-year follow-up multimodal neuroimaging database of 63 persons. We reveal a critical mechanism of network resilience: various perturbations may cause fast structural atrophy of the brain, after which the brain can reorganize its functional layout to lower its operational efficiency, which helps slow down the structural atrophy and finally restore its functional efficiency equilibrium. This empirical finding can be explained by our theoretical model, suggesting a universal resilience dynamics function. This resilience is achieved in the brain functional network through evolving percolation and rich-club features. Our findings can help us understand the brain aging process and design possible mitigation methods that adjust the interaction between degeneration and adaptation from a resilience viewpoint. |
0907.1127 | Shuhei Mano | Shuhei Mano | Ancestral Graph with Bias in Gene Conversion | 29 pages, 2 figures | J. Appl. Probab. 50 (2013) 239-255 | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gene conversion is a mechanism by which a double-strand break in a DNA
molecule is repaired using a homologous DNA molecule as a template. As a
result, one gene is 'copied and pasted' onto the other gene. It was recently
reported that the direction of gene conversion appears to be biased towards G
and C nucleotides. In this paper a stochastic model of the dynamics of the bias
in gene conversion is developed for a finite population of members in a
multigene family. The dual process is the biased voter model, which generates
an ancestral random graph for a given sample. An importance-sampling algorithm
for computing the likelihood of the sample is also given.
| [
{
"created": "Tue, 7 Jul 2009 01:43:24 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Jan 2012 08:58:42 GMT",
"version": "v2"
}
] | 2013-04-08 | [
[
"Mano",
"Shuhei",
""
]
] | Gene conversion is a mechanism by which a double-strand break in a DNA molecule is repaired using a homologous DNA molecule as a template. As a result, one gene is 'copied and pasted' onto the other gene. It was recently reported that the direction of gene conversion appears to be biased towards G and C nucleotides. In this paper a stochastic model of the dynamics of the bias in gene conversion is developed for a finite population of members in a multigene family. The dual process is the biased voter model, which generates an ancestral random graph for a given sample. An importance-sampling algorithm for computing the likelihood of the sample is also given. |
2110.01339 | {\L}ukasz Struski | Dawid Warszycki, {\L}ukasz Struski, Marek \'Smieja, Rafa{\l} Kafel,
Rafa{\l} Kurczab | Pharmacoprint -- a combination of pharmacophore fingerprint and
artificial intelligence as a tool for computer-aided drug design | Journal of Chemical Information and Modeling (2021) | null | 10.1021/acs.jcim.1c00589 | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Structural fingerprints and pharmacophore modeling are methodologies that
have been used for at least two decades in various fields of cheminformatics:
from similarity searching to machine learning (ML). Advances in in silico
techniques have consequently led to combining both methodologies into a new
approach known as the pharmacophore fingerprint. Herein, we propose a
high-resolution, pharmacophore fingerprint called Pharmacoprint that encodes
the presence, types, and relationships between pharmacophore features of a
molecule. Pharmacoprint was evaluated in classification experiments by using ML
algorithms (logistic regression, support vector machines, linear support vector
machines, and neural networks) and outperformed other popular molecular
fingerprints (i.e., EState, MACCS, PubChem, Substructure, Klekota-Roth, CDK,
Extended, and GraphOnly) and the ChemAxon Pharmacophoric Features fingerprint.
Pharmacoprint consisted of 39973 bits; several methods were applied for
dimensionality reduction, and the best algorithm not only reduced the length of
the bit string but also improved the efficiency of the ML tests. Further optimization
allowed us to define the best parameter settings for using Pharmacoprint in
discrimination tests and for maximizing statistical parameters. Finally,
Pharmacoprint generated for 3D structures with defined hydrogens as input data
was applied to neural networks with a supervised autoencoder for selecting the
most important bits, allowing the Matthews correlation coefficient to be
maximized up to 0.962. The results show the potential of Pharmacoprint as a
new, promising tool for computer-aided drug design.
| [
{
"created": "Mon, 4 Oct 2021 11:36:39 GMT",
"version": "v1"
},
{
"created": "Tue, 31 Oct 2023 09:30:08 GMT",
"version": "v2"
}
] | 2023-11-01 | [
[
"Warszycki",
"Dawid",
""
],
[
"Struski",
"Łukasz",
""
],
[
"Śmieja",
"Marek",
""
],
[
"Kafel",
"Rafał",
""
],
[
"Kurczab",
"Rafał",
""
]
] | Structural fingerprints and pharmacophore modeling are methodologies that have been used for at least two decades in various fields of cheminformatics: from similarity searching to machine learning (ML). Advances in in silico techniques have consequently led to combining both methodologies into a new approach known as the pharmacophore fingerprint. Herein, we propose a high-resolution, pharmacophore fingerprint called Pharmacoprint that encodes the presence, types, and relationships between pharmacophore features of a molecule. Pharmacoprint was evaluated in classification experiments by using ML algorithms (logistic regression, support vector machines, linear support vector machines, and neural networks) and outperformed other popular molecular fingerprints (i.e., EState, MACCS, PubChem, Substructure, Klekota-Roth, CDK, Extended, and GraphOnly) and the ChemAxon Pharmacophoric Features fingerprint. Pharmacoprint consisted of 39973 bits; several methods were applied for dimensionality reduction, and the best algorithm not only reduced the length of the bit string but also improved the efficiency of the ML tests. Further optimization allowed us to define the best parameter settings for using Pharmacoprint in discrimination tests and for maximizing statistical parameters. Finally, Pharmacoprint generated for 3D structures with defined hydrogens as input data was applied to neural networks with a supervised autoencoder for selecting the most important bits, allowing the Matthews correlation coefficient to be maximized up to 0.962. The results show the potential of Pharmacoprint as a new, promising tool for computer-aided drug design. |
1404.0568 | Samuela Pasquali | Tristan Cragnolini, Yoann Laurin, Philippe Derreumaux, Samuela
Pasquali | The coarse-grained HiRE-RNA model for de novo calculations of RNA free
energy surfaces, folding pathways and complex structure prediction | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | HiRE-RNA is a simplified, coarse-grained RNA model for the prediction of
equilibrium configurations, dynamics and thermodynamics. Using a reduced set of
particles and detailed interactions accounting for base-pairing and stacking we
show that non-canonical and multiple base interactions are necessary to capture
the full physical behavior of complex RNAs. In this paper we give a full
account of the model and we present results on the folding, stability and free
energy surfaces of 16 systems with 12 to 76 nucleotides of increasingly complex
architectures, ranging from monomers to dimers, using a total of 850$\mu$s
simulation time.
| [
{
"created": "Wed, 2 Apr 2014 14:27:09 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Mar 2015 08:17:53 GMT",
"version": "v2"
}
] | 2015-03-10 | [
[
"Cragnolini",
"Tristan",
""
],
[
"Laurin",
"Yoann",
""
],
[
"Derreumaux",
"Philippe",
""
],
[
"Pasquali",
"Samuela",
""
]
] | HiRE-RNA is a simplified, coarse-grained RNA model for the prediction of equilibrium configurations, dynamics and thermodynamics. Using a reduced set of particles and detailed interactions accounting for base-pairing and stacking we show that non-canonical and multiple base interactions are necessary to capture the full physical behavior of complex RNAs. In this paper we give a full account of the model and we present results on the folding, stability and free energy surfaces of 16 systems with 12 to 76 nucleotides of increasingly complex architectures, ranging from monomers to dimers, using a total of 850$\mu$s simulation time. |
1005.0361 | Michael Denker | Michael Denker and S\'ebastien Roux and Henrik Lind\'en and Markus
Diesmann and Alexa Riehle and Sonja Gr\"un | The Local Field Potential Reflects Surplus Spike Synchrony | 45 pages, 8 figures, 3 supplemental figures | Cereb. Cortex (2011) 21(12): 2681-2695 | 10.1093/cercor/bhr040 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The oscillatory nature of the cortical local field potential (LFP) is
commonly interpreted as a reflection of synchronized network activity, but its
relationship to observed transient coincident firing of neurons on the
millisecond time-scale remains unclear. Here we present experimental evidence
to reconcile the notions of synchrony at the level of neuronal spiking and at
the mesoscopic scale. We demonstrate that only in time intervals of excess
spike synchrony, coincident spikes are better entrained to the LFP than
predicted by the locking of the individual spikes. This effect is enhanced in
periods of large LFP amplitudes. A quantitative model explains the LFP dynamics
by the orchestrated spiking activity in neuronal groups that contribute the
observed surplus synchrony. From the correlation analysis, we infer that
neurons participate in different constellations but contribute only a fraction
of their spikes to temporally precise spike configurations, suggesting a dual
coding scheme of rate and synchrony. This finding provides direct evidence for
the hypothesized relation that precise spike synchrony constitutes a major
temporally and spatially organized component of the LFP. Revealing that
transient spike synchronization correlates not only with behavior, but with a
mesoscopic brain signal corroborates its relevance in cortical processing.
| [
{
"created": "Mon, 3 May 2010 18:10:19 GMT",
"version": "v1"
}
] | 2011-11-24 | [
[
"Denker",
"Michael",
""
],
[
"Roux",
"Sébastien",
""
],
[
"Lindén",
"Henrik",
""
],
[
"Diesmann",
"Markus",
""
],
[
"Riehle",
"Alexa",
""
],
[
"Grün",
"Sonja",
""
]
] | The oscillatory nature of the cortical local field potential (LFP) is commonly interpreted as a reflection of synchronized network activity, but its relationship to observed transient coincident firing of neurons on the millisecond time-scale remains unclear. Here we present experimental evidence to reconcile the notions of synchrony at the level of neuronal spiking and at the mesoscopic scale. We demonstrate that only in time intervals of excess spike synchrony are coincident spikes better entrained to the LFP than predicted by the locking of the individual spikes. This effect is enhanced in periods of large LFP amplitudes. A quantitative model explains the LFP dynamics by the orchestrated spiking activity in neuronal groups that contribute the observed surplus synchrony. From the correlation analysis, we infer that neurons participate in different constellations but contribute only a fraction of their spikes to temporally precise spike configurations, suggesting a dual coding scheme of rate and synchrony. This finding provides direct evidence for the hypothesized relation that precise spike synchrony constitutes a major temporally and spatially organized component of the LFP. Revealing that transient spike synchronization correlates not only with behavior, but with a mesoscopic brain signal corroborates its relevance in cortical processing. |
2212.00735 | Yining Wang | Yining Wang, Xumeng Gong, Shaochuan Li, Bing Yang, YiWu Sun, Chuan
Shi, Yangang Wang, Cheng Yang, Hui Li, Le Song | xTrimoABFold: De novo Antibody Structure Prediction without MSA | 14 pages, 5 figures | null | null | null | q-bio.QM cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the field of antibody engineering, an essential task is to design a novel
antibody whose paratopes bind to a specific antigen with correct epitopes.
Understanding antibody structure and its paratope can facilitate a mechanistic
understanding of its function. Therefore, antibody structure prediction from
its sequence alone has always been a highly valuable problem for de novo
antibody design. AlphaFold2, a breakthrough in the field of structural biology,
provides a solution to predict protein structure based on protein sequences and
computationally expensive coevolutionary multiple sequence alignments (MSAs).
However, its computational cost and unsatisfactory prediction accuracy on
antibodies, especially on the complementarity-determining regions (CDRs),
limit its application in industrial high-throughput drug
design. To learn an informative representation of antibodies, we employed a
deep antibody language model (ALM) on curated sequences from the Observed
Antibody Space database via a transformer model. We also developed a novel
model named xTrimoABFold to predict antibody structure from antibody sequence
based on the pretrained ALM as well as efficient evoformers and structural
modules. The model was trained end-to-end on the antibody structures in PDB by
minimizing the ensemble loss of domain-specific focal loss on CDR and the
frame-aligned point loss. xTrimoABFold outperforms AlphaFold2 and other protein
language model based SOTAs, e.g., OmegaFold, HelixFold-Single, and IgFold, by
a large, significant margin (30+\% improvement in RMSD), while running 151
times faster than AlphaFold2. To the best of our knowledge, xTrimoABFold
achieves state-of-the-art antibody structure prediction. Its improvement in
both accuracy and efficiency makes it a valuable tool for de novo antibody
design and could enable further advances in immunological theory.
| [
{
"created": "Wed, 30 Nov 2022 09:26:08 GMT",
"version": "v1"
},
{
"created": "Sun, 11 Dec 2022 07:42:49 GMT",
"version": "v2"
},
{
"created": "Fri, 5 May 2023 03:52:01 GMT",
"version": "v3"
}
] | 2023-05-08 | [
[
"Wang",
"Yining",
""
],
[
"Gong",
"Xumeng",
""
],
[
"Li",
"Shaochuan",
""
],
[
"Yang",
"Bing",
""
],
[
"Sun",
"YiWu",
""
],
[
"Shi",
"Chuan",
""
],
[
"Wang",
"Yangang",
""
],
[
"Yang",
"Cheng",
""
],
[
"Li",
"Hui",
""
],
[
"Song",
"Le",
""
]
] | In the field of antibody engineering, an essential task is to design a novel antibody whose paratopes bind to a specific antigen with correct epitopes. Understanding antibody structure and its paratope can facilitate a mechanistic understanding of its function. Therefore, antibody structure prediction from its sequence alone has always been a highly valuable problem for de novo antibody design. AlphaFold2, a breakthrough in the field of structural biology, provides a solution to predict protein structure based on protein sequences and computationally expensive coevolutionary multiple sequence alignments (MSAs). However, its computational cost and unsatisfactory prediction accuracy on antibodies, especially on the complementarity-determining regions (CDRs), limit its application in industrial high-throughput drug design. To learn an informative representation of antibodies, we employed a deep antibody language model (ALM) on curated sequences from the Observed Antibody Space database via a transformer model. We also developed a novel model named xTrimoABFold to predict antibody structure from antibody sequence based on the pretrained ALM as well as efficient evoformers and structural modules. The model was trained end-to-end on the antibody structures in PDB by minimizing the ensemble loss of domain-specific focal loss on CDR and the frame-aligned point loss. xTrimoABFold outperforms AlphaFold2 and other protein language model based SOTAs, e.g., OmegaFold, HelixFold-Single, and IgFold, by a large, significant margin (30+\% improvement in RMSD), while running 151 times faster than AlphaFold2. To the best of our knowledge, xTrimoABFold achieves state-of-the-art antibody structure prediction. Its improvement in both accuracy and efficiency makes it a valuable tool for de novo antibody design and could enable further advances in immunological theory. |
1005.2648 | Aleksandra Walczak | Aleksandra M. Walczak, Andrew Mugler and Chris H. Wiggins | Analytic methods for modeling stochastic regulatory networks | null | Methods Mol. Biol. (2012) 880, 273-322 | 10.1007/978-1-61779-833-7_13 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The past decade has seen a revived interest in the unavoidable or intrinsic
noise in biochemical and genetic networks arising from the finite copy number
of the participating species. That is, rather than modeling regulatory networks
in terms of the deterministic dynamics of concentrations, we model the dynamics
of the probability of a given copy number of the reactants in single cells.
Most of the modeling activity of the last decade has centered on stochastic
simulation of individual realizations, i.e., Monte-Carlo methods for generating
stochastic time series. Here we review the mathematical description in terms of
probability distributions, introducing the relevant derivations and
illustrating several cases for which analytic progress can be made either
instead of or before turning to numerical computation.
| [
{
"created": "Sat, 15 May 2010 04:03:31 GMT",
"version": "v1"
}
] | 2015-03-17 | [
[
"Walczak",
"Aleksandra M.",
""
],
[
"Mugler",
"Andrew",
""
],
[
"Wiggins",
"Chris H.",
""
]
] | The past decade has seen a revived interest in the unavoidable or intrinsic noise in biochemical and genetic networks arising from the finite copy number of the participating species. That is, rather than modeling regulatory networks in terms of the deterministic dynamics of concentrations, we model the dynamics of the probability of a given copy number of the reactants in single cells. Most of the modeling activity of the last decade has centered on stochastic simulation of individual realizations, i.e., Monte-Carlo methods for generating stochastic time series. Here we review the mathematical description in terms of probability distributions, introducing the relevant derivations and illustrating several cases for which analytic progress can be made either instead of or before turning to numerical computation. |
1803.04659 | Aaron Tuor | Richard Olney, Aaron Tuor, Filip Jagodzinski, Brian Hutchinson | Protein Mutation Stability Ternary Classification using Neural Networks
and Rigidity Analysis | To appear in the Proceedings of 10th International Conference on
Bioinformatics and Computational Biology (BICOB 2018) | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Discerning how a mutation affects the stability of a protein is central to
the study of a wide range of diseases. Machine learning and statistical
analysis techniques can inform how to allocate limited resources to the
considerable time and cost associated with wet lab mutagenesis experiments. In
this work we explore the effectiveness of using a neural network classifier to
predict the change in the stability of a protein due to a mutation. Assessing
the accuracy of our approach is dependent on the use of experimental data about
the effects of mutations performed in vitro. Because the experimental data is
prone to discrepancies when similar experiments have been performed by multiple
laboratories, the use of the data near the juncture of stabilizing and
destabilizing mutations is questionable. We address this latter problem via a
systematic approach in which we explore the use of a three-way classification
scheme with stabilizing, destabilizing, and inconclusive labels. For a
systematic search of potential classification cutoff values, our classifier
achieved 68 percent accuracy on ternary classification for cutoff values of
-0.6 and 0.7, with a low rate of classifying stabilizing mutations as
destabilizing and vice versa.
| [
{
"created": "Tue, 13 Mar 2018 07:11:29 GMT",
"version": "v1"
}
] | 2018-03-14 | [
[
"Olney",
"Richard",
""
],
[
"Tuor",
"Aaron",
""
],
[
"Jagodzinski",
"Filip",
""
],
[
"Hutchinson",
"Brian",
""
]
] | Discerning how a mutation affects the stability of a protein is central to the study of a wide range of diseases. Machine learning and statistical analysis techniques can inform how to allocate limited resources to the considerable time and cost associated with wet lab mutagenesis experiments. In this work we explore the effectiveness of using a neural network classifier to predict the change in the stability of a protein due to a mutation. Assessing the accuracy of our approach is dependent on the use of experimental data about the effects of mutations performed in vitro. Because the experimental data is prone to discrepancies when similar experiments have been performed by multiple laboratories, the use of the data near the juncture of stabilizing and destabilizing mutations is questionable. We address this latter problem via a systematic approach in which we explore the use of a three-way classification scheme with stabilizing, destabilizing, and inconclusive labels. For a systematic search of potential classification cutoff values, our classifier achieved 68 percent accuracy on ternary classification for cutoff values of -0.6 and 0.7, with a low rate of classifying stabilizing mutations as destabilizing and vice versa. |
2006.15626 | Bjorn Johansson | Bjorn Johansson | Masking the general population might attenuate COVID-19 outbreaks | null | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The effect of masking the general population on a COVID-19 epidemic is
estimated by computer simulation using two separate state-of-the-art web-based
software tools, one of them calibrated for the SARS-CoV-2 virus. The questions
addressed are these: 1. Can mask use by the general population limit the spread
of SARS-CoV-2 in a country? 2. What types of masks exist, and how elaborate
must a mask be to be effective against COVID-19? 3. Does the mask have to be
applied early in an epidemic? 4. A brief general discussion of masks and some
possible future research questions regarding masks and SARS-CoV-2. Results are
as follows: (1) The results indicate that any type of mask, even simple
home-made ones, may be effective. Mask use seems to lower the number of new
patients even when the protective effect of each mask (here dubbed "one-mask
protection") is low. Strict adherence to mask use does not appear to be
critical. However, increasing the one-mask protection to > 50% was found to be
advantageous. Masks seemed able to reduce overflow of capacity, e.g. of
intensive care. As the default parameters of the software included another
intervention, it seems possible to combine mask and other interventions. (2)
Masks do seem to reduce the number of new cases even if introduced at a late
stage in an epidemic. However, early implementation helps reduce the cumulative
and total number of cases. (3) The simulations suggest that it might be
possible to eliminate a COVID-19 outbreak by widespread mask use during a
limited period. The results from these simulations are encouraging, but do not
necessarily represent the real-life situation, so it is suggested that clinical
trials of masks are now carried out while continuously monitoring effects and
side-effects.
| [
{
"created": "Sun, 28 Jun 2020 14:57:44 GMT",
"version": "v1"
}
] | 2020-06-30 | [
[
"Johansson",
"Bjorn",
""
]
] | The effect of masking the general population on a COVID-19 epidemic is estimated by computer simulation using two separate state-of-the-art web-based softwares, one of them calibrated for the SARS-CoV-2 virus. The questions addressed are these: 1. Can mask use by the general population limit the spread of SARS-CoV-2 in a country? 2. What types of masks exist, and how elaborate must a mask be to be effective against COVID-19? 3. Does the mask have to be applied early in an epidemic? 4. A brief general discussion of masks and some possible future research questions regarding masks and SARS-CoV-2. Results are as follows: (1) The results indicate that any type of mask, even simple home-made ones, may be effective. Masks use seems to have an effect in lowering new patients even the protective effect of each mask (here dubbed "one-mask protection") is low. Strict adherence to mask use does not appear to be critical. However, increasing the one-mask protection to > 50% was found to be advantageous. Masks seemed able to reduce overflow of capacity, e.g. of intensive care. As the default parameters of the software included another intervention, it seems possible to combine mask and other interventions. (2) Masks do seem to reduce the number of new cases even if introduced at a late stage in an epidemic. However, early implementation helps reduce the cumulative and total number of cases. (3) The simulations suggest that it might be possible to eliminate a COVID-19 outbreak by widespread mask use during a limited period. The results from these simulations are encouraging, but do not necessarily represent the real-life situation, so it is suggested that clinical trials of masks are now carried out while continuously monitoring effects and side-effects. |
1111.1496 | Ekaterina Nikitina G. | E. Nikitina, L. Urazova, O. Churuksaeva | Dinamics of HPV Infection among Women with Cervical Lesions | null | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A total of 293 women treated at Tomsk Cancer Research Institute were
examined. HPV type 16 had the highest incidence rate (45.0%) followed by HPV
31-17,0%, HPV 56/33-15,0%, HPV 51/18/52-13,0%, HPV 58/35/39/45-7,0%, HPV
59-5,0%. Persistent infection was detected in 35.7% of primarily HPV-positive
cases (10 out of 28 patients), mainly in cervical cancer patients. Total number
of primarily HPV-positive and HPV-negative patients with cervical cancer was
95.0% and 5.0%, respectively. The corresponding values after the complex
treatment were 35.0% and 65.0%, respectively, pointing to the treatment
efficiency.
| [
{
"created": "Mon, 7 Nov 2011 06:45:46 GMT",
"version": "v1"
}
] | 2011-11-08 | [
[
"Nikitina",
"E.",
""
],
[
"Urazova",
"L.",
""
],
[
"Churuksaeva",
"O.",
""
]
] | A total of 293 women treated at Tomsk Cancer Research Institute were examined. HPV type 16 had the highest incidence rate (45.0%) followed by HPV 31-17,0%, HPV 56/33-15,0%, HPV 51/18/52-13,0%, HPV 58/35/39/45-7,0%, HPV 59-5,0%. Persistent infection was detected in 35.7% of primarily HPV-positive cases (10 out of 28 patients), mainly in cervical cancer patients. Total number of primarily HPV-positive and HPV-negative patients with cervical cancer was 95.0% and 5.0%, respectively. The corresponding values after the complex treatment were 35.0% and 65.0%, respectively, pointing to the treatment efficiency. |
2112.14134 | Luka Ribar | Tai Miyazaki Kirby, Luka Ribar, Rodolphe Sepulchre | Reliability of Event Timing in Silicon Neurons | null | null | null | null | q-bio.NC cs.NE cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analog, low-voltage electronics show great promise in producing silicon
neurons (SiNs) with unprecedented levels of energy efficiency. Yet, their
inherently high susceptibility to process, voltage and temperature (PVT)
variations, and noise has long been recognised as a major bottleneck in
developing effective neuromorphic solutions. Inspired by spike transmission
studies in biophysical, neocortical neurons, we demonstrate that the inherent
noise and variability can coexist with reliable spike transmission in analog
SiNs, similarly to biological neurons. We illustrate this property on a recent
neuromorphic model of a bursting neuron by showcasing three different relevant
types of reliable event transmission: single spike transmission, burst
transmission, and the on-off control of a half-centre oscillator (HCO) network.
| [
{
"created": "Tue, 28 Dec 2021 13:24:23 GMT",
"version": "v1"
}
] | 2021-12-30 | [
[
"Kirby",
"Tai Miyazaki",
""
],
[
"Ribar",
"Luka",
""
],
[
"Sepulchre",
"Rodolphe",
""
]
] | Analog, low-voltage electronics show great promise in producing silicon neurons (SiNs) with unprecedented levels of energy efficiency. Yet, their inherently high susceptibility to process, voltage and temperature (PVT) variations, and noise has long been recognised as a major bottleneck in developing effective neuromorphic solutions. Inspired by spike transmission studies in biophysical, neocortical neurons, we demonstrate that the inherent noise and variability can coexist with reliable spike transmission in analog SiNs, similarly to biological neurons. We illustrate this property on a recent neuromorphic model of a bursting neuron by showcasing three different relevant types of reliable event transmission: single spike transmission, burst transmission, and the on-off control of a half-centre oscillator (HCO) network. |
1606.02349 | Marius C\u{a}t\u{a}lin Iordan | Marius C\u{a}t\u{a}lin Iordan, Armand Joulin, Diane M. Beck, Li
Fei-Fei | Locally-Optimized Inter-Subject Alignment of Functional Cortical Regions | Presented at MLINI-2015 workshop, 2015 (arXiv:cs/0101200) | null | null | MLINI/2015/04 | q-bio.NC stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inter-subject registration of cortical areas is necessary in functional
imaging (fMRI) studies for making inferences about equivalent brain function
across a population. However, many high-level visual brain areas are defined as
peaks of functional contrasts whose cortical position is highly variable. As
such, most alignment methods fail to accurately map functional regions of
interest (ROIs) across participants. To address this problem, we propose a
locally optimized registration method that directly predicts the location of a
seed ROI on a separate target cortical sheet by maximizing the functional
correlation between their time courses, while simultaneously allowing for
non-smooth local deformations in region topology. Our method outperforms the
two most commonly used alternatives (anatomical landmark-based AFNI alignment
and cortical convexity-based FreeSurfer alignment) in overlap between predicted
region and functionally-defined LOC. Furthermore, the maps obtained using our
method are more consistent across subjects than both baseline measures.
Critically, our method represents an important step forward towards predicting
brain regions without explicit localizer scans and deciphering the poorly
understood relationship between the location of functional regions, their
anatomical extent, and the consistency of computations those regions perform
across people.
| [
{
"created": "Tue, 7 Jun 2016 22:40:30 GMT",
"version": "v1"
}
] | 2016-06-09 | [
[
"Iordan",
"Marius Cătălin",
""
],
[
"Joulin",
"Armand",
""
],
[
"Beck",
"Diane M.",
""
],
[
"Fei-Fei",
"Li",
""
]
] | Inter-subject registration of cortical areas is necessary in functional imaging (fMRI) studies for making inferences about equivalent brain function across a population. However, many high-level visual brain areas are defined as peaks of functional contrasts whose cortical position is highly variable. As such, most alignment methods fail to accurately map functional regions of interest (ROIs) across participants. To address this problem, we propose a locally optimized registration method that directly predicts the location of a seed ROI on a separate target cortical sheet by maximizing the functional correlation between their time courses, while simultaneously allowing for non-smooth local deformations in region topology. Our method outperforms the two most commonly used alternatives (anatomical landmark-based AFNI alignment and cortical convexity-based FreeSurfer alignment) in overlap between predicted region and functionally-defined LOC. Furthermore, the maps obtained using our method are more consistent across subjects than both baseline measures. Critically, our method represents an important step forward towards predicting brain regions without explicit localizer scans and deciphering the poorly understood relationship between the location of functional regions, their anatomical extent, and the consistency of computations those regions perform across people. |
2302.08024 | Hui Wang | Xi Chen, Hui Wang and Jinqiao Duan | The most probable dynamics of receptor-ligand binding on cell membrane | 12 figures | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | We devise a method for predicting certain receptor-ligand binding behaviors,
based on stochastic dynamical modelling. We consider the dynamics of a receptor
binding to a ligand on the cell membrane, where the receptor and ligand perform
different motions and are thus modeled by stochastic differential equations
with Gaussian noise or non-Gaussian noise. We use neural networks based on
Onsager-Machlup function to compute the probability $P_1$ of the unbounded
receptor diffusing to the cell membrane. Meanwhile, we compute the probability
$P_2$ of extracellular ligand arriving at the cell membrane by solving the
associated Fokker-Planck equation. Then, we could predict the most probable
binding probability by combining $P_1$ and $P_2$. In this way, we conclude with
some indication about where the ligand will most probably encounter the
receptor, contributing to better understanding of cell's response to external
stimuli and communication with other cells.
| [
{
"created": "Thu, 16 Feb 2023 01:55:19 GMT",
"version": "v1"
}
] | 2023-02-17 | [
[
"Chen",
"Xi",
""
],
[
"Wang",
"Hui",
""
],
[
"Duan",
"Jinqiao",
""
]
] | We devise a method for predicting certain receptor-ligand binding behaviors, based on stochastic dynamical modelling. We consider the dynamics of a receptor binding to a ligand on the cell membrane, where the receptor and ligand perform different motions and are thus modeled by stochastic differential equations with Gaussian noise or non-Gaussian noise. We use neural networks based on Onsager-Machlup function to compute the probability $P_1$ of the unbounded receptor diffusing to the cell membrane. Meanwhile, we compute the probability $P_2$ of extracellular ligand arriving at the cell membrane by solving the associated Fokker-Planck equation. Then, we could predict the most probable binding probability by combining $P_1$ and $P_2$. In this way, we conclude with some indication about where the ligand will most probably encounter the receptor, contributing to better understanding of cell's response to external stimuli and communication with other cells. |
1809.06917 | Glenn Young | Glenn Young and Andrew Belmonte | Fixation in the stochastic Lotka-Volterra model with small fitness
trade-offs | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the probability of fixation in a stochastic two-species competition
model. By identifying a naturally occurring fast timescale, we derive an
approximation to the associated backward Kolmogorov equation that allows us to
obtain an explicit closed form solution for the probability of fixation of
either species. We use our result to study fitness tradeoff strategies and show
that, despite some tradeoffs having nearly negligible effects on the
corresponding deterministic dynamics, they can have large implications for the
outcome of the stochastic system.
| [
{
"created": "Tue, 18 Sep 2018 20:19:24 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Oct 2018 18:57:12 GMT",
"version": "v2"
},
{
"created": "Wed, 2 Sep 2020 20:13:58 GMT",
"version": "v3"
}
] | 2020-09-04 | [
[
"Young",
"Glenn",
""
],
[
"Belmonte",
"Andrew",
""
]
] | We study the probability of fixation in a stochastic two-species competition model. By identifying a naturally occurring fast timescale, we derive an approximation to the associated backward Kolmogorov equation that allows us to obtain an explicit closed form solution for the probability of fixation of either species. We use our result to study fitness tradeoff strategies and show that, despite some tradeoffs having nearly negligible effects on the corresponding deterministic dynamics, they can have large implications for the outcome of the stochastic system. |
1010.3775 | Danielle Bassett | Danielle S. Bassett, Nicholas F. Wymbs, Mason A. Porter, Peter J.
Mucha, Jean M. Carlson, Scott T. Grafton | Dynamic reconfiguration of human brain networks during learning | Main Text: 19 pages, 4 figures Supplementary Materials: 34 pages, 4
figures, 3 tables | PNAS 2011, vol. 108, no. 18, 7641-7646 | 10.1073/pnas.1018985108 | null | q-bio.NC cond-mat.dis-nn math-ph math.MP nlin.AO physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human learning is a complex phenomenon requiring flexibility to adapt
existing brain function and precision in selecting new neurophysiological
activities to drive desired behavior. These two attributes -- flexibility and
selection -- must operate over multiple temporal scales as performance of a
skill changes from being slow and challenging to being fast and automatic. Such
selective adaptability is naturally provided by modular structure, which plays
a critical role in evolution, development, and optimal network function. Using
functional connectivity measurements of brain activity acquired from initial
training through mastery of a simple motor skill, we explore the role of
modularity in human learning by identifying dynamic changes of modular
organization spanning multiple temporal scales. Our results indicate that
flexibility, which we measure by the allegiance of nodes to modules, in one
experimental session predicts the relative amount of learning in a future
session. We also develop a general statistical framework for the identification
of modular architectures in evolving systems, which is broadly applicable to
disciplines where network adaptability is crucial to the understanding of
system performance.
| [
{
"created": "Tue, 19 Oct 2010 01:30:23 GMT",
"version": "v1"
},
{
"created": "Mon, 24 Oct 2011 03:51:53 GMT",
"version": "v2"
}
] | 2013-06-28 | [
[
"Bassett",
"Danielle S.",
""
],
[
"Wymbs",
"Nicholas F.",
""
],
[
"Porter",
"Mason A.",
""
],
[
"Mucha",
"Peter J.",
""
],
[
"Carlson",
"Jean M.",
""
],
[
"Grafton",
"Scott T.",
""
]
] | Human learning is a complex phenomenon requiring flexibility to adapt existing brain function and precision in selecting new neurophysiological activities to drive desired behavior. These two attributes -- flexibility and selection -- must operate over multiple temporal scales as performance of a skill changes from being slow and challenging to being fast and automatic. Such selective adaptability is naturally provided by modular structure, which plays a critical role in evolution, development, and optimal network function. Using functional connectivity measurements of brain activity acquired from initial training through mastery of a simple motor skill, we explore the role of modularity in human learning by identifying dynamic changes of modular organization spanning multiple temporal scales. Our results indicate that flexibility, which we measure by the allegiance of nodes to modules, in one experimental session predicts the relative amount of learning in a future session. We also develop a general statistical framework for the identification of modular architectures in evolving systems, which is broadly applicable to disciplines where network adaptability is crucial to the understanding of system performance. |
1306.1685 | Roland Schwarz | Roland F Schwarz, Anne Trinh, Botond Sipos, James D Brenton, Nick
Goldman and Florian Markowetz | Phylogenetic quantification of intra-tumour heterogeneity | null | null | 10.1371/journal.pcbi.1003535 | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Intra-tumour heterogeneity (ITH) is the result of ongoing
evolutionary change within each cancer. The expansion of genetically distinct
sub-clonal populations may explain the emergence of drug resistance and if so
would have prognostic and predictive utility. However, methods for objectively
quantifying ITH have been missing and are particularly difficult to establish
in cancers where predominant copy number variation prevents accurate
phylogenetic reconstruction owing to horizontal dependencies caused by long and
cascading genomic rearrangements.
Results: To address these challenges we present MEDICC, a method for
phylogenetic reconstruction and ITH quantification based on a Minimum Event
Distance for Intra-tumour Copynumber Comparisons. Using a transducer-based
pairwise comparison function we determine optimal phasing of major and minor
alleles, as well as evolutionary distances between samples, and are able to
reconstruct ancestral genomes. Rigorous simulations and an extensive clinical
study show the power of our method, which outperforms state-of-the-art
competitors in reconstruction accuracy and additionally allows unbiased
numerical quantification of ITH.
Conclusions: Accurate quantification and evolutionary inference are essential
to understand the functional consequences of ITH. The MEDICC algorithms are
independent of the experimental techniques used and are applicable to both
next-generation sequencing and array CGH data.
| [
{
"created": "Fri, 7 Jun 2013 10:50:44 GMT",
"version": "v1"
}
] | 2015-06-16 | [
[
"Schwarz",
"Roland F",
""
],
[
"Trinh",
"Anne",
""
],
[
"Sipos",
"Botond",
""
],
[
"Brenton",
"James D",
""
],
[
"Goldman",
"Nick",
""
],
[
"Markowetz",
"Florian",
""
]
] | Background: Intra-tumour heterogeneity (ITH) is the result of ongoing evolutionary change within each cancer. The expansion of genetically distinct sub-clonal populations may explain the emergence of drug resistance and if so would have prognostic and predictive utility. However, methods for objectively quantifying ITH have been missing and are particularly difficult to establish in cancers where predominant copy number variation prevents accurate phylogenetic reconstruction owing to horizontal dependencies caused by long and cascading genomic rearrangements. Results: To address these challenges we present MEDICC, a method for phylogenetic reconstruction and ITH quantification based on a Minimum Event Distance for Intra-tumour Copynumber Comparisons. Using a transducer-based pairwise comparison function we determine optimal phasing of major and minor alleles, as well as evolutionary distances between samples, and are able to reconstruct ancestral genomes. Rigorous simulations and an extensive clinical study show the power of our method, which outperforms state-of-the-art competitors in reconstruction accuracy and additionally allows unbiased numerical quantification of ITH. Conclusions: Accurate quantification and evolutionary inference are essential to understand the functional consequences of ITH. The MEDICC algorithms are independent of the experimental techniques used and are applicable to both next-generation sequencing and array CGH data. |
1908.02913 | Fabio Sanchez PhD | Fabio Sanchez, Jorge Arroyo-Esquivel, Paola Vasquez | Hospitalization in the transmission of dengue dynamics: The impact on
public health policies | 19 pages, 7 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dengue virus has caused major problems for public health officials for
decades in tropical and subtropical countries. We construct a compartmental
model that includes the risk of hospitalization and its impact on public health
policies. The basic reproductive number, $\mathcal{R}_0$, is computed, as well
as a sensitivity analysis on $\mathcal{R}_0$ parameters and discuss the
relevance in public health policies. The local and global stability of the
disease-free equilibrium is established. Numerical simulations are performed to
better determine future prevention/control strategies.
| [
{
"created": "Thu, 8 Aug 2019 03:21:53 GMT",
"version": "v1"
}
] | 2019-08-09 | [
[
"Sanchez",
"Fabio",
""
],
[
"Arroyo-Esquivel",
"Jorge",
""
],
[
"Vasquez",
"Paola",
""
]
] | Dengue virus has caused major problems for public health officials for decades in tropical and subtropical countries. We construct a compartmental model that includes the risk of hospitalization and its impact on public health policies. The basic reproductive number, $\mathcal{R}_0$, is computed, as well as a sensitivity analysis on $\mathcal{R}_0$ parameters and discuss the relevance in public health policies. The local and global stability of the disease-free equilibrium is established. Numerical simulations are performed to better determine future prevention/control strategies. |
2309.06447 | Malgorzata O'Reilly | Albert C. Soewongsono and Jiahao Diao and Tristan Stark and Amanda E.
Wilson and David A. Liberles and Barbara R. Holland and Malgorzata M.
O'Reilly | Matrix-analytic methods for the evolution of species trees, gene trees,
and their reconciliation | Corrected names of two authors (Liberles, Holland). Added further
details of the contributions in the Acknowledgements | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the reconciliation problem, in which the task is to find a
mapping of a gene tree into a species tree, so as to maximize the likelihood of
such fitting, given the available data. We describe a model for the evolution
of the species tree, a subfunctionalisation model for the evolution of the gene
tree, and provide an algorithm to compute the likelihood of the reconciliation.
We derive our results using the theory of matrix-analytic methods and describe
efficient algorithms for the computation of a range of useful metrics. We
illustrate the theory with examples and provide the physical interpretations of
the discussed quantities, with a focus on the practical applications of the
theory to incomplete data.
| [
{
"created": "Tue, 12 Sep 2023 01:26:11 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Nov 2023 06:18:12 GMT",
"version": "v2"
}
] | 2023-11-09 | [
[
"Soewongsono",
"Albert C.",
""
],
[
"Diao",
"Jiahao",
""
],
[
"Stark",
"Tristan",
""
],
[
"Wilson",
"Amanda E.",
""
],
[
"Liberles",
"David A.",
""
],
[
"Holland",
"Barbara R.",
""
],
[
"O'Reilly",
"Malgorzata M.",
""
]
] | We consider the reconciliation problem, in which the task is to find a mapping of a gene tree into a species tree, so as to maximize the likelihood of such fitting, given the available data. We describe a model for the evolution of the species tree, a subfunctionalisation model for the evolution of the gene tree, and provide an algorithm to compute the likelihood of the reconciliation. We derive our results using the theory of matrix-analytic methods and describe efficient algorithms for the computation of a range of useful metrics. We illustrate the theory with examples and provide the physical interpretations of the discussed quantities, with a focus on the practical applications of the theory to incomplete data. |
1605.08228 | Angelo Valleriani | Marco Rusconi, Angelo Valleriani | Predict or classify: The deceptive role of time-locking in brain signal
classification | 23 pages, 5 figures | Scientific Reports 6, 28236 (2016) | 10.1038/srep28236 | null | q-bio.NC physics.bio-ph stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several experimental studies claim to be able to predict the outcome of
simple decisions from brain signals measured before subjects are aware of their
decision. Often, these studies use multivariate pattern recognition methods
with the underlying assumption that the ability to classify the brain signal is
equivalent to predict the decision itself. Here we show instead that it is
possible to correctly classify a signal even if it does not contain any
predictive information about the decision. We first define a simple stochastic
model that mimics the random decision process between two equivalent
alternatives, and generate a large number of independent trials that contain no
choice-predictive information. The trials are first time-locked to the time
point of the final event and then classified using standard machine-learning
techniques. The resulting classification accuracy is above chance level long
before the time point of time-locking. We then analyze the same trials using
information theory. We demonstrate that the high classification accuracy is a
consequence of time-locking and that its time behavior is simply related to the
large relaxation time of the process. We conclude that when time-locking is a
crucial step in the analysis of neural activity patterns, both the emergence
and the timing of the classification accuracy are affected by structural
properties of the network that generates the signal.
| [
{
"created": "Thu, 26 May 2016 11:17:41 GMT",
"version": "v1"
},
{
"created": "Fri, 10 Jun 2016 12:23:34 GMT",
"version": "v2"
}
] | 2016-06-21 | [
[
"Rusconi",
"Marco",
""
],
[
"Valleriani",
"Angelo",
""
]
] | Several experimental studies claim to be able to predict the outcome of simple decisions from brain signals measured before subjects are aware of their decision. Often, these studies use multivariate pattern recognition methods with the underlying assumption that the ability to classify the brain signal is equivalent to predict the decision itself. Here we show instead that it is possible to correctly classify a signal even if it does not contain any predictive information about the decision. We first define a simple stochastic model that mimics the random decision process between two equivalent alternatives, and generate a large number of independent trials that contain no choice-predictive information. The trials are first time-locked to the time point of the final event and then classified using standard machine-learning techniques. The resulting classification accuracy is above chance level long before the time point of time-locking. We then analyze the same trials using information theory. We demonstrate that the high classification accuracy is a consequence of time-locking and that its time behavior is simply related to the large relaxation time of the process. We conclude that when time-locking is a crucial step in the analysis of neural activity patterns, both the emergence and the timing of the classification accuracy are affected by structural properties of the network that generates the signal. |
2305.18841 | Annie Adhikary | Annie Adhikary | Identification of Novel Diagnostic Neuroimaging Biomarkers for Autism
Spectrum Disorder Through Convolutional Neural Network-Based Analysis of
Functional, Structural, and Diffusion Tensor Imaging Data Towards Enhanced
Autism Diagnosis | 15 pages, 7 figures, 2 tables | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autism spectrum disorder is one of the leading neurodevelopmental disorders
in our world, present in over 1% of the population and rapidly increasing in
prevalence, yet the condition lacks a robust, objective, and efficient
diagnostic. Clinical diagnostic criteria rely on subjective behavioral
assessments, which are prone to misdiagnosis as they face limitations in terms
of their heterogeneity, specificity, and biases. This study proposes a novel
convolutional neural network-based classification tool that aims to identify
the potential of different neuroimaging features as autism biomarkers. The
model is constructed using a set of sequential layers specifically designed to
extract relevant features from brain imaging data. Trained and tested on over
300,000 distinct features across three imaging types, the model shows promise
in classifying individuals with autism from typical controls, outperforming
metrics of current gold standard diagnostics by achieving an accuracy of 95.4%
on a dataset of 1,111 samples with 521 autistic subjects (260 male and 261
female) and 590 controls (297 male and 293 female). 32 optimal features from
the training data were identified and classified as candidate biomarkers using
an independent samples t-test, in which functional features such as
connectivity and the time series of signal intensity from each voxel exhibited
the highest mean value differences between individuals with autism and typical
control subjects. The p-values of these biomarkers were < 0.001, proving the
statistical significance of the results and indicating that this research could
pave the way towards the usage of neuroimaging in conjunction with behavioral
criteria in clinics.
| [
{
"created": "Tue, 30 May 2023 08:34:00 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Feb 2024 21:08:17 GMT",
"version": "v2"
}
] | 2024-03-04 | [
[
"Adhikary",
"Annie",
""
]
] | Autism spectrum disorder is one of the leading neurodevelopmental disorders in our world, present in over 1% of the population and rapidly increasing in prevalence, yet the condition lacks a robust, objective, and efficient diagnostic. Clinical diagnostic criteria rely on subjective behavioral assessments, which are prone to misdiagnosis as they face limitations in terms of their heterogeneity, specificity, and biases. This study proposes a novel convolutional neural network-based classification tool that aims to identify the potential of different neuroimaging features as autism biomarkers. The model is constructed using a set of sequential layers specifically designed to extract relevant features from brain imaging data. Trained and tested on over 300,000 distinct features across three imaging types, the model shows promise in classifying individuals with autism from typical controls, outperforming metrics of current gold standard diagnostics by achieving an accuracy of 95.4% on a dataset of 1,111 samples with 521 autistic subjects (260 male and 261 female) and 590 controls (297 male and 293 female). 32 optimal features from the training data were identified and classified as candidate biomarkers using an independent samples t-test, in which functional features such as connectivity and the time series of signal intensity from each voxel exhibited the highest mean value differences between individuals with autism and typical control subjects. The p-values of these biomarkers were < 0.001, proving the statistical significance of the results and indicating that this research could pave the way towards the usage of neuroimaging in conjunction with behavioral criteria in clinics. |
2305.12386 | David Morselli | David Morselli, Marcello Edoardo Delitala, Federico Frascoli | Agent-based and continuum models for spatial dynamics of infection by
oncolytic viruses | 29 pages, 10 figures. Supplementary material available at
https://tinyurl.com/5c5nxss8 | Bull Math Biol 85, 92 (2023) | 10.1007/s11538-023-01192-x | null | q-bio.PE q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | The use of oncolytic viruses as cancer treatment has received considerable
attention in recent years, however the spatial dynamics of this viral infection
is still poorly understood. We present here a stochastic agent-based model
describing infected and uninfected cells for solid tumours, which interact with
viruses in the absence of an immune response. Two kinds of movement, namely
undirected random and pressure-driven movements, are considered: the continuum
limit of the models is derived and a systematic comparison between the systems
of partial differential equations and the individual-based model, in one and
two dimensions, is carried out.
In the case of undirected movement, a good agreement between agent-based
simulations and the numerical and well-known analytical results for the
continuum model is possible. For pressure-driven motion, instead, we observe a
wide parameter range in which the infection of the agents remains confined to
the center of the tumour, even though the continuum model shows traveling waves
of infection; outcomes appear to be more sensitive to stochasticity and
uninfected regions appear harder to invade, giving rise to irregular,
unpredictable growth patterns.
Our results show that the presence of spatial constraints in tumours'
microenvironments limiting free expansion has a very significant impact on
virotherapy. Outcomes for these tumours suggest a notable increase in
variability. All these aspects can have important effects when designing
individually tailored therapies where virotherapy is included.
| [
{
"created": "Sun, 21 May 2023 07:57:46 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Sep 2023 14:13:14 GMT",
"version": "v2"
}
] | 2023-09-06 | [
[
"Morselli",
"David",
""
],
[
"Delitala",
"Marcello Edoardo",
""
],
[
"Frascoli",
"Federico",
""
]
] | The use of oncolytic viruses as cancer treatment has received considerable attention in recent years, however the spatial dynamics of this viral infection is still poorly understood. We present here a stochastic agent-based model describing infected and uninfected cells for solid tumours, which interact with viruses in the absence of an immune response. Two kinds of movement, namely undirected random and pressure-driven movements, are considered: the continuum limit of the models is derived and a systematic comparison between the systems of partial differential equations and the individual-based model, in one and two dimensions, is carried out. In the case of undirected movement, a good agreement between agent-based simulations and the numerical and well-known analytical results for the continuum model is possible. For pressure-driven motion, instead, we observe a wide parameter range in which the infection of the agents remains confined to the center of the tumour, even though the continuum model shows traveling waves of infection; outcomes appear to be more sensitive to stochasticity and uninfected regions appear harder to invade, giving rise to irregular, unpredictable growth patterns. Our results show that the presence of spatial constraints in tumours' microenvironments limiting free expansion has a very significant impact on virotherapy. Outcomes for these tumours suggest a notable increase in variability. All these aspects can have important effects when designing individually tailored therapies where virotherapy is included. |
1609.04902 | Momiao Xiong | Nan Lin, Yun Zhu, Ruzong Fan and Momiao Xiong | A Quadratically Regularized Functional Canonical Correlation Analysis
for Identifying the Global Structure of Pleiotropy with NGS Data | 64 pages including 12 figures | null | 10.1371/journal.pcbi.1005788 | null | q-bio.GN stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Investigating the pleiotropic effects of genetic variants can increase
statistical power, provide important information to achieve deep understanding
of the complex genetic structures of disease, and offer powerful tools for
designing effective treatments with fewer side effects. However, the current
multiple phenotype association analysis paradigm lacks breadth (number of
phenotypes and genetic variants jointly analyzed at the same time) and depth
(hierarchical structure of phenotype and genotypes). A key issue for high
dimensional pleiotropic analysis is to effectively extract informative internal
representation and features from high dimensional genotype and phenotype data.
To explore multiple levels of representations of genetic variants, learn their
internal patterns involved in the disease development, and overcome critical
barriers in advancing the development of novel statistical methods and
computational algorithms for genetic pleiotropic analysis, we proposed a new
framework referred to as a quadratically regularized functional CCA (QRFCCA)
for association analysis which combines three approaches: (1) quadratically
regularized matrix factorization, (2) functional data analysis and (3)
canonical correlation analysis (CCA). Large-scale simulations show that the
QRFCCA has a much higher power than that of the nine competing statistics while
retaining the appropriate type 1 errors. To further evaluate performance, the
QRFCCA and nine other statistics are applied to the whole genome sequencing
dataset from the TwinsUK study. We identify a total of 79 genes with rare
variants and 67 genes with common variants significantly associated with the 46
traits using QRFCCA. The results show that the QRFCCA substantially outperforms
the nine other statistics.
| [
{
"created": "Fri, 16 Sep 2016 03:18:44 GMT",
"version": "v1"
}
] | 2018-02-07 | [
[
"Lin",
"Nan",
""
],
[
"Zhu",
"Yun",
""
],
[
"Fan",
"Ruzong",
""
],
[
"Xiong",
"Momiao",
""
]
] | Investigating the pleiotropic effects of genetic variants can increase statistical power, provide important information to achieve deep understanding of the complex genetic structures of disease, and offer powerful tools for designing effective treatments with fewer side effects. However, the current multiple phenotype association analysis paradigm lacks breadth (number of phenotypes and genetic variants jointly analyzed at the same time) and depth (hierarchical structure of phenotype and genotypes). A key issue for high dimensional pleiotropic analysis is to effectively extract informative internal representation and features from high dimensional genotype and phenotype data. To explore multiple levels of representations of genetic variants, learn their internal patterns involved in the disease development, and overcome critical barriers in advancing the development of novel statistical methods and computational algorithms for genetic pleiotropic analysis, we proposed a new framework referred to as a quadratically regularized functional CCA (QRFCCA) for association analysis which combines three approaches: (1) quadratically regularized matrix factorization, (2) functional data analysis and (3) canonical correlation analysis (CCA). Large-scale simulations show that the QRFCCA has a much higher power than that of the nine competing statistics while retaining the appropriate type 1 errors. To further evaluate performance, the QRFCCA and nine other statistics are applied to the whole genome sequencing dataset from the TwinsUK study. We identify a total of 79 genes with rare variants and 67 genes with common variants significantly associated with the 46 traits using QRFCCA. The results show that the QRFCCA substantially outperforms the nine other statistics. |
1702.01703 | Jens Quedenfeld | Jens Quedenfeld and Sven Rahmann | Variant tolerant read mapping using min-hashing | null | null | null | null | q-bio.GN cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DNA read mapping is a ubiquitous task in bioinformatics, and many tools have
been developed to solve the read mapping problem. However, there are two trends
that are changing the landscape of read mapping: First, new sequencing
technologies provide very long reads with high error rates (up to 15%). Second,
many genetic variants in the population are known, so the reference genome is
not considered as a single string over ACGT, but as a complex object containing
these variants. Most existing read mappers do not handle these new
circumstances appropriately.
We introduce a new read mapper prototype called VATRAM that considers
variants. It is based on Min-Hashing of q-gram sets of reference genome
windows. Min-Hashing is one form of locality sensitive hashing. The variants
are directly inserted into VATRAM's index, which leads to a fast mapping process.
Our results show that VATRAM achieves better precision and recall than
state-of-the-art read mappers like BWA under certain circumstances. VATRAM is
open source and can be accessed at
https://bitbucket.org/Quedenfeld/vatram-src/.
| [
{
"created": "Mon, 6 Feb 2017 16:52:05 GMT",
"version": "v1"
},
{
"created": "Wed, 8 Feb 2017 10:41:23 GMT",
"version": "v2"
}
] | 2017-02-09 | [
[
"Quedenfeld",
"Jens",
""
],
[
"Rahmann",
"Sven",
""
]
] | DNA read mapping is a ubiquitous task in bioinformatics, and many tools have been developed to solve the read mapping problem. However, there are two trends that are changing the landscape of read mapping: First, new sequencing technologies provide very long reads with high error rates (up to 15%). Second, many genetic variants in the population are known, so the reference genome is not considered as a single string over ACGT, but as a complex object containing these variants. Most existing read mappers do not handle these new circumstances appropriately. We introduce a new read mapper prototype called VATRAM that considers variants. It is based on Min-Hashing of q-gram sets of reference genome windows. Min-Hashing is one form of locality sensitive hashing. The variants are directly inserted into VATRAM's index, which leads to a fast mapping process. Our results show that VATRAM achieves better precision and recall than state-of-the-art read mappers like BWA under certain circumstances. VATRAM is open source and can be accessed at https://bitbucket.org/Quedenfeld/vatram-src/. |
2105.08288 | Shuhan Zheng | Shuhan Zheng, Zhichao Liang, Youzhi Qu, Qingyuan Wu, Haiyan Wu,
Quanying Liu | Kuramoto model based analysis reveals oxytocin effects on brain network
dynamics | null | null | null | null | q-bio.NC q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The oxytocin effects on large-scale brain networks such as Default Mode
Network (DMN) and Frontoparietal Network (FPN) have been largely studied using
fMRI data. However, these studies are mainly based on the statistical
correlation or Bayesian causality inference, lacking interpretability at
physical and neuroscience level. Here, we propose a physics-based framework of
Kuramoto model to investigate oxytocin effects on the phase dynamic neural
coupling in DMN and FPN. Testing on fMRI data of 59 participants administered
with either oxytocin or placebo, we demonstrate that oxytocin changes the
topology of brain communities in DMN and FPN, leading to higher synchronization
in the FPN and lower synchronization in the DMN, as well as a higher variance
of the coupling strength within the DMN and more flexible coupling patterns
across time. These results together indicate that oxytocin may increase the
ability to overcome the corresponding internal oscillation dispersion and
support the flexibility in neural synchrony in various social contexts,
providing new evidence for explaining the oxytocin modulated social behaviors.
Our proposed Kuramoto model-based framework can be a potential tool in network
neuroscience and offers physical and neural insights into phase dynamics of the
brain.
| [
{
"created": "Tue, 18 May 2021 05:32:07 GMT",
"version": "v1"
},
{
"created": "Wed, 26 May 2021 08:26:21 GMT",
"version": "v2"
},
{
"created": "Sun, 8 Aug 2021 10:55:12 GMT",
"version": "v3"
},
{
"created": "Sat, 9 Oct 2021 05:04:29 GMT",
"version": "v4"
}
] | 2021-10-12 | [
[
"Zheng",
"Shuhan",
""
],
[
"Liang",
"Zhichao",
""
],
[
"Qu",
"Youzhi",
""
],
[
"Wu",
"Qingyuan",
""
],
[
"Wu",
"Haiyan",
""
],
[
"Liu",
"Quanying",
""
]
] | The oxytocin effects on large-scale brain networks such as Default Mode Network (DMN) and Frontoparietal Network (FPN) have been largely studied using fMRI data. However, these studies are mainly based on the statistical correlation or Bayesian causality inference, lacking interpretability at physical and neuroscience level. Here, we propose a physics-based framework of Kuramoto model to investigate oxytocin effects on the phase dynamic neural coupling in DMN and FPN. Testing on fMRI data of 59 participants administered with either oxytocin or placebo, we demonstrate that oxytocin changes the topology of brain communities in DMN and FPN, leading to higher synchronization in the FPN and lower synchronization in the DMN, as well as a higher variance of the coupling strength within the DMN and more flexible coupling patterns across time. These results together indicate that oxytocin may increase the ability to overcome the corresponding internal oscillation dispersion and support the flexibility in neural synchrony in various social contexts, providing new evidence for explaining the oxytocin modulated social behaviors. Our proposed Kuramoto model-based framework can be a potential tool in network neuroscience and offers physical and neural insights into phase dynamics of the brain. |
1201.0339 | Ido Kanter | Roni Vardi, Avner Wallach, Evi Kopelowitz, Moshe Abeles, Shimon Marom
and Ido Kanter | Synthetic reverberating activity patterns embedded in networks of
cortical neurons | 8 pages, 5 figures | EPL (Europhysics Letters) 97, 66002 (2012) | 10.1209/0295-5075/97/66002 | null | q-bio.NC nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthetic reverberating activity patterns are experimentally generated by
stimulation of a subset of neurons embedded in a spontaneously active network
of cortical cells in-vitro. The neurons are artificially connected by means of
a conditional stimulation matrix, forming a synthetic local circuit with a
predefined programmable connectivity and time-delays. Possible uses of this
experimental design are demonstrated, analyzing the sensitivity of these
deterministic activity patterns to transmission delays and to the nature of
ongoing network dynamics.
| [
{
"created": "Sun, 1 Jan 2012 08:31:23 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Mar 2012 08:28:01 GMT",
"version": "v2"
}
] | 2012-03-27 | [
[
"Vardi",
"Roni",
""
],
[
"Wallach",
"Avner",
""
],
[
"Kopelowitz",
"Evi",
""
],
[
"Abeles",
"Moshe",
""
],
[
"Marom",
"Shimon",
""
],
[
"Kanter",
"Ido",
""
]
] | Synthetic reverberating activity patterns are experimentally generated by stimulation of a subset of neurons embedded in a spontaneously active network of cortical cells in-vitro. The neurons are artificially connected by means of a conditional stimulation matrix, forming a synthetic local circuit with a predefined programmable connectivity and time-delays. Possible uses of this experimental design are demonstrated, analyzing the sensitivity of these deterministic activity patterns to transmission delays and to the nature of ongoing network dynamics. |
1903.04332 | Emmanuel Paradis | Emmanuel Paradis (ISEM) | Interactions between spatial and temporal scales in the evolution of
dispersal rate | null | Evolutionary Ecology, Springer Verlag, 1998, 12 (2), pp.235-244 | 10.1023/A:1006539930788 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The evolution of dispersal rate is studied with a model of several local
populations linked by dispersal. Three dispersal strategies are considered
where all, half, or none of the offspring disperse. The spatial scale (number
of patches) and the temporal scale (probability of local extinction) of the
environment are critical in determining the selective advantage of the
different dispersal strategies. The results from the simulations suggest that
an interaction between group selection and individual selection results in a
different outcome in relation to the spatial and temporal scales of the
environment. Such an interaction is able to maintain a polymorphism in
dispersal strategies. The maintenance of this polymorphism is also
scale-dependent. This study suggests a mechanism for the short-term evolution
of dispersal, and provides a testable prediction of this hypothesis, namely
that loss of dispersal abilities should be more frequent in spatially more
continuous environments, or in temporally more stable environments.
| [
{
"created": "Mon, 11 Mar 2019 14:43:38 GMT",
"version": "v1"
}
] | 2019-03-12 | [
[
"Paradis",
"Emmanuel",
"",
"ISEM"
]
] | The evolution of dispersal rate is studied with a model of several local populations linked by dispersal. Three dispersal strategies are considered where all, half, or none of the offspring disperse. The spatial scale (number of patches) and the temporal scale (probability of local extinction) of the environment are critical in determining the selective advantage of the different dispersal strategies. The results from the simulations suggest that an interaction between group selection and individual selection results in a different outcome in relation to the spatial and temporal scales of the environment. Such an interaction is able to maintain a polymorphism in dispersal strategies. The maintenance of this polymorphism is also scale-dependent. This study suggests a mechanism for the short-term evolution of dispersal, and provides a testable prediction of this hypothesis, namely that loss of dispersal abilities should be more frequent in spatially more continuous environments, or in temporally more stable environments. |
1112.1391 | Diogo Melo | Gabriel Marroig, Diogo Melo, Guilherme Garcia | Modularity, Noise and natural selection | null | Evolution, Vol. 66, Issue 5, pp 1506--1524 (Wiley, May 2012) | 10.1111/j.1558-5646.2011.01555.x | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most biological systems are formed by component parts that to some degree are
inter-related. Groups of parts that are more associated among themselves and
are relatively autonomous from others are called modules. One of the
consequences of modularity is that biological systems usually present an
unequal distribution of the genetic variation among variables. Estimating the
covariance matrix that describes these systems is a difficult problem due to a
number of factors such as poor sample sizes and measurement errors. We show
that this problem will be exacerbated whenever matrix inversion is required, as
in directional selection reconstruction analysis. We explore the consequences
of varying degrees of modularity and signal-to-noise ratio on selection
reconstruction. We then present and test the efficiency of available methods
for controlling noise in matrix estimates. In our simulations, controlling
matrices for noise vastly improves the reconstruction of selection gradients.
We also perform an analysis of selection gradients reconstruction over a New
World Monkeys skull database in order to illustrate the impact of noise on such
analyses. Noise-controlled estimates render far more plausible interpretations
that are in full agreement with previous results.
| [
{
"created": "Tue, 6 Dec 2011 20:18:04 GMT",
"version": "v1"
}
] | 2013-08-12 | [
[
"Marroig",
"Gabriel",
""
],
[
"Melo",
"Diogo",
""
],
[
"Garcia",
"Guilherme",
""
]
] | Most biological systems are formed by component parts that to some degree are inter-related. Groups of parts that are more associated among themselves and are relatively autonomous from others are called modules. One of the consequences of modularity is that biological systems usually present an unequal distribution of the genetic variation among variables. Estimating the covariance matrix that describes these systems is a difficult problem due to a number of factors such as poor sample sizes and measurement errors. We show that this problem will be exacerbated whenever matrix inversion is required, as in directional selection reconstruction analysis. We explore the consequences of varying degrees of modularity and signal-to-noise ratio on selection reconstruction. We then present and test the efficiency of available methods for controlling noise in matrix estimates. In our simulations, controlling matrices for noise vastly improves the reconstruction of selection gradients. We also perform an analysis of selection gradients reconstruction over a New World Monkeys skull database in order to illustrate the impact of noise on such analyses. Noise-controlled estimates render far more plausible interpretations that are in full agreement with previous results. |
1210.6295 | Daniel Fisher | Daniel S. Fisher | Asexual Evolution Waves: Fluctuations and Universality | 3 figures | null | 10.1088/1742-5468/2013/01/P01011 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In large asexual populations, multiple beneficial mutations arise in the
population, compete, interfere with each other, and accumulate on the same
genome, before any of them fix. The resulting dynamics, although studied by
many authors, is still not fully understood, fundamentally because the effects
of fluctuations due to the small numbers of the fittest individuals are large
even in enormous populations. In this paper, branching processes and various
asymptotic methods for analyzing the stochastic dynamics are further developed
and used to obtain information on fluctuations, time dependence, and the
distributions of sizes of subpopulations, jumps in the mean fitness, and other
properties. The focus is on the behavior of a broad class of models: those with
a distribution of selective advantages of available beneficial mutations that
falls off more rapidly than exponentially. For such distributions, many aspects
of the dynamics are universal - quantitatively so for extremely large
populations. On the most important time scale that controls coalescent
properties and fluctuations of the speed, the dynamics is reduced to a simple
stochastic model that couples the peak and the high-fitness "nose" of the
fitness distribution. Extensions to other models and distributions of available
mutations are discussed briefly.
| [
{
"created": "Tue, 23 Oct 2012 17:19:10 GMT",
"version": "v1"
}
] | 2015-06-11 | [
[
"Fisher",
"Daniel S.",
""
]
] | In large asexual populations, multiple beneficial mutations arise in the population, compete, interfere with each other, and accumulate on the same genome, before any of them fix. The resulting dynamics, although studied by many authors, is still not fully understood, fundamentally because the effects of fluctuations due to the small numbers of the fittest individuals are large even in enormous populations. In this paper, branching processes and various asymptotic methods for analyzing the stochastic dynamics are further developed and used to obtain information on fluctuations, time dependence, and the distributions of sizes of subpopulations, jumps in the mean fitness, and other properties. The focus is on the behavior of a broad class of models: those with a distribution of selective advantages of available beneficial mutations that falls off more rapidly than exponentially. For such distributions, many aspects of the dynamics are universal - quantitatively so for extremely large populations. On the most important time scale that controls coalescent properties and fluctuations of the speed, the dynamics is reduced to a simple stochastic model that couples the peak and the high-fitness "nose" of the fitness distribution. Extensions to other models and distributions of available mutations are discussed briefly. |
2403.00951 | Eric Maris | Eric Maris | An internal sensory model allows for balance control based on
non-actionable proprioceptive feedback | 69 pages, 53 pages main text plus 3 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | All motor tasks with a mechanical system (a human body, a rider on a bicycle)
that is approximately linear in the part of the state space where it stays most
of the time (e.g., upright balance control) have the following property:
actionable sensory feedback allows for optimal control actions that are a
simple linear combination of the sensory feedback. When only non-actionable
sensory feedback is available, optimal control for these approximately linear
mechanical systems is based on an internal dynamical system that estimates the
states, and that can be implemented as a recurrent neural network (RNN). It
uses a sensory model to update the state estimates with the non-actionable
sensory feedback, and the weights of this RNN are fully specified by results
from optimal feedback control. This is highly relevant for muscle spindle
afferent firing rates which, under perfectly coordinated fusimotor and
skeletomotor control, scale with the exafferent joint acceleration component.
The resulting control mechanism balances a standing body and a rider-bicycle
combination using realistic parameter values and with forcing torques that are
feasible for humans.
| [
{
"created": "Fri, 1 Mar 2024 19:58:22 GMT",
"version": "v1"
},
{
"created": "Tue, 5 Mar 2024 16:58:19 GMT",
"version": "v2"
},
{
"created": "Fri, 29 Mar 2024 10:27:36 GMT",
"version": "v3"
}
] | 2024-04-01 | [
[
"Maris",
"Eric",
""
]
] | All motor tasks with a mechanical system (a human body, a rider on a bicycle) that is approximately linear in the part of the state space where it stays most of the time (e.g., upright balance control) have the following property: actionable sensory feedback allows for optimal control actions that are a simple linear combination of the sensory feedback. When only non-actionable sensory feedback is available, optimal control for these approximately linear mechanical systems is based on an internal dynamical system that estimates the states, and that can be implemented as a recurrent neural network (RNN). It uses a sensory model to update the state estimates with the non-actionable sensory feedback, and the weights of this RNN are fully specified by results from optimal feedback control. This is highly relevant for muscle spindle afferent firing rates which, under perfectly coordinated fusimotor and skeletomotor control, scale with the exafferent joint acceleration component. The resulting control mechanism balances a standing body and a rider-bicycle combination using realistic parameter values and with forcing torques that are feasible for humans. |
1406.7256 | Seth Sullivant | Colby Long and Seth Sullivant | Identifiability of 3-Class Jukes-Cantor Mixtures | 16 pages, 7 figures | null | null | null | q-bio.PE math.AG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We prove identifiability of the tree parameters of the 3-class Jukes-Cantor
mixture model. The proof uses ideas from algebraic statistics, in particular:
finding phylogenetic invariants that separate the varieties associated to
different triples of trees; computing dimensions of the resulting phylogenetic
varieties; and using the disentangling number to reduce to trees with a small
number of leaves. Symbolic computation also plays a key role in handling the
many different cases and finding relevant phylogenetic invariants.
| [
{
"created": "Fri, 27 Jun 2014 18:09:53 GMT",
"version": "v1"
},
{
"created": "Sat, 9 Aug 2014 18:48:28 GMT",
"version": "v2"
}
] | 2014-08-12 | [
[
"Long",
"Colby",
""
],
[
"Sullivant",
"Seth",
""
]
] | We prove identifiability of the tree parameters of the 3-class Jukes-Cantor mixture model. The proof uses ideas from algebraic statistics, in particular: finding phylogenetic invariants that separate the varieties associated to different triples of trees; computing dimensions of the resulting phylogenetic varieties; and using the disentangling number to reduce to trees with a small number of leaves. Symbolic computation also plays a key role in handling the many different cases and finding relevant phylogenetic invariants. |
2301.06755 | Claus Metzner | Claus Metzner, Achim Schilling, Maximilian Traxdorf, Holger Schulze,
Konstantin Tziridis and Patrick Krauss | Extracting continuous sleep depth from EEG data without machine learning | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | The human sleep-cycle has been divided into discrete sleep stages that can be
recognized in electroencephalographic (EEG) and other bio-signals by trained
specialists or machine learning systems. It is however unclear whether these
human-defined stages can be re-discovered with unsupervised methods of data
analysis, using only a minimal amount of generic pre-processing. Based on EEG
data, recorded overnight from sleeping human subjects, we investigate the
degree of clustering of the sleep stages using the General Discrimination Value
as a quantitative measure of class separability. Virtually no clustering is
found in the raw data, even after transforming the EEG signals of each
thirty-second epoch from the time domain into the more informative frequency
domain. However, a Principal Component Analysis (PCA) of these epoch-wise
frequency spectra reveals that the sleep stages separate significantly better
in the low-dimensional sub-space of certain PCA components. In particular the
component $C_1(t)$ can serve as a robust, continuous 'master variable' that
encodes the depth of sleep and therefore correlates strongly with the
'hypnogram', a common plot of the discrete sleep stages over time. Moreover,
$C_1(t)$ shows persistent trends during extended time periods where the sleep
stage is constant, suggesting that sleep may be better understood as a
continuum. These intriguing properties of $C_1(t)$ are not only relevant for
understanding brain dynamics during sleep, but might also be exploited in
low-cost single-channel sleep tracking devices for private and clinical use.
| [
{
"created": "Tue, 17 Jan 2023 08:39:34 GMT",
"version": "v1"
}
] | 2023-01-18 | [
[
"Metzner",
"Claus",
""
],
[
"Schilling",
"Achim",
""
],
[
"Traxdorf",
"Maximilian",
""
],
[
"Schulze",
"Holger",
""
],
[
"Tziridis",
"Konstantin",
""
],
[
"Krauss",
"Patrick",
""
]
] | The human sleep-cycle has been divided into discrete sleep stages that can be recognized in electroencephalographic (EEG) and other bio-signals by trained specialists or machine learning systems. It is however unclear whether these human-defined stages can be re-discovered with unsupervised methods of data analysis, using only a minimal amount of generic pre-processing. Based on EEG data, recorded overnight from sleeping human subjects, we investigate the degree of clustering of the sleep stages using the General Discrimination Value as a quantitative measure of class separability. Virtually no clustering is found in the raw data, even after transforming the EEG signals of each thirty-second epoch from the time domain into the more informative frequency domain. However, a Principal Component Analysis (PCA) of these epoch-wise frequency spectra reveals that the sleep stages separate significantly better in the low-dimensional sub-space of certain PCA components. In particular the component $C_1(t)$ can serve as a robust, continuous 'master variable' that encodes the depth of sleep and therefore correlates strongly with the 'hypnogram', a common plot of the discrete sleep stages over time. Moreover, $C_1(t)$ shows persistent trends during extended time periods where the sleep stage is constant, suggesting that sleep may be better understood as a continuum. These intriguing properties of $C_1(t)$ are not only relevant for understanding brain dynamics during sleep, but might also be exploited in low-cost single-channel sleep tracking devices for private and clinical use. |
1411.0159 | Viktoras Veitas Mr. | David Weinbaum (Weaver) and Viktoras Veitas | Synthetic Cognitive Development: where intelligence comes from | Preprint. 28 pages LaTeX, 5 figures, 1 table; en-US proofreading;
section 4.2 rewritten; bibliography corrected | null | null | null | q-bio.NC nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The human cognitive system is a remarkable exemplar of a general intelligent
system whose competence is not confined to a specific problem domain.
Evidently, general cognitive competences are a product of a prolonged and
complex process of cognitive development. Therefore, the process of cognitive
development is a primary key to understanding the emergence of intelligent
behavior. This paper develops the theoretical foundations for a model that
generalizes the process of cognitive development. The model aims to provide a
realistic scheme for the synthesis of scalable cognitive systems with an
open-ended range of capabilities. Major concepts and theories of human
cognitive development are introduced and briefly explored focusing on the
enactive approach to cognition and the concept of sense-making. The initial
scheme of human cognitive development is then generalized by introducing the
philosophy of individuation and the abstract mechanism of transduction. The
theory of individuation provides the ground for the necessary paradigmatic
shift from cognitive systems as given products to cognitive development as a
formative process of self-organization. Next, the conceptual model is specified
as a scalable scheme of networks of agents. The mechanisms of individuation are
formulated in context independent information theoretical terms. Finally, the
paper discusses two concrete aspects of the generative model -- mechanisms of
transduction and value modulating systems. These are topics of further research
towards a computationally realizable model.
| [
{
"created": "Sat, 1 Nov 2014 19:28:51 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Dec 2014 18:57:14 GMT",
"version": "v2"
}
] | 2014-12-16 | [
[
"Weinbaum",
"David",
"",
"Weaver"
],
[
"Veitas",
"Viktoras",
""
]
] | The human cognitive system is a remarkable exemplar of a general intelligent system whose competence is not confined to a specific problem domain. Evidently, general cognitive competences are a product of a prolonged and complex process of cognitive development. Therefore, the process of cognitive development is a primary key to understanding the emergence of intelligent behavior. This paper develops the theoretical foundations for a model that generalizes the process of cognitive development. The model aims to provide a realistic scheme for the synthesis of scalable cognitive systems with an open-ended range of capabilities. Major concepts and theories of human cognitive development are introduced and briefly explored focusing on the enactive approach to cognition and the concept of sense-making. The initial scheme of human cognitive development is then generalized by introducing the philosophy of individuation and the abstract mechanism of transduction. The theory of individuation provides the ground for the necessary paradigmatic shift from cognitive systems as given products to cognitive development as a formative process of self-organization. Next, the conceptual model is specified as a scalable scheme of networks of agents. The mechanisms of individuation are formulated in context independent information theoretical terms. Finally, the paper discusses two concrete aspects of the generative model -- mechanisms of transduction and value modulating systems. These are topics of further research towards a computationally realizable model. |
2301.09569 | Alessandro Sergi | Alessandro Sergi, Antonino Messina, Carmelo M. Vicario, Gabriella
Martino | A Quantum-Classical Model of Brain Dynamics | Submitted to Entropy [MDPI], Special Issue "Quantum Processes in
Living Systems" | null | 10.3390/e25040592 | null | q-bio.NC quant-ph | http://creativecommons.org/licenses/by/4.0/ | The study of the human psyche has elucidated a bipartite structure of
cognition reflecting the quantum-classical nature of any process that generates
knowledge and learning governed by brain activity. Acknowledging the importance
of such a finding for modelization, we posit an approach to study the brain by
means of the quantum-classical dynamics of a Mixed Weyl symbol. The Mixed Weyl
symbol is used to describe brain processes at the microscopic level and
provides a link to the results of measurements made at the mesoscopic scale.
Within this approach, quantum variables (such as, for example, nuclear and
electron spins, dipole momenta of particles or molecules, tunneling degrees of
freedom, etc.) may be represented by spinors, while the electromagnetic fields and
phonon modes involved in the processes are treated either classically or
semi-classically, by also considering quantum zero-point fluctuations.
Zero-point quantum effects can be incorporated into numerical simulations by
controlling the temperature of each field mode via coupling to a dedicated
Nos\`e-Hoover chain thermostat. The temperature of each thermostat is chosen in
order to reproduce quantum statistics in the canonical ensemble. In this first
paper, we introduce a quantum-classical model of brain dynamics, clarifying its
mathematical structure and focusing the discussion on its predictive value.
Analytical consequences of the model are not reported in this paper, since they
are left for future work. Our treatment incorporates compatible features of
three well-known quantum approaches to brain dynamics - namely the
electromagnetic field theory approach, the orchestrated objective reduction
theory, and the dissipative quantum model of the brain - and hints at
convincing arguments that sustain the existence of quantum-classical processes
in the brain activity. All three models are reviewed.
| [
{
"created": "Tue, 17 Jan 2023 15:16:21 GMT",
"version": "v1"
},
{
"created": "Sat, 28 Jan 2023 02:06:55 GMT",
"version": "v2"
},
{
"created": "Sun, 26 Feb 2023 23:06:09 GMT",
"version": "v3"
},
{
"created": "Thu, 30 Mar 2023 10:31:39 GMT",
"version": "v4"
}
] | 2023-04-19 | [
[
"Sergi",
"Alessandro",
""
],
[
"Messina",
"Antonino",
""
],
[
"Vicario",
"Carmelo M.",
""
],
[
"Martino",
"Gabriella",
""
]
] | The study of the human psyche has elucidated a bipartite structure of cognition reflecting the quantum-classical nature of any process that generates knowledge and learning governed by brain activity. Acknowledging the importance of such a finding for modelization, we posit an approach to study the brain by means of the quantum-classical dynamics of a Mixed Weyl symbol. The Mixed Weyl symbol is used to describe brain processes at the microscopic level and provides a link to the results of measurements made at the mesoscopic scale. Within this approach, quantum variables (such as, for example, nuclear and electron spins, dipole momenta of particles or molecules, tunneling degrees of freedom, etc.) may be represented by spinors, while the electromagnetic fields and phonon modes involved in the processes are treated either classically or semi-classically, by also considering quantum zero-point fluctuations. Zero-point quantum effects can be incorporated into numerical simulations by controlling the temperature of each field mode via coupling to a dedicated Nos\`e-Hoover chain thermostat. The temperature of each thermostat is chosen in order to reproduce quantum statistics in the canonical ensemble. In this first paper, we introduce a quantum-classical model of brain dynamics, clarifying its mathematical structure and focusing the discussion on its predictive value. Analytical consequences of the model are not reported in this paper, since they are left for future work. Our treatment incorporates compatible features of three well-known quantum approaches to brain dynamics - namely the electromagnetic field theory approach, the orchestrated objective reduction theory, and the dissipative quantum model of the brain - and hints at convincing arguments that sustain the existence of quantum-classical processes in the brain activity. All three models are reviewed. |
1311.1481 | Chiara Poletto Miss | Chiara Poletto, Camille Pelat, Daniel Levy-Bruhl, Yazdan Yazdanpanah,
Pierre-Yves Boelle, Vittoria Colizza | Assessment of the MERS-CoV epidemic situation in the Middle East region | in press on Eurosurveillance, 16 pages, 3 figures | Eurosurveillance Volume 19, Issue 23, 12/Jun/2014 | 10.2807/1560-7917.ES2014.19.23.20824 | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The appearance of a novel coronavirus named Middle East (ME) Respiratory
Syndrome Coronavirus (MERS-CoV) has raised global public health concerns
regarding the current situation and its future evolution. Here we propose an
integrative maximum likelihood analysis of both cluster data in the ME region
and importations in Europe to assess transmission scenario and incidence of
sporadic infections. Our approach is based on a spatial-transmission model
integrating mobility data worldwide and allows for variations in the
zoonotic/environmental transmission and underascertainment. Maximum likelihood
estimates for the ME region indicate the occurrence of a subcritical epidemic
(R=0.50, 95% confidence interval (CI) 0.30-0.77) associated with a 0.28 (95% CI
0.12-0.85) daily rate of sporadic introductions. Infections in the region
appear to be mainly dominated by zoonotic/environmental transmissions, with
possible underascertainment (95% CI of estimated to observed sporadic cases in
the range 1.03-7.32). No time evolution of the situation emerges. Analyses of
flight passenger data from the region indicate areas at high risk of
importation. While dismissing an immediate threat for global health security,
this analysis provides a baseline scenario for future reference and updates,
suggests reinforced surveillance to limit underascertainment, and calls for
increased alertness in high-risk areas worldwide.
| [
{
"created": "Wed, 6 Nov 2013 19:45:42 GMT",
"version": "v1"
},
{
"created": "Mon, 5 May 2014 08:35:25 GMT",
"version": "v2"
}
] | 2020-05-30 | [
[
"Poletto",
"Chiara",
""
],
[
"Pelat",
"Camille",
""
],
[
"Levy-Bruhl",
"Daniel",
""
],
[
"Yazdanpanah",
"Yazdan",
""
],
[
"Boelle",
"Pierre-Yves",
""
],
[
"Colizza",
"Vittoria",
""
]
] | The appearance of a novel coronavirus named Middle East (ME) Respiratory Syndrome Coronavirus (MERS-CoV) has raised global public health concerns regarding the current situation and its future evolution. Here we propose an integrative maximum likelihood analysis of both cluster data in the ME region and importations in Europe to assess transmission scenario and incidence of sporadic infections. Our approach is based on a spatial-transmission model integrating mobility data worldwide and allows for variations in the zoonotic/environmental transmission and underascertainment. Maximum likelihood estimates for the ME region indicate the occurrence of a subcritical epidemic (R=0.50, 95% confidence interval (CI) 0.30-0.77) associated with a 0.28 (95% CI 0.12-0.85) daily rate of sporadic introductions. Infections in the region appear to be mainly dominated by zoonotic/environmental transmissions, with possible underascertainment (95% CI of estimated to observed sporadic cases in the range 1.03-7.32). No time evolution of the situation emerges. Analyses of flight passenger data from the region indicate areas at high risk of importation. While dismissing an immediate threat for global health security, this analysis provides a baseline scenario for future reference and updates, suggests reinforced surveillance to limit underascertainment, and calls for increased alertness in high-risk areas worldwide. |
0901.0990 | Wojciech Waga | Jakub Kowalski, Wojciech Waga, Marta Zawierta, Stanislaw Cebrat | Phase transition in the genome evolution favours non-random distribution
of genes on chromosomes | 13 pages, 7 figures, publication | null | 10.1142/S0129183109014370 | null | q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have used the Monte Carlo based computer models to show that selection
pressure could affect the distribution of recombination hotspots along the
chromosome. Close to the critical crossover rate, where genomes may switch
between Darwinian purifying selection and complementation of haplotypes, the
distribution of recombination events and the force of selection exerted on
genes affect the structure of chromosomes. The order of expression of genes
and their location on the chromosome may determine the extinction or survival
of competing populations.
| [
{
"created": "Thu, 8 Jan 2009 08:57:17 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Kowalski",
"Jakub",
""
],
[
"Waga",
"Wojciech",
""
],
[
"Zawierta",
"Marta",
""
],
[
"Cebrat",
"Stanislaw",
""
]
] | We have used the Monte Carlo based computer models to show that selection pressure could affect the distribution of recombination hotspots along the chromosome. Close to the critical crossover rate, where genomes may switch between Darwinian purifying selection and complementation of haplotypes, the distribution of recombination events and the force of selection exerted on genes affect the structure of chromosomes. The order of expression of genes and their location on the chromosome may determine the extinction or survival of competing populations. |
2101.04411 | Christoph M Augustin | Laura Marx, Justyna A. Niestrawska, Matthias A. F. Gsell, Federica
Caforio, Gernot Plank, Christoph M. Augustin | Efficient identification of myocardial material parameters and the
stress-free reference configuration for patient-specific human heart models | This research has received funding from the European Union's Horizon
2020 research and innovation programme under the ERA-NET co-fund action No.
680969 (ERA-CVD SICVALVES) funded by the Austrian Science Fund (FWF), Grant I
4652-B | null | null | null | q-bio.TO physics.bio-ph physics.med-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Image-based computational models of the heart represent a powerful tool to
shed new light on the mechanisms underlying physiological and pathological
conditions in cardiac function and to improve diagnosis and therapy planning.
However, in order to enable the clinical translation of such models, it is
crucial to develop personalized models that are able to reproduce the
physiological reality of a given patient. There have been numerous
contributions in experimental and computational biomechanics to characterize
the passive behavior of the myocardium. However, most of these studies suffer
from severe limitations and are not applicable to high-resolution geometries.
In this work, we present a novel methodology to perform an automated
identification of in vivo properties of passive cardiac biomechanics. The
highly-efficient algorithm fits material parameters against the shape of a
patient-specific approximation of the end-diastolic pressure-volume relation
(EDPVR). Simultaneously, a stress-free reference configuration is generated,
where a novel fail-safe feature to improve convergence and robustness is
implemented. Only clinical image data or previously generated meshes at one
time point during diastole and one measured data point of the EDPVR are
required as an input. The proposed method can be straightforwardly coupled to
existing finite element (FE) software packages and is applicable to different
constitutive laws and FE formulations. Sensitivity analysis demonstrates that
the algorithm is robust with respect to initial input parameters.
| [
{
"created": "Tue, 12 Jan 2021 11:18:56 GMT",
"version": "v1"
},
{
"created": "Thu, 14 Jan 2021 15:33:43 GMT",
"version": "v2"
}
] | 2021-01-15 | [
[
"Marx",
"Laura",
""
],
[
"Niestrawska",
"Justyna A.",
""
],
[
"Gsell",
"Matthias A. F.",
""
],
[
"Caforio",
"Federica",
""
],
[
"Plank",
"Gernot",
""
],
[
"Augustin",
"Christoph M.",
""
]
] | Image-based computational models of the heart represent a powerful tool to shed new light on the mechanisms underlying physiological and pathological conditions in cardiac function and to improve diagnosis and therapy planning. However, in order to enable the clinical translation of such models, it is crucial to develop personalized models that are able to reproduce the physiological reality of a given patient. There have been numerous contributions in experimental and computational biomechanics to characterize the passive behavior of the myocardium. However, most of these studies suffer from severe limitations and are not applicable to high-resolution geometries. In this work, we present a novel methodology to perform an automated identification of in vivo properties of passive cardiac biomechanics. The highly-efficient algorithm fits material parameters against the shape of a patient-specific approximation of the end-diastolic pressure-volume relation (EDPVR). Simultaneously, a stress-free reference configuration is generated, where a novel fail-safe feature to improve convergence and robustness is implemented. Only clinical image data or previously generated meshes at one time point during diastole and one measured data point of the EDPVR are required as an input. The proposed method can be straightforwardly coupled to existing finite element (FE) software packages and is applicable to different constitutive laws and FE formulations. Sensitivity analysis demonstrates that the algorithm is robust with respect to initial input parameters. |
1308.1446 | Kimberly Schlesinger | Kimberly J. Schlesinger, Sean P. Stromberg, and Jean M. Carlson | Coevolutionary immune system dynamics driving pathogen speciation | main article: 16 pages, 5 figures; supporting information: 3 pages | PLoS ONE 9(7): e102821 | 10.1371/journal.pone.0102821 | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce and analyze a within-host dynamical model of the coevolution
between rapidly mutating pathogens and the adaptive immune response. Pathogen
mutation and a homeostatic constraint on lymphocytes both play a role in
allowing the development of chronic infection, rather than quick pathogen
clearance. The dynamics of these chronic infections display emergent structure,
including branching patterns corresponding to asexual pathogen speciation,
which is fundamentally driven by the coevolutionary interaction. Over time,
continued branching creates an increasingly fragile immune system, and leads to
the eventual catastrophic loss of immune control.
| [
{
"created": "Tue, 6 Aug 2013 23:35:00 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Aug 2013 20:42:32 GMT",
"version": "v2"
},
{
"created": "Mon, 4 Aug 2014 23:40:20 GMT",
"version": "v3"
}
] | 2014-08-06 | [
[
"Schlesinger",
"Kimberly J.",
""
],
[
"Stromberg",
"Sean P.",
""
],
[
"Carlson",
"Jean M.",
""
]
] | We introduce and analyze a within-host dynamical model of the coevolution between rapidly mutating pathogens and the adaptive immune response. Pathogen mutation and a homeostatic constraint on lymphocytes both play a role in allowing the development of chronic infection, rather than quick pathogen clearance. The dynamics of these chronic infections display emergent structure, including branching patterns corresponding to asexual pathogen speciation, which is fundamentally driven by the coevolutionary interaction. Over time, continued branching creates an increasingly fragile immune system, and leads to the eventual catastrophic loss of immune control. |
1508.07244 | J. C. Phillips | J. C. Phillips | Vaccine escape in 2013-4 and the hydropathic evolution of glycoproteins
of A/H3N2 viruses | 12 pages, 3 figures | null | 10.1016/j.physa.2016.02.040 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | More virulent strains of influenza virus subtypes H1N1 appeared widely in
2007 and H3N2 in 2011, and especially 2013-4, when the effectiveness of the
H3N2 vaccine decreased nearly to zero. The amino acid differences of
neuraminidase from prior less virulent strains appear to be small (<1%) when
tabulated through sequence alignments and counting site identities and
similarities. Here we show how analyzing fractal hydropathic forces responsible
for neuraminidase globular compaction and modularity quantifies the mutational
origins of increased virulence. It also predicts vaccine escape and specifies
optimized targets for the 2015 H3N2 vaccine different from the WHO target.
Unlike some earlier methods based on measuring hemagglutinin antigenic drift
and ferret sera, which take several years, cover only a few candidate strains,
and are ambiguous, the new methods are timely and can be completed, using NCBI
and GISAID amino acid sequences only, in a few days.
| [
{
"created": "Thu, 27 Aug 2015 19:24:51 GMT",
"version": "v1"
}
] | 2016-04-20 | [
[
"Phillips",
"J. C.",
""
]
] | More virulent strains of influenza virus subtypes H1N1 appeared widely in 2007 and H3N2 in 2011, and especially 2013-4, when the effectiveness of the H3N2 vaccine decreased nearly to zero. The amino acid differences of neuraminidase from prior less virulent strains appear to be small (<1%) when tabulated through sequence alignments and counting site identities and similarities. Here we show how analyzing fractal hydropathic forces responsible for neuraminidase globular compaction and modularity quantifies the mutational origins of increased virulence. It also predicts vaccine escape and specifies optimized targets for the 2015 H3N2 vaccine different from the WHO target. Unlike some earlier methods based on measuring hemagglutinin antigenic drift and ferret sera, which take several years, cover only a few candidate strains, and are ambiguous, the new methods are timely and can be completed, using NCBI and GISAID amino acid sequences only, in a few days. |
1810.03602 | Luis Aparicio | Luis Aparicio, Mykola Bordyuh, Andrew J. Blumberg, Raul Rabadan | Quasi-universality in single-cell sequencing data | Main text has 18 pages and 5 figures. Supplementary material
(methods) has 21 pages and 16 figures | null | null | null | q-bio.QM cond-mat.stat-mech math.PR physics.app-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The development of single-cell technologies provides the opportunity to
identify new cellular states and reconstruct novel cell-to-cell relationships.
Applications range from understanding the transcriptional and epigenetic
processes involved in metazoan development to characterizing distinct cell
types in heterogeneous populations like cancers or immune cells. However,
analysis of the data is impeded by its unknown intrinsic biological and
technical variability together with its sparseness; these factors complicate
the identification of true biological signals amidst artifact and noise. Here
we show that, across technologies, roughly 95% of the eigenvalues derived from
each single-cell data set can be described by universal distributions predicted
by Random Matrix Theory. Interestingly, 5% of the spectrum shows deviations
from these distributions and presents a phenomenon known as eigenvector
localization, where information tightly concentrates in groups of cells. Some
of the localized eigenvectors reflect underlying biological signal, and some
are simply a consequence of the sparsity of single cell data; roughly 3% is
artifactual. Based on the universal distributions and a technique for detecting
sparsity induced localization, we present a strategy to identify the residual
2% of directions that encode biological information and thereby denoise
single-cell data. We demonstrate the effectiveness of this approach by
comparing with standard single-cell data analysis techniques in a variety of
examples with marked cell populations.
| [
{
"created": "Fri, 5 Oct 2018 19:44:55 GMT",
"version": "v1"
}
] | 2018-10-11 | [
[
"Aparicio",
"Luis",
""
],
[
"Bordyuh",
"Mykola",
""
],
[
"Blumberg",
"Andrew J.",
""
],
[
"Rabadan",
"Raul",
""
]
] | The development of single-cell technologies provides the opportunity to identify new cellular states and reconstruct novel cell-to-cell relationships. Applications range from understanding the transcriptional and epigenetic processes involved in metazoan development to characterizing distinct cell types in heterogeneous populations like cancers or immune cells. However, analysis of the data is impeded by its unknown intrinsic biological and technical variability together with its sparseness; these factors complicate the identification of true biological signals amidst artifact and noise. Here we show that, across technologies, roughly 95% of the eigenvalues derived from each single-cell data set can be described by universal distributions predicted by Random Matrix Theory. Interestingly, 5% of the spectrum shows deviations from these distributions and presents a phenomenon known as eigenvector localization, where information tightly concentrates in groups of cells. Some of the localized eigenvectors reflect underlying biological signal, and some are simply a consequence of the sparsity of single cell data; roughly 3% is artifactual. Based on the universal distributions and a technique for detecting sparsity induced localization, we present a strategy to identify the residual 2% of directions that encode biological information and thereby denoise single-cell data. We demonstrate the effectiveness of this approach by comparing with standard single-cell data analysis techniques in a variety of examples with marked cell populations. |
1202.4482 | David Balduzzi | David Balduzzi, Pedro A Ortega, Michel Besserve | Metabolic cost as an organizing principle for cooperative learning | 14 pages, 2 figures, to appear in Advances in Complex Systems | null | null | null | q-bio.NC cs.LG nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates how neurons can use metabolic cost to facilitate
learning at a population level. Although decision-making by individual neurons
has been extensively studied, questions regarding how neurons should behave to
cooperate effectively remain largely unaddressed. Under assumptions that
capture a few basic features of cortical neurons, we show that constraining
reward maximization by metabolic cost aligns the information content of actions
with their expected reward. Thus, metabolic cost provides a mechanism whereby
neurons encode expected reward into their outputs. Further, aside from reducing
energy expenditures, imposing a tight metabolic constraint also increases the
accuracy of empirical estimates of rewards, increasing the robustness of
distributed learning. Finally, we present two implementations of metabolically
constrained learning that confirm our theoretical finding. These results
suggest that metabolic cost may be an organizing principle underlying the
neural code, and may also provide a useful guide to the design and analysis of
other cooperating populations.
| [
{
"created": "Mon, 20 Feb 2012 22:02:16 GMT",
"version": "v1"
},
{
"created": "Sat, 9 Feb 2013 21:34:51 GMT",
"version": "v2"
}
] | 2013-02-12 | [
[
"Balduzzi",
"David",
""
],
[
"Ortega",
"Pedro A",
""
],
[
"Besserve",
"Michel",
""
]
] | This paper investigates how neurons can use metabolic cost to facilitate learning at a population level. Although decision-making by individual neurons has been extensively studied, questions regarding how neurons should behave to cooperate effectively remain largely unaddressed. Under assumptions that capture a few basic features of cortical neurons, we show that constraining reward maximization by metabolic cost aligns the information content of actions with their expected reward. Thus, metabolic cost provides a mechanism whereby neurons encode expected reward into their outputs. Further, aside from reducing energy expenditures, imposing a tight metabolic constraint also increases the accuracy of empirical estimates of rewards, increasing the robustness of distributed learning. Finally, we present two implementations of metabolically constrained learning that confirm our theoretical finding. These results suggest that metabolic cost may be an organizing principle underlying the neural code, and may also provide a useful guide to the design and analysis of other cooperating populations. |
1512.02124 | Nils Becker | Nils B. Becker, Andrew Mugler, Pieter Rein ten Wolde | Optimal Prediction by Cellular Signaling Networks | 5 pages, 4 figures; 15 supplementary pages with 12 figures | null | 10.1103/PhysRevLett.115.258103 | null | q-bio.MN q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Living cells can enhance their fitness by anticipating environmental change.
We study how accurately linear signaling networks in cells can predict future
signals. We find that maximal predictive power results from a combination of
input-noise suppression, linear extrapolation, and selective readout of
correlated past signal values. Single-layer networks generate exponential
response kernels, which suffice to predict Markovian signals optimally.
Multilayer networks allow oscillatory kernels that can optimally predict
non-Markovian signals. At low noise, these kernels exploit the signal
derivative for extrapolation, while at high noise, they capitalize on signal
values in the past that are strongly correlated with the future signal. We show
how the common motifs of negative feedback and incoherent feed-forward can
implement these optimal response functions. Simulations reveal that E. coli can
reliably predict concentration changes for chemotaxis, and that the integration
time of its response kernel arises from a trade-off between rapid response and
noise suppression.
| [
{
"created": "Mon, 7 Dec 2015 17:10:49 GMT",
"version": "v1"
}
] | 2016-01-20 | [
[
"Becker",
"Nils B.",
""
],
[
"Mugler",
"Andrew",
""
],
[
"Wolde",
"Pieter Rein ten",
""
]
] | Living cells can enhance their fitness by anticipating environmental change. We study how accurately linear signaling networks in cells can predict future signals. We find that maximal predictive power results from a combination of input-noise suppression, linear extrapolation, and selective readout of correlated past signal values. Single-layer networks generate exponential response kernels, which suffice to predict Markovian signals optimally. Multilayer networks allow oscillatory kernels that can optimally predict non-Markovian signals. At low noise, these kernels exploit the signal derivative for extrapolation, while at high noise, they capitalize on signal values in the past that are strongly correlated with the future signal. We show how the common motifs of negative feedback and incoherent feed-forward can implement these optimal response functions. Simulations reveal that E. coli can reliably predict concentration changes for chemotaxis, and that the integration time of its response kernel arises from a trade-off between rapid response and noise suppression. |
2310.07275 | Quentin Richard | Quentin Richard (IMAG), Marc Choisy (OUCRU), Rams\`es Djidjou-Demasse
(MIVEGEC), Thierry Lef\`evre (MIVEGEC) | Epidemiological impacts of age structures on human malaria transmission | null | null | null | null | q-bio.PE math.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Malaria is one of the most common mosquito-borne diseases widespread in
tropical and subtropical regions, causing thousands of deaths every year in the
world. In a previous paper, we formulated an age-structured model containing
three structural variables: (i) the chronological age of human and mosquito
populations, (ii) the time since they are infected, and (iii) humans' waning
immunity (i.e. the progressive loss of protective antibodies after recovery).
In the present paper, we expand the analysis of this age-structured model and
focus on the derivation of entomological and epidemiological results commonly
used in the literature, following the works of Smith and McKenzie. We
generalize their results to the age-structured case. In order to quantify the
impact of neglecting structuring variables such as chronological age, we
assigned values from the literature to our model parameters. While some
parameter values are readily accessible from the literature, at least those
about the human population, the parameters concerning mosquitoes are less
commonly documented and the values of a number of them (e.g. mosquito survival
in the presence or in absence of infection) can be discussed extensively.
| [
{
"created": "Wed, 11 Oct 2023 07:57:05 GMT",
"version": "v1"
},
{
"created": "Wed, 22 May 2024 09:05:35 GMT",
"version": "v2"
}
] | 2024-05-24 | [
[
"Richard",
"Quentin",
"",
"IMAG"
],
[
"Choisy",
"Marc",
"",
"OUCRU"
],
[
"Djidjou-Demasse",
"Ramsès",
"",
"MIVEGEC"
],
[
"Lefèvre",
"Thierry",
"",
"MIVEGEC"
]
] | Malaria is one of the most common mosquito-borne diseases widespread in tropical and subtropical regions, causing thousands of deaths every year in the world. In a previous paper, we formulated an age-structured model containing three structural variables: (i) the chronological age of human and mosquito populations, (ii) the time since they are infected, and (iii) humans' waning immunity (i.e. the progressive loss of protective antibodies after recovery). In the present paper, we expand the analysis of this age-structured model and focus on the derivation of entomological and epidemiological results commonly used in the literature, following the works of Smith and McKenzie. We generalize their results to the age-structured case. In order to quantify the impact of neglecting structuring variables such as chronological age, we assigned values from the literature to our model parameters. While some parameter values are readily accessible from the literature, at least those about the human population, the parameters concerning mosquitoes are less commonly documented and the values of a number of them (e.g. mosquito survival in the presence or in absence of infection) can be discussed extensively. |
1309.4283 | Thomas Pfeil | Thomas Pfeil, Anne-Christine Scherzer, Johannes Schemmel and Karlheinz
Meier | Neuromorphic Learning towards Nano Second Precision | 7 pages, 7 figures, presented at IJCNN 2013 in Dallas, TX, USA. IJCNN
2013. Corrected version with updated STDP curves IJCNN 2013 | Neural Networks (IJCNN), The 2013 International Joint Conference
on , pp. 1-5, 4-9 Aug. 2013 | 10.1109/IJCNN.2013.6706828 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal coding is one approach to representing information in spiking neural
networks. An example of its application is the location of sounds by barn owls
that requires especially precise temporal coding. Dependent upon the azimuthal
angle, the arrival times of sound signals are shifted between both ears. In
order to determine these interaural time differences, the phase difference of
the signals is measured. We implemented this biologically inspired network on a
neuromorphic hardware system and demonstrate spike-timing dependent plasticity
on an analog, highly accelerated hardware substrate. Our neuromorphic
implementation enables the resolution of time differences of less than 50 ns.
On-chip Hebbian learning mechanisms select inputs from a pool of neurons which
code for the same sound frequency. Hence, noise caused by different synaptic
delays across these inputs is reduced. Furthermore, learning compensates for
variations on neuronal and synaptic parameters caused by device mismatch
intrinsic to the neuromorphic substrate.
| [
{
"created": "Tue, 17 Sep 2013 12:31:48 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Sep 2013 06:55:16 GMT",
"version": "v2"
}
] | 2014-01-24 | [
[
"Pfeil",
"Thomas",
""
],
[
"Scherzer",
"Anne-Christine",
""
],
[
"Schemmel",
"Johannes",
""
],
[
"Meier",
"Karlheinz",
""
]
] | Temporal coding is one approach to representing information in spiking neural networks. An example of its application is the location of sounds by barn owls that requires especially precise temporal coding. Dependent upon the azimuthal angle, the arrival times of sound signals are shifted between both ears. In order to determine these interaural time differences, the phase difference of the signals is measured. We implemented this biologically inspired network on a neuromorphic hardware system and demonstrate spike-timing dependent plasticity on an analog, highly accelerated hardware substrate. Our neuromorphic implementation enables the resolution of time differences of less than 50 ns. On-chip Hebbian learning mechanisms select inputs from a pool of neurons which code for the same sound frequency. Hence, noise caused by different synaptic delays across these inputs is reduced. Furthermore, learning compensates for variations on neuronal and synaptic parameters caused by device mismatch intrinsic to the neuromorphic substrate. |
1801.05767 | Denis Michel | Denis Michel and Philippe Ruelle | Polylogarithmic equilibrium treatment of molecular aggregation and
critical concentrations | 14 pages, 2 figures | Phys. Chem. Chem. Phys. 2017 Feb 15;19(7):5273-5284 | 10.1039/C6CP08369B | null | q-bio.QM cond-mat.soft | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A full equilibrium treatment of molecular aggregation is presented for
prototypes of 1D and 3D aggregates, with and without nucleation. By skipping
complex kinetic parameters like aggregate size-dependent diffusion, the
equilibrium treatment allows direct prediction of time-independent quantities
such as critical concentrations. The relationships between the macroscopic
equilibrium constants for the different paths are first established by
statistical corrections and so as to comply with the detailed balance
constraints imposed by nucleation, and the composition of the mixture resulting
from homogeneous aggregation is then analyzed using the polylogarithm function.
Several critical concentrations are distinguished: the residual monomer
concentration in equilibrium (RMC) and the critical nucleation concentration
(CNC), that is the threshold concentration of total subunits necessary for
initiating aggregation. When increasing the concentration of total subunits,
the RMC converges more strongly to its asymptotic value, the equilibrium
constant of depolymerization, for 3D aggregates and in case of nucleation. The
CNC moderately depends on the number of subunits in the nucleus, but sharply
increases with the difference between the equilibrium constants of
polymerization and nucleation. As the RMC and CNC can be numerically but not
analytically determined, ansatz equations connecting them to thermodynamic
parameters are proposed.
| [
{
"created": "Sun, 24 Dec 2017 08:21:13 GMT",
"version": "v1"
}
] | 2018-01-24 | [
[
"Michel",
"Denis",
""
],
[
"Ruelle",
"Philippe",
""
]
] | A full equilibrium treatment of molecular aggregation is presented for prototypes of 1D and 3D aggregates, with and without nucleation. By skipping complex kinetic parameters like aggregate size-dependent diffusion, the equilibrium treatment allows direct prediction of time-independent quantities such as critical concentrations. The relationships between the macroscopic equilibrium constants for the different paths are first established by statistical corrections and so as to comply with the detailed balance constraints imposed by nucleation, and the composition of the mixture resulting from homogeneous aggregation is then analyzed using the polylogarithm function. Several critical concentrations are distinguished: the residual monomer concentration in equilibrium (RMC) and the critical nucleation concentration (CNC), that is the threshold concentration of total subunits necessary for initiating aggregation. When increasing the concentration of total subunits, the RMC converges more strongly to its asymptotic value, the equilibrium constant of depolymerization, for 3D aggregates and in case of nucleation. The CNC moderately depends on the number of subunits in the nucleus, but sharply increases with the difference between the equilibrium constants of polymerization and nucleation. As the RMC and CNC can be numerically but not analytically determined, ansatz equations connecting them to thermodynamic parameters are proposed. |
2408.07064 | Hamza Coban | Hamza Coban | On Networks and their Applications: Stability of Gene Regulatory
Networks and Gene Function Prediction using Autoencoders | null | null | null | null | q-bio.MN physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | We prove that nested canalizing functions are the minimum-sensitivity Boolean
functions for any activity ratio and we determine the functional form of this
boundary which has a nontrivial fractal structure. We further observe that the
majority of the gene regulatory functions found in known biological networks
(submitted to the Cell Collective database) lie on the line of minimum
sensitivity which paradoxically remains largely in the unstable regime. Our
results provide a quantitative basis for the argument that an evolutionary
preference for nested canalizing functions in gene regulation (e.g., for higher
robustness) and for elasticity of gene activity are sufficient for
concentration of such systems near the "edge of chaos." The original structure
of gene regulatory networks is unknown due to the undiscovered functions of
some genes. Most gene function discovery approaches make use of unsupervised
clustering or classification methods that discover and exploit patterns in gene
expression profiles. However, existing knowledge in the field derives from
multiple and diverse sources. Incorporating this know-how for novel gene
function prediction can, therefore, be expected to improve such predictions. We
here propose a function-specific novel gene discovery tool that uses a
semi-supervised autoencoder. Our method is thus able to address the needs of a
modern researcher whose expertise is typically confined to a specific
functional domain. Lastly, the dynamics of unorthodox learning approaches like
biologically plausible learning algorithms are investigated and found to
exhibit a general form of Einstein relation.
| [
{
"created": "Tue, 13 Aug 2024 17:57:11 GMT",
"version": "v1"
}
] | 2024-08-14 | [
[
"Coban",
"Hamza",
""
]
] | We prove that nested canalizing functions are the minimum-sensitivity Boolean functions for any activity ratio and we determine the functional form of this boundary which has a nontrivial fractal structure. We further observe that the majority of the gene regulatory functions found in known biological networks (submitted to the Cell Collective database) lie on the line of minimum sensitivity which paradoxically remains largely in the unstable regime. Our results provide a quantitative basis for the argument that an evolutionary preference for nested canalizing functions in gene regulation (e.g., for higher robustness) and for elasticity of gene activity are sufficient for concentration of such systems near the "edge of chaos." The original structure of gene regulatory networks is unknown due to the undiscovered functions of some genes. Most gene function discovery approaches make use of unsupervised clustering or classification methods that discover and exploit patterns in gene expression profiles. However, existing knowledge in the field derives from multiple and diverse sources. Incorporating this know-how for novel gene function prediction can, therefore, be expected to improve such predictions. We here propose a function-specific novel gene discovery tool that uses a semi-supervised autoencoder. Our method is thus able to address the needs of a modern researcher whose expertise is typically confined to a specific functional domain. Lastly, the dynamics of unorthodox learning approaches like biologically plausible learning algorithms are investigated and found to exhibit a general form of Einstein relation. |
1611.05150 | Chi Keung Chan | Yu-Ting Huang, Yu-Lin Chang, Chun-Chung Chen, Pik-Yin Lai, C. K. Chan | Positive Feedback and Synchronized Bursts in Neuronal Cultures | 12 pages, 8 figures | null | 10.1371/journal.pone.0187276 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synchronized bursts (SBs) with complex structures are common in neuronal
cultures. Although the origin of SBs is still unclear, they have been studied
for their information processing capabilities. Here, we investigate the
properties of these SBs in a culture on a multi-electrode array system. We find
that structures of these SBs are related to the different developmental stages
of the cultures. A model based on short term synaptic plasticity, recurrent
connections and astrocytic recycling of neurotransmitters has been developed
successfully to understand these structures. A phase diagram obtained from this
model shows that networks exhibiting SBs are in an oscillatory state due to
large enough positive feedback provided by synaptic facilitation and recurrent
connections. In this model, the structures of the SBs are the results of
intrinsic synaptic interactions; not information stored in the network.
| [
{
"created": "Wed, 16 Nov 2016 05:25:35 GMT",
"version": "v1"
}
] | 2018-02-07 | [
[
"Huang",
"Yu-Ting",
""
],
[
"Chang",
"Yu-Lin",
""
],
[
"Chen",
"Chun-Chung",
""
],
[
"Lai",
"Pik-Yin",
""
],
[
"Chan",
"C. K.",
""
]
] | Synchronized bursts (SBs) with complex structures are common in neuronal cultures. Although the origin of SBs is still unclear, they have been studied for their information processing capabilities. Here, we investigate the properties of these SBs in a culture on a multi-electrode array system. We find that structures of these SBs are related to the different developmental stages of the cultures. A model based on short term synaptic plasticity, recurrent connections and astrocytic recycling of neurotransmitters has been developed successfully to understand these structures. A phase diagram obtained from this model shows that networks exhibiting SBs are in an oscillatory state due to large enough positive feedback provided by synaptic facilitation and recurrent connections. In this model, the structures of the SBs are the results of intrinsic synaptic interactions; not information stored in the network. |
1806.07218 | Nele Vandersickel | Nele Vandersickel, Masaya Watanabe, Qian Tao, Jan Fostier, Katja
Zeppenfeld, Alexander V Panfilov | Dynamical anchoring of distant Arrhythmia Sources by Fibrotic Regions
via Restructuring of the Activation Pattern | 16 pages, 7 figures | null | 10.1371/journal.pcbi.1006637 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rotors are functional reentry sources identified in clinically relevant
cardiac arrhythmias, such as ventricular and atrial fibrillation. Ablation
targeting rotor sites has resulted in arrhythmia termination. Recent clinical,
experimental and modelling studies demonstrate that rotors are often anchored
around fibrotic scars or regions with increased fibrosis. However the
mechanisms leading to abundance of rotors at these locations are not clear. The
current study explores the hypothesis whether fibrotic scars just serve as
anchoring sites for the rotors or whether there are other active processes
which drive the rotors to these fibrotic regions. Rotors were induced at
different distances from fibrotic scars of various sizes and degree of
fibrosis. Simulations were performed in a 2D model of human ventricular tissue
and in a patient-specific model of the left ventricle of a patient with remote
myocardial infarction. In both the 2D and the patient-specific model we found
that without fibrotic scars, the rotors were stable at the site of their
initiation. However, in the presence of a scar, rotors were eventually
dynamically anchored from large distances by the fibrotic scar via a process of
dynamical reorganization of the excitation pattern. This process coalesces with
a change from polymorphic to monomorphic ventricular tachycardia.
| [
{
"created": "Tue, 19 Jun 2018 13:38:18 GMT",
"version": "v1"
}
] | 2019-03-06 | [
[
"Vandersickel",
"Nele",
""
],
[
"Watanabe",
"Masaya",
""
],
[
"Tao",
"Qian",
""
],
[
"Fostier",
"Jan",
""
],
[
"Zeppenfeld",
"Katja",
""
],
[
"Panfilov",
"Alexander V",
""
]
] | Rotors are functional reentry sources identified in clinically relevant cardiac arrhythmias, such as ventricular and atrial fibrillation. Ablation targeting rotor sites has resulted in arrhythmia termination. Recent clinical, experimental and modelling studies demonstrate that rotors are often anchored around fibrotic scars or regions with increased fibrosis. However the mechanisms leading to abundance of rotors at these locations are not clear. The current study explores the hypothesis whether fibrotic scars just serve as anchoring sites for the rotors or whether there are other active processes which drive the rotors to these fibrotic regions. Rotors were induced at different distances from fibrotic scars of various sizes and degree of fibrosis. Simulations were performed in a 2D model of human ventricular tissue and in a patient-specific model of the left ventricle of a patient with remote myocardial infarction. In both the 2D and the patient-specific model we found that without fibrotic scars, the rotors were stable at the site of their initiation. However, in the presence of a scar, rotors were eventually dynamically anchored from large distances by the fibrotic scar via a process of dynamical reorganization of the excitation pattern. This process coalesces with a change from polymorphic to monomorphic ventricular tachycardia. |
2301.06638 | Yixiang Wu | Shanshan Chen, Jie Liu, Yixiang Wu | Evolution of dispersal in advective patchy environments with varying
drift rates | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | In this paper, we study a Lotka-Volterra competition patch
model for two stream species, with the patches aligned along a line. The two species are supposed to be
identical except for the diffusion rates. For each species, the diffusion rates
between patches are the same, while the drift rates vary. Our results show that
the convexity of the drift rates has a significant impact on the competition
outcomes: if the drift rates are convex, then the species with larger diffusion
rate wins the competition; if the drift rates are concave, then the species
with smaller diffusion rate wins the competition.
| [
{
"created": "Mon, 16 Jan 2023 23:44:50 GMT",
"version": "v1"
}
] | 2023-01-18 | [
[
"Chen",
"Shanshan",
""
],
[
"Liu",
"Jie",
""
],
[
"Wu",
"Yixiang",
""
]
] | In this paper, we study a Lotka-Volterra competition patch model for two stream species, with the patches aligned along a line. The two species are supposed to be identical except for the diffusion rates. For each species, the diffusion rates between patches are the same, while the drift rates vary. Our results show that the convexity of the drift rates has a significant impact on the competition outcomes: if the drift rates are convex, then the species with larger diffusion rate wins the competition; if the drift rates are concave, then the species with smaller diffusion rate wins the competition. |
1505.05328 | Sriganesh Srihari Dr | Sriganesh Srihari, Chern Han Yong, Ashwini Patil and Limsoon Wong | Methods for protein complex prediction and their contributions towards
understanding the organization, function and dynamics of complexes | 1 Table | null | 10.1016/j.febslet.2015.04.026 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Complexes of physically interacting proteins constitute fundamental
functional units responsible for driving biological processes within cells. A
faithful reconstruction of the entire set of complexes is therefore essential
to understand the functional organization of cells. In this review, we discuss
the key contributions of computational methods developed to date
(approximately between 2003 and 2015) for identifying complexes from the
network of interacting proteins (PPI network). We evaluate in depth the
performance of these methods on PPI datasets from yeast, and highlight
challenges faced by these methods, in particular detection of sparse and small
or sub-complexes and discerning of overlapping complexes. We describe methods
for integrating diverse information including expression profiles and 3D
structures of proteins with PPI networks to understand the dynamics of complex
formation, for instance, of time-based assembly of complex subunits and
formation of fuzzy complexes from intrinsically disordered proteins. Finally,
we discuss methods for identifying dysfunctional complexes in human diseases,
an application that is proving invaluable to understand disease mechanisms and
to discover novel therapeutic targets. We hope this review aptly commemorates a
decade of research on computational prediction of complexes and constitutes a
valuable reference for further advancements in this exciting area.
| [
{
"created": "Wed, 20 May 2015 11:45:19 GMT",
"version": "v1"
}
] | 2015-05-21 | [
[
"Srihari",
"Sriganesh",
""
],
[
"Yong",
"Chern Han",
""
],
[
"Patil",
"Ashwini",
""
],
[
"Wong",
"Limsoon",
""
]
] | Complexes of physically interacting proteins constitute fundamental functional units responsible for driving biological processes within cells. A faithful reconstruction of the entire set of complexes is therefore essential to understand the functional organization of cells. In this review, we discuss the key contributions of computational methods developed to date (approximately between 2003 and 2015) for identifying complexes from the network of interacting proteins (PPI network). We evaluate in depth the performance of these methods on PPI datasets from yeast, and highlight challenges faced by these methods, in particular detection of sparse and small or sub-complexes and discerning of overlapping complexes. We describe methods for integrating diverse information including expression profiles and 3D structures of proteins with PPI networks to understand the dynamics of complex formation, for instance, of time-based assembly of complex subunits and formation of fuzzy complexes from intrinsically disordered proteins. Finally, we discuss methods for identifying dysfunctional complexes in human diseases, an application that is proving invaluable to understand disease mechanisms and to discover novel therapeutic targets. We hope this review aptly commemorates a decade of research on computational prediction of complexes and constitutes a valuable reference for further advancements in this exciting area. |
2105.09010 | Mar\'ia Vallet-Regi | Carlotta Pontremoli, Isabel Izquierdo-Barba, Giorgia Montalbano, Maria
Vallet-Regi, Chiara Vitale-Brovarone, Sonia Fiorilli | Strontium-releasing mesoporous bioactive glasses with anti-adhesive
zwitterionic surface as advanced biomaterials for bone tissue regeneration | 26 pages, 8 figures | Journal of Colloid and Interface Science 563, 92-103 (2020) | 10.1016/j.jcis.2019.12.047 | null | q-bio.TO physics.bio-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Hypothesis: The treatment of bone fractures still represents a challenging
clinical issue when complications due to impaired bone remodelling (i.e.
osteoporosis) or infections occur. These clinical needs still require a radical
improvement of the existing therapeutic approach through the design of advanced
biomaterials combining the ability to promote bone regeneration with
anti-fouling/anti-adhesive properties able to minimise unspecific biomolecules
adsorption and bacterial adhesion. Strontium-containing mesoporous bioactive
glasses (Sr-MBG), able to exert a pro-osteogenic effect by releasing Sr2+ ions,
have been successfully functionalised to provide mixed-charge surface groups
with low-fouling abilities. Experiments: Sr-MBG have been post-synthesis
modified by co-grafting hydrolysable short chain silanes containing amino
(aminopropylsilanetriol) and carboxylate (carboxyethylsilanetriol) moieties to
achieve a zwitterionic zero-charge surface and then characterised in terms of
textural-structural properties, bioactivity, cytotoxicity, pro-osteogenic and
low-fouling capabilities. Findings: After zwitterionization the in vitro
bioactivity is maintained, as well as the ability to release Sr2+ ions capable
to induce a mineralization process. Irrespective of their size, Sr-MBG
particles did not exhibit any cytotoxicity in pre-osteoblastic MC3T3-E1 up to
the concentration of 75 ug/mL. Finally, the zwitterionic Sr-MBGs show a
significant reduction of serum protein adhesion with respect to pristine ones.
These results open promising future expectations in the design of nanosystems
combining pro-osteogenic and anti-adhesive properties.
| [
{
"created": "Wed, 19 May 2021 09:24:29 GMT",
"version": "v1"
}
] | 2021-05-20 | [
[
"Pontremoli",
"Carlotta",
""
],
[
"Izquierdo-Barba",
"Isabel",
""
],
[
"Montalbano",
"Giorgia",
""
],
[
"Vallet-Regi",
"Maria",
""
],
[
"Vitale-Brovarone",
"Chiara",
""
],
[
"Fiorilli",
"Sonia",
""
]
] | Hypothesis: The treatment of bone fractures still represents a challenging clinical issue when complications due to impaired bone remodelling (i.e. osteoporosis) or infections occur. These clinical needs still require a radical improvement of the existing therapeutic approach through the design of advanced biomaterials combining the ability to promote bone regeneration with anti-fouling/anti-adhesive properties able to minimise unspecific biomolecules adsorption and bacterial adhesion. Strontium-containing mesoporous bioactive glasses (Sr-MBG), able to exert a pro-osteogenic effect by releasing Sr2+ ions, have been successfully functionalised to provide mixed-charge surface groups with low-fouling abilities. Experiments: Sr-MBG have been post-synthesis modified by co-grafting hydrolysable short chain silanes containing amino (aminopropylsilanetriol) and carboxylate (carboxyethylsilanetriol) moieties to achieve a zwitterionic zero-charge surface and then characterised in terms of textural-structural properties, bioactivity, cytotoxicity, pro-osteogenic and low-fouling capabilities. Findings: After zwitterionization the in vitro bioactivity is maintained, as well as the ability to release Sr2+ ions capable to induce a mineralization process. Irrespective of their size, Sr-MBG particles did not exhibit any cytotoxicity in pre-osteoblastic MC3T3-E1 up to the concentration of 75 ug/mL. Finally, the zwitterionic Sr-MBGs show a significant reduction of serum protein adhesion with respect to pristine ones. These results open promising future expectations in the design of nanosystems combining pro-osteogenic and anti-adhesive properties. |
1203.5863 | Jonas Cremer | Jonas Cremer and Anna Melbinger and Erwin Frey | Growth dynamics and the evolution of cooperation in microbial
populations | 26 pages, 6 figures | Scientific Reports 2,281 (2012) | 10.1038/srep00281 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Microbes providing public goods are widespread in nature despite running the
risk of being exploited by free-riders. However, the precise ecological factors
supporting cooperation are still puzzling. Following recent experiments, we
consider the role of population growth and the repetitive fragmentation of
populations into new colonies mimicking simple microbial life-cycles.
Individual-based modeling reveals that demographic fluctuations, which lead to
a large variance in the composition of colonies, promote cooperation. Biased by
population dynamics these fluctuations result in two qualitatively distinct
regimes of robust cooperation under repetitive fragmentation into groups.
First, if the level of cooperation exceeds a threshold, cooperators will take
over the whole population. Second, cooperators can also emerge from a single
mutant leading to a robust coexistence between cooperators and free-riders. We
find frequency and size of population bottlenecks, and growth dynamics to be
the major ecological factors determining the regimes and thereby the
evolutionary pathway towards cooperation.
| [
{
"created": "Tue, 27 Mar 2012 04:00:36 GMT",
"version": "v1"
}
] | 2012-03-28 | [
[
"Cremer",
"Jonas",
""
],
[
"Melbinger",
"Anna",
""
],
[
"Frey",
"Erwin",
""
]
] | Microbes providing public goods are widespread in nature despite running the risk of being exploited by free-riders. However, the precise ecological factors supporting cooperation are still puzzling. Following recent experiments, we consider the role of population growth and the repetitive fragmentation of populations into new colonies mimicking simple microbial life-cycles. Individual-based modeling reveals that demographic fluctuations, which lead to a large variance in the composition of colonies, promote cooperation. Biased by population dynamics these fluctuations result in two qualitatively distinct regimes of robust cooperation under repetitive fragmentation into groups. First, if the level of cooperation exceeds a threshold, cooperators will take over the whole population. Second, cooperators can also emerge from a single mutant leading to a robust coexistence between cooperators and free-riders. We find frequency and size of population bottlenecks, and growth dynamics to be the major ecological factors determining the regimes and thereby the evolutionary pathway towards cooperation. |
1705.09392 | Tomislav Plesa Mr | Tomislav Plesa, Konstantinos C. Zygalakis, David F. Anderson, Radek
Erban | Noise Control for DNA Computing | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthetic biology is a growing interdisciplinary field, with far-reaching
applications, which aims to design biochemical systems that behave in a desired
manner. With the advancement of strand-displacement DNA computing, a large
class of abstract biochemical networks may be physically realized using DNA
molecules. Methods for systematic design of the abstract systems with
prescribed behaviors have been predominantly developed at the (less-detailed)
deterministic level. However, stochastic effects, neglected at the
deterministic level, are increasingly found to play an important role in
biochemistry. In such circumstances, methods for controlling the intrinsic
noise in the system are necessary for a successful network design at the
(more-detailed) stochastic level. To bridge the gap, the noise-control
algorithm for designing biochemical networks is developed in this paper. The
algorithm structurally modifies any given reaction network under mass-action
kinetics, in such a way that (i) controllable state-dependent noise is
introduced into the stochastic dynamics, while (ii) the deterministic dynamics
are preserved. The capabilities of the algorithm are demonstrated on a
production-decay reaction system, and on an exotic system displaying
bistability. For the production-decay system, it is shown that the algorithm
may be used to redesign the network to achieve noise-induced multistability.
For the exotic system, the algorithm is used to redesign the network to control
the stochastic switching, and achieve noise-induced oscillations.
| [
{
"created": "Thu, 25 May 2017 23:01:46 GMT",
"version": "v1"
},
{
"created": "Mon, 29 May 2017 12:29:30 GMT",
"version": "v2"
},
{
"created": "Sat, 3 Jun 2017 00:15:15 GMT",
"version": "v3"
},
{
"created": "Sat, 17 Jun 2017 19:28:44 GMT",
"version": "v4"
},
{
"created": "Tue, 20 Jun 2017 16:40:08 GMT",
"version": "v5"
}
] | 2017-06-21 | [
[
"Plesa",
"Tomislav",
""
],
[
"Zygalakis",
"Konstantinos C.",
""
],
[
"Anderson",
"David F.",
""
],
[
"Erban",
"Radek",
""
]
] | Synthetic biology is a growing interdisciplinary field, with far-reaching applications, which aims to design biochemical systems that behave in a desired manner. With the advancement of strand-displacement DNA computing, a large class of abstract biochemical networks may be physically realized using DNA molecules. Methods for systematic design of the abstract systems with prescribed behaviors have been predominantly developed at the (less-detailed) deterministic level. However, stochastic effects, neglected at the deterministic level, are increasingly found to play an important role in biochemistry. In such circumstances, methods for controlling the intrinsic noise in the system are necessary for a successful network design at the (more-detailed) stochastic level. To bridge the gap, the noise-control algorithm for designing biochemical networks is developed in this paper. The algorithm structurally modifies any given reaction network under mass-action kinetics, in such a way that (i) controllable state-dependent noise is introduced into the stochastic dynamics, while (ii) the deterministic dynamics are preserved. The capabilities of the algorithm are demonstrated on a production-decay reaction system, and on an exotic system displaying bistability. For the production-decay system, it is shown that the algorithm may be used to redesign the network to achieve noise-induced multistability. For the exotic system, the algorithm is used to redesign the network to control the stochastic switching, and achieve noise-induced oscillations. |
1208.1652 | Gibran Manasseh | Gibran Manasseh, Chloe de Balthasar, Bruno Sanguinetti, Enrico
Pomarico, Nicolas Gisin, Rolando Grave de Peralta, Sara L. Gonzalez | Retinal and post-retinal contributions to the quantum efficiency of the
human eye | null | null | null | null | q-bio.NC quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The retina is one of the best known quantum detectors with rods able to
respond to a single photon. However, estimates on the number of photons
eliciting conscious perception, based on signal detection theory, are
systematically above these values. One possibility is that post-retinal
processing significantly contributes to the decrease in the quantum efficiency
determined by signal detection. We carried out experiments in humans using
controlled sources of light while recording EEG and reaction times. Half of the
participants behaved as noisy detectors reporting perception in trials where no
light was sent. DN subjects were significantly faster at making decisions.
Reaction times significantly increased with the decrease in the number of
photons. This trend was reflected in the latency and onset of the EEG responses
over frontal and parietal contacts where the first significant differences in
latency comparable to differences in reaction time appeared. Delays in latency
of neural responses across intensities were observed later over visual areas
suggesting that they are due to the time required to reach the decision
threshold in decision areas rather than to longer integration times at sensory
areas. Our results suggest that post-retinal processing contributes
significantly to increasing detection noise and thresholds, decreasing the
efficiency of the retina-brain detector system.
| [
{
"created": "Wed, 8 Aug 2012 12:40:19 GMT",
"version": "v1"
}
] | 2012-08-09 | [
[
"Manasseh",
"Gibran",
""
],
[
"de Balthasar",
"Chloe",
""
],
[
"Sanguinetti",
"Bruno",
""
],
[
"Pomarico",
"Enrico",
""
],
[
"Gisin",
"Nicolas",
""
],
[
"de Peralta",
"Rolando Grave",
""
],
[
"Gonzalez",
"Sara L.",
""
]
] | The retina is one of the best known quantum detectors with rods able to respond to a single photon. However, estimates on the number of photons eliciting conscious perception, based on signal detection theory, are systematically above these values. One possibility is that post-retinal processing significantly contributes to the decrease in the quantum efficiency determined by signal detection. We carried out experiments in humans using controlled sources of light while recording EEG and reaction times. Half of the participants behaved as noisy detectors reporting perception in trials where no light was sent. DN subjects were significantly faster at making decisions. Reaction times significantly increased with the decrease in the number of photons. This trend was reflected in the latency and onset of the EEG responses over frontal and parietal contacts where the first significant differences in latency comparable to differences in reaction time appeared. Delays in latency of neural responses across intensities were observed later over visual areas suggesting that they are due to the time required to reach the decision threshold in decision areas rather than to longer integration times at sensory areas. Our results suggest that post-retinal processing contributes significantly to increasing detection noise and thresholds, decreasing the efficiency of the retina-brain detector system. |
2404.16040 | Megan Witherow | Megan A. Witherow, Norou Diawara, Janice Keener, John W. Harrington,
and Khan M. Iftekharuddin | Pilot Study to Discover Candidate Biomarkers for Autism based on
Perception and Production of Facial Expressions | 18 pages, 3 figures, 5 tables | null | null | null | q-bio.NC cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Purpose: Facial expression production and perception in autism spectrum
disorder (ASD) suggest potential presence of behavioral biomarkers that may
stratify individuals on the spectrum into prognostic or treatment subgroups.
Construct validity and group discriminability have been recommended as criteria
for identification of candidate stratification biomarkers.
Methods: In an online pilot study of 11 children and young adults diagnosed
with ASD and 11 age- and gender-matched neurotypical (NT) individuals,
participants recognize and mimic static and dynamic facial expressions of 3D
avatars. Webcam-based eye-tracking (ET) and facial video tracking (VT),
including activation and asymmetry of action units (AUs) from the Facial Action
Coding System (FACS) are collected. We assess validity of constructs for each
dependent variable (DV) based on the expected response in the NT group. Then,
the Boruta statistical method identifies DVs that are significant to group
discriminability (ASD or NT).
Results: We identify one candidate ET biomarker (percentage gaze duration to
the face while mimicking static 'disgust' expression) and 14 additional DVs of
interest for future study, including 4 ET DVs, 5 DVs related to VT AU
activation, and 4 DVs related to AU asymmetry in VT. Based on a power analysis,
we provide sample size recommendations for future studies.
Conclusion: This pilot study provides a framework for ASD stratification
biomarker discovery based on perception and production of facial expressions.
| [
{
"created": "Wed, 27 Mar 2024 01:43:50 GMT",
"version": "v1"
}
] | 2024-04-26 | [
[
"Witherow",
"Megan A.",
""
],
[
"Diawara",
"Norou",
""
],
[
"Keener",
"Janice",
""
],
[
"Harrington",
"John W.",
""
],
[
"Iftekharuddin",
"Khan M.",
""
]
] | Purpose: Facial expression production and perception in autism spectrum disorder (ASD) suggest potential presence of behavioral biomarkers that may stratify individuals on the spectrum into prognostic or treatment subgroups. Construct validity and group discriminability have been recommended as criteria for identification of candidate stratification biomarkers. Methods: In an online pilot study of 11 children and young adults diagnosed with ASD and 11 age- and gender-matched neurotypical (NT) individuals, participants recognize and mimic static and dynamic facial expressions of 3D avatars. Webcam-based eye-tracking (ET) and facial video tracking (VT), including activation and asymmetry of action units (AUs) from the Facial Action Coding System (FACS) are collected. We assess validity of constructs for each dependent variable (DV) based on the expected response in the NT group. Then, the Boruta statistical method identifies DVs that are significant to group discriminability (ASD or NT). Results: We identify one candidate ET biomarker (percentage gaze duration to the face while mimicking static 'disgust' expression) and 14 additional DVs of interest for future study, including 4 ET DVs, 5 DVs related to VT AU activation, and 4 DVs related to AU asymmetry in VT. Based on a power analysis, we provide sample size recommendations for future studies. Conclusion: This pilot study provides a framework for ASD stratification biomarker discovery based on perception and production of facial expressions. |
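The record above applies the Boruta method to find dependent variables that discriminate the ASD and NT groups. A minimal sketch of the Boruta shadow-feature idea follows; the importance measure here is |Pearson correlation| with the outcome purely for illustration (Boruta proper uses random-forest importances), and the data are synthetic, not the study's:

```python
import random

rng = random.Random(1)
y = [i % 2 for i in range(100)]
# Toy data: feature 0 tracks the label, feature 1 is pure noise.
X = [[yi + rng.gauss(0.0, 0.3), rng.gauss(0.0, 1.0)] for yi in y]

def importance(col, y):
    """Importance proxy: |Pearson correlation| with the outcome.
    (A stand-in for the random-forest importances Boruta uses.)"""
    n = len(y)
    mc, my = sum(col) / n, sum(y) / n
    cov = sum((c - mc) * (t - my) for c, t in zip(col, y))
    var_c = sum((c - mc) ** 2 for c in col)
    var_y = sum((t - my) ** 2 for t in y)
    return abs(cov) / (((var_c * var_y) ** 0.5) or 1e-12)

def boruta_hits(X, y, n_trials=50, seed=0):
    """Core of the Boruta idea: a real feature scores a 'hit' whenever its
    importance beats the best among shuffled 'shadow' copies of all features."""
    srng = random.Random(seed)
    p = len(X[0])
    cols = [[row[j] for row in X] for j in range(p)]
    real = [importance(c, y) for c in cols]
    hits = [0] * p
    for _ in range(n_trials):
        shadows = []
        for c in cols:
            s = c[:]
            srng.shuffle(s)                 # shuffling destroys any real signal
            shadows.append(importance(s, y))
        best_shadow = max(shadows)
        for j in range(p):
            if real[j] > best_shadow:
                hits[j] += 1
    return hits

hits = boruta_hits(X, y)
# The informative feature beats the shadows in nearly every trial;
# the noise feature rarely does.
```

Features with significantly more hits than chance would allow are accepted; in the study this is how the candidate biomarkers and DVs of interest were flagged.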
1805.04619 | Stevan Harnad | Fernanda P\'erez-Gay, Tomy Sicotte, Christian Th\'eriault, Stevan
Harnad | Category learning can alter perception and its neural correlates | 40 pages, 15 figures, 8 tables | PLOS ONE 14(12): e0226000 (2019) | 10.1371/journal.pone.0226000 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learned Categorical Perception (CP) occurs when the members of different
categories come to look more dissimilar (between-category separation) and/or
members of the same category come to look more similar (within-category
compression) after a new category has been learned. To measure learned CP and
its physiological correlates we compared dissimilarity judgments and Event
Related Potentials (ERPs) before and after learning to sort multi-featured
visual textures into two categories by trial and error with corrective
feedback. With the same number of trials and feedback, about half the
participants succeeded in learning the categories (learners: criterion 80%
accuracy) and the rest did not (non-learners). At both lower and higher levels
of difficulty, successful learners showed significant between-category
separation in pairwise dissimilarity judgments after learning compared to
before; their late parietal ERP positivity (LPC, usually interpreted as
decisional) also increased and their occipital negativity (N1) (usually
interpreted as perceptual) decreased. LPC increased with response accuracy and
N1 amplitude decreased with between-category separation for the Learners.
Non-learners showed no significant changes in dissimilarity judgments, LPC or
N1, within or between categories. This is behavioral and physiological evidence
that category learning can alter perception. We sketch a neural net model for
this effect.
| [
{
"created": "Fri, 11 May 2018 23:44:01 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Jan 2019 22:20:51 GMT",
"version": "v2"
},
{
"created": "Tue, 10 Dec 2019 22:54:05 GMT",
"version": "v3"
}
] | 2019-12-12 | [
[
"Pérez-Gay",
"Fernanda",
""
],
[
"Sicotte",
"Tomy",
""
],
[
"Thériault",
"Christian",
""
],
[
"Harnad",
"Stevan",
""
]
] | Learned Categorical Perception (CP) occurs when the members of different categories come to look more dissimilar (between-category separation) and/or members of the same category come to look more similar (within-category compression) after a new category has been learned. To measure learned CP and its physiological correlates we compared dissimilarity judgments and Event Related Potentials (ERPs) before and after learning to sort multi-featured visual textures into two categories by trial and error with corrective feedback. With the same number of trials and feedback, about half the participants succeeded in learning the categories (learners: criterion 80% accuracy) and the rest did not (non-learners). At both lower and higher levels of difficulty, successful learners showed significant between-category separation in pairwise dissimilarity judgments after learning compared to before; their late parietal ERP positivity (LPC, usually interpreted as decisional) also increased and their occipital negativity (N1) (usually interpreted as perceptual) decreased. LPC increased with response accuracy and N1 amplitude decreased with between-category separation for the Learners. Non-learners showed no significant changes in dissimilarity judgments, LPC or N1, within or between categories. This is behavioral and physiological evidence that category learning can alter perception. We sketch a neural net model for this effect. |
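The record above quantifies learned CP as between-category separation and within-category compression of pairwise dissimilarity judgments. A minimal sketch of those two indices on toy judgments (the numbers are illustrative, not the study's data):

```python
from itertools import combinations

def cp_indices(dissim, labels):
    """Mean pairwise dissimilarity within and between two categories.

    dissim -- symmetric matrix of pairwise dissimilarity judgments
    labels -- category label (0 or 1) for each stimulus
    """
    within, between = [], []
    for i, j in combinations(range(len(labels)), 2):
        (within if labels[i] == labels[j] else between).append(dissim[i][j])
    return sum(within) / len(within), sum(between) / len(between)

# Toy judgments for 4 stimuli (two per category), before and after learning:
labels = [0, 0, 1, 1]
before = [[0, 4, 5, 5],
          [4, 0, 5, 5],
          [5, 5, 0, 4],
          [5, 5, 4, 0]]
after  = [[0, 3, 7, 7],
          [3, 0, 7, 7],
          [7, 7, 0, 3],
          [7, 7, 3, 0]]

w0, b0 = cp_indices(before, labels)
w1, b1 = cp_indices(after, labels)
separation  = b1 - b0   # between-category pairs look MORE dissimilar: +2
compression = w0 - w1   # within-category pairs look LESS dissimilar: +1
```

A positive separation and/or compression after training, as the learners (but not non-learners) showed, is the behavioral signature of learned CP.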
1309.4952 | Akira Kinjo | Ken Nishikawa and Akira R. Kinjo | Cooperation between genetic mutations and phenotypic plasticity can
bypass the Weismann barrier: The cooperative model of evolution | 23 pages, 1 figure | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Weismann barrier, or the impossibility of inheritance of acquired traits,
comprises a foundation of modern biology, and it has been a major obstacle in
establishing the connection between evolution and ontogenesis. We propose the
cooperative model based on the assumption that evolution is achieved by a
cooperation between genetic mutations and acquired changes (phenotypic
plasticity). It is also assumed in this model that natural selection operates
on phenotypes, rather than genotypes, of individuals, and that the relationship
between phenotypes and genotypes is one-to-many. In the simulations based on
these assumptions, individuals exhibited phenotypic changes in response to an
environmental change, the corresponding multiple genetic mutations
increasingly accumulated in individuals in the population, and phenotypic
plasticity was gradually replaced with genetic mutations. This result suggests
that Lamarck's law of use and disuse can effectively hold without conflicting
with the Weismann barrier, and thus evolution can be logically connected with
ontogenesis.
| [
{
"created": "Thu, 19 Sep 2013 12:24:56 GMT",
"version": "v1"
}
] | 2013-09-20 | [
[
"Nishikawa",
"Ken",
""
],
[
"Kinjo",
"Akira R.",
""
]
] | The Weismann barrier, or the impossibility of inheritance of acquired traits, comprises a foundation of modern biology, and it has been a major obstacle in establishing the connection between evolution and ontogenesis. We propose the cooperative model based on the assumption that evolution is achieved by a cooperation between genetic mutations and acquired changes (phenotypic plasticity). It is also assumed in this model that natural selection operates on phenotypes, rather than genotypes, of individuals, and that the relationship between phenotypes and genotypes is one-to-many. In the simulations based on these assumptions, individuals exhibited phenotypic changes in response to an environmental change, the corresponding multiple genetic mutations increasingly accumulated in individuals in the population, and phenotypic plasticity was gradually replaced with genetic mutations. This result suggests that Lamarck's law of use and disuse can effectively hold without conflicting with the Weismann barrier, and thus evolution can be logically connected with ontogenesis. |
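The record above reports simulations in which selection on plastic phenotypes lets mutations gradually take over the adaptive role of plasticity. A hedged toy model of that genetic-assimilation dynamic (the model structure, parameters, and selection scheme here are illustrative assumptions, not the paper's actual simulation):

```python
import random

def assimilation(generations=200, n=200, plasticity=0.5, seed=0):
    """Toy genetic-assimilation model: phenotype = genetic value plus a
    plastic shift toward the new optimum; selection acts on the phenotype,
    but only the genetic value (with mutation) is inherited, so acquired
    changes never cross the Weismann barrier directly."""
    rng = random.Random(seed)
    target = 1.0
    pop = [0.0] * n                     # genetic values, initially unadapted

    def pheno(g):
        return g + plasticity * (target - g)   # acquired (plastic) response

    for _ in range(generations):
        # Truncation selection on the PHENOTYPE: keep the half closest
        # to the new optimum, then reproduce with small mutations.
        parents = sorted(pop, key=lambda g: abs(pheno(g) - target))[: n // 2]
        pop = [rng.choice(parents) + rng.gauss(0.0, 0.02) for _ in range(n)]
    return sum(pop) / n

mean_gene = assimilation()
# Genetic values drift toward the optimum, so the plastic response
# contributes less and less: mutations gradually replace plasticity.
```

Even though plasticity is never inherited, selection on the plastically shifted phenotype steers the heritable genetic values toward the optimum, which is the cooperative mechanism the abstract describes.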
1411.5179 | Constantino Antonio Garc\'ia | Constantino A. Garc\'ia, Abraham Otero, Xos\'e Vila, David G.
M\'arquez | A new algorithm for wavelet-based heart rate variability analysis | null | Biomedical Signal Processing and Control, Volume 8, Issue 6,
November 2013, Pages 542-550 | 10.1016/j.bspc.2013.05.006 | null | q-bio.QM physics.med-ph stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | One of the most promising non-invasive markers of the activity of the
autonomic nervous system is Heart Rate Variability (HRV). HRV analysis toolkits
often provide spectral analysis techniques using the Fourier transform, which
assumes that the heart rate series is stationary. To overcome this issue, the
Short Time Fourier Transform (STFT) is often used. However, the wavelet
transform is thought to be a more suitable tool for analyzing non-stationary
signals than the STFT. Given the lack of support for wavelet-based analysis in
HRV toolkits, such analysis must be implemented by the researcher. This has
made this technique underutilized. This paper presents a new algorithm to
perform HRV power spectrum analysis based on the Maximal Overlap Discrete
Wavelet Packet Transform (MODWPT). The algorithm calculates the power in any
spectral band with a given tolerance for the band's boundaries. The MODWPT
decomposition tree is pruned to avoid calculating unnecessary wavelet
coefficients, thereby optimizing execution time. The center of energy shift
correction is applied to achieve optimum alignment of the wavelet coefficients.
This algorithm has been implemented in RHRV, an open-source package for HRV
analysis. To the best of our knowledge, RHRV is the first HRV toolkit with
support for wavelet-based spectral analysis.
| [
{
"created": "Wed, 19 Nov 2014 11:11:45 GMT",
"version": "v1"
}
] | 2014-11-20 | [
[
"García",
"Constantino A.",
""
],
[
"Otero",
"Abraham",
""
],
[
"Vila",
"Xosé",
""
],
[
"Márquez",
"David G.",
""
]
] | One of the most promising non-invasive markers of the activity of the autonomic nervous system is Heart Rate Variability (HRV). HRV analysis toolkits often provide spectral analysis techniques using the Fourier transform, which assumes that the heart rate series is stationary. To overcome this issue, the Short Time Fourier Transform (STFT) is often used. However, the wavelet transform is thought to be a more suitable tool for analyzing non-stationary signals than the STFT. Given the lack of support for wavelet-based analysis in HRV toolkits, such analysis must be implemented by the researcher. This has made this technique underutilized. This paper presents a new algorithm to perform HRV power spectrum analysis based on the Maximal Overlap Discrete Wavelet Packet Transform (MODWPT). The algorithm calculates the power in any spectral band with a given tolerance for the band's boundaries. The MODWPT decomposition tree is pruned to avoid calculating unnecessary wavelet coefficients, thereby optimizing execution time. The center of energy shift correction is applied to achieve optimum alignment of the wavelet coefficients. This algorithm has been implemented in RHRV, an open-source package for HRV analysis. To the best of our knowledge, RHRV is the first HRV toolkit with support for wavelet-based spectral analysis. |
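The record above computes HRV power in arbitrary spectral bands via the MODWPT. As a hedged baseline only (a plain periodogram, i.e. the Fourier approach the paper improves on, not its wavelet algorithm), here is band-power estimation on a synthetic, evenly resampled RR series; sampling rate and band edges follow common HRV conventions:

```python
import numpy as np

def band_power(x, fs, band):
    """Periodogram power of `x` in [band[0], band[1]) Hz. A plain-FFT
    baseline; the paper's algorithm instead uses the MODWPT wavelet
    packet transform, which also handles non-stationary series."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    mask = (freqs >= band[0]) & (freqs < band[1])
    return float(psd[mask].sum() * (freqs[1] - freqs[0]))

# Synthetic, evenly resampled RR series at 4 Hz with a 0.25 Hz
# respiratory modulation: its power should land in the HF band.
fs = 4.0
t = np.arange(0, 300, 1.0 / fs)
rr = 0.8 + 0.03 * np.sin(2 * np.pi * 0.25 * t)
x = rr - rr.mean()
hf = band_power(x, fs, (0.15, 0.40))   # high-frequency band
lf = band_power(x, fs, (0.04, 0.15))   # low-frequency band
```

The 0.25 Hz modulation concentrates almost all power in the HF band; the wavelet-packet approach of the paper delivers the same band powers while tolerating non-stationarity.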
2303.12058 | Jane Ivy Coons | Jane Ivy Coons and Benjamin Hollering | Identifiability of the Rooted Tree Parameter under the
Cavender-Farris-Neyman Model with a Molecular Clock | 6 pages, 1 figure | null | null | null | q-bio.PE math.CO math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Identifiability of the discrete tree parameter is a key property for
phylogenetic models since it is necessary for statistically consistent
estimation of the tree from sequence data. Algebraic methods have proven to be
very effective at showing that tree and network parameters of phylogenetic
models are identifiable, especially when the underlying models are group-based.
However, since group-based models are time-reversible, only the unrooted tree
topology is identifiable and the location of the root is not. In this note we
show that the rooted tree parameter of the Cavender-Farris-Neyman Model with a
Molecular Clock is generically identifiable by using the invariants of the
model which were characterized by Coons and Sullivant.
| [
{
"created": "Tue, 21 Mar 2023 17:50:37 GMT",
"version": "v1"
}
] | 2023-03-22 | [
[
"Coons",
"Jane Ivy",
""
],
[
"Hollering",
"Benjamin",
""
]
] | Identifiability of the discrete tree parameter is a key property for phylogenetic models since it is necessary for statistically consistent estimation of the tree from sequence data. Algebraic methods have proven to be very effective at showing that tree and network parameters of phylogenetic models are identifiable, especially when the underlying models are group-based. However, since group-based models are time-reversible, only the unrooted tree topology is identifiable and the location of the root is not. In this note we show that the rooted tree parameter of the Cavender-Farris-Neyman Model with a Molecular Clock is generically identifiable by using the invariants of the model which were characterized by Coons and Sullivant. |
q-bio/0404017 | Natasa Przulj | Natasa Przulj, Derek G. Corneil, Igor Jurisica | Modeling Interactome: Scale-Free or Geometric? | 53 pages, 18 figures, 5 tables | null | null | null | q-bio.MN | null | Networks have been used to model many real-world phenomena to better
understand the phenomena and to guide experiments in order to predict their
behavior. Since incorrect models lead to incorrect predictions, it is vital to
have a correct model. As a result, new techniques and models for analyzing and
modeling real-world networks have recently been introduced. One example of
large and complex networks involves protein-protein interaction (PPI) networks.
We demonstrate that the currently popular scale-free model of PPI networks
fails to fit the data in several respects. We show that a random geometric
model provides a much more accurate model of the PPI data.
| [
{
"created": "Sat, 17 Apr 2004 02:36:10 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Przulj",
"Natasa",
""
],
[
"Corneil",
"Derek G.",
""
],
[
"Jurisica",
"Igor",
""
]
] | Networks have been used to model many real-world phenomena to better understand the phenomena and to guide experiments in order to predict their behavior. Since incorrect models lead to incorrect predictions, it is vital to have a correct model. As a result, new techniques and models for analyzing and modeling real-world networks have recently been introduced. One example of large and complex networks involves protein-protein interaction (PPI) networks. We demonstrate that the currently popular scale-free model of PPI networks fails to fit the data in several respects. We show that a random geometric model provides a much more accurate model of the PPI data. |
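The record above argues that a random geometric model fits PPI networks better than a scale-free one. A minimal sketch of generating that geometric null model follows; the node count and radius are illustrative, and the paper's actual comparison uses graphlet frequencies rather than the simple degree statistic computed here:

```python
import math
import random

def geometric_graph_degrees(n, radius, seed=0):
    """Random geometric graph: n points uniform in the unit square,
    with an edge between any two points closer than `radius`."""
    rng = random.Random(seed)
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    deg = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= radius:
                deg[i] += 1
                deg[j] += 1
    return deg

deg = geometric_graph_degrees(n=500, radius=0.1, seed=1)
mean_deg = sum(deg) / len(deg)
# Away from the border, the expected degree is (n-1) * pi * r^2 ~ 15.7;
# boundary effects pull the observed mean somewhat below that.
```

Unlike a scale-free graph, this model has a tightly concentrated (approximately Poisson) degree distribution, which is one respect in which the two null models make sharply different predictions about PPI data.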
2010.02346 | Alexander Kaiser | Alexander D. Kaiser, Rohan Shad, William Hiesinger, Alison L. Marsden | A Design-Based Model of the Aortic Valve for Fluid-Structure Interaction | null | null | 10.1007/s10237-021-01516-7 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a new method for modeling the mechanics of the aortic
valve, and simulates its interaction with blood. As much as possible, the model
construction is based on first principles, but such that the model is
consistent with experimental observations. We require that tension in the
leaflets must support a pressure, then derive a system of partial differential
equations governing its mechanical equilibrium. The solution to these
differential equations is referred to as the predicted loaded configuration; it
includes the loaded leaflet geometry, fiber orientations and tensions needed to
support the prescribed load. From this configuration, we derive a reference
configuration and constitutive law. In fluid-structure interaction simulations
with the immersed boundary method, the model seals reliably under physiological
pressures, and opens freely over multiple cardiac cycles. Further, model
closure is robust to extreme hypo- and hypertensive pressures. Then, exploiting
the unique features of this model construction, we conduct experiments on
reference configurations, constitutive laws, and gross morphology. These
experiments suggest the following conclusions, which are directly applicable to
the design of prosthetic aortic valves. (i) The loaded geometry, tensions and
tangent moduli primarily determine model function. (ii) Alterations to the
reference configuration have little effect if the predicted loaded
configuration is identical. (iii) The leaflets must have sufficiently nonlinear
material response to function over a variety of pressures. (iv) Valve
performance is highly sensitive to free edge length and leaflet height. For
future use, our aortic valve modeling framework offers flexibility in
patient-specific models of cardiac flow.
| [
{
"created": "Mon, 5 Oct 2020 21:43:59 GMT",
"version": "v1"
},
{
"created": "Sat, 4 Sep 2021 18:34:01 GMT",
"version": "v2"
}
] | 2021-10-15 | [
[
"Kaiser",
"Alexander D.",
""
],
[
"Shad",
"Rohan",
""
],
[
"Hiesinger",
"William",
""
],
[
"Marsden",
"Alison L.",
""
]
] | This paper presents a new method for modeling the mechanics of the aortic valve, and simulates its interaction with blood. As much as possible, the model construction is based on first principles, but such that the model is consistent with experimental observations. We require that tension in the leaflets must support a pressure, then derive a system of partial differential equations governing its mechanical equilibrium. The solution to these differential equations is referred to as the predicted loaded configuration; it includes the loaded leaflet geometry, fiber orientations and tensions needed to support the prescribed load. From this configuration, we derive a reference configuration and constitutive law. In fluid-structure interaction simulations with the immersed boundary method, the model seals reliably under physiological pressures, and opens freely over multiple cardiac cycles. Further, model closure is robust to extreme hypo- and hypertensive pressures. Then, exploiting the unique features of this model construction, we conduct experiments on reference configurations, constitutive laws, and gross morphology. These experiments suggest the following conclusions, which are directly applicable to the design of prosthetic aortic valves. (i) The loaded geometry, tensions and tangent moduli primarily determine model function. (ii) Alterations to the reference configuration have little effect if the predicted loaded configuration is identical. (iii) The leaflets must have sufficiently nonlinear material response to function over a variety of pressures. (iv) Valve performance is highly sensitive to free edge length and leaflet height. For future use, our aortic valve modeling framework offers flexibility in patient-specific models of cardiac flow. |
2103.06145 | Abhishek Singh | Abhishek Narain Singh | GraphBreak: Tool for Network Community based Regulatory Medicine, Gene
co-expression, Linkage Disequilibrium analysis, functional annotation and
more | null | null | null | null | q-bio.GN cs.AI cs.DC | http://creativecommons.org/licenses/by/4.0/ | Graph network science is becoming increasingly popular, notably from a
big-data perspective in which understanding individual entities and their
individual functional roles is complex and time consuming. When a set of
genes is regulated by a set of genetic variants, the gene set is likely
recruited for a common or related functional purpose. Grouping and extracting
communities from a network of associations therefore becomes critical to
understanding system complexity, prioritizing genes for disease and
functional associations, and reducing the workload compared with studying
entities one at a time. To this end, we present GraphBreak, a suite of tools
for community detection applications such as gene co-expression, protein
interaction, and regulatory networks. Although developed for the use case of
eQTL regulatory genomic network community study -- results are shown with our
analysis of sample eQTL data -- GraphBreak can be deployed for other studies
if the input data are supplied in the requisite format, including but not
limited to gene co-expression networks, protein-protein interaction networks,
signaling pathways, and metabolic networks. GraphBreak showed critical
use-case value in its downstream analysis of disease associations for the
communities detected. If all independent steps of community detection and
analysis are viewed as sub-parts of a single step-by-step procedure,
GraphBreak can be considered a new algorithm for community-based functional
characterization; the combination of various algorithmic implementation
modules into a single script illustrates GraphBreak's novelty. Compared to
other similar tools, GraphBreak can better detect communities whose member
genes are over-represented for statistical association with diseases, and
thus target genes that can be prioritized for drug positioning or drug
repositioning, as the case may be.
| [
{
"created": "Wed, 24 Feb 2021 15:16:38 GMT",
"version": "v1"
}
] | 2021-03-11 | [
[
"Singh",
"Abhishek Narain",
""
]
] | Graph network science is becoming increasingly popular, notably from a big-data perspective in which understanding individual entities and their individual functional roles is complex and time consuming. When a set of genes is regulated by a set of genetic variants, the gene set is likely recruited for a common or related functional purpose. Grouping and extracting communities from a network of associations therefore becomes critical to understanding system complexity, prioritizing genes for disease and functional associations, and reducing the workload compared with studying entities one at a time. To this end, we present GraphBreak, a suite of tools for community detection applications such as gene co-expression, protein interaction, and regulatory networks. Although developed for the use case of eQTL regulatory genomic network community study -- results are shown with our analysis of sample eQTL data -- GraphBreak can be deployed for other studies if the input data are supplied in the requisite format, including but not limited to gene co-expression networks, protein-protein interaction networks, signaling pathways, and metabolic networks. GraphBreak showed critical use-case value in its downstream analysis of disease associations for the communities detected. If all independent steps of community detection and analysis are viewed as sub-parts of a single step-by-step procedure, GraphBreak can be considered a new algorithm for community-based functional characterization; the combination of various algorithmic implementation modules into a single script illustrates GraphBreak's novelty. Compared to other similar tools, GraphBreak can better detect communities whose member genes are over-represented for statistical association with diseases, and thus target genes that can be prioritized for drug positioning or drug repositioning, as the case may be. |
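The record above is built around community detection in association networks. A minimal sketch of one standard community-detection technique (asynchronous label propagation; this is a generic illustration, not GraphBreak's specific pipeline) on a toy graph of two dense groups joined by a single bridge:

```python
import random
from collections import Counter

def label_propagation(adj, seed=0, max_sweeps=100):
    """Asynchronous label propagation: every node repeatedly adopts the
    most frequent label among its neighbours (keeping its own label on
    ties) until no label changes."""
    rng = random.Random(seed)
    labels = {v: v for v in adj}        # every node starts in its own community
    order = list(adj)
    for _ in range(max_sweeps):
        rng.shuffle(order)
        changed = False
        for v in order:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            tied = [l for l, c in counts.items() if c == best]
            if labels[v] not in tied:
                labels[v] = rng.choice(tied)
                changed = True
        if not changed:
            break
    return labels

# Two 4-cliques joined by a single bridge edge (3-4):
adj = {
    0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4},
    4: {3, 5, 6, 7}, 5: {4, 6, 7}, 6: {4, 5, 7}, 7: {4, 5, 6},
}
labels = label_propagation(adj)
```

At convergence each clique carries a single label, i.e. forms one community; in a GraphBreak-style workflow the member genes of each detected community would then be tested for over-representation in disease associations.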
1910.14098 | Sahar Hojjatinia | Sahar Hojjatinia, Constantino M. Lagoa | Comparison of Different Spike Sorting Subtechniques Based on Rat Brain
Basolateral Amygdala Neuronal Activity | 8 pages, 12 figures | null | null | null | q-bio.NC cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing electrophysiological recordings of brain neuronal activity and
their analysis provide a basis for exploring the structure of brain function
and nervous system investigation. The recorded signals are typically a
combination of spikes and noise. High amounts of background noise and the
possibility of recording electric signals from several neurons adjacent to
the recording site have led scientists to develop neuronal signal processing
tools such as spike sorting to facilitate brain data analysis. Spike sorting
plays a pivotal role in understanding the electrophysiological activity of
neuronal networks. This process prepares recorded data for interpretation of
neuron interactions and understanding the overall structure of brain
functions. Spike sorting consists of three steps: spike detection, feature
extraction, and spike clustering. There are several methods to implement each
of the spike sorting steps. This paper provides a systematic comparison of
various spike sorting sub-techniques applied to real extracellularly recorded
data from a rat brain basolateral amygdala. Efficiently sorted data resulting
from a careful choice of spike sorting sub-methods leads to better
interpretation of brain structure connectivity under different conditions,
which is a very sensitive matter in the diagnosis and treatment of
neurological disorders. Here, spike detection is performed by an appropriate
choice of threshold level via three different approaches. Feature extraction
is done through PCA and Kernel PCA methods, with Kernel PCA performing
better. We have applied four different algorithms for spike clustering
including K-means, Fuzzy C-means, Bayesian and Fuzzy maximum likelihood
estimation. As required by most clustering algorithms, the optimal number of
clusters is determined through validity indices for each method. Finally, the
sorting results are evaluated using inter-spike interval histograms.
| [
{
"created": "Sun, 27 Oct 2019 00:44:24 GMT",
"version": "v1"
}
] | 2019-11-01 | [
[
"Hojjatinia",
"Sahar",
""
],
[
"Lagoa",
"Constantino M.",
""
]
] | Developing electrophysiological recordings of brain neuronal activity and their analysis provide a basis for exploring the structure of brain function and nervous system investigation. The recorded signals are typically a combination of spikes and noise. High amounts of background noise and the possibility of recording electric signals from several neurons adjacent to the recording site have led scientists to develop neuronal signal processing tools such as spike sorting to facilitate brain data analysis. Spike sorting plays a pivotal role in understanding the electrophysiological activity of neuronal networks. This process prepares recorded data for interpretation of neuron interactions and understanding the overall structure of brain functions. Spike sorting consists of three steps: spike detection, feature extraction, and spike clustering. There are several methods to implement each of the spike sorting steps. This paper provides a systematic comparison of various spike sorting sub-techniques applied to real extracellularly recorded data from a rat brain basolateral amygdala. Efficiently sorted data resulting from a careful choice of spike sorting sub-methods leads to better interpretation of brain structure connectivity under different conditions, which is a very sensitive matter in the diagnosis and treatment of neurological disorders. Here, spike detection is performed by an appropriate choice of threshold level via three different approaches. Feature extraction is done through PCA and Kernel PCA methods, with Kernel PCA performing better. We have applied four different algorithms for spike clustering including K-means, Fuzzy C-means, Bayesian and Fuzzy maximum likelihood estimation. As required by most clustering algorithms, the optimal number of clusters is determined through validity indices for each method. Finally, the sorting results are evaluated using inter-spike interval histograms. |
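The record above compares spike-detection thresholds without specifying them in the abstract. As a hedged stand-in (a common median-based rule from the spike-sorting literature, not necessarily one of the paper's three approaches), here is amplitude-threshold detection on a synthetic trace:

```python
import random

def detect_spikes(signal, k=4.0, refractory=30):
    """Amplitude-threshold spike detection with a robust noise estimate:
    sigma ~ median(|x|) / 0.6745 (Gaussian MAD rule), threshold = k * sigma.
    A refractory gap (in samples) suppresses duplicate detections."""
    rectified = sorted(abs(v) for v in signal)
    sigma = rectified[len(rectified) // 2] / 0.6745
    thr = k * sigma
    spikes, last = [], -refractory
    for i, v in enumerate(signal):
        if abs(v) > thr and i - last >= refractory:
            spikes.append(i)
            last = i
    return spikes, thr

# Synthetic trace: unit-variance Gaussian background noise plus three
# large injected spikes.
rng = random.Random(42)
trace = [rng.gauss(0.0, 1.0) for _ in range(3000)]
for pos in (500, 1500, 2500):
    trace[pos] += 12.0
spike_idx, thr = detect_spikes(trace)
```

The median-based estimate keeps the threshold anchored to the background noise even when spikes contaminate the trace; detected waveforms would then feed the feature-extraction (PCA / Kernel PCA) and clustering stages the paper compares.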
2305.13254 | Rafael Mena-Yedra | Rafael Mena-Yedra, Juana L. Redondo, Horacio P\'erez-S\'anchez, Pilar
M. Ortigosa | ALMERIA: Boosting pairwise molecular contrasts with scalable methods | null | null | null | null | q-bio.BM cs.CE cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Searching for potential active compounds in large databases is a necessary
step to reduce time and costs in modern drug discovery pipelines. Such virtual
screening methods seek to provide predictions that allow the search space to be
narrowed down. Although cheminformatics has made great progress in exploiting
the potential of available big data, caution is needed to avoid introducing
bias and provide useful predictions with new compounds. In this work, we
propose the decision-support tool ALMERIA (Advanced Ligand Multiconformational
Exploration with Robust Interpretable Artificial Intelligence) for estimating
compound similarities and activity prediction based on pairwise molecular
contrasts while considering their conformation variability. The methodology
covers the entire pipeline from data preparation to model selection and
hyperparameter optimization. It has been implemented using scalable software
and methods to exploit large volumes of data -- in the order of several
terabytes -- , offering a very quick response even for a large batch of
queries. The implementation and experiments have been performed in a
distributed computer cluster using a benchmark, the public access DUD-E
database. In addition to cross-validation, detailed data split criteria have
been used to evaluate the models on different data partitions to assess their
true generalization ability with new compounds. Experiments show
state-of-the-art performance for molecular activity prediction (ROC AUC:
$0.99$, $0.96$, $0.87$), proving that the chosen data representation and
modeling have good properties to generalize. Molecular conformations --
prediction performance and sensitivity analysis -- have also been evaluated.
Finally, an interpretability analysis has been performed using the SHAP method.
| [
{
"created": "Fri, 28 Apr 2023 16:27:06 GMT",
"version": "v1"
}
] | 2023-05-23 | [
[
"Mena-Yedra",
"Rafael",
""
],
[
"Redondo",
"Juana L.",
""
],
[
"Pérez-Sánchez",
"Horacio",
""
],
[
"Ortigosa",
"Pilar M.",
""
]
] | Searching for potential active compounds in large databases is a necessary step to reduce time and costs in modern drug discovery pipelines. Such virtual screening methods seek to provide predictions that allow the search space to be narrowed down. Although cheminformatics has made great progress in exploiting the potential of available big data, caution is needed to avoid introducing bias and provide useful predictions with new compounds. In this work, we propose the decision-support tool ALMERIA (Advanced Ligand Multiconformational Exploration with Robust Interpretable Artificial Intelligence) for estimating compound similarities and activity prediction based on pairwise molecular contrasts while considering their conformation variability. The methodology covers the entire pipeline from data preparation to model selection and hyperparameter optimization. It has been implemented using scalable software and methods to exploit large volumes of data -- in the order of several terabytes -- , offering a very quick response even for a large batch of queries. The implementation and experiments have been performed in a distributed computer cluster using a benchmark, the public access DUD-E database. In addition to cross-validation, detailed data split criteria have been used to evaluate the models on different data partitions to assess their true generalization ability with new compounds. Experiments show state-of-the-art performance for molecular activity prediction (ROC AUC: $0.99$, $0.96$, $0.87$), proving that the chosen data representation and modeling have good properties to generalize. Molecular conformations -- prediction performance and sensitivity analysis -- have also been evaluated. Finally, an interpretability analysis has been performed using the SHAP method. |
1606.04875 | Jesse Greener | Francois Paquet-Mercier, Mazeyar Parvinzadeh Gashti, Julien
Bellavance, Seyed Mohammad Taghavi, Jesse Greener | Effect of NaCl on Pseudomonas biofilm viscosity by continuous,
non-intrusive microfluidic-based approach | Manuscript: 6 pages, 6 figures. Supporting information: 5 pages, 4
figures | null | null | null | q-bio.QM physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A method combining video imaging in parallel microchannels with a
semi-empirical mathematical model provides non-intrusive, high-throughput
measurements of time-varying biofilm viscosity. The approach is demonstrated
for early growth Pseudomonas sp. biofilms exposed to constant flow streams of
nutrient solutions with different ionic strengths. The ability to measure
viscosities at early growth stages, without inducing a shear-thickening
response, enabled measurements that are among the lowest reported to date. In
addition, good time resolution enabled the detection of a rapid thickening
phase, which occurred at different times after the exponential growth phase
finished, depending on the ionic strength. The technique opens the way for a
combinatorial approach to beter understand the complex dynamical response of
biofilm mechanical properties under well-controlled physical, chemical and
biological growth conditions and time-limited perturbations.
| [
{
"created": "Wed, 15 Jun 2016 17:36:41 GMT",
"version": "v1"
}
] | 2016-06-16 | [
[
"Paquet-Mercier",
"Francois",
""
],
[
"Gashti",
"Mazeyar Parvinzadeh",
""
],
[
"Bellavance",
"Julien",
""
],
[
"Taghavi",
"Seyed Mohammad",
""
],
[
"Greener",
"Jesse",
""
]
] | A method combining video imaging in parallel microchannels with a semi-empirical mathematical model provides non-intrusive, high-throughput measurements of time-varying biofilm viscosity. The approach is demonstrated for early growth Pseudomonas sp. biofilms exposed to constant flow streams of nutrient solutions with different ionic strengths. The ability to measure viscosities at early growth stages, without inducing a shear-thickening response, enabled measurements that are among the lowest reported to date. In addition, good time resolution enabled the detection of a rapid thickening phase, which occurred at different times after the exponential growth phase finished, depending on the ionic strength. The technique opens the way for a combinatorial approach to beter understand the complex dynamical response of biofilm mechanical properties under well-controlled physical, chemical and biological growth conditions and time-limited perturbations. |
1101.5814 | Jacopo Grilli | Jacopo Grilli, Bruno Bassetti, Sergei Maslov and Marco Cosentino
Lagomarsino | Joint scaling laws in functional and evolutionary categories in
prokaryotic genomes | 39 pages, 21 figures | Nucleic Acids Research (2012) 40 (2): 530-540 | 10.1093/nar/gkr711 | null | q-bio.GN q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose and study a class-expansion/innovation/loss model of genome
evolution taking into account biological roles of genes and their constituent
domains. In our model numbers of genes in different functional categories are
coupled to each other. For example, an increase in the number of metabolic
enzymes in a genome is usually accompanied by addition of new transcription
factors regulating these enzymes. Such coupling can be thought of as a
proportional "recipe" for genome composition of the type "a spoonful of sugar
for each egg yolk". The model jointly reproduces two known empirical laws: the
distribution of family sizes and the nonlinear scaling of the number of genes
in certain functional categories (e.g. transcription factors) with genome size.
In addition, it allows us to derive a novel relation between the exponents
characterising these two scaling laws, establishing a direct quantitative
connection between evolutionary and functional categories. It predicts that
functional categories that grow faster-than-linearly with genome size to be
characterised by flatter-than-average family size distributions. This relation
is confirmed by our bioinformatics analysis of prokaryotic genomes. This proves
that the joint quantitative trends of functional and evolutionary classes can
be understood in terms of evolutionary growth with proportional recipes.
| [
{
"created": "Sun, 30 Jan 2011 20:23:10 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Mar 2011 10:35:36 GMT",
"version": "v2"
},
{
"created": "Tue, 9 Aug 2011 19:11:58 GMT",
"version": "v3"
}
] | 2015-03-18 | [
[
"Grilli",
"Jacopo",
""
],
[
"Bassetti",
"Bruno",
""
],
[
"Maslov",
"Sergei",
""
],
[
"Lagomarsino",
"Marco Cosentino",
""
]
] | We propose and study a class-expansion/innovation/loss model of genome evolution taking into account biological roles of genes and their constituent domains. In our model numbers of genes in different functional categories are coupled to each other. For example, an increase in the number of metabolic enzymes in a genome is usually accompanied by addition of new transcription factors regulating these enzymes. Such coupling can be thought of as a proportional "recipe" for genome composition of the type "a spoonful of sugar for each egg yolk". The model jointly reproduces two known empirical laws: the distribution of family sizes and the nonlinear scaling of the number of genes in certain functional categories (e.g. transcription factors) with genome size. In addition, it allows us to derive a novel relation between the exponents characterising these two scaling laws, establishing a direct quantitative connection between evolutionary and functional categories. It predicts that functional categories that grow faster-than-linearly with genome size to be characterised by flatter-than-average family size distributions. This relation is confirmed by our bioinformatics analysis of prokaryotic genomes. This proves that the joint quantitative trends of functional and evolutionary classes can be understood in terms of evolutionary growth with proportional recipes. |
1902.10700 | Imon Banerjee | Imon Banerjee, Luis de Sisternes, Joelle Hallak, Theodore Leng, Aaron
Osborne, Mary Durbin, Daniel Rubin | A Deep-learning Approach for Prognosis of Age-Related Macular
Degeneration Disease using SD-OCT Imaging Biomarkers | null | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a hybrid sequential deep learning model to predict the risk of AMD
progression in non-exudative AMD eyes at multiple timepoints, starting from
short-term progression (3-months) up to long-term progression (21-months).
Proposed model combines radiomics and deep learning to handle challenges
related to imperfect ratio of OCT scan dimension and training cohort size. We
considered a retrospective clinical trial dataset that includes 671 fellow eyes
with 13,954 dry AMD observations for training and validating the machine
learning models on a 10-fold cross validation setting. The proposed RNN model
achieved high accuracy (0.96 AUCROC) for the prediction of both short term and
long-term AMD progression, and outperformed the traditional random forest model
trained. High accuracy achieved by the RNN establishes the ability to identify
AMD patients at risk of progressing to advanced AMD at an early stage which
could have a high clinical impact as it allows for optimal clinical follow-up,
with more frequent screening and potential earlier treatment for those patients
at high risk.
| [
{
"created": "Wed, 27 Feb 2019 06:16:12 GMT",
"version": "v1"
}
] | 2019-03-01 | [
[
"Banerjee",
"Imon",
""
],
[
"de Sisternes",
"Luis",
""
],
[
"Hallak",
"Joelle",
""
],
[
"Leng",
"Theodore",
""
],
[
"Osborne",
"Aaron",
""
],
[
"Durbin",
"Mary",
""
],
[
"Rubin",
"Daniel",
""
]
] | We propose a hybrid sequential deep learning model to predict the risk of AMD progression in non-exudative AMD eyes at multiple timepoints, starting from short-term progression (3-months) up to long-term progression (21-months). Proposed model combines radiomics and deep learning to handle challenges related to imperfect ratio of OCT scan dimension and training cohort size. We considered a retrospective clinical trial dataset that includes 671 fellow eyes with 13,954 dry AMD observations for training and validating the machine learning models on a 10-fold cross validation setting. The proposed RNN model achieved high accuracy (0.96 AUCROC) for the prediction of both short term and long-term AMD progression, and outperformed the traditional random forest model trained. High accuracy achieved by the RNN establishes the ability to identify AMD patients at risk of progressing to advanced AMD at an early stage which could have a high clinical impact as it allows for optimal clinical follow-up, with more frequent screening and potential earlier treatment for those patients at high risk. |
2107.04036 | Abicumaran Uthamacumaran | Abicumaran Uthamacumaran | Pattern Detection on Glioblastoma's Waddington landscape via Generative
Adversarial Networks | 15 pages, 3 figures | Cybernetics and Systems (2021) | 10.1080/01969722.2021.1982160 | https://doi.org/10.1080/01969722.2021.1982160 | q-bio.OT nlin.CD physics.bio-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Glioblastoma (GBM) is a highly morbid and lethal disease with poor prognosis.
Their emergent properties such as cellular heterogeneity, therapy resistance,
and self-renewal are largely attributed to the interactions between a subset of
their population known as glioblastoma-derived stem cells (GSCs) and their
microenvironment. Identifying causal patterns in the developmental trajectories
between GSCs and the mature, well-differentiated GBM phenotypes remains a
challenging problem in oncology. The paper presents a blueprint of complex
systems approaches to infer attractor dynamics from the single-cell gene
expression datasets of pediatric GBM and adult GSCs. These algorithms include
Waddington landscape reconstruction, Generative Adversarial Networks, and
fractal dimension analysis. Here I show, a Rossler-like strange attractor with
a fractal dimension of roughly 1.7 emerged in the GAN-reconstructed patterns of
all twelve patients. The findings suggest a strange attractor may be driving
the complex dynamics and adaptive behaviors of GBM in signaling state-space.
| [
{
"created": "Thu, 8 Jul 2021 17:49:52 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Jul 2021 18:56:20 GMT",
"version": "v2"
},
{
"created": "Thu, 23 Sep 2021 23:23:47 GMT",
"version": "v3"
}
] | 2021-10-11 | [
[
"Uthamacumaran",
"Abicumaran",
""
]
] | Glioblastoma (GBM) is a highly morbid and lethal disease with poor prognosis. Their emergent properties such as cellular heterogeneity, therapy resistance, and self-renewal are largely attributed to the interactions between a subset of their population known as glioblastoma-derived stem cells (GSCs) and their microenvironment. Identifying causal patterns in the developmental trajectories between GSCs and the mature, well-differentiated GBM phenotypes remains a challenging problem in oncology. The paper presents a blueprint of complex systems approaches to infer attractor dynamics from the single-cell gene expression datasets of pediatric GBM and adult GSCs. These algorithms include Waddington landscape reconstruction, Generative Adversarial Networks, and fractal dimension analysis. Here I show, a Rossler-like strange attractor with a fractal dimension of roughly 1.7 emerged in the GAN-reconstructed patterns of all twelve patients. The findings suggest a strange attractor may be driving the complex dynamics and adaptive behaviors of GBM in signaling state-space. |
1901.06139 | Sean Murray | Sean M. Murray and Martin Howard | Centre-finding in E. coli and the role of mathematical modelling: past,
present and future | 15 pages, 3 figures | Journal of Molecular Biology 2019 | null | null | q-bio.SC physics.bio-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We review the key role played by mathematical modelling in elucidating two
centre-finding patterning systems in E. coli: midcell division positioning by
the MinCDE system and DNA partitioning by the ParABS system. We focus
particularly on how, despite much experimental effort, these systems were
simply too complex to unravel by experiments alone, and instead required key
injections of quantitative, mathematical thinking. We conclude the review by
analysing the frequency of modelling approaches in microbiology over time. We
find that while such methods are increasing in popularity, they are still
probably heavily under-utilised for optimal progress on complex biological
questions.
| [
{
"created": "Fri, 18 Jan 2019 09:06:58 GMT",
"version": "v1"
}
] | 2019-01-21 | [
[
"Murray",
"Sean M.",
""
],
[
"Howard",
"Martin",
""
]
] | We review the key role played by mathematical modelling in elucidating two centre-finding patterning systems in E. coli: midcell division positioning by the MinCDE system and DNA partitioning by the ParABS system. We focus particularly on how, despite much experimental effort, these systems were simply too complex to unravel by experiments alone, and instead required key injections of quantitative, mathematical thinking. We conclude the review by analysing the frequency of modelling approaches in microbiology over time. We find that while such methods are increasing in popularity, they are still probably heavily under-utilised for optimal progress on complex biological questions. |
1811.00973 | Marek Cieplak | Karol Wolek and Marek Cieplak | Self-assembly of model proteins into virus capsids | 13 figures | J. Phys.:Cond. Matter 47, 474003 (2017) | 10.1088/1361-648X/aa9351 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider self-assembly of proteins into a virus capsid by the methods of
molecular dynamics. The capsid corresponds either to SPMV or CCMV and is
studied with and without the RNA molecule inside. The proteins are flexible and
described by the structure-based coarse-grained model augmented by
electrostatic interactions. Previous studies of the capsid self-assembly
involved solid objects of a supramolecular scale, e.g. corresponding to
capsomeres, with engineered couplings and stochastic movements. In our
approach, a single capsid is dissociated by an application of a high
temperature for a variable period and then the system is cooled down to allow
for self-assembly. The restoration of the capsid proceeds to various extent,
depending on the nature of the dissociated state, but is rarely complete
because some proteins depart too far unless the process takes place in a
confined space.
| [
{
"created": "Fri, 2 Nov 2018 16:37:59 GMT",
"version": "v1"
}
] | 2018-11-05 | [
[
"Wolek",
"Karol",
""
],
[
"Cieplak",
"Marek",
""
]
] | We consider self-assembly of proteins into a virus capsid by the methods of molecular dynamics. The capsid corresponds either to SPMV or CCMV and is studied with and without the RNA molecule inside. The proteins are flexible and described by the structure-based coarse-grained model augmented by electrostatic interactions. Previous studies of the capsid self-assembly involved solid objects of a supramolecular scale, e.g. corresponding to capsomeres, with engineered couplings and stochastic movements. In our approach, a single capsid is dissociated by an application of a high temperature for a variable period and then the system is cooled down to allow for self-assembly. The restoration of the capsid proceeds to various extent, depending on the nature of the dissociated state, but is rarely complete because some proteins depart too far unless the process takes place in a confined space. |
2107.10966 | David Hughes Dr | David J. Hughes, Joseph R Crosswell, Martina A. Doblin, Kevin
Oxborough, Peter J. Ralph, Deepa Varkey and David J. Suggett | Dynamic variability of the phytoplankton electron requirement for carbon
fixation in eastern Australian waters | 57 pages, 14 figures, accepted version | J. Mar. Syst. 2020:103252 | 10.1016/j.jmarsys.2019.103252 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fast Repetition Rate fluorometry (FRRf) generates high-resolution measures of
phytoplankton primary productivity as electron transport rates (ETRs). How ETRs
scale to corresponding inorganic carbon (C) uptake rates (the so-called
electron requirement for carbon fixation, e,C), inherently describes the extent
and effectiveness with which absorbed light energy drives C-fixation. However,
it remains unclear whether and how e,C follows predictable patterns for
oceanographic datasets spanning physically dynamic, and complex, environmental
gradients. We utilise a unique high-throughput approach, coupling ETRs and
14C-incubations to produce a semi-continuous dataset of e,C (n = 80),
predominantly from surface waters, along the Australian coast (Brisbane to the
Tasman Sea), including the East Australian Current (EAC). Environmental
conditions along this transect could be generally grouped into cooler, more
nutrient-rich waters dominated by larger size-fractionated Chl-a (>10 um)
versus warmer nutrient-poorer waters dominated by smaller size-fractionated
Chl-a (< 2 um). Whilst e,C was higher for warmer water samples, environmental
conditions alone explained less than 20% variance of e,C, and changes in
predominant size-fraction(s) distributions of Chl-a (biomass) failed to explain
variance of e,C. Instead, NPQNSV was a better predictor of e,C, explaining 55%
of observed variability. NPQNSV is a physiological descriptor that accounts for
changes in both long-term driven acclimation in non-radiative decay, and
quasi-instantaneous PSII downregulation, and thus may prove a useful predictor
of e,C across physically-dynamic regimes, provided the slope describing their
relationship is predictable.
| [
{
"created": "Fri, 23 Jul 2021 00:12:41 GMT",
"version": "v1"
}
] | 2021-07-26 | [
[
"Hughes",
"David J.",
""
],
[
"Crosswell",
"Joseph R",
""
],
[
"Doblin",
"Martina A.",
""
],
[
"Oxborough",
"Kevin",
""
],
[
"Ralph",
"Peter J.",
""
],
[
"Varkey",
"Deepa",
""
],
[
"Suggett",
"David J.",
""
]
] | Fast Repetition Rate fluorometry (FRRf) generates high-resolution measures of phytoplankton primary productivity as electron transport rates (ETRs). How ETRs scale to corresponding inorganic carbon (C) uptake rates (the so-called electron requirement for carbon fixation, e,C), inherently describes the extent and effectiveness with which absorbed light energy drives C-fixation. However, it remains unclear whether and how e,C follows predictable patterns for oceanographic datasets spanning physically dynamic, and complex, environmental gradients. We utilise a unique high-throughput approach, coupling ETRs and 14C-incubations to produce a semi-continuous dataset of e,C (n = 80), predominantly from surface waters, along the Australian coast (Brisbane to the Tasman Sea), including the East Australian Current (EAC). Environmental conditions along this transect could be generally grouped into cooler, more nutrient-rich waters dominated by larger size-fractionated Chl-a (>10 um) versus warmer nutrient-poorer waters dominated by smaller size-fractionated Chl-a (< 2 um). Whilst e,C was higher for warmer water samples, environmental conditions alone explained less than 20% variance of e,C, and changes in predominant size-fraction(s) distributions of Chl-a (biomass) failed to explain variance of e,C. Instead, NPQNSV was a better predictor of e,C, explaining 55% of observed variability. NPQNSV is a physiological descriptor that accounts for changes in both long-term driven acclimation in non-radiative decay, and quasi-instantaneous PSII downregulation, and thus may prove a useful predictor of e,C across physically-dynamic regimes, provided the slope describing their relationship is predictable. |
1611.04077 | Christoph Adami | Christoph Adami, Jory Schossau, and Arend Hintze | The Reasonable Effectiveness of Agent-Based Simulations in Evolutionary
Game Theory | 5 pages. To appear in Physics of Life Reviews | Physics of Life Reviews 19 (2016) 38-42 | 10.1016/j.plrev.2016.11.005 | null | q-bio.PE cs.GT nlin.AO q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This is a Reply to comments published in Physics of Life Reviews, on our
article "Evolutionary game theory using agent-based methods" (Physics of Life
Reviews, 2016, arXiv:1404.0994).
| [
{
"created": "Sun, 13 Nov 2016 03:54:34 GMT",
"version": "v1"
}
] | 2016-12-07 | [
[
"Adami",
"Christoph",
""
],
[
"Schossau",
"Jory",
""
],
[
"Hintze",
"Arend",
""
]
] | This is a Reply to comments published in Physics of Life Reviews, on our article "Evolutionary game theory using agent-based methods" (Physics of Life Reviews, 2016, arXiv:1404.0994). |
0802.4361 | Luca Sbano | L. Sbano and M. Kirkilionis | Multiscale Analysis of Reaction Networks | null | null | null | 12/2007 | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In most natural sciences there is currently the insight that it is necessary
to bridge gaps between different processes which can be observed on different
scales. This is especially true in the field of chemical reactions where the
abilities to form bonds between different types of atoms and molecules create
much of the properties we experience in our everyday life, especially in all
biological activity. There are essentially two types of processes related to
biochemical reaction networks, the interactions among molecules and
interactions involving their conformational changes, so in a sense, their
internal state. The first type of processes can be conveniently approximated by
the so-called mass-action kinetics, but this is not necessarily so for the
second kind where molecular states do not define any kind of density or
concentration. In this paper we demonstrate the necessity to study reaction
networks in a stochastic formulation for which we can construct a coherent
approximation in terms of specific space-time scales and the number of
particles. The continuum limit procedure naturally creates equations of
Fokker-Planck type where the evolution of the concentration occurs on a slower
time scale when compared to the evolution of the conformational changes, for
example triggered by binding or unbinding events with other (typically smaller)
molecules. We apply the asymptotic theory to derive the effective, i.e.
macroscopic dynamics of the biochemical reaction system. The theory can also be
applied to other processes where entities can be described by finitely many
internal states, with changes of states occuring by arrival of other entities
described by a birth-death process.
| [
{
"created": "Fri, 29 Feb 2008 11:19:35 GMT",
"version": "v1"
}
] | 2008-03-03 | [
[
"Sbano",
"L.",
""
],
[
"Kirkilionis",
"M.",
""
]
] | In most natural sciences there is currently the insight that it is necessary to bridge gaps between different processes which can be observed on different scales. This is especially true in the field of chemical reactions where the abilities to form bonds between different types of atoms and molecules create much of the properties we experience in our everyday life, especially in all biological activity. There are essentially two types of processes related to biochemical reaction networks, the interactions among molecules and interactions involving their conformational changes, so in a sense, their internal state. The first type of processes can be conveniently approximated by the so-called mass-action kinetics, but this is not necessarily so for the second kind where molecular states do not define any kind of density or concentration. In this paper we demonstrate the necessity to study reaction networks in a stochastic formulation for which we can construct a coherent approximation in terms of specific space-time scales and the number of particles. The continuum limit procedure naturally creates equations of Fokker-Planck type where the evolution of the concentration occurs on a slower time scale when compared to the evolution of the conformational changes, for example triggered by binding or unbinding events with other (typically smaller) molecules. We apply the asymptotic theory to derive the effective, i.e. macroscopic dynamics of the biochemical reaction system. The theory can also be applied to other processes where entities can be described by finitely many internal states, with changes of states occuring by arrival of other entities described by a birth-death process. |
2203.15888 | Brenda Delamonica | Brenda Delamonica, Gabor Balazsi, Michael Shub | Cusp Bifurcation in Metastatic Breast Cancer Cells | 57 pages, 22 figures, code included | null | null | null | q-bio.CB math.DS | http://creativecommons.org/licenses/by/4.0/ | Ordinary differential equations (ODEs) can model the transition of cell
states over time. Bifurcation theory is a branch of dynamical systems which
studies changes in the behavior of an ODE system while one or more parameters
are varied. We have found that concepts in bifurcation theory may be applied to
model metastatic cell behavior. Our results show how a specific phenomenon
called a cusp bifurcation describes metastatic cell state transitions,
separating two qualitatively different transition modalities. Moreover, we show
how the cusp bifurcation models other genetic networks, and we relate the
dynamics after the bifurcation to observed phenomena in commitment to enter the
cell cycle.
| [
{
"created": "Tue, 29 Mar 2022 20:21:57 GMT",
"version": "v1"
},
{
"created": "Thu, 31 Mar 2022 12:39:03 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Jul 2023 14:28:54 GMT",
"version": "v3"
}
] | 2023-07-06 | [
[
"Delamonica",
"Brenda",
""
],
[
"Balazsi",
"Gabor",
""
],
[
"Shub",
"Michael",
""
]
] | Ordinary differential equations (ODEs) can model the transition of cell states over time. Bifurcation theory is a branch of dynamical systems which studies changes in the behavior of an ODE system while one or more parameters are varied. We have found that concepts in bifurcation theory may be applied to model metastatic cell behavior. Our results show how a specific phenomenon called a cusp bifurcation describes metastatic cell state transitions, separating two qualitatively different transition modalities. Moreover, we show how the cusp bifurcation models other genetic networks, and we relate the dynamics after the bifurcation to observed phenomena in commitment to enter the cell cycle. |
1103.0675 | Diana Clausznitzer | Diana Clausznitzer, Robert G Endres | Noise characteristics of the Escherichia coli rotary motor | 22 pages, 7 figures, 3 tutorials, supplementary information;
submitted manuscript | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The chemotaxis pathway in the bacterium Escherichia coli allows cells to
detect changes in external ligand concentration (e.g. nutrients). The pathway
regulates the flagellated rotary motors and hence the cells' swimming
behaviour, steering them towards more favourable environments. While the
molecular components are well characterised, the motor behaviour measured by
tethered cell experiments has been difficult to interpret. Here, we study the
effects of sensing and signalling noise on the motor behaviour. Specifically,
we consider fluctuations stemming from ligand concentration, receptor switching
between their signalling states, adaptation, modification of proteins by
phosphorylation, and motor switching between its two rotational states. We
develop a model which includes all signalling steps in the pathway, and discuss
a simplified version, which captures the essential features of the full model.
We find that the noise characteristics of the motor contain signatures from all
these processes, albeit with varying magnitudes. This allows us to address how
cell-to-cell variation affects motor behaviour and the question of optimal
pathway design. A similar comprehensive analysis can be applied to other
two-component signalling pathways.
| [
{
"created": "Thu, 3 Mar 2011 12:54:55 GMT",
"version": "v1"
}
] | 2011-03-04 | [
[
"Clausznitzer",
"Diana",
""
],
[
"Endres",
"Robert G",
""
]
] | The chemotaxis pathway in the bacterium Escherichia coli allows cells to detect changes in external ligand concentration (e.g. nutrients). The pathway regulates the flagellated rotary motors and hence the cells' swimming behaviour, steering them towards more favourable environments. While the molecular components are well characterised, the motor behaviour measured by tethered cell experiments has been difficult to interpret. Here, we study the effects of sensing and signalling noise on the motor behaviour. Specifically, we consider fluctuations stemming from ligand concentration, receptor switching between their signalling states, adaptation, modification of proteins by phosphorylation, and motor switching between its two rotational states. We develop a model which includes all signalling steps in the pathway, and discuss a simplified version, which captures the essential features of the full model. We find that the noise characteristics of the motor contain signatures from all these processes, albeit with varying magnitudes. This allows us to address how cell-to-cell variation affects motor behaviour and the question of optimal pathway design. A similar comprehensive analysis can be applied to other two-component signalling pathways. |
1002.0559 | Randen Patterson | Gaurav Bhardwaj, Zhenhai Zhang, Yoojin Hong, Kyung Dae Ko, Gue Su
Chang, Evan J. Smith, Lindsay A. Kline, D. Nicholas Hartranft, Edward C.
Holmes, Randen L. Patterson, and Damian B. van Rossum | Theories on PHYlogenetic ReconstructioN (PHYRN) | 13 pages, 6 figures | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The inability to resolve deep node relationships of highly divergent/rapidly
evolving protein families is a major factor that stymies evolutionary studies.
In this manuscript, we propose a Multiple Sequence Alignment (MSA) independent
method to infer evolutionary relationships. We previously demonstrated that
phylogenetic profiles built using position specific scoring matrices (PSSMs)
are capable of constructing informative evolutionary histories(1;2). In this
manuscript, we theorize that PSSMs derived specifically from the query
sequences used to construct the phylogenetic tree will improve this method for
the study of rapidly evolving proteins. To test this theory, we performed
phylogenetic analyses of a benchmark protein superfamily (reverse
transcriptases (RT)) as well as simulated datasets. When we compare the results
obtained from our method, PHYlogenetic ReconstructioN (PHYRN), with other
MSA-dependent methods, we observe that PHYRN provides a 4- to 100-fold increase in
accurate measurements at deep nodes. As phylogenetic profiles are used as the
information source, rather than MSA, we propose PHYRN as a paradigm shift in
studying evolution when MSA approaches fail. Perhaps most importantly, due to
the improvements in our computational approach and the availability of vast
amounts of sequencing data, PHYRN is scalable to thousands of sequences. Taken
together with PHYRN's adaptability to any protein family, this method can serve
as a tool for resolving ambiguities in evolutionary studies of rapidly
evolving/highly divergent protein families.
| [
{
"created": "Tue, 2 Feb 2010 18:13:11 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Feb 2010 16:12:58 GMT",
"version": "v2"
}
] | 2010-02-26 | [
[
"Bhardwaj",
"Gaurav",
""
],
[
"Zhang",
"Zhenhai",
""
],
[
"Hong",
"Yoojin",
""
],
[
"Ko",
"Kyung Dae",
""
],
[
"Chang",
"Gue Su",
""
],
[
"Smith",
"Evan J.",
""
],
[
"Kline",
"Lindsay A.",
""
],
[
"Hartranft",
"D. Nicholas",
""
],
[
"Holmes",
"Edward C.",
""
],
[
"Patterson",
"Randen L.",
""
],
[
"van Rossum",
"Damian B.",
""
]
] | The inability to resolve deep node relationships of highly divergent/rapidly evolving protein families is a major factor that stymies evolutionary studies. In this manuscript, we propose a Multiple Sequence Alignment (MSA) independent method to infer evolutionary relationships. We previously demonstrated that phylogenetic profiles built using position specific scoring matrices (PSSMs) are capable of constructing informative evolutionary histories(1;2). In this manuscript, we theorize that PSSMs derived specifically from the query sequences used to construct the phylogenetic tree will improve this method for the study of rapidly evolving proteins. To test this theory, we performed phylogenetic analyses of a benchmark protein superfamily (reverse transcriptases (RT)) as well as simulated datasets. When we compare the results obtained from our method, PHYlogenetic ReconstructioN (PHYRN), with other MSA-dependent methods, we observe that PHYRN provides a 4- to 100-fold increase in accurate measurements at deep nodes. As phylogenetic profiles are used as the information source, rather than MSA, we propose PHYRN as a paradigm shift in studying evolution when MSA approaches fail. Perhaps most importantly, due to the improvements in our computational approach and the availability of vast amounts of sequencing data, PHYRN is scalable to thousands of sequences. Taken together with PHYRN's adaptability to any protein family, this method can serve as a tool for resolving ambiguities in evolutionary studies of rapidly evolving/highly divergent protein families. |
2311.17969 | Sonish Sivarajkumar | Sonish Sivarajkumar, Pratyush Tandale, Ankit Bhardwaj, Kipp W.
Johnson, Anoop Titus, Benjamin S. Glicksberg, Shameer Khader, Kamlesh K.
Yadav, Lakshminarayanan Subramanian | Generation of a Compendium of Transcription Factor Cascades and
Identification of Potential Therapeutic Targets using Graph Machine Learning | null | null | null | null | q-bio.MN cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transcription factors (TFs) play a vital role in the regulation of gene
expression thereby making them critical to many cellular processes. In this
study, we used graph machine learning methods to create a compendium of TF
cascades using data extracted from the STRING database. A TF cascade is a
sequence of TFs that regulate each other, forming a directed path in the TF
network. We constructed a knowledge graph of 81,488 unique TF cascades, with
the longest cascade consisting of 62 TFs. Our results highlight the complex and
intricate nature of TF interactions, where multiple TFs work together to
regulate gene expression. We also identified 10 TFs with the highest regulatory
influence based on centrality measurements, providing valuable information for
researchers interested in studying specific TFs. Furthermore, our pathway
enrichment analysis revealed significant enrichment of various pathways and
functional categories, including those involved in cancer and other diseases,
as well as those involved in development, differentiation, and cell signaling.
The enriched pathways identified in this study may have potential as targets
for therapeutic intervention in diseases associated with dysregulation of
transcription factors. We have released the dataset, knowledge graph, and
graphML methods for the TF cascades, and created a website to display the
results, which can be accessed by researchers interested in using this dataset.
Our study provides a valuable resource for understanding the complex network of
interactions between TFs and their regulatory roles in cellular processes.
| [
{
"created": "Wed, 29 Nov 2023 15:31:58 GMT",
"version": "v1"
}
] | 2023-12-01 | [
[
"Sivarajkumar",
"Sonish",
""
],
[
"Tandale",
"Pratyush",
""
],
[
"Bhardwaj",
"Ankit",
""
],
[
"Johnson",
"Kipp W.",
""
],
[
"Titus",
"Anoop",
""
],
[
"Glicksberg",
"Benjamin S.",
""
],
[
"Khader",
"Shameer",
""
],
[
"Yadav",
"Kamlesh K.",
""
],
[
"Subramanian",
"Lakshminarayanan",
""
]
] | Transcription factors (TFs) play a vital role in the regulation of gene expression thereby making them critical to many cellular processes. In this study, we used graph machine learning methods to create a compendium of TF cascades using data extracted from the STRING database. A TF cascade is a sequence of TFs that regulate each other, forming a directed path in the TF network. We constructed a knowledge graph of 81,488 unique TF cascades, with the longest cascade consisting of 62 TFs. Our results highlight the complex and intricate nature of TF interactions, where multiple TFs work together to regulate gene expression. We also identified 10 TFs with the highest regulatory influence based on centrality measurements, providing valuable information for researchers interested in studying specific TFs. Furthermore, our pathway enrichment analysis revealed significant enrichment of various pathways and functional categories, including those involved in cancer and other diseases, as well as those involved in development, differentiation, and cell signaling. The enriched pathways identified in this study may have potential as targets for therapeutic intervention in diseases associated with dysregulation of transcription factors. We have released the dataset, knowledge graph, and graphML methods for the TF cascades, and created a website to display the results, which can be accessed by researchers interested in using this dataset. Our study provides a valuable resource for understanding the complex network of interactions between TFs and their regulatory roles in cellular processes. |
1108.4575 | Piero Olla | Piero Olla | Demographic fluctuations in a population of anomalously diffusing
individuals | 10 pages, 6 figures | Phys. Rev. E; 85, 021125 (2012) | 10.1103/PhysRevE.85.021125 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The phenomenon of spatial clustering induced by death and reproduction in a
population of anomalously diffusing individuals is studied analytically. The
possibility of social behaviors affecting the migration strategies has been
examined in the case where anomalous diffusion is produced by means of a
continuous time random walk (CTRW). In the case of independently diffusing
individuals, the dynamics appears to coincide with that of (dying and
reproducing) Brownian walkers. In the strongly social case, the dynamics
coincides with that of non-migrating individuals. In both limits, the growth
rate of the fluctuations becomes independent of the Hurst exponent of the CTRW.
The social behaviors that arise when transport in a population is induced by a
spatial distribution of random traps have been analyzed.
| [
{
"created": "Tue, 23 Aug 2011 13:03:17 GMT",
"version": "v1"
},
{
"created": "Thu, 15 Mar 2012 14:08:34 GMT",
"version": "v2"
}
] | 2015-05-30 | [
[
"Olla",
"Piero",
""
]
] | The phenomenon of spatial clustering induced by death and reproduction in a population of anomalously diffusing individuals is studied analytically. The possibility of social behaviors affecting the migration strategies has been examined in the case where anomalous diffusion is produced by means of a continuous time random walk (CTRW). In the case of independently diffusing individuals, the dynamics appears to coincide with that of (dying and reproducing) Brownian walkers. In the strongly social case, the dynamics coincides with that of non-migrating individuals. In both limits, the growth rate of the fluctuations becomes independent of the Hurst exponent of the CTRW. The social behaviors that arise when transport in a population is induced by a spatial distribution of random traps have been analyzed. |
1709.05059 | David Warne | David J. Warne (1), Ruth E. Baker (2), Matthew J. Simpson (1) ((1)
Queensland University of Technology, (2) University of Oxford) | Optimal quantification of contact inhibition in cell populations | null | null | 10.1016/j.bpj.2017.09.016 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contact inhibition refers to a reduction in the rate of cell migration and/or
cell proliferation in regions of high cell density. Under normal conditions
contact inhibition is associated with the proper functioning of tissues, whereas
abnormal regulation of contact inhibition is associated with pathological
conditions, such as tumor spreading. Unfortunately, standard mathematical
modeling practices mask the importance of parameters that control contact
inhibition through scaling arguments. Furthermore, standard experimental
protocols are insufficient to quantify the effects of contact inhibition
because they focus on data describing early time, low-density dynamics only.
Here we use the logistic growth equation as a caricature model of contact
inhibition to make recommendations as to how to best mitigate these issues.
Taking a Bayesian approach we quantify the trade-off between different features
of experimental design and estimates of parameter uncertainty so that we can
re-formulate a standard cell proliferation assay to provide estimates of both
the low-density intrinsic growth rate, $\lambda$, and the carrying capacity
density, $K$, which is a measure of contact inhibition.
| [
{
"created": "Fri, 15 Sep 2017 04:52:56 GMT",
"version": "v1"
}
] | 2018-03-01 | [
[
"Warne",
"David J.",
""
],
[
"Baker",
"Ruth E.",
""
],
[
"Simpson",
"Matthew J.",
""
]
] | Contact inhibition refers to a reduction in the rate of cell migration and/or cell proliferation in regions of high cell density. Under normal conditions contact inhibition is associated with the proper functioning of tissues, whereas abnormal regulation of contact inhibition is associated with pathological conditions, such as tumor spreading. Unfortunately, standard mathematical modeling practices mask the importance of parameters that control contact inhibition through scaling arguments. Furthermore, standard experimental protocols are insufficient to quantify the effects of contact inhibition because they focus on data describing early time, low-density dynamics only. Here we use the logistic growth equation as a caricature model of contact inhibition to make recommendations as to how to best mitigate these issues. Taking a Bayesian approach we quantify the trade-off between different features of experimental design and estimates of parameter uncertainty so that we can re-formulate a standard cell proliferation assay to provide estimates of both the low-density intrinsic growth rate, $\lambda$, and the carrying capacity density, $K$, which is a measure of contact inhibition. |
2108.02545 | Claus Metzner | Claus Metzner and Patrick Krauss | Dynamical Phases and Resonance Phenomena in Information-Processing
Recurrent Neural Networks | null | null | null | null | q-bio.NC nlin.CD physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Recurrent neural networks (RNNs) are complex dynamical systems, capable of
ongoing activity without any driving input. The long-term behavior of
free-running RNNs, described by periodic, chaotic and fixed point attractors,
is controlled by the statistics of the neural connection weights, such as the
density $d$ of non-zero connections, or the balance $b$ between excitatory and
inhibitory connections. However, for information processing purposes, RNNs need
to receive external input signals, and it is not clear which of the dynamical
regimes is optimal for this information import. We use both the average
correlations $C$ and the mutual information $I$ between the momentary input
vector and the next system state vector as quantitative measures of information
import and analyze their dependence on the balance and density of the network.
Remarkably, both resulting phase diagrams $C(b,d)$ and $I(b,d)$ are highly
consistent, pointing to a link between the dynamical systems and the
information-processing approach to complex systems. Information import is
maximal not at the 'edge of chaos', which is optimally suited for computation,
but surprisingly in the low-density chaotic regime and at the border between
the chaotic and fixed point regime. Moreover, we find a completely new type of
resonance phenomenon, called 'Import Resonance' (IR), where the information
import shows a maximum, i.e. a peak-like dependence on the coupling strength
between the RNN and its input. IR complements Recurrence Resonance (RR), where
correlation and mutual information of successive system states peak for a
certain amplitude of noise added to the system. Both IR and RR can be exploited
to optimize information processing in artificial neural networks and might also
play a crucial role in biological neural systems.
| [
{
"created": "Thu, 5 Aug 2021 11:59:56 GMT",
"version": "v1"
}
] | 2021-08-06 | [
[
"Metzner",
"Claus",
""
],
[
"Krauss",
"Patrick",
""
]
] | Recurrent neural networks (RNNs) are complex dynamical systems, capable of ongoing activity without any driving input. The long-term behavior of free-running RNNs, described by periodic, chaotic and fixed point attractors, is controlled by the statistics of the neural connection weights, such as the density $d$ of non-zero connections, or the balance $b$ between excitatory and inhibitory connections. However, for information processing purposes, RNNs need to receive external input signals, and it is not clear which of the dynamical regimes is optimal for this information import. We use both the average correlations $C$ and the mutual information $I$ between the momentary input vector and the next system state vector as quantitative measures of information import and analyze their dependence on the balance and density of the network. Remarkably, both resulting phase diagrams $C(b,d)$ and $I(b,d)$ are highly consistent, pointing to a link between the dynamical systems and the information-processing approach to complex systems. Information import is maximal not at the 'edge of chaos', which is optimally suited for computation, but surprisingly in the low-density chaotic regime and at the border between the chaotic and fixed point regime. Moreover, we find a completely new type of resonance phenomenon, called 'Import Resonance' (IR), where the information import shows a maximum, i.e. a peak-like dependence on the coupling strength between the RNN and its input. IR complements Recurrence Resonance (RR), where correlation and mutual information of successive system states peak for a certain amplitude of noise added to the system. Both IR and RR can be exploited to optimize information processing in artificial neural networks and might also play a crucial role in biological neural systems. |
1908.07520 | Akram Yazdani PhD | Akram Yazdani, Raul Mendez-Giraldez, Michael R Kosorok, Panos Roussos | Transcriptomic Causal Networks identified patterns of differential gene
regulation in human brain from Schizophrenia cases versus controls | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Common and complex traits are the consequence of the interaction and
regulation of multiple genes simultaneously, which work in a coordinated way.
However, the vast majority of studies focus on the differential expression of
one individual gene at a time. Here, we aim to provide insight into the
underlying relationships of the genes expressed in the human brain in cases
with schizophrenia (SCZ) and controls. We introduced a novel approach to
identify differential gene regulatory patterns and identify a set of essential
genes in the brain tissue. Our method integrates genetic, transcriptomic, and
Hi-C data and generates a transcriptomic-causal network. Employing this
approach for analysis of RNA-seq data from CommonMind Consortium, we identified
differential regulatory patterns for SCZ cases and control groups to unveil the
mechanisms that control the transcription of the genes in the human brain. Our
analysis identified modules with a high number of SCZ-associated genes and
assessed the relationship of the hubs with their downstream genes in both
cases and controls. In addition, the results identified essential genes for
brain function and suggested new genes putatively related to SCZ.
| [
{
"created": "Tue, 20 Aug 2019 16:24:28 GMT",
"version": "v1"
}
] | 2019-08-22 | [
[
"Yazdani",
"Akram",
""
],
[
"Mendez-Giraldez",
"Raul",
""
],
[
"Kosorok",
"Michael R",
""
],
[
"Roussos",
"Panos",
""
]
] | Common and complex traits are the consequence of the interaction and regulation of multiple genes simultaneously, which work in a coordinated way. However, the vast majority of studies focus on the differential expression of one individual gene at a time. Here, we aim to provide insight into the underlying relationships of the genes expressed in the human brain in cases with schizophrenia (SCZ) and controls. We introduced a novel approach to identify differential gene regulatory patterns and identify a set of essential genes in the brain tissue. Our method integrates genetic, transcriptomic, and Hi-C data and generates a transcriptomic-causal network. Employing this approach for analysis of RNA-seq data from CommonMind Consortium, we identified differential regulatory patterns for SCZ cases and control groups to unveil the mechanisms that control the transcription of the genes in the human brain. Our analysis identified modules with a high number of SCZ-associated genes and assessed the relationship of the hubs with their downstream genes in both cases and controls. In addition, the results identified essential genes for brain function and suggested new genes putatively related to SCZ. |
2309.16261 | Pierre Haas | Yu Meng, Szabolcs Horv\'at, Carl D. Modes, Pierre A. Haas | Impossible ecologies: Interaction networks and stability of coexistence
in ecological communities | 14 pages, 6 figures, 3 supplementary figures | null | null | null | q-bio.PE physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Does an ecological community allow stable coexistence? Identifying the
general principles that determine the answer to this question is a central
problem of theoretical ecology. Random matrix theory approaches have uncovered
the general trends of the effect of competitive, mutualistic, and predator-prey
interactions between species on stability of coexistence. However, an
ecological community is determined not only by the counts of these different
interaction types, but also by their network arrangement. This cannot be
accounted for in a direct statistical description that would enable random
matrix theory approaches. Here, we therefore develop a different approach, of
exhaustive analysis of small ecological communities, to show that this
arrangement of interactions can influence stability of coexistence more than
these general trends. We analyse all interaction networks of $N\leqslant 5$
species with Lotka-Volterra dynamics by combining exact results for $N\leqslant
3$ species and numerical exploration. Surprisingly, we find that a very small
subset of these networks are "impossible ecologies", in which stable
coexistence is non-trivially impossible. We prove that the possibility of
stable coexistence in general ecologies is determined by similarly rare
"irreducible ecologies". By random sampling of interaction strengths, we then
show that the probability of stable coexistence varies over many orders of
magnitude even in ecologies that differ only in the network arrangement of
identical ecological interactions. Finally, we demonstrate that our approach
can reveal the effect of evolutionary or environmental perturbations of the
interaction network. Overall, this work reveals the importance of the full
structure of the network of interactions for stability of coexistence in
ecological communities.
| [
{
"created": "Thu, 28 Sep 2023 08:54:28 GMT",
"version": "v1"
}
] | 2023-09-29 | [
[
"Meng",
"Yu",
""
],
[
"Horvát",
"Szabolcs",
""
],
[
"Modes",
"Carl D.",
""
],
[
"Haas",
"Pierre A.",
""
]
] | Does an ecological community allow stable coexistence? Identifying the general principles that determine the answer to this question is a central problem of theoretical ecology. Random matrix theory approaches have uncovered the general trends of the effect of competitive, mutualistic, and predator-prey interactions between species on stability of coexistence. However, an ecological community is determined not only by the counts of these different interaction types, but also by their network arrangement. This cannot be accounted for in a direct statistical description that would enable random matrix theory approaches. Here, we therefore develop a different approach, of exhaustive analysis of small ecological communities, to show that this arrangement of interactions can influence stability of coexistence more than these general trends. We analyse all interaction networks of $N\leqslant 5$ species with Lotka-Volterra dynamics by combining exact results for $N\leqslant 3$ species and numerical exploration. Surprisingly, we find that a very small subset of these networks are "impossible ecologies", in which stable coexistence is non-trivially impossible. We prove that the possibility of stable coexistence in general ecologies is determined by similarly rare "irreducible ecologies". By random sampling of interaction strengths, we then show that the probability of stable coexistence varies over many orders of magnitude even in ecologies that differ only in the network arrangement of identical ecological interactions. Finally, we demonstrate that our approach can reveal the effect of evolutionary or environmental perturbations of the interaction network. Overall, this work reveals the importance of the full structure of the network of interactions for stability of coexistence in ecological communities. |
1401.4938 | A. Dietrich | Axel Dietrich and Willem Been | Consciousness and Learning based on DNA Recombination and Memristor
Quality of Microtubules | 11 pages, 3 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is a completion of an earlier model proposed by us. In the model
different memories are attached to cell surface determinants, which are the
result of DNA recombination. Our earlier experiments strongly suggest that DNA
recombination actually takes place during a short period of early development
in the brain in a limited number of neurons. In the present paper a model is
presented in which switchboard neurons play a key role in the storage and
retrieval of memory. As a consequence, they play a major role in the
process of learning and form the basic material for consciousness. In the
original model there was insufficient explanation for the realization of the
internal connection of one cell surface determinant to the other. We realized
that tubulin should play a role in these intracellular connections. The tubulin
molecules can form a connective wire because of a change of shape of the
individual tubulin dimers. This way the fast switch is realized by the switch
of the tubulin dimer configuration. Because the cell should remember which
switch was activated and which one was not or less activated, we postulate the
memristor quality of microtubules.
| [
{
"created": "Fri, 17 Jan 2014 10:07:47 GMT",
"version": "v1"
}
] | 2014-01-21 | [
[
"Dietrich",
"Axel",
""
],
[
"Been",
"Willem",
""
]
] | This paper is a completion of an earlier model proposed by us. In the model different memories are attached to cell surface determinants, which are the result of DNA recombination. Our earlier experiments strongly suggest that DNA recombination actually takes place during a short period of early development in the brain in a limited number of neurons. In the present paper a model is presented in which switchboard neurons play a key role in the storage and retrieval of memory. As a consequence, they play a major role in the process of learning and form the basic material for consciousness. In the original model there was insufficient explanation for the realization of the internal connection of one cell surface determinant to the other. We realized that tubulin should play a role in these intracellular connections. The tubulin molecules can form a connective wire because of a change of shape of the individual tubulin dimers. This way the fast switch is realized by the switch of the tubulin dimer configuration. Because the cell should remember which switch was activated and which one was not or less activated, we postulate the memristor quality of microtubules. |
1501.05880 | Peter Solymos | Peter Solymos, Subhash R. Lele | Revisiting resource selection probability functions and single-visit
methods: Clarification and extensions | Forum article and rebuttal to Knape, J., & Korner-Nievergelt, F.,
2015. Estimates from non-replicated population surveys rely on critical
assumptions. Methods in Ecology and Evolution, 6, 298--306 | Methods in Ecology and Evolution 7(2), 196--205, 2016 | 10.1111/2041-210X.12432 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Models accounting for imperfect detection are important. Single-visit methods
have been proposed as an alternative to multiple-visit methods to relax the
assumption of a closed population. Knape and Korner-Nievergelt (2015) showed that
under certain models of probability of detection single-visit methods are
statistically non-identifiable leading to biased population estimates. There is
a close relationship between estimation of the resource selection probability
function (RSPF) using weighted distributions and single-visit methods for
occupancy and abundance estimation. We explain the precise mathematical
conditions needed for RSPF estimation as stated in Lele and Keim (2006). The
identical conditions, which remained unstated in our papers on single-visit
methodology, are needed for single-visit methodology to work. We show that the
class of admissible models is quite broad and does not excessively restrict the
application of the RSPF or the single-visit methodology. To complement the work
by Knape and Korner-Nievergelt, we study the performance of multiple-visit
methods under the scaled logistic detection function and a much wider set of
situations. In general, under the scaled logistic detection function
multiple-visit methods also lead to biased estimates. As a solution to this
problem, we extend the single-visit methodology to a class of models that
allows use of a scaled probability function. We propose a multinomial extension
of single-visit methodology that can be used to check whether the detection
function satisfies the RSPF condition or not. Furthermore, we show that if the
scaling factor depends on covariates, then it can also be estimated.
| [
{
"created": "Fri, 23 Jan 2015 17:12:07 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Mar 2015 18:28:03 GMT",
"version": "v2"
},
{
"created": "Tue, 9 Jun 2015 22:16:04 GMT",
"version": "v3"
},
{
"created": "Fri, 12 Jun 2015 17:00:18 GMT",
"version": "v4"
}
] | 2016-02-24 | [
[
"Solymos",
"Peter",
""
],
[
"Lele",
"Subhash R.",
""
]
] | Models accounting for imperfect detection are important. Single-visit methods have been proposed as an alternative to multiple-visit methods to relax the assumption of a closed population. Knape and Korner-Nievergelt (2015) showed that under certain models of probability of detection single-visit methods are statistically non-identifiable leading to biased population estimates. There is a close relationship between estimation of the resource selection probability function (RSPF) using weighted distributions and single-visit methods for occupancy and abundance estimation. We explain the precise mathematical conditions needed for RSPF estimation as stated in Lele and Keim (2006). The identical conditions, which remained unstated in our papers on single-visit methodology, are needed for single-visit methodology to work. We show that the class of admissible models is quite broad and does not excessively restrict the application of the RSPF or the single-visit methodology. To complement the work by Knape and Korner-Nievergelt, we study the performance of multiple-visit methods under the scaled logistic detection function and a much wider set of situations. In general, under the scaled logistic detection function multiple-visit methods also lead to biased estimates. As a solution to this problem, we extend the single-visit methodology to a class of models that allows use of a scaled probability function. We propose a multinomial extension of single-visit methodology that can be used to check whether the detection function satisfies the RSPF condition or not. Furthermore, we show that if the scaling factor depends on covariates, then it can also be estimated. |
1610.00161 | Blake Richards | Jordan Guergiuev, Timothy P. Lillicrap and Blake A. Richards | Towards deep learning with segregated dendrites | 41 pages, 11 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has led to significant advances in artificial intelligence, in
part, by adopting strategies motivated by neurophysiology. However, it is
unclear whether deep learning could occur in the real brain. Here, we show that
a deep learning algorithm that utilizes multi-compartment neurons might help us
to understand how the brain optimizes cost functions. Like neocortical
pyramidal neurons, neurons in our model receive sensory information and
higher-order feedback in electrotonically segregated compartments. Thanks to
this segregation, the neurons in different layers of the network can coordinate
synaptic weight updates. As a result, the network can learn to categorize
images better than a single layer network. Furthermore, we show that our
algorithm takes advantage of multilayer architectures to identify useful
representations---the hallmark of deep learning. This work demonstrates that
deep learning can be achieved using segregated dendritic compartments, which
may help to explain the dendritic morphology of neocortical pyramidal neurons.
| [
{
"created": "Sat, 1 Oct 2016 17:37:34 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Nov 2016 18:07:26 GMT",
"version": "v2"
},
{
"created": "Fri, 7 Apr 2017 18:45:30 GMT",
"version": "v3"
}
] | 2017-04-11 | [
[
"Guergiuev",
"Jordan",
""
],
[
"Lillicrap",
"Timothy P.",
""
],
[
"Richards",
"Blake A.",
""
]
] | Deep learning has led to significant advances in artificial intelligence, in part, by adopting strategies motivated by neurophysiology. However, it is unclear whether deep learning could occur in the real brain. Here, we show that a deep learning algorithm that utilizes multi-compartment neurons might help us to understand how the brain optimizes cost functions. Like neocortical pyramidal neurons, neurons in our model receive sensory information and higher-order feedback in electrotonically segregated compartments. Thanks to this segregation, the neurons in different layers of the network can coordinate synaptic weight updates. As a result, the network can learn to categorize images better than a single layer network. Furthermore, we show that our algorithm takes advantage of multilayer architectures to identify useful representations---the hallmark of deep learning. This work demonstrates that deep learning can be achieved using segregated dendritic compartments, which may help to explain the dendritic morphology of neocortical pyramidal neurons. |
q-bio/0511026 | Luca Giuggioli | L. Giuggioli, G. Abramson, V.M. Kenkre, R.R. Parmenter, T.L. Yates | Theory of Home Range Estimation from Mark-Recapture Measurements of
Animal Populations | 21 pages, 7 figures, in press Journal of Theoretical Biology | null | null | null | q-bio.PE q-bio.OT | null | A theory is provided for the estimation of home ranges of animals from the
standard mark-recapture technique in which data are collected by capturing,
tagging and recapturing the animals. The theoretical tool used is the
Fokker-Planck equation, its characteristic quantities being the diffusion
constant which describes the motion of the animals, and the attractive
potential which addresses their tendency to live near their burrows. The
measurement technique is shown to correspond to the calculation of a certain
kind of mean square displacement of the animals relevant to the specific
probing window in space corresponding to the trapping region. The output of the
theory is a sigmoid curve of the observable mean square displacement as a
function of the ratio of distances characteristic of the home range and the
trapping region, along with an explicit prescription to extract the home range
from observations. Applications of the theory to rodent movement in Panama and
New Mexico are pointed out. An analysis is given of the sensitivity of our
theory to the choice of the confining potential via the use of various
representative cases. A comparison is provided between home range size inferred
from our method and from other procedures employed in the literature.
Consequences of home range overlap are also discussed.
| [
{
"created": "Tue, 15 Nov 2005 22:44:44 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Giuggioli",
"L.",
""
],
[
"Abramson",
"G.",
""
],
[
"Kenkre",
"V. M.",
""
],
[
"Parmenter",
"R. R.",
""
],
[
"Yates",
"T. L.",
""
]
] | A theory is provided for the estimation of home ranges of animals from the standard mark-recapture technique in which data are collected by capturing, tagging and recapturing the animals. The theoretical tool used is the Fokker-Planck equation, its characteristic quantities being the diffusion constant which describes the motion of the animals, and the attractive potential which addresses their tendency to live near their burrows. The measurement technique is shown to correspond to the calculation of a certain kind of mean square displacement of the animals relevant to the specific probing window in space corresponding to the trapping region. The output of the theory is a sigmoid curve of the observable mean square displacement as a function of the ratio of distances characteristic of the home range and the trapping region, along with an explicit prescription to extract the home range from observations. Applications of the theory to rodent movement in Panama and New Mexico are pointed out. An analysis is given of the sensitivity of our theory to the choice of the confining potential via the use of various representative cases. A comparison is provided between home range size inferred from our method and from other procedures employed in the literature. Consequences of home range overlap are also discussed.
1004.2821 | David Zwicker | David Zwicker, David K. Lubensky, Pieter Rein ten Wolde | Robust circadian clocks from coupled protein modification and
transcription-translation cycles | main text: 7 pages including 5 figures, supplementary information: 13
pages including 9 figures | P Natl Acad Sci Usa (2010) vol. 107 (52) pp. 22540-5 | 10.1073/pnas.1007613107 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The cyanobacterium Synechococcus elongatus uses both a protein
phosphorylation cycle and a transcription-translation cycle to generate
circadian rhythms that are highly robust against biochemical noise. We use
stochastic simulations to analyze how these cycles interact to generate stable
rhythms in growing, dividing cells. We find that a protein phosphorylation
cycle by itself is robust when protein turnover is low. For high decay or
dilution rates (and compensating synthesis rate), however, the
phosphorylation-based oscillator loses its integrity. Circadian rhythms thus
cannot be generated with a phosphorylation cycle alone when the growth rate,
and consequently the rate of protein dilution, is high enough; in practice, a
purely post-translational clock ceases to function well when the cell doubling
time drops below the 24 hour clock period. At higher growth rates, a
transcription-translation cycle becomes essential for generating robust
circadian rhythms. Interestingly, while a transcription-translation cycle is
necessary to sustain a phosphorylation cycle at high growth rates, a
phosphorylation cycle can dramatically enhance the robustness of a
transcription-translation cycle at lower protein decay or dilution rates. Our
analysis thus predicts that both cycles are required to generate robust
circadian rhythms over the full range of growth conditions.
| [
{
"created": "Fri, 16 Apr 2010 11:39:56 GMT",
"version": "v1"
}
] | 2012-02-16 | [
[
"Zwicker",
"David",
""
],
[
"Lubensky",
"David K.",
""
],
[
"Wolde",
"Pieter Rein ten",
""
]
] | The cyanobacterium Synechococcus elongatus uses both a protein phosphorylation cycle and a transcription-translation cycle to generate circadian rhythms that are highly robust against biochemical noise. We use stochastic simulations to analyze how these cycles interact to generate stable rhythms in growing, dividing cells. We find that a protein phosphorylation cycle by itself is robust when protein turnover is low. For high decay or dilution rates (and compensating synthesis rate), however, the phosphorylation-based oscillator loses its integrity. Circadian rhythms thus cannot be generated with a phosphorylation cycle alone when the growth rate, and consequently the rate of protein dilution, is high enough; in practice, a purely post-translational clock ceases to function well when the cell doubling time drops below the 24 hour clock period. At higher growth rates, a transcription-translation cycle becomes essential for generating robust circadian rhythms. Interestingly, while a transcription-translation cycle is necessary to sustain a phosphorylation cycle at high growth rates, a phosphorylation cycle can dramatically enhance the robustness of a transcription-translation cycle at lower protein decay or dilution rates. Our analysis thus predicts that both cycles are required to generate robust circadian rhythms over the full range of growth conditions.
1801.01876 | Yi Sun | Yi Sun | Root Mean Square Minimum Distance as a Quality Metric for Localization
Nanoscopy Images | 11 pages, 5 figures | Y. Sun, "Root mean square minimum distance as a quality metric for
stochastic optical localization nanoscopy images," Scientific Reports, 8(1),
Nov. 21, 2018 | 10.1038/s41598-018-35053-8 | null | q-bio.QM eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A localization algorithm in stochastic optical localization nanoscopy plays
an important role in obtaining a high-quality image. A universal and objective
metric is crucial and necessary to evaluate qualities of nanoscopy images and
performances of localization algorithms. In this paper, we propose root mean
square minimum distance (RMSMD) as a quality metric for localization nanoscopy
images. RMSMD measures an average, local, and mutual fitness between two sets
of points. Its properties common to a distance metric as well as unique to
itself are presented. The ambiguity, discontinuity, and inappropriateness of
the metrics of accuracy, precision, recall, and Jaccard index, which are
currently used in the literature, are analyzed. A numerical example
demonstrates the advantages of RMSMD over the four existing metrics that fail
to distinguish qualities of different nanoscopy images in certain conditions.
The unbiased Gaussian estimator that achieves the Fisher information and
Cramer-Rao lower bound (CRLB) of a single data frame is proposed to benchmark
the quality of localization nanoscopy images and the performance of
localization algorithms. The information-achieving estimator is simulated in an
example and the result demonstrates the superior sensitivity of RMSMD over the
other four metrics. As a universal and objective metric, RMSMD can be broadly
employed in various applications to measure the mutual fitness of two sets of
points.
| [
{
"created": "Sat, 6 Jan 2018 13:34:51 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Oct 2018 18:04:04 GMT",
"version": "v2"
},
{
"created": "Thu, 22 Nov 2018 17:58:02 GMT",
"version": "v3"
}
] | 2018-11-26 | [
[
"Sun",
"Yi",
""
]
] | A localization algorithm in stochastic optical localization nanoscopy plays an important role in obtaining a high-quality image. A universal and objective metric is crucial and necessary to evaluate qualities of nanoscopy images and performances of localization algorithms. In this paper, we propose root mean square minimum distance (RMSMD) as a quality metric for localization nanoscopy images. RMSMD measures an average, local, and mutual fitness between two sets of points. Its properties common to a distance metric as well as unique to itself are presented. The ambiguity, discontinuity, and inappropriateness of the metrics of accuracy, precision, recall, and Jaccard index, which are currently used in the literature, are analyzed. A numerical example demonstrates the advantages of RMSMD over the four existing metrics that fail to distinguish qualities of different nanoscopy images in certain conditions. The unbiased Gaussian estimator that achieves the Fisher information and Cramer-Rao lower bound (CRLB) of a single data frame is proposed to benchmark the quality of localization nanoscopy images and the performance of localization algorithms. The information-achieving estimator is simulated in an example and the result demonstrates the superior sensitivity of RMSMD over the other four metrics. As a universal and objective metric, RMSMD can be broadly employed in various applications to measure the mutual fitness of two sets of points. |
1411.7338 | Katherine St. John | Daniel Irving Bernstein, Lam Si Tung Ho, Colby Long, Mike Steel,
Katherine St. John, Seth Sullivant | Bounds on the Expected Size of the Maximum Agreement Subtree | Revised version | null | null | null | q-bio.PE cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We prove polynomial upper and lower bounds on the expected size of the
maximum agreement subtree of two random binary phylogenetic trees under both
the uniform distribution and Yule-Harding distribution. This positively answers
a question posed in earlier work. Determining tight upper and lower bounds
remains an open problem.
| [
{
"created": "Wed, 26 Nov 2014 19:20:08 GMT",
"version": "v1"
},
{
"created": "Mon, 31 Aug 2015 11:44:57 GMT",
"version": "v2"
}
] | 2015-09-01 | [
[
"Bernstein",
"Daniel Irving",
""
],
[
"Ho",
"Lam Si Tung",
""
],
[
"Long",
"Colby",
""
],
[
"Steel",
"Mike",
""
],
[
"John",
"Katherine St.",
""
],
[
"Sullivant",
"Seth",
""
]
] | We prove polynomial upper and lower bounds on the expected size of the maximum agreement subtree of two random binary phylogenetic trees under both the uniform distribution and Yule-Harding distribution. This positively answers a question posed in earlier work. Determining tight upper and lower bounds remains an open problem. |
2208.07983 | Grzegorz A Rempala | Istvan Z. Kiss, Eben Kenah, Grzegorz A. Rempala | Necessary and sufficient conditions for exact closures of epidemic
equations on configuration model networks | null | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We prove that the exact closure of SIR pairwise epidemic equations on a
configuration model network is possible if and only if the degree distribution
is Poisson, Binomial, or Negative Binomial. The proof relies on establishing,
for these specific degree distributions, the equivalence of the closed pairwise
model and the so-called dynamical survival analysis (DSA) edge-based model
which was previously shown to be exact. Indeed, as we show here, the DSA model
is equivalent to the well-known edge-based Volz model. We use this result to
provide reductions of the closed pairwise and Volz models to the same single
equation involving only susceptibles, which has a useful statistical
interpretation in terms of the times to infection. We illustrate our findings
with some numerical examples.
| [
{
"created": "Tue, 16 Aug 2022 22:46:05 GMT",
"version": "v1"
}
] | 2022-08-18 | [
[
"Kiss",
"Istvan Z.",
""
],
[
"Kenah",
"Eben",
""
],
[
"Rempala",
"Grzegorz A.",
""
]
] | We prove that the exact closure of SIR pairwise epidemic equations on a configuration model network is possible if and only if the degree distribution is Poisson, Binomial, or Negative Binomial. The proof relies on establishing, for these specific degree distributions, the equivalence of the closed pairwise model and the so-called dynamical survival analysis (DSA) edge-based model which was previously shown to be exact. Indeed, as we show here, the DSA model is equivalent to the well-known edge-based Volz model. We use this result to provide reductions of the closed pairwise and Volz models to the same single equation involving only susceptibles, which has a useful statistical interpretation in terms of the times to infection. We illustrate our findings with some numerical examples. |
1904.12341 | Gota Morota | Gota Morota, Diego Jarquin, Malachy T. Campbell, and Hiroyoshi Iwata | Statistical methods for the quantitative genetic analysis of
high-throughput phenotyping data | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | The advent of plant phenomics, coupled with the wealth of genotypic data
generated by next-generation sequencing technologies, provides exciting new
resources for investigations into and improvement of complex traits. However,
these new technologies also bring new challenges in quantitative genetics,
namely, a need for the development of robust frameworks that can accommodate
these high-dimensional data. In this chapter, we describe methods for the
statistical analysis of high-throughput phenotyping (HTP) data with the goal of
enhancing the prediction accuracy of genomic selection (GS). Following the
Introduction in Section 1, Section 2 discusses field-based HTP, including the
use of unmanned aerial vehicles and light detection and ranging, as well as how
we can achieve increased genetic gain by utilizing image data derived from HTP.
Section 3 considers extending commonly used GS models to integrate HTP data as
covariates associated with the principal trait response, such as yield.
Particular focus is placed on single-trait, multi-trait, and genotype by
environment interaction models. One unique aspect of HTP data is that phenomics
platforms often produce large-scale data with high spatial and temporal
resolution for capturing dynamic growth, development, and stress responses.
Section 4 discusses the utility of a random regression model for performing
longitudinal GS. The chapter concludes with a discussion of some standing
issues.
| [
{
"created": "Sun, 28 Apr 2019 16:32:06 GMT",
"version": "v1"
}
] | 2019-04-30 | [
[
"Morota",
"Gota",
""
],
[
"Jarquin",
"Diego",
""
],
[
"Campbell",
"Malachy T.",
""
],
[
"Iwata",
"Hiroyoshi",
""
]
] | The advent of plant phenomics, coupled with the wealth of genotypic data generated by next-generation sequencing technologies, provides exciting new resources for investigations into and improvement of complex traits. However, these new technologies also bring new challenges in quantitative genetics, namely, a need for the development of robust frameworks that can accommodate these high-dimensional data. In this chapter, we describe methods for the statistical analysis of high-throughput phenotyping (HTP) data with the goal of enhancing the prediction accuracy of genomic selection (GS). Following the Introduction in Section 1, Section 2 discusses field-based HTP, including the use of unmanned aerial vehicles and light detection and ranging, as well as how we can achieve increased genetic gain by utilizing image data derived from HTP. Section 3 considers extending commonly used GS models to integrate HTP data as covariates associated with the principal trait response, such as yield. Particular focus is placed on single-trait, multi-trait, and genotype by environment interaction models. One unique aspect of HTP data is that phenomics platforms often produce large-scale data with high spatial and temporal resolution for capturing dynamic growth, development, and stress responses. Section 4 discusses the utility of a random regression model for performing longitudinal GS. The chapter concludes with a discussion of some standing issues. |
2306.11756 | Markus D. Solbach | Markus D. Solbach, John K. Tsotsos | The Psychophysics of Human Three-Dimensional Active Visuospatial
Problem-Solving | Submitted at PNAS Nexus | null | null | null | q-bio.NC cs.CV | http://creativecommons.org/licenses/by/4.0/ | Our understanding of how visual systems detect, analyze and interpret visual
stimuli has advanced greatly. However, the visual systems of all animals do
much more; they enable visual behaviours. How well the visual system performs
while interacting with the visual environment and how vision is used in the
real world have not been well studied, especially in humans. It has been
suggested that comparison is the most primitive of psychophysical tasks. Thus,
as a probe into these active visual behaviours, we use a same-different task:
are two physical 3D objects visually the same? This task seems to be a
fundamental cognitive ability. We pose this question to human subjects who are
free to move about and examine two real objects in an actual 3D space. Past
work has dealt solely with a 2D static version of this problem. We have
collected detailed, first-of-its-kind data of humans performing a visuospatial
task in hundreds of trials. Strikingly, humans are remarkably good at this task
without any training, with a mean accuracy of 93.82%. No learning effect was
observed on accuracy after many trials, but some effect was seen for response
time, number of fixations and extent of head movement. Subjects demonstrated a
variety of complex strategies involving a range of movement and eye fixation
changes, suggesting that solutions were developed dynamically and tailored to
the specific task.
| [
{
"created": "Mon, 19 Jun 2023 19:36:42 GMT",
"version": "v1"
}
] | 2023-06-22 | [
[
"Solbach",
"Markus D.",
""
],
[
"Tsotsos",
"John K.",
""
]
] | Our understanding of how visual systems detect, analyze and interpret visual stimuli has advanced greatly. However, the visual systems of all animals do much more; they enable visual behaviours. How well the visual system performs while interacting with the visual environment and how vision is used in the real world have not been well studied, especially in humans. It has been suggested that comparison is the most primitive of psychophysical tasks. Thus, as a probe into these active visual behaviours, we use a same-different task: are two physical 3D objects visually the same? This task seems to be a fundamental cognitive ability. We pose this question to human subjects who are free to move about and examine two real objects in an actual 3D space. Past work has dealt solely with a 2D static version of this problem. We have collected detailed, first-of-its-kind data of humans performing a visuospatial task in hundreds of trials. Strikingly, humans are remarkably good at this task without any training, with a mean accuracy of 93.82%. No learning effect was observed on accuracy after many trials, but some effect was seen for response time, number of fixations and extent of head movement. Subjects demonstrated a variety of complex strategies involving a range of movement and eye fixation changes, suggesting that solutions were developed dynamically and tailored to the specific task. |
1504.06110 | Bassam AlKindy Mr. | Bassam AlKindy, Huda Al-Nayyef, Christophe Guyeux, Jean-Fran\c{c}ois
Couchot, Michel Salomon, Jacques M. Bahi | Improved Core Genes Prediction for Constructing well-supported
Phylogenetic Trees in large sets of Plant Species | 12 pages, 7 figures, IWBBIO 2015 (3rd International Work-Conference
on Bioinformatics and Biomedical Engineering) | Springer LNBI 9043, 2015, 379--390 | 10.1007/978-3-319-16483-0_38 | null | q-bio.GN cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The way to infer well-supported phylogenetic trees that precisely reflect the
evolutionary process is a challenging task that completely depends on the way
the related core genes have been found. In previous computational biology
studies, many similarity based algorithms, mainly dependent on calculating
sequence alignment matrices, have been proposed to find them. In these kinds of
approaches, a significantly high similarity score between two coding sequences
extracted from a given annotation tool means that one has the same genes. In a
previous article, we presented a quality test approach (QTA) that improves
the core genes quality by combining two annotation tools (namely NCBI, a
partially human-curated database, and DOGMA, an efficient annotation algorithm
for chloroplasts). This method takes the advantages from both sequence
similarity and gene features to guarantee that the core genome contains correct
and well-clustered coding sequences (\emph{i.e.}, genes). We then show in this
article how useful are such well-defined core genes for biomolecular
phylogenetic reconstructions, by investigating various subsets of core genes at
various family or genus levels, leading to subtrees with strong bootstraps that
are finally merged in a well-supported supertree.
| [
{
"created": "Thu, 23 Apr 2015 09:45:07 GMT",
"version": "v1"
}
] | 2015-04-24 | [
[
"AlKindy",
"Bassam",
""
],
[
"Al-Nayyef",
"Huda",
""
],
[
"Guyeux",
"Christophe",
""
],
[
"Couchot",
"Jean-François",
""
],
[
"Salomon",
"Michel",
""
],
[
"Bahi",
"Jacques M.",
""
]
] | The way to infer well-supported phylogenetic trees that precisely reflect the evolutionary process is a challenging task that completely depends on the way the related core genes have been found. In previous computational biology studies, many similarity based algorithms, mainly dependent on calculating sequence alignment matrices, have been proposed to find them. In these kinds of approaches, a significantly high similarity score between two coding sequences extracted from a given annotation tool means that one has the same genes. In a previous article, we presented a quality test approach (QTA) that improves the core genes quality by combining two annotation tools (namely NCBI, a partially human-curated database, and DOGMA, an efficient annotation algorithm for chloroplasts). This method takes the advantages from both sequence similarity and gene features to guarantee that the core genome contains correct and well-clustered coding sequences (\emph{i.e.}, genes). We then show in this article how useful are such well-defined core genes for biomolecular phylogenetic reconstructions, by investigating various subsets of core genes at various family or genus levels, leading to subtrees with strong bootstraps that are finally merged in a well-supported supertree.
1406.7441 | Alkan Kabak\c{c}io\u{g}lu | Nese Aral and Alkan Kabakcioglu | Coherent regulation in yeast cell cycle network | 17 pages, 6 figures, 4 tables. Extensively revised and submitted for
publication | Physical Biology 12.3 (2015), 036002 | 10.1088/1478-3975/12/3/036002 | null | q-bio.QM q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We define a measure of coherent activity for gene regulatory networks, a
property that reflects the unity of purpose between the regulatory agents with
a common target. We propose that such harmonious regulatory action is desirable
under a demand for energy efficiency and may be selected for under evolutionary
pressures. We consider two recent models of the cell-cycle regulatory network
of the budding yeast, Saccharomyces cerevisiae, as a case study and calculate
their degree of coherence. A comparison with random networks of similar size
and composition reveals that the yeast's cell-cycle regulation is wired to
yield an exceptionally high level of coherent regulatory activity. We also
investigate the mean degree of coherence as a function of the network size,
connectivity and the fraction of repressory/activatory interactions.
| [
{
"created": "Sat, 28 Jun 2014 21:21:17 GMT",
"version": "v1"
},
{
"created": "Sun, 14 Dec 2014 09:32:08 GMT",
"version": "v2"
}
] | 2015-07-30 | [
[
"Aral",
"Nese",
""
],
[
"Kabakcioglu",
"Alkan",
""
]
] | We define a measure of coherent activity for gene regulatory networks, a property that reflects the unity of purpose between the regulatory agents with a common target. We propose that such harmonious regulatory action is desirable under a demand for energy efficiency and may be selected for under evolutionary pressures. We consider two recent models of the cell-cycle regulatory network of the budding yeast, Saccharomyces cerevisiae, as a case study and calculate their degree of coherence. A comparison with random networks of similar size and composition reveals that the yeast's cell-cycle regulation is wired to yield an exceptionally high level of coherent regulatory activity. We also investigate the mean degree of coherence as a function of the network size, connectivity and the fraction of repressory/activatory interactions.
2006.16006 | John Vandermeer | John Vandermeer, Zachary Hajian-Forooshani, Nicholas Medina, Ivette
Perfecto | New forms of structure in ecosystems revealed with the Kuramoto model | 18 pages, 5 figures | null | null | null | q-bio.PE nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ecological systems, as is often noted, are complex. Equally notable is the
generalization that complex systems tend to be oscillatory, whether Huygens
simple patterns of pendulum entrainment or the twisted chaotic orbits of Lorenz
convection rolls. The analytics of oscillators may thus provide insight into
the structure of ecological systems. One of the most popular analytical tools
for such study is the Kuramoto model of coupled oscillators. Using a
well-studied system of pests and their enemies in an agroecosystem, we apply
this model as a stylized vision of the dynamics of that real system, to ask
whether its actual natural history is reflected in the dynamics of the
qualitatively instantiated Kuramoto model. Emerging from the model is a series
of synchrony groups generally corresponding to subnetworks of the natural
system, with an overlying chimeric structure, depending on the strength of the
inter-oscillator coupling. We conclude that the Kuramoto model presents a novel
window through which interesting questions about the structure of ecological
systems may emerge.
| [
{
"created": "Mon, 29 Jun 2020 12:46:25 GMT",
"version": "v1"
}
] | 2020-06-30 | [
[
"Vandermeer",
"John",
""
],
[
"Hajian-Forooshani",
"Zachary",
""
],
[
"Medina",
"Nicholas",
""
],
[
"Perfecto",
"Ivette",
""
]
] | Ecological systems, as is often noted, are complex. Equally notable is the generalization that complex systems tend to be oscillatory, whether Huygens simple patterns of pendulum entrainment or the twisted chaotic orbits of Lorenz convection rolls. The analytics of oscillators may thus provide insight into the structure of ecological systems. One of the most popular analytical tools for such study is the Kuramoto model of coupled oscillators. Using a well-studied system of pests and their enemies in an agroecosystem, we apply this model as a stylized vision of the dynamics of that real system, to ask whether its actual natural history is reflected in the dynamics of the qualitatively instantiated Kuramoto model. Emerging from the model is a series of synchrony groups generally corresponding to subnetworks of the natural system, with an overlying chimeric structure, depending on the strength of the inter-oscillator coupling. We conclude that the Kuramoto model presents a novel window through which interesting questions about the structure of ecological systems may emerge. |
1509.01577 | J. C. Phillips | J. C. Phillips | Autoantibody recognition mechanisms of p53 epitopes | 20 pages, 12 figures | null | 10.1016/j.physa.2016.01.021 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is an urgent need for economical blood based, noninvasive molecular
biomarkers to assist in the detection and diagnosis of cancers in a cost
effective manner at an early stage, when curative interventions are still
possible. Serum autoantibodies are attractive biomarkers for early cancer
detection, but their development has been hindered by the punctuated genetic
nature of the ten million known cancer mutations. A recent study of 50,000
patients (Pedersen et al., 2013) showed p53 15mer epitopes are much more
sensitive colon cancer biomarkers than p53, which in turn is a more sensitive
cancer biomarker than any other protein. The function of p53 as a nearly
universal tumor suppressor is well established, because of its strong
immunogenicity in terms of not only antibody recruitment, but also stimulation
of autoantibodies. Here we examine bioinformatic fractal scaling analysis for
identifying sensitive epitopes from the p53 amino acid sequence, and show how
it could be used for early cancer detection (ECD). We trim 15mers to 7mers, and
identify specific 7mers from other species that could be more sensitive to
aggressive human cancers, such as liver cancer.
| [
{
"created": "Fri, 4 Sep 2015 19:54:44 GMT",
"version": "v1"
}
] | 2016-03-23 | [
[
"Phillips",
"J. C.",
""
]
] | There is an urgent need for economical blood based, noninvasive molecular biomarkers to assist in the detection and diagnosis of cancers in a cost effective manner at an early stage, when curative interventions are still possible. Serum autoantibodies are attractive biomarkers for early cancer detection, but their development has been hindered by the punctuated genetic nature of the ten million known cancer mutations. A recent study of 50,000 patients (Pedersen et al., 2013) showed p53 15mer epitopes are much more sensitive colon cancer biomarkers than p53, which in turn is a more sensitive cancer biomarker than any other protein. The function of p53 as a nearly universal tumor suppressor is well established, because of its strong immunogenicity in terms of not only antibody recruitment, but also stimulation of autoantibodies. Here we examine bioinformatic fractal scaling analysis for identifying sensitive epitopes from the p53 amino acid sequence, and show how it could be used for early cancer detection (ECD). We trim 15mers to 7mers, and identify specific 7mers from other species that could be more sensitive to aggressive human cancers, such as liver cancer. |
1807.07669 | \'Eric Merle | \'Eric Merle | Modelling of consciousness and interpretation of quantum mechanics | 117 pages, 13 figures | null | null | null | q-bio.NC quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | I start from the fundamental principles of non-relativistic quantum
mechanics, without probability, and interpret them using the notion of
coexistence: a quantum state can be read, not uniquely, as a coexistence of
other quantum states, which are pairwise orthogonal. In this formalism, I prove
that a conscious observer is necessarily a physical object that can memorize
local events by setting one of its parts in an exactly specified constant
quantum state (hypotheses H1, H2 and H3). Then I define the probability of a
future event as the proportion of initial observers, all identical, who will
actually experience that event. It then becomes possible to establish the usual
results of quantum mechanics. Furthermore, I detail the link between
probabilities and relative frequencies. Additionally, I study the biological
feasibility of this modelling of observer's mind.
The second part of this paper completes the neuronal description of the mind
functions, based on current neuroscientific knowledge. It provides a model that
is compatible with the assumptions of the first part and consistent with our
daily conscious experience. In particular, it develops a model of
self-consciousness based on an explicit use of the random component of neuron
behaviour; according to the first part, that random is in fact the coexistence
of a multiplicity of possibilities. So, when the mind measures the random part
of certain neurons in the brain, he goes himself within each of these
possibilities. The mind has a decision-making component that is active in this
situation, appearing then as the cause of the choice of this possibility among
all the others. This models the self-consciousness which then ensures the unity
of our conscious experience by equating this experience with ``what the ego is
conscious about''. The conclusion details the points that remain to be
developed.
| [
{
"created": "Fri, 20 Jul 2018 00:09:41 GMT",
"version": "v1"
}
] | 2018-07-23 | [
[
"Merle",
"Éric",
""
]
] | I start from the fundamental principles of non-relativistic quantum mechanics, without probability, and interpret them using the notion of coexistence: a quantum state can be read, not uniquely, as a coexistence of other quantum states, which are pairwise orthogonal. In this formalism, I prove that a conscious observer is necessarily a physical object that can memorize local events by setting one of its parts in an exactly specified constant quantum state (hypotheses H1, H2 and H3). Then I define the probability of a future event as the proportion of initial observers, all identical, who will actually experience that event. It then becomes possible to establish the usual results of quantum mechanics. Furthermore, I detail the link between probabilities and relative frequencies. Additionally, I study the biological feasibility of this modelling of observer's mind. The second part of this paper completes the neuronal description of the mind functions, based on current neuroscientific knowledge. It provides a model that is compatible with the assumptions of the first part and consistent with our daily conscious experience. In particular, it develops a model of self-consciousness based on an explicit use of the random component of neuron behaviour; according to the first part, that random is in fact the coexistence of a multiplicity of possibilities. So, when the mind measures the random part of certain neurons in the brain, he goes himself within each of these possibilities. The mind has a decision-making component that is active in this situation, appearing then as the cause of the choice of this possibility among all the others. This models the self-consciousness which then ensures the unity of our conscious experience by equating this experience with ``what the ego is conscious about''. The conclusion details the points that remain to be developed. |
2106.04377 | Aditya Sarkar | Aditya Sarkar, Arnav Bhavsar | Virtual Screening of Pharmaceutical Compounds with hERG Inhibitory
Activity (Cardiotoxicity) using Ensemble Learning | 23 pages, 2 figures, 5 tables | Proceedings of the 14th International Joint Conference on
Biomedical Engineering Systems and Technologies (BIOSTEC 2021) - Volume 2:
BIOIMAGING, | 10.5220/0010267701520159 | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | In silico prediction of cardiotoxicity with high sensitivity and specificity
for potential drug molecules can be of immense value. Hence, building machine
learning classification models, based on some features extracted from the
molecular structure of drugs, which are capable of efficiently predicting
cardiotoxicity is critical. In this paper, we consider the application of
various machine learning approaches, and then propose an ensemble classifier
for the prediction of molecular activity on a Drug Discovery Hackathon (DDH)
(1st reference) dataset. We have used only 2-D descriptors of SMILE notations
for our prediction. Our ensemble classification uses 5 classifiers (2 Random
Forest Classifiers, 2 Support Vector Machines and a Dense Neural Network) and
uses Max-Voting technique and Weighted-Average technique for final decision.
| [
{
"created": "Sat, 5 Jun 2021 16:57:35 GMT",
"version": "v1"
}
] | 2021-06-09 | [
[
"Sarkar",
"Aditya",
""
],
[
"Bhavsar",
"Arnav",
""
]
] | In silico prediction of cardiotoxicity with high sensitivity and specificity for potential drug molecules can be of immense value. Hence, building machine learning classification models, based on some features extracted from the molecular structure of drugs, which are capable of efficiently predicting cardiotoxicity is critical. In this paper, we consider the application of various machine learning approaches, and then propose an ensemble classifier for the prediction of molecular activity on a Drug Discovery Hackathon (DDH) (1st reference) dataset. We have used only 2-D descriptors of SMILE notations for our prediction. Our ensemble classification uses 5 classifiers (2 Random Forest Classifiers, 2 Support Vector Machines and a Dense Neural Network) and uses Max-Voting technique and Weighted-Average technique for final decision. |
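The cardiotoxicity record above describes a max-voting ensemble over random forests and SVMs. A minimal sketch of that scheme using scikit-learn's `VotingClassifier` is below; the synthetic data stands in for the (non-public here) DDH 2-D descriptors, and the paper's dense neural network member is omitted for brevity:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for molecular descriptor features (binary activity labels).
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Max-voting (hard) over a heterogeneous mix of base classifiers,
# mirroring the 2x RF + 2x SVM portion of the paper's ensemble.
ensemble = VotingClassifier(
    estimators=[
        ("rf1", RandomForestClassifier(n_estimators=100, random_state=1)),
        ("rf2", RandomForestClassifier(n_estimators=200, random_state=2)),
        ("svm_rbf", SVC(kernel="rbf", probability=True, random_state=3)),
        ("svm_lin", SVC(kernel="linear", probability=True, random_state=4)),
    ],
    voting="hard",  # "soft" instead averages predicted probabilities,
)                   # i.e. the weighted-average style of decision
ensemble.fit(X_tr, y_tr)
print(ensemble.score(X_te, y_te))
```

Switching `voting="hard"` to `"soft"` (optionally with per-estimator `weights`) corresponds to the weighted-average decision rule the abstract mentions.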
1810.13373 | David Barrett | David G.T. Barrett, Ari S. Morcos and Jakob H. Macke | Analyzing biological and artificial neural networks: challenges with
opportunities for synergy? | null | null | null | null | q-bio.NC cs.AI cs.CV cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks (DNNs) transform stimuli across multiple processing
stages to produce representations that can be used to solve complex tasks, such
as object recognition in images. However, a full understanding of how they
achieve this remains elusive. The complexity of biological neural networks
substantially exceeds the complexity of DNNs, making it even more challenging
to understand the representations that they learn. Thus, both machine learning
and computational neuroscience are faced with a shared challenge: how can we
analyze their representations in order to understand how they solve complex
tasks?
We review how data-analysis concepts and techniques developed by
computational neuroscientists can be useful for analyzing representations in
DNNs, and in turn, how recently developed techniques for analysis of DNNs can
be useful for understanding representations in biological neural networks. We
explore opportunities for synergy between the two fields, such as the use of
DNNs as in-silico model systems for neuroscience, and how this synergy can lead
to new hypotheses about the operating principles of biological neural networks.
| [
{
"created": "Wed, 31 Oct 2018 16:09:44 GMT",
"version": "v1"
}
] | 2018-11-01 | [
[
"Barrett",
"David G. T.",
""
],
[
"Morcos",
"Ari S.",
""
],
[
"Macke",
"Jakob H.",
""
]
] | Deep neural networks (DNNs) transform stimuli across multiple processing stages to produce representations that can be used to solve complex tasks, such as object recognition in images. However, a full understanding of how they achieve this remains elusive. The complexity of biological neural networks substantially exceeds the complexity of DNNs, making it even more challenging to understand the representations that they learn. Thus, both machine learning and computational neuroscience are faced with a shared challenge: how can we analyze their representations in order to understand how they solve complex tasks? We review how data-analysis concepts and techniques developed by computational neuroscientists can be useful for analyzing representations in DNNs, and in turn, how recently developed techniques for analysis of DNNs can be useful for understanding representations in biological neural networks. We explore opportunities for synergy between the two fields, such as the use of DNNs as in-silico model systems for neuroscience, and how this synergy can lead to new hypotheses about the operating principles of biological neural networks. |
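The record above concerns comparing representations across biological and artificial networks. One widely used tool in that literature is linear centered kernel alignment (CKA); a small NumPy sketch follows, with randomly generated "activations" as an illustrative assumption rather than data from the paper:

```python
import numpy as np

def linear_cka(X, Y):
    # Linear Centered Kernel Alignment between two representation matrices
    # of shape (n_samples, n_features). Returns a similarity in [0, 1] that
    # is invariant to orthogonal transforms and isotropic scaling.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    hsic = np.linalg.norm(Y.T @ X, "fro") ** 2
    return hsic / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

rng = np.random.default_rng(0)
acts = rng.normal(size=(500, 64))  # one "layer's" responses to 500 stimuli
Q = np.linalg.qr(rng.normal(size=(64, 64)))[0]
rotated = acts @ Q                 # same representation in a rotated basis
noise = rng.normal(size=(500, 64)) # an unrelated representation

print(linear_cka(acts, rotated))   # ~1.0: invariant to the change of basis
print(linear_cka(acts, noise))     # small for independent representations
```

The basis-invariance is exactly what makes such indices useful for comparing networks (or brain areas) whose individual units need not align one-to-one.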