id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2208.09889 | Enrui Zhang | Enrui Zhang, Bart Spronck, Jay D. Humphrey, George Em Karniadakis | G2{\Phi}net: Relating Genotype and Biomechanical Phenotype of Tissues
with Deep Learning | 41 pages, 9 figures | null | 10.1371/journal.pcbi.1010660 | null | q-bio.TO cs.LG | http://creativecommons.org/licenses/by/4.0/ | Many genetic mutations adversely affect the structure and function of
load-bearing soft tissues, with clinical sequelae often responsible for
disability or death. Parallel advances in genetics and histomechanical
characterization provide significant insight into these conditions, but there
remains a pressing need to integrate such information. We present a novel
genotype-to-biomechanical-phenotype neural network (G2{\Phi}net) for
characterizing and classifying biomechanical properties of soft tissues, which
serve as important functional readouts of tissue health or disease. We
illustrate the utility of our approach by inferring the nonlinear,
genotype-dependent constitutive behavior of the aorta for four mouse models
involving defects or deficiencies in extracellular constituents. We show that
G2{\Phi}net can infer the biomechanical response while simultaneously ascribing
the associated genotype correctly by utilizing limited, noisy, and unstructured
experimental data. More broadly, G2{\Phi}net provides a powerful method and a
paradigm shift for correlating genotype and biomechanical phenotype
quantitatively, promising a better understanding of their interplay in
biological tissues.
| [
{
"created": "Sun, 21 Aug 2022 14:22:37 GMT",
"version": "v1"
}
] | 2023-01-11 | [
[
"Zhang",
"Enrui",
""
],
[
"Spronck",
"Bart",
""
],
[
"Humphrey",
"Jay D.",
""
],
[
"Karniadakis",
"George Em",
""
]
] | Many genetic mutations adversely affect the structure and function of load-bearing soft tissues, with clinical sequelae often responsible for disability or death. Parallel advances in genetics and histomechanical characterization provide significant insight into these conditions, but there remains a pressing need to integrate such information. We present a novel genotype-to-biomechanical-phenotype neural network (G2{\Phi}net) for characterizing and classifying biomechanical properties of soft tissues, which serve as important functional readouts of tissue health or disease. We illustrate the utility of our approach by inferring the nonlinear, genotype-dependent constitutive behavior of the aorta for four mouse models involving defects or deficiencies in extracellular constituents. We show that G2{\Phi}net can infer the biomechanical response while simultaneously ascribing the associated genotype correctly by utilizing limited, noisy, and unstructured experimental data. More broadly, G2{\Phi}net provides a powerful method and a paradigm shift for correlating genotype and biomechanical phenotype quantitatively, promising a better understanding of their interplay in biological tissues. |
q-bio/0501007 | Anna Ochab-Marcinek | Anna Ochab-Marcinek | Pattern formation in a stochastic model of cancer growth | 17 pages, 15 figures | Acta Physica Polonica B 36(6) (2005) 1963 | null | null | q-bio.CB | null | We investigate noise-induced pattern formation in a model of cancer growth
based on Michaelis-Menten kinetics, subject to additive and multiplicative
noises. We analyse stability properties of the system and discuss the role of
diffusion and noises in the system's dynamics. We find that random dichotomous
fluctuations in the immune response intensity, along with Gaussian environmental
noise, lead to the emergence of a spatial pattern of two phases, in which cancer
cells or, respectively, immune cells predominate.
| [
{
"created": "Wed, 5 Jan 2005 21:50:06 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Dec 2005 14:59:55 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Ochab-Marcinek",
"Anna",
""
]
] | We investigate noise-induced pattern formation in a model of cancer growth based on Michaelis-Menten kinetics, subject to additive and multiplicative noises. We analyse stability properties of the system and discuss the role of diffusion and noises in the system's dynamics. We find that random dichotomous fluctuations in the immune response intensity, along with Gaussian environmental noise, lead to the emergence of a spatial pattern of two phases, in which cancer cells or, respectively, immune cells predominate. |
1607.04734 | Guy Bunin | Guy Bunin | Interaction patterns and diversity in assembled ecological communities | null | null | null | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The assembly of ecological communities from a pool of species is central to
ecology, but the effect of this process on properties of community interaction
networks is still largely unknown. Here, we use a systematic analytical
framework to describe how assembly from a species pool gives rise to community
network properties that differ from those of the pool: Compared to the pool,
the community shows a bias towards higher carrying capacities, weaker
competitive interactions and stronger beneficial interactions. Moreover, even
if interactions between all pool species are completely random, community
networks are more structured, with correlations between interspecies
interactions, and between interactions and carrying capacities. Nonetheless, we
show that these properties are not sufficient to explain the coexistence of all
community species, and that it is a simple relation between interactions and
species abundances that is responsible for the diversity within a community.
| [
{
"created": "Sat, 16 Jul 2016 12:28:38 GMT",
"version": "v1"
}
] | 2016-07-19 | [
[
"Bunin",
"Guy",
""
]
] | The assembly of ecological communities from a pool of species is central to ecology, but the effect of this process on properties of community interaction networks is still largely unknown. Here, we use a systematic analytical framework to describe how assembly from a species pool gives rise to community network properties that differ from those of the pool: Compared to the pool, the community shows a bias towards higher carrying capacities, weaker competitive interactions and stronger beneficial interactions. Moreover, even if interactions between all pool species are completely random, community networks are more structured, with correlations between interspecies interactions, and between interactions and carrying capacities. Nonetheless, we show that these properties are not sufficient to explain the coexistence of all community species, and that it is a simple relation between interactions and species abundances that is responsible for the diversity within a community. |
1707.02110 | Irina Mizeva | Irina Mizeva, Elena Zharkikh, Victor Dremin, Evgeny Zherebtsov, Irina
Makovik, Elena Potapova, Andrey Dunaev | Spectral analysis of the blood flow in the foot microvascular bed during
thermal testing in patients with diabetes mellitus | 7 pages, 8 figures | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Timely diagnostics of microcirculatory system abnormalities, which are the
most severe diabetic complications, is a significant problem of modern health
care. Functional abnormalities manifest themselves earlier than structural
ones, and their assessment is the focus of present-day studies. In this study,
laser Doppler flowmetry, a noninvasive technique for cutaneous blood flow
monitoring, was used together with local temperature tests and wavelet
analysis. The study of the blood flow in the microvascular bed of the toes was
carried out in a control group of 40 subjects and in two diabetic groups
differing in the type of diabetes mellitus (17 with DM1 and 23 with DM2).
The temperature tests demonstrated that diabetic patients have impaired
vasodilation in response to local heating. The study of the oscillating
components shows a significant difference in the spectral properties even
under basal conditions. Low-frequency pulsations of the blood flow associated
with endothelial activity are lower in both diabetes groups, as are the ones
connected with cardiac activity. Local thermal tests induce variations both in
the perfusion and in its spectral characteristics, which differ between the
groups under consideration. We assume that the results obtained provide a
deeper understanding of the pathological processes involved in the progression
of microvascular abnormalities due to diabetes mellitus.
| [
{
"created": "Fri, 7 Jul 2017 10:26:49 GMT",
"version": "v1"
}
] | 2017-07-10 | [
[
"Mizeva",
"Irina",
""
],
[
"Zharkikh",
"Elena",
""
],
[
"Dremin",
"Victor",
""
],
[
"Zherebtsov",
"Evgeny",
""
],
[
"Makovik",
"Irina",
""
],
[
"Potapova",
"Elena",
""
],
[
"Dunaev",
"Andrey",
""
]
] | Timely diagnostics of microcirculatory system abnormalities, which are the most severe diabetic complications, is a significant problem of modern health care. Functional abnormalities manifest themselves earlier than structural ones, and their assessment is the focus of present-day studies. In this study, laser Doppler flowmetry, a noninvasive technique for cutaneous blood flow monitoring, was used together with local temperature tests and wavelet analysis. The study of the blood flow in the microvascular bed of the toes was carried out in a control group of 40 subjects and in two diabetic groups differing in the type of diabetes mellitus (17 with DM1 and 23 with DM2). The temperature tests demonstrated that diabetic patients have impaired vasodilation in response to local heating. The study of the oscillating components shows a significant difference in the spectral properties even under basal conditions. Low-frequency pulsations of the blood flow associated with endothelial activity are lower in both diabetes groups, as are the ones connected with cardiac activity. Local thermal tests induce variations both in the perfusion and in its spectral characteristics, which differ between the groups under consideration. We assume that the results obtained provide a deeper understanding of the pathological processes involved in the progression of microvascular abnormalities due to diabetes mellitus. |
2007.02569 | Joao Teixeira | Jo\~ao C Teixeira and Christian D Huber | Dismantling a dogma: the inflated significance of neutral genetic
diversity in conservation genetics | 31 pages, 4 figures, 1 Table, 1 Box | null | 10.1073/pnas.2015096118 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The current rate of species extinction is rapidly approaching unprecedented
highs and life on Earth presently faces a sixth mass extinction event driven by
anthropogenic activity, climate change and ecological collapse. The field of
conservation genetics aims at preserving species by using their levels of
genetic diversity, usually measured as neutral genome-wide diversity, as a
barometer for evaluating population health and extinction risk. A fundamental
assumption is that higher levels of genetic diversity lead to an increase in
fitness and long-term survival of a species. Here, we argue against the
perceived importance of neutral genetic diversity for the conservation of wild
populations and species. We demonstrate that no simple general relationship
exists between neutral genetic diversity and the risk of species extinction.
Instead, a better understanding of the properties of functional genetic
diversity, demographic history, and ecological relationships, is necessary for
developing and implementing effective conservation genetic strategies.
| [
{
"created": "Mon, 6 Jul 2020 07:35:29 GMT",
"version": "v1"
}
] | 2022-10-12 | [
[
"Teixeira",
"João C",
""
],
[
"Huber",
"Christian D",
""
]
] | The current rate of species extinction is rapidly approaching unprecedented highs and life on Earth presently faces a sixth mass extinction event driven by anthropogenic activity, climate change and ecological collapse. The field of conservation genetics aims at preserving species by using their levels of genetic diversity, usually measured as neutral genome-wide diversity, as a barometer for evaluating population health and extinction risk. A fundamental assumption is that higher levels of genetic diversity lead to an increase in fitness and long-term survival of a species. Here, we argue against the perceived importance of neutral genetic diversity for the conservation of wild populations and species. We demonstrate that no simple general relationship exists between neutral genetic diversity and the risk of species extinction. Instead, a better understanding of the properties of functional genetic diversity, demographic history, and ecological relationships, is necessary for developing and implementing effective conservation genetic strategies. |
1203.1287 | Piyush Srivastava | Narendra M. Dixit and Piyush Srivastava and Nisheeth K. Vishnoi | A Finite Population Model of Molecular Evolution: Theory and Computation | null | null | null | null | q-bio.PE cs.DM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is concerned with the evolution of haploid organisms that
reproduce asexually. In a seminal piece of work, Eigen and coauthors proposed
the quasispecies model in an attempt to understand such an evolutionary
process. Their work has impacted antiviral treatment and vaccine design
strategies. Yet, predictions of the quasispecies model are at best viewed as a
guideline, primarily because it assumes an infinite population size, whereas
realistic population sizes can be quite small. In this paper we consider a
population genetics-based model aimed at understanding the evolution of such
organisms with finite population sizes and present a rigorous study of the
convergence and computational issues that arise therein. Our first result is
structural and shows that, at any time during the evolution, as the population
size tends to infinity, the distribution of genomes predicted by our model
converges to that predicted by the quasispecies model. This justifies the
continued use of the quasispecies model to derive guidelines for intervention.
While the stationary state in the quasispecies model is readily obtained, due
to the explosion of the state space in our model, exact computations are
prohibitive. Our second set of results is computational in nature and addresses
this issue. We derive conditions on the parameters of evolution under which our
stochastic model mixes rapidly. Further, for a class of widely used fitness
landscapes we give a fast deterministic algorithm which computes the stationary
distribution of our model. These computational tools are expected to serve as a
framework for the modeling of strategies for the deployment of mutagenic drugs.
| [
{
"created": "Tue, 6 Mar 2012 19:12:24 GMT",
"version": "v1"
}
] | 2012-03-07 | [
[
"Dixit",
"Narendra M.",
""
],
[
"Srivastava",
"Piyush",
""
],
[
"Vishnoi",
"Nisheeth K.",
""
]
] | This paper is concerned with the evolution of haploid organisms that reproduce asexually. In a seminal piece of work, Eigen and coauthors proposed the quasispecies model in an attempt to understand such an evolutionary process. Their work has impacted antiviral treatment and vaccine design strategies. Yet, predictions of the quasispecies model are at best viewed as a guideline, primarily because it assumes an infinite population size, whereas realistic population sizes can be quite small. In this paper we consider a population genetics-based model aimed at understanding the evolution of such organisms with finite population sizes and present a rigorous study of the convergence and computational issues that arise therein. Our first result is structural and shows that, at any time during the evolution, as the population size tends to infinity, the distribution of genomes predicted by our model converges to that predicted by the quasispecies model. This justifies the continued use of the quasispecies model to derive guidelines for intervention. While the stationary state in the quasispecies model is readily obtained, due to the explosion of the state space in our model, exact computations are prohibitive. Our second set of results is computational in nature and addresses this issue. We derive conditions on the parameters of evolution under which our stochastic model mixes rapidly. Further, for a class of widely used fitness landscapes we give a fast deterministic algorithm which computes the stationary distribution of our model. These computational tools are expected to serve as a framework for the modeling of strategies for the deployment of mutagenic drugs. |
2005.14258 | Yujiang Wang | Christoforos A Papasavvas, Gabrielle M Schroeder, Beate Diehl, Gerold
Baier, Peter N Taylor, Yujiang Wang | Band power modulation through intracranial EEG stimulation and its
cross-session consistency | null | Journal of Neural Engineering, 17-054001 (2020) | 10.1088/1741-2552/abbecf | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Direct electrical stimulation of the brain through intracranial
electrodes is currently used to probe the epileptic brain as part of
pre-surgical evaluation, and it is also being considered for therapeutic
treatments through neuromodulation. It is still unknown, however, how
consistent intracranial direct electrical stimulation responses are across
sessions, to allow effective neuromodulation design.
Objective: To investigate the cross-session consistency of the
electrophysiological effect of electrical stimulation delivered through
intracranial EEG.
Methods: We analysed data from 79 epilepsy patients implanted with
intracranial EEG who underwent brain stimulation as part of a memory
experiment. We quantified the effect of stimulation in terms of band power
modulation and compared this effect from session to session. As a reference, we
applied the same measures during baseline periods.
Results: In most sessions, the effect of stimulation on band power could not
be distinguished from baseline fluctuations of band power. Stimulation effect
was also not consistent across sessions; only a third of the session pairs had
a higher consistency than the baseline standards. Cross-session consistency is
mainly associated with the strength of positive stimulation effects, and it
also tends to be higher when the baseline conditions are more similar between
sessions.
Conclusion: These findings can inform our practices for designing
neuromodulation with greater efficacy when using direct electrical brain
stimulation as a therapeutic treatment.
| [
{
"created": "Thu, 28 May 2020 19:51:04 GMT",
"version": "v1"
}
] | 2020-11-18 | [
[
"Papasavvas",
"Christoforos A",
""
],
[
"Schroeder",
"Gabrielle M",
""
],
[
"Diehl",
"Beate",
""
],
[
"Baier",
"Gerold",
""
],
[
"Taylor",
"Peter N",
""
],
[
"Wang",
"Yujiang",
""
]
] | Background: Direct electrical stimulation of the brain through intracranial electrodes is currently used to probe the epileptic brain as part of pre-surgical evaluation, and it is also being considered for therapeutic treatments through neuromodulation. It is still unknown, however, how consistent intracranial direct electrical stimulation responses are across sessions, to allow effective neuromodulation design. Objective: To investigate the cross-session consistency of the electrophysiological effect of electrical stimulation delivered through intracranial EEG. Methods: We analysed data from 79 epilepsy patients implanted with intracranial EEG who underwent brain stimulation as part of a memory experiment. We quantified the effect of stimulation in terms of band power modulation and compared this effect from session to session. As a reference, we applied the same measures during baseline periods. Results: In most sessions, the effect of stimulation on band power could not be distinguished from baseline fluctuations of band power. Stimulation effect was also not consistent across sessions; only a third of the session pairs had a higher consistency than the baseline standards. Cross-session consistency is mainly associated with the strength of positive stimulation effects, and it also tends to be higher when the baseline conditions are more similar between sessions. Conclusion: These findings can inform our practices for designing neuromodulation with greater efficacy when using direct electrical brain stimulation as a therapeutic treatment. |
1309.5614 | Boleslaw Szymanski | Konrad R. Fialkowski | Has our brain grown too big to think effectively? | null | null | null | Report 01-13 | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A variant of the microcephalin gene, MCPH1, was introgressed about 37,000 years
ago into the Homo sapiens genetic pool from an archaic (Homo erectus) lineage
and rose to an exceptionally high frequency of around 70 percent worldwide
today. It is involved in regulating neuroblast proliferation, and its changes
alter the rate of division and/or differentiation of neuroblasts during the
neurogenic phase of embryogenesis, which could alter the size and structure of
the resulting brain.
At the time of introgression, images had already been painted on the walls of
caves and speech had been in use for over 100,000 years, as had abstract
thinking. Like today, reasoning and thinking were the primary faculties of
individuals. Homo erectus either did not possess those faculties or was
markedly inferior to Homo sapiens in them. Its brain was smaller and its cortex
was apparently less convoluted. Thus, the introgressed microcephalin allele
directed neurogenesis back toward the less complicated brain structure typical
of our evolutionary forefathers, slightly decreasing the level of complexity
already achieved by Homo sapiens 37,000 years ago. Despite that, it
proliferated at a rapid pace.
This yields a supposition: 37,000 years ago the brains of Homo sapiens were
too big and too complicated for the kind of thinking needed for the highest
fitness of individuals. Since adaptation cannot, by definition, surpass
selection requirements, the volume and complexity of the human brain did not
originate under selective pressure to improve effective thinking, and they
cannot be explained in terms of such selection.
A proposal to solve this quandary is presented, claiming that Homo sapiens
originated just by chance: endurance running led to the emergence of Homo
sapiens. The human mind and the larynx used for speech are side effects of
more than a million years of endurance running by pre-human hunters.
| [
{
"created": "Sun, 22 Sep 2013 16:15:43 GMT",
"version": "v1"
}
] | 2013-09-24 | [
[
"Fialkowski",
"Konrad R.",
""
]
] | A variant of the microcephalin gene, MCPH1, was introgressed about 37,000 years ago into the Homo sapiens genetic pool from an archaic (Homo erectus) lineage and rose to an exceptionally high frequency of around 70 percent worldwide today. It is involved in regulating neuroblast proliferation, and its changes alter the rate of division and/or differentiation of neuroblasts during the neurogenic phase of embryogenesis, which could alter the size and structure of the resulting brain. At the time of introgression, images had already been painted on the walls of caves and speech had been in use for over 100,000 years, as had abstract thinking. Like today, reasoning and thinking were the primary faculties of individuals. Homo erectus either did not possess those faculties or was markedly inferior to Homo sapiens in them. Its brain was smaller and its cortex was apparently less convoluted. Thus, the introgressed microcephalin allele directed neurogenesis back toward the less complicated brain structure typical of our evolutionary forefathers, slightly decreasing the level of complexity already achieved by Homo sapiens 37,000 years ago. Despite that, it proliferated at a rapid pace. This yields a supposition: 37,000 years ago the brains of Homo sapiens were too big and too complicated for the kind of thinking needed for the highest fitness of individuals. Since adaptation cannot, by definition, surpass selection requirements, the volume and complexity of the human brain did not originate under selective pressure to improve effective thinking, and they cannot be explained in terms of such selection. A proposal to solve this quandary is presented, claiming that Homo sapiens originated just by chance: endurance running led to the emergence of Homo sapiens. The human mind and the larynx used for speech are side effects of more than a million years of endurance running by pre-human hunters. |
2303.09649 | Thomas Athey | Thomas L. Athey, Daniel J. Tward, Ulrich Mueller, Laurent Younes,
Joshua T. Vogelstein, Michael I. Miller | Preserving Derivative Information while Transforming Neuronal Curves | null | null | null | null | q-bio.NC cs.NA math.NA | http://creativecommons.org/licenses/by/4.0/ | The international neuroscience community is building the first comprehensive
atlases of brain cell types to understand how the brain functions from a higher
resolution, and more integrated perspective than ever before. In order to build
these atlases, subsets of neurons (e.g. serotonergic neurons, prefrontal
cortical neurons etc.) are traced in individual brain samples by placing points
along dendrites and axons. Then, the traces are mapped to common coordinate
systems by transforming the positions of their points, which neglects how the
transformation bends the line segments in between. In this work, we apply the
theory of jets to describe how to preserve derivatives of neuron traces up to
any order. We provide a framework to compute possible error introduced by
standard mapping methods, which involves the Jacobian of the mapping
transformation. We show how our first order method improves mapping accuracy in
both simulated and real neuron traces under random diffeomorphisms. Our method
is freely available in our open-source Python package brainlit.
| [
{
"created": "Thu, 16 Mar 2023 21:01:18 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Aug 2023 19:32:24 GMT",
"version": "v2"
}
] | 2023-08-03 | [
[
"Athey",
"Thomas L.",
""
],
[
"Tward",
"Daniel J.",
""
],
[
"Mueller",
"Ulrich",
""
],
[
"Younes",
"Laurent",
""
],
[
"Vogelstein",
"Joshua T.",
""
],
[
"Miller",
"Michael I.",
""
]
] | The international neuroscience community is building the first comprehensive atlases of brain cell types to understand how the brain functions from a higher resolution, and more integrated perspective than ever before. In order to build these atlases, subsets of neurons (e.g. serotonergic neurons, prefrontal cortical neurons etc.) are traced in individual brain samples by placing points along dendrites and axons. Then, the traces are mapped to common coordinate systems by transforming the positions of their points, which neglects how the transformation bends the line segments in between. In this work, we apply the theory of jets to describe how to preserve derivatives of neuron traces up to any order. We provide a framework to compute possible error introduced by standard mapping methods, which involves the Jacobian of the mapping transformation. We show how our first order method improves mapping accuracy in both simulated and real neuron traces under random diffeomorphisms. Our method is freely available in our open-source Python package brainlit. |
q-bio/0401038 | Jesse Bloom | Jesse D Bloom, Claus O Wilke, Frances H Arnold, Christoph Adami | Stability and the Evolvability of Function in a Model Protein | Biophysical Journal in press | Biophysical Journal, 86:2758-2764 (2004) | 10.1016/S0006-3495(04)74329-5 | null | q-bio.BM | null | Functional proteins must fold with some minimal stability to a structure that
can perform a biochemical task. Here we use a simple model to investigate the
relationship between the stability requirement and the capacity of a protein to
evolve the function of binding to a ligand. Although our model contains no
built-in tradeoff between stability and function, proteins evolved function
more efficiently when the stability requirement was relaxed. Proteins with both
high stability and high function evolved more efficiently when the stability
requirement was gradually increased than when there was constant selection for
high stability. These results show that in our model, the evolution of function
is enhanced by allowing proteins to explore sequences corresponding to
marginally stable structures, and that it is easier to improve stability while
maintaining high function than to improve function while maintaining high
stability. Our model also demonstrates that even in the absence of a
fundamental biophysical tradeoff between stability and function, the speed with
which function can evolve is limited by the stability requirement imposed on
the protein.
| [
{
"created": "Wed, 28 Jan 2004 02:28:45 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Bloom",
"Jesse D",
""
],
[
"Wilke",
"Claus O",
""
],
[
"Arnold",
"Frances H",
""
],
[
"Adami",
"Christoph",
""
]
] | Functional proteins must fold with some minimal stability to a structure that can perform a biochemical task. Here we use a simple model to investigate the relationship between the stability requirement and the capacity of a protein to evolve the function of binding to a ligand. Although our model contains no built-in tradeoff between stability and function, proteins evolved function more efficiently when the stability requirement was relaxed. Proteins with both high stability and high function evolved more efficiently when the stability requirement was gradually increased than when there was constant selection for high stability. These results show that in our model, the evolution of function is enhanced by allowing proteins to explore sequences corresponding to marginally stable structures, and that it is easier to improve stability while maintaining high function than to improve function while maintaining high stability. Our model also demonstrates that even in the absence of a fundamental biophysical tradeoff between stability and function, the speed with which function can evolve is limited by the stability requirement imposed on the protein. |
2108.11640 | Paul Kirk | Thomas Thorne and Paul D. W. Kirk and Heather A. Harrington | Topological Approximate Bayesian Computation for Parameter Inference of
an Angiogenesis Model | 7 pages, 2 figures. For associated code see:
https://github.com/tt104/tabc_angio | null | null | null | q-bio.QM stat.ME | http://creativecommons.org/licenses/by/4.0/ | Inferring the parameters of models describing biological systems is an
important problem in the reverse engineering of the mechanisms underlying these
systems. Much work has focused on parameter inference of stochastic and
ordinary differential equation models using Approximate Bayesian Computation
(ABC). While there is some recent work on inference in spatial models, this
remains an open problem. Simultaneously, advances in topological data analysis
(TDA), a field of computational mathematics, have enabled spatial patterns in
data to be characterised. Here we focus on recent work using topological data
analysis to study different regimes of parameter space for a well-studied model
of angiogenesis. We propose a method for combining TDA with ABC to infer
parameters in the Anderson-Chaplain model of angiogenesis. We demonstrate that
this topological approach outperforms ABC approaches that use simpler
statistics based on spatial features of the data. This is a first step towards
a general framework of spatial parameter inference for biological systems, for
which there may be a variety of filtrations, vectorisations, and summary
statistics to be considered. All code used to produce our results is available
as a Snakemake workflow.
| [
{
"created": "Thu, 26 Aug 2021 08:12:31 GMT",
"version": "v1"
},
{
"created": "Mon, 8 Nov 2021 11:27:22 GMT",
"version": "v2"
}
] | 2021-11-09 | [
[
"Thorne",
"Thomas",
""
],
[
"Kirk",
"Paul D. W.",
""
],
[
"Harrington",
"Heather A.",
""
]
] | Inferring the parameters of models describing biological systems is an important problem in the reverse engineering of the mechanisms underlying these systems. Much work has focused on parameter inference of stochastic and ordinary differential equation models using Approximate Bayesian Computation (ABC). While there is some recent work on inference in spatial models, this remains an open problem. Simultaneously, advances in topological data analysis (TDA), a field of computational mathematics, have enabled spatial patterns in data to be characterised. Here we focus on recent work using topological data analysis to study different regimes of parameter space for a well-studied model of angiogenesis. We propose a method for combining TDA with ABC to infer parameters in the Anderson-Chaplain model of angiogenesis. We demonstrate that this topological approach outperforms ABC approaches that use simpler statistics based on spatial features of the data. This is a first step towards a general framework of spatial parameter inference for biological systems, for which there may be a variety of filtrations, vectorisations, and summary statistics to be considered. All code used to produce our results is available as a Snakemake workflow. |
2309.10884 | Casey Barkan | Casey O. Barkan and Shenshen Wang | Migration feedback induces emergent ecotypes and abrupt transitions in
evolving populations | 10 pages, 3 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore the connection between migration patterns and emergent behaviors
of evolving populations in spatially heterogeneous environments. Despite
extensive studies in ecologically and medically important systems, a unifying
framework that clarifies this connection and makes concrete predictions remains
much needed. Using a simple evolutionary model on a network of interconnected
habitats with distinct fitness landscapes, we demonstrate a fundamental
connection between migration feedback, emergent ecotypes, and an unusual form
of discontinuous critical transition. We show how migration feedback generates
spatially non-local niches in which emergent ecotypes can specialize. Rugged
fitness landscapes lead to a complex, yet understandable, phase diagram in
which different ecotypes coexist under different migration patterns. The
discontinuous transitions are distinct from the standard first-order phase
transitions in statistical physics. They arise due to simultaneous
transcritical bifurcations and exhibit a "fine structure" due to symmetry
breaking between intra- and inter-ecotype interactions. We suggest feasible
experiments to test our predictions.
| [
{
"created": "Tue, 19 Sep 2023 19:18:40 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Jan 2024 00:43:49 GMT",
"version": "v2"
}
] | 2024-01-10 | [
[
"Barkan",
"Casey O.",
""
],
[
"Wang",
"Shenshen",
""
]
] | We explore the connection between migration patterns and emergent behaviors of evolving populations in spatially heterogeneous environments. Despite extensive studies in ecologically and medically important systems, a unifying framework that clarifies this connection and makes concrete predictions remains much needed. Using a simple evolutionary model on a network of interconnected habitats with distinct fitness landscapes, we demonstrate a fundamental connection between migration feedback, emergent ecotypes, and an unusual form of discontinuous critical transition. We show how migration feedback generates spatially non-local niches in which emergent ecotypes can specialize. Rugged fitness landscapes lead to a complex, yet understandable, phase diagram in which different ecotypes coexist under different migration patterns. The discontinuous transitions are distinct from the standard first-order phase transitions in statistical physics. They arise due to simultaneous transcritical bifurcations and exhibit a "fine structure" due to symmetry breaking between intra- and inter-ecotype interactions. We suggest feasible experiments to test our predictions. |
2307.15857 | Maria-Veronica Ciocanel | Maria-Veronica Ciocanel, Lee Ding, Lucas Mastromatteo, Sarah
Reichheld, Sarah Cabral, Kimberly Mowry, Bjorn Sandstede | Parameter identifiability in PDE models of fluorescence recovery after
photobleaching | 19 pages, 10 figures | null | 10.1007/s11538-024-01266-4 | null | q-bio.QM math.DS | http://creativecommons.org/licenses/by/4.0/ | Identifying unique parameters for mathematical models describing biological
data can be challenging and often impossible. Parameter identifiability for
partial differential equation models in cell biology is especially difficult
given that many established \textit{in vivo} measurements of protein dynamics
average out the spatial dimensions. Here, we are motivated by recent
experiments on the binding dynamics of the RNA-binding protein PTBP3 in RNP
granules of frog oocytes based on fluorescence recovery after photobleaching
(FRAP) measurements. FRAP is a widely-used experimental technique for probing
protein dynamics in living cells, and is often modeled using simple
reaction-diffusion models of the protein dynamics. We show that current methods
of structural and practical parameter identifiability provide limited insights
into identifiability of kinetic parameters for these PDE models and
spatially-averaged FRAP data. We thus propose a pipeline for assessing
parameter identifiability and for learning parameter combinations based on
re-parametrization and profile likelihood analysis. We show that this method
is able to recover parameter combinations for synthetic FRAP datasets and
investigate its application to real experimental data.
| [
{
"created": "Sat, 29 Jul 2023 01:21:02 GMT",
"version": "v1"
},
{
"created": "Sat, 2 Mar 2024 19:50:19 GMT",
"version": "v2"
}
] | 2024-03-05 | [
[
"Ciocanel",
"Maria-Veronica",
""
],
[
"Ding",
"Lee",
""
],
[
"Mastromatteo",
"Lucas",
""
],
[
"Reichheld",
"Sarah",
""
],
[
"Cabral",
"Sarah",
""
],
[
"Mowry",
"Kimberly",
""
],
[
"Sandstede",
"Bjorn",
""
]
] | Identifying unique parameters for mathematical models describing biological data can be challenging and often impossible. Parameter identifiability for partial differential equation models in cell biology is especially difficult given that many established \textit{in vivo} measurements of protein dynamics average out the spatial dimensions. Here, we are motivated by recent experiments on the binding dynamics of the RNA-binding protein PTBP3 in RNP granules of frog oocytes based on fluorescence recovery after photobleaching (FRAP) measurements. FRAP is a widely-used experimental technique for probing protein dynamics in living cells, and is often modeled using simple reaction-diffusion models of the protein dynamics. We show that current methods of structural and practical parameter identifiability provide limited insights into identifiability of kinetic parameters for these PDE models and spatially-averaged FRAP data. We thus propose a pipeline for assessing parameter identifiability and for learning parameter combinations based on re-parametrization and profile likelihood analysis. We show that this method is able to recover parameter combinations for synthetic FRAP datasets and investigate its application to real experimental data.
q-bio/0608016 | Romulus Breban | Romulus Breban, Raffaele Vardavas and Sally Blower | Inductive Reasoning Games as Influenza Vaccination Models: Mean Field
Analysis | 20 pages, 7 figures | null | 10.1103/PhysRevE.76.031127 | null | q-bio.PE | null | We define and analyze an inductive reasoning game of voluntary yearly
vaccination in order to establish whether or not a population of individuals
acting in their own self-interest would be able to prevent influenza epidemics.
We find that epidemics are rarely prevented. We also find that severe epidemics
may occur without the introduction of pandemic strains. We further address the
situation where market incentives are introduced to help ameliorate
epidemics. Surprisingly, we find that vaccinating families exacerbates
epidemics. However, a public health program requesting prepayment of
vaccinations may significantly ameliorate influenza epidemics.
| [
{
"created": "Tue, 8 Aug 2006 01:23:56 GMT",
"version": "v1"
}
] | 2013-05-29 | [
[
"Breban",
"Romulus",
""
],
[
"Vardavas",
"Raffaele",
""
],
[
"Blower",
"Sally",
""
]
] | We define and analyze an inductive reasoning game of voluntary yearly vaccination in order to establish whether or not a population of individuals acting in their own self-interest would be able to prevent influenza epidemics. We find that epidemics are rarely prevented. We also find that severe epidemics may occur without the introduction of pandemic strains. We further address the situation where market incentives are introduced to help ameliorate epidemics. Surprisingly, we find that vaccinating families exacerbates epidemics. However, a public health program requesting prepayment of vaccinations may significantly ameliorate influenza epidemics.
2408.02650 | Rosalind J Allen | Andrea Iglesias-Ramas, Samuele Pio Lipani and Rosalind J. Allen | Population genetics: an introduction for physicists | null | null | null | null | q-bio.PE physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | Population genetics lies at the heart of evolutionary theory. This topic
forms part of many biological science curricula but is rarely taught to physics
students. Since physicists are becoming increasingly interested in biological
evolution, we aim to provide a brief introduction to population genetics,
written for physicists. We start with two background chapters: chapter 1
provides a brief historical introduction to the topic, while chapter 2 provides
some essential biological background. We begin our main content with chapter 3
which discusses the key concepts behind Darwinian natural selection and
Mendelian inheritance. Chapter 4 covers the basics of how variation is
maintained in populations, while chapter 5 discusses mutation and selection. In
chapter 6 we discuss stochastic effects in population genetics using the
Wright-Fisher model as our example, and finally we offer concluding thoughts
and references to excellent textbooks in chapter 7.
| [
{
"created": "Mon, 5 Aug 2024 17:25:57 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Aug 2024 11:22:32 GMT",
"version": "v2"
},
{
"created": "Thu, 8 Aug 2024 12:09:39 GMT",
"version": "v3"
}
] | 2024-08-09 | [
[
"Iglesias-Ramas",
"Andrea",
""
],
[
"Lipani",
"Samuele Pio",
""
],
[
"Allen",
"Rosalind J.",
""
]
] | Population genetics lies at the heart of evolutionary theory. This topic forms part of many biological science curricula but is rarely taught to physics students. Since physicists are becoming increasingly interested in biological evolution, we aim to provide a brief introduction to population genetics, written for physicists. We start with two background chapters: chapter 1 provides a brief historical introduction to the topic, while chapter 2 provides some essential biological background. We begin our main content with chapter 3 which discusses the key concepts behind Darwinian natural selection and Mendelian inheritance. Chapter 4 covers the basics of how variation is maintained in populations, while chapter 5 discusses mutation and selection. In chapter 6 we discuss stochastic effects in population genetics using the Wright-Fisher model as our example, and finally we offer concluding thoughts and references to excellent textbooks in chapter 7. |
1904.11219 | Dongjie Xie | Dongjie Xie, Meng Pei and Yanjie Su | "Favoring my playmate seems fair": Inhibitory control and theory of mind
in preschoolers' self-disadvantaging behaviors | 24 pages, 3 figures | Journal of Experimental Child Psychology, 2019 | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The purpose of this study was to investigate the relationship between
preschoolers' cognitive abilities and their fairness-related allocation
behaviors in a dilemma of equity-efficiency conflict. Four- to 6-year-olds in
Experiment 1 (N = 99) decided how to allocate 5 reward bells. In the
first-party condition, preschoolers were asked to choose among giving more to
self (self-advantageous inequity), wasting one bell (equity) or giving more to
other (self-disadvantageous inequity); while in the third-party condition, they
chose to allocate the extra bell to one of two equally deserving recipients or
to waste it. Results showed that compared to the pattern of decision in the
third-party condition, preschoolers in the first-party condition were more
likely to give the extra bell to other (self-disadvantaging behaviors), and
age, inhibitory control (IC) and theory of mind (ToM) were positively
correlated with their self-disadvantaging choices, but only IC mediated the
relationship between age and self-disadvantaging behaviors. Experiment 2 (N =
41) showed that IC still predicted preschoolers' self-disadvantaging behaviors
when they could choose only between equity and disadvantageous inequity. These
results suggested that IC played a critical role in the implementation of
self-disadvantaging behaviors when this required the control over selfishness
and envy.
| [
{
"created": "Thu, 25 Apr 2019 08:56:49 GMT",
"version": "v1"
}
] | 2019-04-26 | [
[
"Xie",
"Dongjie",
""
],
[
"Pei",
"Meng",
""
],
[
"Su",
"Yanjie",
""
]
] | The purpose of this study was to investigate the relationship between preschoolers' cognitive abilities and their fairness-related allocation behaviors in a dilemma of equity-efficiency conflict. Four- to 6-year-olds in Experiment 1 (N = 99) decided how to allocate 5 reward bells. In the first-party condition, preschoolers were asked to choose among giving more to self (self-advantageous inequity), wasting one bell (equity) or giving more to other (self-disadvantageous inequity); while in the third-party condition, they chose to allocate the extra bell to one of two equally deserving recipients or to waste it. Results showed that compared to the pattern of decision in the third-party condition, preschoolers in the first-party condition were more likely to give the extra bell to other (self-disadvantaging behaviors), and age, inhibitory control (IC) and theory of mind (ToM) were positively correlated with their self-disadvantaging choices, but only IC mediated the relationship between age and self-disadvantaging behaviors. Experiment 2 (N = 41) showed that IC still predicted preschoolers' self-disadvantaging behaviors when they could choose only between equity and disadvantageous inequity. These results suggested that IC played a critical role in the implementation of self-disadvantaging behaviors when this required the control over selfishness and envy. |
1308.6158 | Harold Fellermann | Shinpei Tanaka, Harold Fellermann and Steen Rasmussen | Sequence selection in an autocatalytic binary polymer model | null | null | 10.1209/0295-5075/107/28004 | null | q-bio.MN nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An autocatalytic pattern matching polymer system is studied as an abstract
model for chemical ecosystem evolution. Highly ordered populations with
particular sequence patterns appear spontaneously out of a vast number of
possible states. The interplay between the selected microscopic sequence
patterns and the macroscopic cooperative structures is examined. Stability,
fluctuations, and evolutionary selection mechanisms are investigated for the
involved self-organizing processes.
| [
{
"created": "Wed, 28 Aug 2013 14:15:18 GMT",
"version": "v1"
}
] | 2015-06-17 | [
[
"Tanaka",
"Shinpei",
""
],
[
"Fellermann",
"Harold",
""
],
[
"Rasmussen",
"Steen",
""
]
] | An autocatalytic pattern matching polymer system is studied as an abstract model for chemical ecosystem evolution. Highly ordered populations with particular sequence patterns appear spontaneously out of a vast number of possible states. The interplay between the selected microscopic sequence patterns and the macroscopic cooperative structures is examined. Stability, fluctuations, and evolutionary selection mechanisms are investigated for the involved self-organizing processes. |
2303.06041 | Jamie Mullineaux | Jamie Mullineaux, Takoua Jendoubi, Baptiste Leurent | A Bayesian spatio-temporal study of meteorological factors affecting the
spread of COVID-19 | 23 pages, 13 figures (inclusive of references and appendix) | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The spread of COVID-19 has brought challenges to health, social and economic
systems around the world. With little to no prior immunity in the global
population, transmission has been driven primarily by human interaction.
However, as with common respiratory illnesses such as the flu, it is suggested
that COVID-19 may become seasonal as immunity grows. Yet the effects of
meteorological conditions on the spread of COVID-19 are poorly understood with
previous studies producing contrasting results, due at least in part to limited
and inconsistent study designs. This study investigates the effect of
meteorological conditions on COVID-19 infections in England using a
spatio-temporal model applied to case counts during the initial England
lockdown. By modelling spatial and temporal effects to account for the nature
of a human transmissible virus the model isolates meteorological effects.
Inference based on 95% highest posterior density intervals shows humidity is
negatively associated with COVID-19 spread. The lack of evidence for other
weather factors affecting COVID-19 transmission shows care should be taken with
respect to seasonality when designing COVID-19 policies and public
communications.
| [
{
"created": "Fri, 10 Mar 2023 16:30:59 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Mar 2023 00:08:24 GMT",
"version": "v2"
},
{
"created": "Wed, 29 Mar 2023 22:42:50 GMT",
"version": "v3"
},
{
"created": "Mon, 7 Aug 2023 21:12:09 GMT",
"version": "v4"
}
] | 2023-08-09 | [
[
"Mullineaux",
"Jamie",
""
],
[
"Jendoubi",
"Takoua",
""
],
[
"Leurent",
"Baptiste",
""
]
] | The spread of COVID-19 has brought challenges to health, social and economic systems around the world. With little to no prior immunity in the global population, transmission has been driven primarily by human interaction. However, as with common respiratory illnesses such as the flu, it is suggested that COVID-19 may become seasonal as immunity grows. Yet the effects of meteorological conditions on the spread of COVID-19 are poorly understood with previous studies producing contrasting results, due at least in part to limited and inconsistent study designs. This study investigates the effect of meteorological conditions on COVID-19 infections in England using a spatio-temporal model applied to case counts during the initial England lockdown. By modelling spatial and temporal effects to account for the nature of a human transmissible virus the model isolates meteorological effects. Inference based on 95% highest posterior density intervals shows humidity is negatively associated with COVID-19 spread. The lack of evidence for other weather factors affecting COVID-19 transmission shows care should be taken with respect to seasonality when designing COVID-19 policies and public communications.
2407.13551 | Eduardo Henrique Colombo | E.H. Colombo, L. Defaveri, C. Anteneodo | Decoding the interaction mediators from landscape-induced spatial
patterns | null | null | null | null | q-bio.PE cond-mat.stat-mech | http://creativecommons.org/licenses/by/4.0/ | Interactions between organisms are mediated by an intricate network of
physico-chemical substances and other organisms. Understanding the dynamics of
mediators and how they shape the population spatial distribution is key to
predict ecological outcomes and how they would be transformed by changes in
environmental constraints. However, due to the inherent complexity involved,
this task is often unfeasible, from the empirical and theoretical perspectives.
In this paper, we make progress in addressing this central issue, creating a
bridge that provides a two-way connection between the features of the ensemble
of underlying mediators and the wrinkles in the population density induced by a
landscape defect (or spatial perturbation). The bridge is constructed by
applying the Feynman-Vernon decomposition, which disentangles the influences
among the focal population and the mediators in a compact way. This is achieved
through an interaction kernel, which effectively incorporates the mediators'
degrees of freedom, explaining the emergence of nonlocal influence between
individuals, an ad hoc assumption in modeling population dynamics. Concrete
examples are worked out and reveal the complexity behind a possible top-down
inference procedure.
| [
{
"created": "Thu, 18 Jul 2024 14:25:18 GMT",
"version": "v1"
}
] | 2024-07-19 | [
[
"Colombo",
"E. H.",
""
],
[
"Defaveri",
"L.",
""
],
[
"Anteneodo",
"C.",
""
]
] | Interactions between organisms are mediated by an intricate network of physico-chemical substances and other organisms. Understanding the dynamics of mediators and how they shape the population spatial distribution is key to predict ecological outcomes and how they would be transformed by changes in environmental constraints. However, due to the inherent complexity involved, this task is often unfeasible, from the empirical and theoretical perspectives. In this paper, we make progress in addressing this central issue, creating a bridge that provides a two-way connection between the features of the ensemble of underlying mediators and the wrinkles in the population density induced by a landscape defect (or spatial perturbation). The bridge is constructed by applying the Feynman-Vernon decomposition, which disentangles the influences among the focal population and the mediators in a compact way. This is achieved through an interaction kernel, which effectively incorporates the mediators' degrees of freedom, explaining the emergence of nonlocal influence between individuals, an ad hoc assumption in modeling population dynamics. Concrete examples are worked out and reveal the complexity behind a possible top-down inference procedure.
1005.1159 | Igor Kulic | Herve Mohrbach, Albert Johner and Igor M. Kulic | Polymorphic Dynamics of Microtubules | null | null | null | null | q-bio.BM cond-mat.mes-hall cond-mat.soft physics.bio-ph q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Starting from the hypothesis that the tubulin dimer is a conformationally
bistable molecule - fluctuating between a curved and a straight configuration
at room temperature - we develop a model for polymorphic dynamics of the
microtubule lattice. We show that tubulin bistability consistently explains
unusual dynamic fluctuations, the apparent length-stiffness relation of grafted
microtubules and the curved-helical appearance of microtubules in general.
Analyzing experimental data we conclude that taxol stabilized microtubules
exist in highly cooperative yet strongly fluctuating helical states. When
clamped by the end the microtubule undergoes an unusual zero energy motion - in
its effect reminiscent of a limited rotational hinge.
| [
{
"created": "Fri, 7 May 2010 09:03:06 GMT",
"version": "v1"
}
] | 2010-05-10 | [
[
"Mohrbach",
"Herve",
""
],
[
"Johner",
"Albert",
""
],
[
"Kulic",
"Igor M.",
""
]
] | Starting from the hypothesis that the tubulin dimer is a conformationally bistable molecule - fluctuating between a curved and a straight configuration at room temperature - we develop a model for polymorphic dynamics of the microtubule lattice. We show that tubulin bistability consistently explains unusual dynamic fluctuations, the apparent length-stiffness relation of grafted microtubules and the curved-helical appearance of microtubules in general. Analyzing experimental data we conclude that taxol stabilized microtubules exist in highly cooperative yet strongly fluctuating helical states. When clamped by the end the microtubule undergoes an unusual zero energy motion - in its effect reminiscent of a limited rotational hinge. |
2309.01670 | Nathan Ng | Nathan Ng, Ji Won Park, Jae Hyeon Lee, Ryan Lewis Kelly, Stephen Ra,
Kyunghyun Cho | Blind Biological Sequence Denoising with Self-Supervised Set Learning | null | null | null | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by/4.0/ | Biological sequence analysis relies on the ability to denoise the imprecise
output of sequencing platforms. We consider a common setting where a short
sequence is read out repeatedly using a high-throughput long-read platform to
generate multiple subreads, or noisy observations of the same sequence.
Denoising these subreads with alignment-based approaches often fails when too
few subreads are available or error rates are too high. In this paper, we
propose a novel method for blindly denoising sets of sequences without directly
observing clean source sequence labels. Our method, Self-Supervised Set
Learning (SSSL), gathers subreads together in an embedding space and estimates
a single set embedding as the midpoint of the subreads in both the latent and
sequence spaces. This set embedding represents the "average" of the subreads
and can be decoded into a prediction of the clean sequence. In experiments on
simulated long-read DNA data, SSSL methods denoise small reads of $\leq 6$
subreads with 17% fewer errors and large reads of $>6$ subreads with 8% fewer
errors compared to the best baseline. On a real dataset of antibody sequences,
SSSL improves over baselines on two self-supervised metrics, with a significant
improvement on difficult small reads that comprise over 60% of the test set. By
accurately denoising these reads, SSSL promises to better realize the potential
of high-throughput DNA sequencing data for downstream scientific applications.
| [
{
"created": "Mon, 4 Sep 2023 15:35:04 GMT",
"version": "v1"
}
] | 2023-09-06 | [
[
"Ng",
"Nathan",
""
],
[
"Park",
"Ji Won",
""
],
[
"Lee",
"Jae Hyeon",
""
],
[
"Kelly",
"Ryan Lewis",
""
],
[
"Ra",
"Stephen",
""
],
[
"Cho",
"Kyunghyun",
""
]
] | Biological sequence analysis relies on the ability to denoise the imprecise output of sequencing platforms. We consider a common setting where a short sequence is read out repeatedly using a high-throughput long-read platform to generate multiple subreads, or noisy observations of the same sequence. Denoising these subreads with alignment-based approaches often fails when too few subreads are available or error rates are too high. In this paper, we propose a novel method for blindly denoising sets of sequences without directly observing clean source sequence labels. Our method, Self-Supervised Set Learning (SSSL), gathers subreads together in an embedding space and estimates a single set embedding as the midpoint of the subreads in both the latent and sequence spaces. This set embedding represents the "average" of the subreads and can be decoded into a prediction of the clean sequence. In experiments on simulated long-read DNA data, SSSL methods denoise small reads of $\leq 6$ subreads with 17% fewer errors and large reads of $>6$ subreads with 8% fewer errors compared to the best baseline. On a real dataset of antibody sequences, SSSL improves over baselines on two self-supervised metrics, with a significant improvement on difficult small reads that comprise over 60% of the test set. By accurately denoising these reads, SSSL promises to better realize the potential of high-throughput DNA sequencing data for downstream scientific applications. |
1902.10234 | Arvind Balijepalli | Son T. Le, Nicholas B. Guros, Robert C. Bruce, Antonio Cardone,
Niranjana D. Amin, Siyuan Zhang, Jeffery B. Klauda, Harish C. Pant, Curt A.
Richter and Arvind Balijepalli | Quantum Capacitance-Limited MoS2 Biosensors Enable Remote Label-Free
Enzyme Measurements | null | null | 10.1039/C9NR03171E | null | q-bio.QM physics.app-ph | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We have demonstrated atomically thin, quantum capacitance-limited,
field-effect transistors (FETs) that enable the detection of pH changes with
~75-fold higher sensitivity (4.4 V/pH) over the Nernst value of 59 mV/pH at
room temperature when used as a biosensor. The transistors, which are
fabricated from a monolayer of MoS2 with a room temperature ionic liquid (RTIL)
in place of a conventional oxide gate dielectric, exhibit very low intrinsic
noise resulting in a pH limit of detection (LOD) of 92x10^-6 at 10 Hz. This
high device performance, which is a function of the structure of our device, is
achieved by remotely connecting the gate to a pH sensing element allowing the
FETs to be reused. Because pH measurements are fundamentally important in
biotechnology, the low limit of detection demonstrated here will benefit
numerous applications ranging from pharmaceutical manufacturing to clinical
diagnostics. As an example, we experimentally quantified the function of the
kinase Cdk5, an enzyme implicated in Alzheimer's disease, at concentrations
that are 5-fold lower than physiological values, and with sufficient
time-resolution to allow the estimation of both steady-state and kinetic
parameters in a single experiment. The high sensitivity, low LOD and fast
turnaround time of the measurements will allow the development of early
diagnostic tools and novel therapeutics to detect and treat neurological
conditions years before currently possible.
| [
{
"created": "Fri, 21 Dec 2018 16:34:53 GMT",
"version": "v1"
}
] | 2019-08-08 | [
[
"Le",
"Son T.",
""
],
[
"Guros",
"Nicholas B.",
""
],
[
"Bruce",
"Robert C.",
""
],
[
"Cardone",
"Antonio",
""
],
[
"Amin",
"Niranjana D.",
""
],
[
"Zhang",
"Siyuan",
""
],
[
"Klauda",
"Jeffery B.",
""
],
[
"Pant",
"Harish C.",
""
],
[
"Richter",
"Curt A.",
""
],
[
"Balijepalli",
"Arvind",
""
]
] | We have demonstrated atomically thin, quantum capacitance-limited, field-effect transistors (FETs) that enable the detection of pH changes with ~75-fold higher sensitivity (4.4 V/pH) over the Nernst value of 59 mV/pH at room temperature when used as a biosensor. The transistors, which are fabricated from a monolayer of MoS2 with a room temperature ionic liquid (RTIL) in place of a conventional oxide gate dielectric, exhibit very low intrinsic noise resulting in a pH limit of detection (LOD) of 92x10^-6 at 10 Hz. This high device performance, which is a function of the structure of our device, is achieved by remotely connecting the gate to a pH sensing element allowing the FETs to be reused. Because pH measurements are fundamentally important in biotechnology, the low limit of detection demonstrated here will benefit numerous applications ranging from pharmaceutical manufacturing to clinical diagnostics. As an example, we experimentally quantified the function of the kinase Cdk5, an enzyme implicated in Alzheimer's disease, at concentrations that are 5-fold lower than physiological values, and with sufficient time-resolution to allow the estimation of both steady-state and kinetic parameters in a single experiment. The high sensitivity, low LOD and fast turnaround time of the measurements will allow the development of early diagnostic tools and novel therapeutics to detect and treat neurological conditions years before currently possible. |
1704.05628 | Jae Kyoung Kim | Jae Kyoung Kim, Grzegorz A. Rempala, Hye-Won Kang | Reduction for stochastic biochemical reaction networks with multiscale
conservations | 27 pages, 5 figures, This pre-print has been accepted for publication
in SIAM Multiscale Modeling & Simulation. The final copyedited version of
this paper will be available at https://www.siam.org/journals/mms.php | null | null | null | q-bio.MN math.PR physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biochemical reaction networks frequently consist of species evolving on
multiple timescales. Stochastic simulations of such networks are often
computationally challenging and therefore various methods have been developed
to obtain sensible stochastic approximations on the timescale of interest. One
of the rigorous and popular approaches is the multiscale approximation method
for continuous time Markov processes. In this approach, by scaling species
abundances and reaction rates, a family of processes parameterized by a scaling
parameter is defined. The limiting process of this family is then used to
approximate the original process. However, we find that such approximations
become inaccurate when combinations of species with disparate abundances either
constitute conservation laws or form virtual slow auxiliary species. To obtain
more accurate approximations in such cases, we propose here an appropriate
modification of the original method.
| [
{
"created": "Wed, 19 Apr 2017 06:49:59 GMT",
"version": "v1"
}
] | 2017-04-20 | [
[
"Kim",
"Jae Kyoung",
""
],
[
"Rempala",
"Grzegorz A.",
""
],
[
"Kang",
"Hye-Won",
""
]
] | Biochemical reaction networks frequently consist of species evolving on multiple timescales. Stochastic simulations of such networks are often computationally challenging and therefore various methods have been developed to obtain sensible stochastic approximations on the timescale of interest. One of the rigorous and popular approaches is the multiscale approximation method for continuous time Markov processes. In this approach, by scaling species abundances and reaction rates, a family of processes parameterized by a scaling parameter is defined. The limiting process of this family is then used to approximate the original process. However, we find that such approximations become inaccurate when combinations of species with disparate abundances either constitute conservation laws or form virtual slow auxiliary species. To obtain more accurate approximations in such cases, we propose here an appropriate modification of the original method. |
2307.15073 | Tim G. J. Rudner | Leo Klarner, Tim G. J. Rudner, Michael Reutlinger, Torsten Schindler,
Garrett M. Morris, Charlotte Deane, Yee Whye Teh | Drug Discovery under Covariate Shift with Domain-Informed Prior
Distributions over Functions | Published in the Proceedings of the 40th International Conference on
Machine Learning (ICML 2023) | null | null | null | q-bio.BM cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accelerating the discovery of novel and more effective therapeutics is an
important pharmaceutical problem in which deep learning is playing an
increasingly significant role. However, real-world drug discovery tasks are
often characterized by a scarcity of labeled data and significant covariate
shift$\unicode{x2013}\unicode{x2013}$a setting that poses a challenge to
standard deep learning methods. In this paper, we present Q-SAVI, a
probabilistic model able to address these challenges by encoding explicit prior
knowledge of the data-generating process into a prior distribution over
functions, presenting researchers with a transparent and probabilistically
principled way to encode data-driven modeling preferences. Building on a novel,
gold-standard bioactivity dataset that facilitates a meaningful comparison of
models in an extrapolative regime, we explore different approaches to induce
data shift and construct a challenging evaluation setup. We then demonstrate
that using Q-SAVI to integrate contextualized prior knowledge of drug-like
chemical space into the modeling process affords substantial gains in
predictive accuracy and calibration, outperforming a broad range of
state-of-the-art self-supervised pre-training and domain adaptation techniques.
| [
{
"created": "Fri, 14 Jul 2023 05:01:10 GMT",
"version": "v1"
}
] | 2023-07-31 | [
[
"Klarner",
"Leo",
""
],
[
"Rudner",
"Tim G. J.",
""
],
[
"Reutlinger",
"Michael",
""
],
[
"Schindler",
"Torsten",
""
],
[
"Morris",
"Garrett M.",
""
],
[
"Deane",
"Charlotte",
""
],
[
"Teh",
"Yee Whye",
""
]
] | Accelerating the discovery of novel and more effective therapeutics is an important pharmaceutical problem in which deep learning is playing an increasingly significant role. However, real-world drug discovery tasks are often characterized by a scarcity of labeled data and significant covariate shift$\unicode{x2013}\unicode{x2013}$a setting that poses a challenge to standard deep learning methods. In this paper, we present Q-SAVI, a probabilistic model able to address these challenges by encoding explicit prior knowledge of the data-generating process into a prior distribution over functions, presenting researchers with a transparent and probabilistically principled way to encode data-driven modeling preferences. Building on a novel, gold-standard bioactivity dataset that facilitates a meaningful comparison of models in an extrapolative regime, we explore different approaches to induce data shift and construct a challenging evaluation setup. We then demonstrate that using Q-SAVI to integrate contextualized prior knowledge of drug-like chemical space into the modeling process affords substantial gains in predictive accuracy and calibration, outperforming a broad range of state-of-the-art self-supervised pre-training and domain adaptation techniques. |
1405.7926 | Grzegorz Nawrocki | Grzegorz Nawrocki and Marek Cieplak | Aqueous Amino Acids and Proteins Near the Surface of Gold in Hydrophilic
and Hydrophobic Force Fields | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We calculate potentials of the mean force for twenty amino acids in the
vicinity of the (111) surface of gold, for several dipeptides, and for some
analogs of the side chains, using molecular dynamics simulations and the
umbrella sampling method. We compare results obtained within three different
force fields: one hydrophobic (for a contaminated surface) and two hydrophilic.
All of these fields lead to good binding with very different specificities and
different patterns in the density and polarization of water. The covalent bond
with the sulfur atom on cysteine is modeled by the Morse potential. We
demonstrate that binding energies of dipeptides are different than the combined
binding energies of their amino-acidic components. For the hydrophobic gold,
adsorption events of a small protein are driven by attraction to the strongest
binding amino acids. This is not so in the hydrophilic cases - a result of
smaller specificities combined with the difficulty for proteins, but not for
single amino acids, to penetrate the first layer of water. The properties of
water near the surface sensitively depend on the force field.
| [
{
"created": "Fri, 30 May 2014 17:43:00 GMT",
"version": "v1"
}
] | 2014-06-02 | [
[
"Nawrocki",
"Grzegorz",
""
],
[
"Cieplak",
"Marek",
""
]
] | We calculate potentials of the mean force for twenty amino acids in the vicinity of the (111) surface of gold, for several dipeptides, and for some analogs of the side chains, using molecular dynamics simulations and the umbrella sampling method. We compare results obtained within three different force fields: one hydrophobic (for a contaminated surface) and two hydrophilic. All of these fields lead to good binding with very different specificities and different patterns in the density and polarization of water. The covalent bond with the sulfur atom on cysteine is modeled by the Morse potential. We demonstrate that binding energies of dipeptides are different than the combined binding energies of their amino-acidic components. For the hydrophobic gold, adsorption events of a small protein are driven by attraction to the strongest binding amino acids. This is not so in the hydrophilic cases - a result of smaller specificities combined with the difficulty for proteins, but not for single amino acids, to penetrate the first layer of water. The properties of water near the surface sensitively depend on the force field. |
2303.06975 | Tim Downing Dr | Tim Downing, Nicos Angelopoulos | A primer on correlation-based dimension reduction methods for
multi-omics analysis | 30+ pages, 3 figures, 7 tables | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The continuing advances of omic technologies mean that it is now more
tangible to measure the numerous features collectively reflecting the molecular
properties of a sample. When multiple omic methods are used, statistical and
computational approaches can exploit these large, connected profiles.
Multi-omics is the integration of different omic data sources from the same
biological sample. In this review, we focus on correlation-based dimension
reduction approaches for single omic datasets, followed by methods for pairs of
omics datasets, before detailing further techniques for three or more omic
datasets. We also briefly detail network methods, which complement
correlation-oriented tools when three or more omic datasets are available. To aid
readers new to this area, these are all linked to relevant R packages that can
implement these procedures. Finally, we discuss scenarios of experimental
design and present road maps that simplify the selection of appropriate
analysis methods. This review will help researchers navigate the emerging
methods for multi-omics, integrate diverse omic datasets appropriately, and
embrace the opportunity of population multi-omics.
| [
{
"created": "Mon, 13 Mar 2023 10:21:27 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Mar 2023 11:01:51 GMT",
"version": "v2"
},
{
"created": "Sat, 27 May 2023 11:44:54 GMT",
"version": "v3"
},
{
"created": "Mon, 5 Jun 2023 19:24:54 GMT",
"version": "v4"
},
{
"created": "Wed, 7 Jun 2023 19:34:11 GMT",
"version": "v5"
},
{
"created": "Thu, 15 Jun 2023 16:52:32 GMT",
"version": "v6"
},
{
"created": "Fri, 11 Aug 2023 16:53:27 GMT",
"version": "v7"
}
] | 2023-08-14 | [
[
"Downing",
"Tim",
""
],
[
"Angelopoulos",
"Nicos",
""
]
] | The continuing advances of omic technologies mean that it is now more tangible to measure the numerous features collectively reflecting the molecular properties of a sample. When multiple omic methods are used, statistical and computational approaches can exploit these large, connected profiles. Multi-omics is the integration of different omic data sources from the same biological sample. In this review, we focus on correlation-based dimension reduction approaches for single omic datasets, followed by methods for pairs of omics datasets, before detailing further techniques for three or more omic datasets. We also briefly detail network methods, which complement correlation-oriented tools when three or more omic datasets are available. To aid readers new to this area, these are all linked to relevant R packages that can implement these procedures. Finally, we discuss scenarios of experimental design and present road maps that simplify the selection of appropriate analysis methods. This review will help researchers navigate the emerging methods for multi-omics, integrate diverse omic datasets appropriately, and embrace the opportunity of population multi-omics. |
q-bio/0501032 | Gernot Klein A. | Gernot A. Klein, Karsten Kruse, Gianaurelio Cuniberti, Frank Juelicher | Filament depolymerization by motor molecules | null | null | 10.1103/PhysRevLett.94.108102 | null | q-bio.SC | null | Motor proteins that specifically interact with the ends of cytoskeletal
filaments can induce filament depolymerization. A phenomenological description
of this process is presented. We show that under certain conditions motors
dynamically accumulate at the filament ends. We compare simulations of two
microscopic models to the phenomenological description. The depolymerization
rate can exhibit maxima and dynamic instabilities as a function of the bulk
motor density for processive depolymerization. We discuss our results in
relation to experimental studies of Kin-13 family motor proteins.
| [
{
"created": "Mon, 24 Jan 2005 16:13:27 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Klein",
"Gernot A.",
""
],
[
"Kruse",
"Karsten",
""
],
[
"Cuniberti",
"Gianaurelio",
""
],
[
"Juelicher",
"Frank",
""
]
] | Motor proteins that specifically interact with the ends of cytoskeletal filaments can induce filament depolymerization. A phenomenological description of this process is presented. We show that under certain conditions motors dynamically accumulate at the filament ends. We compare simulations of two microscopic models to the phenomenological description. The depolymerization rate can exhibit maxima and dynamic instabilities as a function of the bulk motor density for processive depolymerization. We discuss our results in relation to experimental studies of Kin-13 family motor proteins. |
2311.13466 | Ian Dunn | Ian Dunn, David Ryan Koes | Accelerating Inference in Molecular Diffusion Models with Latent
Representations of Protein Structure | This paper appeared as a spotlight paper at the NeurIPS 2023
Generative AI and Biology Workshop | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Diffusion generative models have emerged as a powerful framework for
addressing problems in structural biology and structure-based drug design.
These models operate directly on 3D molecular structures. Due to the
unfavorable scaling of graph neural networks (GNNs) with graph size as well as
the relatively slow inference speeds inherent to diffusion models, many
existing molecular diffusion models rely on coarse-grained representations of
protein structure to make training and inference feasible. However, such
coarse-grained representations discard essential information for modeling
molecular interactions and impair the quality of generated structures. In this
work, we present a novel GNN-based architecture for learning latent
representations of molecular structure. When trained end-to-end with a
diffusion model for de novo ligand design, our model achieves comparable
performance to one with an all-atom protein representation while exhibiting a
3-fold reduction in inference time.
| [
{
"created": "Wed, 22 Nov 2023 15:32:31 GMT",
"version": "v1"
},
{
"created": "Wed, 8 May 2024 21:04:32 GMT",
"version": "v2"
}
] | 2024-05-10 | [
[
"Dunn",
"Ian",
""
],
[
"Koes",
"David Ryan",
""
]
] | Diffusion generative models have emerged as a powerful framework for addressing problems in structural biology and structure-based drug design. These models operate directly on 3D molecular structures. Due to the unfavorable scaling of graph neural networks (GNNs) with graph size as well as the relatively slow inference speeds inherent to diffusion models, many existing molecular diffusion models rely on coarse-grained representations of protein structure to make training and inference feasible. However, such coarse-grained representations discard essential information for modeling molecular interactions and impair the quality of generated structures. In this work, we present a novel GNN-based architecture for learning latent representations of molecular structure. When trained end-to-end with a diffusion model for de novo ligand design, our model achieves comparable performance to one with an all-atom protein representation while exhibiting a 3-fold reduction in inference time. |
1305.4963 | Liao Chen | Liao Y Chen | Does Plasmodium falciparum have an Achilles' heel? | 10 pages, 1 figure | Malaria Chemotherapy, Control, and Elimination 3, 114 (2014). DOI:
10.4172/2090-2778.1000114 | null | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Plasmodium falciparum is the parasite that causes the most severe form of
malaria. Much has now been established about its cellular structures,
its metabolic processes, and even the molecular structures of its intrinsic
membrane proteins responsible for transporting water, nutrient, and waste
molecules across the parasite plasma membrane (PPM). I hypothesize that
Plasmodium falciparum has an Achilles' heel that can be attacked with
erythritol, the well-known sweetener that is classified as generally safe. Most
organisms have in their cell membrane two types of water-channel proteins:
aquaporins to maintain hydro-homeostasis across the membrane and
aquaglyceroporins to uptake glycerols etc. In contrast, P. falciparum has only
one type of such proteins---the multi-functional aquaglyceroporin (PfAQP)
expressed in the PPM---to do both jobs. Moreover, the parasite also uses PfAQP
to excrete its metabolic wastes (ammonia included) produced at a very high rate
in the blood stage. This extremely high efficiency of the bug using one protein
for multiple essential tasks makes the parasite fatally vulnerable. Erythritol
in the blood stream can kill the parasite by clogging up its PfAQP channel that
needs to be open for maintaining hydro-homeostasis and for excreting toxic
wastes across the bug's PPM. In vitro tests are to measure the growth/death
rate of P. falciparum in blood with various erythritol concentrations. In vivo
experiments are to administer groups of infected mice with various doses of
erythritol and monitor the parasite growth levels from blood samples drawn from
each group. Clinical trials can be performed to observe the added effects of
administering erythritol to patients along with the known drugs, because
erythritol was classified as a safe food ingredient.
| [
{
"created": "Tue, 21 May 2013 21:01:28 GMT",
"version": "v1"
}
] | 2014-07-15 | [
[
"Chen",
"Liao Y",
""
]
] | Plasmodium falciparum is the parasite that causes the most severe form of malaria. Much has now been established about its cellular structures, its metabolic processes, and even the molecular structures of its intrinsic membrane proteins responsible for transporting water, nutrient, and waste molecules across the parasite plasma membrane (PPM). I hypothesize that Plasmodium falciparum has an Achilles' heel that can be attacked with erythritol, the well-known sweetener that is classified as generally safe. Most organisms have in their cell membrane two types of water-channel proteins: aquaporins to maintain hydro-homeostasis across the membrane and aquaglyceroporins to uptake glycerols etc. In contrast, P. falciparum has only one type of such proteins---the multi-functional aquaglyceroporin (PfAQP) expressed in the PPM---to do both jobs. Moreover, the parasite also uses PfAQP to excrete its metabolic wastes (ammonia included) produced at a very high rate in the blood stage. This extremely high efficiency of the bug using one protein for multiple essential tasks makes the parasite fatally vulnerable. Erythritol in the blood stream can kill the parasite by clogging up its PfAQP channel that needs to be open for maintaining hydro-homeostasis and for excreting toxic wastes across the bug's PPM. In vitro tests are to measure the growth/death rate of P. falciparum in blood with various erythritol concentrations. In vivo experiments are to administer groups of infected mice with various doses of erythritol and monitor the parasite growth levels from blood samples drawn from each group. Clinical trials can be performed to observe the added effects of administering erythritol to patients along with the known drugs, because erythritol was classified as a safe food ingredient. |
q-bio/0412023 | Wannapong Triampo | Paisan Kanthang, Waipot Ngamsaad, Charin Modchang, Wannapong Triampo,
Narin Nuttawut, I-Ming Tang, Yongwimol Lenbury | The dynamics of the min proteins of Escherichia coli under the constant
external fields | 25 pages, 11 figures | null | null | null | q-bio.SC | null | In E. coli the determination of the middle of the cell and the proper
placement of the septum is essential to the division of the cell. This step
depends on the proteins MinC, MinD, and MinE. Exposure to a constant external
field, e.g., an electric or magnetic field, may cause the bacterial cell
division mechanism to change, resulting in abnormal cytokinesis. To gain
insight into the effects of an external field on this process, we model the
process using a set of deterministic reaction-diffusion equations that
incorporate the influence of an external field, min protein reactions, and
diffusion of all species. Using numerical methods, we have found some
changes in the dynamics of the oscillations of the min proteins from pole to
pole compared with those observed without the external field. The results show some
interesting effects, which are qualitatively in good agreement with some
experimental results.
| [
{
"created": "Mon, 13 Dec 2004 01:57:19 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Kanthang",
"Paisan",
""
],
[
"Ngamsaad",
"Waipot",
""
],
[
"Modchang",
"Charin",
""
],
[
"Triampo",
"Wannapong",
""
],
[
"Nuttawut",
"Narin",
""
],
[
"Tang",
"I-Ming",
""
],
[
"Lenbury",
"Yongwimol",
""
]
] | In E. coli the determination of the middle of the cell and the proper placement of the septum is essential to the division of the cell. This step depends on the proteins MinC, MinD, and MinE. Exposure to a constant external field, e.g., an electric or magnetic field, may cause the bacterial cell division mechanism to change, resulting in abnormal cytokinesis. To gain insight into the effects of an external field on this process, we model the process using a set of deterministic reaction-diffusion equations that incorporate the influence of an external field, min protein reactions, and diffusion of all species. Using numerical methods, we have found some changes in the dynamics of the oscillations of the min proteins from pole to pole compared with those observed without the external field. The results show some interesting effects, which are qualitatively in good agreement with some experimental results. |
2304.14932 | Daniel Schindler | Michel Brueck, Bork A. Berghoff and Daniel Schindler | In silico design, in vitro construction and in vivo application of
synthetic small regulatory RNAs in bacteria | 24 pages, 7 figures | null | 10.1007/978-1-0716-3658-9_27 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Small regulatory RNAs (sRNAs) are short non-coding RNAs in bacteria capable
of post-transcriptional regulation. sRNAs have recently gained attention as
tools in basic and applied sciences for example to fine-tune genetic circuits
or biotechnological processes. Even though sRNAs often have a rather simple and
modular structure, the design of functional synthetic sRNAs is not necessarily
trivial. This protocol outlines how to use computational predictions and
synthetic biology approaches to design, construct and validate synthetic sRNA
functionality for their application in bacteria. The computational tool,
SEEDling, matches the optimal seed region with the user-selected sRNA scaffold
for repression of target mRNAs. The synthetic sRNAs are assembled using Golden
Gate cloning and their functionality is subsequently validated. The protocol
uses the acrA mRNA as an exemplary proof-of-concept target in Escherichia coli.
Since AcrA is part of a multidrug efflux pump, acrA repression can be revealed
by assessing oxacillin susceptibility in a phenotypic screen. However, in case
target repression does not result in a screenable phenotype, an alternative
validation of synthetic sRNA functionality based on a fluorescence reporter is
described.
| [
{
"created": "Fri, 28 Apr 2023 15:43:07 GMT",
"version": "v1"
}
] | 2024-04-18 | [
[
"Brueck",
"Michel",
""
],
[
"Berghoff",
"Bork A.",
""
],
[
"Schindler",
"Daniel",
""
]
] | Small regulatory RNAs (sRNAs) are short non-coding RNAs in bacteria capable of post-transcriptional regulation. sRNAs have recently gained attention as tools in basic and applied sciences for example to fine-tune genetic circuits or biotechnological processes. Even though sRNAs often have a rather simple and modular structure, the design of functional synthetic sRNAs is not necessarily trivial. This protocol outlines how to use computational predictions and synthetic biology approaches to design, construct and validate synthetic sRNA functionality for their application in bacteria. The computational tool, SEEDling, matches the optimal seed region with the user-selected sRNA scaffold for repression of target mRNAs. The synthetic sRNAs are assembled using Golden Gate cloning and their functionality is subsequently validated. The protocol uses the acrA mRNA as an exemplary proof-of-concept target in Escherichia coli. Since AcrA is part of a multidrug efflux pump, acrA repression can be revealed by assessing oxacillin susceptibility in a phenotypic screen. However, in case target repression does not result in a screenable phenotype, an alternative validation of synthetic sRNA functionality based on a fluorescence reporter is described. |
1007.4490 | Tsvi Tlusty | Shalev Itzkovitz, Tsvi Tlusty, Uri Alon | Coding limits on the number of transcription factors | http://www.weizmann.ac.il/complex/tlusty/papers/BMCGenomics2006.pdf
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1590034/
http://www.biomedcentral.com/1471-2164/7/239 | BMC Genomics 2006, 7:239 | 10.1186/1471-2164-7-239 | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transcription factor proteins bind specific DNA sequences to control the
expression of genes. They contain DNA binding domains which belong to several
super-families, each with a specific mechanism of DNA binding. The total number
of transcription factors encoded in a genome increases with the number of genes
in the genome. Here, we examined the number of transcription factors from each
super-family in diverse organisms.
We find that the number of transcription factors from most super-families
appears to be bounded. For example, the number of winged helix factors does not
generally exceed 300, even in very large genomes. The magnitude of the maximal
number of transcription factors from each super-family seems to correlate with
the number of DNA bases effectively recognized by the binding mechanism of that
super-family. Coding theory predicts that such upper bounds on the number of
transcription factors should exist, in order to minimize cross-binding errors
between transcription factors. This theory further predicts that factors with
similar binding sequences should tend to have similar biological effect, so
that errors based on mis-recognition are minimal. We present evidence that
transcription factors with similar binding sequences tend to regulate genes
with similar biological functions, supporting this prediction.
The present study suggests limits on the transcription factor repertoire of
cells, and suggests coding constraints that might apply more generally to the
mapping between binding sites and biological function.
| [
{
"created": "Mon, 26 Jul 2010 15:59:24 GMT",
"version": "v1"
}
] | 2010-07-27 | [
[
"Itzkovitz",
"Shalev",
""
],
[
"Tlusty",
"Tsvi",
""
],
[
"Alon",
"Uri",
""
]
] | Transcription factor proteins bind specific DNA sequences to control the expression of genes. They contain DNA binding domains which belong to several super-families, each with a specific mechanism of DNA binding. The total number of transcription factors encoded in a genome increases with the number of genes in the genome. Here, we examined the number of transcription factors from each super-family in diverse organisms. We find that the number of transcription factors from most super-families appears to be bounded. For example, the number of winged helix factors does not generally exceed 300, even in very large genomes. The magnitude of the maximal number of transcription factors from each super-family seems to correlate with the number of DNA bases effectively recognized by the binding mechanism of that super-family. Coding theory predicts that such upper bounds on the number of transcription factors should exist, in order to minimize cross-binding errors between transcription factors. This theory further predicts that factors with similar binding sequences should tend to have similar biological effect, so that errors based on mis-recognition are minimal. We present evidence that transcription factors with similar binding sequences tend to regulate genes with similar biological functions, supporting this prediction. The present study suggests limits on the transcription factor repertoire of cells, and suggests coding constraints that might apply more generally to the mapping between binding sites and biological function. |
1308.5365 | Shweta Bansal | Eric Mooring and Shweta Bansal | Increasing Herd Immunity with Influenza Revaccination | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/3.0/ | Seasonal influenza is a significant public health concern in the United
States and globally. While influenza vaccines are the single most effective
intervention to reduce influenza morbidity and mortality, there is considerable
debate surrounding the merits and consequences of repeated seasonal
vaccination. Here, we describe a two-season influenza epidemic contact network
model and use it to demonstrate that increasing the level of continuity in
vaccination across seasons reduces the burden on public health. We show that
revaccination reduces the influenza attack rate not only because it reduces the
overall number of susceptible individuals, but also because it better protects
highly-connected individuals, who would otherwise make a disproportionately
large contribution to influenza transmission. Our work thus contributes a
population-level perspective to debates about the merits of repeated influenza
vaccination and advocates for public health policy to incorporate individual
vaccine histories.
| [
{
"created": "Sat, 24 Aug 2013 22:40:07 GMT",
"version": "v1"
},
{
"created": "Tue, 6 Jan 2015 02:13:01 GMT",
"version": "v2"
}
] | 2015-01-07 | [
[
"Mooring",
"Eric",
""
],
[
"Bansal",
"Shweta",
""
]
] | Seasonal influenza is a significant public health concern in the United States and globally. While influenza vaccines are the single most effective intervention to reduce influenza morbidity and mortality, there is considerable debate surrounding the merits and consequences of repeated seasonal vaccination. Here, we describe a two-season influenza epidemic contact network model and use it to demonstrate that increasing the level of continuity in vaccination across seasons reduces the burden on public health. We show that revaccination reduces the influenza attack rate not only because it reduces the overall number of susceptible individuals, but also because it better protects highly-connected individuals, who would otherwise make a disproportionately large contribution to influenza transmission. Our work thus contributes a population-level perspective to debates about the merits of repeated influenza vaccination and advocates for public health policy to incorporate individual vaccine histories. |
2002.03268 | Yen Ting Lin | Steven Sanche, Yen Ting Lin, Chonggang Xu, Ethan Romero-Severson,
Nicolas W. Hengartner, Ruian Ke | The Novel Coronavirus, 2019-nCoV, is Highly Contagious and More
Infectious Than Initially Estimated | 8 pages, 3 figures, 1 Supplementary Text, 6 Supplementary figures, 2
Supplementary tables | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The novel coronavirus (2019-nCoV) is a recently emerged human pathogen that
has spread widely since January 2020. Initially, the basic reproductive number,
R0, was estimated to be 2.2 to 2.7. Here we provide a new estimate of this
quantity. We collected extensive individual case reports and estimated key
epidemiology parameters, including the incubation period. Integrating these
estimates and high-resolution real-time human travel and infection data with
mathematical models, we estimated that the number of infected individuals
during the early epidemic doubled every 2.4 days, and the R0 value is likely to be
between 4.7 and 6.6. We further show that quarantine and contact tracing of
symptomatic individuals alone may not be effective and early, strong control
measures are needed to stop transmission of the virus.
| [
{
"created": "Sun, 9 Feb 2020 02:42:18 GMT",
"version": "v1"
}
] | 2020-02-11 | [
[
"Sanche",
"Steven",
""
],
[
"Lin",
"Yen Ting",
""
],
[
"Xu",
"Chonggang",
""
],
[
"Romero-Severson",
"Ethan",
""
],
[
"Hengartner",
"Nicolas W.",
""
],
[
"Ke",
"Ruian",
""
]
] | The novel coronavirus (2019-nCoV) is a recently emerged human pathogen that has spread widely since January 2020. Initially, the basic reproductive number, R0, was estimated to be 2.2 to 2.7. Here we provide a new estimate of this quantity. We collected extensive individual case reports and estimated key epidemiology parameters, including the incubation period. Integrating these estimates and high-resolution real-time human travel and infection data with mathematical models, we estimated that the number of infected individuals during the early epidemic doubled every 2.4 days, and the R0 value is likely to be between 4.7 and 6.6. We further show that quarantine and contact tracing of symptomatic individuals alone may not be effective and early, strong control measures are needed to stop transmission of the virus. |
2105.08512 | Laura Tupper | Laura L. Tupper and Charles R. Keese and David S. Matteson | Classifying Contaminated Cell Cultures using Time Series Features | 30 pages, 7 figures | null | null | null | q-bio.QM stat.AP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We examine the use of time series data, derived from Electric Cell-substrate
Impedance Sensing (ECIS), to differentiate between standard mammalian cell
cultures and those infected with a mycoplasma organism. With the goal of
interpretable results, we perform low-dimensional feature-based classification,
extracting application-relevant features from the ECIS time courses. We can
achieve very high classification accuracy using only two features, which depend
on the cell line under examination. Initial results also show the existence of
experimental variation between plates and suggest types of features that may
prove more robust to such variation. Our paper is the first to perform a broad
examination of ECIS time course features in the context of detecting
contamination; to combine different types of features to achieve classification
accuracy while preserving interpretability; and to describe and suggest
possibilities for ameliorating plate-to-plate variation.
| [
{
"created": "Sat, 15 May 2021 01:51:29 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Feb 2022 14:24:25 GMT",
"version": "v2"
}
] | 2022-02-23 | [
[
"Tupper",
"Laura L.",
""
],
[
"Keese",
"Charles R.",
""
],
[
"Matteson",
"David S.",
""
]
] | We examine the use of time series data, derived from Electric Cell-substrate Impedance Sensing (ECIS), to differentiate between standard mammalian cell cultures and those infected with a mycoplasma organism. With the goal of interpretable results, we perform low-dimensional feature-based classification, extracting application-relevant features from the ECIS time courses. We can achieve very high classification accuracy using only two features, which depend on the cell line under examination. Initial results also show the existence of experimental variation between plates and suggest types of features that may prove more robust to such variation. Our paper is the first to perform a broad examination of ECIS time course features in the context of detecting contamination; to combine different types of features to achieve classification accuracy while preserving interpretability; and to describe and suggest possibilities for ameliorating plate-to-plate variation. |
1504.05261 | Takahiro Wada | Takahiro Wada, Norimasa Kamiji and Shunichi Doi | A Mathematical Model of Motion Sickness in 6DOF Motion and Its
Application to Vehicle Passengers | in International Digital Human Modeling Symposium, 2013 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A mathematical model of motion sickness incidence (MSI) is derived by
integrating neurophysiological knowledge of the vestibular system to predict
the severity of motion sickness of humans. Bos et al. proposed a successful
mathematical model of motion sickness based on the neurophysiological mechanism
of the subjective vertical conflict (SVC) theory. We expand this model to
6-DOF motion, including head rotation, by introducing the otolith-canal
interaction. Then the model is applied to an analysis of passengers' comfort.
It is known that the driver is less susceptible to motion sickness than are the
passengers. In addition, it is known that the driver tilts his/her head toward
the curve direction when curve driving, whereas the passengers' head movement
is likely to occur in the opposite direction. Thus, the effect of the head tilt
strategy on motion sickness was investigated by the proposed mathematical
model. The head movements of drivers and passengers were measured in slalom
driving. Then, the MSI of the drivers and that of the passengers predicted by
the proposed model were compared. The results revealed that the head movement
toward the centripetal direction has a significant effect in reducing the MSI
in the sense of SVC theory.
| [
{
"created": "Mon, 20 Apr 2015 23:42:20 GMT",
"version": "v1"
}
] | 2015-04-22 | [
[
"Wada",
"Takahiro",
""
],
[
"Kamiji",
"Norimasa",
""
],
[
"Doi",
"Shunichi",
""
]
] | A mathematical model of motion sickness incidence (MSI) is derived by integrating neurophysiological knowledge of the vestibular system to predict the severity of motion sickness of humans. Bos et al. proposed a successful mathematical model of motion sickness based on the neurophysiological mechanism of the subjective vertical conflict (SVC) theory. We expand this model to 6-DOF motion, including head rotation, by introducing the otolith-canal interaction. Then the model is applied to an analysis of passengers' comfort. It is known that the driver is less susceptible to motion sickness than are the passengers. In addition, it is known that the driver tilts his/her head toward the curve direction when curve driving, whereas the passengers' head movement is likely to occur in the opposite direction. Thus, the effect of the head tilt strategy on motion sickness was investigated by the proposed mathematical model. The head movements of drivers and passengers were measured in slalom driving. Then, the MSI of the drivers and that of the passengers predicted by the proposed model were compared. The results revealed that the head movement toward the centripetal direction has a significant effect in reducing the MSI in the sense of SVC theory. |
1308.1865 | Wei Zhang | Eric R. Gamazon, Hae-Kyung Im, Shiwei Duan, Yves A. Lussier, Nancy J.
Cox, M. Eileen Dolan, Wei Zhang | ExprTarget: An Integrative Approach to Predicting Human MicroRNA Targets | null | Gamazon ER, Im H-K, Duan S, Lussier YA, Cox NJ, Dolan ME, Zhang W.
ExprTarget: An integrative approach to predicting human microRNA targets.
PLoS ONE. 2010; 5(10): e13534 | null | null | q-bio.QM q-bio.GN | http://creativecommons.org/licenses/by/3.0/ | We developed an online database, ExprTargetDB, of human miRNA targets
predicted by an approach that integrates gene expression profiling into a
broader framework involving important features of miRNA target site
predictions.
| [
{
"created": "Thu, 8 Aug 2013 14:53:30 GMT",
"version": "v1"
}
] | 2013-08-09 | [
[
"Gamazon",
"Eric R.",
""
],
[
"Im",
"Hae-Kyung",
""
],
[
"Duan",
"Shiwei",
""
],
[
"Lussier",
"Yves A.",
""
],
[
"Cox",
"Nancy J.",
""
],
[
"Dolan",
"M. Eileen",
""
],
[
"Zhang",
"Wei",
""
]
] | We developed an online database, ExprTargetDB, of human miRNA targets predicted by an approach that integrates gene expression profiling into a broader framework involving important features of miRNA target site predictions. |
2301.09566 | Klaus Lehnertz | Klaus Lehnertz | Ordinal methods for a characterization of evolving functional brain
networks | 8 pages, 2 figures | null | 10.1063/5.0136181 | null | q-bio.NC nlin.CD | http://creativecommons.org/licenses/by/4.0/ | Ordinal time series analysis is based on the idea to map time series to
ordinal patterns, i.e., order relations between the values of a time series and
not the values themselves, as introduced in 2002 by C. Bandt and B. Pompe.
Despite a resulting loss of information, this approach captures meaningful
information about the temporal structure of the underlying system dynamics as
well as about properties of interactions between coupled systems. This -
together with its conceptual simplicity and robustness against measurement
noise - makes ordinal time series analysis well suited to improve
characterization of the still poorly understood spatial-temporal dynamics of
the human brain. This minireview briefly summarizes the state-of-the-art of
uni- and bivariate ordinal time-series-analysis techniques together with
applications in the neurosciences. It will highlight current limitations to
stimulate further developments which would be necessary to advance
characterization of evolving functional brain networks.
| [
{
"created": "Fri, 13 Jan 2023 15:26:13 GMT",
"version": "v1"
}
] | 2023-02-03 | [
[
"Lehnertz",
"Klaus",
""
]
] | Ordinal time series analysis is based on the idea to map time series to ordinal patterns, i.e., order relations between the values of a time series and not the values themselves, as introduced in 2002 by C. Bandt and B. Pompe. Despite a resulting loss of information, this approach captures meaningful information about the temporal structure of the underlying system dynamics as well as about properties of interactions between coupled systems. This - together with its conceptual simplicity and robustness against measurement noise - makes ordinal time series analysis well suited to improve characterization of the still poorly understood spatial-temporal dynamics of the human brain. This minireview briefly summarizes the state-of-the-art of uni- and bivariate ordinal time-series-analysis techniques together with applications in the neurosciences. It will highlight current limitations to stimulate further developments which would be necessary to advance characterization of evolving functional brain networks. |
1708.00353 | Xiaobin Guan | Xiaobin Guan, Huanfeng Shen, Wenxia Gan, Gang Yang, Lunche Wang,
Xinghua Li and Liangpei Zhang | A 33-year NPP monitoring study in southwest China by the fusion of
multi-source remote sensing and station data | 20 pages, 11 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge of regional net primary productivity (NPP) is important for the
systematic understanding of the global carbon cycle. In this study,
multi-source data were employed to conduct a 33-year regional NPP study in
southwest China, at a 1-km scale. A multi-sensor fusion framework was applied
to obtain a new normalized difference vegetation index (NDVI) time series from
1982 to 2014, combining the respective advantages of the different remote
sensing datasets. As another key parameter for NPP modeling, the total solar
radiation was calculated by the improved Yang hybrid model (YHM), using
meteorological station data. The verification described in this paper proved
the feasibility of all the applied data processes, and a greatly improved
accuracy was obtained for the NPP calculated with the final processed NDVI. The
spatio-temporal analysis results indicated that 68.07% of the study area showed
an increasing NPP trend over the past three decades. Significant heterogeneity
was found in the correlation between NPP and precipitation at a monthly scale,
specifically, the negative correlation in the growing season and the positive
correlation in the dry season. The lagged positive correlation in the growing
season and no lag in the dry season indicated the important impact of
precipitation on NPP.
| [
{
"created": "Tue, 1 Aug 2017 14:23:34 GMT",
"version": "v1"
}
] | 2017-08-02 | [
[
"Guan",
"Xiaobin",
""
],
[
"Shen",
"Huanfeng",
""
],
[
"Gan",
"Wenxia",
""
],
[
"Yang",
"Gang",
""
],
[
"Wang",
"Lunche",
""
],
[
"Li",
"Xinghua",
""
],
[
"Zhang",
"Liangpei",
""
]
] | Knowledge of regional net primary productivity (NPP) is important for the systematic understanding of the global carbon cycle. In this study, multi-source data were employed to conduct a 33-year regional NPP study in southwest China, at a 1-km scale. A multi-sensor fusion framework was applied to obtain a new normalized difference vegetation index (NDVI) time series from 1982 to 2014, combining the respective advantages of the different remote sensing datasets. As another key parameter for NPP modeling, the total solar radiation was calculated by the improved Yang hybrid model (YHM), using meteorological station data. The verification described in this paper proved the feasibility of all the applied data processes, and a greatly improved accuracy was obtained for the NPP calculated with the final processed NDVI. The spatio-temporal analysis results indicated that 68.07% of the study area showed an increasing NPP trend over the past three decades. Significant heterogeneity was found in the correlation between NPP and precipitation at a monthly scale, specifically, the negative correlation in the growing season and the positive correlation in the dry season. The lagged positive correlation in the growing season and no lag in the dry season indicated the important impact of precipitation on NPP. |
0906.0114 | Deepak Chandran | Deepak Chandran and Herbert M. Sauro | An Optimization Algorithm for Finding Parameters for Bistability | 5 pages, 4 figures | null | null | null | q-bio.MN q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Motivation: Many biochemical pathways are known, but the numerous parameters
required to correctly explore the dynamics of the pathways are not known. For
this reason, algorithms that can make inferences by looking at the topology of
a network are desirable. In this work, we are particularly interested in the
question of whether a given pathway can potentially harbor multiple stable
steady states. In other words, the challenge is to find the set of parameters
such that the dynamical system defined by a set of ordinary differential
equations will contain multiple stable steady states. Being able to find
parameters that cause a network to be bistable may also be beneficial for
engineering synthetic bistable systems where the engineer needs to know a
working set of parameters.
Result: We have developed an algorithm that optimizes the parameters of a
dynamical system so that the system will contain at least one saddle or
unstable point. The algorithm then looks at trajectories around this saddle or
unstable point to see whether the different trajectories converge to different
stable points. The algorithm returns the parameters that cause the system to
exhibit multiple stable points. Since this is an optimization algorithm, it is
not guaranteed to find a solution. Repeated runs are often required to find a
solution for systems where only a narrow set of parameters exhibit bistability.
Availability: The C code for the algorithm is available at
http://tinkercell.googlecode.com
| [
{
"created": "Sat, 30 May 2009 21:33:08 GMT",
"version": "v1"
},
{
"created": "Thu, 23 Jul 2009 15:59:43 GMT",
"version": "v2"
}
] | 2009-07-23 | [
[
"Chandran",
"Deepak",
""
],
[
"Sauro",
"Herbert M.",
""
]
] | Motivation: Many biochemical pathways are known, but the numerous parameters required to correctly explore the dynamics of the pathways are not known. For this reason, algorithms that can make inferences by looking at the topology of a network are desirable. In this work, we are particularly interested in the question of whether a given pathway can potentially harbor multiple stable steady states. In other words, the challenge is to find the set of parameters such that the dynamical system defined by a set of ordinary differential equations will contain multiple stable steady states. Being able to find parameters that cause a network to be bistable may also be beneficial for engineering synthetic bistable systems where the engineer needs to know a working set of parameters. Result: We have developed an algorithm that optimizes the parameters of a dynamical system so that the system will contain at least one saddle or unstable point. The algorithm then looks at trajectories around this saddle or unstable point to see whether the different trajectories converge to different stable points. The algorithm returns the parameters that cause the system to exhibit multiple stable points. Since this is an optimization algorithm, it is not guaranteed to find a solution. Repeated runs are often required to find a solution for systems where only a narrow set of parameters exhibit bistability. Availability: The C code for the algorithm is available at http://tinkercell.googlecode.com |
1208.0986 | Stephen Eglen | Stephen J. Eglen and James C. T. Wong | Spatial constraints underlying the retinal mosaics of two types of
horizontal cells in cat and macaque | null | Visual Neuroscience (2008) 25:209--214 | 10.1017/S0952523808080176 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most types of retinal neurons are spatially positioned in non-random
patterns, termed retinal mosaics. Several developmental mechanisms are thought
to be important in the formation of these mosaics. Most evidence to date
suggests that homotypic constraints within a type of neuron are dominant, and
that heterotypic interactions between different types of neuron are rare. In an
analysis of macaque H1 and H2 horizontal cell mosaics, W\"assle et al. (2000)
suggested that the high regularity index of the combined H1 and H2 mosaic might
be caused by heterotypic interactions during development. Here we use computer
modelling to suggest that the high regularity index of the combined H1 and H2
mosaic is a by-product of the basic constraint that two neurons cannot occupy
the same space. The spatial arrangement of type A and type B horizontal cells
in cat retina also follow this same principle.
| [
{
"created": "Sun, 5 Aug 2012 07:17:40 GMT",
"version": "v1"
}
] | 2012-08-07 | [
[
"Eglen",
"Stephen J.",
""
],
[
"Wong",
"James C. T.",
""
]
] | Most types of retinal neurons are spatially positioned in non-random patterns, termed retinal mosaics. Several developmental mechanisms are thought to be important in the formation of these mosaics. Most evidence to date suggests that homotypic constraints within a type of neuron are dominant, and that heterotypic interactions between different types of neuron are rare. In an analysis of macaque H1 and H2 horizontal cell mosaics, W\"assle et al. (2000) suggested that the high regularity index of the combined H1 and H2 mosaic might be caused by heterotypic interactions during development. Here we use computer modelling to suggest that the high regularity index of the combined H1 and H2 mosaic is a by-product of the basic constraint that two neurons cannot occupy the same space. The spatial arrangement of type A and type B horizontal cells in cat retina also follow this same principle. |
1408.4815 | Chitra Nayak R | Chitra R. Nayak, Aidan I. Brown, and Andrew D. Rutenberg | Protein translocation without specific quality control in a
computational model of the Tat system | 20 pages, Accepted for publication in Physical Biology - This is not
a copy edited version of the manuscript | null | 10.1088/1478-3975/11/5/056005 | null | q-bio.SC physics.bio-ph q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The twin-arginine translocation (Tat) system transports folded proteins of
various sizes across both bacterial and plant thylakoid membranes. The
membrane-associated TatA protein is an essential component of the Tat
translocon, and a broad distribution of different sized TatA-clusters is
observed in bacterial membranes. We assume that the size dynamics of TatA
clusters are affected by substrate binding, unbinding, and translocation to
associated TatBC clusters, where clusters with bound translocation substrates
favour growth and those without associated substrates favour shrinkage. With a
stochastic model of substrate binding and cluster dynamics, we numerically
determine the TatA cluster size distribution. We include a proportion of
targeted but non-translocatable (NT) substrates, with the simplifying
hypothesis that the substrate translocatability does not directly affect
cluster dynamical rate constants or substrate binding or unbinding rates. This
amounts to a translocation model without specific quality control.
Nevertheless, NT substrates will remain associated with TatA clusters until
unbound and so will affect cluster sizes and translocation rates. We find that
the number of larger TatA clusters depends on the NT fraction $f$. The
translocation rate can be optimized by tuning the rate of spontaneous substrate
unbinding, $\Gamma_U$. We present an analytically solvable three-state model of
substrate translocation without cluster size dynamics that follows our computed
translocation rates, and that is consistent with {\em in vitro}
Tat-translocation data in the presence of NT substrates.
| [
{
"created": "Wed, 20 Aug 2014 20:33:56 GMT",
"version": "v1"
}
] | 2015-06-22 | [
[
"Nayak",
"Chitra R.",
""
],
[
"Brown",
"Aidan I.",
""
],
[
"Rutenberg",
"Andrew D.",
""
]
] | The twin-arginine translocation (Tat) system transports folded proteins of various sizes across both bacterial and plant thylakoid membranes. The membrane-associated TatA protein is an essential component of the Tat translocon, and a broad distribution of different sized TatA-clusters is observed in bacterial membranes. We assume that the size dynamics of TatA clusters are affected by substrate binding, unbinding, and translocation to associated TatBC clusters, where clusters with bound translocation substrates favour growth and those without associated substrates favour shrinkage. With a stochastic model of substrate binding and cluster dynamics, we numerically determine the TatA cluster size distribution. We include a proportion of targeted but non-translocatable (NT) substrates, with the simplifying hypothesis that the substrate translocatability does not directly affect cluster dynamical rate constants or substrate binding or unbinding rates. This amounts to a translocation model without specific quality control. Nevertheless, NT substrates will remain associated with TatA clusters until unbound and so will affect cluster sizes and translocation rates. We find that the number of larger TatA clusters depends on the NT fraction $f$. The translocation rate can be optimized by tuning the rate of spontaneous substrate unbinding, $\Gamma_U$. We present an analytically solvable three-state model of substrate translocation without cluster size dynamics that follows our computed translocation rates, and that is consistent with {\em in vitro} Tat-translocation data in the presence of NT substrates. |
1711.08145 | Mike Steel Prof. | Anica Hoppe, Sonja T\"urpitz, Mike Steel | Species notions that combine phylogenetic trees and phenotypic
partitions | 19 pages, 5 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recent paper (Manceau and Lambert, 2016) developed a novel approach for
describing two well-defined notions of 'species' based on a phylogenetic tree
and a phenotypic partition. In this paper, we explore some further
combinatorial properties of this approach and describe an extension that allows
an arbitrary number of phenotypic partitions to be combined with a phylogenetic
tree for these two species notions.
| [
{
"created": "Wed, 22 Nov 2017 06:17:21 GMT",
"version": "v1"
}
] | 2017-11-23 | [
[
"Hoppe",
"Anica",
""
],
[
"Türpitz",
"Sonja",
""
],
[
"Steel",
"Mike",
""
]
] | A recent paper (Manceau and Lambert, 2016) developed a novel approach for describing two well-defined notions of 'species' based on a phylogenetic tree and a phenotypic partition. In this paper, we explore some further combinatorial properties of this approach and describe an extension that allows an arbitrary number of phenotypic partitions to be combined with a phylogenetic tree for these two species notions. |
2008.08875 | Paul Cabacungan | Vanessa Marie V. Calabia, Ma. Lucila M. Perez, Gregory L. Tangonan,
Paul M. Cabacungan, Ivan B. Culaba, Jeremy E. De Guzman | Bilirubin lowering effect and safety of a prototype low cost blue light
emitting diode (LED) phototherapy device in the treatment of indirect
hyperbilirubinemia among healthy term infants in a tertiary government
hospital: a pilot study | 38 pages, 6 figures, submitted to Philippines Pediatric Society | null | null | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: This pilot study was done to evaluate the capability of a
prototype low cost blue light emitting diode (LED) phototherapy device in
lowering bilirubin levels among healthy term infants diagnosed with indirect
hyperbilirubinemia.
Methods: Experimental study on term infants diagnosed with indirect
hyperbilirubinemia in Ospital ng Makati from May 2016 to November 2016 who
underwent phototherapy using the low cost blue LED phototherapy prototype.
Results: After 24 hours of phototherapy under the prototype LED phototherapy
unit, 16% of the total patients completed treatment as they were already
classified in the low risk zone, and another 36% of patients completed
treatment after 48 hours. The total bilirubin significantly decreased from
baseline bilirubin levels after 24 hours by 16.5% (p = 0.0001). The mean
percentage reduction in bilirubin after 48 hours, 29.9%, was also
significant. The proportion of subjects in the high risk zone during baseline
to 24th hour went down significantly from 80% to 28% (p = 0.0003), while
comparing baseline to 48th hour, the percentage of high risk zone went down
from 80% to 9.5% (p = 0.0001). No subjects were reported to have rebound
hyperbilirubinemia after discontinuation of phototherapy treatment under the
LED prototype. No patient experienced any complication while on phototherapy
treatment.
Conclusion: The prototype low cost blue light emitting diode (LED)
phototherapy was able to lower total serum bilirubin among healthy term infants
with indirect hyperbilirubinemia and was safe to use.
| [
{
"created": "Thu, 20 Aug 2020 10:30:49 GMT",
"version": "v1"
}
] | 2020-08-21 | [
[
"Calabia",
"Vanessa Marie V.",
""
],
[
"Perez",
"Ma. Lucila M.",
""
],
[
"Tangonan",
"Gregory L.",
""
],
[
"Cabacungan",
"Paul M.",
""
],
[
"Culaba",
"Ivan B.",
""
],
[
"De Guzman",
"Jeremy E.",
""
]
] | Objective: This pilot study was done to evaluate the capability of a prototype low cost blue light emitting diode (LED) phototherapy device in lowering bilirubin levels among healthy term infants diagnosed with indirect hyperbilirubinemia. Methods: Experimental study on term infants diagnosed with indirect hyperbilirubinemia in Ospital ng Makati from May 2016 to November 2016 who underwent phototherapy using the low cost blue LED phototherapy prototype. Results: After 24 hours of phototherapy under the prototype LED phototherapy unit, 16% of the total patients completed treatment as they were already classified in the low risk zone, and another 36% of patients completed treatment after 48 hours. The total bilirubin significantly decreased from baseline bilirubin levels after 24 hours by 16.5% (p = 0.0001). The mean percentage reduction in bilirubin after 48 hours, 29.9%, was also significant. The proportion of subjects in the high risk zone during baseline to 24th hour went down significantly from 80% to 28% (p = 0.0003), while comparing baseline to 48th hour, the percentage of high risk zone went down from 80% to 9.5% (p = 0.0001). No subjects were reported to have rebound hyperbilirubinemia after discontinuation of phototherapy treatment under the LED prototype. No patient experienced any complication while on phototherapy treatment. Conclusion: The prototype low cost blue light emitting diode (LED) phototherapy was able to lower total serum bilirubin among healthy term infants with indirect hyperbilirubinemia and was safe to use. |
2011.05860 | Alejandro Ramos Lora | Mar\'ia J. C\'aceres and Alejandro Ramos-Lora | An understanding of the physical solutions and the blow-up phenomenon
for Nonlinear Noisy Leaky Integrate and Fire neuronal models | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Nonlinear Noisy Leaky Integrate and Fire neuronal models are mathematical
models that describe the activity of neural networks. These models have been
studied at a microscopic level, using Stochastic Differential Equations, and at
a mesoscopic/macroscopic level, through the mean field limits using
Fokker-Planck type equations. The aim of this paper is to improve their
understanding, using a numerical study of their particle systems. We analyse in
depth the behaviour of the classical and physical solutions of the Stochastic
Differential Equations, and we compare it with what is already known about the
Fokker-Planck equation. This allows us to better understand what happens in the
neural network when an explosion occurs in finite time. After firing all
neurons at the same time, if the system is weakly connected, the neural network
converges towards its unique steady state. Otherwise, its behaviour is more
complex, because it can tend towards a stationary state or a "plateau"
distribution.
| [
{
"created": "Tue, 27 Oct 2020 20:00:38 GMT",
"version": "v1"
}
] | 2020-11-12 | [
[
"Cáceres",
"María J.",
""
],
[
"Ramos-Lora",
"Alejandro",
""
]
] | The Nonlinear Noisy Leaky Integrate and Fire neuronal models are mathematical models that describe the activity of neural networks. These models have been studied at a microscopic level, using Stochastic Differential Equations, and at a mesoscopic/macroscopic level, through the mean field limits using Fokker-Planck type equations. The aim of this paper is to improve their understanding, using a numerical study of their particle systems. We analyse in depth the behaviour of the classical and physical solutions of the Stochastic Differential Equations, and we compare it with what is already known about the Fokker-Planck equation. This allows us to better understand what happens in the neural network when an explosion occurs in finite time. After firing all neurons at the same time, if the system is weakly connected, the neural network converges towards its unique steady state. Otherwise, its behaviour is more complex, because it can tend towards a stationary state or a "plateau" distribution. |
1407.5847 | Nicolae Radu Zabet | Armin P. Schoech and Nicolae Radu Zabet | Facilitated diffusion buffers noise in gene expression | 12 pages, 5 figures, 1 table | Phys. Rev. E 90:3 (2014) 032701 | 10.1103/PhysRevE.90.032701 | null | q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transcription factors perform facilitated diffusion (3D diffusion in the
cytosol and 1D diffusion on the DNA) when binding to their target sites to
regulate gene expression. Here, we investigated the influence of this binding
mechanism on the noise in gene expression. Our results showed that, for
biologically relevant parameters, the binding process can be represented by a
two-state Markov model and that the accelerated target finding due to
facilitated diffusion leads to a reduction in both the mRNA and the protein
noise.
| [
{
"created": "Tue, 22 Jul 2014 13:06:36 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Sep 2014 09:06:07 GMT",
"version": "v2"
}
] | 2014-09-04 | [
[
"Schoech",
"Armin P.",
""
],
[
"Zabet",
"Nicolae Radu",
""
]
] | Transcription factors perform facilitated diffusion (3D diffusion in the cytosol and 1D diffusion on the DNA) when binding to their target sites to regulate gene expression. Here, we investigated the influence of this binding mechanism on the noise in gene expression. Our results showed that, for biologically relevant parameters, the binding process can be represented by a two-state Markov model and that the accelerated target finding due to facilitated diffusion leads to a reduction in both the mRNA and the protein noise. |
2010.16193 | J\"urgen Jost | J\"urgen Jost | Biological Information | to appear in Theory in Biosciences | null | 10.1007/s12064-020-00327-1 | null | q-bio.PE nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In computer science, we can theoretically neatly separate transmission and
processing of information, hardware and software, and programs and their
inputs. This is much more intricate in biology. Nevertheless, I argue that
Shannon's concept of information is useful in biology, although its application
is not as straightforward as many people think. In fact, the recently developed
theory of information decomposition can shed much light on the complementarity
between coding and regulatory, or internal and environmental information. The
key challenge that we formulate in this contribution is to understand how
genetic information and external factors combine to create an organism, and
conversely, how the genome has learned in the course of evolution how to
harness the environment, and analogously, how coding, regulation and spatial
organization interact in cellular processes.
| [
{
"created": "Fri, 30 Oct 2020 11:06:39 GMT",
"version": "v1"
}
] | 2020-11-02 | [
[
"Jost",
"Jürgen",
""
]
] | In computer science, we can theoretically neatly separate transmission and processing of information, hardware and software, and programs and their inputs. This is much more intricate in biology. Nevertheless, I argue that Shannon's concept of information is useful in biology, although its application is not as straightforward as many people think. In fact, the recently developed theory of information decomposition can shed much light on the complementarity between coding and regulatory, or internal and environmental information. The key challenge that we formulate in this contribution is to understand how genetic information and external factors combine to create an organism, and conversely, how the genome has learned in the course of evolution how to harness the environment, and analogously, how coding, regulation and spatial organization interact in cellular processes. |
1311.4851 | Vasily Mironov | Vasily Mironov, Alexander Romanov, Alexander Simonov, Maria Vedunova
and Victor Kazantsev | Oscillations in a neurite growth model with extracellular feedback | null | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We take into account the influence of extracellular signalling on neurite
elongation in a model of neurite growth mediated by building proteins (e.g.
tubulin). Tubulin production dynamics was supplied by a function describing the
influence of the extracellular signalling that can promote or depress the
elongation. We found that such extracellular feedback can generate neurite
length oscillations with periodic sequence of elongations and retractions. The
oscillations prevent further outgrowth of the neurite which becomes trapped in
the non-uniform extracellular field. We analyzed the characteristics of the
elongation process for different distributions of attracting and repelling
sources of the extracellular signal molecules. The model predicts three
different scenarios of the neurite development in the extracellular field
including monotonic and oscillatory outgrowth, localized limit cycle
oscillations and complete depression of the growth.
| [
{
"created": "Tue, 19 Nov 2013 19:43:24 GMT",
"version": "v1"
}
] | 2013-11-20 | [
[
"Mironov",
"Vasily",
""
],
[
"Romanov",
"Alexander",
""
],
[
"Simonov",
"Alexander",
""
],
[
"Vedunova",
"Maria",
""
],
[
"Kazantsev",
"Victor",
""
]
] | We take into account the influence of extracellular signalling on neurite elongation in a model of neurite growth mediated by building proteins (e.g. tubulin). Tubulin production dynamics was supplied by a function describing the influence of the extracellular signalling that can promote or depress the elongation. We found that such extracellular feedback can generate neurite length oscillations with periodic sequence of elongations and retractions. The oscillations prevent further outgrowth of the neurite which becomes trapped in the non-uniform extracellular field. We analyzed the characteristics of the elongation process for different distributions of attracting and repelling sources of the extracellular signal molecules. The model predicts three different scenarios of the neurite development in the extracellular field including monotonic and oscillatory outgrowth, localized limit cycle oscillations and complete depression of the growth. |
2004.00991 | Weixing Ji | Jie Liu, Xiaotian Wu, Kai Zhang, Bing Liu, Renyi Bao, Xiao Chen, Yiran
Cai, Yiming Shen, Xinjun He, Jun Yan, Weixing Ji | Computational Performance of a Germline Variant Calling Pipeline for
Next Generation Sequencing | 6 pages, 6 figures, 3 tables | null | null | null | q-bio.GN cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the booming of next generation sequencing technology and its
implementation in clinical practice and life science research, the need for
faster and more efficient data analysis methods becomes pressing in the field
of sequencing. Here we report on the evaluation of an optimized germline
mutation calling pipeline, HummingBird, by assessing its performance against
the widely accepted BWA-GATK pipeline. We found that the HummingBird pipeline
can significantly reduce the running time of the primary data analysis for
whole genome sequencing and whole exome sequencing without significantly
sacrificing variant calling accuracy. Thus, we conclude that expansion of
such software usage will help to improve the primary data analysis efficiency
for next generation sequencing.
| [
{
"created": "Wed, 1 Apr 2020 12:55:11 GMT",
"version": "v1"
}
] | 2020-04-03 | [
[
"Liu",
"Jie",
""
],
[
"Wu",
"Xiaotian",
""
],
[
"Zhang",
"Kai",
""
],
[
"Liu",
"Bing",
""
],
[
"Bao",
"Renyi",
""
],
[
"Chen",
"Xiao",
""
],
[
"Cai",
"Yiran",
""
],
[
"Shen",
"Yiming",
""
],
[
"He",
"Xinjun",
""
],
[
"Yan",
"Jun",
""
],
[
"Ji",
"Weixing",
""
]
] | With the booming of next generation sequencing technology and its implementation in clinical practice and life science research, the need for faster and more efficient data analysis methods becomes pressing in the field of sequencing. Here we report on the evaluation of an optimized germline mutation calling pipeline, HummingBird, by assessing its performance against the widely accepted BWA-GATK pipeline. We found that the HummingBird pipeline can significantly reduce the running time of the primary data analysis for whole genome sequencing and whole exome sequencing without significantly sacrificing variant calling accuracy. Thus, we conclude that expansion of such software usage will help to improve the primary data analysis efficiency for next generation sequencing. |
1404.5210 | Norshuhaila Mohamed Sunar N.M.Sunar | N. M. Sunar, E.I. Stentiford, D.I. Stewart and L.A. Fletcher | The Process and Pathogen Behavior in Composting: A Review | Proceeding UMT-MSD 2009 Post Graduate Seminar 2009. Universiti
Malaysia Terengganu, Malaysian Student Department UK & Institute for
Transport Studies University of Leeds. pp: 78-87; ISBN: 978-967-5366-04-8 | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Composting is defined as the biological decomposition and stabilization of
organic substrates under aerobic conditions to allow the development of
thermophilic temperatures. This thermophilic temperature is a result of
biologically produced heat. Composting produces the final product which is
sufficiently stable for storage and application to land without adverse
environmental effects. There are many factors which affect the decomposition of
organic matter in the composting process. Since the composting process is very
intricate, it is not easy to estimate the effect of a single factor on the rate
of organic matter decomposition. This paper looked at the main factors
affecting the composting process. Problems regarding the control,
inactivation and regrowth of pathogens in compost material are also discussed.
| [
{
"created": "Mon, 21 Apr 2014 14:29:49 GMT",
"version": "v1"
}
] | 2014-04-22 | [
[
"Sunar",
"N. M.",
""
],
[
"Stentiford",
"E. I.",
""
],
[
"Stewart",
"D. I.",
""
],
[
"Fletcher",
"L. A.",
""
]
] | Composting is defined as the biological decomposition and stabilization of organic substrates under aerobic conditions to allow the development of thermophilic temperatures. This thermophilic temperature is a result of biologically produced heat. Composting produces the final product which is sufficiently stable for storage and application to land without adverse environmental effects. There are many factors which affect the decomposition of organic matter in the composting process. Since the composting process is very intricate, it is not easy to estimate the effect of a single factor on the rate of organic matter decomposition. This paper looked at the main factors affecting the composting process. Problems regarding the control, inactivation and regrowth of pathogens in compost material are also discussed. |
2102.01303 | Tijl Grootswagers | Tijl Grootswagers, Amanda K. Robinson, Sophia M. Shatek, Thomas A.
Carlson | The neural dynamics underlying prioritisation of task-relevant
information | Published in Neurons, Behavior, Data analysis, and Theory (NBDT) | Neurons, Behavior, Data Analysis, and Theory (2021), 5(1) | 10.51628/001c.21174 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | The human brain prioritises relevant sensory information to perform different
tasks. Enhancement of task-relevant information requires flexible allocation of
attentional resources, but it is still a mystery how this is operationalised in
the brain. We investigated how attentional mechanisms operate in situations
where multiple stimuli are presented in the same location and at the same time.
In two experiments, participants performed a challenging two-back task on
different types of visual stimuli that were presented simultaneously and
superimposed over each other. Using electroencephalography and multivariate
decoding, we analysed the effect of attention on the neural responses to each
individual stimulus. Whole brain neural responses contained considerable
information about both the attended and unattended stimuli, even though they
were presented simultaneously and represented in overlapping receptive fields.
As expected, attention increased the decodability of stimulus-related
information contained in the neural responses, but this effect was evident
earlier for stimuli that were presented at smaller sizes. Our results show that
early neural responses to stimuli in fast-changing displays contain remarkable
information about the sensory environment but are also modulated by attention
in a manner dependent on perceptual characteristics of the relevant stimuli.
Stimuli, code, and data for this study can be found at https://osf.io/7zhwp/.
| [
{
"created": "Tue, 2 Feb 2021 04:24:51 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Feb 2021 05:02:07 GMT",
"version": "v2"
}
] | 2021-02-22 | [
[
"Grootswagers",
"Tijl",
""
],
[
"Robinson",
"Amanda K.",
""
],
[
"Shatek",
"Sophia M.",
""
],
[
"Carlson",
"Thomas A.",
""
]
] | The human brain prioritises relevant sensory information to perform different tasks. Enhancement of task-relevant information requires flexible allocation of attentional resources, but it is still a mystery how this is operationalised in the brain. We investigated how attentional mechanisms operate in situations where multiple stimuli are presented in the same location and at the same time. In two experiments, participants performed a challenging two-back task on different types of visual stimuli that were presented simultaneously and superimposed over each other. Using electroencephalography and multivariate decoding, we analysed the effect of attention on the neural responses to each individual stimulus. Whole brain neural responses contained considerable information about both the attended and unattended stimuli, even though they were presented simultaneously and represented in overlapping receptive fields. As expected, attention increased the decodability of stimulus-related information contained in the neural responses, but this effect was evident earlier for stimuli that were presented at smaller sizes. Our results show that early neural responses to stimuli in fast-changing displays contain remarkable information about the sensory environment but are also modulated by attention in a manner dependent on perceptual characteristics of the relevant stimuli. Stimuli, code, and data for this study can be found at https://osf.io/7zhwp/. |
1708.06641 | Adam Noel | Adam Noel, Dimitrios Makrakis, Andrew W. Eckford | Distortion Distribution of Neural Spike Train Sequence Matching with
Optogenetics | 13 pages, 10 figures. To appear in IEEE Transactions on Biomedical
Engineering. A conference version, which was presented at 2017 IEEE Globecom,
can be found at arXiv:1704.04795 | null | 10.1109/TBME.2018.2819200 | null | q-bio.NC cs.IT math.IT physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper uses a simple optogenetic model to compare the timing distortion
between a randomly-generated target spike sequence and an externally-stimulated
neuron spike sequence. Optogenetics is an emerging field of neuroscience where
neurons are genetically modified to express light-sensitive receptors that
enable external control over when the neurons fire. Given the prominence of
neuronal signaling within the brain and throughout the body, optogenetics has
significant potential to improve the understanding of the nervous system and to
develop treatments for neurological diseases. This paper primarily considers
two different distortion measures. The first measure is the delay in
externally-stimulated spikes. The second measure is the root mean square error
between the filtered outputs of the target and stimulated spike sequences. The
mean and the distribution of the distortion is derived in closed form when the
target sequence generation rate is sufficiently low. All derived results are
supported with simulations. This work is a step towards an analytical model to
predict whether different spike trains were observed from the same stimulus,
and the broader goal of understanding the quantity and reliability of
information that can be carried by neurons.
| [
{
"created": "Tue, 22 Aug 2017 14:30:13 GMT",
"version": "v1"
},
{
"created": "Thu, 22 Mar 2018 10:09:18 GMT",
"version": "v2"
}
] | 2020-04-24 | [
[
"Noel",
"Adam",
""
],
[
"Makrakis",
"Dimitrios",
""
],
[
"Eckford",
"Andrew W.",
""
]
] | This paper uses a simple optogenetic model to compare the timing distortion between a randomly-generated target spike sequence and an externally-stimulated neuron spike sequence. Optogenetics is an emerging field of neuroscience where neurons are genetically modified to express light-sensitive receptors that enable external control over when the neurons fire. Given the prominence of neuronal signaling within the brain and throughout the body, optogenetics has significant potential to improve the understanding of the nervous system and to develop treatments for neurological diseases. This paper primarily considers two different distortion measures. The first measure is the delay in externally-stimulated spikes. The second measure is the root mean square error between the filtered outputs of the target and stimulated spike sequences. The mean and the distribution of the distortion is derived in closed form when the target sequence generation rate is sufficiently low. All derived results are supported with simulations. This work is a step towards an analytical model to predict whether different spike trains were observed from the same stimulus, and the broader goal of understanding the quantity and reliability of information that can be carried by neurons. |
1602.02492 | Dinesh Kumar Dr. | Dinesh Kumar, Atul Rawat, Durgesh Dubey, Umesh Kumar, Amit K Keshari,
Sudipta Saha and Anupam Guleria | NMR based Pharmaco-metabolomics: An efficient and agile tool for
therapeutic evaluation of Traditional Herbal Medicines | 17 pages, 8 Figures and 62 references | null | null | null | q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional Indian (Ayurvedic) and Chinese herbal medicines have been used in
the treatment of a variety of diseases for thousands of years because of their
natural origin and fewer side effects. However, the safety and efficacy data
(including dose and quality parameters) on most of these traditional medicines
are far from sufficient to meet the criteria needed to support their world-wide
therapeutic use. Also, the mechanistic understanding of most of these herbal
medicines is still lacking due to their complex components which further limits
their wider application and acceptance. Metabolomics -a novel approach to
reveal altered metabolism (biochemical effects) produced in response to a
disease or its therapeutic intervention- has huge potential to assess the
pharmacology and toxicology of traditional herbal medicines (THMs). Therefore,
it is gradually becoming a mutually complementary technique to genomics,
transcriptomics and proteomics for therapeutic evaluation of pharmaceutical
products (including THMs); this approach is called pharmaco-metabolomics. The
whole paradigm is based on its ability to provide metabolic signatures to
confirm the diseased condition and then to use the concentration profiles of
these biomarkers to assess the therapeutic response. Nuclear magnetic resonance
(NMR) spectroscopy coupled with multivariate data analysis is currently the
method of choice for pharmaco-metabolomics studies owing to its unbiased,
non-destructive nature and minimal sample preparation requirement. In the
recent past, dozens of NMR based pharmaco-metabolomic studies have been devoted
to prove the therapeutic efficacy/safety and to explore the underlying mechanisms
prove the therapeutic efficacy/safety and to explore the underlying mechanisms
of THMs, with promising results. The current perspective article summarizes
various such studies in addition to describing the technical and conceptual
aspects involved in NMR based pharmaco-metabolomics.
| [
{
"created": "Mon, 8 Feb 2016 08:30:22 GMT",
"version": "v1"
}
] | 2016-02-09 | [
[
"Kumar",
"Dinesh",
""
],
[
"Rawat",
"Atul",
""
],
[
"Dubey",
"Durgesh",
""
],
[
"Kumar",
"Umesh",
""
],
[
"Keshari",
"Amit K",
""
],
[
"Saha",
"Sudipta",
""
],
[
"Guleria",
"Anupam",
""
]
] | Traditional Indian (Ayurvedic) and Chinese herbal medicines have been used in the treatment of a variety of diseases for thousands of years because of their natural origin and fewer side effects. However, the safety and efficacy data (including dose and quality parameters) on most of these traditional medicines are far from sufficient to meet the criteria needed to support their world-wide therapeutic use. Also, the mechanistic understanding of most of these herbal medicines is still lacking due to their complex components which further limits their wider application and acceptance. Metabolomics -a novel approach to reveal altered metabolism (biochemical effects) produced in response to a disease or its therapeutic intervention- has huge potential to assess the pharmacology and toxicology of traditional herbal medicines (THMs). Therefore, it is gradually becoming a mutually complementary technique to genomics, transcriptomics and proteomics for therapeutic evaluation of pharmaceutical products (including THMs); this approach is called pharmaco-metabolomics. The whole paradigm is based on its ability to provide metabolic signatures to confirm the diseased condition and then to use the concentration profiles of these biomarkers to assess the therapeutic response. Nuclear magnetic resonance (NMR) spectroscopy coupled with multivariate data analysis is currently the method of choice for pharmaco-metabolomics studies owing to its unbiased, non-destructive nature and minimal sample preparation requirement. In the recent past, dozens of NMR based pharmaco-metabolomic studies have been devoted to prove the therapeutic efficacy/safety and to explore the underlying mechanisms of THMs, with promising results. The current perspective article summarizes various such studies in addition to describing the technical and conceptual aspects involved in NMR based pharmaco-metabolomics. |
1611.04747 | Bashar Ibrahim | Bashar Ibrahim | Toward a systems-level view of mitotic checkpoints | null | Prog Biophys Mol Biol. 2015 Mar;117(2-3):217-24 | 10.1016/j.pbiomolbio.2015.02.005 | null | q-bio.SC q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reproduction and natural selection are the key elements of life. In order to
reproduce, the genetic material must be doubled, separated and placed into two
new daughter cells, each containing a complete set of chromosomes and
organelles. In mitosis, transition from one process to the next is guided by
intricate surveillance mechanisms, known as the mitotic checkpoints.
Dysregulation of cell division through checkpoint malfunction can lead to
developmental defects and contribute to the development or progression of
tumors. This review approaches two important mitotic checkpoints, the spindle
assembly checkpoint (SAC) and the spindle position checkpoint (SPOC). The
highly conserved spindle assembly checkpoint (SAC) controls the onset of
anaphase by preventing premature segregation of the sister chromatids of the
duplicated genome, to the spindle poles. In contrast, the spindle position
checkpoint (SPOC), in the budding yeast S. cerevisiae, ensures that during
asymmetric cell division mitotic exit does not occur until the spindle is
properly aligned with the cell polarity axis. Although there are no known
homologs, there is indication that functionally similar checkpoints exist also
in animal cells.
| [
{
"created": "Tue, 15 Nov 2016 09:01:57 GMT",
"version": "v1"
}
] | 2016-11-16 | [
[
"Ibrahim",
"Bashar",
""
]
] | Reproduction and natural selection are the key elements of life. In order to reproduce, the genetic material must be doubled, separated and placed into two new daughter cells, each containing a complete set of chromosomes and organelles. In mitosis, transition from one process to the next is guided by intricate surveillance mechanisms, known as the mitotic checkpoints. Dysregulation of cell division through checkpoint malfunction can lead to developmental defects and contribute to the development or progression of tumors. This review approaches two important mitotic checkpoints, the spindle assembly checkpoint (SAC) and the spindle position checkpoint (SPOC). The highly conserved spindle assembly checkpoint (SAC) controls the onset of anaphase by preventing premature segregation of the sister chromatids of the duplicated genome, to the spindle poles. In contrast, the spindle position checkpoint (SPOC), in the budding yeast S. cerevisiae, ensures that during asymmetric cell division mitotic exit does not occur until the spindle is properly aligned with the cell polarity axis. Although there are no known homologs, there is indication that functionally similar checkpoints exist also in animal cells. |
2304.10725 | Charles Semple | Magnus Bordewich, Charles Semple | Quantifying the difference between phylogenetic diversity and diversity
indices | 26 pages, 5 figures | null | null | null | q-bio.PE math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic diversity is a popular measure for quantifying the biodiversity
of a collection $Y$ of species, while phylogenetic diversity indices provide a
way to apportion phylogenetic diversity to individual species. Typically, for
some specific diversity index, the phylogenetic diversity of $Y$ is not equal
to the sum of the diversity indices of the species in $Y.$ In this paper, we
investigate the extent of this difference for two commonly-used indices: Fair
Proportion and Equal Splits. In particular, we determine the maximum value of
this difference under various instances including when the associated rooted
phylogenetic tree is allowed to vary across all rooted phylogenetic trees with
the same leaf set and whose edge lengths are constrained by either their total
sum or their maximum value.
| [
{
"created": "Fri, 21 Apr 2023 03:40:40 GMT",
"version": "v1"
}
] | 2023-04-24 | [
[
"Bordewich",
"Magnus",
""
],
[
"Semple",
"Charles",
""
]
] | Phylogenetic diversity is a popular measure for quantifying the biodiversity of a collection $Y$ of species, while phylogenetic diversity indices provide a way to apportion phylogenetic diversity to individual species. Typically, for some specific diversity index, the phylogenetic diversity of $Y$ is not equal to the sum of the diversity indices of the species in $Y.$ In this paper, we investigate the extent of this difference for two commonly-used indices: Fair Proportion and Equal Splits. In particular, we determine the maximum value of this difference under various instances including when the associated rooted phylogenetic tree is allowed to vary across all rooted phylogenetic trees with the same leaf set and whose edge lengths are constrained by either their total sum or their maximum value. |
2007.12692 | Guo-Wei Wei | Rui Wang, Jiahui Chen, Kaifu Gao, Yuta Hozumi, Changchuan Yin, and
Guo-Wei Wei | Characterizing SARS-CoV-2 mutations in the United States | 31 pages, 20 figures, and 4 tables | null | null | null | q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been
mutating since it was first sequenced in early January 2020. The genetic
variants have developed into a few distinct clusters with different properties.
Since the United States (US) has the highest number of virus-infected patients
globally, it is essential to understand the US SARS-CoV-2. Using genotyping,
sequence-alignment, time-evolution, $k$-means clustering, protein-folding
stability, algebraic topology, and network theory, we reveal that the US
SARS-CoV-2 has four substrains and five top US SARS-CoV-2 mutations were first
detected in China (2 cases), Singapore (2 cases), and the United Kingdom (1
case). The next three top US SARS-CoV-2 mutations were first detected in the
US. These eight top mutations belong to two disconnected groups. The first
group consisting of 5 concurrent mutations is prevailing, while the other group
with three concurrent mutations gradually fades out. Our analysis suggests that
female immune systems are more active than those of males in responding to
SARS-CoV-2 infections. We identify that one of the top mutations,
27964C$>$T-(S24L) on ORF8, has an unusually strong gender dependence. Based on
the analysis of all mutations on the spike protein, we further uncover that
three of four US SARS-CoV-2 substrains become more infectious. Our study calls
for effective viral control and containing strategies in the US.
| [
{
"created": "Fri, 24 Jul 2020 14:25:24 GMT",
"version": "v1"
}
] | 2020-07-28 | [
[
"Wang",
"Rui",
""
],
[
"Chen",
"Jiahui",
""
],
[
"Gao",
"Kaifu",
""
],
[
"Hozumi",
"Yuta",
""
],
[
"Yin",
"Changchuan",
""
],
[
"Wei",
"Guo-Wei",
""
]
] | The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) has been mutating since it was first sequenced in early January 2020. The genetic variants have developed into a few distinct clusters with different properties. Since the United States (US) has the highest number of virus-infected patients globally, it is essential to understand the US SARS-CoV-2. Using genotyping, sequence-alignment, time-evolution, $k$-means clustering, protein-folding stability, algebraic topology, and network theory, we reveal that the US SARS-CoV-2 has four substrains and five top US SARS-CoV-2 mutations were first detected in China (2 cases), Singapore (2 cases), and the United Kingdom (1 case). The next three top US SARS-CoV-2 mutations were first detected in the US. These eight top mutations belong to two disconnected groups. The first group consisting of 5 concurrent mutations is prevailing, while the other group with three concurrent mutations gradually fades out. Our analysis suggests that female immune systems are more active than those of males in responding to SARS-CoV-2 infections. We identify that one of the top mutations, 27964C$>$T-(S24L) on ORF8, has an unusually strong gender dependence. Based on the analysis of all mutations on the spike protein, we further uncover that three of four US SARS-CoV-2 substrains become more infectious. Our study calls for effective viral control and containing strategies in the US. |
1102.5604 | John Hawks | John Hawks | Selection for smaller brains in Holocene human evolution | 17 text pages, 3 bibliography pages, 1 figure | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Human populations during the last 10,000 years have undergone
rapid decreases in average brain size as measured by endocranial volume or as
estimated from linear measurements of the cranium. A null hypothesis to explain
the evolution of brain size is that reductions result from genetic correlation
of brain size with body mass or stature.
Results: The absolute change of endocranial volume in the study samples was
significantly greater than would be predicted from observed changes in body
mass or stature.
Conclusions: The evolution of smaller brains in many recent human populations
must have resulted from selection upon brain size itself or on other features
more highly correlated with brain size than are gross body dimensions. This
selection may have resulted from energetic or nutritional demands in Holocene
populations, or from life history constraints on brain development.
| [
{
"created": "Mon, 28 Feb 2011 06:22:01 GMT",
"version": "v1"
}
] | 2011-03-01 | [
[
"Hawks",
"John",
""
]
] | Background: Human populations during the last 10,000 years have undergone rapid decreases in average brain size as measured by endocranial volume or as estimated from linear measurements of the cranium. A null hypothesis to explain the evolution of brain size is that reductions result from genetic correlation of brain size with body mass or stature. Results: The absolute change of endocranial volume in the study samples was significantly greater than would be predicted from observed changes in body mass or stature. Conclusions: The evolution of smaller brains in many recent human populations must have resulted from selection upon brain size itself or on other features more highly correlated with brain size than are gross body dimensions. This selection may have resulted from energetic or nutritional demands in Holocene populations, or from life history constraints on brain development. |
2405.15928 | Dea Gogishvili | Dea Gogishvili, Emmanuel Minois-Genin, Jan van Eck, Sanne Abeln | PatchProt: Hydrophobic patch prediction using protein foundation models | null | null | null | null | q-bio.QM cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Hydrophobic patches on protein surfaces play important functional roles in
protein-protein and protein-ligand interactions. Large hydrophobic surfaces are
also involved in the progression of aggregation diseases. Predicting exposed
hydrophobic patches from a protein sequence has been shown to be a difficult
task. Fine-tuning foundation models allows for adapting a model to the specific
nuances of a new task using a much smaller dataset. Additionally, multi-task
deep learning offers a promising solution for addressing data gaps,
simultaneously outperforming single-task methods. In this study, we harnessed a
recently released leading large language model ESM-2. Efficient fine-tuning of
ESM-2 was achieved by leveraging a recently developed parameter-efficient
fine-tuning method. This approach enabled comprehensive training of model
layers without excessive parameters and without the need to include a
computationally expensive multiple sequence analysis. We explored several
related tasks, at local (residue) and global (protein) levels, to improve the
representation of the model. As a result, our fine-tuned ESM-2 model,
PatchProt, can not only predict hydrophobic patch areas but also outperform
existing methods at predicting primary tasks, including secondary structure and
surface accessibility predictions. Importantly, our analysis shows that
including related local tasks can improve predictions on more difficult global
tasks. This research sets a new standard for sequence-based protein property
prediction and highlights the remarkable potential of fine-tuning foundation
models, enriching the model representation by training on related tasks.
| [
{
"created": "Fri, 24 May 2024 20:37:02 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Gogishvili",
"Dea",
""
],
[
"Minois-Genin",
"Emmanuel",
""
],
[
"van Eck",
"Jan",
""
],
[
"Abeln",
"Sanne",
""
]
] | Hydrophobic patches on protein surfaces play important functional roles in protein-protein and protein-ligand interactions. Large hydrophobic surfaces are also involved in the progression of aggregation diseases. Predicting exposed hydrophobic patches from a protein sequence has been shown to be a difficult task. Fine-tuning foundation models allows for adapting a model to the specific nuances of a new task using a much smaller dataset. Additionally, multi-task deep learning offers a promising solution for addressing data gaps, simultaneously outperforming single-task methods. In this study, we harnessed a recently released leading large language model ESM-2. Efficient fine-tuning of ESM-2 was achieved by leveraging a recently developed parameter-efficient fine-tuning method. This approach enabled comprehensive training of model layers without excessive parameters and without the need to include a computationally expensive multiple sequence analysis. We explored several related tasks, at local (residue) and global (protein) levels, to improve the representation of the model. As a result, our fine-tuned ESM-2 model, PatchProt, can not only predict hydrophobic patch areas but also outperform existing methods at predicting primary tasks, including secondary structure and surface accessibility predictions. Importantly, our analysis shows that including related local tasks can improve predictions on more difficult global tasks. This research sets a new standard for sequence-based protein property prediction and highlights the remarkable potential of fine-tuning foundation models, enriching the model representation by training on related tasks. |
1406.3316 | Fabricio Forgerini | Fabricio L. Forgerini and Nuno Crokidakis | Competition and evolution in restricted space | null | J. Stat. Mech. P07016 (2014) | 10.1088/1742-5468/2014/07/P07016 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the competition and the evolution of nodes embedded in Euclidean
restricted spaces. The population evolves by a branching process in which new
nodes are generated when up to two new nodes are attached to the previous ones
at each time unit. The competition in the population is introduced by
considering the effect of overcrowding of nodes in the embedding space. The
branching process is suppressed if the newborn node is closer than a distance
$\xi$ to the previous nodes. This rule may be relevant to describe a
competition for resources, limiting the density of individuals and therefore
the total population. This results in exponential growth in the initial
period and, after some crossover time, an approach to some limiting value. Our
results show that the competition among the nodes associated with geometric
restrictions can even, for certain conditions, lead the entire population to
extinction.
| [
{
"created": "Thu, 12 Jun 2014 18:52:57 GMT",
"version": "v1"
}
] | 2014-07-21 | [
[
"Forgerini",
"Fabricio L.",
""
],
[
"Crokidakis",
"Nuno",
""
]
] | We study the competition and the evolution of nodes embedded in Euclidean restricted spaces. The population evolves by a branching process in which new nodes are generated when up to two new nodes are attached to the previous ones at each time unit. The competition in the population is introduced by considering the effect of overcrowding of nodes in the embedding space. The branching process is suppressed if the newborn node is closer than a distance $\xi$ to the previous nodes. This rule may be relevant to describe a competition for resources, limiting the density of individuals and therefore the total population. This results in exponential growth in the initial period and, after some crossover time, an approach to some limiting value. Our results show that the competition among the nodes associated with geometric restrictions can even, for certain conditions, lead the entire population to extinction. |
1705.09718 | Christian Yates | Christian A Yates, Matthew J Ford and Richard L Mort | A multi-stage representation of cell proliferation as a Markov process | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The stochastic simulation algorithm commonly known as Gillespie's algorithm
is now used ubiquitously in the modelling of biological processes in which
stochastic effects play an important role. In well-mixed scenarios at the
sub-cellular level it is often reasonable to assume that times between
successive reaction/interaction events are exponentially distributed and can be
appropriately modelled as a Markov process and hence simulated by the Gillespie
algorithm. However, Gillespie's algorithm is routinely applied to model
biological systems for which it was never intended. In particular, processes in
which cell proliferation is important should not be simulated naively using the
Gillespie algorithm since the history-dependent nature of the cell cycle breaks
the Markov process. The variance in experimentally measured cell cycle times is
far less than in an exponential cell cycle time distribution with the same
mean.
Here we suggest a method of modelling the cell cycle that restores the
memoryless property to the system and is therefore consistent with simulation
via the Gillespie algorithm. By breaking the cell cycle into a number of
independent exponentially distributed stages we can restore the Markov property
at the same time as more accurately approximating the appropriate cell cycle
time distributions. The consequences of our revised mathematical model are
explored analytically. We demonstrate the importance of employing the correct
cell cycle time distribution by considering two models incorporating cellular
proliferation (one spatial and one non-spatial) and demonstrating that changing
the cell cycle time distribution makes quantitative and qualitative differences
to their outcomes. Our adaptation will allow modellers and experimentalists
alike to appropriately represent cellular proliferation, whilst still being
able to take advantage of the Gillespie algorithm.
| [
{
"created": "Fri, 26 May 2017 20:57:10 GMT",
"version": "v1"
},
{
"created": "Mon, 14 Aug 2017 21:24:29 GMT",
"version": "v2"
},
{
"created": "Mon, 22 Jul 2019 16:19:50 GMT",
"version": "v3"
}
] | 2019-07-23 | [
[
"Yates",
"Christian A",
""
],
[
"Ford",
"Matthew J",
""
],
[
"Mort",
"Richard L",
""
]
] | The stochastic simulation algorithm commonly known as Gillespie's algorithm is now used ubiquitously in the modelling of biological processes in which stochastic effects play an important role. In well-mixed scenarios at the sub-cellular level it is often reasonable to assume that times between successive reaction/interaction events are exponentially distributed and can be appropriately modelled as a Markov process and hence simulated by the Gillespie algorithm. However, Gillespie's algorithm is routinely applied to model biological systems for which it was never intended. In particular, processes in which cell proliferation is important should not be simulated naively using the Gillespie algorithm since the history-dependent nature of the cell cycle breaks the Markov process. The variance in experimentally measured cell cycle times is far less than in an exponential cell cycle time distribution with the same mean. Here we suggest a method of modelling the cell cycle that restores the memoryless property to the system and is therefore consistent with simulation via the Gillespie algorithm. By breaking the cell cycle into a number of independent exponentially distributed stages we can restore the Markov property at the same time as more accurately approximating the appropriate cell cycle time distributions. The consequences of our revised mathematical model are explored analytically. We demonstrate the importance of employing the correct cell cycle time distribution by considering two models incorporating cellular proliferation (one spatial and one non-spatial) and demonstrating that changing the cell cycle time distribution makes quantitative and qualitative differences to their outcomes. Our adaptation will allow modellers and experimentalists alike to appropriately represent cellular proliferation, whilst still being able to take advantage of the Gillespie algorithm. |
2004.12676 | Alban Bornet | Adrien Doerig, Alban Bornet, Oh-Hyeon Choung, Michael H. Herzog | Crowding Reveals Fundamental Differences in Local vs. Global Processing
in Humans and Machines | null | Vision Research, 167, 39-45 (2020) | 10.1016/j.visres.2019.12.006 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Feedforward Convolutional Neural Networks (ffCNNs) have become
state-of-the-art models both in computer vision and neuroscience. However,
human-like performance of ffCNNs does not necessarily imply human-like
computations. Previous studies have suggested that current ffCNNs do not make
use of global shape information. However, it is currently unclear whether this
reflects fundamental differences between ffCNN and human processing or is
merely an artefact of how ffCNNs are trained. Here, we use visual crowding as a
well-controlled, specific probe to test global shape computations. Our results
provide evidence that ffCNNs cannot produce human-like global shape
computations for principled architectural reasons. We lay out approaches that
may address shortcomings of ffCNNs to provide better models of the human visual
system.
| [
{
"created": "Mon, 27 Apr 2020 09:43:27 GMT",
"version": "v1"
}
] | 2020-04-29 | [
[
"Doerig",
"Adrien",
""
],
[
"Bornet",
"Alban",
""
],
[
"Choung",
"Oh-Hyeon",
""
],
[
"Herzog",
        "Michael H.",
""
]
] | Feedforward Convolutional Neural Networks (ffCNNs) have become state-of-the-art models both in computer vision and neuroscience. However, human-like performance of ffCNNs does not necessarily imply human-like computations. Previous studies have suggested that current ffCNNs do not make use of global shape information. However, it is currently unclear whether this reflects fundamental differences between ffCNN and human processing or is merely an artefact of how ffCNNs are trained. Here, we use visual crowding as a well-controlled, specific probe to test global shape computations. Our results provide evidence that ffCNNs cannot produce human-like global shape computations for principled architectural reasons. We lay out approaches that may address shortcomings of ffCNNs to provide better models of the human visual system. |
1912.00791 | Anindita Bhadra | Arunita Banerjee and Anindita Bhadra | Time-activity budget of urban-adapted free-ranging dogs | 5 figures | Acta Ethologica 25, 2022 | 10.1007/s10211-021-00379-6 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The domestic dog is known to have evolved from gray wolves, about 15,000
years ago. They exist predominantly as free-ranging populations across the world.
They are typically scavengers and well adapted to living among humans. Most
canids living in and around urban habitats tend to avoid humans and show
crepuscular activity peaks. In this study, we carried out a detailed
population-level survey on free-ranging dogs in West Bengal, India, to
understand the activity patterns of free-ranging dogs in relation to human
activity. Using 5669 sightings of dogs, over a period of 1 year, covering the
24 hours of the day, we carried out an analysis of the time-activity budget of
free-ranging dogs to conclude that they are generalists in their habit. They
remain active when humans are active. Their activity levels are affected
significantly by age class and time of the day. Multivariate analysis revealed
the presence of certain behavioural clusters on the basis of time of the day
and energy expenditure in the behaviours. In addition, we provide a detailed
ethogram of free-ranging dogs. This, to our knowledge, is the first study of
this kind, which might be used to further study the eco-ethology of these dogs.
| [
{
"created": "Fri, 29 Nov 2019 15:42:21 GMT",
"version": "v1"
}
] | 2022-08-12 | [
[
"Banerjee",
"Arunita",
""
],
[
"Bhadra",
"Anindita",
""
]
] | The domestic dog is known to have evolved from gray wolves, about 15,000 years ago. They exist predominantly as free-ranging populations across the world. They are typically scavengers and well adapted to living among humans. Most canids living in and around urban habitats tend to avoid humans and show crepuscular activity peaks. In this study, we carried out a detailed population-level survey on free-ranging dogs in West Bengal, India, to understand the activity patterns of free-ranging dogs in relation to human activity. Using 5669 sightings of dogs, over a period of 1 year, covering the 24 hours of the day, we carried out an analysis of the time-activity budget of free-ranging dogs to conclude that they are generalists in their habit. They remain active when humans are active. Their activity levels are affected significantly by age class and time of the day. Multivariate analysis revealed the presence of certain behavioural clusters on the basis of time of the day and energy expenditure in the behaviours. In addition, we provide a detailed ethogram of free-ranging dogs. This, to our knowledge, is the first study of this kind, which might be used to further study the eco-ethology of these dogs. |
2008.06996 | Dmitry Krotov | Dmitry Krotov, John Hopfield | Large Associative Memory Problem in Neurobiology and Machine Learning | Accepted for publication at ICLR 2021 | null | null | null | q-bio.NC cond-mat.dis-nn cs.CL cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Dense Associative Memories or modern Hopfield networks permit storage and
reliable retrieval of an exponentially large (in the dimension of feature
space) number of memories. At the same time, their naive implementation is
non-biological, since it seemingly requires the existence of many-body synaptic
junctions between the neurons. We show that these models are effective
descriptions of a more microscopic (written in terms of biological degrees of
freedom) theory that has additional (hidden) neurons and only requires two-body
interactions between them. For this reason our proposed microscopic theory is a
valid model of large associative memory with a degree of biological
plausibility. The dynamics of our network and its reduced dimensional
equivalent both minimize energy (Lyapunov) functions. When certain dynamical
variables (hidden neurons) are integrated out from our microscopic theory, one
can recover many of the models that were previously discussed in the
literature, e.g. the model presented in "Hopfield Networks is All You Need"
paper. We also provide an alternative derivation of the energy function and the
update rule proposed in the aforementioned paper and clarify the relationships
between various models of this class.
| [
{
"created": "Sun, 16 Aug 2020 21:03:52 GMT",
"version": "v1"
},
{
"created": "Tue, 2 Mar 2021 20:06:50 GMT",
"version": "v2"
},
{
"created": "Tue, 27 Apr 2021 22:20:05 GMT",
"version": "v3"
}
] | 2021-04-29 | [
[
"Krotov",
"Dmitry",
""
],
[
"Hopfield",
"John",
""
]
] | Dense Associative Memories or modern Hopfield networks permit storage and reliable retrieval of an exponentially large (in the dimension of feature space) number of memories. At the same time, their naive implementation is non-biological, since it seemingly requires the existence of many-body synaptic junctions between the neurons. We show that these models are effective descriptions of a more microscopic (written in terms of biological degrees of freedom) theory that has additional (hidden) neurons and only requires two-body interactions between them. For this reason our proposed microscopic theory is a valid model of large associative memory with a degree of biological plausibility. The dynamics of our network and its reduced dimensional equivalent both minimize energy (Lyapunov) functions. When certain dynamical variables (hidden neurons) are integrated out from our microscopic theory, one can recover many of the models that were previously discussed in the literature, e.g. the model presented in "Hopfield Networks is All You Need" paper. We also provide an alternative derivation of the energy function and the update rule proposed in the aforementioned paper and clarify the relationships between various models of this class. |
2310.02553 | Zaixi Zhang | Zaixi Zhang, Zepu Lu, Zhongkai Hao, Marinka Zitnik, Qi Liu | Full-Atom Protein Pocket Design via Iterative Refinement | NeurIPS 2023 Spotlight | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | The design of \emph{de novo} functional proteins that bind specific ligand
molecules is paramount in therapeutics and bio-engineering. A critical yet
formidable task in this endeavor is the design of the protein pocket, which is
the cavity region of the protein where the ligand binds. Current methods are
plagued by inefficient generation, inadequate context modeling of the ligand
molecule, and the inability to generate side-chain atoms. Here, we present the
Full-Atom Iterative Refinement (FAIR) method, designed to address these
challenges by facilitating the co-design of protein pocket sequences,
specifically residue types, and their corresponding 3D structures. FAIR
operates in two steps, proceeding in a coarse-to-fine manner (transitioning
from protein backbone to atoms, including side chains) for a full-atom
generation. In each iteration, all residue types and structures are
simultaneously updated, a process termed full-shot refinement. In the initial
stage, the residue types and backbone coordinates are refined using a
hierarchical context encoder, complemented by two structure refinement modules
that capture both inter-residue and pocket-ligand interactions. The subsequent
stage delves deeper, modeling the side-chain atoms of the pockets and updating
residue types to ensure sequence-structure congruence. Concurrently, the
structure of the binding ligand is refined across iterations to accommodate its
inherent flexibility. Comprehensive experiments show that FAIR surpasses
existing methods in designing superior pocket sequences and structures,
producing an average improvement exceeding 10\% in AAR and RMSD metrics. FAIR is
available at \url{https://github.com/zaixizhang/FAIR}.
| [
{
"created": "Wed, 4 Oct 2023 03:23:00 GMT",
"version": "v1"
},
{
"created": "Fri, 20 Oct 2023 03:42:03 GMT",
"version": "v2"
}
] | 2023-10-23 | [
[
"Zhang",
"Zaixi",
""
],
[
"Lu",
"Zepu",
""
],
[
"Hao",
"Zhongkai",
""
],
[
"Zitnik",
"Marinka",
""
],
[
"Liu",
"Qi",
""
]
] | The design of \emph{de novo} functional proteins that bind specific ligand molecules is paramount in therapeutics and bio-engineering. A critical yet formidable task in this endeavor is the design of the protein pocket, which is the cavity region of the protein where the ligand binds. Current methods are plagued by inefficient generation, inadequate context modeling of the ligand molecule, and the inability to generate side-chain atoms. Here, we present the Full-Atom Iterative Refinement (FAIR) method, designed to address these challenges by facilitating the co-design of protein pocket sequences, specifically residue types, and their corresponding 3D structures. FAIR operates in two steps, proceeding in a coarse-to-fine manner (transitioning from protein backbone to atoms, including side chains) for a full-atom generation. In each iteration, all residue types and structures are simultaneously updated, a process termed full-shot refinement. In the initial stage, the residue types and backbone coordinates are refined using a hierarchical context encoder, complemented by two structure refinement modules that capture both inter-residue and pocket-ligand interactions. The subsequent stage delves deeper, modeling the side-chain atoms of the pockets and updating residue types to ensure sequence-structure congruence. Concurrently, the structure of the binding ligand is refined across iterations to accommodate its inherent flexibility. Comprehensive experiments show that FAIR surpasses existing methods in designing superior pocket sequences and structures, producing an average improvement exceeding 10\% in AAR and RMSD metrics. FAIR is available at \url{https://github.com/zaixizhang/FAIR}. |
1410.7959 | Sebastiano Stramaglia | Ibai Diez, Paolo Bonifazi, I\~naki Escudero, Beatriz Mateos, Miguel A.
Mu\~noz, Sebastiano Stramaglia and Jesus M. Cortes | A novel brain partition highlights the modular skeleton shared by
structure and function | Accepted in Nature Scientific Reports. 56 pages, 15 figures | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Elucidating the intricate relationship between brain structure and function,
both in healthy and pathological conditions, is a key challenge for modern
neuroscience. Recent technical and methodological progress in neuroimaging has
helped advance our understanding of this important issue, with diffusion
weighted images providing information about structural connectivity (SC) and
functional magnetic resonance imaging shedding light on resting state
functional connectivity (rsFC). However, comparing these two distinct datasets,
each of which can be encoded into a different complex network, is by no means
trivial as pairwise link-to-link comparisons represent a relatively restricted
perspective and provide only limited information. Thus, we have adopted a more
integrative systems approach, exploiting theoretical graph analyses to study
both SC and rsFC datasets gathered independently from healthy human subjects.
The aim is to find the main architectural traits shared by the structural and
functional networks by paying special attention to their common hierarchical
modular organization. This approach allows us to identify a common skeleton
from which a new, optimal, brain partition can be extracted, with modules
sharing both structure and function. We describe these emerging common
structure-function modules (SFMs) in detail. In addition, we compare SFMs with
the classical Resting State Networks derived from independent component
analysis of rs-fMRI functional activity, as well as with anatomical
parcellations in the Automated Anatomical Labeling atlas and with the Brodmann
partition, highlighting their similarities and differences. The unveiling of SFMs
brings to light the strong correspondence between brain structure and
resting-state dynamics.
| [
{
"created": "Wed, 29 Oct 2014 12:53:35 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Apr 2015 17:10:37 GMT",
"version": "v2"
}
] | 2015-05-01 | [
[
"Diez",
"Ibai",
""
],
[
"Bonifazi",
"Paolo",
""
],
[
"Escudero",
"Iñaki",
""
],
[
"Mateos",
"Beatriz",
""
],
[
"Muñoz",
"Miguel A.",
""
],
[
"Stramaglia",
"Sebastiano",
""
],
[
"Cortes",
"Jesus M.",
""
]
] | Elucidating the intricate relationship between brain structure and function, both in healthy and pathological conditions, is a key challenge for modern neuroscience. Recent technical and methodological progress in neuroimaging has helped advance our understanding of this important issue, with diffusion weighted images providing information about structural connectivity (SC) and functional magnetic resonance imaging shedding light on resting state functional connectivity (rsFC). However, comparing these two distinct datasets, each of which can be encoded into a different complex network, is by no means trivial as pairwise link-to-link comparisons represent a relatively restricted perspective and provide only limited information. Thus, we have adopted a more integrative systems approach, exploiting theoretical graph analyses to study both SC and rsFC datasets gathered independently from healthy human subjects. The aim is to find the main architectural traits shared by the structural and functional networks by paying special attention to their common hierarchical modular organization. This approach allows us to identify a common skeleton from which a new, optimal, brain partition can be extracted, with modules sharing both structure and function. We describe these emerging common structure-function modules (SFMs) in detail. In addition, we compare SFMs with the classical Resting State Networks derived from independent component analysis of rs-fMRI functional activity, as well as with anatomical parcellations in the Automated Anatomical Labeling atlas and with the Brodmann partition, highlighting their similarities and differences. The unveiling of SFMs brings to light the strong correspondence between brain structure and resting-state dynamics. |
2301.02918 | Hyeongseon Jeon | Hyeongseon Jeon, Juan Xie, Yeseul Jeon, Kyeong Joo Jung, Arkobrato
Gupta, Won Chang, Dongjun Chung | Statistical Power Analysis for Designing Bulk, Single-Cell, and Spatial
Transcriptomics Experiments: Review, Tutorial, and Perspectives | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Gene expression profiling technologies have been used in various applications
such as cancer biology. The development of gene expression profiling has
expanded the scope of target discovery in transcriptomic studies, and each
technology produces data with distinct characteristics. In order to guarantee
biologically meaningful findings using transcriptomic experiments, it is
important to consider various experimental factors in a systematic way through
statistical power analysis. In this paper, we review and discuss the power
analysis for three types of gene expression profiling technologies from a
practical standpoint, including bulk RNA-seq, single-cell RNA-seq, and
high-throughput spatial transcriptomics. Specifically, we describe the existing
power analysis tools for each research objective for each of the bulk RNA-seq
and scRNA-seq experiments, along with recommendations. On the other hand, since
there are no power analysis tools for high-throughput spatial transcriptomics
at this point, we instead investigate the factors that can influence power
analysis.
| [
{
"created": "Sat, 7 Jan 2023 18:42:28 GMT",
"version": "v1"
}
] | 2023-01-10 | [
[
"Jeon",
"Hyeongseon",
""
],
[
"Xie",
"Juan",
""
],
[
"Jeon",
"Yeseul",
""
],
[
"Jung",
"Kyeong Joo",
""
],
[
"Gupta",
"Arkobrato",
""
],
[
"Chang",
"Won",
""
],
[
"Chung",
"Dongjun",
""
]
] | Gene expression profiling technologies have been used in various applications such as cancer biology. The development of gene expression profiling has expanded the scope of target discovery in transcriptomic studies, and each technology produces data with distinct characteristics. In order to guarantee biologically meaningful findings using transcriptomic experiments, it is important to consider various experimental factors in a systematic way through statistical power analysis. In this paper, we review and discuss the power analysis for three types of gene expression profiling technologies from a practical standpoint, including bulk RNA-seq, single-cell RNA-seq, and high-throughput spatial transcriptomics. Specifically, we describe the existing power analysis tools for each research objective for each of the bulk RNA-seq and scRNA-seq experiments, along with recommendations. On the other hand, since there are no power analysis tools for high-throughput spatial transcriptomics at this point, we instead investigate the factors that can influence power analysis. |
0704.3259 | James P. Sethna | Christopher R. Myers, Ryan N. Gutenkunst, and James. P. Sethna | Python Unleashed on Systems Biology | Submitted to special issue of CiSE | null | null | null | q-bio.QM q-bio.MN | null | We have built an open-source software system for the modeling of biomolecular
reaction networks, SloppyCell, which is written in Python and makes substantial
use of third-party libraries for numerics, visualization, and parallel
programming. We highlight here some of the powerful features that Python
provides that enable SloppyCell to do dynamic code synthesis, symbolic
manipulation, and parallel exploration of complex parameter spaces.
| [
{
"created": "Tue, 24 Apr 2007 18:48:18 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Myers",
"Christopher R.",
""
],
[
"Gutenkunst",
"Ryan N.",
""
],
[
"Sethna",
"James. P.",
""
]
] | We have built an open-source software system for the modeling of biomolecular reaction networks, SloppyCell, which is written in Python and makes substantial use of third-party libraries for numerics, visualization, and parallel programming. We highlight here some of the powerful features that Python provides that enable SloppyCell to do dynamic code synthesis, symbolic manipulation, and parallel exploration of complex parameter spaces. |
q-bio/0508001 | Ruriko Yoshida | Dan Levy, Ruriko Yoshida, Lior Pachter | Neighbor joining with phylogenetic diversity estimates | null | null | null | null | q-bio.QM math.CO | null | The Neighbor-Joining algorithm is a recursive procedure for reconstructing
trees that is based on a transformation of pairwise distances between leaves.
We present a generalization of the neighbor-joining transformation, which uses
estimates of phylogenetic diversity rather than pairwise distances in the tree.
This leads to an improved neighbor-joining algorithm whose total running time
is still polynomial in the number of taxa. On simulated data, the method
outperforms other distance-based methods.
We have implemented neighbor-joining for subtree weights in a program called
MJOIN which is freely available under the Gnu Public License at
http://bio.math.berkeley.edu/mjoin/ .
| [
{
"created": "Sat, 30 Jul 2005 18:28:10 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Levy",
"Dan",
""
],
[
"Yoshida",
"Ruriko",
""
],
[
"Pachter",
"Lior",
""
]
] | The Neighbor-Joining algorithm is a recursive procedure for reconstructing trees that is based on a transformation of pairwise distances between leaves. We present a generalization of the neighbor-joining transformation, which uses estimates of phylogenetic diversity rather than pairwise distances in the tree. This leads to an improved neighbor-joining algorithm whose total running time is still polynomial in the number of taxa. On simulated data, the method outperforms other distance-based methods. We have implemented neighbor-joining for subtree weights in a program called MJOIN which is freely available under the Gnu Public License at http://bio.math.berkeley.edu/mjoin/ . |
2211.01960 | Vladislav Lomtev | Vladislav Lomtev, Alexander Kovalev, Alexey Timchenko | FingerFlex: Inferring Finger Trajectories from ECoG signals | 6 pages, 3 figures, 4 tables. Preprint. Under review | null | null | null | q-bio.NC cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Motor brain-computer interface (BCI) development relies critically on neural
time series decoding algorithms. Recent advances in deep learning architectures
allow for automatic feature selection to approximate higher-order dependencies
in data. This article presents the FingerFlex model - a convolutional
encoder-decoder architecture adapted for finger movement regression on
electrocorticographic (ECoG) brain data. State-of-the-art performance was
achieved on a publicly available BCI competition IV dataset 4 with a
correlation coefficient between true and predicted trajectories up to 0.74. The
presented method provides the opportunity for developing fully-functional
high-precision cortical motor brain-computer interfaces.
| [
{
"created": "Sun, 23 Oct 2022 16:26:01 GMT",
"version": "v1"
},
{
"created": "Tue, 25 Apr 2023 19:14:18 GMT",
"version": "v2"
}
] | 2023-04-27 | [
[
"Lomtev",
"Vladislav",
""
],
[
"Kovalev",
"Alexander",
""
],
[
"Timchenko",
"Alexey",
""
]
] | Motor brain-computer interface (BCI) development relies critically on neural time series decoding algorithms. Recent advances in deep learning architectures allow for automatic feature selection to approximate higher-order dependencies in data. This article presents the FingerFlex model - a convolutional encoder-decoder architecture adapted for finger movement regression on electrocorticographic (ECoG) brain data. State-of-the-art performance was achieved on a publicly available BCI competition IV dataset 4 with a correlation coefficient between true and predicted trajectories up to 0.74. The presented method provides the opportunity for developing fully-functional high-precision cortical motor brain-computer interfaces. |
1604.04203 | Christian Scheppach | Christian Scheppach | High- and low-conductance NMDA receptors are present in layer 4 spiny
stellate and layer 2/3 pyramidal neurons of mouse barrel cortex | null | Physiological Reports, 30th Dec. 2016, Vol. 4 no. e13051 | 10.14814/phy2.13051 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | NMDA receptors are ion channels activated by the neurotransmitter glutamate
in the mammalian brain and are important in synaptic function and plasticity,
but are also found in extrasynaptic locations and influence neuronal
excitability. There are different NMDA receptor subtypes which differ in their
single-channel conductance. Recently, synaptic plasticity has been studied in
mouse barrel cortex, the primary sensory cortex for input from the animal's
whiskers. Pharmacological data imply the presence of low-conductance NMDA
receptors in spiny stellate neurons of cortical layer 4, but of
high-conductance NMDA receptors in pyramidal neurons of layer 2/3. Here, to
obtain complementary electrophysiological information on the functional NMDA
receptors expressed in layer 4 and layer 2/3 neurons, single NMDA receptor
currents were recorded with the patch-clamp method. Both cell types were found
to contain high-conductance as well as low-conductance NMDA receptors. The
results are consistent with the reported pharmacological data on synaptic
plasticity, and with previous claims of a prominent role of low-conductance
NMDA receptors in layer 4 spiny stellate neurons, including broad integration,
amplification and distribution of excitation within the barrel in response to
whisker stimulation, as well as modulation of excitability by ambient
glutamate. However, layer 4 cells also expressed high-conductance NMDA
receptors. The presence of low-conductance NMDA receptors in layer 2/3
pyramidal neurons suggests that some of these functions may be shared with
layer 4 spiny stellate neurons.
| [
{
"created": "Thu, 14 Apr 2016 16:07:58 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Sep 2016 17:05:44 GMT",
"version": "v2"
}
] | 2017-12-13 | [
[
"Scheppach",
"Christian",
""
]
] | NMDA receptors are ion channels activated by the neurotransmitter glutamate in the mammalian brain and are important in synaptic function and plasticity, but are also found in extrasynaptic locations and influence neuronal excitability. There are different NMDA receptor subtypes which differ in their single-channel conductance. Recently, synaptic plasticity has been studied in mouse barrel cortex, the primary sensory cortex for input from the animal's whiskers. Pharmacological data imply the presence of low-conductance NMDA receptors in spiny stellate neurons of cortical layer 4, but of high-conductance NMDA receptors in pyramidal neurons of layer 2/3. Here, to obtain complementary electrophysiological information on the functional NMDA receptors expressed in layer 4 and layer 2/3 neurons, single NMDA receptor currents were recorded with the patch-clamp method. Both cell types were found to contain high-conductance as well as low-conductance NMDA receptors. The results are consistent with the reported pharmacological data on synaptic plasticity, and with previous claims of a prominent role of low-conductance NMDA receptors in layer 4 spiny stellate neurons, including broad integration, amplification and distribution of excitation within the barrel in response to whisker stimulation, as well as modulation of excitability by ambient glutamate. However, layer 4 cells also expressed high-conductance NMDA receptors. The presence of low-conductance NMDA receptors in layer 2/3 pyramidal neurons suggests that some of these functions may be shared with layer 4 spiny stellate neurons. |
2403.13098 | Patrick Lawton | Patrick Lawton, Ashkaan K. Fahimipour, Kurt E. Anderson | Interspecific dispersal constraints suppress pattern formation in
metacommunities | null | null | null | null | q-bio.PE nlin.AO | http://creativecommons.org/licenses/by/4.0/ | Decisions to disperse from a habitat stand out among organismal behaviors as
pivotal drivers of ecosystem dynamics across scales. Encounters with other
species are an important component of adaptive decision-making in dispersal,
resulting in widespread behaviors like tracking resources or avoiding consumers
in space. Despite this, metacommunity models often treat dispersal as a
function of intraspecific density alone. We show, focusing initially on
three-species network motifs, that interspecific dispersal rules generally
drive a transition in metacommunities from homogeneous steady states to
self-organized heterogeneous spatial patterns. However, when ecologically
realistic constraints reflecting adaptive behaviors are imposed -- prey
tracking and predator avoidance -- a pronounced homogenizing effect emerges
where spatial pattern formation is suppressed. We demonstrate this effect for
each motif by computing master stability functions that separate the
contributions of local and spatial interactions to pattern formation. We extend
this result to species rich food webs using a random matrix approach, where we
find that eventually webs become large enough to override the homogenizing
effect of adaptive dispersal behaviors, leading once again to predominantly
pattern forming dynamics. Our results emphasize the critical role of
interspecific dispersal rules in shaping spatial patterns across landscapes,
highlighting the need to incorporate adaptive behavioral constraints in efforts
to link local species interactions and metacommunity structure.
| [
{
"created": "Tue, 19 Mar 2024 19:00:43 GMT",
"version": "v1"
}
] | 2024-03-21 | [
[
"Lawton",
"Patrick",
""
],
[
"Fahimipour",
"Ashkaan K.",
""
],
[
"Anderson",
"Kurt E.",
""
]
] | Decisions to disperse from a habitat stand out among organismal behaviors as pivotal drivers of ecosystem dynamics across scales. Encounters with other species are an important component of adaptive decision-making in dispersal, resulting in widespread behaviors like tracking resources or avoiding consumers in space. Despite this, metacommunity models often treat dispersal as a function of intraspecific density alone. We show, focusing initially on three-species network motifs, that interspecific dispersal rules generally drive a transition in metacommunities from homogeneous steady states to self-organized heterogeneous spatial patterns. However, when ecologically realistic constraints reflecting adaptive behaviors are imposed -- prey tracking and predator avoidance -- a pronounced homogenizing effect emerges where spatial pattern formation is suppressed. We demonstrate this effect for each motif by computing master stability functions that separate the contributions of local and spatial interactions to pattern formation. We extend this result to species rich food webs using a random matrix approach, where we find that eventually webs become large enough to override the homogenizing effect of adaptive dispersal behaviors, leading once again to predominantly pattern forming dynamics. Our results emphasize the critical role of interspecific dispersal rules in shaping spatial patterns across landscapes, highlighting the need to incorporate adaptive behavioral constraints in efforts to link local species interactions and metacommunity structure. |
2107.01706 | Hamid Rahkooy | Hamid Rahkooy, Thomas Sturm | Testing Binomiality of Chemical Reaction Networks Using Comprehensive
Gr\"obner Systems | null | null | null | null | q-bio.MN cs.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of binomiality of the steady state ideals of
biochemical reaction networks. We are interested in finding polynomial
conditions on the parameters such that the steady state ideal of a chemical
reaction network is binomial under every specialisation of the parameters if
the conditions on the parameters hold. We approach the binomiality problem
using Comprehensive Gr\"obner systems. Considering rate constants as
parameters, we compute comprehensive Gr\"obner systems for various reactions.
In particular, we make automatic computations on n-site phosphorylations and
biomodels from the Biomodels repository using the grobcov library of the
computer algebra system Singular.
| [
{
"created": "Sun, 4 Jul 2021 18:44:07 GMT",
"version": "v1"
},
{
"created": "Wed, 7 Jul 2021 09:15:25 GMT",
"version": "v2"
}
] | 2021-07-08 | [
[
"Rahkooy",
"Hamid",
""
],
[
"Sturm",
"Thomas",
""
]
] | We consider the problem of binomiality of the steady state ideals of biochemical reaction networks. We are interested in finding polynomial conditions on the parameters such that the steady state ideal of a chemical reaction network is binomial under every specialisation of the parameters if the conditions on the parameters hold. We approach the binomiality problem using Comprehensive Gr\"obner systems. Considering rate constants as parameters, we compute comprehensive Gr\"obner systems for various reactions. In particular, we make automatic computations on n-site phosphorylations and biomodels from the Biomodels repository using the grobcov library of the computer algebra system Singular. |
2010.16265 | Delfim F. M. Torres | Houssine Zine, Adnane Boukhouima, El Mehdi Lotfi, Marouane Mahrouf,
Delfim F. M. Torres, Noura Yousfi | A stochastic time-delayed model for the effectiveness of Moroccan
COVID-19 deconfinement strategy | This is a preprint of a paper whose final and definite form is
published by 'Mathematical Modelling of Natural Phenomena' at
[http://doi.org/10.1051/mmnp/2020040]. Paper Submitted 16-May-2020; Revised
20-Aug-2020; Accepted 28-Oct-2020 | Math. Model. Nat. Phenom. 15 (2020), Art. 50, 14 pp | 10.1051/mmnp/2020040 | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Coronavirus disease 2019 (COVID-19) poses a great threat to public health and
the economy worldwide. Currently, COVID-19 evolves in many countries to a
second stage, characterized by the need for the liberation of the economy and
relaxation of the human psychological effects. To this end, numerous countries
decided to implement adequate deconfinement strategies. After the first
prolongation of the established confinement, Morocco moves to the deconfinement
stage on May 20, 2020. The relevant question concerns the impact on the
COVID-19 propagation by considering an additional degree of realism related to
stochastic noises due to the effectiveness level of the adapted measures. In
this paper, we propose a delayed stochastic mathematical model to predict the
epidemiological trend of COVID-19 in Morocco after the deconfinement. To ensure
the well-posedness of the model, we prove the existence and uniqueness of a
positive solution. Based on the large number theorem for martingales, we
discuss the extinction of the disease under an appropriate threshold parameter.
Moreover, numerical simulations are performed in order to test the efficiency
of the deconfinement strategies chosen by the Moroccan authorities to help the
policy makers and public health administration to make suitable decisions in
the near future.
| [
{
"created": "Wed, 28 Oct 2020 18:45:02 GMT",
"version": "v1"
}
] | 2020-11-12 | [
[
"Zine",
"Houssine",
""
],
[
"Boukhouima",
"Adnane",
""
],
[
"Lotfi",
"El Mehdi",
""
],
[
"Mahrouf",
"Marouane",
""
],
[
"Torres",
"Delfim F. M.",
""
],
[
"Yousfi",
"Noura",
""
]
] | Coronavirus disease 2019 (COVID-19) poses a great threat to public health and the economy worldwide. Currently, COVID-19 evolves in many countries to a second stage, characterized by the need for the liberation of the economy and relaxation of the human psychological effects. To this end, numerous countries decided to implement adequate deconfinement strategies. After the first prolongation of the established confinement, Morocco moves to the deconfinement stage on May 20, 2020. The relevant question concerns the impact on the COVID-19 propagation by considering an additional degree of realism related to stochastic noises due to the effectiveness level of the adapted measures. In this paper, we propose a delayed stochastic mathematical model to predict the epidemiological trend of COVID-19 in Morocco after the deconfinement. To ensure the well-posedness of the model, we prove the existence and uniqueness of a positive solution. Based on the large number theorem for martingales, we discuss the extinction of the disease under an appropriate threshold parameter. Moreover, numerical simulations are performed in order to test the efficiency of the deconfinement strategies chosen by the Moroccan authorities to help the policy makers and public health administration to make suitable decisions in the near future. |
2006.11036 | Friedrich Schuessler | Friedrich Schuessler, Francesca Mastrogiuseppe, Alexis Dubreuil,
Srdjan Ostojic, Omri Barak | The interplay between randomness and structure during learning in RNNs | Presented at Neurips 2020 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent neural networks (RNNs) trained on low-dimensional tasks have been
widely used to model functional biological networks. However, the solutions
found by learning and the effect of initial connectivity are not well
understood. Here, we examine RNNs trained using gradient descent on different
tasks inspired by the neuroscience literature. We find that the changes in
recurrent connectivity can be described by low-rank matrices, despite the
unconstrained nature of the learning algorithm. To identify the origin of the
low-rank structure, we turn to an analytically tractable setting: training a
linear RNN on a simplified task. We show how the low-dimensional task structure
leads to low-rank changes to connectivity. This low-rank structure allows us to
explain and quantify the phenomenon of accelerated learning in the presence of
random initial connectivity. Altogether, our study opens a new perspective to
understanding trained RNNs in terms of both the learning process and the
resulting network structure.
| [
{
"created": "Fri, 19 Jun 2020 09:40:19 GMT",
"version": "v1"
},
{
"created": "Sun, 25 Oct 2020 17:57:31 GMT",
"version": "v2"
},
{
"created": "Tue, 16 Mar 2021 13:18:02 GMT",
"version": "v3"
},
{
"created": "Thu, 13 May 2021 19:14:49 GMT",
"version": "v4"
}
] | 2021-05-17 | [
[
"Schuessler",
"Friedrich",
""
],
[
"Mastrogiuseppe",
"Francesca",
""
],
[
"Dubreuil",
"Alexis",
""
],
[
"Ostojic",
"Srdjan",
""
],
[
"Barak",
"Omri",
""
]
] | Recurrent neural networks (RNNs) trained on low-dimensional tasks have been widely used to model functional biological networks. However, the solutions found by learning and the effect of initial connectivity are not well understood. Here, we examine RNNs trained using gradient descent on different tasks inspired by the neuroscience literature. We find that the changes in recurrent connectivity can be described by low-rank matrices, despite the unconstrained nature of the learning algorithm. To identify the origin of the low-rank structure, we turn to an analytically tractable setting: training a linear RNN on a simplified task. We show how the low-dimensional task structure leads to low-rank changes to connectivity. This low-rank structure allows us to explain and quantify the phenomenon of accelerated learning in the presence of random initial connectivity. Altogether, our study opens a new perspective to understanding trained RNNs in terms of both the learning process and the resulting network structure. |
1706.10145 | Alexander L\"uck | Alexander L\"uck, Pascal Giehr, J\"orn Walter, Verena Wolf | A Stochastic Model for the Formation of Spatial Methylation Patterns | 18 pages, 7 figures, content of former appendix now included in the
main part; accepted by 15th International Conference on Computational Methods
in Systems Biology (CMSB), 2017 | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DNA methylation is an epigenetic mechanism whose important role in
development has been widely recognized. This epigenetic modification results in
heritable changes in gene expression not encoded by the DNA sequence. The
underlying mechanisms controlling DNA methylation are only partly understood
and recently different mechanistic models of enzyme activities responsible for
DNA methylation have been proposed. Here we extend existing Hidden Markov
Models (HMMs) for DNA methylation by describing the occurrence of spatial
methylation patterns over time and propose several models with different
neighborhood dependencies. We perform numerical analysis of the HMMs applied to
bisulfite sequencing measurements and accurately predict wild-type data. In
addition, we find evidence that the enzymes' activities depend on the left 5'
neighborhood but not on the right 3' neighborhood.
| [
{
"created": "Fri, 30 Jun 2017 11:44:06 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Jul 2017 07:37:49 GMT",
"version": "v2"
}
] | 2017-07-11 | [
[
"Lück",
"Alexander",
""
],
[
"Giehr",
"Pascal",
""
],
[
"Walter",
"Jörn",
""
],
[
"Wolf",
"Verena",
""
]
] | DNA methylation is an epigenetic mechanism whose important role in development has been widely recognized. This epigenetic modification results in heritable changes in gene expression not encoded by the DNA sequence. The underlying mechanisms controlling DNA methylation are only partly understood and recently different mechanistic models of enzyme activities responsible for DNA methylation have been proposed. Here we extend existing Hidden Markov Models (HMMs) for DNA methylation by describing the occurrence of spatial methylation patterns over time and propose several models with different neighborhood dependencies. We perform numerical analysis of the HMMs applied to bisulfite sequencing measurements and accurately predict wild-type data. In addition, we find evidence that the enzymes' activities depend on the left 5' neighborhood but not on the right 3' neighborhood. |
2004.12503 | Arturo Sanchez-Lorenzo | Arturo Sanchez-Lorenzo, Javier Vaquero-Mart\'inez, Josep Calb\'o,
Martin Wild, Ana Santurt\'un, Joan-A. Lopez-Bustins, Jose-M. Vaquero, Doris
Folini, Manuel Ant\'on | Anomalous atmospheric circulation favored the spread of COVID-19 in
Europe | 22 pages, 4 figures, Supplementary Information with 8 figures | null | null | null | q-bio.PE physics.ao-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The current pandemic caused by the coronavirus SARS-CoV-2 is having negative
health, social and economic consequences worldwide. In Europe, the pandemic
started to develop strongly at the end of February and beginning of March 2020.
It has subsequently spread over the continent, with special virulence in
northern Italy and inland Spain. In this study we show that an unusual
persistent anticyclonic situation prevailing in southwestern Europe during
February 2020 (i.e. anomalously strong positive phase of the North Atlantic and
Arctic Oscillations) could have resulted in favorable conditions, in terms of
air temperature and humidity, in Italy and Spain for a quicker spread of the
virus compared with the rest of the European countries. It seems plausible that
the strong atmospheric stability and associated dry conditions that dominated
in these regions may have favored the virus's propagation, by short-range
droplet transmission as well as likely by long-range aerosol (airborne)
transmission.
| [
{
"created": "Sun, 26 Apr 2020 23:23:36 GMT",
"version": "v1"
}
] | 2020-04-28 | [
[
"Sanchez-Lorenzo",
"Arturo",
""
],
[
"Vaquero-Martínez",
"Javier",
""
],
[
"Calbó",
"Josep",
""
],
[
"Wild",
"Martin",
""
],
[
"Santurtún",
"Ana",
""
],
[
"Lopez-Bustins",
"Joan-A.",
""
],
[
"Vaquero",
"Jose-M.",
""
],
[
"Folini",
"Doris",
""
],
[
"Antón",
"Manuel",
""
]
] | The current pandemic caused by the coronavirus SARS-CoV-2 is having negative health, social and economic consequences worldwide. In Europe, the pandemic started to develop strongly at the end of February and beginning of March 2020. It has subsequently spread over the continent, with special virulence in northern Italy and inland Spain. In this study we show that an unusual persistent anticyclonic situation prevailing in southwestern Europe during February 2020 (i.e. anomalously strong positive phase of the North Atlantic and Arctic Oscillations) could have resulted in favorable conditions, in terms of air temperature and humidity, in Italy and Spain for a quicker spread of the virus compared with the rest of the European countries. It seems plausible that the strong atmospheric stability and associated dry conditions that dominated in these regions may have favored the virus's propagation, by short-range droplet transmission as well as likely by long-range aerosol (airborne) transmission. |
1602.00776 | Pan-Jun Kim | Mathias Foo, David E. Somers, Pan-Jun Kim | Kernel Architecture of the Genetic Circuitry of the Arabidopsis
Circadian System | Supplementary material is available at the journal website | PLoS Comput. Biol. 12, e1004748 (2016) | 10.1371/journal.pcbi.1004748 | null | q-bio.MN physics.bio-ph q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A wide range of organisms features molecular machines, circadian clocks,
which generate endogenous oscillations with ~24 h periodicity and thereby
synchronize biological processes to diurnal environmental fluctuations.
Recently, it has become clear that plants harbor more complex gene regulatory
circuits within the core circadian clocks than other organisms, inspiring a
fundamental question: are all these regulatory interactions between clock genes
equally crucial for the establishment and maintenance of circadian rhythms? Our
mechanistic simulation for Arabidopsis thaliana demonstrates that at least half
of the total regulatory interactions must be present to express the circadian
molecular profiles observed in wild-type plants. A set of those essential
interactions is called herein a kernel of the circadian system. The kernel
structure unbiasedly reveals four interlocked negative feedback loops
contributing to circadian rhythms, and three feedback loops among them drive
the autonomous oscillation itself. Strikingly, the kernel structure, as well as
the whole clock circuitry, is overwhelmingly composed of inhibitory, rather
than activating, interactions between genes. We found that this tendency
underlies plant circadian molecular profiles which often exhibit
sharply-shaped, cuspidate waveforms. Through the generation of these cuspidate
profiles, inhibitory interactions may facilitate the global coordination of
temporally-distant clock events that are markedly peaked at very specific times
of day. Our systematic approach resulting in experimentally-testable
predictions provides insights into a design principle of biological clockwork,
with implications for synthetic biology.
| [
{
"created": "Tue, 2 Feb 2016 03:24:24 GMT",
"version": "v1"
}
] | 2016-02-03 | [
[
"Foo",
"Mathias",
""
],
[
"Somers",
"David E.",
""
],
[
"Kim",
"Pan-Jun",
""
]
] | A wide range of organisms features molecular machines, circadian clocks, which generate endogenous oscillations with ~24 h periodicity and thereby synchronize biological processes to diurnal environmental fluctuations. Recently, it has become clear that plants harbor more complex gene regulatory circuits within the core circadian clocks than other organisms, inspiring a fundamental question: are all these regulatory interactions between clock genes equally crucial for the establishment and maintenance of circadian rhythms? Our mechanistic simulation for Arabidopsis thaliana demonstrates that at least half of the total regulatory interactions must be present to express the circadian molecular profiles observed in wild-type plants. A set of those essential interactions is called herein a kernel of the circadian system. The kernel structure unbiasedly reveals four interlocked negative feedback loops contributing to circadian rhythms, and three feedback loops among them drive the autonomous oscillation itself. Strikingly, the kernel structure, as well as the whole clock circuitry, is overwhelmingly composed of inhibitory, rather than activating, interactions between genes. We found that this tendency underlies plant circadian molecular profiles which often exhibit sharply-shaped, cuspidate waveforms. Through the generation of these cuspidate profiles, inhibitory interactions may facilitate the global coordination of temporally-distant clock events that are markedly peaked at very specific times of day. Our systematic approach resulting in experimentally-testable predictions provides insights into a design principle of biological clockwork, with implications for synthetic biology. |
2207.02314 | Lucas Flores | Lucas S. Flores, Marco A. Amaral, Mendeli H. Vainstein, Heitor C. M.
Fernandes | Cooperation in regular lattices | null | null | 10.1016/j.chaos.2022.112744 | null | q-bio.PE cond-mat.stat-mech physics.comp-ph physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | In the context of Evolutionary Game Theory, one of the most noteworthy
mechanisms to support cooperation is spatial reciprocity, usually accomplished
by distributing players in a spatial structure allowing cooperators to cluster
together and avoid exploitation. This raises an important question: how is the
survival of cooperation affected by different topologies? Here, to address this
question, we explore the Focal Public Goods (FPGG) and classic Public Goods
Games (PGG), and the Prisoner's Dilemma (PD) on several regular lattices:
honeycomb, square (with von Neumann and Moore neighborhoods), kagome,
triangular, cubic, and 4D hypercubic lattices using both analytical methods and
agent-based Monte Carlo simulations. We found that for both Public Goods Games,
a consistent trend appears on all two-dimensional lattices: as the number of
first neighbors increases, cooperation is enhanced. However, this is only
visible by analysing the results in terms of the payoff's synergistic factor
normalized by the number of connections. Besides this, clustered topologies,
i.e., those that allow two connected players to share neighbors, are the most
beneficial to cooperation for the FPGG. The same is not always true for the
classic PGG, where having shared neighbors between connected players may or may
not benefit cooperation. We also provide a reinterpretation of the classic PGG
as a focal game by representing the lattice structure of this category of games
as a single interaction game with longer-ranged, weighted neighborhoods, an
approach valid for any regular lattice topology. Finally, we show that
depending on the payoff parametrization of the PD, there can be an equivalency
between the PD and the FPGG; when the mapping between the two games is
imperfect, the definition of an effective synergy parameter can still be useful
to show their similarities.
| [
{
"created": "Tue, 5 Jul 2022 21:05:25 GMT",
"version": "v1"
}
] | 2022-10-12 | [
[
"Flores",
"Lucas S.",
""
],
[
"Amaral",
"Marco A.",
""
],
[
"Vainstein",
"Mendeli H.",
""
],
[
"Fernandes",
"Heitor C. M.",
""
]
] | In the context of Evolutionary Game Theory, one of the most noteworthy mechanisms to support cooperation is spatial reciprocity, usually accomplished by distributing players in a spatial structure allowing cooperators to cluster together and avoid exploitation. This raises an important question: how is the survival of cooperation affected by different topologies? Here, to address this question, we explore the Focal Public Goods (FPGG) and classic Public Goods Games (PGG), and the Prisoner's Dilemma (PD) on several regular lattices: honeycomb, square (with von Neumann and Moore neighborhoods), kagome, triangular, cubic, and 4D hypercubic lattices using both analytical methods and agent-based Monte Carlo simulations. We found that for both Public Goods Games, a consistent trend appears on all two-dimensional lattices: as the number of first neighbors increases, cooperation is enhanced. However, this is only visible by analysing the results in terms of the payoff's synergistic factor normalized by the number of connections. Besides this, clustered topologies, i.e., those that allow two connected players to share neighbors, are the most beneficial to cooperation for the FPGG. The same is not always true for the classic PGG, where having shared neighbors between connected players may or may not benefit cooperation. We also provide a reinterpretation of the classic PGG as a focal game by representing the lattice structure of this category of games as a single interaction game with longer-ranged, weighted neighborhoods, an approach valid for any regular lattice topology. Finally, we show that depending on the payoff parametrization of the PD, there can be an equivalency between the PD and the FPGG; when the mapping between the two games is imperfect, the definition of an effective synergy parameter can still be useful to show their similarities. |
q-bio/0405025 | Mark Ya. Azbel' | Mark Ya. Azbel' | Universal Mortality Law, Life Expectancy and Immortality | null | null | 10.1016/j.physa.2004.06.065 | null | q-bio.PE q-bio.QM | null | Well protected human and laboratory animal populations with abundant
resources are evolutionary unprecedented, and their survival far beyond
reproductive age may be a byproduct rather than tool of evolution. Physical
approach, which takes advantage of their extensively quantified mortality,
establishes that its dominant fraction yields the exact law, and suggests its
unusual mechanism. The law is universal for all animals, from yeast to humans,
despite their drastically different biology and evolution. It predicts that the
universal mortality has short memory of the life history, at any age may be
reset to its value at a significantly younger age, and mean life expectancy
extended (by biologically unprecedented small changes) from its current maximal
value to immortality. Mortality change is rapid and stepwise. Demographic data
and recent experiments verify these predictions for humans, rats, flies,
nematodes and yeast. In particular, mean life expectancy increased 6-fold (to
"human" 430 years), with no apparent loss in health and vitality, in nematodes
with a small number of perturbed genes and tissues. Universality allows one to
study the unusual mortality mechanism and the ways to immortality.
| [
{
"created": "Sat, 29 May 2004 08:22:02 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Jun 2004 20:51:11 GMT",
"version": "v2"
}
] | 2015-06-26 | [
[
"Azbel'",
"Mark Ya.",
""
]
] | Well-protected human and laboratory animal populations with abundant resources are evolutionarily unprecedented, and their survival far beyond reproductive age may be a byproduct rather than a tool of evolution. A physical approach, which takes advantage of their extensively quantified mortality, establishes that its dominant fraction yields the exact law, and suggests its unusual mechanism. The law is universal for all animals, from yeast to humans, despite their drastically different biology and evolution. It predicts that the universal mortality has short memory of the life history, at any age may be reset to its value at a significantly younger age, and mean life expectancy extended (by biologically unprecedented small changes) from its current maximal value to immortality. Mortality change is rapid and stepwise. Demographic data and recent experiments verify these predictions for humans, rats, flies, nematodes and yeast. In particular, mean life expectancy increased 6-fold (to "human" 430 years), with no apparent loss in health and vitality, in nematodes with a small number of perturbed genes and tissues. Universality allows one to study the unusual mortality mechanism and the ways to immortality. |
1907.00849 | R\'obert Juh\'asz | R. Juh\'asz, I. A. Kov\'acs | Population boundary across an environmental gradient: Effects of
quenched disorder | 13 pages, 14 figures | Phys. Rev. Research 2, 013123 (2020) | 10.1103/PhysRevResearch.2.013123 | null | q-bio.PE cond-mat.dis-nn cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Population boundary is a classic indicator of climatic response in ecology.
In addition to known challenges, the spatial and dynamical characteristics of
the boundary are not only affected by the spatial gradient in the environmental
factors, but also by local heterogeneities in the regional characteristics.
Here, we capture the effects of quenched heterogeneities on the ecological
boundary with the disordered contact process in one and two dimensions with a
linear spatial trend in the local control parameter. We apply the
strong-disorder renormalization group method to calculate the sites occupied
with an $O(1)$ probability in the stationary state, readily yielding the
population front's position as the outermost site locally as well as globally
for the entire boundary. We show that under a quasistatic change of the global
environment, mimicking climate change, the front advances intermittently: long
quiescent periods are interrupted by rare but long jumps. The characteristics
of this intermittent dynamics are found to obey universal scaling laws in terms
of the gradient, conjectured to be related to the correlation-length exponent
of the model. Our results suggest that current observations might misleadingly
show little to no climate response for an extended period of time, concealing
the long-term effects of climate change.
| [
{
"created": "Mon, 1 Jul 2019 15:16:34 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Feb 2020 17:02:47 GMT",
"version": "v2"
}
] | 2020-02-06 | [
[
"Juhász",
"R.",
""
],
[
"Kovács",
"I. A.",
""
]
] | Population boundary is a classic indicator of climatic response in ecology. In addition to known challenges, the spatial and dynamical characteristics of the boundary are not only affected by the spatial gradient in the environmental factors, but also by local heterogeneities in the regional characteristics. Here, we capture the effects of quenched heterogeneities on the ecological boundary with the disordered contact process in one and two dimensions with a linear spatial trend in the local control parameter. We apply the strong-disorder renormalization group method to calculate the sites occupied with an $O(1)$ probability in the stationary state, readily yielding the population front's position as the outermost site locally as well as globally for the entire boundary. We show that under a quasistatic change of the global environment, mimicking climate change, the front advances intermittently: long quiescent periods are interrupted by rare but long jumps. The characteristics of this intermittent dynamics are found to obey universal scaling laws in terms of the gradient, conjectured to be related to the correlation-length exponent of the model. Our results suggest that current observations might misleadingly show little to no climate response for an extended period of time, concealing the long-term effects of climate change. |
1010.4517 | Jake Bouvrie | Jake Bouvrie, Jean-Jacques Slotine | Synchronization and Redundancy: Implications for Robustness of Neural
Learning and Decision Making | Preprint, accepted for publication in Neural Computation | null | null | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning and decision making in the brain are key processes critical to
survival, and yet are processes implemented by non-ideal biological building
blocks which can impose significant error. We explore quantitatively how the
brain might cope with this inherent source of error by taking advantage of two
ubiquitous mechanisms, redundancy and synchronization. In particular we
consider a neural process whose goal is to learn a decision function by
implementing a nonlinear gradient dynamics. The dynamics, however, are assumed
to be corrupted by perturbations modeling the error which might be incurred due
to limitations of the biology, intrinsic neuronal noise, and imperfect
measurements. We show that error, and the associated uncertainty surrounding a
learned solution, can be controlled in large part by trading off
synchronization strength among multiple redundant neural systems against the
noise amplitude. The impact of the coupling between such redundant systems is
quantified by the spectrum of the network Laplacian, and we discuss the role of
network topology in synchronization and in reducing the effect of noise. A
range of situations in which the mechanisms we model arise in brain science are
discussed, and we draw attention to experimental evidence suggesting that
cortical circuits capable of implementing the computations of interest here can
be found on several scales. Finally, simulations comparing theoretical bounds
to the relevant empirical quantities show that the theoretical estimates we
derive can be tight.
| [
{
"created": "Thu, 21 Oct 2010 16:34:43 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Apr 2011 17:01:04 GMT",
"version": "v2"
}
] | 2011-04-19 | [
[
"Bouvrie",
"Jake",
""
],
[
"Slotine",
"Jean-Jacques",
""
]
] | Learning and decision making in the brain are key processes critical to survival, and yet are processes implemented by non-ideal biological building blocks which can impose significant error. We explore quantitatively how the brain might cope with this inherent source of error by taking advantage of two ubiquitous mechanisms, redundancy and synchronization. In particular we consider a neural process whose goal is to learn a decision function by implementing a nonlinear gradient dynamics. The dynamics, however, are assumed to be corrupted by perturbations modeling the error which might be incurred due to limitations of the biology, intrinsic neuronal noise, and imperfect measurements. We show that error, and the associated uncertainty surrounding a learned solution, can be controlled in large part by trading off synchronization strength among multiple redundant neural systems against the noise amplitude. The impact of the coupling between such redundant systems is quantified by the spectrum of the network Laplacian, and we discuss the role of network topology in synchronization and in reducing the effect of noise. A range of situations in which the mechanisms we model arise in brain science are discussed, and we draw attention to experimental evidence suggesting that cortical circuits capable of implementing the computations of interest here can be found on several scales. Finally, simulations comparing theoretical bounds to the relevant empirical quantities show that the theoretical estimates we derive can be tight. |
2106.00637 | Emmanuelle Tognoli | Emmanuelle Tognoli, Daniela Benites, J. A. Scott Kelso | A Blueprint for the Study of the Brain's Spatiotemporal Patterns | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The functioning of an organ such as the brain emerges from interactions
between its constituent parts. Further, this interaction is not immutable in
time but rather unfolds in a succession of patterns, thereby allowing the brain
to adapt to constantly changing exterior and interior milieus. This calls for a
framework able to study patterned spatiotemporal interactions between
components of the brain. A theoretical and methodological framework is
developed to study the brain's coordination dynamics. Here we present a toolset
designed to decipher the continuous dynamics of electrophysiological data and
its relation to (dys-) function. Understanding the spatiotemporal organization
of brain patterns and their association with behavioral, cognitive and
clinically-relevant variables is an important challenge for the fields of
neuroscience and biologically-inspired engineering. It is hoped that such a
comprehensive framework will shed light not only on human behavior and the
human mind but also help in understanding the growing number of pathologies
that are linked to disorders of brain connectivity.
| [
{
"created": "Tue, 1 Jun 2021 17:09:37 GMT",
"version": "v1"
}
] | 2021-06-02 | [
[
"Tognoli",
"Emmanuelle",
""
],
[
"Benites",
"Daniela",
""
],
[
"Kelso",
"J. A. Scott",
""
]
] | The functioning of an organ such as the brain emerges from interactions between its constituent parts. Further, this interaction is not immutable in time but rather unfolds in a succession of patterns, thereby allowing the brain to adapt to constantly changing exterior and interior milieus. This calls for a framework able to study patterned spatiotemporal interactions between components of the brain. A theoretical and methodological framework is developed to study the brain's coordination dynamics. Here we present a toolset designed to decipher the continuous dynamics of electrophysiological data and its relation to (dys-) function. Understanding the spatiotemporal organization of brain patterns and their association with behavioral, cognitive and clinically-relevant variables is an important challenge for the fields of neuroscience and biologically-inspired engineering. It is hoped that such a comprehensive framework will shed light not only on human behavior and the human mind but also help in understanding the growing number of pathologies that are linked to disorders of brain connectivity. |
1110.2189 | Jaewook Joo | Jaewook Joo and Jinmyung Choi | Network architectural conditions for prominent and robust stochastic
oscillations | 5 figures | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the relationship between noisy dynamics and biological network
architecture is a fundamentally important question, particularly in order to
elucidate how cells encode and process information. We analytically and
numerically investigate general network architectural conditions that are
necessary to generate stochastic amplified and coherent oscillations. We
enumerate all possible topologies of coupled negative feedbacks in the
underlying biochemical networks with three components, negative feedback loops,
and mass action kinetics. Using the linear noise approximation to analytically
obtain the time-dependent solution of the master equation and derive the
algebraic expression of power spectra, we find that (a) all networks with
coupled negative feedbacks are capable of generating stochastic amplified and
coherent oscillations; (b) networks with a single negative feedback are better
stochastic amplified and coherent oscillators than those with multiple coupled
negative feedbacks; (c) multiple timescale difference among the kinetic rate
constants is required for stochastic amplified and coherent oscillations.
| [
{
"created": "Mon, 10 Oct 2011 20:11:37 GMT",
"version": "v1"
}
] | 2011-10-12 | [
[
"Joo",
"Jaewook",
""
],
[
"Choi",
"Jinmyung",
""
]
] | Understanding the relationship between noisy dynamics and biological network architecture is a fundamentally important question, particularly in order to elucidate how cells encode and process information. We analytically and numerically investigate general network architectural conditions that are necessary to generate stochastic amplified and coherent oscillations. We enumerate all possible topologies of coupled negative feedbacks in the underlying biochemical networks with three components, negative feedback loops, and mass action kinetics. Using the linear noise approximation to analytically obtain the time-dependent solution of the master equation and derive the algebraic expression of power spectra, we find that (a) all networks with coupled negative feedbacks are capable of generating stochastic amplified and coherent oscillations; (b) networks with a single negative feedback are better stochastic amplified and coherent oscillators than those with multiple coupled negative feedbacks; (c) multiple timescale difference among the kinetic rate constants is required for stochastic amplified and coherent oscillations. |
2212.07747 | Paul Jenkins | Robert C. Griffiths and Paul A. Jenkins | An estimator for the recombination rate from a continuously observed
diffusion of haplotype frequencies | 28 pages, 3 figures | null | null | null | q-bio.PE math.PR math.ST stat.TH | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recombination is a fundamental evolutionary force, but it is difficult to
quantify because the effect of a recombination event on patterns of variation
in a sample of genetic data can be hard to discern. Estimators for the
recombination rate, which are usually based on the idea of integrating over the
unobserved possible evolutionary histories of a sample, can therefore be noisy.
Here we consider a related question: how would an estimator behave if the
evolutionary history actually was observed? This would offer an upper bound on
the performance of estimators used in practice. In this paper we derive an
expression for the maximum likelihood estimator for the recombination rate
based on a continuously observed, multi-locus, Wright--Fisher diffusion of
haplotype frequencies, complementing existing work for an estimator of
selection. We show that, contrary to selection, the estimator has unusual
properties because the observed information matrix can explode in finite time
whereupon the recombination parameter is learned without error. We also show
that the recombination estimator is robust to the presence of selection in the
sense that incorporating selection into the model leaves the estimator
unchanged. We study the properties of the estimator by simulation and show that
its distribution can be quite sensitive to the underlying mutation rates.
| [
{
"created": "Thu, 15 Dec 2022 11:59:30 GMT",
"version": "v1"
},
{
"created": "Thu, 4 May 2023 07:53:47 GMT",
"version": "v2"
}
] | 2023-05-05 | [
[
"Griffiths",
"Robert C.",
""
],
[
"Jenkins",
"Paul A.",
""
]
] | Recombination is a fundamental evolutionary force, but it is difficult to quantify because the effect of a recombination event on patterns of variation in a sample of genetic data can be hard to discern. Estimators for the recombination rate, which are usually based on the idea of integrating over the unobserved possible evolutionary histories of a sample, can therefore be noisy. Here we consider a related question: how would an estimator behave if the evolutionary history actually was observed? This would offer an upper bound on the performance of estimators used in practice. In this paper we derive an expression for the maximum likelihood estimator for the recombination rate based on a continuously observed, multi-locus, Wright--Fisher diffusion of haplotype frequencies, complementing existing work for an estimator of selection. We show that, contrary to selection, the estimator has unusual properties because the observed information matrix can explode in finite time whereupon the recombination parameter is learned without error. We also show that the recombination estimator is robust to the presence of selection in the sense that incorporating selection into the model leaves the estimator unchanged. We study the properties of the estimator by simulation and show that its distribution can be quite sensitive to the underlying mutation rates. |
2110.03518 | Johannes Kleiner | Johannes Kleiner, Stephan Hartmann | The Closure of the Physical, Consciousness and Scientific Practice | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyse the implications of the closure of the physical for experiments in
the scientific study of consciousness when all the details are considered,
especially how measurement results relate to physical events. It turns out that
the closure of the physical has surprising implications that conflict with
scientific practice. These implications point to a fundamental flaw in the
paradigm underlying many experiments conducted to date and pose a challenge to
any research programme that aims to ground a physical functionalist or
identity-based understanding of consciousness on empirical observations.
| [
{
"created": "Fri, 24 Sep 2021 14:26:18 GMT",
"version": "v1"
},
{
"created": "Sun, 5 Feb 2023 18:26:41 GMT",
"version": "v2"
}
] | 2023-02-07 | [
[
"Kleiner",
"Johannes",
""
],
[
"Hartmann",
"Stephan",
""
]
] | We analyse the implications of the closure of the physical for experiments in the scientific study of consciousness when all the details are considered, especially how measurement results relate to physical events. It turns out that the closure of the physical has surprising implications that conflict with scientific practice. These implications point to a fundamental flaw in the paradigm underlying many experiments conducted to date and pose a challenge to any research programme that aims to ground a physical functionalist or identity-based understanding of consciousness on empirical observations. |
1708.02967 | Daniel Toker | Daniel Toker and Friedrich T. Sommer | Information Integration In Large Brain Networks | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An outstanding problem in neuroscience is to understand how information is
integrated across the many modules of the brain. While classic
information-theoretic measures have transformed our understanding of
feedforward information processing in the brain's sensory periphery, comparable
measures for information flow in the massively recurrent networks of the rest
of the brain have been lacking. To address this, recent work in information
theory has produced a sound measure of network-wide "integrated information,"
which can be estimated from time-series data. But, a computational hurdle has
stymied attempts to measure large-scale information integration in real brains.
Specifically, the measurement of integrated information involves a
combinatorial search for the informational "weakest link" of a network, a
process whose computation time explodes super-exponentially with network size.
Here, we show that spectral clustering, applied on the correlation matrix of
time-series data, provides an approximate but robust solution to the search for
the informational weakest link of large networks. This reduces the
computation time for integrated information in large systems from longer than
the lifespan of the universe to just minutes. We evaluate this solution in
brain-like systems of coupled oscillators as well as in high-density
electrocorticography data from two macaque monkeys, and show that the
informational "weakest link" of the monkey cortex splits posterior sensory
areas from anterior association areas. Finally, we use our solution to provide
evidence in support of the long-standing hypothesis that information
integration is maximized by networks with a high global efficiency, and that
modular network structures promote the segregation of information.
| [
{
"created": "Wed, 9 Aug 2017 18:37:54 GMT",
"version": "v1"
},
{
"created": "Tue, 23 Jan 2018 21:26:58 GMT",
"version": "v2"
},
{
"created": "Fri, 8 Feb 2019 21:16:03 GMT",
"version": "v3"
}
] | 2019-02-12 | [
[
"Toker",
"Daniel",
""
],
[
"Sommer",
"Friedrich T.",
""
]
] | An outstanding problem in neuroscience is to understand how information is integrated across the many modules of the brain. While classic information-theoretic measures have transformed our understanding of feedforward information processing in the brain's sensory periphery, comparable measures for information flow in the massively recurrent networks of the rest of the brain have been lacking. To address this, recent work in information theory has produced a sound measure of network-wide "integrated information," which can be estimated from time-series data. But, a computational hurdle has stymied attempts to measure large-scale information integration in real brains. Specifically, the measurement of integrated information involves a combinatorial search for the informational "weakest link" of a network, a process whose computation time explodes super-exponentially with network size. Here, we show that spectral clustering, applied on the correlation matrix of time-series data, provides an approximate but robust solution to the search for the informational weakest link of large networks. This reduces the computation time for integrated information in large systems from longer than the lifespan of the universe to just minutes. We evaluate this solution in brain-like systems of coupled oscillators as well as in high-density electrocorticography data from two macaque monkeys, and show that the informational "weakest link" of the monkey cortex splits posterior sensory areas from anterior association areas. Finally, we use our solution to provide evidence in support of the long-standing hypothesis that information integration is maximized by networks with a high global efficiency, and that modular network structures promote the segregation of information. |
1405.2120 | Christopher Whidden | Christopher Whidden and Frederick A. Matsen IV | Quantifying MCMC Exploration of Phylogenetic Tree Space | 62 pages, 17 figures; revised in response to peer review | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to gain an understanding of the effectiveness of phylogenetic Markov
chain Monte Carlo (MCMC), it is important to understand how quickly the
empirical distribution of the MCMC converges to the posterior distribution. In
this paper we investigate this problem on phylogenetic tree topologies with a
metric that is especially well suited to the task: the subtree
prune-and-regraft (SPR) metric. This metric directly corresponds to the minimum
number of MCMC rearrangements required to move between trees in common
phylogenetic MCMC implementations. We develop a novel graph-based approach to
analyze tree posteriors and find that the SPR metric is much more informative
than simpler metrics that are unrelated to MCMC moves. In doing so we show
conclusively that topological peaks do occur in Bayesian phylogenetic
posteriors from real data sets as sampled with standard MCMC approaches,
investigate the efficiency of Metropolis-coupled MCMC (MCMCMC) in traversing
the valleys between peaks, and show that conditional clade distribution (CCD)
can have systematic problems when there are multiple peaks.
| [
{
"created": "Thu, 8 May 2014 23:03:35 GMT",
"version": "v1"
},
{
"created": "Fri, 17 Oct 2014 17:56:04 GMT",
"version": "v2"
}
] | 2014-10-20 | [
[
"Whidden",
"Christopher",
""
],
[
"Matsen",
"Frederick A.",
"IV"
]
] | In order to gain an understanding of the effectiveness of phylogenetic Markov chain Monte Carlo (MCMC), it is important to understand how quickly the empirical distribution of the MCMC converges to the posterior distribution. In this paper we investigate this problem on phylogenetic tree topologies with a metric that is especially well suited to the task: the subtree prune-and-regraft (SPR) metric. This metric directly corresponds to the minimum number of MCMC rearrangements required to move between trees in common phylogenetic MCMC implementations. We develop a novel graph-based approach to analyze tree posteriors and find that the SPR metric is much more informative than simpler metrics that are unrelated to MCMC moves. In doing so we show conclusively that topological peaks do occur in Bayesian phylogenetic posteriors from real data sets as sampled with standard MCMC approaches, investigate the efficiency of Metropolis-coupled MCMC (MCMCMC) in traversing the valleys between peaks, and show that conditional clade distribution (CCD) can have systematic problems when there are multiple peaks. |
2407.15322 | Daniel Packwood Dr | Fatemeh Etezadi, Shunichi Ito, Kosuke Yasui, Rodi Kado Abdalkader,
Itsunari Minami, Motonari Uesugi, Ganesh Pandian Namasivayam, Haruko Nakano,
Atsushi Nakano, Daniel M. Packwood | Molecular design for cardiac cell differentiation using a small dataset
and decorated shape features | 26 pages (main paper), including 7 figures and 3 tables. 23 pages of
supporting information. To be submitted to a journal | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | The discovery of small organic compounds for inducing stem cell
differentiation is a time- and resource-intensive process. While data science
could, in principle, facilitate the discovery of these compounds, novel
approaches are required due to the difficulty of acquiring training data from
large numbers of example compounds. In this paper, we demonstrate the design of
a new compound for inducing cardiomyocyte differentiation using simple
regression models trained with a data set containing only 80 examples. We
introduce decorated shape descriptors, an information-rich molecular feature
representation that integrates both molecular shape and hydrophilicity
information. These models demonstrate improved performance compared to ones
using standard molecular descriptors based on shape alone. Model overtraining
is diagnosed using a new type of sensitivity analysis. Our new compound is
designed using a conservative molecular design strategy, and its effectiveness
is confirmed through expression profiles of cardiomyocyte-related marker genes
using real-time polymerase chain reaction experiments on human iPS cell lines.
This work demonstrates a viable data-driven strategy for designing new
compounds for stem cell differentiation protocols and will be useful in
situations where training data is limited.
| [
{
"created": "Mon, 22 Jul 2024 01:31:29 GMT",
"version": "v1"
}
] | 2024-07-23 | [
[
"Etezadi",
"Fatemeh",
""
],
[
"Ito",
"Shunichi",
""
],
[
"Yasui",
"Kosuke",
""
],
[
"Abdalkader",
"Rodi Kado",
""
],
[
"Minami",
"Itsunari",
""
],
[
"Uesugi",
"Motonari",
""
],
[
"Namasivayam",
"Ganesh Pandian",
""
],
[
"Nakano",
"Haruko",
""
],
[
"Nakano",
"Atsushi",
""
],
[
"Packwood",
"Daniel M.",
""
]
] | The discovery of small organic compounds for inducing stem cell differentiation is a time- and resource-intensive process. While data science could, in principle, facilitate the discovery of these compounds, novel approaches are required due to the difficulty of acquiring training data from large numbers of example compounds. In this paper, we demonstrate the design of a new compound for inducing cardiomyocyte differentiation using simple regression models trained with a data set containing only 80 examples. We introduce decorated shape descriptors, an information-rich molecular feature representation that integrates both molecular shape and hydrophilicity information. These models demonstrate improved performance compared to ones using standard molecular descriptors based on shape alone. Model overtraining is diagnosed using a new type of sensitivity analysis. Our new compound is designed using a conservative molecular design strategy, and its effectiveness is confirmed through expression profiles of cardiomyocyte-related marker genes using real-time polymerase chain reaction experiments on human iPS cell lines. This work demonstrates a viable data-driven strategy for designing new compounds for stem cell differentiation protocols and will be useful in situations where training data is limited. |
q-bio/0601047 | Thomas R. Weikl | Purushottam D. Dixit and Thomas R. Weikl | A simple measure of native-state topology and chain connectivity
predicts the folding rates of two-state proteins with and without crosslinks | 13 pages, 2 tables, and 2 figures | null | null | null | q-bio.BM cond-mat.soft | null | The folding rates of two-state proteins have been found to correlate with
simple measures of native-state topology. The most prominent among these
measures is the relative contact order (CO), which is the average CO or
'localness' of all contacts in the native protein structure, divided by the
chain length. Here, we test whether such measures can be generalized to capture
the effect of chain crosslinks on the folding rate. Crosslinks change the chain
connectivity and therefore also the localness of some of the native
contacts. These changes in localness can be taken into account by the
graph-theoretical concept of effective contact order (ECO). The relative ECO,
however, the natural extension of the relative CO for proteins with crosslinks,
overestimates the changes in the folding rates caused by crosslinks. We suggest
here a novel measure of native-state topology, the relative logCO, and its
natural extension, the relative logECO. The relative logCO is the average value
for the logarithm of the CO of all contacts, divided by the logarithm of the
chain length. The relative log(E)CO reproduces the folding rates of a set of 26
two-state proteins without crosslinks with essentially the same high
correlation coefficient as the relative CO. In addition, it also captures the
folding rates of 8 two-state proteins with crosslinks.
| [
{
"created": "Sat, 28 Jan 2006 17:38:26 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Dixit",
"Purushottam D.",
""
],
[
"Weikl",
"Thomas R.",
""
]
] | The folding rates of two-state proteins have been found to correlate with simple measures of native-state topology. The most prominent among these measures is the relative contact order (CO), which is the average CO or 'localness' of all contacts in the native protein structure, divided by the chain length. Here, we test whether such measures can be generalized to capture the effect of chain crosslinks on the folding rate. Crosslinks change the chain connectivity and therefore also the localness of some of the native contacts. These changes in localness can be taken into account by the graph-theoretical concept of effective contact order (ECO). The relative ECO, however, the natural extension of the relative CO for proteins with crosslinks, overestimates the changes in the folding rates caused by crosslinks. We suggest here a novel measure of native-state topology, the relative logCO, and its natural extension, the relative logECO. The relative logCO is the average value for the logarithm of the CO of all contacts, divided by the logarithm of the chain length. The relative log(E)CO reproduces the folding rates of a set of 26 two-state proteins without crosslinks with essentially the same high correlation coefficient as the relative CO. In addition, it also captures the folding rates of 8 two-state proteins with crosslinks. |
2203.13946 | Alexandra Lee | Alexandra J. Lee, Taylor Reiter, Georgia Doing, Julia Oh, Deborah A.
Hogan, Casey S. Greene | Using genome-wide expression compendia to study microorganisms | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | A gene expression compendium is a heterogeneous collection of gene expression
experiments assembled from data collected for diverse purposes. The widely
varied experimental conditions and genetic backgrounds across samples create a
tremendous opportunity for gaining a systems level understanding of the
transcriptional responses that influence phenotypes. Variety in experimental
design is particularly important for studying microbes, where the
transcriptional responses integrate many signals and demonstrate plasticity
across strains including response to what nutrients are available and what
microbes are present. Advances in high-throughput measurement technology have
made it feasible to construct compendia for many microbes. In this review we
discuss how these compendia are constructed and analyzed to reveal
transcriptional patterns.
| [
{
"created": "Sat, 26 Mar 2022 00:16:27 GMT",
"version": "v1"
}
] | 2022-03-29 | [
[
"Lee",
"Alexandra J.",
""
],
[
"Reiter",
"Taylor",
""
],
[
"Doing",
"Georgia",
""
],
[
"Oh",
"Julia",
""
],
[
"Hogan",
"Deborah A.",
""
],
[
"Greene",
"Casey S.",
""
]
] | A gene expression compendium is a heterogeneous collection of gene expression experiments assembled from data collected for diverse purposes. The widely varied experimental conditions and genetic backgrounds across samples create a tremendous opportunity for gaining a systems level understanding of the transcriptional responses that influence phenotypes. Variety in experimental design is particularly important for studying microbes, where the transcriptional responses integrate many signals and demonstrate plasticity across strains including response to what nutrients are available and what microbes are present. Advances in high-throughput measurement technology have made it feasible to construct compendia for many microbes. In this review we discuss how these compendia are constructed and analyzed to reveal transcriptional patterns. |
1102.3342 | Mateusz Sikora | Lukasz Peplowski, Mateusz Sikora, Wieslaw Nowak and Marek Cieplak | Molecular jamming - the cystine slipknot mechanical clamp in all-atom
simulations | null | null | 10.1063/1.3553801 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A recent survey of 17 134 proteins has identified a new class of proteins
which are expected to yield stretching induced force-peaks in the range of 1
nN. Such high force peaks should be due to forcing of a slip-loop through a
cystine ring, i.e. by generating a cystine slipknot. The survey has been
performed in a simple coarse grained model. Here, we perform all-atom steered
molecular dynamics simulations on 15 cystine knot proteins and determine their
resistance to stretching. In agreement with previous studies within a coarse
grained structure based model, the level of resistance is found to be
substantially higher than in proteins in which the mechanical clamp operates
through shear. The large stretching forces arise through formation of the
cystine slipknot mechanical clamp and the resulting steric jamming. We
elucidate the workings of such a clamp in atomic detail. We also study the
behavior of five top strength proteins with the shear-based mechanostability in
which no jamming is involved. We show that in the atomic model, the jamming
state is relieved by moving one amino acid at a time and there is a choice in
the selection of the amino acid that advances first. In contrast, the
coarse grained model also allows for a simultaneous passage of two amino acids.
| [
{
"created": "Wed, 16 Feb 2011 14:13:28 GMT",
"version": "v1"
}
] | 2015-05-27 | [
[
"Peplowski",
"Lukasz",
""
],
[
"Sikora",
"Mateusz",
""
],
[
"Nowak",
"Wieslaw",
""
],
[
"Cieplak",
"Marek",
""
]
] | A recent survey of 17 134 proteins has identified a new class of proteins which are expected to yield stretching induced force-peaks in the range of 1 nN. Such high force peaks should be due to forcing of a slip-loop through a cystine ring, i.e. by generating a cystine slipknot. The survey has been performed in a simple coarse grained model. Here, we perform all-atom steered molecular dynamics simulations on 15 cystine knot proteins and determine their resistance to stretching. In agreement with previous studies within a coarse grained structure based model, the level of resistance is found to be substantially higher than in proteins in which the mechanical clamp operates through shear. The large stretching forces arise through formation of the cystine slipknot mechanical clamp and the resulting steric jamming. We elucidate the workings of such a clamp in atomic detail. We also study the behavior of five top strength proteins with the shear-based mechanostability in which no jamming is involved. We show that in the atomic model, the jamming state is relieved by moving one amino acid at a time and there is a choice in the selection of the amino acid that advances first. In contrast, the coarse grained model also allows for a simultaneous passage of two amino acids. |
1209.2911 | Shi Huang | Dejian Yuan, Zuobin Zhu, Xiaohua Tan, Jie Liang, Ceng Zeng, Jiegen
Zhang, Jun Chen, Long Ma, Ayca Dogan, Gudrun Brockmann, Oliver Goldmann, Eva
Medina, Amanda D. Rice, Richard W. Moyer, Xian Man, Ke Yi, Yanke Li, Qing Lu,
Yimin Huang, Dapeng Wang, Jun Yu, Hui Guo, Kun Xia, and Shi Huang | Methods for scoring the collective effect of SNPs: Minor alleles of
common SNPs quantitatively affect traits/diseases and are under both positive
and negative selection | null | Sci China Life Sci. 57:876-888. (2014) | 10.1007/s11427-014-4704-4 | null | q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most common SNPs are popularly assumed to be neutral. We here developed novel
methods to examine in animal models and humans whether extreme amount of minor
alleles (MAs) carried by an individual may represent extreme trait values and
common diseases. We analyzed panels of genetic reference populations and
identified the MAs in each panel and the MA content (MAC) that each strain
carried. We also analyzed 21 published GWAS datasets of human diseases and
identified the MAC of each case or control. MAC was nearly linearly linked to
quantitative variations in numerous traits in model organisms, including life
span, tumor susceptibility, learning and memory, sensitivity to alcohol and
anti-psychotic drugs, and two correlated traits poor reproductive fitness and
strong immunity. Similarly, in Europeans or European Americans, enrichment of
MAs of fast but not slow evolutionary rate was linked to autoimmune and
numerous other diseases, including type 2 diabetes, Parkinson's disease,
psychiatric disorders, alcohol and cocaine addictions, cancer, and less life
span. Therefore, both high and low MAC correlated with extreme values in many
traits, indicating stabilizing selection on most MAs. The methods here are
broadly applicable and may help solve the missing heritability problem in
complex traits and diseases.
| [
{
"created": "Wed, 12 Sep 2012 06:30:22 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Jul 2013 02:17:45 GMT",
"version": "v2"
}
] | 2019-04-04 | [
[
"Yuan",
"Dejian",
""
],
[
"Zhu",
"Zuobin",
""
],
[
"Tan",
"Xiaohua",
""
],
[
"Liang",
"Jie",
""
],
[
"Zeng",
"Ceng",
""
],
[
"Zhang",
"Jiegen",
""
],
[
"Chen",
"Jun",
""
],
[
"Ma",
"Long",
""
],
[
"Dogan",
"Ayca",
""
],
[
"Brockmann",
"Gudrun",
""
],
[
"Goldmann",
"Oliver",
""
],
[
"Medina",
"Eva",
""
],
[
"Rice",
"Amanda D.",
""
],
[
"Moyer",
"Richard W.",
""
],
[
"Man",
"Xian",
""
],
[
"Yi",
"Ke",
""
],
[
"Li",
"Yanke",
""
],
[
"Lu",
"Qing",
""
],
[
"Huang",
"Yimin",
""
],
[
"Wang",
"Dapeng",
""
],
[
"Yu",
"Jun",
""
],
[
"Guo",
"Hui",
""
],
[
"Xia",
"Kun",
""
],
[
"Huang",
"Shi",
""
]
] | Most common SNPs are popularly assumed to be neutral. We here developed novel methods to examine in animal models and humans whether extreme amount of minor alleles (MAs) carried by an individual may represent extreme trait values and common diseases. We analyzed panels of genetic reference populations and identified the MAs in each panel and the MA content (MAC) that each strain carried. We also analyzed 21 published GWAS datasets of human diseases and identified the MAC of each case or control. MAC was nearly linearly linked to quantitative variations in numerous traits in model organisms, including life span, tumor susceptibility, learning and memory, sensitivity to alcohol and anti-psychotic drugs, and two correlated traits poor reproductive fitness and strong immunity. Similarly, in Europeans or European Americans, enrichment of MAs of fast but not slow evolutionary rate was linked to autoimmune and numerous other diseases, including type 2 diabetes, Parkinson's disease, psychiatric disorders, alcohol and cocaine addictions, cancer, and less life span. Therefore, both high and low MAC correlated with extreme values in many traits, indicating stabilizing selection on most MAs. The methods here are broadly applicable and may help solve the missing heritability problem in complex traits and diseases. |
0904.2254 | Hao Ge | Hao Ge, Hong Qian, Min Qian | Synchronized Dynamics and Nonequilibrium Steady States in a Stochastic
Yeast Cell-Cycle Network | 23 pages,6 figures; in Mathematical Bioscience 2008 | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Applying the mathematical circulation theory of Markov chains, we investigate
the synchronized stochastic dynamics of a discrete network model of yeast
cell-cycle regulation where stochasticity has been kept rather than being
averaged out. By comparing the network dynamics of the stochastic model with
its corresponding deterministic network counterpart, we show that the
synchronized dynamics can be soundly characterized by a dominant circulation in
the stochastic model, which is the natural generalization of the deterministic
limit cycle in the deterministic system. Moreover, the period of the main peak
in the power spectrum, which is in common use to characterize the synchronized
dynamics, perfectly corresponds to the number of states in the main cycle with
dominant circulation. Such a large separation in the magnitude of the
circulations, between a dominant, main cycle and the rest, gives rise to the
stochastic synchronization phenomenon.
| [
{
"created": "Wed, 15 Apr 2009 07:47:35 GMT",
"version": "v1"
}
] | 2009-04-16 | [
[
"Ge",
"Hao",
""
],
[
"Qian",
"Hong",
""
],
[
"Qian",
"Min",
""
]
] | Applying the mathematical circulation theory of Markov chains, we investigate the synchronized stochastic dynamics of a discrete network model of yeast cell-cycle regulation where stochasticity has been kept rather than being averaged out. By comparing the network dynamics of the stochastic model with its corresponding deterministic network counterpart, we show that the synchronized dynamics can be soundly characterized by a dominant circulation in the stochastic model, which is the natural generalization of the deterministic limit cycle in the deterministic system. Moreover, the period of the main peak in the power spectrum, which is in common use to characterize the synchronized dynamics, perfectly corresponds to the number of states in the main cycle with dominant circulation. Such a large separation in the magnitude of the circulations, between a dominant, main cycle and the rest, gives rise to the stochastic synchronization phenomenon. |
1312.4748 | Gerardo Gonz\'alez-Aguilar | Y. Casta\~no Guerrero and G. Gonz\'alez-Aguilar | Shiga Toxin Detection Methods : A Short Review | 16 pages, 2 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Shiga toxins comprise a family of related protein toxins secreted by
certain types of bacteria. Shigella dysenteriae, some strains of Escherichia
coli, and other bacteria can express toxins that cause serious complications
during infection. Shiga toxin and the closely related Shiga-like toxins
represent a group of very similar cytotoxins that may play an important role in
diarrheal disease and hemolytic-uremic syndrome. The outbreaks caused by this
toxin have raised serious public health crises and caused economic losses. These
toxins have the same biologic activities and according to recent studies also
share the same binding receptor, globotriosyl ceramide (Gb3). Rapid detection
of food contamination is therefore relevant for the containment of food-borne
pathogens. The conventional methods to detect pathogens, such as
microbiological and biochemical identification are time-consuming and
laborious. The immunological or nucleic acid-based techniques require extensive
sample preparation and are not amenable to miniaturization for on-site
detection. Rapid, simple, and sensitive identification techniques are
therefore needed that can be employed in the field with minimally
sophisticated instrumentation. Biosensors have shown tremendous
promise to overcome these limitations and are being aggressively studied to
provide rapid, reliable and sensitive detection platforms for such
applications.
| [
{
"created": "Tue, 17 Dec 2013 12:38:27 GMT",
"version": "v1"
}
] | 2013-12-18 | [
[
"Guerrero",
"Y. Castaño",
""
],
[
"González-Aguilar",
"G.",
""
]
] | The Shiga toxins comprise a family of related protein toxins secreted by certain types of bacteria. Shigella dysenteriae, some strains of Escherichia coli, and other bacteria can express toxins that cause serious complications during infection. Shiga toxin and the closely related Shiga-like toxins represent a group of very similar cytotoxins that may play an important role in diarrheal disease and hemolytic-uremic syndrome. The outbreaks caused by this toxin have raised serious public health crises and caused economic losses. These toxins have the same biologic activities and according to recent studies also share the same binding receptor, globotriosyl ceramide (Gb3). Rapid detection of food contamination is therefore relevant for the containment of food-borne pathogens. The conventional methods to detect pathogens, such as microbiological and biochemical identification are time-consuming and laborious. The immunological or nucleic acid-based techniques require extensive sample preparation and are not amenable to miniaturization for on-site detection. Rapid, simple, and sensitive identification techniques are therefore needed that can be employed in the field with minimally sophisticated instrumentation. Biosensors have shown tremendous promise to overcome these limitations and are being aggressively studied to provide rapid, reliable and sensitive detection platforms for such applications. |
2205.13875 | Samuel Johnston | David Cheek and Samuel G. G. Johnston | Ancestral reproductive bias in branching processes | 18 pages, 4 figures | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Consider a branching process with a homogeneous reproduction law. Sampling a
single cell uniformly from the population at a time $T > 0$ and looking along
the sampled cell's ancestral lineage, we find that the reproduction law is
heterogeneous - the expected reproductive output of ancestral cells on the
lineage from time $0$ to time $T$ continuously increases. This `inspection
paradox' is due to sampling bias, that cells with a larger number of offspring
are more likely to have one of their descendants sampled by virtue of their
prolificity, and the bias's strength grows with the random population size
and/or the sampling time $T$. Our main result explicitly characterises the
evolution of reproduction rates and sizes along the sampled ancestral lineage
as a mixture of Poisson processes, which simplifies in special cases. The
ancestral bias helps to explain recently observed variation in mutation rates
along lineages of the developing human embryo.
| [
{
"created": "Fri, 27 May 2022 10:12:32 GMT",
"version": "v1"
}
] | 2022-05-30 | [
[
"Cheek",
"David",
""
],
[
"Johnston",
"Samuel G. G.",
""
]
] | Consider a branching process with a homogeneous reproduction law. Sampling a single cell uniformly from the population at a time $T > 0$ and looking along the sampled cell's ancestral lineage, we find that the reproduction law is heterogeneous - the expected reproductive output of ancestral cells on the lineage from time $0$ to time $T$ continuously increases. This `inspection paradox' is due to sampling bias, that cells with a larger number of offspring are more likely to have one of their descendants sampled by virtue of their prolificity, and the bias's strength grows with the random population size and/or the sampling time $T$. Our main result explicitly characterises the evolution of reproduction rates and sizes along the sampled ancestral lineage as a mixture of Poisson processes, which simplifies in special cases. The ancestral bias helps to explain recently observed variation in mutation rates along lineages of the developing human embryo. |
2105.01340 | Anna Maltsev | Guillermo Veron, Victor A. Maltsev, Michael D. Stern, Anna V. Maltsev | Elementary Intracellular Ca Signals are Initiated by a Transition of
Release Channel System from a Metastable State | 12 pages main text, 4 figures, 13 pages Python code, 1 page table | null | null | null | q-bio.SC physics.bio-ph q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cardiac muscle contraction is initiated by an elementary Ca signal (called Ca
spark) which is achieved by collective action of Ca release channels in a
cluster. The mechanism of this synchronization remains uncertain. This paper
approaches Ca spark activation as an emergent phenomenon of an interactive
system of release channels. We construct a Markov chain that applies an Ising
model formalism to such release channel clusters and realistic open channel
configurations to demonstrate that spark activation is described as a system
transition from a metastable to an absorbing state, analogous to the pressure
required to overcome surface tension in bubble formation. This yields
quantitative estimates of the spark generation probability as a function of
various system parameters. Our model of the release channel system yields
similar results for the sarcoplasmic reticulum Ca concentration threshold for
spark activation as previous experimental results, providing a mechanistic
explanation of the spark initiation. Additionally, we perform numerical
simulations to find spark probabilities as a function of sarcoplasmic reticulum
Ca concentration obtaining similar values for spark activation threshold as our
analytic model, as well as those reported in experimental studies.
| [
{
"created": "Tue, 4 May 2021 07:39:46 GMT",
"version": "v1"
},
{
"created": "Fri, 6 Aug 2021 19:05:10 GMT",
"version": "v2"
}
] | 2021-08-10 | [
[
"Veron",
"Guillermo",
""
],
[
"Maltsev",
"Victor A.",
""
],
[
"Stern",
"Michael D.",
""
],
[
"Maltsev",
"Anna V.",
""
]
] | Cardiac muscle contraction is initiated by an elementary Ca signal (called Ca spark) which is achieved by collective action of Ca release channels in a cluster. The mechanism of this synchronization remains uncertain. This paper approaches Ca spark activation as an emergent phenomenon of an interactive system of release channels. We construct a Markov chain that applies an Ising model formalism to such release channel clusters and realistic open channel configurations to demonstrate that spark activation is described as a system transition from a metastable to an absorbing state, analogous to the pressure required to overcome surface tension in bubble formation. This yields quantitative estimates of the spark generation probability as a function of various system parameters. Our model of the release channel system yields similar results for the sarcoplasmic reticulum Ca concentration threshold for spark activation as previous experimental results, providing a mechanistic explanation of the spark initiation. Additionally, we perform numerical simulations to find spark probabilities as a function of sarcoplasmic reticulum Ca concentration obtaining similar values for spark activation threshold as our analytic model, as well as those reported in experimental studies. |
2110.05139 | Arindam Mishra | Mousumi Roy, Abhishek Senapati, Swarup Poria, Arindam Mishra, and
Chittaranjan Hens | Role of assortativity in predicting burst synchronization using echo
state network | null | null | 10.1103/PhysRevE.105.064205 | null | q-bio.NC nlin.CD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, we use a reservoir computing based echo state network (ESN) to
predict the collective burst synchronization of neurons. Specifically, we
investigate the ability of ESN in predicting the burst synchronization of an
ensemble of Rulkov neurons placed on a scale-free network. We have shown that a
limited number of nodal dynamics used as input in the machine can capture the
real trend of burst synchronization in this network. Further, we investigate
the proper selection of nodal inputs of degree-degree (positive and negative)
correlated networks. We show that for a disassortative network, selection of
different input nodes based on degree has no significant role in machine's
prediction. However, in the case of assortative network, training the machine
with the information (i.e time series) of low-degree nodes gives better results
in predicting the burst synchronization. Finally, we explain the underlying
mechanism responsible for observing these differences in prediction in a degree
correlated network.
| [
{
"created": "Mon, 11 Oct 2021 10:31:08 GMT",
"version": "v1"
},
{
"created": "Wed, 13 Oct 2021 20:52:23 GMT",
"version": "v2"
}
] | 2022-06-22 | [
[
"Roy",
"Mousumi",
""
],
[
"Senapati",
"Abhishek",
""
],
[
"Poria",
"Swarup",
""
],
[
"Mishra",
"Arindam",
""
],
[
"Hens",
"Chittaranjan",
""
]
] | In this study, we use a reservoir computing based echo state network (ESN) to predict the collective burst synchronization of neurons. Specifically, we investigate the ability of ESN in predicting the burst synchronization of an ensemble of Rulkov neurons placed on a scale-free network. We have shown that a limited number of nodal dynamics used as input in the machine can capture the real trend of burst synchronization in this network. Further, we investigate the proper selection of nodal inputs of degree-degree (positive and negative) correlated networks. We show that for a disassortative network, selection of different input nodes based on degree has no significant role in machine's prediction. However, in the case of assortative network, training the machine with the information (i.e time series) of low-degree nodes gives better results in predicting the burst synchronization. Finally, we explain the underlying mechanism responsible for observing these differences in prediction in a degree correlated network. |
1911.13220 | Daniel Mas Montserrat | Daniel Mas Montserrat, Carlos Bustamante, Alexander Ioannidis | Class-Conditional VAE-GAN for Local-Ancestry Simulation | null | null | null | null | q-bio.GN cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Local ancestry inference (LAI) allows identification of the ancestry of all
chromosomal segments in admixed individuals, and it is a critical step in the
analysis of human genomes with applications from pharmacogenomics and precision
medicine to genome-wide association studies. In recent years, many LAI
techniques have been developed in both industry and academic research. However,
these methods require large training data sets of human genomic sequences from
the ancestries of interest. Such reference data sets are usually limited,
proprietary, protected by privacy restrictions, or otherwise not accessible to
the public. Techniques to generate training samples that resemble real haploid
sequences from ancestries of interest can be useful tools in such scenarios,
since a generalized model can often be shared, but the unique human sample
sequences cannot. In this work we present a class-conditional VAE-GAN to
generate new human genomic sequences that can be used to train local ancestry
inference (LAI) algorithms. We evaluate the quality of our generated data by
comparing the performance of a state-of-the-art LAI method when trained with
generated versus real data.
| [
{
"created": "Wed, 27 Nov 2019 18:06:39 GMT",
"version": "v1"
}
] | 2019-12-02 | [
[
"Montserrat",
"Daniel Mas",
""
],
[
"Bustamante",
"Carlos",
""
],
[
"Ioannidis",
"Alexander",
""
]
] | Local ancestry inference (LAI) allows identification of the ancestry of all chromosomal segments in admixed individuals, and it is a critical step in the analysis of human genomes with applications from pharmacogenomics and precision medicine to genome-wide association studies. In recent years, many LAI techniques have been developed in both industry and academic research. However, these methods require large training data sets of human genomic sequences from the ancestries of interest. Such reference data sets are usually limited, proprietary, protected by privacy restrictions, or otherwise not accessible to the public. Techniques to generate training samples that resemble real haploid sequences from ancestries of interest can be useful tools in such scenarios, since a generalized model can often be shared, but the unique human sample sequences cannot. In this work we present a class-conditional VAE-GAN to generate new human genomic sequences that can be used to train local ancestry inference (LAI) algorithms. We evaluate the quality of our generated data by comparing the performance of a state-of-the-art LAI method when trained with generated versus real data. |
1112.0045 | Aleksandar Stojmirovi\'c | Aleksandar Stojmirovi\'c, Alexander Bliskovsky and Yi-Kuo Yu | CytoITMprobe: a network information flow plugin for Cytoscape | 16 pages, 6 figures. Version 2 | null | null | null | q-bio.QM cs.DB q-bio.MN | http://creativecommons.org/licenses/publicdomain/ | To provide the Cytoscape users the possibility of integrating ITM Probe into
their workflows, we developed CytoITMprobe, a new Cytoscape plugin.
CytoITMprobe maintains all the desirable features of ITM Probe and adds
additional flexibility not achievable through its web service version. It
provides access to ITM Probe either through a web server or locally. The input,
consisting of a Cytoscape network, together with the desired origins and/or
destinations of information and a dissipation coefficient, is specified through
a query form. The results are shown as a subnetwork of significant nodes and
several summary tables. Users can control the composition and appearance of the
subnetwork and interchange their ITM Probe results with other software tools
through tab-delimited files.
The main strength of CytoITMprobe is its flexibility. It allows the user to
specify as input any Cytoscape network, rather than being restricted to the
pre-compiled protein-protein interaction networks available through the ITM
Probe web service. Users may supply their own edge weights and
directionalities. Consequently, as opposed to ITM Probe web service,
CytoITMprobe can be applied to many other domains of network-based research
beyond protein-networks. It also enables seamless integration of ITM Probe
results with other Cytoscape plugins having complementary functionality for
data analysis.
| [
{
"created": "Wed, 30 Nov 2011 22:10:50 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Mar 2012 22:22:23 GMT",
"version": "v2"
}
] | 2012-03-21 | [
[
"Stojmirović",
"Aleksandar",
""
],
[
"Bliskovsky",
"Alexander",
""
],
[
"Yu",
"Yi-Kuo",
""
]
] | To provide the Cytoscape users the possibility of integrating ITM Probe into their workflows, we developed CytoITMprobe, a new Cytoscape plugin. CytoITMprobe maintains all the desirable features of ITM Probe and adds additional flexibility not achievable through its web service version. It provides access to ITM Probe either through a web server or locally. The input, consisting of a Cytoscape network, together with the desired origins and/or destinations of information and a dissipation coefficient, is specified through a query form. The results are shown as a subnetwork of significant nodes and several summary tables. Users can control the composition and appearance of the subnetwork and interchange their ITM Probe results with other software tools through tab-delimited files. The main strength of CytoITMprobe is its flexibility. It allows the user to specify as input any Cytoscape network, rather than being restricted to the pre-compiled protein-protein interaction networks available through the ITM Probe web service. Users may supply their own edge weights and directionalities. Consequently, as opposed to ITM Probe web service, CytoITMprobe can be applied to many other domains of network-based research beyond protein-networks. It also enables seamless integration of ITM Probe results with other Cytoscape plugins having complementary functionality for data analysis. |
0712.4224 | Jens Christian Claussen | Jens Christian Claussen | Drift reversal in asymmetric coevolutionary conflicts: Influence of
microscopic processes and population size | 9 pages, color online figs on p.3+4 | European Physical Journal B 60, 391-399 (2007) | 10.1140/epjb/e2007-00357-2 | null | q-bio.PE physics.soc-ph q-bio.QM | null | The coevolutionary dynamics in finite populations currently is investigated
in a wide range of disciplines, as chemical catalysis, biological evolution,
social and economic systems. The dynamics of those systems can be formulated
within the unifying framework of evolutionary game theory. However it is not a
priori clear which mathematical description is appropriate when populations are
not infinitely large. Whereas the replicator equation approach describes the
infinite population size limit by deterministic differential equations, in
finite populations the dynamics is inherently stochastic which can lead to new
effects. Recently, an explicit mean-field description in the form of a
Fokker-Planck equation was derived for frequency-dependent selection in finite
populations based on microscopic processes. In asymmetric conflicts between two
populations with a cyclic dominance, a finite-size dependent drift reversal was
demonstrated, depending on the underlying microscopic process of the
evolutionary update. Cyclic dynamics appears widely in biological coevolution,
be it within a homogeneous population, or be it between disjunct populations as
female and male. Here explicit analytic address is given and the average drift
is calculated for the frequency-dependent Moran process and for different
pairwise comparison processes. It is explicitly shown that the drift reversal
cannot occur if the process relies on payoff differences between pairs of
individuals. Further, also a linear comparison with the average payoff does not
lead to a drift towards the internal fixed point. Hence the nonlinear
comparison function of the frequency-dependent Moran process, together with its
usage of nonlocal information via the average payoff, is the essential part of
the mechanism.
| [
{
"created": "Thu, 27 Dec 2007 12:08:59 GMT",
"version": "v1"
}
] | 2012-06-12 | [
[
"Claussen",
"Jens Christian",
""
]
] | The coevolutionary dynamics in finite populations currently is investigated in a wide range of disciplines, as chemical catalysis, biological evolution, social and economic systems. The dynamics of those systems can be formulated within the unifying framework of evolutionary game theory. However it is not a priori clear which mathematical description is appropriate when populations are not infinitely large. Whereas the replicator equation approach describes the infinite population size limit by deterministic differential equations, in finite populations the dynamics is inherently stochastic which can lead to new effects. Recently, an explicit mean-field description in the form of a Fokker-Planck equation was derived for frequency-dependent selection in finite populations based on microscopic processes. In asymmetric conflicts between two populations with a cyclic dominance, a finite-size dependent drift reversal was demonstrated, depending on the underlying microscopic process of the evolutionary update. Cyclic dynamics appears widely in biological coevolution, be it within a homogeneous population, or be it between disjunct populations as female and male. Here explicit analytic address is given and the average drift is calculated for the frequency-dependent Moran process and for different pairwise comparison processes. It is explicitly shown that the drift reversal cannot occur if the process relies on payoff differences between pairs of individuals. Further, also a linear comparison with the average payoff does not lead to a drift towards the internal fixed point. Hence the nonlinear comparison function of the frequency-dependent Moran process, together with its usage of nonlocal information via the average payoff, is the essential part of the mechanism. |