| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2201.01821 | Diego Martinez | J.S. Inca, D.A. Martinez and C. Vilchez | Phenotypic correlation between external and internal egg quality
characteristics in 85-week-old laying hens | null | International Journal of Poultry Science (2020) 19: 346-355 | 10.3923/ijps.2020.346.355 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Background and Objective: Several studies confirm that the age of hens has a
tremendous impact on external and internal egg quality characteristics. Egg
production could be at serious risk if egg quality characteristics and age of
hens are not seriously considered. This study was conducted to analyze the
phenotypic correlations between some internal and external egg quality
characteristics in old laying hens. Materials and Methods: A total of 288 eggs
of 85-week-old Hy-Line Brown laying hens were collected over 3 weeks and
their internal and external egg characteristics were evaluated. Results:
Phenotypic correlations between egg quality characteristics in old laying hens
indicate a negative impact on shell and albumen quality but no effect on yolk
quality characteristics. Conclusion: This study shows that raising laying hens
beyond 80 weeks would have a negative impact on egg quality characteristics.
| [
{
"created": "Wed, 5 Jan 2022 21:06:32 GMT",
"version": "v1"
}
] | 2022-01-07 | [
[
"Inca",
"J. S.",
""
],
[
"Martinez",
"D. A.",
""
],
[
"Vilchez",
"C.",
""
]
] | Background and Objective: Several studies confirm that the age of hens has a tremendous impact on external and internal egg quality characteristics. Egg production could be at serious risk if egg quality characteristics and age of hens are not seriously considered. This study was conducted to analyze the phenotypic correlations between some internal and external egg quality characteristics in old laying hens. Materials and Methods: A total of 288 eggs of 85-week-old Hy-Line Brown laying hens were collected over 3 weeks and their internal and external egg characteristics were evaluated. Results: Phenotypic correlations between egg quality characteristics in old laying hens indicate a negative impact on shell and albumen quality but no effect on yolk quality characteristics. Conclusion: This study shows that raising laying hens beyond 80 weeks would have a negative impact on egg quality characteristics. |
1407.3249 | Brian Mulloney PhD | Brian Mulloney, Carmen Smarandache-Wellmann, Cynthia Weller, Wendy M.
Hall, Ralph A. DiCaprio | Proprioceptive feedback modulates coordinating information in a system
of segmentally-distributed microcircuits | 33 pages, 8 figures, two tables | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The system of modular neural circuits that controls crustacean swimmerets
drives a metachronal sequence of power-stroke and return-stroke movements that
propels the animal forward efficiently. These neural modules are synchronized
by an intersegmental coordinating circuit that imposes characteristic phase
differences between these modules. Using a semi-intact preparation that left
one swimmeret attached to an otherwise isolated crayfish central nervous
system, we investigated how the rhythmic activity of this system
responded to imposed movements. We recorded extracellularly from the Ps and Rs
nerves that innervated the attached limb and from coordinating axons that
encode efference copies of the periodic bursts in Ps and Rs axons.
Simultaneously we recorded from homologous nerves in more anterior and
posterior segments. Maintained retractions did not affect cycle period, but
promptly weakened Ps bursts, strengthened Rs bursts, and caused corresponding
changes in the strength and timing of efference copies in the coordinating
axons. These changes in strength and timing of these efference copies then
caused changes in the phase and duration, but not the strength, of Ps bursts in
modules controlling neighboring swimmerets. These changes were promptly
reversed when the limb was released. Each swimmeret is innervated by two
nonspiking stretch receptors, Nssrs, that depolarize when the limb is
retracted. Voltage-clamp of an Nssr changed the durations and strengths of
bursts in Ps and Rs axons innervating the same limb, and caused corresponding
changes in the efference copies of this motor output.
| [
{
"created": "Fri, 11 Jul 2014 18:47:18 GMT",
"version": "v1"
}
] | 2014-07-14 | [
[
"Mulloney",
"Brian",
""
],
[
"Smarandache-Wellmann",
"Carmen",
""
],
[
"Weller",
"Cynthia",
""
],
[
"Hall",
"Wendy M.",
""
],
[
"DiCaprio",
"Ralph A.",
""
]
] | The system of modular neural circuits that controls crustacean swimmerets drives a metachronal sequence of power-stroke and return-stroke movements that propels the animal forward efficiently. These neural modules are synchronized by an intersegmental coordinating circuit that imposes characteristic phase differences between these modules. Using a semi-intact preparation that left one swimmeret attached to an otherwise isolated crayfish central nervous system, we investigated how the rhythmic activity of this system responded to imposed movements. We recorded extracellularly from the Ps and Rs nerves that innervated the attached limb and from coordinating axons that encode efference copies of the periodic bursts in Ps and Rs axons. Simultaneously we recorded from homologous nerves in more anterior and posterior segments. Maintained retractions did not affect cycle period, but promptly weakened Ps bursts, strengthened Rs bursts, and caused corresponding changes in the strength and timing of efference copies in the coordinating axons. These changes in strength and timing of these efference copies then caused changes in the phase and duration, but not the strength, of Ps bursts in modules controlling neighboring swimmerets. These changes were promptly reversed when the limb was released. Each swimmeret is innervated by two nonspiking stretch receptors, Nssrs, that depolarize when the limb is retracted. Voltage-clamp of an Nssr changed the durations and strengths of bursts in Ps and Rs axons innervating the same limb, and caused corresponding changes in the efference copies of this motor output. |
2312.13944 | Tomasz Danel | Tomasz Danel, Jan {\L}\k{e}ski, Sabina Podlewska, Igor T. Podolak | Docking-based generative approaches in the search for new drug
candidates | null | Drug Discovery Today 28.2 (2023) | 10.1016/j.drudis.2022.103439 | null | q-bio.BM cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Despite the great popularity of virtual screening of existing compound
libraries, the search for new potential drug candidates also takes advantage of
generative protocols, where new compound suggestions are enumerated using
various algorithms. To improve the activity of the generated compounds,
generative approaches have recently been coupled with molecular docking, a leading methodology
of structure-based drug design. In this review, we summarize progress since
docking-based generative models emerged. We propose a new taxonomy for these
methods and discuss their importance for the field of computer-aided drug
design. In addition, we discuss the most promising directions for further
development of generative protocols coupled with docking.
| [
{
"created": "Wed, 22 Nov 2023 11:37:09 GMT",
"version": "v1"
}
] | 2023-12-22 | [
[
"Danel",
"Tomasz",
""
],
[
"Łęski",
"Jan",
""
],
[
"Podlewska",
"Sabina",
""
],
[
"Podolak",
"Igor T.",
""
]
] | Despite the great popularity of virtual screening of existing compound libraries, the search for new potential drug candidates also takes advantage of generative protocols, where new compound suggestions are enumerated using various algorithms. To improve the activity of the generated compounds, generative approaches have recently been coupled with molecular docking, a leading methodology of structure-based drug design. In this review, we summarize progress since docking-based generative models emerged. We propose a new taxonomy for these methods and discuss their importance for the field of computer-aided drug design. In addition, we discuss the most promising directions for further development of generative protocols coupled with docking. |
1903.08057 | Andrew Borkowski M.D. | Andrew A. Borkowski, Catherine P. Wilson, Steven A. Borkowski, L.
Brannon Thomas, Lauren A. Deland, Stefanie J. Grewe, Stephen M. Mastorides | Google Auto ML versus Apple Create ML for Histopathologic Cancer
Diagnosis; Which Algorithms Are Better? | 18 pages, 1 table, 4 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial Intelligence is set to revolutionize multiple fields in the coming
years. One subset of AI, machine learning, shows immense potential for
application in a diverse set of medical specialties, including diagnostic
pathology. In this study, we investigate the utility of the Apple Create ML and
Google Cloud Auto ML, two machine learning platforms, in a variety of
pathological scenarios involving lung and colon pathology. First, we evaluate
the ability of the platforms to differentiate normal lung tissue from cancerous
lung tissue. Also, the ability to accurately distinguish two subtypes of lung
cancer (adenocarcinoma and squamous cell carcinoma) is examined and compared.
Similarly, the ability of the two programs to differentiate colon
adenocarcinoma from normal colon is assessed as is done with lung tissue. Also,
cases of colon adenocarcinoma are evaluated for the presence or absence of a
specific gene mutation known as KRAS. Finally, our last experiment examines the
ability of the Apple and Google platforms to differentiate between
adenocarcinomas of lung origin versus colon origin. In our trained models for
lung and colon cancer diagnosis, both Apple and Google machine learning
algorithms performed very well individually, with no statistically
significant differences found between the two platforms. However, some critical
factors set them apart. Apple Create ML can be used on local computers but is
limited to the Apple ecosystem. Google Auto ML is not platform-specific but runs
only in Google Cloud with associated computational fees. In the end, both are
excellent machine learning tools that have great potential in the field of
diagnostic pathology, and which one to choose would depend on personal
preference, programming experience, and available storage space.
| [
{
"created": "Tue, 19 Mar 2019 15:36:47 GMT",
"version": "v1"
}
] | 2019-03-20 | [
[
"Borkowski",
"Andrew A.",
""
],
[
"Wilson",
"Catherine P.",
""
],
[
"Borkowski",
"Steven A.",
""
],
[
"Thomas",
"L. Brannon",
""
],
[
"Deland",
"Lauren A.",
""
],
[
"Grewe",
"Stefanie J.",
""
],
[
"Mastorides",
"Stephen M.",
""
]
] | Artificial Intelligence is set to revolutionize multiple fields in the coming years. One subset of AI, machine learning, shows immense potential for application in a diverse set of medical specialties, including diagnostic pathology. In this study, we investigate the utility of the Apple Create ML and Google Cloud Auto ML, two machine learning platforms, in a variety of pathological scenarios involving lung and colon pathology. First, we evaluate the ability of the platforms to differentiate normal lung tissue from cancerous lung tissue. Also, the ability to accurately distinguish two subtypes of lung cancer (adenocarcinoma and squamous cell carcinoma) is examined and compared. Similarly, the ability of the two programs to differentiate colon adenocarcinoma from normal colon is assessed as is done with lung tissue. Also, cases of colon adenocarcinoma are evaluated for the presence or absence of a specific gene mutation known as KRAS. Finally, our last experiment examines the ability of the Apple and Google platforms to differentiate between adenocarcinomas of lung origin versus colon origin. In our trained models for lung and colon cancer diagnosis, both Apple and Google machine learning algorithms performed very well individually, with no statistically significant differences found between the two platforms. However, some critical factors set them apart. Apple Create ML can be used on local computers but is limited to the Apple ecosystem. Google Auto ML is not platform-specific but runs only in Google Cloud with associated computational fees. In the end, both are excellent machine learning tools that have great potential in the field of diagnostic pathology, and which one to choose would depend on personal preference, programming experience, and available storage space. |
q-bio/0611062 | Patricia Faisca | P.F.N. Faisca and K.W. Plaxco | Cooperativity and the origins of rapid, single-exponential kinetics in
protein folding | null | Protein Science 15, 1608-1618 (2006) | null | null | q-bio.BM | null | The folding of naturally occurring, single domain proteins is usually
well-described as a simple, single exponential process lacking significant
trapped states. Here we further explore the hypothesis that the smooth energy
landscape this implies, and the rapid kinetics it engenders, arise from the
extraordinary thermodynamic cooperativity of protein folding. Studying
Miyazawa-Jernigan lattice polymers we find that, even under conditions where
the folding energy landscape is relatively optimized (designed sequences
folding at their temperature of maximum folding rate), the folding of
protein-like heteropolymers is accelerated when their thermodynamic
cooperativity is enhanced by increasing the non-additivity of their energy
potentials. At lower temperatures, where kinetic traps presumably play a more
significant role in defining folding rates, we observe still greater
cooperativity-induced acceleration. Consistent with these observations, we find
that the folding kinetics of our computational models more closely approximate
single-exponential behavior as their cooperativity approaches optimal levels.
These observations suggest that the rapid folding of naturally occurring
proteins is, at least in part, a consequence of their remarkably cooperative
folding.
| [
{
"created": "Sat, 18 Nov 2006 19:17:40 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Faisca",
"P. F. N.",
""
],
[
"Plaxco",
"K. W.",
""
]
] | The folding of naturally occurring, single domain proteins is usually well-described as a simple, single exponential process lacking significant trapped states. Here we further explore the hypothesis that the smooth energy landscape this implies, and the rapid kinetics it engenders, arise from the extraordinary thermodynamic cooperativity of protein folding. Studying Miyazawa-Jernigan lattice polymers we find that, even under conditions where the folding energy landscape is relatively optimized (designed sequences folding at their temperature of maximum folding rate), the folding of protein-like heteropolymers is accelerated when their thermodynamic cooperativity is enhanced by increasing the non-additivity of their energy potentials. At lower temperatures, where kinetic traps presumably play a more significant role in defining folding rates, we observe still greater cooperativity-induced acceleration. Consistent with these observations, we find that the folding kinetics of our computational models more closely approximate single-exponential behavior as their cooperativity approaches optimal levels. These observations suggest that the rapid folding of naturally occurring proteins is, at least in part, a consequence of their remarkably cooperative folding. |
1611.04573 | Duan Chen | Duan Chen and Guowei Wei | A Review of Mathematical Modeling, Simulation and Analysis of Membrane
Channel Charge Transport | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The molecular mechanism of ion channel gating and substrate modulation is
elusive for many voltage gated ion channels, such as eukaryotic sodium ones.
The understanding of channel functions is a pressing issue in molecular
biophysics and biology. Mathematical modeling, computation and analysis of
membrane channel charge transport have become an emerging field and have
contributed significantly to our understanding of ion channel gating and
function. This review summarizes recent progress and outlines remaining
challenges in mathematical modeling, simulation and analysis of ion channel
charge transport. One of our focuses is the Poisson-Nernst-Planck (PNP) model
and its generalizations. Specifically, we discuss the basic framework of the
PNP system and some of its extensions, including size effects, ion-water
interactions, coupling with density functional theory, and the relation to
fluid flow models. A reduced theory, the Poisson-Boltzmann-Nernst-Planck
(PBNP) model, and a differential geometry based ion transport model are also
discussed. For proton channels, a multiscale and multiphysics Poisson-Boltzmann-Kohn-Sham (PBKS) model
is presented. We show that all of these ion channel models can be cast into a
unified variational multiscale framework with a macroscopic continuum domain of
the solvent and a microscopic discrete domain of the solute. The main strategy
is to construct a total energy functional of a charge transport system to
encompass the polar and nonpolar free energies of solvation and chemical
potential related energies. Current computational algorithms and tools for
numerical simulations and results from mathematical analysis of ion channel
systems are also surveyed.
| [
{
"created": "Thu, 29 Sep 2016 16:08:32 GMT",
"version": "v1"
}
] | 2016-11-15 | [
[
"Chen",
"Duan",
""
],
[
"Wei",
"Guowei",
""
]
] | The molecular mechanism of ion channel gating and substrate modulation is elusive for many voltage gated ion channels, such as eukaryotic sodium ones. The understanding of channel functions is a pressing issue in molecular biophysics and biology. Mathematical modeling, computation and analysis of membrane channel charge transport have become an emerging field and have contributed significantly to our understanding of ion channel gating and function. This review summarizes recent progress and outlines remaining challenges in mathematical modeling, simulation and analysis of ion channel charge transport. One of our focuses is the Poisson-Nernst-Planck (PNP) model and its generalizations. Specifically, we discuss the basic framework of the PNP system and some of its extensions, including size effects, ion-water interactions, coupling with density functional theory, and the relation to fluid flow models. A reduced theory, the Poisson-Boltzmann-Nernst-Planck (PBNP) model, and a differential geometry based ion transport model are also discussed. For proton channels, a multiscale and multiphysics Poisson-Boltzmann-Kohn-Sham (PBKS) model is presented. We show that all of these ion channel models can be cast into a unified variational multiscale framework with a macroscopic continuum domain of the solvent and a microscopic discrete domain of the solute. The main strategy is to construct a total energy functional of a charge transport system to encompass the polar and nonpolar free energies of solvation and chemical potential related energies. Current computational algorithms and tools for numerical simulations and results from mathematical analysis of ion channel systems are also surveyed. |
1708.08133 | Aaron Voelker | Aaron R. Voelker and Chris Eliasmith | Methods for applying the Neural Engineering Framework to neuromorphic
hardware | 11 pages, no figures | null | null | null | q-bio.NC cs.AI cs.SY math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We review our current software tools and theoretical methods for applying the
Neural Engineering Framework to state-of-the-art neuromorphic hardware. These
methods can be used to implement linear and nonlinear dynamical systems that
exploit axonal transmission time-delays, and to fully account for nonideal
mixed-analog-digital synapses that exhibit higher-order dynamics with
heterogeneous time-constants. This summarizes earlier versions of these methods
that have been discussed in a more biological context (Voelker & Eliasmith,
2017) or regarding a specific neuromorphic architecture (Voelker et al., 2017).
| [
{
"created": "Sun, 27 Aug 2017 20:27:01 GMT",
"version": "v1"
}
] | 2017-08-29 | [
[
"Voelker",
"Aaron R.",
""
],
[
"Eliasmith",
"Chris",
""
]
] | We review our current software tools and theoretical methods for applying the Neural Engineering Framework to state-of-the-art neuromorphic hardware. These methods can be used to implement linear and nonlinear dynamical systems that exploit axonal transmission time-delays, and to fully account for nonideal mixed-analog-digital synapses that exhibit higher-order dynamics with heterogeneous time-constants. This summarizes earlier versions of these methods that have been discussed in a more biological context (Voelker & Eliasmith, 2017) or regarding a specific neuromorphic architecture (Voelker et al., 2017). |
2401.08321 | Jeremi K. Ochab | Marcin W\k{a}torek, Wojciech Tomczyk, Magda Gaw{\l}owska, Natalia
Golonka-Afek, Aleksandra \.Zyrkowska, Monika Marona, Marcin Wnuk, Agnieszka
S{\l}owik, Jeremi K. Ochab, Magdalena Fafrowicz, Tadeusz Marek, Pawe{\l}
O\'swi\k{e}cimka | Multifractal organization of EEG signals in Multiple Sclerosis | 39 pages, including supplementary materials (11 figures, 4 tables) | Biomedical Signal Processing and Control 91, 105916 (2024) | 10.1016/j.bspc.2023.105916 | null | q-bio.NC cond-mat.dis-nn nlin.AO q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Quantifying the complex/multifractal organization of the brain signals is
crucial to fully understanding the brain processes and structure. In this
contribution, we performed the multifractal analysis of the
electroencephalographic (EEG) data obtained from a controlled multiple
sclerosis (MS) study, focusing on the correlation between the degree of
multifractality, disease duration, and disability level. Our results reveal a
significant correspondence between the complexity of the time series and
multiple sclerosis development, quantified respectively by scaling exponents
and the Expanded Disability Status Scale (EDSS). Namely, for some brain
regions, a well-developed multifractality and little persistence of the time
series were identified in patients with a high level of disability, whereas the
control group and patients with low EDSS were characterised by persistence and
monofractality of the signals. The analysis of the cross-correlations between
EEG signals supported these results, with the most significant differences
identified for patients with EDSS $> 1$ and the combined group of patients with
EDSS $\leq 1$ and controls. No association between the multifractality and
disease duration was observed, indicating that the multifractal organisation of
the data is a hallmark of the development of the disease. The observed
complexity/multifractality of EEG signals is hypothetically a result of
neuronal compensation -- i.e., of optimizing neural processes in the presence
of structural brain degeneration. The presented study is highly relevant due to
the multifractal formalism used to quantify complexity and due to scarce
resting-state EEG evidence for cortical reorganization associated with
compensation.
| [
{
"created": "Tue, 16 Jan 2024 12:40:03 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Jan 2024 13:11:29 GMT",
"version": "v2"
}
] | 2024-01-30 | [
[
"Wątorek",
"Marcin",
""
],
[
"Tomczyk",
"Wojciech",
""
],
[
"Gawłowska",
"Magda",
""
],
[
"Golonka-Afek",
"Natalia",
""
],
[
"Żyrkowska",
"Aleksandra",
""
],
[
"Marona",
"Monika",
""
],
[
"Wnuk",
"Marcin",
""
],
[
"Słowik",
"Agnieszka",
""
],
[
"Ochab",
"Jeremi K.",
""
],
[
"Fafrowicz",
"Magdalena",
""
],
[
"Marek",
"Tadeusz",
""
],
[
"Oświęcimka",
"Paweł",
""
]
] | Quantifying the complex/multifractal organization of the brain signals is crucial to fully understanding the brain processes and structure. In this contribution, we performed the multifractal analysis of the electroencephalographic (EEG) data obtained from a controlled multiple sclerosis (MS) study, focusing on the correlation between the degree of multifractality, disease duration, and disability level. Our results reveal a significant correspondence between the complexity of the time series and multiple sclerosis development, quantified respectively by scaling exponents and the Expanded Disability Status Scale (EDSS). Namely, for some brain regions, a well-developed multifractality and little persistence of the time series were identified in patients with a high level of disability, whereas the control group and patients with low EDSS were characterised by persistence and monofractality of the signals. The analysis of the cross-correlations between EEG signals supported these results, with the most significant differences identified for patients with EDSS $> 1$ and the combined group of patients with EDSS $\leq 1$ and controls. No association between the multifractality and disease duration was observed, indicating that the multifractal organisation of the data is a hallmark of the development of the disease. The observed complexity/multifractality of EEG signals is hypothetically a result of neuronal compensation -- i.e., of optimizing neural processes in the presence of structural brain degeneration. The presented study is highly relevant due to the multifractal formalism used to quantify complexity and due to scarce resting-state EEG evidence for cortical reorganization associated with compensation. |
0709.0217 | Tobias Reichenbach | Tobias Reichenbach, Mauro Mobilia and Erwin Frey | Mobility promotes and jeopardizes biodiversity in rock-paper-scissors
games | Final submitted version; the printed version can be found at
http://dx.doi.org/10.1038/nature06095 Supplementary movies are available at
http://www.theorie.physik.uni-muenchen.de/lsfrey/images_content/movie1.AVI
and
http://www.theorie.physik.uni-muenchen.de/lsfrey/images_content/movie2.AVI | Nature 448, 1046-1049 (2007) | 10.1038/nature06095 | LMU-ASC 62/07 | q-bio.PE cond-mat.stat-mech physics.bio-ph | null | Biodiversity is essential to the viability of ecological systems. Species
diversity in ecosystems is promoted by cyclic, non-hierarchical interactions
among competing populations. Such non-transitive relations lead to an evolution
with central features represented by the `rock-paper-scissors' game, where rock
crushes scissors, scissors cut paper, and paper wraps rock. In combination with
spatial dispersal of static populations, this type of competition results in
the stable coexistence of all species and the long-term maintenance of
biodiversity. However, population mobility is a central feature of real
ecosystems: animals migrate, bacteria run and tumble. Here, we observe a
critical influence of mobility on species diversity. When mobility exceeds a
certain value, biodiversity is jeopardized and lost. In contrast, below this
critical threshold all subpopulations coexist and an entanglement of travelling
spiral waves forms in the course of temporal evolution. We establish that this
phenomenon is robust: it does not depend on the details of cyclic competition
or spatial environment. These findings have important implications for
maintenance and evolution of ecological systems and are relevant for the
formation and propagation of patterns in excitable media, such as chemical
kinetics or epidemic outbreaks.
| [
{
"created": "Mon, 3 Sep 2007 14:56:20 GMT",
"version": "v1"
},
{
"created": "Wed, 9 Apr 2008 19:20:49 GMT",
"version": "v2"
}
] | 2008-04-09 | [
[
"Reichenbach",
"Tobias",
""
],
[
"Mobilia",
"Mauro",
""
],
[
"Frey",
"Erwin",
""
]
] | Biodiversity is essential to the viability of ecological systems. Species diversity in ecosystems is promoted by cyclic, non-hierarchical interactions among competing populations. Such non-transitive relations lead to an evolution with central features represented by the `rock-paper-scissors' game, where rock crushes scissors, scissors cut paper, and paper wraps rock. In combination with spatial dispersal of static populations, this type of competition results in the stable coexistence of all species and the long-term maintenance of biodiversity. However, population mobility is a central feature of real ecosystems: animals migrate, bacteria run and tumble. Here, we observe a critical influence of mobility on species diversity. When mobility exceeds a certain value, biodiversity is jeopardized and lost. In contrast, below this critical threshold all subpopulations coexist and an entanglement of travelling spiral waves forms in the course of temporal evolution. We establish that this phenomenon is robust: it does not depend on the details of cyclic competition or spatial environment. These findings have important implications for maintenance and evolution of ecological systems and are relevant for the formation and propagation of patterns in excitable media, such as chemical kinetics or epidemic outbreaks. |
2012.00104 | Hui Wei Dr. | Hui Wei | A Neural Dynamic Model based on Activation Diffusion and a
Micro-Explanation for Cognitive Operations | null | null | null | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The neural mechanism of memory has a very close relation with the problem of
representation in artificial intelligence. In this paper, a computational model
was proposed to simulate the network of neurons in the brain and how they process
information. The model refers to morphological and electrophysiological
characteristics of neural information processing, and is based on the
assumption that neurons encode their firing sequence. The network structure,
functions for neural encoding at different stages, the representation of
stimuli in memory, and an algorithm to form a memory were presented. It also
analyzed the stability and recall rate for learning and the capacity of memory.
Because neural dynamic processes, one succeeding another, achieve a
neuron-level and coherent form by which information is represented and
processed, it may facilitate examination of various branches of Artificial
Intelligence, such as inference, problem solving, pattern recognition, natural
language processing and learning. The processes of cognitive manipulation
occurring in intelligent behavior have a consistent representation while all
being modeled from the perspective of computational neuroscience. Thus, the
dynamics of neurons make it possible to explain the inner mechanisms of
different intelligent behaviors by a unified model of cognitive architecture at
a micro-level.
| [
{
"created": "Fri, 27 Nov 2020 01:34:08 GMT",
"version": "v1"
}
] | 2020-12-02 | [
[
"Wei",
"Hui",
""
]
] | The neural mechanism of memory has a very close relation with the problem of representation in artificial intelligence. In this paper, a computational model was proposed to simulate the network of neurons in the brain and how they process information. The model refers to morphological and electrophysiological characteristics of neural information processing, and is based on the assumption that neurons encode their firing sequence. The network structure, functions for neural encoding at different stages, the representation of stimuli in memory, and an algorithm to form a memory were presented. It also analyzed the stability and recall rate for learning and the capacity of memory. Because neural dynamic processes, one succeeding another, achieve a neuron-level and coherent form by which information is represented and processed, it may facilitate examination of various branches of Artificial Intelligence, such as inference, problem solving, pattern recognition, natural language processing and learning. The processes of cognitive manipulation occurring in intelligent behavior have a consistent representation while all being modeled from the perspective of computational neuroscience. Thus, the dynamics of neurons make it possible to explain the inner mechanisms of different intelligent behaviors by a unified model of cognitive architecture at a micro-level. |
2111.02510 | William Waites | William Waites, Matteo Cavaliere, Vincent Danos, Ruchira Datta,
Rosalind M. Eggo, Timothy B. Hallett, David Manheim, Jasmina
Panovska-Griffiths, Timothy W. Russell and Veronika I. Zarnitsyna | Compositional modelling of immune response and virus transmission
dynamics | null | null | 10.1098/rsta.2021.0307 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Transmission models for infectious diseases are typically formulated in terms
of dynamics between individuals or groups with processes such as disease
progression or recovery for each individual captured phenomenologically,
without reference to underlying biological processes. Furthermore, the
construction of these models is often monolithic: they don't allow one to
readily modify the processes involved or to include new ones, or to combine
models at different scales. We show how to construct a simple model of immune
response to a respiratory virus and a model of transmission using an easily
modifiable set of rules allowing further refining and merging the two models
together. The immune response model reproduces the expected response curve of
PCR testing for COVID-19 and implies a long-tailed distribution of
infectiousness reflective of individual heterogeneity. This immune response
model, when combined with a transmission model, reproduces the previously
reported shift in the population distribution of viral loads along an epidemic
trajectory.
| [
{
"created": "Wed, 3 Nov 2021 20:24:13 GMT",
"version": "v1"
}
] | 2022-08-17 | [
[
"Waites",
"William",
""
],
[
"Cavaliere",
"Matteo",
""
],
[
"Danos",
"Vincent",
""
],
[
"Datta",
"Ruchira",
""
],
[
"Eggo",
"Rosalind M.",
""
],
[
"Hallett",
"Timothy B.",
""
],
[
"Manheim",
"David",
""
],
[
"Panovska-Griffiths",
"Jasmina",
""
],
[
"Russell",
"Timothy W.",
""
],
[
"Zarnitsyna",
"Veronika I.",
""
]
] | Transmission models for infectious diseases are typically formulated in terms of dynamics between individuals or groups with processes such as disease progression or recovery for each individual captured phenomenologically, without reference to underlying biological processes. Furthermore, the construction of these models is often monolithic: they don't allow one to readily modify the processes involved or to include new ones, or to combine models at different scales. We show how to construct a simple model of immune response to a respiratory virus and a model of transmission using an easily modifiable set of rules allowing further refining and merging the two models together. The immune response model reproduces the expected response curve of PCR testing for COVID-19 and implies a long-tailed distribution of infectiousness reflective of individual heterogeneity. This immune response model, when combined with a transmission model, reproduces the previously reported shift in the population distribution of viral loads along an epidemic trajectory. |
2112.07810 | Mike Steel Prof. | Kerry Manson, Charles Semple, Mike Steel | Counting and optimising maximum phylogenetic diversity sets | 24 pages, 5 figures, 1 table | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In conservation biology, phylogenetic diversity (PD) provides a way to
quantify the impact of the current rapid extinction of species on the
evolutionary `Tree of Life'. This approach recognises that extinction not only
removes species but also the branches of the tree on which unique features
shared by the extinct species arose. In this paper, we investigate three
questions that are relevant to PD. The first asks how many sets of species of
given size $k$ preserve the maximum possible amount of PD in a given tree. The
number of such maximum PD sets can be very large, even for moderate-sized
phylogenies. We provide a combinatorial characterisation of maximum PD sets,
focusing on the setting where the branch lengths are ultrametric (e.g.
proportional to time). This leads to a polynomial-time algorithm for
calculating the number of maximum PD sets of size $k$ by applying a generating
function; we also investigate the types of tree shapes that harbour the most
(or fewest) maximum PD sets of size $k$. Our second question concerns
optimising a linear function on the species (regarded as leaves of the
phylogenetic tree) across all the maximum PD sets of a given size. Using the
characterisation result from the first question, we show how this optimisation
problem can be solved in polynomial time, even though the number of maximum PD
sets can grow exponentially. Our third question considers a dual problem: If
$k$ species were to become extinct, then what is the largest possible {\em
loss} of PD in the resulting tree? For this question, we describe a
polynomial-time solution based on dynamic programming.
| [
{
"created": "Wed, 15 Dec 2021 00:29:15 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Apr 2022 03:29:01 GMT",
"version": "v2"
}
] | 2022-04-13 | [
[
"Manson",
"Kerry",
""
],
[
"Semple",
"Charles",
""
],
[
"Steel",
"Mike",
""
]
] | In conservation biology, phylogenetic diversity (PD) provides a way to quantify the impact of the current rapid extinction of species on the evolutionary `Tree of Life'. This approach recognises that extinction not only removes species but also the branches of the tree on which unique features shared by the extinct species arose. In this paper, we investigate three questions that are relevant to PD. The first asks how many sets of species of given size $k$ preserve the maximum possible amount of PD in a given tree. The number of such maximum PD sets can be very large, even for moderate-sized phylogenies. We provide a combinatorial characterisation of maximum PD sets, focusing on the setting where the branch lengths are ultrametric (e.g. proportional to time). This leads to a polynomial-time algorithm for calculating the number of maximum PD sets of size $k$ by applying a generating function; we also investigate the types of tree shapes that harbour the most (or fewest) maximum PD sets of size $k$. Our second question concerns optimising a linear function on the species (regarded as leaves of the phylogenetic tree) across all the maximum PD sets of a given size. Using the characterisation result from the first question, we show how this optimisation problem can be solved in polynomial time, even though the number of maximum PD sets can grow exponentially. Our third question considers a dual problem: If $k$ species were to become extinct, then what is the largest possible {\em loss} of PD in the resulting tree? For this question, we describe a polynomial-time solution based on dynamic programming. |
2012.12400 | Derrick VanGennep | Chen Shen, Derrick Van Gennep, Alexander F. Siegenfeld, Yaneer Bar-Yam | Comment on: A systematic review and meta-analysis of published research
data on COVID-19 infection-fatality rates | 5 pages, reviewing the work of Meyerowitz-Katz and Merone:
https://doi.org/10.1101/2020.05.03.20089854 | null | null | null | q-bio.PE q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The infection fatality rate (IFR) of COVID-19 is one of the measures of
disease impact that can be of importance for policy making. Here we show that
many of the studies on which these estimates are based are scientifically
flawed for reasons which include: nonsensical equations, unjustified
assumptions, small sample sizes, non-representative sampling (systematic
biases), incorrect definitions of symptomatic and asymptomatic cases
(identified and unidentified cases), typically assuming that cases which are
asymptomatic at the time of testing are the same as completely asymptomatic
(never symptomatic) cases. Moreover, a widely cited meta-analysis misrepresents
some of the IFR values in the original studies, and makes inappropriate
duplicate use of studies, or the information from studies, so that the results
that are averaged are not independent from each other. The lack of validity of
these research papers is of particular importance in view of their influence on
policies that affect lives and well-being in confronting a worldwide pandemic.
| [
{
"created": "Tue, 22 Dec 2020 22:56:25 GMT",
"version": "v1"
}
] | 2020-12-24 | [
[
"Shen",
"Chen",
""
],
[
"Van Gennep",
"Derrick",
""
],
[
"Siegenfeld",
"Alexander F.",
""
],
[
"Bar-Yam",
"Yaneer",
""
]
] | The infection fatality rate (IFR) of COVID-19 is one of the measures of disease impact that can be of importance for policy making. Here we show that many of the studies on which these estimates are based are scientifically flawed for reasons which include: nonsensical equations, unjustified assumptions, small sample sizes, non-representative sampling (systematic biases), incorrect definitions of symptomatic and asymptomatic cases (identified and unidentified cases), typically assuming that cases which are asymptomatic at the time of testing are the same as completely asymptomatic (never symptomatic) cases. Moreover, a widely cited meta-analysis misrepresents some of the IFR values in the original studies, and makes inappropriate duplicate use of studies, or the information from studies, so that the results that are averaged are not independent from each other. The lack of validity of these research papers is of particular importance in view of their influence on policies that affect lives and well-being in confronting a worldwide pandemic. |
q-bio/0311034 | Rodrick Wallace | Rodrick Wallace, Deborah Wallace, Robert G. Wallace | Coronary heart disease, chronic inflammation, and pathogenic social
hierarchy: a biological limit to possible reductions in morbidity and
mortality | 10 pages, 1 figure. In press, J. Nat. Med. Assn | null | null | null | q-bio.NC q-bio.TO | null | We suggest that a particular form of social hierarchy, which we characterize
as 'pathogenic', can, from the earliest stages of life, exert a formal analog
to evolutionary selection pressure, literally writing a permanent developmental
image of itself upon immune function as chronic vascular inflammation and its
consequences. The staged nature of resulting disease emerges 'naturally' as a
rough analog to punctuated equilibrium in evolutionary theory, although
selection pressure is a passive filter rather than an active agent like
structured psychosocial stress. Exposure differs according to the social
constructs of race, class, and ethnicity, accounting in large measure for
observed population-level differences in rates of coronary heart disease across
industrialized societies. American Apartheid, which enmeshes both majority and
minority communities in a social construct of pathogenic hierarchy, appears to
present a severe biological limit to continuing declines in coronary heart
disease for powerful as well as subordinate subgroups: 'Culture', to use the
words of the evolutionary anthropologist Robert Boyd, 'is as much a part of
human biology as the enamel on our teeth'.
| [
{
"created": "Tue, 25 Nov 2003 17:07:13 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Wallace",
"Rodrick",
""
],
[
"Wallace",
"Deborah",
""
],
[
"Wallace",
"Robert G.",
""
]
] | We suggest that a particular form of social hierarchy, which we characterize as 'pathogenic', can, from the earliest stages of life, exert a formal analog to evolutionary selection pressure, literally writing a permanent developmental image of itself upon immune function as chronic vascular inflammation and its consequences. The staged nature of resulting disease emerges 'naturally' as a rough analog to punctuated equilibrium in evolutionary theory, although selection pressure is a passive filter rather than an active agent like structured psychosocial stress. Exposure differs according to the social constructs of race, class, and ethnicity, accounting in large measure for observed population-level differences in rates of coronary heart disease across industrialized societies. American Apartheid, which enmeshes both majority and minority communities in a social construct of pathogenic hierarchy, appears to present a severe biological limit to continuing declines in coronary heart disease for powerful as well as subordinate subgroups: 'Culture', to use the words of the evolutionary anthropologist Robert Boyd, 'is as much a part of human biology as the enamel on our teeth'. |
q-bio/0502013 | M. J. Gagen | M. J. Gagen and J. S. Mattick | Accelerating, hyper-accelerating, and decelerating probabilistic
networks | 13 pages, 9 figures | Physical Review E, 72, 016123 (2005) | 10.1103/PhysRevE.72.016123 | null | q-bio.MN q-bio.QM | null | Many growing networks possess accelerating statistics where the number of
links added with each new node is an increasing function of network size so the
total number of links increases faster than linearly with network size. In
particular, biological networks can display a quadratic growth in regulator
number with genome size even while remaining sparsely connected. These features
are mutually incompatible in standard treatments of network theory which
typically require that every new network node possesses at least one
connection. To model sparsely connected networks, we generalize existing
approaches and add each new node with a probabilistic number of links to
generate either accelerating, hyper-accelerating, or even decelerating network
statistics in different regimes. Under preferential attachment for example,
slowly accelerating networks display stationary scale-free statistics
relatively independent of network size while more rapidly accelerating networks
display a transition from scale-free to exponential statistics with network
growth. Such transitions explain, for instance, the evolutionary record of
single-celled organisms which display strict size and complexity limits.
| [
{
"created": "Sun, 13 Feb 2005 06:27:32 GMT",
"version": "v1"
}
] | 2017-12-22 | [
[
"Gagen",
"M. J.",
""
],
[
"Mattick",
"J. S.",
""
]
] | Many growing networks possess accelerating statistics where the number of links added with each new node is an increasing function of network size so the total number of links increases faster than linearly with network size. In particular, biological networks can display a quadratic growth in regulator number with genome size even while remaining sparsely connected. These features are mutually incompatible in standard treatments of network theory which typically require that every new network node possesses at least one connection. To model sparsely connected networks, we generalize existing approaches and add each new node with a probabilistic number of links to generate either accelerating, hyper-accelerating, or even decelerating network statistics in different regimes. Under preferential attachment for example, slowly accelerating networks display stationary scale-free statistics relatively independent of network size while more rapidly accelerating networks display a transition from scale-free to exponential statistics with network growth. Such transitions explain, for instance, the evolutionary record of single-celled organisms which display strict size and complexity limits. |
2011.01688 | Vikas Trivedi | Vikas Trivedi, Sara Madaan, Daniel B. Holland, Le A. Trinh, Scott E.
Fraser, Thai V. Truong | Imaging the Beating Heart with Macroscopic Phase Stamping | 4 pages | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present a novel approach for imaging the beating embryonic heart, based on
combining two independent imaging channels to capture the full spatio-temporal
information of the moving 3D structure. High-resolution, optically-sectioned
image recording is accompanied by simultaneous acquisition of low-resolution,
whole-heart recording, allowing the latter to be used in post-acquisition
processing to determine the macroscopic spatio-temporal phase of the heart
beating cycle. Once determined, or 'stamped', the phase information common to
both imaging channels is used to reconstruct the 3D beating heart. We
demonstrated our approach in imaging the beating heart of the zebrafish embryo,
capturing the entire heart over its full beating cycle, and characterizing
cellular dynamic behavior with sub-cellular resolution.
| [
{
"created": "Tue, 3 Nov 2020 13:24:00 GMT",
"version": "v1"
}
] | 2020-11-04 | [
[
"Trivedi",
"Vikas",
""
],
[
"Madaan",
"Sara",
""
],
[
"Holland",
"Daniel B.",
""
],
[
"Trinh",
"Le A.",
""
],
[
"Fraser",
"Scott E.",
""
],
[
"Truong",
"Thai V.",
""
]
] | We present a novel approach for imaging the beating embryonic heart, based on combining two independent imaging channels to capture the full spatio-temporal information of the moving 3D structure. High-resolution, optically-sectioned image recording is accompanied by simultaneous acquisition of low-resolution, whole-heart recording, allowing the latter to be used in post-acquisition processing to determine the macroscopic spatio-temporal phase of the heart beating cycle. Once determined, or 'stamped', the phase information common to both imaging channels is used to reconstruct the 3D beating heart. We demonstrated our approach in imaging the beating heart of the zebrafish embryo, capturing the entire heart over its full beating cycle, and characterizing cellular dynamic behavior with sub-cellular resolution. |
1608.06548 | Joshua Vogelstein | Joshua T. Vogelstein, Katrin Amunts, Andreas Andreou, Dora Angelaki,
Giorgio Ascoli, Cori Bargmann, Randal Burns, Corrado Cali, Frances Chance,
Miyoung Chun, George Church, Hollis Cline, Todd Coleman, Stephanie de La
Rochefoucauld, Winfried Denk, Ana Belen Elgoyhen, Ralph Etienne Cummings,
Alan Evans, Kenneth Harris, Michael Hausser, Sean Hill, Samuel Inverso, Chad
Jackson, Viren Jain, Rob Kass, Bobby Kasthuri, Gregory Kiar, Konrad Kording,
Sandhya Koushika, John Krakauer, Story Landis, Jeff Layton, Qingming Luo,
Adam Marblestone, David Markowitz, Justin McArthur, Brett Mensh, Michael
Milham, Partha Mitra, Pedja Neskovic, Miguel Nicolelis, Richard O'Brien, Aude
Oliva, Gergo Orban, Hanchuan Peng, Alyssa Picchini-Schaffer, Marina
Picciotto, Jean-Baptiste Poline, Mu-ming Poo, Alex Pouget, Sri Raghavachari,
Jane Roskams, Terry Sejnowski, Fritz Sommer, Nelson Spruston, Larry Swanson,
Arthur Toga, R. Jacob Vogelstein, Rafael Yuste, Anthony Zador, Richard
Huganir, Michael Miller | Grand Challenges for Global Brain Sciences | 6 pages | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The next grand challenges for society and science are in the brain sciences.
A collection of 60+ scientists from around the world, together with 10+
observers from national, private, and foundations, spent two days together
discussing the top challenges that we could solve as a global community in the
next decade. We eventually settled on three challenges, spanning anatomy,
physiology, and medicine. Addressing all three challenges requires novel
computational infrastructure. The group proposed the advent of The
International Brain Station (TIBS), to address these challenges, and launch
brain sciences to the next level of understanding.
| [
{
"created": "Tue, 23 Aug 2016 15:33:12 GMT",
"version": "v1"
},
{
"created": "Tue, 30 Aug 2016 19:36:38 GMT",
"version": "v2"
},
{
"created": "Thu, 27 Oct 2016 15:31:08 GMT",
"version": "v3"
}
] | 2016-10-28 | [
[
"Vogelstein",
"Joshua T.",
""
],
[
"Amunts",
"Katrin",
""
],
[
"Andreou",
"Andreas",
""
],
[
"Angelaki",
"Dora",
""
],
[
"Ascoli",
"Giorgio",
""
],
[
"Bargmann",
"Cori",
""
],
[
"Burns",
"Randal",
""
],
[
"Cali",
"Corrado",
""
],
[
"Chance",
"Frances",
""
],
[
"Chun",
"Miyoung",
""
],
[
"Church",
"George",
""
],
[
"Cline",
"Hollis",
""
],
[
"Coleman",
"Todd",
""
],
[
"de La Rochefoucauld",
"Stephanie",
""
],
[
"Denk",
"Winfried",
""
],
[
"Elgoyhen",
"Ana Belen",
""
],
[
"Cummings",
"Ralph Etienne",
""
],
[
"Evans",
"Alan",
""
],
[
"Harris",
"Kenneth",
""
],
[
"Hausser",
"Michael",
""
],
[
"Hill",
"Sean",
""
],
[
"Inverso",
"Samuel",
""
],
[
"Jackson",
"Chad",
""
],
[
"Jain",
"Viren",
""
],
[
"Kass",
"Rob",
""
],
[
"Kasthuri",
"Bobby",
""
],
[
"Kiar",
"Gregory",
""
],
[
"Kording",
"Konrad",
""
],
[
"Koushika",
"Sandhya",
""
],
[
"Krakauer",
"John",
""
],
[
"Landis",
"Story",
""
],
[
"Layton",
"Jeff",
""
],
[
"Luo",
"Qingming",
""
],
[
"Marblestone",
"Adam",
""
],
[
"Markowitz",
"David",
""
],
[
"McArthur",
"Justin",
""
],
[
"Mensh",
"Brett",
""
],
[
"Milham",
"Michael",
""
],
[
"Mitra",
"Partha",
""
],
[
"Neskovic",
"Pedja",
""
],
[
"Nicolelis",
"Miguel",
""
],
[
"O'Brien",
"Richard",
""
],
[
"Oliva",
"Aude",
""
],
[
"Orban",
"Gergo",
""
],
[
"Peng",
"Hanchuan",
""
],
[
"Picchini-Schaffer",
"Alyssa",
""
],
[
"Picciotto",
"Marina",
""
],
[
"Poline",
"Jean-Baptiste",
""
],
[
"Poo",
"Mu-ming",
""
],
[
"Pouget",
"Alex",
""
],
[
"Raghavachari",
"Sri",
""
],
[
"Roskams",
"Jane",
""
],
[
"Sejnowski",
"Terry",
""
],
[
"Sommer",
"Fritz",
""
],
[
"Spruston",
"Nelson",
""
],
[
"Swanson",
"Larry",
""
],
[
"Toga",
"Arthur",
""
],
[
"Vogelstein",
"R. Jacob",
""
],
[
"Yuste",
"Rafael",
""
],
[
"Zador",
"Anthony",
""
],
[
"Huganir",
"Richard",
""
],
[
"Miller",
"Michael",
""
]
] | The next grand challenges for society and science are in the brain sciences. A collection of 60+ scientists from around the world, together with 10+ observers from national, private, and foundations, spent two days together discussing the top challenges that we could solve as a global community in the next decade. We eventually settled on three challenges, spanning anatomy, physiology, and medicine. Addressing all three challenges requires novel computational infrastructure. The group proposed the advent of The International Brain Station (TIBS), to address these challenges, and launch brain sciences to the next level of understanding. |
q-bio/0510050 | Alexander Zumdieck | Alexander Zumdieck, Marco Cosentino Lagomarsino, Catalin Tanase,
Karsten Kruse, Bela Mulder, Marileen Dogterom, and Frank J"ulicher | Continuum Description of the Cytoskeleton: Ring Formation in the Cell
Cortex | 5 pages, 4 figures | null | 10.1103/PhysRevLett.95.258103 | null | q-bio.SC | null | Motivated by the formation of ring-like filament structures in the cortex of
plant and animal cells, we study the dynamics of a two-dimensional layer of
cytoskeletal filaments and motor proteins near a surface by a general continuum
theory. As a result of active processes, dynamic patterns of filament
orientation and density emerge via instabilities. We show that
self-organization phenomena can lead to the formation of stationary and
oscillating rings. We present state diagrams which reveal a rich scenario of
asymptotic behaviors and discuss the role of boundary conditions.
| [
{
"created": "Thu, 27 Oct 2005 12:30:47 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Zumdieck",
"Alexander",
""
],
[
"Lagomarsino",
"Marco Cosentino",
""
],
[
"Tanase",
"Catalin",
""
],
[
"Kruse",
"Karsten",
""
],
[
"Mulder",
"Bela",
""
],
[
"Dogterom",
"Marileen",
""
],
[
"J\"ulicher",
"Frank",
""
]
] | Motivated by the formation of ring-like filament structures in the cortex of plant and animal cells, we study the dynamics of a two-dimensional layer of cytoskeletal filaments and motor proteins near a surface by a general continuum theory. As a result of active processes, dynamic patterns of filament orientation and density emerge via instabilities. We show that self-organization phenomena can lead to the formation of stationary and oscillating rings. We present state diagrams which reveal a rich scenario of asymptotic behaviors and discuss the role of boundary conditions. |
1909.11878 | Eric Lofgren | Christopher T. Short, Matthew S. Mietchen, Eric T. Lofgren (for the
CDC MInD-Healthcare Program) | Transient Dynamics of Infection Transmission in a Simulated Intensive
Care Unit | Results to be presented at International Symposium on Biomathematics
and Ecology Education and Research 2019 | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Healthcare-associated infections (HAIs) remain a public health problem.
Previous work showed intensive care unit (ICU) population structure impacts
methicillin-resistant Staphylococcus aureus (MRSA) rates. Unexplored in that
work was the transient dynamics of this system. We consider the dynamics of
MRSA in an ICU in three different models: 1) a Ross-McDonald model with a
single healthcare staff type, 2) a Ross-McDonald model with nurses and doctors
considered as separate populations and 3) a meta-population model that segments
patients into smaller groups seen by a single nurse. The basic reproduction
number, R0 is derived using the Next Generation Matrix method, while the
importance of the position of patients within the meta-population model is
assessed via stochastic simulation. The single-staff model had an R0 of 0.337,
while the other two models had R0s of 0.278. The meta-population model's R0 was
not sensitive to the time nurses spent with their assigned patients vs.
unassigned patients. This suggests previous results showing that simulated
infection rates are dependent on this parameter are the result of differences
in the transient dynamics between the models, rather than differing long-term
equilibria.
| [
{
"created": "Thu, 26 Sep 2019 04:12:30 GMT",
"version": "v1"
}
] | 2019-09-27 | [
[
"Short",
"Christopher T.",
"",
"for the\n CDC MInD-Healthcare Program"
],
[
"Mietchen",
"Matthew S.",
"",
"for the\n CDC MInD-Healthcare Program"
],
[
"Lofgren",
"Eric T.",
"",
"for the\n CDC MInD-Healthcare Program"
]
] | Healthcare-associated infections (HAIs) remain a public health problem. Previous work showed intensive care unit (ICU) population structure impacts methicillin-resistant Staphylococcus aureus (MRSA) rates. Unexplored in that work was the transient dynamics of this system. We consider the dynamics of MRSA in an ICU in three different models: 1) a Ross-McDonald model with a single healthcare staff type, 2) a Ross-McDonald model with nurses and doctors considered as separate populations and 3) a meta-population model that segments patients into smaller groups seen by a single nurse. The basic reproduction number, R0 is derived using the Next Generation Matrix method, while the importance of the position of patients within the meta-population model is assessed via stochastic simulation. The single-staff model had an R0 of 0.337, while the other two models had R0s of 0.278. The meta-population model's R0 was not sensitive to the time nurses spent with their assigned patients vs. unassigned patients. This suggests previous results showing that simulated infection rates are dependent on this parameter are the result of differences in the transient dynamics between the models, rather than differing long-term equilibria. |
1102.0933 | Elena Agliari | Elena Agliari, Adriano Barra, Kristian Gervasi Vidal, Francesco Guerra | Can persistent Epstein-Barr virus infection induce Chronic Fatigue
Syndrome as a Pavlov reflex of the immune response? | 26 pages, 9 figures; to appear in the J. Bio. Dyn | null | null | null | q-bio.CB cond-mat.stat-mech physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chronic Fatigue Syndrome is a protracted illness condition (lasting even
years) appearing with strong flu symptoms and systemic defiances by the immune
system. Here, by means of statistical mechanics techniques, we study the most
widely accepted picture for its genesis, namely a persistent acute
mononucleosis infection, and we show how such infection may drive the immune
system toward an out-of-equilibrium metastable state displaying chronic
activation of both humoral and cellular responses (a state of full inflammation
without a direct "causes-effect" reason). By exploiting a bridge with a neural
scenario, we mirror killer lymphocytes $T_K$ and $B$ cells to neurons and
helper lymphocytes $T_{H_1},T_{H_2}$ to synapses, hence showing that the immune
system may experience the Pavlov conditional reflex phenomenon: if the
exposition to a stimulus (EBV antigens) lasts for too long, strong internal
correlations among $B,T_K,T_H$ may develop ultimately resulting in a persistent
activation even though the stimulus itself is removed. These outcomes are
corroborated by several experimental findings.
| [
{
"created": "Fri, 4 Feb 2011 15:00:04 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Jun 2012 15:14:11 GMT",
"version": "v2"
}
] | 2012-06-11 | [
[
"Agliari",
"Elena",
""
],
[
"Barra",
"Adriano",
""
],
[
"Vidal",
"Kristian Gervasi",
""
],
[
"Guerra",
"Francesco",
""
]
] | Chronic Fatigue Syndrome is a protracted illness condition (lasting even years) appearing with strong flu symptoms and systemic defiances by the immune system. Here, by means of statistical mechanics techniques, we study the most widely accepted picture for its genesis, namely a persistent acute mononucleosis infection, and we show how such infection may drive the immune system toward an out-of-equilibrium metastable state displaying chronic activation of both humoral and cellular responses (a state of full inflammation without a direct "causes-effect" reason). By exploiting a bridge with a neural scenario, we mirror killer lymphocytes $T_K$ and $B$ cells to neurons and helper lymphocytes $T_{H_1},T_{H_2}$ to synapses, hence showing that the immune system may experience the Pavlov conditional reflex phenomenon: if the exposition to a stimulus (EBV antigens) lasts for too long, strong internal correlations among $B,T_K,T_H$ may develop ultimately resulting in a persistent activation even though the stimulus itself is removed. These outcomes are corroborated by several experimental findings. |
q-bio/0407016 | John Cain | David G. Schaeffer, John W. Cain, Daniel J. Gauthier, Soma S. Kalb,
Wanda Krassowska, Robert A. Oliver, and Elena G. Tolkacheva | An ionically based mapping model with memory for cardiac restitution | 14 pages, 6 figures | null | null | null | q-bio.QM q-bio.TO | null | Many features of the sequence of action potentials produced by repeated
stimulation of a cardiac patch can be modeled by a 1D mapping, but not the full
behavior observed in the restitution portrait: in particular, not (i) distinct
slopes for dynamic and S1-S2 restitution (rate dependence) and not (ii) long
transients in the approach to steady state (accommodation). To address these
shortcomings, \emph{ad hoc} 2D mappings, where the second variable is a
``memory'' variable, have been proposed; it seems that these models exhibit
some, but not all, of the relevant behavior. In this paper we introduce a new
2D mapping and determine a set of parameters for it that gives a rather
accurate description of the full restitution portrait found for one animal. The
changes in the mapping, compared to previous models, result from requiring that
the mapping can be derived as an asymptotic limit of a simple ionic model.
Among other benefits, one can interpret the parameters in the mapping in terms
of the ionic model. The ionic model is an extension of a two-current model that
adds a third dependent variable, a generalized concentration. The simplicity of
the ionic model and the physiological basis for the mapping contribute to the
usefulness of these ideas for describing restitution data in a variety of
contexts. The fitting procedure is straightforward and can easily be applied to
obtain a mathematical model for data from other experiments, including
experiments on different species. Uniqueness of the parameter choice is also
discussed.
| [
{
"created": "Fri, 9 Jul 2004 13:43:04 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Schaeffer",
"David G.",
""
],
[
"Cain",
"John W.",
""
],
[
"Gauthier",
"Daniel J.",
""
],
[
"Kalb",
"Soma S.",
""
],
[
"Krassowska",
"Wanda",
""
],
[
"Oliver",
"Robert A.",
""
],
[
"Tolkacheva",
"Elena G.",
""
]
] | Many features of the sequence of action potentials produced by repeated stimulation of a cardiac patch can be modeled by a 1D mapping, but not the full behavior observed in the restitution portrait: in particular, not (i) distinct slopes for dynamic and S1-S2 restitution (rate dependence) and not (ii) long transients in the approach to steady state (accommodation). To address these shortcomings, \emph{ad hoc} 2D mappings, where the second variable is a ``memory'' variable, have been proposed; it seems that these models exhibit some, but not all, of the relevant behavior. In this paper we introduce a new 2D mapping and determine a set of parameters for it that gives a rather accurate description of the full restitution portrait found for one animal. The changes in the mapping, compared to previous models, result from requiring that the mapping can be derived as an asymptotic limit of a simple ionic model. Among other benefits, one can interpret the parameters in the mapping in terms of the ionic model. The ionic model is an extension of a two-current model that adds a third dependent variable, a generalized concentration. The simplicity of the ionic model and the physiological basis for the mapping contribute to the usefulness of these ideas for describing restitution data in a variety of contexts. The fitting procedure is straightforward and can easily be applied to obtain a mathematical model for data from other experiments, including experiments on different species. Uniqueness of the parameter choice is also discussed. |
1609.04245 | Felix Z. Hoffmann | Felix Z. Hoffmann, Jochen Triesch | Non-random network connectivity comes in pairs | 16 pages, 3 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Overrepresentation of bidirectional connections in local cortical networks
has been repeatedly reported and is in the focus of the ongoing discussion of
non-random connectivity. Here we show in a brief mathematical analysis that in
a network in which connection probabilities are symmetric in pairs, $P_{ij} =
P_{ji}$, the occurrence of bidirectional connections and non-random structures
are inherently linked; an overabundance of reciprocally connected pairs emerges
necessarily when the network structure deviates from a random network in any
form.
| [
{
"created": "Wed, 14 Sep 2016 12:54:43 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Dec 2016 17:59:01 GMT",
"version": "v2"
}
] | 2016-12-14 | [
[
"Hoffmann",
"Felix Z.",
""
],
[
"Triesch",
"Jochen",
""
]
] | Overrepresentation of bidirectional connections in local cortical networks has been repeatedly reported and is in the focus of the ongoing discussion of non-random connectivity. Here we show in a brief mathematical analysis that in a network in which connection probabilities are symmetric in pairs, $P_{ij} = P_{ji}$, the occurrence of bidirectional connections and non-random structures are inherently linked; an overabundance of reciprocally connected pairs emerges necessarily when the network structure deviates from a random network in any form. |
2010.00740 | Paul Birrell | Francesco Brizzi and Paul J Birrell and Peter Kirwan and Dana Ogaz and
Alison E Brown and Valerie C Delpech and O Noel Gill and Daniela De Angelis | HIV transmission in men who have sex with men in England: on track for
elimination by 2030? | 20 pages, 5 figures | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Background: After a decade of a treatment as prevention (TasP) strategy based
on progressive HIV testing scale-up and earlier treatment, a reduction in the
estimated number of new infections in men-who-have-sex-with-men (MSM) in
England had yet to be identified by 2010. To achieve internationally agreed
targets for HIV control and elimination, test-and-treat prevention efforts have
been dramatically intensified over the period 2010-2015, and, from 2016,
further strengthened by pre-exposure prophylaxis (PrEP).
Methods: Application of a novel age-stratified back-calculation approach to
data on new HIV diagnoses and CD4 count-at-diagnosis, enabled age-specific
estimation of HIV incidence, undiagnosed infections and mean time-to-diagnosis
across both the 2010-2015 and 2016-2018 periods. Estimated incidence trends
were then extrapolated, to quantify the likelihood of achieving HIV elimination
by 2030.
Findings: A fall in HIV incidence in MSM is estimated to have started in
2012/3, eighteen months before the observed fall in new diagnoses. A steep
decrease from 2,770 annual infections (95% credible interval 2,490-3,040) in
2013 to 1,740 (1,500-2,010) in 2015 is estimated, followed by steady decline
from 2016, reaching 854 (441-1,540) infections in 2018. A decline is
consistently estimated in all age groups, with a fall particularly marked in
the 24-35 age group, and slowest in the 45+ group. Comparable declines are
estimated in the number of undiagnosed infections.
Interpretation: The peak and subsequent sharp decline in HIV incidence
occurred prior to the phase-in of PrEP. Definining elimination as a public
health threat to be < 50 new infections (1.1 infections per 10,000 at risk),
40% of incidence projections hit this threshold by 2030. In practice, targeted
policies will be required, particularly among the 45+y where STIs are
increasing most rapidly.
| [
{
"created": "Fri, 2 Oct 2020 01:13:14 GMT",
"version": "v1"
}
] | 2020-10-05 | [
[
"Brizzi",
"Francesco",
""
],
[
"Birrell",
"Paul J",
""
],
[
"Kirwan",
"Peter",
""
],
[
"Ogaz",
"Dana",
""
],
[
"Brown",
"Alison E",
""
],
[
"Delpech",
"Valerie C",
""
],
[
"Gill",
"O Noel",
""
],
[
"De Angelis",
"Daniela",
""
]
] | Background: After a decade of a treatment as prevention (TasP) strategy based on progressive HIV testing scale-up and earlier treatment, a reduction in the estimated number of new infections in men-who-have-sex-with-men (MSM) in England had yet to be identified by 2010. To achieve internationally agreed targets for HIV control and elimination, test-and-treat prevention efforts have been dramatically intensified over the period 2010-2015, and, from 2016, further strengthened by pre-exposure prophylaxis (PrEP). Methods: Application of a novel age-stratified back-calculation approach to data on new HIV diagnoses and CD4 count-at-diagnosis, enabled age-specific estimation of HIV incidence, undiagnosed infections and mean time-to-diagnosis across both the 2010-2015 and 2016-2018 periods. Estimated incidence trends were then extrapolated, to quantify the likelihood of achieving HIV elimination by 2030. Findings: A fall in HIV incidence in MSM is estimated to have started in 2012/3, eighteen months before the observed fall in new diagnoses. A steep decrease from 2,770 annual infections (95% credible interval 2,490-3,040) in 2013 to 1,740 (1,500-2,010) in 2015 is estimated, followed by steady decline from 2016, reaching 854 (441-1,540) infections in 2018. A decline is consistently estimated in all age groups, with a fall particularly marked in the 24-35 age group, and slowest in the 45+ group. Comparable declines are estimated in the number of undiagnosed infections. Interpretation: The peak and subsequent sharp decline in HIV incidence occurred prior to the phase-in of PrEP. Defining elimination as a public health threat to be < 50 new infections (1.1 infections per 10,000 at risk), 40% of incidence projections hit this threshold by 2030. In practice, targeted policies will be required, particularly among the 45+y where STIs are increasing most rapidly. |
1605.00886 | Bernhard Mehlig | M. Alizadehheidari, E. Werner, C. Noble, M. Reiter-Schad, L. K.
Nyberg, J. Fritzsche, B. Mehlig, J. O. Tegenfeldt, T. Ambj\"ornsson, F.
Persson and F. Westerlund | Nanoconfined circular and linear DNA - equilibrium conformations and
unfolding kinetics | 21 pages, 7 figures, 1 table | Macromolecules 48 (2015) 871 | 10.1021/ma5022067 | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Studies of circular DNA confined to nanofluidic channels are relevant both
from a fundamental polymer-physics perspective and due to the importance of
circular DNA molecules in vivo. We here observe the unfolding of DNA from the
circular to linear configuration as a light-induced double strand break occurs,
characterize the dynamics, and compare the equilibrium conformational
statistics of linear and circular configurations. This is important because it
allows us to determine to which extent existing statistical theories describe
the extension of confined circular DNA. We find that the ratio of the
extensions of confined linear and circular DNA configurations increases as the
buffer concentration decreases. The experimental results fall between
theoretical predictions for the extended de Gennes regime at weaker confinement
and the Odijk regime at stronger confinement. We show that it is possible to
directly distinguish between circular and linear DNA molecules by measuring the
emission intensity from the DNA. Finally, we determine the rate of unfolding
and show that this rate is larger for more confined DNA, possibly reflecting
the corresponding larger difference in entropy between the circular and linear
configurations.
| [
{
"created": "Tue, 3 May 2016 13:05:37 GMT",
"version": "v1"
}
] | 2016-05-04 | [
[
"Alizadehheidari",
"M.",
""
],
[
"Werner",
"E.",
""
],
[
"Noble",
"C.",
""
],
[
"Reiter-Schad",
"M.",
""
],
[
"Nyberg",
"L. K.",
""
],
[
"Fritzsche",
"J.",
""
],
[
"Mehlig",
"B.",
""
],
[
"Tegenfeldt",
"J. O.",
""
],
[
"Ambjörnsson",
"T.",
""
],
[
"Persson",
"F.",
""
],
[
"Westerlund",
"F.",
""
]
] | Studies of circular DNA confined to nanofluidic channels are relevant both from a fundamental polymer-physics perspective and due to the importance of circular DNA molecules in vivo. We here observe the unfolding of DNA from the circular to linear configuration as a light-induced double strand break occurs, characterize the dynamics, and compare the equilibrium conformational statistics of linear and circular configurations. This is important because it allows us to determine to which extent existing statistical theories describe the extension of confined circular DNA. We find that the ratio of the extensions of confined linear and circular DNA configurations increases as the buffer concentration decreases. The experimental results fall between theoretical predictions for the extended de Gennes regime at weaker confinement and the Odijk regime at stronger confinement. We show that it is possible to directly distinguish between circular and linear DNA molecules by measuring the emission intensity from the DNA. Finally, we determine the rate of unfolding and show that this rate is larger for more confined DNA, possibly reflecting the corresponding larger difference in entropy between the circular and linear configurations. |
1406.7140 | Paul Gardner | Paul P. Gardner, Mario Fasold, Sarah W. Burge, Maria Ninova, Jana
Hertel, Stephanie Kehr, Tammy E. Steeves, Sam Griffiths-Jones and Peter F.
Stadler | Conservation and losses of avian non-coding RNA loci | 17 pages, 1 figure | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/3.0/ | Here we present the results of a large-scale bioinformatic annotation of
non-coding RNA loci in 48 avian genomes. Our approach uses probabilistic models
of hand-curated families from the Rfam database to infer conserved RNA families
within each avian genome. We supplement these annotations with predictions from
the tRNA annotation tool, tRNAscan-SE and microRNAs from miRBase. We show that
a number of lncRNA-associated loci are conserved between birds and mammals,
including several intriguing cases where the reported mammalian lncRNA function
is not conserved in birds. We also demonstrate extensive conservation of
classical ncRNAs (e.g., tRNAs) and more recently discovered ncRNAs (e.g.,
snoRNAs and miRNAs) in birds. Furthermore, we describe numerous "losses" of
several RNA families, and attribute these to genuine loss, divergence or
missing data. In particular, we show that many of these losses are due to the
challenges associated with assembling avian microchromosomes. These combined
results illustrate the utility of applying homology-based methods for
annotating novel vertebrate genomes.
| [
{
"created": "Fri, 27 Jun 2014 10:21:30 GMT",
"version": "v1"
}
] | 2014-06-30 | [
[
"Gardner",
"Paul P.",
""
],
[
"Fasold",
"Mario",
""
],
[
"Burge",
"Sarah W.",
""
],
[
"Ninova",
"Maria",
""
],
[
"Hertel",
"Jana",
""
],
[
"Kehr",
"Stephanie",
""
],
[
"Steeves",
"Tammy E.",
""
],
[
"Griffiths-Jones",
"Sam",
""
],
[
"Stadler",
"Peter F.",
""
]
] | Here we present the results of a large-scale bioinformatic annotation of non-coding RNA loci in 48 avian genomes. Our approach uses probabilistic models of hand-curated families from the Rfam database to infer conserved RNA families within each avian genome. We supplement these annotations with predictions from the tRNA annotation tool, tRNAscan-SE and microRNAs from miRBase. We show that a number of lncRNA-associated loci are conserved between birds and mammals, including several intriguing cases where the reported mammalian lncRNA function is not conserved in birds. We also demonstrate extensive conservation of classical ncRNAs (e.g., tRNAs) and more recently discovered ncRNAs (e.g., snoRNAs and miRNAs) in birds. Furthermore, we describe numerous "losses" of several RNA families, and attribute these to genuine loss, divergence or missing data. In particular, we show that many of these losses are due to the challenges associated with assembling Avian microchromosomes. These combined results illustrate the utility of applying homology-based methods for annotating novel vertebrate genomes. |
1405.3504 | Fran\c{c}ois Blanquart | Fran\c{c}ois Blanquart, Guillaume Achaz, Thomas Bataillon, Olivier
Tenaillon | Properties of selected mutations and genotypic landscapes under Fisher's
Geometric Model | 51 pages, 8 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The fitness landscape - the mapping between genotypes and fitness -
determines properties of the process of adaptation. Several small genetic
fitness landscapes have recently been built by selecting a handful of
beneficial mutations and measuring fitness of all combinations of these
mutations. Here we generate several testable predictions for the properties of
these landscapes under Fisher's geometric model of adaptation (FGMA). When far
from the fitness optimum, we analytically compute the fitness effect of
beneficial mutations and their epistatic interactions. We show that epistasis
may be negative or positive on average depending on the distance of the
ancestral genotype to the optimum and whether mutations were independently
selected or co-selected in an adaptive walk. Using simulations, we show that
genetic landscapes built from FGMA are very close to an additive landscape when
the ancestral strain is far from the optimum. However, when close to the
optimum, a large diversity of landscapes with substantial ruggedness and sign
epistasis emerged. Strikingly, landscapes built from different realizations of
stochastic adaptive walks in the same exact conditions were highly variable,
suggesting that several realizations of small genetic landscapes are needed to
gain information about the underlying architecture of the global adaptive
landscape.
| [
{
"created": "Wed, 14 May 2014 14:14:41 GMT",
"version": "v1"
}
] | 2014-05-15 | [
[
"Blanquart",
"François",
""
],
[
"Achaz",
"Guillaume",
""
],
[
"Bataillon",
"Thomas",
""
],
[
"Tenaillon",
"Olivier",
""
]
] | The fitness landscape - the mapping between genotypes and fitness - determines properties of the process of adaptation. Several small genetic fitness landscapes have recently been built by selecting a handful of beneficial mutations and measuring fitness of all combinations of these mutations. Here we generate several testable predictions for the properties of these landscapes under Fisher's geometric model of adaptation (FGMA). When far from the fitness optimum, we analytically compute the fitness effect of beneficial mutations and their epistatic interactions. We show that epistasis may be negative or positive on average depending on the distance of the ancestral genotype to the optimum and whether mutations were independently selected or co-selected in an adaptive walk. Using simulations, we show that genetic landscapes built from FGMA are very close to an additive landscape when the ancestral strain is far from the optimum. However, when close to the optimum, a large diversity of landscape with substantial ruggedness and sign epistasis emerged. Strikingly, landscapes built from different realizations of stochastic adaptive walks in the same exact conditions were highly variable, suggesting that several realizations of small genetic landscapes are needed to gain information about the underlying architecture of the global adaptive landscape. |
2009.00107 | Yichen Wang | Ricky Wang | Data Mining and Analytical Models to Predict and Identify Adverse
Drug-drug Interactions | 21 pages | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of multiple drugs accounts for almost 30% of all hospital admissions
and is the 5th leading cause of death in America. Since over 30% of all adverse
drug events (ADEs) are thought to be caused by drug-drug interactions (DDI),
better identification and prediction of administration of known DDIs in primary
and secondary care could reduce the number of patients seeking urgent care in
hospitals, resulting in substantial savings for health systems worldwide along
with better public health. However, current DDI prediction models are prone to
confounding biases along with either inaccurate or a lack of access to
longitudinal data from Electronic Health Records (EHR) and other drug
information such as FDA Adverse Event Reporting System (FAERS) which continue
to be the main barriers in measuring the prevalence of DDI and characterizing
the phenomenon in medical care. In this review, analytical models including
Label Propagation using drug side effect data and Supervised Learning DDI
Prediction model using Drug-Gene interactions (DGIs) data are discussed.
Improved identification of DDIs in both of these models compared to previous
versions are highlighted while limitations that include bias, inaccuracy, and
insufficient data are also assessed. A case study of Psoriasis DDI prediction
by DGI data using Random Forest Classifier was studied. Transfer Matrix
Recurrent Neural Networks (TM-RNN) that address the above limitations are
discussed in future works.
| [
{
"created": "Mon, 31 Aug 2020 21:16:41 GMT",
"version": "v1"
}
] | 2020-09-02 | [
[
"Wang",
"Ricky",
""
]
] | The use of multiple drugs accounts for almost 30% of all hospital admission and is the 5th leading cause of death in America. Since over 30% of all adverse drug events (ADEs) are thought to be caused by drug-drug interactions (DDI), better identification and prediction of administration of known DDIs in primary and secondary care could reduce the number of patients seeking urgent care in hospitals, resulting in substantial savings for health systems worldwide along with better public health. However, current DDI prediction models are prone to confounding biases along with either inaccurate or a lack of access to longitudinal data from Electronic Health Records (EHR) and other drug information such as FDA Adverse Event Reporting System (FAERS) which continue to be the main barriers in measuring the prevalence of DDI and characterizing the phenomenon in medical care. In this review, analytical models including Label Propagation using drug side effect data and Supervised Learning DDI Prediction model using Drug-Gene interactions (DGIs) data are discussed. Improved identification of DDIs in both of these models compared to previous versions are highlighted while limitations that include bias, inaccuracy, and insufficient data are also assessed. A case study of Psoriasis DDI prediction by DGI data using Random Forest Classifier was studied. Transfer Matrix Recurrent Neural Networks (TM-RNN) that address the above limitations are discussed in future works. |
q-bio/0310018 | Lutz Brusch | Lutz Brusch, Wolfram Lorenz, Michal Or-Guil, Markus B\"ar and Ursula
Kummer | Fold-Hopf Bursting in a Model for Calcium Signal Transduction | 13 pages, 5 figures | Z. Phys. Chem. 216, 487-497 (2002) | null | null | q-bio.MN | null | We study a recent model for calcium signal transduction. This model displays
spiking, bursting and chaotic oscillations in accordance with experimental
results. We calculate bifurcation diagrams and study the bursting behaviour in
detail. This behaviour is classified according to the dynamics of separated
slow and fast subsystems. It is shown to be of the Fold-Hopf type, a type which
was previously only described in the context of neuronal systems, but not in
the context of signal transduction in the cell.
| [
{
"created": "Wed, 15 Oct 2003 14:07:46 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Brusch",
"Lutz",
""
],
[
"Lorenz",
"Wolfram",
""
],
[
"Or-Guil",
"Michal",
""
],
[
"Bär",
"Markus",
""
],
[
"Kummer",
"Ursula",
""
]
] | We study a recent model for calcium signal transduction. This model displays spiking, bursting and chaotic oscillations in accordance with experimental results. We calculate bifurcation diagrams and study the bursting behaviour in detail. This behaviour is classified according to the dynamics of separated slow and fast subsystems. It is shown to be of the Fold-Hopf type, a type which was previously only described in the context of neuronal systems, but not in the context of signal transduction in the cell. |
2208.11783 | Gregory Pearcey | Gregory EP Pearcey and W Zev Rymer | Population recordings of human motor units often display 'onion skin'
discharge patterns -- implications for voluntary motor control | 24 pages, 5 figures in text | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Over the past two decades, there has been a radical transformation in our
ability to extract useful biological signals from the surface electromyogram
(EMG). Advances in EMG electrode design and signal processing techniques have
resulted in an extraordinary capacity to identify motor unit spike trains from
the surface of a muscle. These EMG grid, or high-density surface EMG (HDsEMG),
recordings now provide accurate depictions of as many as 20-30 motor unit spike
trains simultaneously during isometric contractions, even at high forces. Such
multi-unit recordings often display an unexpected feature known as onion skin
behavior, in which multiple motor unit spike trains show essentially parallel
and organized increases in discharge rate with increases in voluntary force,
such that the earliest recruited units reach the highest discharge rates, while
higher threshold units display more modest rate increases. This sequence
results in an orderly pattern of discharge resembling the layers of an onion,
in which discharge rate trajectories stay largely parallel and rarely cross.
Our objective in this review is to explain why this pattern of discharge rates
is unexpected, why it does not accurately reflect our current understanding of
motoneuron electrophysiology, and why it may potentially lead to unpredicted
disruption in muscle force generation. This review is aimed at the practicing
clinician, or the clinician scientist. More advanced descriptions of potential
electrophysiological mechanisms associated with onion skin characteristics
targeting the research scientist will be provided as reference material.
| [
{
"created": "Wed, 24 Aug 2022 21:56:04 GMT",
"version": "v1"
}
] | 2022-08-26 | [
[
"Pearcey",
"Gregory EP",
""
],
[
"Rymer",
"W Zev",
""
]
] | Over the past two decades, there has been a radical transformation in our ability to extract useful biological signals from the surface electromyogram (EMG). Advances in EMG electrode design and signal processing techniques have resulted in an extraordinary capacity to identify motor unit spike trains from the surface of a muscle. These EMG grid, or high-density surface EMG (HDsEMG), recordings now provide accurate depictions of as many as 20-30 motor unit spike trains simultaneously during isometric contractions, even at high forces. Such multi-unit recordings often display an unexpected feature known as onion skin behavior, in which multiple motor unit spike trains show essentially parallel and organized increases in discharge rate with increases in voluntary force, such that the earliest recruited units reach the highest discharge rates, while higher threshold units display more modest rate increases. This sequence results in an orderly pattern of discharge resembling the layers of an onion, in which discharge rate trajectories stay largely parallel and rarely cross. Our objective in this review is to explain why this pattern of discharge rates is unexpected, why it does not accurately reflect our current understanding of motoneuron electrophysiology, and why it may potentially lead to unpredicted disruption in muscle force generation. This review is aimed at the practicing clinician, or the clinician scientist. More advanced descriptions of potential electrophysiological mechanisms associated with onion skin characteristics targeting the research scientist will be provided as reference material. |
1308.3584 | Serguei Saavedra | Serguei Saavedra, Rudolf P. Rohr, Vasilis Dakos, Jordi Bascompte | Estimating the tolerance of species to the effects of global
environmental change | Nature Communications 4, Article number: 2350, (2013) | null | 10.1038/ncomms3350 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Global environmental change is affecting species distribution and their
interactions with other species. In particular, the main drivers of
environmental change strongly affect the strength of interspecific interactions
with considerable consequences to biodiversity. However, extrapolating the
effects observed on pair-wise interactions to entire ecological networks is
challenging. Here we propose a framework to estimate the tolerance to changes
in the strength of mutualistic interaction that species in mutualistic networks
can sustain before becoming extinct. We identify the scenarios where generalist
species can be the least tolerant. We show that the least tolerant species
across different scenarios do not appear to have uniquely common
characteristics. Species tolerance is extremely sensitive to the direction of
change in the strength of mutualistic interaction, as well as to the observed
mutualistic trade-offs between the number of partners and the strength of the
interactions.
| [
{
"created": "Fri, 16 Aug 2013 09:33:41 GMT",
"version": "v1"
}
] | 2013-08-19 | [
[
"Saavedra",
"Serguei",
""
],
[
"Rohr",
"Rudolf P.",
""
],
[
"Dakos",
"Vasilis",
""
],
[
"Bascompte",
"Jordi",
""
]
] | Global environmental change is affecting species distribution and their interactions with other species. In particular, the main drivers of environmental change strongly affect the strength of interspecific interactions with considerable consequences to biodiversity. However, extrapolating the effects observed on pair-wise interactions to entire ecological networks is challenging. Here we propose a framework to estimate the tolerance to changes in the strength of mutualistic interaction that species in mutualistic networks can sustain before becoming extinct. We identify the scenarios where generalist species can be the least tolerant. We show that the least tolerant species across different scenarios do not appear to have uniquely common characteristics. Species tolerance is extremely sensitive to the direction of change in the strength of mutualistic interaction, as well as to the observed mutualistic trade-offs between the number of partners and the strength of the interactions. |
2111.12989 | Ali Amiryousefi | Ali Amiryousefi | SPAGETI: Stabilizing Phylogenetic Assessment with Gene Evolutionary Tree
Indices | null | null | null | null | q-bio.PE stat.AP | http://creativecommons.org/licenses/by/4.0/ | The standard approach to estimate species trees is to align a selected set of
genes, concatenate the alignments and then estimate a consensus tree. However,
individual genes contain differing levels of evolutionary information, either
supporting or conflicting with the consensus. Based on individual gene
evolutionary trees, a recent study has demonstrated that this approach may
result in incorrect solutions and developed the internode certainty (IC)
heuristic for estimating the confidence of splits made on the consensus tree.
Although an improvement, this heuristic neglects the differing rates of
molecular evolution in individual genes. Here I develop an improved version of
this method such that each gene is proportionally weighted based on its overall
signal and specifically with the imbalanced signal for each node represented
with gene tree.
| [
{
"created": "Thu, 25 Nov 2021 09:52:48 GMT",
"version": "v1"
}
] | 2021-11-29 | [
[
"Amiryousefi",
"Ali",
""
]
] | The standard approach to estimate species trees is to align a selected set of genes, concatenate the alignments and then estimate a consensus tree. However, individual genes contain differing levels of evolutionary information, either supporting or conflicting with the consensus. Based on individual gene evolutionary tree, a recent study has demonstrated that this approach may result in incorrect solutions and developed the internode certainty (IC) heuristic for estimating the confidence of splits made on the consensus tree. Although an improvement, this heuristic neglects the differing rates of molecular evolution in individual genes. Here I develop an improved version of this method such that each gene is proportionally weighted based on its overall signal and specifically with the imbalanced signal for each node represented with gene tree. |
1804.10114 | Alexey Shvets | Maria P. Kochugaeva, Alexey A. Shvets and Anatoly B. Kolomeisky | Kinetics of Protein-DNA Interactions: First-Passage Analysis | 17 pages; 7 figures | null | null | null | q-bio.SC | http://creativecommons.org/licenses/by/4.0/ | All living systems can function only far away from equilibrium, and for this
reason chemical kinetic methods are critically important for uncovering the
mechanisms of biological processes. Here we present a new theoretical method of
investigating dynamics of protein-DNA interactions, which govern all major
biological processes. It is based on a first-passage analysis of biochemical
and biophysical transitions, and it provides a fully analytic description of
the processes. Our approach is explained for the case of a single protein
searching for a specific binding site on DNA. In addition, the application of
the method to investigations of the effect of DNA sequence heterogeneity, and
the role of multiple targets and traps in the protein search dynamics are
discussed.
| [
{
"created": "Thu, 26 Apr 2018 15:34:57 GMT",
"version": "v1"
}
] | 2018-04-27 | [
[
"Kochugaeva",
"Maria P.",
""
],
[
"Shvets",
"Alexey A.",
""
],
[
"Kolomeisky",
"Anatoly B.",
""
]
] | All living systems can function only far away from equilibrium, and for this reason chemical kinetic methods are critically important for uncovering the mechanisms of biological processes. Here we present a new theoretical method of investigating dynamics of protein-DNA interactions, which govern all major biological processes. It is based on a first-passage analysis of biochemical and biophysical transitions, and it provides a fully analytic description of the processes. Our approach is explained for the case of a single protein searching for a specific binding site on DNA. In addition, the application of the method to investigations of the effect of DNA sequence heterogeneity, and the role multiple targets and traps in the protein search dynamics are discussed. |
1611.08956 | Amir Toor | V Koparde, B Abdul Razzaq, T Suntum, R Sabo, A Scalora, M Serrano, M
Jameson-Lee, C Hall, D Kobulnicky, N Sheth, J Sampson, C Roberts, G Buck, M
Neale, A Toor | Dynamical System Modeling to Simulate Donor T Cell Response to Whole
Exome Sequencing-Derived Recipient Peptides: Understanding Randomness in
Clinical Outcomes Following Stem Cell Transplantation | null | PLoS One. 2017 Dec 1;12(12):e0187771 | 10.1371/journal.pone.0187771 | null | q-bio.QM q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Alloreactivity following stem cell transplantation (SCT) is difficult to
predict in patients undergoing transplantation from HLA matched donors. In this
study we performed whole exome sequencing of SCT donor-recipient pairs (DRP).
This allowed determination of the entire library of alloreactive peptide sequences
which would bind HLA class I molecules in each DRP. Utilizing the HLA binding
affinity (IC50) and tissue expression levels of the parent proteins, an
aggregate donor T cell response to the recipient alloreactive peptides was
calculated using a vector-operator dynamical system model. Marked variability
in the simulated CD8+ T cell responses was observed in all the donor recipient
pairs.
| [
{
"created": "Mon, 28 Nov 2016 02:02:40 GMT",
"version": "v1"
}
] | 2019-12-17 | [
[
"Koparde",
"V",
""
],
[
"Razzaq",
"B Abdul",
""
],
[
"Suntum",
"T",
""
],
[
"Sabo",
"R",
""
],
[
"Scalora",
"A",
""
],
[
"Serrano",
"M",
""
],
[
"Jameson-Lee",
"M",
""
],
[
"Hall",
"C",
""
],
[
"Kobulnicky",
"D",
""
],
[
"Sheth",
"N",
""
],
[
"Sampson",
"J",
""
],
[
"Roberts",
"C",
""
],
[
"Buck",
"G",
""
],
[
"Neale",
"M",
""
],
[
"Toor",
"A",
""
]
] | Alloreactivity following stem cell transplantation (SCT) is difficult to predict in patients undergoing transplantation from HLA matched donors. In this study we performed whole exome sequencing of SCT donor-recipient pairs (DRP). This allowed determination of entire library of alloreactive peptide sequences which would bind HLA class I molecules in each DRP. Utilizing the HLA binding affinity (IC50) and tissue expression levels of the parent proteins, an aggregate donor T cell response to the recipient alloreactive peptides was calculated using a vector-operator dynamical system model. Marked variability in the simulated CD8+ T cell responses was observed in all the donor recipient pairs. |
2208.10119 | Jhoirene Clemente | Jhoirene B. Clemente, Gabriel Besas, Jerick Callado, John Erol
Evangelista | Predicting the Biological Classification of Cell-Cycle Regulated Genes
of Saccharomyces cerevisiae using Community Detection Algorithms on Gene
Co-expression Networks | 11 pages, Philippine Computing Journal Vol 16 No. 1 | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | The conventional approach for analyzing gene expression data involves
clustering algorithms. Cluster analyses provide partitioning of the set of
genes that can predict biological classification based on its similarity in
n-dimensional space. In this study, we investigate whether network analysis
will provide an advantage over the traditional approach. We identify the
advantages and disadvantages of using the value-based and the rank-based
construction in creating a graph representation of the original gene-expression
data in a time-series format. We tested four community detection algorithms,
namely, the Clauset-Newman-Moore (greedy), Louvain, Leiden, and Girvan-Newman
algorithms in predicting the 5 functional groups of genes. We used the Adjusted
Rand Index to assess the quality of the predicted communities with respect to
the biological classifications. We showed that Girvan-Newman outperforms the 3
modularity-based algorithms in both value-based and rank-based constructed
graphs. Moreover, we also show that when compared to the conventional
clustering algorithms such as K-means, Spectral, Birch, and Agglomerative
algorithms, we obtained a higher ARI with Girvan-Newman. This study also
provides a tool for graph construction, visualization, and community detection
for further analysis of gene expression data.
| [
{
"created": "Mon, 22 Aug 2022 07:49:19 GMT",
"version": "v1"
}
] | 2022-08-23 | [
[
"Clemente",
"Jhoirene B.",
""
],
[
"Besas",
"Gabriel",
""
],
[
"Callado",
"Jerick",
""
],
[
"Evangelista",
"John Erol",
""
]
] | The conventional approach for analyzing gene expression data involves clustering algorithms. Cluster analyses provide partitioning of the set of genes that can predict biological classification based on its similarity in n-dimensional space. In this study, we investigate whether network analysis will provide an advantage over the traditional approach. We identify the advantages and disadvantages of using the value-based and the rank-based construction in creating a graph representation of the original gene-expression data in a time-series format. We tested four community detection algorithms, namely, the Clauset-Newman-Moore (greedy), Louvain, Leiden, and Girvan-Newman algorithms in predicting the 5 functional groups of genes. We used the Adjusted Rand Index to assess the quality of the predicted communities with respect to the biological classifications. We showed that Girvan-Newman outperforms the 3 modularity-based algorithms in both value-based and ranked-based constructed graphs. Moreover, we also show that when compared to the conventional clustering algorithms such as K-means, Spectral, Birch, and Agglomerative algorithms, we obtained a higher ARI with Girvan-Newman. This study also provides a tool for graph construction, visualization, and community detection for further analysis of gene expression data. |
2007.08002 | Lionel Roques | Lionel Roques, Olivier Bonnefon, Virgile Baudrot, Samuel Soubeyrand,
Henri Berestycki | A parsimonious model for spatial transmission and heterogeneity in the
COVID-19 propagation | null | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Raw data on the cumulative number of deaths at a country level generally
indicate a spatially variable distribution of the incidence of COVID-19
disease. An important issue is to determine whether this spatial pattern is a
consequence of environmental heterogeneities, such as the climatic conditions,
during the course of the outbreak. Another fundamental issue is to understand
the spatial spreading of COVID-19. To address these questions, we consider four
candidate epidemiological models with varying complexity in terms of initial
conditions, contact rates and non-local transmissions, and we fit them to
French mortality data with a mixed probabilistic-ODE approach. Using standard
statistical criteria, we select the model with non-local transmission
corresponding to a diffusion on the graph of counties that depends on the
geographic proximity, with time-dependent contact rate and spatially constant
parameters. This original spatially parsimonious model suggests that in a
geographically middle size centralized country such as France, once the
epidemic is established, the effect of global processes such as restriction
policies, sanitary measures and social distancing overwhelms the effect of
local factors. Additionally, this modeling approach reveals the latent
epidemiological dynamics including the local level of immunity, and allows us
to evaluate the role of non-local interactions on the future spread of the
disease. In view of its theoretical and numerical simplicity and its ability to
accurately track the COVID-19 epidemic curves, the framework we develop here,
in particular the non-local model and the associated estimation procedure, is
of general interest in studying spatial dynamics of epidemics.
| [
{
"created": "Wed, 15 Jul 2020 21:28:51 GMT",
"version": "v1"
},
{
"created": "Sat, 18 Jul 2020 08:25:52 GMT",
"version": "v2"
}
] | 2020-07-21 | [
[
"Roques",
"Lionel",
""
],
[
"Bonnefon",
"Olivier",
""
],
[
"Baudrot",
"Virgile",
""
],
[
"Soubeyrand",
"Samuel",
""
],
[
"Berestycki",
"Henri",
""
]
] | Raw data on the cumulative number of deaths at a country level generally indicate a spatially variable distribution of the incidence of COVID-19 disease. An important issue is to determine whether this spatial pattern is a consequence of environmental heterogeneities, such as the climatic conditions, during the course of the outbreak. Another fundamental issue is to understand the spatial spreading of COVID-19. To address these questions, we consider four candidate epidemiological models with varying complexity in terms of initial conditions, contact rates and non-local transmissions, and we fit them to French mortality data with a mixed probabilistic-ODE approach. Using standard statistical criteria, we select the model with non-local transmission corresponding to a diffusion on the graph of counties that depends on the geographic proximity, with time-dependent contact rate and spatially constant parameters. This original spatially parsimonious model suggests that in a geographically middle size centralized country such as France, once the epidemic is established, the effect of global processes such as restriction policies, sanitary measures and social distancing overwhelms the effect of local factors. Additionally, this modeling approach reveals the latent epidemiological dynamics including the local level of immunity, and allows us to evaluate the role of non-local interactions on the future spread of the disease. In view of its theoretical and numerical simplicity and its ability to accurately track the COVID-19 epidemic curves, the framework we develop here, in particular the non-local model and the associated estimation procedure, is of general interest in studying spatial dynamics of epidemics. |
2003.08896 | Marco Hartl | Marco Hartl (1,2), Diego F. Bedoya-R\'ios (3), Marta
Fern\'andez-Gatell (1), Diederik P.L. Rousseau (2), Gijs Du Laing (2),
Marianna Garf\'i (1), Jaume Puigagut (1) ((1) GEMMA - Environmental
Engineering and Microbiology Research Group, Department of Civil and
Environmental Engineering, Universitat Polit\`ecnica de
Catalunya-BarcelonaTech, Barcelona, Spain, (2) Department of Green Chemistry
and Technology, Faculty of Bioscience Engineering, Ghent University, Gent,
Belgium (3) Grupo Ciencia e Ingenier\'ia del Agua y el Ambiente, Facultad de
Ingenier\'ia, Pontificia Universidad Javeriana, Bogot\'a, Colombia) | Contaminants removal and bacterial activity enhancement along the flow
path of constructed wetland microbial fuel cells | 39 pages, 8 Figures | null | 10.1016/j.scitotenv.2018.10.234 | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Microbial fuel cells implemented in constructed wetlands (CW-MFCs), albeit a
relatively new technology still under study, have been shown to improve treatment
efficiency of urban wastewater. So far the vast majority of CW-MFC systems
investigated were designed as lab-scale systems working under rather
unrealistic hydraulic conditions using synthetic wastewater. The main objective
of this work was to quantify CW-MFCs performance operated under different
conditions in a more realistic setup using meso-scale systems with horizontal
flow fed with real urban wastewater. Operational conditions tested were organic
loading rate (4.9+-1.6, 6.7+-1.4 and 13.6+-3.2 g COD/m2.day) and hydraulic
regime (continuous vs intermittent feeding) as well as different electrical
connections: CW control (conventional CW without electrodes), open-circuit
CW-MFC (external circuit between anode and cathode not connected) and
closed-circuit CW-MFC (external circuit connected). Eight horizontal subsurface
flow CWs were operated for about four months. Each wetland consisted of a PVC
reservoir of 0.193 m2 filled with 4/8 mm granitic riverine gravel. All wetlands
had intermediate sampling points for gravel and interstitial liquid sampling.
The CW-MFCs were designed as three MFCs incorporated one after the other along
the flow path of the CWs. Results showed no significant differences between
tested organic loading rates, hydraulic regimes or electrical connections,
however, on average, systems operated in closed-circuit CW-MFC mode under
continuous flow outperformed the other experimental conditions. Closed-circuit
CW-MFC compared to conventional CW control systems showed around 5% and 22%
higher COD and ammonium removal, respectively. Correspondingly, overall
bacteria activity, as measured by the fluorescein diacetate technique, was
higher (4% to 34%) in closed-circuit systems when compared to CW control
systems.
| [
{
"created": "Thu, 19 Mar 2020 16:53:45 GMT",
"version": "v1"
}
] | 2020-03-20 | [
[
"Hartl",
"Marco",
""
],
[
"Bedoya-Ríos",
"Diego F.",
""
],
[
"Fernández-Gatell",
"Marta",
""
],
[
"Rousseau",
"Diederik P. L.",
""
],
[
"Laing",
"Gijs Du",
""
],
[
"Garfí",
"Marianna",
""
],
[
"Puigagut",
"Jaume",
""
]
] | Microbial fuel cells implemented in constructed wetlands (CW-MFCs), albeit a relatively new technology still under study, have shown to improve treatment efficiency of urban wastewater. So far the vast majority of CW-MFC systems investigated were designed as lab-scale systems working under rather unrealistic hydraulic conditions using synthetic wastewater. The main objective of this work was to quantify CW-MFCs performance operated under different conditions in a more realistic setup using meso-scale systems with horizontal flow fed with real urban wastewater. Operational conditions tested were organic loading rate (4.9+-1.6, 6.7+-1.4 and 13.6+-3.2 g COD/m2.day) and hydraulic regime (continuous vs intermittent feeding) as well as different electrical connections: CW control (conventional CW without electrodes), open-circuit CW-MFC (external circuit between anode and cathode not connected) and closed-circuit CW-MFC (external circuit connected). Eight horizontal subsurface flow CWs were operated for about four months. Each wetland consisted of a PVC reservoir of 0.193 m2 filled with 4/8 mm granitic riverine gravel. All wetlands had intermediate sampling points for gravel and interstitial liquid sampling. The CW-MFCs were designed as three MFCs incorporated one after the other along the flow path of the CWs. Results showed no significant differences between tested organic loading rates, hydraulic regimes or electrical connections, however, on average, systems operated in closed-circuit CW-MFC mode under continuous flow outperformed the other experimental conditions. Closed-circuit CW-MFC compared to conventional CW control systems showed around 5% and 22% higher COD and ammonium removal, respectively. Correspondingly, overall bacteria activity, as measured by the fluorescein diacetate technique, was higher (4% to 34%) in closed-circuit systems when compared to CW control systems. |
2309.07096 | James Ruffle | James K Ruffle, Robert J Gray, Samia Mohinta, Guilherme Pombo,
Chaitanya Kaul, Harpreet Hyare, Geraint Rees, Parashkev Nachev | Computational limits to the legibility of the imaged human brain | 38 pages, 6 figures, 1 table, 2 supplementary figures, 1
supplementary table | null | 10.1016/j.neuroimage.2024.120600 | null | q-bio.NC cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Our knowledge of the organisation of the human brain at the population-level
is yet to translate into power to predict functional differences at the
individual-level, limiting clinical applications, and casting doubt on the
generalisability of inferred mechanisms. It remains unknown whether the
difficulty arises from the absence of individuating biological patterns within
the brain, or from limited power to access them with the models and compute at
our disposal. Here we comprehensively investigate the resolvability of such
patterns with data and compute at unprecedented scale. Across 23 810 unique
participants from UK Biobank, we systematically evaluate the predictability of
25 individual biological characteristics, from all available combinations of
structural and functional neuroimaging data. Over 4526 GPU hours of
computation, we train, optimize, and evaluate out-of-sample 700 individual
predictive models, including fully-connected feed-forward neural networks of
demographic, psychological, serological, chronic disease, and functional
connectivity characteristics, and both uni- and multi-modal 3D convolutional
neural network models of macro- and micro-structural brain imaging. We find a
marked discrepancy between the high predictability of sex (balanced accuracy
99.7%), age (mean absolute error 2.048 years, R2 0.859), and weight (mean
absolute error 2.609Kg, R2 0.625), for which we set new state-of-the-art
performance, and the surprisingly low predictability of other characteristics.
Neither structural nor functional imaging predicted psychology better than the
coincidence of chronic disease (p<0.05). Serology predicted chronic disease
(p<0.05) and was best predicted by it (p<0.001), followed by structural
neuroimaging (p<0.05). Our findings suggest either more informative imaging or
more powerful models are needed to decipher individual level characteristics
from the human brain.
| [
{
"created": "Wed, 23 Aug 2023 12:37:13 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Nov 2023 13:50:54 GMT",
"version": "v2"
},
{
"created": "Tue, 12 Mar 2024 16:30:34 GMT",
"version": "v3"
},
{
"created": "Tue, 2 Apr 2024 19:12:46 GMT",
"version": "v4"
}
] | 2024-04-04 | [
[
"Ruffle",
"James K",
""
],
[
"Gray",
"Robert J",
""
],
[
"Mohinta",
"Samia",
""
],
[
"Pombo",
"Guilherme",
""
],
[
"Kaul",
"Chaitanya",
""
],
[
"Hyare",
"Harpreet",
""
],
[
"Rees",
"Geraint",
""
],
[
"Nachev",
"Parashkev",
""
]
] | Our knowledge of the organisation of the human brain at the population-level is yet to translate into power to predict functional differences at the individual-level, limiting clinical applications, and casting doubt on the generalisability of inferred mechanisms. It remains unknown whether the difficulty arises from the absence of individuating biological patterns within the brain, or from limited power to access them with the models and compute at our disposal. Here we comprehensively investigate the resolvability of such patterns with data and compute at unprecedented scale. Across 23 810 unique participants from UK Biobank, we systematically evaluate the predictability of 25 individual biological characteristics, from all available combinations of structural and functional neuroimaging data. Over 4526 GPU hours of computation, we train, optimize, and evaluate out-of-sample 700 individual predictive models, including fully-connected feed-forward neural networks of demographic, psychological, serological, chronic disease, and functional connectivity characteristics, and both uni- and multi-modal 3D convolutional neural network models of macro- and micro-structural brain imaging. We find a marked discrepancy between the high predictability of sex (balanced accuracy 99.7%), age (mean absolute error 2.048 years, R2 0.859), and weight (mean absolute error 2.609Kg, R2 0.625), for which we set new state-of-the-art performance, and the surprisingly low predictability of other characteristics. Neither structural nor functional imaging predicted psychology better than the coincidence of chronic disease (p<0.05). Serology predicted chronic disease (p<0.05) and was best predicted by it (p<0.001), followed by structural neuroimaging (p<0.05). Our findings suggest either more informative imaging or more powerful models are needed to decipher individual level characteristics from the human brain. |
1403.5414 | Thierry Rabilloud | Thierry Rabilloud (LCBM - UMR 5249), Pierre Lescuyer | The proteomic to biology inference, a frequently overlooked concern in
the interpretation of proteomic data: A plea for functional validation | null | PROTEOMICS 14, 2-3 (2014) 157-61 | 10.1002/pmic.201300413 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Proteomics will celebrate its 20th year in 2014. In this relatively short
period of time, it has invaded most areas of biology and its use will probably
continue to spread in the future. These two decades have seen a considerable
increase in the speed and sensitivity of protein identification and
characterization, even from complex samples. Indeed, what was a challenge
twenty years ago is now little more than a daily routine. Although not
completely over, the technological challenge now makes room to another
challenge, which is the best possible appraisal and exploitation of proteomic
data to draw the best possible conclusions from a biological point of view. The
point developed in this paper is that proteomic data are almost always
fragmentary. This means in turn that although better than an mRNA level, a
protein level is often insufficient to draw a valid conclusion from a
biological point of view, especially in a world where PTMs play such an
important role. This means in turn that transformation of proteomic data into
biological data requires an important intermediate layer of functional
validation, i.e. not merely the confirmation of protein abundance changes by
other methods, but a functional appraisal of the biological consequences of the
protein level changes highlighted by the proteomic screens.
| [
{
"created": "Fri, 21 Mar 2014 10:29:41 GMT",
"version": "v1"
}
] | 2014-03-24 | [
[
"Rabilloud",
"Thierry",
"",
"LCBM - UMR 5249"
],
[
"Lescuyer",
"Pierre",
""
]
] | Proteomics will celebrate its 20th year in 2014. In this relatively short period of time, it has invaded most areas of biology and its use will probably continue to spread in the future. These two decades have seen a considerable increase in the speed and sensitivity of protein identification and characterization, even from complex samples. Indeed, what was a challenge twenty years ago is now little more than a daily routine. Although not completely over, the technological challenge now makes room to another challenge, which is the best possible appraisal and exploitation of proteomic data to draw the best possible conclusions from a biological point of view. The point developed in this paper is that proteomic data are almost always fragmentary. This means in turn that although better than an mRNA level, a protein level is often insufficient to draw a valid conclusion from a biological point of view, especially in a world where PTMs play such an important role. This means in turn that transformation of proteomic data into biological data requires an important intermediate layer of functional validation, i.e. not merely the confirmation of protein abundance changes by other methods, but a functional appraisal of the biological consequences of the protein level changes highlighted by the proteomic screens. |
1212.0621 | Jasmine A. Nirody | Jasmine A. Nirody | Development of spatial coarse-to-fine processing in the visual pathway | 20 pages, 7 figures; substantial restructuring from previous version | null | 10.1186/1471-2202-14-S1-P294 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The sequential analysis of information in a coarse-to-fine manner is a
fundamental mode of processing in the visual pathway. Spatial frequency (SF)
tuning, arguably the most fundamental feature of spatial vision, provides
particular intuition within the coarse-to-fine framework: low spatial
frequencies convey global information about an image (e.g., general
orientation), while high spatial frequencies carry more detailed information
(e.g., edges). In this paper, we study the development of cortical spatial
frequency tuning. As feedforward input from the lateral geniculate nucleus
(LGN) has been shown to have significant influence on cortical coarse-to-fine
processing, we present a firing-rate based thalamocortical model which includes
both feedforward and feedback components. We analyze the relationship between
various model parameters (including cortical feedback strength) and responses.
We confirm the importance of the antagonistic relationship between the center
and surround responses in thalamic relay cell receptive fields (RFs), and
further characterize how specific structural LGN RF parameters affect cortical
coarse-to-fine processing. Our results also indicate that the effect of
cortical feedback on spatial frequency tuning is age-dependent: in particular,
cortical feedback more strongly affects coarse-to-fine processing in kittens
than in adults. We use our results to propose an experimentally testable
hypothesis for the function of the extensive feedback in the corticothalamic
circuit.
| [
{
"created": "Tue, 4 Dec 2012 06:17:36 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Jan 2013 01:19:05 GMT",
"version": "v2"
},
{
"created": "Tue, 16 Jul 2013 23:31:16 GMT",
"version": "v3"
}
] | 2020-06-17 | [
[
"Nirody",
"Jasmine A.",
""
]
] | The sequential analysis of information in a coarse-to-fine manner is a fundamental mode of processing in the visual pathway. Spatial frequency (SF) tuning, arguably the most fundamental feature of spatial vision, provides particular intuition within the coarse-to-fine framework: low spatial frequencies convey global information about an image (e.g., general orientation), while high spatial frequencies carry more detailed information (e.g., edges). In this paper, we study the development of cortical spatial frequency tuning. As feedforward input from the lateral geniculate nucleus (LGN) has been shown to have significant influence on cortical coarse-to-fine processing, we present a firing-rate based thalamocortical model which includes both feedforward and feedback components. We analyze the relationship between various model parameters (including cortical feedback strength) and responses. We confirm the importance of the antagonistic relationship between the center and surround responses in thalamic relay cell receptive fields (RFs), and further characterize how specific structural LGN RF parameters affect cortical coarse-to-fine processing. Our results also indicate that the effect of cortical feedback on spatial frequency tuning is age-dependent: in particular, cortical feedback more strongly affects coarse-to-fine processing in kittens than in adults. We use our results to propose an experimentally testable hypothesis for the function of the extensive feedback in the corticothalamic circuit. |
0804.2449 | Jianhua Xing | Jianhua Xing, Jing Chen | The Goldbeter-Koshland switch in the first-order region and its response
to dynamic disorder | 23 pages, 4 figures, accepted by PLOS ONE | PLOS ONE 3(5):e2140 (2008) | 10.1371/journal.pone.0002140 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In their classical work (Proc. Natl. Acad. Sci. USA, 1981, 78:6840-6844),
Goldbeter and Koshland mathematically analyzed a reversible covalent
modification system which is highly sensitive to the concentration of
effectors. Its signal-response curve appears sigmoidal, constituting a
biochemical switch. However, the switch behavior only emerges in the
"zero-order region", i.e. when the signal molecule concentration is much lower
than that of the substrate it modifies. In this work we showed that the
switching behavior can also occur under comparable concentrations of signals
and substrates, provided that the signal molecules catalyze the modification
reaction in cooperation. We also studied the effect of dynamic disorders on the
proposed biochemical switch, in which the enzymatic reaction rates, instead of
constant, appear as stochastic functions of time. We showed that the system is
robust to dynamic disorder at bulk concentration. But if the dynamic disorder
is quasi-static, large fluctuations of the switch response behavior may be
observed at low concentrations. Such fluctuation is relevant to many biological
functions. It can be reduced by either increasing the conformation
interconversion rate of the protein, or correlating the enzymatic reaction
rates in the network.
| [
{
"created": "Tue, 15 Apr 2008 18:24:15 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Xing",
"Jianhua",
""
],
[
"Chen",
"Jing",
""
]
] | In their classical work (Proc. Natl. Acad. Sci. USA, 1981, 78:6840-6844), Goldbeter and Koshland mathematically analyzed a reversible covalent modification system which is highly sensitive to the concentration of effectors. Its signal-response curve appears sigmoidal, constituting a biochemical switch. However, the switch behavior only emerges in the "zero-order region", i.e. when the signal molecule concentration is much lower than that of the substrate it modifies. In this work we showed that the switching behavior can also occur under comparable concentrations of signals and substrates, provided that the signal molecules catalyze the modification reaction in cooperation. We also studied the effect of dynamic disorders on the proposed biochemical switch, in which the enzymatic reaction rates, instead of constant, appear as stochastic functions of time. We showed that the system is robust to dynamic disorder at bulk concentration. But if the dynamic disorder is quasi-static, large fluctuations of the switch response behavior may be observed at low concentrations. Such fluctuation is relevant to many biological functions. It can be reduced by either increasing the conformation interconversion rate of the protein, or correlating the enzymatic reaction rates in the network. |
0812.0644 | David Eubanks | David A. Eubanks | Survival Strategies | 12 pages | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper addresses the theoretical conditions necessary for some subject of
study to survive forever. A probabilistic analysis leads to some prerequisite
conditions for preserving, say, electronic data indefinitely into the future.
The general analysis would also apply to a species, a civilization, or any
subject of study, as long as there is a definition of "survival" available. A
distinction emerges between two approaches to longevity: being many or being
smart. Natural selection relies on the first method, whereas a civilization,
individual, or other singular subject must rely on the latter. A computational
model of survival incorporates the idea of Kolmogorov-type complexity for both
strategies to illustrate the role of data analysis and information processing
that may be required. The survival-through-intelligence strategy has problems
when the subject can self-modify, which is illustrated with a link to Turing's
Halting Problem. The paper concludes with comments on the Fermi Paradox.
| [
{
"created": "Wed, 3 Dec 2008 03:18:40 GMT",
"version": "v1"
}
] | 2008-12-04 | [
[
"Eubanks",
"David A.",
""
]
] | This paper addresses the theoretical conditions necessary for some subject of study to survive forever. A probabilistic analysis leads to some prerequisite conditions for preserving, say, electronic data indefinitely into the future. The general analysis would also apply to a species, a civilization, or any subject of study, as long as there is a definition of "survival" available. A distinction emerges between two approaches to longevity: being many or being smart. Natural selection relies on the first method, whereas a civilization, individual, or other singular subject must rely on the latter. A computational model of survival incorporates the idea of Kolmogorov-type complexity for both strategies to illustrate the role of data analysis and information processing that may be required. The survival-through-intelligence strategy has problems when the subject can self-modify, which is illustrated with a link to Turing's Halting Problem. The paper concludes with comments on the Fermi Paradox. |
2102.11664 | Matthew Holden | Matthew H. Holden and Jakeb Lockyer | Poacher-population dynamics when legal trade of naturally deceased
organisms funds anti-poaching enforcement | Added reference to the peer-reviewed, final version of this paper in
the Journal of Theoretical Biology | Journal of Theoretical Biology. Volume 517, Article 110618 (2021) | 10.1016/j.jtbi.2021.110618 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Can a regulated, legal market for wildlife products protect species
threatened by poaching? It is one of the most controversial ideas in
biodiversity conservation. Perhaps the most convincing reason for legalizing
wildlife trade is that trade revenue could fund the protection and conservation
of poached species. In this paper, we examine the possible poacher-population
dynamic consequences of legal trade funding conservation. The model consists of
a manager scavenging carcasses for wildlife products, who then sells the
products, and directs a portion of the revenue towards funding anti-poaching
law enforcement. Through a global analysis of the model, we derive the critical
proportion of product the manager must scavenge, and the critical proportion of
trade revenue the manager must allocate towards increased enforcement, in order
for legal trade to lead to abundant long-term wildlife populations. We
illustrate how the model could inform management with parameter values derived
from the African elephant literature, under a hypothetical scenario where a
manager scavenges elephant carcasses to sell ivory. We find that there is a
large region of parameter space where populations go extinct under legal trade,
unless a significant portion of trade revenue is directed towards protecting
populations from poaching. The model is general and therefore can be used as a
starting point for exploring the consequences of funding many conservation
programs using wildlife trade revenue.
| [
{
"created": "Tue, 23 Feb 2021 12:30:53 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Mar 2021 21:58:26 GMT",
"version": "v2"
}
] | 2021-03-24 | [
[
"Holden",
"Matthew H.",
""
],
[
"Lockyer",
"Jakeb",
""
]
] | Can a regulated, legal market for wildlife products protect species threatened by poaching? It is one of the most controversial ideas in biodiversity conservation. Perhaps the most convincing reason for legalizing wildlife trade is that trade revenue could fund the protection and conservation of poached species. In this paper, we examine the possible poacher-population dynamic consequences of legal trade funding conservation. The model consists of a manager scavenging carcasses for wildlife products, who then sells the products, and directs a portion of the revenue towards funding anti-poaching law enforcement. Through a global analysis of the model, we derive the critical proportion of product the manager must scavenge, and the critical proportion of trade revenue the manager must allocate towards increased enforcement, in order for legal trade to lead to abundant long-term wildlife populations. We illustrate how the model could inform management with parameter values derived from the African elephant literature, under a hypothetical scenario where a manager scavenges elephant carcasses to sell ivory. We find that there is a large region of parameter space where populations go extinct under legal trade, unless a significant portion of trade revenue is directed towards protecting populations from poaching. The model is general and therefore can be used as a starting point for exploring the consequences of funding many conservation programs using wildlife trade revenue. |
2305.01821 | Yannik Sch\"alte | Yannik Sch\"alte, Fabian Fr\"ohlich, Paul J. Jost, Jakob Vanhoefer,
Dilan Pathirana, Paul Stapor, Polina Lakrisenko, Dantong Wang, Elba
Raim\'undez, Simon Merkt, Leonard Schmiester, Philipp St\"adter, Stephan
Grein, Erika Dudkin, Domagoj Doresic, Daniel Weindl, Jan Hasenauer | pyPESTO: A modular and scalable tool for parameter estimation for
dynamic models | null | null | 10.1093/bioinformatics/btad711 | null | q-bio.QM stat.CO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Mechanistic models are important tools to describe and understand biological
processes. However, they typically rely on unknown parameters, the estimation
of which can be challenging for large and complex systems. We present pyPESTO,
a modular framework for systematic parameter estimation, with scalable
algorithms for optimization and uncertainty quantification. While tailored to
ordinary differential equation problems, pyPESTO is broadly applicable to
black-box parameter estimation problems. Besides its own implementations, it
provides a unified interface to various popular simulation and inference
methods. pyPESTO is implemented in Python, open-source under a 3-Clause BSD
license. Code and documentation are available on GitHub
(https://github.com/icb-dcm/pypesto).
| [
{
"created": "Tue, 2 May 2023 23:09:42 GMT",
"version": "v1"
}
] | 2023-11-29 | [
[
"Schälte",
"Yannik",
""
],
[
"Fröhlich",
"Fabian",
""
],
[
"Jost",
"Paul J.",
""
],
[
"Vanhoefer",
"Jakob",
""
],
[
"Pathirana",
"Dilan",
""
],
[
"Stapor",
"Paul",
""
],
[
"Lakrisenko",
"Polina",
""
],
[
"Wang",
"Dantong",
""
],
[
"Raimúndez",
"Elba",
""
],
[
"Merkt",
"Simon",
""
],
[
"Schmiester",
"Leonard",
""
],
[
"Städter",
"Philipp",
""
],
[
"Grein",
"Stephan",
""
],
[
"Dudkin",
"Erika",
""
],
[
"Doresic",
"Domagoj",
""
],
[
"Weindl",
"Daniel",
""
],
[
"Hasenauer",
"Jan",
""
]
] | Mechanistic models are important tools to describe and understand biological processes. However, they typically rely on unknown parameters, the estimation of which can be challenging for large and complex systems. We present pyPESTO, a modular framework for systematic parameter estimation, with scalable algorithms for optimization and uncertainty quantification. While tailored to ordinary differential equation problems, pyPESTO is broadly applicable to black-box parameter estimation problems. Besides its own implementations, it provides a unified interface to various popular simulation and inference methods. pyPESTO is implemented in Python, open-source under a 3-Clause BSD license. Code and documentation are available on GitHub (https://github.com/icb-dcm/pypesto). |
2303.04490 | Aaron Winn | Aaron Winn, Adam Konkol, Eleni Katifori | From localized to well-mixed: How commuter interactions shape disease
spread | 26 pages, 15 figures; made minor edits for clarity | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Interactions between commuting individuals can lead to large-scale spreading
of rumors, ideas, or disease, even though the commuters have no net
displacement. The emergent dynamics depend crucially on the commuting
distribution of a population, that is how the probability to travel to a
destination decays with distance from home. Applying this idea to epidemics, we
will demonstrate the qualitatively different infection dynamics emerging from
populations with different commuting distributions. If the commuting
distribution is exponentially localized, we recover a reaction-diffusion system
and observe Fisher waves traveling at a speed proportional to the
characteristic commuting distance. If the commuting distribution has a long
tail, then no finite-velocity waves can form, but we show that, in some
regimes, there is nontrivial spatial dependence that the well-mixed
approximation neglects. We discuss how, in all cases, an initial
dispersal-dominated regime can allow the disease to go undetected for a finite
amount of time before exponential growth takes over. This "offset time" is a
quantity of huge importance for epidemic surveillance and yet largely ignored
in the literature.
| [
{
"created": "Wed, 8 Mar 2023 10:29:06 GMT",
"version": "v1"
},
{
"created": "Sat, 27 May 2023 04:04:55 GMT",
"version": "v2"
}
] | 2023-05-30 | [
[
"Winn",
"Aaron",
""
],
[
"Konkol",
"Adam",
""
],
[
"Katifori",
"Eleni",
""
]
] | Interactions between commuting individuals can lead to large-scale spreading of rumors, ideas, or disease, even though the commuters have no net displacement. The emergent dynamics depend crucially on the commuting distribution of a population, that is how the probability to travel to a destination decays with distance from home. Applying this idea to epidemics, we will demonstrate the qualitatively different infection dynamics emerging from populations with different commuting distributions. If the commuting distribution is exponentially localized, we recover a reaction-diffusion system and observe Fisher waves traveling at a speed proportional to the characteristic commuting distance. If the commuting distribution has a long tail, then no finite-velocity waves can form, but we show that, in some regimes, there is nontrivial spatial dependence that the well-mixed approximation neglects. We discuss how, in all cases, an initial dispersal-dominated regime can allow the disease to go undetected for a finite amount of time before exponential growth takes over. This "offset time" is a quantity of huge importance for epidemic surveillance and yet largely ignored in the literature. |
1207.4048 | Mats Wallden Msc | G. Ullman, M. Wallden, E. G. Marklund, A. Mahmutovic, I. Razinkov, J.
Elf | Hi-throughput gene expression analysis at the level of single proteins
using a microfluidic turbidostat and automated cell tracking | Accepted in Philosophical Transactions B | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have developed a method combining microfluidics, time-lapse single-molecule microscopy and automated image analysis, allowing for the observation of more than 3000 complete cell cycles of exponentially growing
Escherichia coli cells per experiment. The method makes it possible to analyze
the rate of gene expression at the level of single proteins over the bacterial
cell cycle. We also demonstrate that it is possible to count the number of
non-specifically DNA binding LacI-Venus molecules using short excitation light
pulses. The transcription factors are localized on the nucleoids in the cell
and appear to be uniformly distributed on chromosomal DNA. An increase of the
expression of LacI is observed at the beginning of the cell cycle, possibly
because some gene copies are de-repressed as a result of partitioning
inequalities at cell division. Finally, we observe a size-growth rate uncertainty relation where cells living in rich media vary more in length at birth than in generation time, and the opposite is true for cells living in poorer media.
| [
{
"created": "Tue, 17 Jul 2012 16:15:30 GMT",
"version": "v1"
}
] | 2012-07-18 | [
[
"Ullman",
"G.",
""
],
[
"Wallden",
"M.",
""
],
[
"Marklund",
"E. G.",
""
],
[
"Mahmutovic",
"A.",
""
],
[
"Razinkov",
"I.",
""
],
[
"Elf",
"J.",
""
]
] | We have developed a method combining microfluidics, time-lapse single-molecule microscopy and automated image analysis, allowing for the observation of more than 3000 complete cell cycles of exponentially growing Escherichia coli cells per experiment. The method makes it possible to analyze the rate of gene expression at the level of single proteins over the bacterial cell cycle. We also demonstrate that it is possible to count the number of non-specifically DNA binding LacI-Venus molecules using short excitation light pulses. The transcription factors are localized on the nucleoids in the cell and appear to be uniformly distributed on chromosomal DNA. An increase of the expression of LacI is observed at the beginning of the cell cycle, possibly because some gene copies are de-repressed as a result of partitioning inequalities at cell division. Finally, we observe a size-growth rate uncertainty relation where cells living in rich media vary more in length at birth than in generation time, and the opposite is true for cells living in poorer media. |
2401.05611 | Janosch D\"ocker | Janosch D\"ocker, Simone Linz | On the existence of funneled orientations for classes of rooted
phylogenetic networks | null | null | null | null | q-bio.PE cs.CC cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, there has been a growing interest in the relationships between
unrooted and rooted phylogenetic networks. In this context, a natural question
to ask is if an unrooted phylogenetic network U can be oriented as a rooted
phylogenetic network such that the latter satisfies certain structural
properties. In a recent preprint, Bulteau et al. claim that it is computationally hard to decide if U has a funneled (resp. funneled tree-child) orientation when the internal vertices of U have degree at most 5. Unfortunately, the proof
of their funneled tree-child result appears to be incorrect. In this paper, we
present a corrected proof and show that hardness remains for other popular
classes of rooted phylogenetic networks such as funneled normal and funneled
reticulation-visible. Additionally, our results hold regardless of whether U is
rooted at an existing vertex or by subdividing an edge with the root.
| [
{
"created": "Thu, 11 Jan 2024 01:17:44 GMT",
"version": "v1"
},
{
"created": "Sun, 14 Jan 2024 21:46:15 GMT",
"version": "v2"
}
] | 2024-01-17 | [
[
"Döcker",
"Janosch",
""
],
[
"Linz",
"Simone",
""
]
] | Recently, there has been a growing interest in the relationships between unrooted and rooted phylogenetic networks. In this context, a natural question to ask is if an unrooted phylogenetic network U can be oriented as a rooted phylogenetic network such that the latter satisfies certain structural properties. In a recent preprint, Bulteau et al. claim that it is computationally hard to decide if U has a funneled (resp. funneled tree-child) orientation when the internal vertices of U have degree at most 5. Unfortunately, the proof of their funneled tree-child result appears to be incorrect. In this paper, we present a corrected proof and show that hardness remains for other popular classes of rooted phylogenetic networks such as funneled normal and funneled reticulation-visible. Additionally, our results hold regardless of whether U is rooted at an existing vertex or by subdividing an edge with the root. |
1309.5136 | Thierry Emonet | William Pontius, Michael W. Sneddon, Thierry Emonet | Adaptation dynamics in densely clustered chemoreceptors | Pontius W, Sneddon MW, Emonet T (2013) Adaptation Dynamics in Densely
Clustered Chemoreceptors. PLoS Comput Biol 9(9): e1003230.
doi:10.1371/journal.pcbi.1003230 | null | 10.1371/journal.pcbi.1003230 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In many sensory systems, transmembrane receptors are spatially organized in
large clusters. Such arrangement may facilitate signal amplification and the
integration of multiple stimuli. However, this organization likely also affects
the kinetics of signaling since the cytoplasmic enzymes that modulate the
activity of the receptors must localize to the cluster prior to receptor
modification. Here we examine how these spatial considerations shape signaling
dynamics at rest and in response to stimuli. As a model, we use the chemotaxis
pathway of Escherichia coli, a canonical system for the study of how organisms
sense, respond, and adapt to environmental stimuli. In bacterial chemotaxis,
adaptation is mediated by two enzymes that localize to the clustered receptors
and modulate their activity through methylation-demethylation. Using a novel
stochastic simulation, we show that distributive receptor methylation is
necessary for successful adaptation to stimulus and also leads to large
fluctuations in receptor activity in the steady state. These fluctuations arise
from noise in the number of localized enzymes combined with saturated
modification kinetics between localized enzymes and receptor substrate. An
analytical model explains how saturated enzyme kinetics and large fluctuations
can coexist with an adapted state robust to variation in the expression level
of the pathway constituents, a key requirement to ensure the functionality of
individual cells within a population. This contrasts with the well-mixed
covalent modification system studied by Goldbeter and Koshland in which mean
activity becomes ultrasensitive to protein abundances when the enzymes operate
at saturation. Large fluctuations in receptor activity have been quantified
experimentally. Here we clarify their mechanistic relationship with
well-studied aspects of the chemotaxis system, precise adaptation and
functional robustness.
| [
{
"created": "Fri, 20 Sep 2013 01:43:57 GMT",
"version": "v1"
}
] | 2014-03-05 | [
[
"Pontius",
"William",
""
],
[
"Sneddon",
"Michael W.",
""
],
[
"Emonet",
"Thierry",
""
]
] | In many sensory systems, transmembrane receptors are spatially organized in large clusters. Such arrangement may facilitate signal amplification and the integration of multiple stimuli. However, this organization likely also affects the kinetics of signaling since the cytoplasmic enzymes that modulate the activity of the receptors must localize to the cluster prior to receptor modification. Here we examine how these spatial considerations shape signaling dynamics at rest and in response to stimuli. As a model, we use the chemotaxis pathway of Escherichia coli, a canonical system for the study of how organisms sense, respond, and adapt to environmental stimuli. In bacterial chemotaxis, adaptation is mediated by two enzymes that localize to the clustered receptors and modulate their activity through methylation-demethylation. Using a novel stochastic simulation, we show that distributive receptor methylation is necessary for successful adaptation to stimulus and also leads to large fluctuations in receptor activity in the steady state. These fluctuations arise from noise in the number of localized enzymes combined with saturated modification kinetics between localized enzymes and receptor substrate. An analytical model explains how saturated enzyme kinetics and large fluctuations can coexist with an adapted state robust to variation in the expression level of the pathway constituents, a key requirement to ensure the functionality of individual cells within a population. This contrasts with the well-mixed covalent modification system studied by Goldbeter and Koshland in which mean activity becomes ultrasensitive to protein abundances when the enzymes operate at saturation. Large fluctuations in receptor activity have been quantified experimentally. Here we clarify their mechanistic relationship with well-studied aspects of the chemotaxis system, precise adaptation and functional robustness. |
2103.04376 | Jingwen Zhang | Qing Liu, Defu Yang, Jingwen Zhang, Ziming Wei, Guorong Wu, Minghan
Chen | Analyzing the Spatiotemporal Interaction and Propagation of ATN
Biomarkers in Alzheimer's Disease using Longitudinal Neuroimaging Data | 4 pages, 2 figures, to be published in IEEE ISBI 2021 | null | null | null | q-bio.QM q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Three major biomarkers: beta-amyloid (A), pathologic tau (T), and
neurodegeneration (N), are recognized as valid proxies for neuropathologic
changes of Alzheimer's disease. While there are extensive studies on
cerebrospinal fluid biomarkers (amyloid, tau), the spatial propagation pattern across the brain is missing and their interactive mechanisms with neurodegeneration
are still unclear. To this end, we aim to analyze the spatiotemporal
associations between ATN biomarkers using large-scale neuroimaging data. We
first investigate the temporal appearances of amyloid plaques, tau tangles, and
neuronal loss by modeling the longitudinal transition trajectories. Second, we
propose linear mixed-effects models to quantify the pathological interactions
and propagation of ATN biomarkers at each brain region. Our analysis of the
current data shows that there exists a temporal latency in the build-up of
amyloid to the onset of tau pathology and neurodegeneration. The propagation
pattern of amyloid can be characterized by its diffusion along the topological
brain network. Our models provide sufficient evidence that the progression of
pathological tau and neurodegeneration share a strong regional association,
which is different from amyloid.
| [
{
"created": "Sun, 7 Mar 2021 15:26:45 GMT",
"version": "v1"
}
] | 2021-03-09 | [
[
"Liu",
"Qing",
""
],
[
"Yang",
"Defu",
""
],
[
"Zhang",
"Jingwen",
""
],
[
"Wei",
"Ziming",
""
],
[
"Wu",
"Guorong",
""
],
[
"Chen",
"Minghan",
""
]
] | Three major biomarkers: beta-amyloid (A), pathologic tau (T), and neurodegeneration (N), are recognized as valid proxies for neuropathologic changes of Alzheimer's disease. While there are extensive studies on cerebrospinal fluid biomarkers (amyloid, tau), the spatial propagation pattern across the brain is missing and their interactive mechanisms with neurodegeneration are still unclear. To this end, we aim to analyze the spatiotemporal associations between ATN biomarkers using large-scale neuroimaging data. We first investigate the temporal appearances of amyloid plaques, tau tangles, and neuronal loss by modeling the longitudinal transition trajectories. Second, we propose linear mixed-effects models to quantify the pathological interactions and propagation of ATN biomarkers at each brain region. Our analysis of the current data shows that there exists a temporal latency in the build-up of amyloid to the onset of tau pathology and neurodegeneration. The propagation pattern of amyloid can be characterized by its diffusion along the topological brain network. Our models provide sufficient evidence that the progression of pathological tau and neurodegeneration share a strong regional association, which is different from amyloid. |
1603.03452 | Stuart Sevier | Stuart A. Sevier, David A. Kessler, Herbert Levine | Mechanical Bounds to Transcriptional Noise | null | null | 10.1073/pnas.1612651113 | null | q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the last several decades it has been increasingly recognized that
stochastic processes play a central role in transcription. Though many
stochastic effects have been explained, the source of transcriptional bursting
(one of the most well-known sources of stochasticity) has continued to evade
understanding. Recent results have pointed to mechanical feedback as the source
of transcriptional bursting but a reconciliation of this perspective with
preexisting views of transcriptional regulation is lacking. In this letter we
present a simple phenomenological model which is able to incorporate the
traditional view of gene expression within a framework with mechanical limits
to transcription. Our model explains the emergence of universal properties of
gene expression, wherein the lower limit of intrinsic noise necessarily rises
with mean expression level.
| [
{
"created": "Thu, 10 Mar 2016 21:23:20 GMT",
"version": "v1"
}
] | 2017-02-10 | [
[
"Sevier",
"Stuart A.",
""
],
[
"Kessler",
"David A.",
""
],
[
"Levine",
"Herbert",
""
]
] | Over the last several decades it has been increasingly recognized that stochastic processes play a central role in transcription. Though many stochastic effects have been explained, the source of transcriptional bursting (one of the most well-known sources of stochasticity) has continued to evade understanding. Recent results have pointed to mechanical feedback as the source of transcriptional bursting but a reconciliation of this perspective with preexisting views of transcriptional regulation is lacking. In this letter we present a simple phenomenological model which is able to incorporate the traditional view of gene expression within a framework with mechanical limits to transcription. Our model explains the emergence of universal properties of gene expression, wherein the lower limit of intrinsic noise necessarily rises with mean expression level. |
1112.2608 | Raffaella Burioni | Raffaella Burioni, Riccardo Scalco, Mario Casartelli | Rohlin Distance and the Evolution of Influenza A virus: Weak Attractors
and Precursors | 13 pages, 5+4 figures | PLoS ONE 6(12): e27924 (2011) | 10.1371/journal.pone.0027924 | null | q-bio.PE cond-mat.other cs.CE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The evolution of the hemagglutinin amino acid sequences of Influenza A virus is studied by a method based on an informational metric, originally introduced by Rohlin for partitions in abstract probability spaces. This metric does not
require any previous functional or syntactic knowledge about the sequences and
it is sensitive to the correlated variations in the characters' disposition. Its
efficiency is improved by algorithmic tools, designed to enhance the detection
of the novelty and to reduce the noise of useless mutations. We focus on the
USA data from 1993/94 to 2010/2011 for A/H3N2 and on USA data from 2006/07 to
2010/2011 for A/H1N1. We show that the clusterization of the distance matrix gives strong evidence for a structure of domains in the sequence space, acting
as weak attractors for the evolution, in very good agreement with the
epidemiological history of the virus. The structure proves very robust with
respect to the variations of the clusterization parameters, and extremely
coherent when restricting the observation window. The results suggest an
efficient strategy in the vaccine forecast, based on the presence of
"precursors" (or "buds") populating the most recent attractor.
| [
{
"created": "Mon, 12 Dec 2011 16:26:51 GMT",
"version": "v1"
}
] | 2015-06-03 | [
[
"Burioni",
"Raffaella",
""
],
[
"Scalco",
"Riccardo",
""
],
[
"Casartelli",
"Mario",
""
]
] | The evolution of the hemagglutinin amino acid sequences of Influenza A virus is studied by a method based on an informational metric, originally introduced by Rohlin for partitions in abstract probability spaces. This metric does not require any previous functional or syntactic knowledge about the sequences and it is sensitive to the correlated variations in the characters' disposition. Its efficiency is improved by algorithmic tools, designed to enhance the detection of the novelty and to reduce the noise of useless mutations. We focus on the USA data from 1993/94 to 2010/2011 for A/H3N2 and on USA data from 2006/07 to 2010/2011 for A/H1N1. We show that the clusterization of the distance matrix gives strong evidence for a structure of domains in the sequence space, acting as weak attractors for the evolution, in very good agreement with the epidemiological history of the virus. The structure proves very robust with respect to the variations of the clusterization parameters, and extremely coherent when restricting the observation window. The results suggest an efficient strategy in the vaccine forecast, based on the presence of "precursors" (or "buds") populating the most recent attractor. |
2302.14822 | Stephen Clark | Sean Tull, Razin A. Shaikh, Sara Sabrina Zemljic and Stephen Clark | Formalising and Learning a Quantum Model of Concepts | null | null | null | null | q-bio.NC cs.AI quant-ph | http://creativecommons.org/licenses/by/4.0/ | In this report we present a new modelling framework for concepts based on
quantum theory, and demonstrate how the conceptual representations can be
learned automatically from data. A contribution of the work is a thorough
category-theoretic formalisation of our framework. We claim that the use of
category theory, and in particular the use of string diagrams to describe
quantum processes, helps elucidate some of the most important features of our
quantum approach to concept modelling. Our approach builds upon Gardenfors'
classical framework of conceptual spaces, in which cognition is modelled
geometrically through the use of convex spaces, which in turn factorise in
terms of simpler spaces called domains. We show how concepts from the domains
of shape, colour, size and position can be learned from images of simple
shapes, where individual images are represented as quantum states and concepts
as quantum effects. Concepts are learned by a hybrid classical-quantum network
trained to perform concept classification, where the classical image processing
is carried out by a convolutional neural network and the quantum
representations are produced by a parameterised quantum circuit. We also use
discarding to produce mixed effects, which can then be used to learn concepts
which only apply to a subset of the domains, and show how entanglement
(together with discarding) can be used to capture interesting correlations
across domains. Finally, we consider the question of whether our quantum models
of concepts can be considered conceptual spaces in the Gardenfors sense.
| [
{
"created": "Tue, 7 Feb 2023 10:29:40 GMT",
"version": "v1"
}
] | 2023-03-01 | [
[
"Tull",
"Sean",
""
],
[
"Shaikh",
"Razin A.",
""
],
[
"Zemljic",
"Sara Sabrina",
""
],
[
"Clark",
"Stephen",
""
]
] | In this report we present a new modelling framework for concepts based on quantum theory, and demonstrate how the conceptual representations can be learned automatically from data. A contribution of the work is a thorough category-theoretic formalisation of our framework. We claim that the use of category theory, and in particular the use of string diagrams to describe quantum processes, helps elucidate some of the most important features of our quantum approach to concept modelling. Our approach builds upon Gardenfors' classical framework of conceptual spaces, in which cognition is modelled geometrically through the use of convex spaces, which in turn factorise in terms of simpler spaces called domains. We show how concepts from the domains of shape, colour, size and position can be learned from images of simple shapes, where individual images are represented as quantum states and concepts as quantum effects. Concepts are learned by a hybrid classical-quantum network trained to perform concept classification, where the classical image processing is carried out by a convolutional neural network and the quantum representations are produced by a parameterised quantum circuit. We also use discarding to produce mixed effects, which can then be used to learn concepts which only apply to a subset of the domains, and show how entanglement (together with discarding) can be used to capture interesting correlations across domains. Finally, we consider the question of whether our quantum models of concepts can be considered conceptual spaces in the Gardenfors sense. |
1110.5006 | Aleksandar Stojmirovi\'c | Aleksandar Stojmirovic, Alexander Bliskovsky and Yi-Kuo Yu | CytoSaddleSum: a functional enrichment analysis plugin for Cytoscape
based on sum-of-weights scores | 4 pages, 1 figure | Bioinformatics. 28(6):893-894. 2012 | 10.1093/bioinformatics/bts041 | null | q-bio.QM | http://creativecommons.org/licenses/publicdomain/ | Summary: CytoSaddleSum provides Cytoscape users with access to the functionality of SaddleSum, a functional enrichment tool based on sum-of-weights scores. It operates by querying SaddleSum locally (using the standalone
version) or remotely (through an HTTP request to a web server). The functional
enrichment results are shown as a term relationship network, where nodes
represent terms and edges show term relationships. Furthermore, query results
are written as Cytoscape attributes allowing easy saving, retrieval and
integration into network-based data analysis workflows.
Availability: www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads The source code
is placed in Public Domain.
| [
{
"created": "Sat, 22 Oct 2011 22:46:52 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Dec 2011 21:04:34 GMT",
"version": "v2"
}
] | 2012-04-20 | [
[
"Stojmirovic",
"Aleksandar",
""
],
[
"Bliskovsky",
"Alexander",
""
],
[
"Yu",
"Yi-Kuo",
""
]
] | Summary: CytoSaddleSum provides Cytoscape users with access to the functionality of SaddleSum, a functional enrichment tool based on sum-of-weights scores. It operates by querying SaddleSum locally (using the standalone version) or remotely (through an HTTP request to a web server). The functional enrichment results are shown as a term relationship network, where nodes represent terms and edges show term relationships. Furthermore, query results are written as Cytoscape attributes allowing easy saving, retrieval and integration into network-based data analysis workflows. Availability: www.ncbi.nlm.nih.gov/CBBresearch/Yu/downloads The source code is placed in Public Domain. |
1301.1608 | Hamidreza Chitsaz | Elmirasadat Forouzmand and Hamidreza Chitsaz | The RNA Newton Polytope and Learnability of Energy Parameters | null | null | null | null | q-bio.BM cs.CE cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite nearly four decades of research on RNA secondary structure and RNA-RNA interaction prediction, the accuracy of the state-of-the-art algorithms is still far from satisfactory. Researchers have proposed increasingly complex
energy models and improved parameter estimation methods in anticipation of
endowing their methods with enough power to solve the problem. The output has
disappointingly been only modest improvements, not matching the expectations.
Even recent massively featured machine learning approaches were not able to
break the barrier. In this paper, we introduce the notion of learnability of
the parameters of an energy model as a measure of its inherent capability. We
say that the parameters of an energy model are learnable iff there exists at
least one set of such parameters that renders every known RNA structure to date
the minimum free energy structure. We derive a necessary condition for the
learnability and give a dynamic programming algorithm to assess it. Our
algorithm computes the convex hull of the feature vectors of all feasible
structures in the ensemble of a given input sequence. Interestingly, that
convex hull coincides with the Newton polytope of the partition function as a
polynomial in energy parameters. We demonstrated the application of our theory
to a simple energy model consisting of a weighted count of A-U and C-G base
pairs. Our results show that this simple energy model satisfies the necessary
condition for less than one third of the input unpseudoknotted
sequence-structure pairs chosen from the RNA STRAND v2.0 database. For another
one third, the necessary condition is barely violated, which suggests that
augmenting this simple energy model with more features such as the Turner loops
may solve the problem. The necessary condition is severely violated for 8%,
which provides a small set of hard cases that require further investigation.
| [
{
"created": "Tue, 8 Jan 2013 17:43:08 GMT",
"version": "v1"
}
] | 2013-01-09 | [
[
"Forouzmand",
"Elmirasadat",
""
],
[
"Chitsaz",
"Hamidreza",
""
]
] | Despite nearly two scores of research on RNA secondary structure and RNA-RNA interaction prediction, the accuracy of the state-of-the-art algorithms is still far from satisfactory. Researchers have proposed increasingly complex energy models and improved parameter estimation methods in anticipation of endowing their methods with enough power to solve the problem. The output has disappointingly been only modest improvements, not matching the expectations. Even recent massively featured machine learning approaches were not able to break the barrier. In this paper, we introduce the notion of learnability of the parameters of an energy model as a measure of its inherent capability. We say that the parameters of an energy model are learnable iff there exists at least one set of such parameters that renders every known RNA structure to date the minimum free energy structure. We derive a necessary condition for the learnability and give a dynamic programming algorithm to assess it. Our algorithm computes the convex hull of the feature vectors of all feasible structures in the ensemble of a given input sequence. Interestingly, that convex hull coincides with the Newton polytope of the partition function as a polynomial in energy parameters. We demonstrated the application of our theory to a simple energy model consisting of a weighted count of A-U and C-G base pairs. Our results show that this simple energy model satisfies the necessary condition for less than one third of the input unpseudoknotted sequence-structure pairs chosen from the RNA STRAND v2.0 database. For another one third, the necessary condition is barely violated, which suggests that augmenting this simple energy model with more features such as the Turner loops may solve the problem. The necessary condition is severely violated for 8%, which provides a small set of hard cases that require further investigation. |
1708.07136 | Tianle Ma | Tianle Ma and Aidong Zhang | Integrate Multi-omic Data Using Affinity Network Fusion (ANF) for Cancer
Patient Clustering | submitted to BIBM2017 (https://muii.missouri.edu/bibm2017/) | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering cancer patients into subgroups and identifying cancer subtypes is
an important task in cancer genomics. Clustering based on comprehensive
multi-omic molecular profiling can often achieve better results than those
using a single data type, since each omic data type (representing one view of
patients) may contain complementary information. However, it is challenging to
integrate heterogeneous omic data types directly. Based on one popular method
-- Similarity Network Fusion (SNF), we presented Affinity Network Fusion (ANF)
in this paper, an "upgrade" of SNF with several advantages. Similar to SNF, ANF
treats each omic data type as one view of patients and learns a fused affinity
(transition) matrix for clustering. We applied ANF to a carefully processed
harmonized cancer dataset downloaded from GDC data portals consisting of 2193
patients, and generated promising results on clustering patients into correct
disease types. Our experimental results also demonstrated the power of feature
selection and transformation combined with using ANF in patient clustering.
Moreover, eigengap analysis suggests that the learned affinity matrices of four
cancer types using our proposed framework may have successfully captured
patient group structure and can be used for discovering unknown cancer
subtypes.
| [
{
"created": "Wed, 23 Aug 2017 18:04:07 GMT",
"version": "v1"
}
] | 2017-08-25 | [
[
"Ma",
"Tianle",
""
],
[
"Zhang",
"Aidong",
""
]
] | Clustering cancer patients into subgroups and identifying cancer subtypes is an important task in cancer genomics. Clustering based on comprehensive multi-omic molecular profiling can often achieve better results than those using a single data type, since each omic data type (representing one view of patients) may contain complementary information. However, it is challenging to integrate heterogeneous omic data types directly. Based on one popular method -- Similarity Network Fusion (SNF), we presented Affinity Network Fusion (ANF) in this paper, an "upgrade" of SNF with several advantages. Similar to SNF, ANF treats each omic data type as one view of patients and learns a fused affinity (transition) matrix for clustering. We applied ANF to a carefully processed harmonized cancer dataset downloaded from GDC data portals consisting of 2193 patients, and generated promising results on clustering patients into correct disease types. Our experimental results also demonstrated the power of feature selection and transformation combined with using ANF in patient clustering. Moreover, eigengap analysis suggests that the learned affinity matrices of four cancer types using our proposed framework may have successfully captured patient group structure and can be used for discovering unknown cancer subtypes. |
2009.10609 | Hua Cheng M.D. | Hua Cheng | Spinopelvic Anatomic Parameters Prediction Model of NSLBP based on data
mining | 12 pages, 4 figures, Funding by "Science and Technology Bureau
(Zhanjiang City) technological guidance special project(No. 2017A01014.)" | null | 10.33140/MCR.05.09.15 | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Objective: The purpose of this study is to perform analysis through the low
back pain open data set to predict the incidence of non-specific chronic low
back pain (NSLBP) to obtain a more accurate and convenient sagittal spinopelvic
parameter model. Methods: Logistic regression analysis and a multilayer
perceptron (MLP) algorithm were used to construct an NSLBP prediction model
based on the spinopelvic parameters from an open data source. Results:
Degree of spondylolisthesis (DS), Pelvic radius (PR), Sacral slope (SS), Pelvic
tilt (PT) are four predictors screened out by regression analysis that have
significant predictive power for the risk of NSLBP. The overall accuracy of the
equation prediction model is 85.8%. The MLP network algorithm determines that DS
is the most powerful predictor of NSLBP through more precise modeling. The
model has good predictive ability, with an accuracy of 95.2%. Conclusions: MLP
models play a more accurate role in the construction of predictive models. Computer
science is playing a greater role in helping precision medicine clinical
research.
| [
{
"created": "Mon, 14 Sep 2020 03:49:54 GMT",
"version": "v1"
}
] | 2022-01-21 | [
[
"Cheng",
"Hua",
""
]
] | Objective: The purpose of this study is to perform analysis through the low back pain open data set to predict the incidence of non-specific chronic low back pain (NSLBP) to obtain a more accurate and convenient sagittal spinopelvic parameter model. Methods: Logistic regression analysis and a multilayer perceptron (MLP) algorithm were used to construct an NSLBP prediction model based on the spinopelvic parameters from an open data source. Results: Degree of spondylolisthesis (DS), Pelvic radius (PR), Sacral slope (SS), Pelvic tilt (PT) are four predictors screened out by regression analysis that have significant predictive power for the risk of NSLBP. The overall accuracy of the equation prediction model is 85.8%. The MLP network algorithm determines that DS is the most powerful predictor of NSLBP through more precise modeling. The model has good predictive ability, with an accuracy of 95.2%. Conclusions: MLP models play a more accurate role in the construction of predictive models. Computer science is playing a greater role in helping precision medicine clinical research. |
1612.01735 | Kelin Xia | Kelin Xia and Guo-Wei Wei | A review of geometric, topological and graph theory apparatuses for the
modeling and analysis of biomolecular data | 76 pages,33 figures | null | null | null | q-bio.BM math.AT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Geometric, topological and graph theory modeling and analysis of biomolecules
are of essential importance in the conceptualization of molecular structure,
function, dynamics, and transport. On the one hand, geometric modeling provides
molecular surface and structural representation, and offers the basis for
molecular visualization, which is crucial for the understanding of molecular
structure and interactions. On the other hand, it bridges the gap between
molecular structural data and theoretical/mathematical models. Topological
analysis and modeling give rise to atomic critical points and connectivity, and
shed light on the intrinsic topological invariants such as independent
components (atoms), rings (pockets) and cavities. Graph theory analyzes
biomolecular interactions and reveals biomolecular structure-function
relationship. In this paper, we review certain geometric, topological and graph
theory apparatuses for biomolecular data modeling and analysis. These
apparatuses are categorized into discrete and continuous ones. For discrete
approaches, graph theory, Gaussian network model, anisotropic network model,
normal mode analysis, quasi-harmonic analysis, flexibility and rigidity index,
molecular nonlinear dynamics, spectral graph theory, and persistent homology
are discussed. For continuous mathematical tools, we present discrete to
continuum mapping, high dimensional persistent homology, biomolecular geometric
modeling, differential geometry theory of surfaces, curvature evaluation,
variational derivation of minimal molecular surfaces, atoms in molecule theory
and quantum chemical topology. Four new approaches, including analytical
minimal molecular surface, Hessian matrix eigenvalue map, curvature map and
virtual particle model, are introduced for the first time to bridge the gaps in
biomolecular modeling and analysis.
| [
{
"created": "Tue, 6 Dec 2016 10:24:37 GMT",
"version": "v1"
}
] | 2016-12-07 | [
[
"Xia",
"Kelin",
""
],
[
"Wei",
"Guo-Wei",
""
]
] | Geometric, topological and graph theory modeling and analysis of biomolecules are of essential importance in the conceptualization of molecular structure, function, dynamics, and transport. On the one hand, geometric modeling provides molecular surface and structural representation, and offers the basis for molecular visualization, which is crucial for the understanding of molecular structure and interactions. On the other hand, it bridges the gap between molecular structural data and theoretical/mathematical models. Topological analysis and modeling give rise to atomic critical points and connectivity, and shed light on the intrinsic topological invariants such as independent components (atoms), rings (pockets) and cavities. Graph theory analyzes biomolecular interactions and reveals biomolecular structure-function relationship. In this paper, we review certain geometric, topological and graph theory apparatuses for biomolecular data modeling and analysis. These apparatuses are categorized into discrete and continuous ones. For discrete approaches, graph theory, Gaussian network model, anisotropic network model, normal mode analysis, quasi-harmonic analysis, flexibility and rigidity index, molecular nonlinear dynamics, spectral graph theory, and persistent homology are discussed. For continuous mathematical tools, we present discrete to continuum mapping, high dimensional persistent homology, biomolecular geometric modeling, differential geometry theory of surfaces, curvature evaluation, variational derivation of minimal molecular surfaces, atoms in molecule theory and quantum chemical topology. Four new approaches, including analytical minimal molecular surface, Hessian matrix eigenvalue map, curvature map and virtual particle model, are introduced for the first time to bridge the gaps in biomolecular modeling and analysis. |
2203.14513 | Tong Wang | Zimeng Li, Shichao Zhu, Bin Shao, Tie-Yan Liu, Xiangxiang Zeng and
Tong Wang | Multi-View Substructure Learning for Drug-Drug Interaction Prediction | null | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Drug-drug interaction (DDI) prediction provides a drug combination strategy
for systemically effective treatment. Previous studies usually model drug
information constrained on a single view such as the drug itself, leading to
incomplete and noisy information, which limits the accuracy of DDI prediction.
In this work, we propose a novel multi-view drug substructure network for DDI
prediction (MSN-DDI), which learns chemical substructures from both the
representations of the single drug (intra-view) and the drug pair (inter-view)
simultaneously and utilizes the substructures to update the drug representation
iteratively. Comprehensive evaluations demonstrate that MSN-DDI has almost
solved DDI prediction for existing drugs by achieving a relatively improved
accuracy of 19.32% and an over 99% accuracy under the transductive setting.
More importantly, MSN-DDI exhibits better generalization ability to unseen
drugs with a relatively improved accuracy of 7.07% under more challenging
inductive scenarios. Finally, MSN-DDI improves prediction performance for
real-world DDI applications to new drugs.
| [
{
"created": "Mon, 28 Mar 2022 05:44:29 GMT",
"version": "v1"
}
] | 2022-03-29 | [
[
"Li",
"Zimeng",
""
],
[
"Zhu",
"Shichao",
""
],
[
"Shao",
"Bin",
""
],
[
"Liu",
"Tie-Yan",
""
],
[
"Zeng",
"Xiangxiang",
""
],
[
"Wang",
"Tong",
""
]
] | Drug-drug interaction (DDI) prediction provides a drug combination strategy for systemically effective treatment. Previous studies usually model drug information constrained on a single view such as the drug itself, leading to incomplete and noisy information, which limits the accuracy of DDI prediction. In this work, we propose a novel multi-view drug substructure network for DDI prediction (MSN-DDI), which learns chemical substructures from both the representations of the single drug (intra-view) and the drug pair (inter-view) simultaneously and utilizes the substructures to update the drug representation iteratively. Comprehensive evaluations demonstrate that MSN-DDI has almost solved DDI prediction for existing drugs by achieving a relatively improved accuracy of 19.32% and an over 99% accuracy under the transductive setting. More importantly, MSN-DDI exhibits better generalization ability to unseen drugs with a relatively improved accuracy of 7.07% under more challenging inductive scenarios. Finally, MSN-DDI improves prediction performance for real-world DDI applications to new drugs. |
0807.3809 | Lars Reichl | Lars Reichl, Siegrid L\"owel, and Fred Wolf | Pinwheel stabilization by ocular dominance segregation | 10 pages, 4 figures | Phys. Rev. Lett. 102, 208101 (2009) | 10.1103/PhysRevLett.102.208101 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an analytical approach for studying the coupled development of
ocular dominance and orientation preference columns. Using this approach we
demonstrate that ocular dominance segregation can induce the stabilization and
even the production of pinwheels by their crystallization in two types of
periodic lattices. Pinwheel crystallization depends on the overall dominance of
one eye over the other, a condition that is fulfilled during early cortical
development. Increasing the strength of inter-map coupling induces a transition
from pinwheel-free stripe solutions to intermediate and high pinwheel density
states.
| [
{
"created": "Thu, 24 Jul 2008 15:44:39 GMT",
"version": "v1"
},
{
"created": "Wed, 20 May 2009 07:51:31 GMT",
"version": "v2"
}
] | 2013-05-29 | [
[
"Reichl",
"Lars",
""
],
[
"Löwel",
"Siegrid",
""
],
[
"Wolf",
"Fred",
""
]
] | We present an analytical approach for studying the coupled development of ocular dominance and orientation preference columns. Using this approach we demonstrate that ocular dominance segregation can induce the stabilization and even the production of pinwheels by their crystallization in two types of periodic lattices. Pinwheel crystallization depends on the overall dominance of one eye over the other, a condition that is fulfilled during early cortical development. Increasing the strength of inter-map coupling induces a transition from pinwheel-free stripe solutions to intermediate and high pinwheel density states. |
1308.4780 | Richard A. Blythe | Thomas C. Scott-Phillips and Richard A. Blythe | Why is combinatorial communication rare in the natural world, and why is
language an exception to this trend? | 33 page pdf, including supplementary information and one figure. To
appear in J Roy Soc Int | J. Roy. Soc. Interface (2013) 10 20130520 | 10.1098/rsif.2013.0520 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a combinatorial communication system, some signals consist of the
combinations of other signals. Such systems are more efficient than equivalent,
non-combinatorial systems, yet despite this they are rare in nature. Why?
Previous explanations have focused on the adaptive limits of combinatorial
communication, or on its purported cognitive difficulties, but neither of these
explains the full distribution of combinatorial communication in the natural
world. Here we present a nonlinear dynamical model of the emergence of
combinatorial communication that, unlike previous models, considers how
initially non-communicative behaviour evolves to take on a communicative
function. We derive three basic principles about the emergence of combinatorial
communication. We hence show that the interdependence of signals and responses
places significant constraints on the historical pathways by which
combinatorial signals might emerge, to the extent that anything other than the
most simple form of combinatorial communication is extremely unlikely. We also
argue that these constraints can be bypassed if individuals have the
socio-cognitive capacity to engage in ostensive communication. Humans, but
probably no other species, have this ability. This may explain why language,
which is massively combinatorial, is such an extreme exception to nature's
general trend for non-combinatorial communication.
| [
{
"created": "Thu, 22 Aug 2013 07:29:31 GMT",
"version": "v1"
}
] | 2015-05-26 | [
[
"Scott-Phillips",
"Thomas C.",
""
],
[
"Blythe",
"Richard A.",
""
]
] | In a combinatorial communication system, some signals consist of the combinations of other signals. Such systems are more efficient than equivalent, non-combinatorial systems, yet despite this they are rare in nature. Why? Previous explanations have focused on the adaptive limits of combinatorial communication, or on its purported cognitive difficulties, but neither of these explains the full distribution of combinatorial communication in the natural world. Here we present a nonlinear dynamical model of the emergence of combinatorial communication that, unlike previous models, considers how initially non-communicative behaviour evolves to take on a communicative function. We derive three basic principles about the emergence of combinatorial communication. We hence show that the interdependence of signals and responses places significant constraints on the historical pathways by which combinatorial signals might emerge, to the extent that anything other than the most simple form of combinatorial communication is extremely unlikely. We also argue that these constraints can be bypassed if individuals have the socio-cognitive capacity to engage in ostensive communication. Humans, but probably no other species, have this ability. This may explain why language, which is massively combinatorial, is such an extreme exception to nature's general trend for non-combinatorial communication. |
2203.11874 | Rufus Mitchell-Heggs Mr | Rufus Mitchell-Heggs, Seigfred Prado, Giuseppe P. Gava, Mary Ann Go
and Simon R. Schultz | Neural manifold analysis of brain circuit dynamics in health and disease | 24 pages, 6 figures, 1 table | null | 10.48550/arXiv.2203.11874 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Recent developments in experimental neuroscience make it possible to
simultaneously record the activity of thousands of neurons. However, the
development of analysis approaches for such large-scale neural recordings has
been slower than those applicable to single-cell experiments. One approach that
has gained recent popularity is neural manifold learning. This approach takes
advantage of the fact that often, even though neural datasets may be very high
dimensional, the dynamics of neural activity tends to traverse a much
lower-dimensional space. The topological structures formed by these
low-dimensional neural subspaces are referred to as neural manifolds, and may
potentially provide insight linking neural circuit dynamics with cognitive
function and behavioural performance. In this paper we review a number of
linear and non-linear approaches to neural manifold learning, by setting them
within a common mathematical framework, and comparing their advantages and
disadvantages with respect to their use for neural data analysis. We apply them
to a number of datasets from published literature, comparing the manifolds that
result from their application to hippocampal place cells, motor cortical
neurons during a reaching task, and prefrontal cortical neurons during a
multi-behaviour task. We find that in many circumstances linear algorithms
produce similar results to non-linear methods, although in particular in cases
where the behavioural complexity is greater, nonlinear methods tend to find
lower dimensional manifolds, at the possible expense of interpretability. We
demonstrate that these methods are applicable to the study of neurological
disorders through simulation of a mouse model of Alzheimer's disease, and
speculate that neural manifold analysis may help us to understand the
circuit-level consequences of molecular and cellular neuropathology.
| [
{
"created": "Tue, 22 Mar 2022 16:52:15 GMT",
"version": "v1"
},
{
"created": "Mon, 17 Oct 2022 08:40:48 GMT",
"version": "v2"
}
] | 2022-10-18 | [
[
"Mitchell-Heggs",
"Rufus",
""
],
[
"Prado",
"Seigfred",
""
],
[
"Gava",
"Giuseppe P.",
""
],
[
"Go",
"Mary Ann",
""
],
[
"Schultz",
"Simon R.",
""
]
] | Recent developments in experimental neuroscience make it possible to simultaneously record the activity of thousands of neurons. However, the development of analysis approaches for such large-scale neural recordings has been slower than those applicable to single-cell experiments. One approach that has gained recent popularity is neural manifold learning. This approach takes advantage of the fact that often, even though neural datasets may be very high dimensional, the dynamics of neural activity tends to traverse a much lower-dimensional space. The topological structures formed by these low-dimensional neural subspaces are referred to as neural manifolds, and may potentially provide insight linking neural circuit dynamics with cognitive function and behavioural performance. In this paper we review a number of linear and non-linear approaches to neural manifold learning, by setting them within a common mathematical framework, and comparing their advantages and disadvantages with respect to their use for neural data analysis. We apply them to a number of datasets from published literature, comparing the manifolds that result from their application to hippocampal place cells, motor cortical neurons during a reaching task, and prefrontal cortical neurons during a multi-behaviour task. We find that in many circumstances linear algorithms produce similar results to non-linear methods, although in particular in cases where the behavioural complexity is greater, nonlinear methods tend to find lower dimensional manifolds, at the possible expense of interpretability. We demonstrate that these methods are applicable to the study of neurological disorders through simulation of a mouse model of Alzheimer's disease, and speculate that neural manifold analysis may help us to understand the circuit-level consequences of molecular and cellular neuropathology. |
1705.00096 | Catherine Reason | Cathy M Reason | Consciousness is not a physically provable property | null | Journal of Mind and Behavior 37 (1) pp 31-46 (2016) | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a logical proof that computing machines, and by extension physical
systems, can never be certain if they possess conscious awareness. This implies
that human consciousness is associated with a violation of energy conservation.
We examine the significance that a particular interpretation of quantum
mechanics, known as single mind Q (Barrett 1999), might have for the detection
of such a violation. Finally we apply single mind Q to the problem of free will
as it arises in some celebrated experiments by the neurophysiologist Benjamin
Libet.
| [
{
"created": "Fri, 28 Apr 2017 23:21:45 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Sep 2017 15:49:06 GMT",
"version": "v2"
}
] | 2017-09-27 | [
[
"Reason",
"Cathy M",
""
]
] | We present a logical proof that computing machines, and by extension physical systems, can never be certain if they possess conscious awareness. This implies that human consciousness is associated with a violation of energy conservation. We examine the significance that a particular interpretation of quantum mechanics, known as single mind Q (Barrett 1999), might have for the detection of such a violation. Finally we apply single mind Q to the problem of free will as it arises in some celebrated experiments by the neurophysiologist Benjamin Libet. |
1901.04432 | Gao-De Li Dr | Gao-De Li | Further Thoughts on Abnormal Chromatin Configuration and Oncogenesis | 5 pages | null | null | null | q-bio.SC q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | More than 30 years ago, we published a paper entitled "Abnormal chromatin
configuration and oncogenesis", which proposed the first hypothesis that links
oncogenesis to abnormal three-dimensional (3D) genome structure. Recently, many
studies have demonstrated that the 3D genome structure plays a major role in
oncogenesis, which strongly supports our hypothesis. In this paper, further
thoughts about our hypothesis are presented.
| [
{
"created": "Mon, 14 Jan 2019 17:59:57 GMT",
"version": "v1"
},
{
"created": "Thu, 17 Jan 2019 17:53:50 GMT",
"version": "v2"
},
{
"created": "Fri, 1 Feb 2019 08:59:00 GMT",
"version": "v3"
}
] | 2019-02-07 | [
[
"Li",
"Gao-De",
""
]
] | More than 30 years ago, we published a paper entitled "Abnormal chromatin configuration and oncogenesis", which proposed the first hypothesis that links oncogenesis to abnormal three-dimensional (3D) genome structure. Recently, many studies have demonstrated that the 3D genome structure plays a major role in oncogenesis, which strongly supports our hypothesis. In this paper, further thoughts about our hypothesis are presented. |
1307.8249 | Daniel Rico | David Juan, Daniel Rico, Tomas Marques-Bonet, Oscar
Fernandez-Capetillo and Alfonso Valencia | Late-replicating CNVs as a source of new genes | 43 pages, 5 figures and 4 figure supplements (two new figure
supplements); added references and text in Introduction and Discussion,
corrected typos; results unchanged | Biology Open 2013 BIO20136924; Advance Online Article November 15,
2013 | 10.1242/bio.20136924 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Asynchronous replication of the genome has been associated with different
rates of point mutation and copy number variation (CNV) in human populations.
Here, we explored whether the bias in the generation of CNV that is associated with
DNA replication timing might have conditioned the birth of new protein-coding
genes during evolution. We show that genes that were duplicated during primate
evolution are more commonly found among the human genes located in
late-replicating CNV regions. We traced the relationship between replication
timing and the evolutionary age of duplicated genes. Strikingly, we found that
there is a significant enrichment of evolutionary younger duplicates in late
replicating regions of the human and mouse genome. Indeed, the presence of
duplicates in late replicating regions gradually decreases as the evolutionary
time since duplication extends. Our results suggest that the accumulation of
recent duplications in late replicating CNV regions is an active process
influencing genome evolution.
| [
{
"created": "Wed, 31 Jul 2013 08:29:30 GMT",
"version": "v1"
},
{
"created": "Wed, 30 Oct 2013 14:24:59 GMT",
"version": "v2"
}
] | 2013-11-27 | [
[
"Juan",
"David",
""
],
[
"Rico",
"Daniel",
""
],
[
"Marques-Bonet",
"Tomas",
""
],
[
"Fernandez-Capetillo",
"Oscar",
""
],
[
"Valencia",
"Alfonso",
""
]
] | Asynchronous replication of the genome has been associated with different rates of point mutation and copy number variation (CNV) in human populations. Here, we explored whether the bias in the generation of CNV that is associated with DNA replication timing might have conditioned the birth of new protein-coding genes during evolution. We show that genes that were duplicated during primate evolution are more commonly found among the human genes located in late-replicating CNV regions. We traced the relationship between replication timing and the evolutionary age of duplicated genes. Strikingly, we found that there is a significant enrichment of evolutionary younger duplicates in late replicating regions of the human and mouse genome. Indeed, the presence of duplicates in late replicating regions gradually decreases as the evolutionary time since duplication extends. Our results suggest that the accumulation of recent duplications in late replicating CNV regions is an active process influencing genome evolution. |
2405.18452 | Mobina Tousian Shandiz | Mobina Tousian, Christian Solis Calero, and Julio Cesar Perez
Sansalvador | Immune cells interactions in the tumor microenvironment | 12 pages, 8 figures, 0 tables | null | null | null | q-bio.CB physics.bio-ph | http://creativecommons.org/licenses/by/4.0/ | The tumor microenvironment (TME) plays a critical role in cancer cell
proliferation, invasion, and resistance to therapy. A principal component of
the TME is the tumor immune microenvironment (TIME), which includes various
immune cells such as macrophages. Depending on the signals received from
environmental elements like IL-4 or IFN-$\gamma$, macrophages can exhibit
pro-inflammatory (M1) or anti-inflammatory (M2) phenotypes. This study uses an
enhanced agent-based model to simulate interactions within the TIME, focusing
on the dynamic behavior of macrophages. We examine the response of cancer cell
populations to alterations in macrophages, categorized into three different
behaviors: M0 (initial-inactive), M1 (immune-upholding), and M2
(immune-repressing), as well as environmental differentiations. The results
highlight the significant impact of macrophage modulation on tumor
proliferation and suggest potential therapeutic strategies targeting these
immune cells.
| [
{
"created": "Tue, 28 May 2024 14:29:11 GMT",
"version": "v1"
}
] | 2024-05-30 | [
[
"Tousian",
"Mobina",
""
],
[
"Calero",
"Christian Solis",
""
],
[
"Sansalvador",
"Julio Cesar Perez",
""
]
] | The tumor microenvironment (TME) plays a critical role in cancer cell proliferation, invasion, and resistance to therapy. A principal component of the TME is the tumor immune microenvironment (TIME), which includes various immune cells such as macrophages. Depending on the signals received from environmental elements like IL-4 or IFN-$\gamma$, macrophages can exhibit pro-inflammatory (M1) or anti-inflammatory (M2) phenotypes. This study uses an enhanced agent-based model to simulate interactions within the TIME, focusing on the dynamic behavior of macrophages. We examine the response of cancer cell populations to alterations in macrophages, categorized into three different behaviors: M0 (initial-inactive), M1 (immune-upholding), and M2 (immune-repressing), as well as environmental differentiations. The results highlight the significant impact of macrophage modulation on tumor proliferation and suggest potential therapeutic strategies targeting these immune cells. |
1609.07898 | Sandro Bottaro | Sandro Bottaro, Pavel Ban\'a\v{s}, Jiri Sponer, and Giovanni Bussi | Free Energy Landscape of GAGA and UUCG RNA Tetraloops | Journal of Physical Chemistry Letters (2016) | J. Phys. Chem. Lett. 7, 4032 (2016) | 10.1021/acs.jpclett.6b01905 | null | q-bio.BM physics.bio-ph physics.chem-ph physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We report the folding thermodynamics of ccUUCGgg and ccGAGAgg RNA tetraloops
using atomistic molecular dynamics simulations. We obtain a previously
unreported estimation of the folding free energy using parallel tempering in
combination with well-tempered metadynamics. A key ingredient is the use of a
recently developed metric distance, eRMSD, as a biased collective variable. We
find that the native fold of both tetraloops is not the global free energy
minimum using the Amber $\chi$OL3 force field. The estimated folding free
energies are 30.2 kJ/mol for UUCG and 7.5 kJ/mol for GAGA, in striking
disagreement with experimental data. We evaluate the viability of all possible
one-dimensional backbone force field corrections. We find that disfavoring the
gauche+ region of {\alpha} and {\zeta} angles consistently improves the
existing force field. The level of accuracy achieved with these corrections,
however, cannot be considered sufficient by judging on the basis of available
thermodynamic data and solution experiments.
| [
{
"created": "Mon, 26 Sep 2016 09:25:58 GMT",
"version": "v1"
}
] | 2016-11-21 | [
[
"Bottaro",
"Sandro",
""
],
[
"Banáš",
"Pavel",
""
],
[
"Sponer",
"Jiri",
""
],
[
"Bussi",
"Giovanni",
""
]
] | We report the folding thermodynamics of ccUUCGgg and ccGAGAgg RNA tetraloops using atomistic molecular dynamics simulations. We obtain a previously unreported estimation of the folding free energy using parallel tempering in combination with well-tempered metadynamics. A key ingredient is the use of a recently developed metric distance, eRMSD, as a biased collective variable. We find that the native fold of both tetraloops is not the global free energy minimum using the Amber $\chi$OL3 force field. The estimated folding free energies are 30.2 kJ/mol for UUCG and 7.5 kJ/mol for GAGA, in striking disagreement with experimental data. We evaluate the viability of all possible one-dimensional backbone force field corrections. We find that disfavoring the gauche+ region of {\alpha} and {\zeta} angles consistently improves the existing force field. The level of accuracy achieved with these corrections, however, cannot be considered sufficient by judging on the basis of available thermodynamic data and solution experiments. |
1008.5171 | Gerardo Aquino | Gerardo Aquino and Robert G. Endres | Increased accuracy of ligand sensing by receptor diffusion on cell
surface | 11 pages, 7 figures, accepted for publication on Physical Review E | Physical Review E 82, 041902 (2010) | 10.1103/PhysRevE.82.041902 | null | q-bio.SC physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The physical limit with which a cell senses external ligand concentration
corresponds to the perfect absorber, where all ligand particles are absorbed
and overcounting of the same ligand particles does not occur. Here we analyze how
the lateral diffusion of receptors on the cell membrane affects the accuracy of
sensing ligand concentration. Specifically, we connect our modeling to
neurotransmission in neural synapses where the diffusion of glutamate receptors
is already known to refresh synaptic connections. We find that receptor
diffusion indeed increases the accuracy of sensing for both the glutamate AMPA
and NMDA receptors, although the NMDA receptor is overall much noisier. We
propose that the difference in accuracy of sensing of the two receptors can be
linked to their different roles in neurotransmission. Specifically, the high
accuracy in sensing glutamate is essential for the AMPA receptor to start
membrane depolarization, while the NMDA receptor is believed to work in a
second stage as a coincidence detector, involved in long-term potentiation and
memory.
| [
{
"created": "Mon, 30 Aug 2010 21:23:04 GMT",
"version": "v1"
}
] | 2015-11-06 | [
[
"Aquino",
"Gerardo",
""
],
[
"Endres",
"Robert G.",
""
]
] | The physical limit with which a cell senses external ligand concentration corresponds to the perfect absorber, where all ligand particles are absorbed and overcounting of the same ligand particles does not occur. Here we analyze how the lateral diffusion of receptors on the cell membrane affects the accuracy of sensing ligand concentration. Specifically, we connect our modeling to neurotransmission in neural synapses where the diffusion of glutamate receptors is already known to refresh synaptic connections. We find that receptor diffusion indeed increases the accuracy of sensing for both the glutamate AMPA and NMDA receptors, although the NMDA receptor is overall much noisier. We propose that the difference in accuracy of sensing of the two receptors can be linked to their different roles in neurotransmission. Specifically, the high accuracy in sensing glutamate is essential for the AMPA receptor to start membrane depolarization, while the NMDA receptor is believed to work in a second stage as a coincidence detector, involved in long-term potentiation and memory. |
1911.02362 | Adam Safron | Adam Safron | Bayesian Analogical Cybernetics | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It has been argued that all of cognition can be understood in terms of
Bayesian inference. It has also been argued that analogy is the core of
cognition. Here I will propose that these perspectives are fully compatible, in
that analogical reasoning can be described in terms of Bayesian inference and
vice versa, and that both of these positions require a thorough cybernetic
grounding in order to fulfill their promise as unifying frameworks for
understanding minds. From the Bayesian perspective of the Free Energy Principle
and Active Inference framework, thought is constituted by dynamics of cascading
belief propagation through the nodes of probabilistic generative models
specified by a cortical heterarchy "rooted" in action-perception cycles that
ground the mind as an embodied control system for an autonomous agent. From the
analogical structure mapping perspective, thought is constituted by the
alignment and comparison of heterogeneous structural representations. Here I
will propose that this core cognitive process for analogical reasoning is
naturally implemented by predictive coding mechanisms. However, both Bayesian
cognitive science and models of cognitive development via analogical reasoning
require rich base domains and priors (or reliably learnable posteriors) from
which they can commence the process of bootstrapping minds. Here in the spirit
of the work of George Lakoff and Mark Johnson, I propose that embodiment
provides many of the inductive biases that are usually described in terms of
innate core knowledge. (Please note: this manuscript was written and finalized
in 2012.)
| [
{
"created": "Mon, 4 Nov 2019 06:17:55 GMT",
"version": "v1"
},
{
"created": "Fri, 8 Nov 2019 04:13:32 GMT",
"version": "v2"
}
] | 2019-11-11 | [
[
"Safron",
"Adam",
""
]
] | It has been argued that all of cognition can be understood in terms of Bayesian inference. It has also been argued that analogy is the core of cognition. Here I will propose that these perspectives are fully compatible, in that analogical reasoning can be described in terms of Bayesian inference and vice versa, and that both of these positions require a thorough cybernetic grounding in order to fulfill their promise as unifying frameworks for understanding minds. From the Bayesian perspective of the Free Energy Principle and Active Inference framework, thought is constituted by dynamics of cascading belief propagation through the nodes of probabilistic generative models specified by a cortical heterarchy "rooted" in action-perception cycles that ground the mind as an embodied control system for an autonomous agent. From the analogical structure mapping perspective, thought is constituted by the alignment and comparison of heterogeneous structural representations. Here I will propose that this core cognitive process for analogical reasoning is naturally implemented by predictive coding mechanisms. However, both Bayesian cognitive science and models of cognitive development via analogical reasoning require rich base domains and priors (or reliably learnable posteriors) from which they can commence the process of bootstrapping minds. Here in the spirit of the work of George Lakoff and Mark Johnson, I propose that embodiment provides many of the inductive biases that are usually described in terms of innate core knowledge. (Please note: this manuscript was written and finalized in 2012.) |
2104.12273 | Luigi Frunzo | A. Tenore, M.R. Mattei, L. Frunzo | Multiscale modelling of oxygenic photogranules | 33 pages, 12 figures, preprint version | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work presents a mathematical model which describes both the genesis and
growth of oxygenic photogranules (OPGs) and the related treatment process. The
photogranule has been modelled as a free boundary domain with radial symmetry,
which evolves over time as a result of microbial growth, attachment and
detachment processes. A system of hyperbolic and parabolic PDEs has been
considered to model the advective transport and growth of sessile biomass and
the diffusive transport and conversion of soluble substrates. The reactor has
been modelled as a sequencing batch reactor (SBR) through a system of first
order IDEs. Phototrophic biomass has been considered for the first time in
granular biofilms, and cyanobacteria and microalgae are taken into account
separately to model their differences in growth rate and light harvesting and
utilization. To describe the key role of cyanobacteria in the photogranules
formation process, the attachment velocity of all suspended microbial species
has been modelled as a function of the cyanobacteria concentration in suspended
form. The model takes into account the main biological aspects and processes
involved in OPGs-based systems: heterotrophic and photoautotrophic activities
of cyanobacteria and microalgae, metabolic activity of heterotrophic and
nitrifying bacteria, microbial decay, EPS secretion, diffusion and conversion
of soluble substrates (inorganic and organic carbon, ammonia, nitrate and
oxygen), symbiotic and competitive interactions between the different microbial
species, day-night cycle, light diffusion and attenuation across the granular
biofilm and photoinhibition phenomena. The model has been integrated numerically,
investigating the evolution and microbial composition of photogranules and the
treatment efficiency of the OPGs-based system. The results show the consistency
of the model and confirm the effectiveness of the OPGs technology.
| [
{
"created": "Sun, 25 Apr 2021 22:02:29 GMT",
"version": "v1"
}
] | 2021-04-27 | [
[
"Tenore",
"A.",
""
],
[
"Mattei",
"M. R.",
""
],
[
"Frunzo",
"L.",
""
]
] | This work presents a mathematical model which describes both the genesis and growth of oxygenic photogranules (OPGs) and the related treatment process. The photogranule has been modelled as a free boundary domain with radial symmetry, which evolves over time as a result of microbial growth, attachment and detachment processes. A system of hyperbolic and parabolic PDEs has been considered to model the advective transport and growth of sessile biomass and the diffusive transport and conversion of soluble substrates. The reactor has been modelled as a sequencing batch reactor (SBR) through a system of first order IDEs. Phototrophic biomass has been considered for the first time in granular biofilms, and cyanobacteria and microalgae are taken into account separately to model their differences in growth rate and light harvesting and utilization. To describe the key role of cyanobacteria in the photogranules formation process, the attachment velocity of all suspended microbial species has been modelled as a function of the cyanobacteria concentration in suspended form. The model takes into account the main biological aspects and processes involved in OPGs-based systems: heterotrophic and photoautotrophic activities of cyanobacteria and microalgae, metabolic activity of heterotrophic and nitrifying bacteria, microbial decay, EPS secretion, diffusion and conversion of soluble substrates (inorganic and organic carbon, ammonia, nitrate and oxygen), symbiotic and competitive interactions between the different microbial species, day-night cycle, light diffusion and attenuation across the granular biofilm and photoinhibition phenomena. The model has been integrated numerically, investigating the evolution and microbial composition of photogranules and the treatment efficiency of the OPGs-based system. The results show the consistency of the model and confirm the effectiveness of the OPGs technology. |
1602.04773 | Andr\'e Amado | Andr\'e Amado, Lenin Fern\'andez, Weini Huang, Fernando F. Ferreira,
Paulo R. A. Campos | Competing metabolic strategies in a multilevel selection model | 32 pages, 7 figures | null | 10.1098/rsos.160544 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The interplay between energy efficiency and evolutionary mechanisms is
addressed. One important question is how evolutionary mechanisms can select for
the optimised usage of energy in situations where it does not lead to immediate
advantage. For example, this problem is of great importance to improve our
understanding about the major transition from unicellular to multicellular form
of life. The immediate advantage of gathering efficient individuals in an
energetic context is not clear. Although this process increases relatedness
among individuals, it also increases local competition. To address this
question, we propose a model of two competing metabolic strategies that makes
explicit reference to the resource usage. We assume the existence of an
efficient strain, which converts resource into energy at high efficiency but
displays a low rate of resource consumption, and an inefficient strain, which
consumes resource at a high rate with a low efficiency in converting it to
energy. We explore the dynamics in both well-mixed and structured populations.
The selection for optimised energy usage is measured by the likelihood that
an efficient strain can invade a population comprised only of inefficient
strains. It is found that the region of the parameter space at which the
efficient strain can thrive in structured populations is always larger than
observed in well-mixed populations. In fact, in well-mixed populations the
efficient strain is only evolutionarily stable in the domain where there is
no evolutionary dilemma. We also observe that small group sizes enhance the
chance of invasion by the efficient strain in spite of increasing the
competition among relatives. This outcome corroborates the key role played by
kin selection and shows that group dynamics relying on group expansion,
overlapping generations and group split can balance the negative effects of
local competition.
| [
{
"created": "Mon, 15 Feb 2016 19:32:00 GMT",
"version": "v1"
}
] | 2016-11-21 | [
[
"Amado",
"André",
""
],
[
"Fernández",
"Lenin",
""
],
[
"Huang",
"Weini",
""
],
[
"Ferreira",
"Fernando F.",
""
],
[
"Campos",
"Paulo R. A.",
""
]
] | The interplay between energy efficiency and evolutionary mechanisms is addressed. One important question is how evolutionary mechanisms can select for the optimised usage of energy in situations where it does not lead to immediate advantage. For example, this problem is of great importance to improve our understanding about the major transition from unicellular to multicellular form of life. The immediate advantage of gathering efficient individuals in an energetic context is not clear. Although this process increases relatedness among individuals, it also increases local competition. To address this question, we propose a model of two competing metabolic strategies that makes explicit reference to the resource usage. We assume the existence of an efficient strain, which converts resource into energy at high efficiency but displays a low rate of resource consumption, and an inefficient strain, which consumes resource at a high rate with a low efficiency in converting it to energy. We explore the dynamics in both well-mixed and structured populations. The selection for optimised energy usage is measured by the likelihood that an efficient strain can invade a population comprised only of inefficient strains. It is found that the region of the parameter space at which the efficient strain can thrive in structured populations is always larger than observed in well-mixed populations. In fact, in well-mixed populations the efficient strain is only evolutionarily stable in the domain where there is no evolutionary dilemma. We also observe that small group sizes enhance the chance of invasion by the efficient strain in spite of increasing the competition among relatives. This outcome corroborates the key role played by kin selection and shows that group dynamics relying on group expansion, overlapping generations and group split can balance the negative effects of local competition. |
q-bio/0607043 | Horacio Ceva | Enrique Burgos, Horacio Ceva, Roberto P.J. Perazzo, Mariano Devoto,
Diego Medan, Martin Zimmermann, Mariana Delbue | Nestedness and degree distributions are necessarily linked in mutualist
systems | This paper and q-bio/0605026 are replaced by q-bio/0701029 | null | null | null | q-bio.PE | null | Replacement of papers q-bio/0605026 and q-bio/0607043 by a single manuscript
(q-bio/0701029) follows suggestions from a magazine's referee
| [
{
"created": "Mon, 24 Jul 2006 19:33:41 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Jan 2007 12:05:04 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Burgos",
"Enrique",
""
],
[
"Ceva",
"Horacio",
""
],
[
"Perazzo",
"Roberto P. J.",
""
],
[
"Devoto",
"Mariano",
""
],
[
"Medan",
"Diego",
""
],
[
"Zimmermann",
"Martin",
""
],
[
"Delbue",
"Mariana",
""
]
] | Replacement of papers q-bio/0605026 and q-bio/0607043 by a single manuscript (q-bio/0701029) follows suggestions from a magazine's referee |
0808.0287 | Jeremy L. Martin | Jeremy L. Martin and E. O. Wiley | Mathematical Models and Biological Meaning: Taking Trees Seriously | 15 pages including 6 figures [5 pdf, 1 jpg]. Converted from original
MS Word manuscript to PDFLaTeX | null | 10.1371/currents.RRN1196 | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We compare three basic kinds of discrete mathematical models used to portray
phylogenetic relationships among species and higher taxa: phylogenetic trees,
Hennig trees and Nelson cladograms. All three models are trees, as that term is
commonly used in mathematics; the difference between them lies in the
biological interpretation of their vertices and edges. Phylogenetic trees and
Hennig trees carry exactly the same information, and translation between these
two kinds of trees can be accomplished by a simple algorithm. On the other
hand, evolutionary concepts such as monophyly are represented by different
mathematical substructures in the two models. For
each phylogenetic or Hennig tree, there is a Nelson cladogram carrying the same
information, but the requirement that all taxa be represented by leaves
necessarily makes the representation less efficient. Moreover, we claim that it
is necessary to give some interpretation to the edges and internal vertices of
a Nelson cladogram in order to make it useful as a biological model. One
possibility is to interpret internal vertices as sets of characters and the
edges as statements of inclusion; however, this interpretation carries little
more than incomplete phenetic information. We assert that from the standpoint
of phylogenetics, one is forced to regard each internal vertex of a Nelson
cladogram as an actual (albeit unsampled) species simply to justify the use of
synapomorphies rather than symplesiomorphies.
| [
{
"created": "Sun, 3 Aug 2008 02:22:30 GMT",
"version": "v1"
}
] | 2011-10-05 | [
[
"Martin",
"Jeremy L.",
""
],
[
"Wiley",
"E. O.",
""
]
] | We compare three basic kinds of discrete mathematical models used to portray phylogenetic relationships among species and higher taxa: phylogenetic trees, Hennig trees and Nelson cladograms. All three models are trees, as that term is commonly used in mathematics; the difference between them lies in the biological interpretation of their vertices and edges. Phylogenetic trees and Hennig trees carry exactly the same information, and translation between these two kinds of trees can be accomplished by a simple algorithm. On the other hand, evolutionary concepts such as monophyly are represented by different mathematical substructures in the two models. For each phylogenetic or Hennig tree, there is a Nelson cladogram carrying the same information, but the requirement that all taxa be represented by leaves necessarily makes the representation less efficient. Moreover, we claim that it is necessary to give some interpretation to the edges and internal vertices of a Nelson cladogram in order to make it useful as a biological model. One possibility is to interpret internal vertices as sets of characters and the edges as statements of inclusion; however, this interpretation carries little more than incomplete phenetic information. We assert that from the standpoint of phylogenetics, one is forced to regard each internal vertex of a Nelson cladogram as an actual (albeit unsampled) species simply to justify the use of synapomorphies rather than symplesiomorphies. |
2202.11552 | Gonzalo L\'opez | Gonzalo Maximiliano Lopez, Juan Pablo Aparicio | General model of sex distribution, mating probability and egg production
for macroparasites with polygamous mating system | null | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The reproductive habits of helminths are important for the study of the
dynamics of their transmission. For populations of parasites distributed by
Poisson or negative binomial models, these habits have already been studied.
However, there are other statistical models that describe these populations,
such as zero-inflated models, but where reproductive characteristics were not
analyzed. Using an arbitrary model for the parasite population, we model the
distribution of females and males per host, and from these we model the
different reproductive variables such as the mean number of fertile females,
the mean egg production, the mating probability, and the mean fertilized egg
production. We show that these variables change due to the effects of a
negative density-dependent fecundity, a characteristic of helminth parasites.
We present the results obtained for some particular models.
| [
{
"created": "Wed, 23 Feb 2022 15:01:21 GMT",
"version": "v1"
}
] | 2022-02-24 | [
[
"Lopez",
"Gonzalo Maximiliano",
""
],
[
"Aparicio",
"Juan Pablo",
""
]
] | The reproductive habits of helminths are important for the study of the dynamics of their transmission. For populations of parasites distributed by Poisson or negative binomial models, these habits have already been studied. However, there are other statistical models that describe these populations, such as zero-inflated models, for which reproductive characteristics have not been analyzed. Using an arbitrary model for the parasite population, we model the distribution of females and males per host, and from these we model the different reproductive variables such as the mean number of fertile females, the mean egg production, the mating probability, and the mean fertilized egg production. We show that these variables change due to the effects of a negative density-dependent fecundity, a characteristic of helminth parasites. We present the results obtained for some particular models. |
1907.10679 | Divine Wanduku | Divine Wanduku and Chinmoy Rahul | Complete maximum likelihood estimation for SEIR epidemic models:
theoretical development | null | null | null | null | q-bio.PE math.DS math.ST stat.TH | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present a class of SEIR Markov chain models for infectious diseases
observed over discrete time in a random human population living in a closed
environment. The population changes over time through random births, deaths,
and transitions between states of the population. The SEIR models consist of
random dynamical equations for each state (S, E, I and R) involving driving
events for the process. We characterize some special types of SEIR Markov chain
models in the class including: (1) when birth and death are zero or non-zero,
and (2) when the incubation and infectious periods are constant or random. A
detailed parameter estimation applying the maximum likelihood estimation
technique and the expectation maximization algorithm is presented for this study.
Numerical simulation results are given to validate the epidemic models.
| [
{
"created": "Wed, 24 Jul 2019 19:23:41 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Jul 2019 15:04:07 GMT",
"version": "v2"
}
] | 2019-07-30 | [
[
"Wanduku",
"Divine",
""
],
[
"Rahul",
"Chinmoy",
""
]
] | We present a class of SEIR Markov chain models for infectious diseases observed over discrete time in a random human population living in a closed environment. The population changes over time through random births, deaths, and transitions between states of the population. The SEIR models consist of random dynamical equations for each state (S, E, I and R) involving driving events for the process. We characterize some special types of SEIR Markov chain models in the class including: (1) when birth and death are zero or non-zero, and (2) when the incubation and infectious periods are constant or random. A detailed parameter estimation applying the maximum likelihood estimation technique and the expectation maximization algorithm is presented for this study. Numerical simulation results are given to validate the epidemic models. |
1504.06610 | Jianhua Xing | Jianhua Xing, Jin Yu, Hang Zhang, Xiao-Jun Tian | Computational modeling to elucidate molecular mechanisms of epigenetic
memory | 36 pages, 4 figures, 2 tables, book chapter | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How do mammalian cells that share the same genome exist in notably distinct
phenotypes, exhibiting differences in morphology, gene expression patterns, and
epigenetic chromatin statuses? Furthermore, how do cells of different phenotypes
differentiate reproducibly from a single fertilized egg? These are fundamental
problems in developmental biology. Epigenetic histone modifications play an
important role in the maintenance of different cell phenotypes. The exact
molecular mechanism for inheritance of the modification patterns over cell
generations remains elusive. The complexity comes partly from the number of
molecular species and the broad time scales involved. In recent years
mathematical modeling has made significant contributions to elucidating the
molecular mechanisms of DNA methylation and histone covalent modification
inheritance. We will pedagogically introduce the typical procedure and some
technical details of performing a mathematical modeling study, and discuss
future developments.
| [
{
"created": "Tue, 14 Apr 2015 02:11:51 GMT",
"version": "v1"
}
] | 2015-04-27 | [
[
"Xing",
"Jianhua",
""
],
[
"Yu",
"Jin",
""
],
[
"Zhang",
"Hang",
""
],
[
"Tian",
"Xiao-Jun",
""
]
] | How do mammalian cells that share the same genome exist in notably distinct phenotypes, exhibiting differences in morphology, gene expression patterns, and epigenetic chromatin statuses? Furthermore, how do cells of different phenotypes differentiate reproducibly from a single fertilized egg? These are fundamental problems in developmental biology. Epigenetic histone modifications play an important role in the maintenance of different cell phenotypes. The exact molecular mechanism for inheritance of the modification patterns over cell generations remains elusive. The complexity comes partly from the number of molecular species and the broad time scales involved. In recent years mathematical modeling has made significant contributions to elucidating the molecular mechanisms of DNA methylation and histone covalent modification inheritance. We will pedagogically introduce the typical procedure and some technical details of performing a mathematical modeling study, and discuss future developments. |
1608.06546 | Yuan Zhao | Yuan Zhao and Il Memming Park | Interpretable Nonlinear Dynamic Modeling of Neural Trajectories | Accepted by 29th Conference on Neural Information Processing Systems
(NIPS 2016) | null | null | null | q-bio.QM q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A central challenge in neuroscience is understanding how a neural system
implements computation through its dynamics. We propose a nonlinear time series
model aimed at characterizing interpretable dynamics from neural trajectories.
Our model assumes low-dimensional continuous dynamics in a finite volume. It
incorporates a prior assumption about globally contractional dynamics to avoid
overly enthusiastic extrapolation outside of the support of observed
trajectories. We show that our model can recover qualitative features of the
phase portrait such as attractors, slow points, and bifurcations, while also
producing reliable long-term future predictions in a variety of dynamical
models and in real neural data.
| [
{
"created": "Tue, 23 Aug 2016 15:27:24 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Oct 2016 19:52:19 GMT",
"version": "v2"
}
] | 2016-10-28 | [
[
"Zhao",
"Yuan",
""
],
[
"Park",
"Il Memming",
""
]
] | A central challenge in neuroscience is understanding how a neural system implements computation through its dynamics. We propose a nonlinear time series model aimed at characterizing interpretable dynamics from neural trajectories. Our model assumes low-dimensional continuous dynamics in a finite volume. It incorporates a prior assumption about globally contractional dynamics to avoid overly enthusiastic extrapolation outside of the support of observed trajectories. We show that our model can recover qualitative features of the phase portrait such as attractors, slow points, and bifurcations, while also producing reliable long-term future predictions in a variety of dynamical models and in real neural data. |
1405.4239 | Jan Karbowski | Jan Karbowski | What can a mathematician do in neuroscience? | Essay for prospective graduate students in "Computational
Neuroscience" | Mathematica Applicanda 40, 27-37 (2012) | 10.14708/ma.v40i1.277 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The mammalian brain is one of the most complex objects in the known universe, as
it governs every aspect of animal and human behavior. It is fair to say that
we have a very limited knowledge of how the brain operates and functions.
Computational Neuroscience is a scientific discipline that attempts to
understand and describe the brain in terms of mathematical modeling. This
user-friendly review tries to introduce this relatively new field to
mathematicians and physicists by showing examples of recent trends. It also
briefly discusses future prospects for constructing an integrated theory of
brain function.
| [
{
"created": "Fri, 16 May 2014 16:41:45 GMT",
"version": "v1"
}
] | 2014-05-19 | [
[
"Karbowski",
"Jan",
""
]
] | The mammalian brain is one of the most complex objects in the known universe, as it governs every aspect of animal and human behavior. It is fair to say that we have a very limited knowledge of how the brain operates and functions. Computational Neuroscience is a scientific discipline that attempts to understand and describe the brain in terms of mathematical modeling. This user-friendly review tries to introduce this relatively new field to mathematicians and physicists by showing examples of recent trends. It also briefly discusses future prospects for constructing an integrated theory of brain function. |
2405.18419 | Daniel Cooney | Daniel B. Cooney | Exploring the Evolution of Altruistic Punishment with a PDE Model of
Cultural Multilevel Selection | 79 pages, 17 figures v2; Updated version of Section 5 with corrected
versions of Figures 5.1 and 5.5, as well as new subsection and figure added
to describe multilevel dynamics for Tullock contest function (Figure 5.5 in
Section 5.3) | null | null | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two mechanisms that have been used to study the evolution of cooperative
behavior are altruistic punishment, in which cooperative individuals pay
additional costs to punish defection, and multilevel selection, in which
competition between groups can help to counteract individual-level incentives
to cheat. Boyd, Gintis, Bowles, and Richerson have used simulation models of
cultural evolution to suggest that altruistic punishment and pairwise
group-level competition can work in concert to promote cooperation, even when
neither mechanism can do so on its own. In this paper, we formulate a PDE model
for multilevel selection motivated by the approach of Boyd and coauthors,
modeling individual-level birth-death competition with a replicator equation
based on individual payoffs and describing group-level competition with
pairwise conflicts based on differences in the average payoffs of the competing
groups. Building off of existing PDE models for multilevel selection with
frequency-independent group-level competition, we use analytical and numerical
techniques to understand how the forms of individual and average payoffs can
impact the long-time ability to sustain altruistic punishment in
group-structured populations. We find several interesting differences between
the behavior of our new PDE model with pairwise group-level competition and
existing multilevel PDE models, including the observation that our new model
can feature a non-monotonic dependence of the long-time collective payoff on
the strength of altruistic punishment. Going forward, our PDE framework can
serve as a way to connect and compare disparate approaches for understanding
multilevel selection across the literature in evolutionary biology and
anthropology.
| [
{
"created": "Tue, 28 May 2024 17:57:40 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Jun 2024 07:34:25 GMT",
"version": "v2"
}
] | 2024-06-21 | [
[
"Cooney",
"Daniel B.",
""
]
] | Two mechanisms that have been used to study the evolution of cooperative behavior are altruistic punishment, in which cooperative individuals pay additional costs to punish defection, and multilevel selection, in which competition between groups can help to counteract individual-level incentives to cheat. Boyd, Gintis, Bowles, and Richerson have used simulation models of cultural evolution to suggest that altruistic punishment and pairwise group-level competition can work in concert to promote cooperation, even when neither mechanism can do so on its own. In this paper, we formulate a PDE model for multilevel selection motivated by the approach of Boyd and coauthors, modeling individual-level birth-death competition with a replicator equation based on individual payoffs and describing group-level competition with pairwise conflicts based on differences in the average payoffs of the competing groups. Building off of existing PDE models for multilevel selection with frequency-independent group-level competition, we use analytical and numerical techniques to understand how the forms of individual and average payoffs can impact the long-time ability to sustain altruistic punishment in group-structured populations. We find several interesting differences between the behavior of our new PDE model with pairwise group-level competition and existing multilevel PDE models, including the observation that our new model can feature a non-monotonic dependence of the long-time collective payoff on the strength of altruistic punishment. Going forward, our PDE framework can serve as a way to connect and compare disparate approaches for understanding multilevel selection across the literature in evolutionary biology and anthropology. |
0801.1022 | George Tsibidis | Nigel J. Burroughs, George D. Tsibidis, William Gaze and Liz
Wellington | Study Of Spatial Biological Systems Using a Graphical User Interface | null | Proceedings of the Tenth International Conference on
Human-Computer Interaction 2003, pp. 48-52 | null | null | q-bio.QM q-bio.OT | null | In this paper, we describe a Graphical User Interface (GUI) designed to
manage large quantities of image data of a biological system. After setting the
design requirements for the system, we developed an ecology quantification GUI
that assists biologists in analysing data. We focus on the main features of the
interface and we present the results and an evaluation of the system. Finally,
we provide some directions for future work.
| [
{
"created": "Mon, 7 Jan 2008 15:34:49 GMT",
"version": "v1"
}
] | 2008-01-08 | [
[
"Burroughs",
"Nigel J.",
""
],
[
"Tsibidis",
"George D.",
""
],
[
"Gaze",
"William",
""
],
[
"Wellington",
"Liz",
""
]
] | In this paper, we describe a Graphical User Interface (GUI) designed to manage large quantities of image data of a biological system. After setting the design requirements for the system, we developed an ecology quantification GUI that assists biologists in analysing data. We focus on the main features of the interface and we present the results and an evaluation of the system. Finally, we provide some directions for future work. |
2208.03143 | Amin Gasmi | Amin Gasmi (SOFNNA) | Deep Learning and Health Informatics for Smart Monitoring and Diagnosis | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The connection between the design and delivery of health care services using
information technology is known as health informatics. It involves data usage,
validation, and transfer of an integrated medical analysis using neural
networks of multi-layer deep learning techniques to analyze complex data. For
instance, Google incorporated ''DeepMind'' health mobile tool that integrates
\& leverages medical data needed to enhance professional healthcare delivery to
patients. Moorfield Eye Hospital London introduced DeepMind Research Algorithms
with dozens of retinal scans attributes while DeepMind UCL handled the
identification of cancerous tissues using CT \& MRI Scan tools. Atomise
analyzed drugs and chemicals with Deep Learning Neural Networks to identify
accurate pre-clinical prescriptions. Health informatics makes medical care
intelligent, interactive, cost-effective, and accessible; especially with DL
application tools for detecting the actual cause of diseases. The extensive use
of neural network tools leads to the expansion of different medical disciplines
which mitigates data complexity and enhances 3-4D overlap images using target
point label data detectors that support data augmentation, un-/semi-supervised
learning, multi-modality and transfer learning architecture. Health science
over the years focused on artificial intelligence tools for care delivery,
chronic care management, prevention/wellness, clinical supports, and diagnosis.
The outcome of their research leads to cardiac arrest diagnosis through Heart
Signal Computer-Aided Diagnostic tool (CADX) and other multifunctional deep
learning techniques that offer care, diagnosis \& treatment. Health informatics
provides monitored outcomes of human body organs through medical images that
classify interstitial lung disease and detect image nodules for reconstruction \&
tumor segmentation. The emergent medical research applications gave rise to
clinical-pathological human-level performing tools for handling Radiological,
Ophthalmological, and Dental diagnosis. This research will evaluate
methodologies, Deep learning architectures, approaches, bio-informatics,
specified function requirements, monitoring tools, ANN (artificial neural
network), data labeling \& annotation algorithms that control data validation,
modeling, and diagnosis of different diseases using smart monitoring health
informatics applications.
| [
{
"created": "Fri, 5 Aug 2022 13:07:59 GMT",
"version": "v1"
}
] | 2022-08-08 | [
[
"Gasmi",
"Amin",
"",
"SOFNNA"
]
] | The connection between the design and delivery of health care services using information technology is known as health informatics. It involves data usage, validation, and transfer of an integrated medical analysis using neural networks of multi-layer deep learning techniques to analyze complex data. For instance, Google incorporated ''DeepMind'' health mobile tool that integrates \& leverages medical data needed to enhance professional healthcare delivery to patients. Moorfield Eye Hospital London introduced DeepMind Research Algorithms with dozens of retinal scans attributes while DeepMind UCL handled the identification of cancerous tissues using CT \& MRI Scan tools. Atomise analyzed drugs and chemicals with Deep Learning Neural Networks to identify accurate pre-clinical prescriptions. Health informatics makes medical care intelligent, interactive, cost-effective, and accessible; especially with DL application tools for detecting the actual cause of diseases. The extensive use of neural network tools leads to the expansion of different medical disciplines which mitigates data complexity and enhances 3-4D overlap images using target point label data detectors that support data augmentation, un-/semi-supervised learning, multi-modality and transfer learning architecture. Health science over the years focused on artificial intelligence tools for care delivery, chronic care management, prevention/wellness, clinical supports, and diagnosis. The outcome of their research leads to cardiac arrest diagnosis through Heart Signal Computer-Aided Diagnostic tool (CADX) and other multifunctional deep learning techniques that offer care, diagnosis \& treatment. Health informatics provides monitored outcomes of human body organs through medical images that classify interstitial lung disease and detect image nodules for reconstruction \& tumor segmentation. 
The emergent medical research applications gave rise to clinical-pathological human-level performing tools for handling Radiological, Ophthalmological, and Dental diagnosis. This research will evaluate methodologies, Deep learning architectures, approaches, bio-informatics, specified function requirements, monitoring tools, ANN (artificial neural network), data labeling \& annotation algorithms that control data validation, modeling, and diagnosis of different diseases using smart monitoring health informatics applications. |
1209.0725 | Stuart Borrett Stuart Borrett | Stuart R. Borrett | Throughflow centrality is a global indicator of the functional
importance of species in ecosystems | 7 figures, 2 tables | 2013. Ecological Indicators 32:182-196 | 10.1016/j.ecolind.2013.03.014 | null | q-bio.QM physics.soc-ph q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To better understand and manage complex systems like ecosystems it is
critical to know the relative contribution of system components to system
functioning. Ecologists and social scientists have described many ways that
individuals can be important. This paper makes two key contributions to this
research area. First, it shows that throughflow, the total energy-matter
entering or exiting a system component, is a global indicator of the relative
contribution of the component to the whole system activity. It is global
because it includes the direct and indirect exchanges among community members.
Further, throughflow is a special case of Hubbell status as defined in social
science. This recognition effectively joins the concepts, enabling ecologists
to use and build on the broader centrality research in network science. Second,
I characterize the distribution of throughflow in 45 empirically-based trophic
ecosystem models. Consistent with expectations, this analysis shows that a
small fraction of the system components are responsible for the majority of the
system activity. In 73% of the ecosystem models, 20% or less of the nodes
generate 80% or more of the total system throughflow. Four or fewer dominant
nodes are required to account for 50% of the total system activity. 121 of the
130 dominant nodes in the 45 ecosystem models could be classified as primary
producers, dead organic matter, or bacteria. Thus, throughflow centrality
indicates the rank power of the ecosystem's components and shows the power
concentration in the primary production and decomposition cycle. Although these
results are specific to ecosystems, these techniques build on flow analysis
based on economic input-output analysis. Therefore these results should be
useful for ecosystem ecology, industrial ecology, the study of urban
metabolism, as well as other domains using input-output analysis.
| [
{
"created": "Tue, 4 Sep 2012 18:41:13 GMT",
"version": "v1"
}
] | 2013-04-24 | [
[
"Borrett",
"Stuart R.",
""
]
] | To better understand and manage complex systems like ecosystems it is critical to know the relative contribution of system components to system functioning. Ecologists and social scientists have described many ways that individuals can be important. This paper makes two key contributions to this research area. First, it shows that throughflow, the total energy-matter entering or exiting a system component, is a global indicator of the relative contribution of the component to the whole system activity. It is global because it includes the direct and indirect exchanges among community members. Further, throughflow is a special case of Hubbell status as defined in social science. This recognition effectively joins the concepts, enabling ecologists to use and build on the broader centrality research in network science. Second, I characterize the distribution of throughflow in 45 empirically-based trophic ecosystem models. Consistent with expectations, this analysis shows that a small fraction of the system components are responsible for the majority of the system activity. In 73% of the ecosystem models, 20% or less of the nodes generate 80% or more of the total system throughflow. Four or fewer dominant nodes are required to account for 50% of the total system activity. 121 of the 130 dominant nodes in the 45 ecosystem models could be classified as primary producers, dead organic matter, or bacteria. Thus, throughflow centrality indicates the rank power of the ecosystem's components and shows the power concentration in the primary production and decomposition cycle. Although these results are specific to ecosystems, these techniques build on flow analysis based on economic input-output analysis. Therefore these results should be useful for ecosystem ecology, industrial ecology, the study of urban metabolism, as well as other domains using input-output analysis. |
2401.16533 | Parisa Ahmadi Ghomroudi | Parisa Ahmadi Ghomroudi, Roma Siugzdaite, Irene Messina, Alessandro
Grecucci | Resting-State fingerprints of Acceptance and Reappraisal. The role of
Sensorimotor, Executive and Affective networks | 33 pages, 6 figures, 3 tables | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Acceptance and Reappraisal are considered adaptive emotion regulation
strategies. While previous studies have explored the neural underpinnings of
these strategies using task-based fMRI and sMRI, a gap exists in the literature
concerning resting-state functional brain networks' contributions to these
abilities, especially regarding Acceptance. Another intriguing question
is whether these strategies rely on similar or different neural mechanisms.
Building on the well-known improved emotion regulation and increased cognitive
flexibility of individuals who rely on acceptance, we expected to find
decreased activity inside the Affective network and increased activity inside
the Executive and Sensorimotor networks to be predictive of acceptance. We also
expect that these networks may be associated at least in part with Reappraisal,
indicating a common mechanism behind different strategies. To test these
hypotheses, we conducted a functional connectivity analysis of resting-state
data from 134 individuals (95 females). To assess acceptance and reappraisal
abilities, we used the Cognitive Emotion Regulation Questionnaire (CERQ) and a
group-ICA unsupervised machine learning approach to identify resting state
networks. Subsequently, we conducted backward regression to predict acceptance
and reappraisal abilities. As expected, results indicated that acceptance was
predicted by decreased Affective and increased Executive and Sensorimotor
networks, while reappraisal was predicted by an increase in the Sensorimotor
network. Notably, these findings suggest both distinct and overlapping brain
contributions to acceptance and reappraisal, with the Sensorimotor network
potentially serving as a core common mechanism. These results not only align
with previous findings but also expand upon them, demonstrating the complex
interplay of cognitive, affective, and sensory abilities in emotion regulation.
| [
{
"created": "Mon, 29 Jan 2024 20:14:08 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Feb 2024 17:37:31 GMT",
"version": "v2"
}
] | 2024-02-14 | [
[
"Ghomroudi",
"Parisa Ahmadi",
""
],
[
"Siugzdaite",
"Roma",
""
],
[
"Messina",
"Irene",
""
],
[
"Grecucci",
"Alessandro",
""
]
] | Acceptance and Reappraisal are considered adaptive emotion regulation strategies. While previous studies have explored the neural underpinnings of these strategies using task-based fMRI and sMRI, a gap exists in the literature concerning resting-state functional brain networks' contributions to these abilities, especially regarding Acceptance. Another intriguing question is whether these strategies rely on similar or different neural mechanisms. Building on the well-known improved emotion regulation and increased cognitive flexibility of individuals who rely on acceptance, we expected to find decreased activity inside the Affective network and increased activity inside the Executive and Sensorimotor networks to be predictive of acceptance. We also expect that these networks may be associated at least in part with Reappraisal, indicating a common mechanism behind different strategies. To test these hypotheses, we conducted a functional connectivity analysis of resting-state data from 134 individuals (95 females). To assess acceptance and reappraisal abilities, we used the Cognitive Emotion Regulation Questionnaire (CERQ) and a group-ICA unsupervised machine learning approach to identify resting state networks. Subsequently, we conducted backward regression to predict acceptance and reappraisal abilities. As expected, results indicated that acceptance was predicted by decreased Affective and increased Executive and Sensorimotor networks, while reappraisal was predicted by an increase in the Sensorimotor network. Notably, these findings suggest both distinct and overlapping brain contributions to acceptance and reappraisal, with the Sensorimotor network potentially serving as a core common mechanism. These results not only align with previous findings but also expand upon them, demonstrating the complex interplay of cognitive, affective, and sensory abilities in emotion regulation. |
2309.02148 | Tobias Paul | Tobias Paul | The canonical equation of adaptive dynamics in individual-based models
with power law mutation rates | 19 pages, 8 Figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we consider an individual-based model with power law mutation
probability. In this setting, we use the large population limit with a
subsequent ``small mutations'' limit to derive the canonical equation of
adaptive dynamics. For a one-dimensional trait space this corresponds to well
established results and we can formulate a criterion for evolutionary branching
in the spirit of Champagnat and M\'el\'eard (2011). In higher dimensional trait
spaces, we find that the speed at which the solution of the canonical equation
moves through space is reduced due to mutations being restricted to the
underlying grid on the trait space. However, as opposed to the canonical
equation with rare mutations, we can explicitly calculate the path which the
dominant trait will take without having to solve the equation itself.
| [
{
"created": "Tue, 5 Sep 2023 11:39:30 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Feb 2024 12:23:38 GMT",
"version": "v2"
}
] | 2024-02-14 | [
[
"Paul",
"Tobias",
""
]
] | In this paper, we consider an individual-based model with power law mutation probability. In this setting, we use the large population limit with a subsequent ``small mutations'' limit to derive the canonical equation of adaptive dynamics. For a one-dimensional trait space this corresponds to well established results and we can formulate a criterion for evolutionary branching in the spirit of Champagnat and M\'el\'eard (2011). In higher dimensional trait spaces, we find that the speed at which the solution of the canonical equation moves through space is reduced due to mutations being restricted to the underlying grid on the trait space. However, as opposed to the canonical equation with rare mutations, we can explicitly calculate the path which the dominant trait will take without having to solve the equation itself. |
1912.01507 | Yao Li | Wenjie Li and Yao Li | Entropy, mutual information, and systematic measures of structured
spiking neural networks | null | null | null | null | q-bio.NC math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to investigate various information-theoretic
measures, including entropy, mutual information, and some systematic measures
that are based on mutual information, for a class of structured spiking neuronal
networks. In order to analyze and compute these information-theoretic measures
for large networks, we coarse-grained the data by ignoring the order of spikes
that fall into the same small time bin. The resultant coarse-grained entropy
mainly captures the information contained in the rhythm produced by a local
population of the network. We first proved that these information-theoretic
measures are well-defined and computable by proving the stochastic stability
and the law of large numbers. Then we use three neuronal network examples, from
simple to complex, to investigate these information-theoretic measures. Several
analytical and computational results about properties of these
information-theoretic measures are given.
| [
{
"created": "Wed, 27 Nov 2019 02:31:04 GMT",
"version": "v1"
}
] | 2019-12-04 | [
[
"Li",
"Wenjie",
""
],
[
"Li",
"Yao",
""
]
] | The aim of this paper is to investigate various information-theoretic measures, including entropy, mutual information, and some systematic measures that are based on mutual information, for a class of structured spiking neuronal networks. In order to analyze and compute these information-theoretic measures for large networks, we coarse-grained the data by ignoring the order of spikes that fall into the same small time bin. The resultant coarse-grained entropy mainly captures the information contained in the rhythm produced by a local population of the network. We first proved that these information-theoretic measures are well-defined and computable by proving the stochastic stability and the law of large numbers. Then we use three neuronal network examples, from simple to complex, to investigate these information-theoretic measures. Several analytical and computational results about properties of these information-theoretic measures are given. |
1304.2301 | Luis Jover | Luis F. Jover, Michael H. Cortez, Joshua S. Weitz | Mechanisms of Multi-strain Coexistence in Host-Phage Systems with Nested
Infection Networks | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Bacteria and their viruses ("bacteriophages") coexist in natural environments
forming complex infection networks. Recent empirical findings suggest that
phage-bacteria infection networks often possess a nested structure such that
there is a hierarchical relationship among who can infect whom. Here we
consider how nested infection networks may affect phage and bacteria dynamics
using a multi-type Lotka-Volterra framework with cross-infection. Analyses of
similar models have, in the past, assumed simpler interaction structures as a
first step towards tractability. We solve the proposed model, finding trade-off
conditions on the life-history traits of both bacteria and viruses that allow
coexistence in communities with nested infection networks. First, we find that
bacterial growth rate should decrease with increasing defense against
infection. Second, we find that the efficiency of viral infection should
decrease with host range. Next, we establish a relationship between relative
densities and the curvature of life history trade-offs. We compare and contrast
the current findings to the "Kill-the-Winner" model of multi-species
phage-bacteria communities. Finally, we discuss a suite of testable hypotheses
stemming from the current model concerning relationships between infection
range, life history traits and coexistence in complex phage-bacteria
communities.
| [
{
"created": "Mon, 8 Apr 2013 18:32:56 GMT",
"version": "v1"
}
] | 2013-04-09 | [
[
"Jover",
"Luis F.",
""
],
[
"Cortez",
"Michael H.",
""
],
[
"Weitz",
"Joshua S.",
""
]
] | Bacteria and their viruses ("bacteriophages") coexist in natural environments forming complex infection networks. Recent empirical findings suggest that phage-bacteria infection networks often possess a nested structure such that there is a hierarchical relationship among who can infect whom. Here we consider how nested infection networks may affect phage and bacteria dynamics using a multi-type Lotka-Volterra framework with cross-infection. Analyses of similar models have, in the past, assumed simpler interaction structures as a first step towards tractability. We solve the proposed model, finding trade-off conditions on the life-history traits of both bacteria and viruses that allow coexistence in communities with nested infection networks. First, we find that bacterial growth rate should decrease with increasing defense against infection. Second, we find that the efficiency of viral infection should decrease with host range. Next, we establish a relationship between relative densities and the curvature of life history trade-offs. We compare and contrast the current findings to the "Kill-the-Winner" model of multi-species phage-bacteria communities. Finally, we discuss a suite of testable hypotheses stemming from the current model concerning relationships between infection range, life history traits and coexistence in complex phage-bacteria communities. |
0901.1675 | Mircea Andrecut Dr | M. Andrecut, D. Foster, H. Carteret and S. A. Kauffman | Maximal Information Transfer and Behavior Diversity in Random Threshold
Networks | 14 pages, 4 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Random Threshold Networks (RTNs) are an idealized model of diluted,
non-symmetric spin glasses, neural networks or gene regulatory networks. RTNs also
serve as an interesting general example of any coordinated causal system. Here
we study the conditions for maximal information transfer and behavior diversity
in RTNs. These conditions are likely to play a major role in physical and
biological systems, perhaps serving as important selective traits in biological
systems. We show that the pairwise mutual information is maximized in
dynamically critical networks. Also, we show that the correlated behavior
diversity is maximized for slightly chaotic networks, close to the critical
region. Importantly, critical networks maximize coordinated, diverse dynamical
behavior across the network and across time: the information transmission
between source and receiver nodes and the diversity of dynamical behaviors,
when measured with a time delay between the source and receiver, are maximized
for critical networks.
| [
{
"created": "Mon, 12 Jan 2009 21:52:47 GMT",
"version": "v1"
}
] | 2009-01-14 | [
[
"Andrecut",
"M.",
""
],
[
"Foster",
"D.",
""
],
[
"Carteret",
"H.",
""
],
[
"Kauffman",
"S. A.",
""
]
] | Random Threshold Networks (RTNs) are an idealized model of diluted, non-symmetric spin glasses, neural networks or gene regulatory networks. RTNs also serve as an interesting general example of any coordinated causal system. Here we study the conditions for maximal information transfer and behavior diversity in RTNs. These conditions are likely to play a major role in physical and biological systems, perhaps serving as important selective traits in biological systems. We show that the pairwise mutual information is maximized in dynamically critical networks. Also, we show that the correlated behavior diversity is maximized for slightly chaotic networks, close to the critical region. Importantly, critical networks maximize coordinated, diverse dynamical behavior across the network and across time: the information transmission between source and receiver nodes and the diversity of dynamical behaviors, when measured with a time delay between the source and receiver, are maximized for critical networks. |
2301.07455 | Kirubeswaran O.R | O.R. Kirubeswaran, Katherine R. Storrs | Inconsistent illusory motion in predictive coding deep neural networks | Published in vision research
(https://doi.org/10.1016/j.visres.2023.108195) | null | 10.1016/j.visres.2023.108195 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Why do we perceive illusory motion in some static images? Several accounts
have been proposed based on eye movements, response latencies to different
image elements, or interactions between image patterns and motion energy
detectors. Recently, PredNet, a recurrent deep neural network (DNN) based on
predictive coding principles, was reported to reproduce the "Rotating Snakes"
illusion, suggesting a role for predictive coding in illusory motion. We
replicate this finding and then use a series of "in silico psychophysics"
experiments to examine whether PredNet behaves consistently with human
observers for simplified variants of the illusory stimuli. We also measure
response latencies to individual elements of the Rotating Snakes pattern by
probing internal units in the network. A pretrained PredNet model predicted
illusory motion for all subcomponents of the Rotating Snakes stimulus,
consistent with human observers. However, we found no simple response delays in
internal units, as found in physiological data. The PredNet model's detection
of motion in gradients was based on contrast, not luminance, as it is in human
perception. Finally, we tested the robustness of the illusion on 10 identical
PredNets trained on the same video data; we found a large variation in the
ability of the network to reproduce the illusion and predict motion for
simplified variants of the illusion. Also, unlike human observers, none of the
networks predicted illusory motion for greyscale variants of the pattern. Even
when a DNN successfully reproduces some idiosyncrasy of human vision, a more
detailed investigation can reveal inconsistencies between humans and the
network and between different instances of the same network. The inconsistency
of the Rotating Snakes illusion in PredNets trained from different
initializations suggests that predictive coding does not reliably lead to
human-like illusory motion.
| [
{
"created": "Wed, 18 Jan 2023 11:56:24 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Feb 2023 05:28:34 GMT",
"version": "v2"
}
] | 2023-02-23 | [
[
"Kirubeswaran",
"O. R.",
""
],
[
"Storrs",
"Katherine R.",
""
]
] | Why do we perceive illusory motion in some static images? Several accounts have been proposed based on eye movements, response latencies to different image elements, or interactions between image patterns and motion energy detectors. Recently, PredNet, a recurrent deep neural network (DNN) based on predictive coding principles, was reported to reproduce the "Rotating Snakes" illusion, suggesting a role for predictive coding in illusory motion. We replicate this finding and then use a series of "in silico psychophysics" experiments to examine whether PredNet behaves consistently with human observers for simplified variants of the illusory stimuli. We also measure response latencies to individual elements of the Rotating Snakes pattern by probing internal units in the network. A pretrained PredNet model predicted illusory motion for all subcomponents of the Rotating Snakes stimulus, consistent with human observers. However, we found no simple response delays in internal units, as found in physiological data. The PredNet model's detection of motion in gradients was based on contrast, not luminance, as it is in human perception. Finally, we tested the robustness of the illusion on 10 identical PredNets trained on the same video data; we found a large variation in the ability of the network to reproduce the illusion and predict motion for simplified variants of the illusion. Also, unlike human observers, none of the networks predicted illusory motion for greyscale variants of the pattern. Even when a DNN successfully reproduces some idiosyncrasy of human vision, a more detailed investigation can reveal inconsistencies between humans and the network and between different instances of the same network. The inconsistency of the Rotating Snakes illusion in PredNets trained from different initializations suggests that predictive coding does not reliably lead to human-like illusory motion. |
2106.09594 | Andrew Eckford | Alexander S. Moffett, Guiying Cui, Peter J. Thomas, William D. Hunt,
Nael A. McCarty, Ryan S. Westafer, Andrew W. Eckford | A factor graph EM algorithm for inference of kinetic microstates from
patch clamp measurements | null | null | null | null | q-bio.QM eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We derive a factor graph EM (FGEM) algorithm, a technique that permits
combined parameter estimation and statistical inference, to determine hidden
kinetic microstates from patch clamp measurements. Using the cystic fibrosis
transmembrane conductance regulator (CFTR) and nicotinic acetylcholine receptor
(nAChR) as examples, we perform {\em Monte Carlo} simulations to demonstrate
the performance of the algorithm. We show that the performance, measured in
terms of the probability of estimation error, approaches the theoretical
performance limit of maximum {\em a posteriori} estimation. Moreover, the
algorithm provides a reliability score for its estimates, and we demonstrate
that the score can be used to further improve the performance of estimation. We
use the algorithm to estimate hidden kinetic states in lab-obtained CFTR single
channel patch clamp traces.
| [
{
"created": "Thu, 17 Jun 2021 15:22:42 GMT",
"version": "v1"
}
] | 2021-06-18 | [
[
"Moffett",
"Alexander S.",
""
],
[
"Cui",
"Guiying",
""
],
[
"Thomas",
"Peter J.",
""
],
[
"Hunt",
"William D.",
""
],
[
"McCarty",
"Nael A.",
""
],
[
"Westafer",
"Ryan S.",
""
],
[
"Eckford",
"Andrew W.",
""
]
] | We derive a factor graph EM (FGEM) algorithm, a technique that permits combined parameter estimation and statistical inference, to determine hidden kinetic microstates from patch clamp measurements. Using the cystic fibrosis transmembrane conductance regulator (CFTR) and nicotinic acetylcholine receptor (nAChR) as examples, we perform {\em Monte Carlo} simulations to demonstrate the performance of the algorithm. We show that the performance, measured in terms of the probability of estimation error, approaches the theoretical performance limit of maximum {\em a posteriori} estimation. Moreover, the algorithm provides a reliability score for its estimates, and we demonstrate that the score can be used to further improve the performance of estimation. We use the algorithm to estimate hidden kinetic states in lab-obtained CFTR single channel patch clamp traces. |
1804.07387 | Kyongsik Yun | Kyongsik Yun, Juhee Lee, Jaekyu Choi, In-Uk Song, Yong-An Chung | Smartphone-based point-of-care lipid blood test performance evaluation
compared with a clinical diagnostic laboratory method | null | null | null | null | q-bio.QM q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Managing blood lipid levels is important for the treatment and prevention of
diabetes, cardiovascular disease, and obesity. An easy-to-use, portable lipid
blood test will accelerate more frequent testing by patients and at-risk
populations. We used smartphone systems that are already familiar to many
people. Because smartphone systems can be carried around everywhere, blood can
be measured easily and frequently. We compared the results of lipid tests with
those of existing clinical diagnostic laboratory methods. We found that
smartphone-based point-of-care lipid blood tests are as accurate as
hospital-grade laboratory tests. Our system will be useful for those who need
to manage blood lipid levels to motivate them to track and control their
behavior.
| [
{
"created": "Thu, 19 Apr 2018 21:48:58 GMT",
"version": "v1"
}
] | 2018-04-23 | [
[
"Yun",
"Kyongsik",
""
],
[
"Lee",
"Juhee",
""
],
[
"Choi",
"Jaekyu",
""
],
[
"Song",
"In-Uk",
""
],
[
"Chung",
"Yong-An",
""
]
] | Managing blood lipid levels is important for the treatment and prevention of diabetes, cardiovascular disease, and obesity. An easy-to-use, portable lipid blood test will accelerate more frequent testing by patients and at-risk populations. We used smartphone systems that are already familiar to many people. Because smartphone systems can be carried around everywhere, blood can be measured easily and frequently. We compared the results of lipid tests with those of existing clinical diagnostic laboratory methods. We found that smartphone-based point-of-care lipid blood tests are as accurate as hospital-grade laboratory tests. Our system will be useful for those who need to manage blood lipid levels to motivate them to track and control their behavior. |
2308.10840 | Vimal William | Vimal W and Akshansh Gupta | Deep Learning Architecture for Motor Imaged Words | null | null | null | null | q-bio.NC eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The notion of a Brain-Computer Interface system is the acquisition of signals
from the brain, processing them, and translating them into commands. The study
concentrated on a specific sort of brain signal known as Motor Imagery EEG
signals, which are activated in the brain without any external stimulus of the
needed motor activities in relation to the signal. The signals are further
processed using complicated signal processing methods such as wavelet-based
denoising and Independent Component Analysis (ICA) based dimensionality
reduction approach. To extract the characteristics from the processed data,
both signal processing techniques such as Short-Time Fourier Transforms (STFT)
and a probabilistic approach such as Gramian Angular Field theory are used.
Furthermore, the gathered feature signals are analyzed and converted into
noteworthy commands by Deep Learning algorithms, which can be a mix of
complicated Deep Learning algorithm families such as CNN and RNN. The weights
of the model trained on a particular subject are further reused for multiple
subjects, which results in an elevated accuracy rate in translating the Motor
Imagery EEG signals into the relevant motor actions.
| [
{
"created": "Tue, 8 Aug 2023 12:07:07 GMT",
"version": "v1"
}
] | 2023-08-22 | [
[
"W",
"Vimal",
""
],
[
"Gupta",
"Akshansh",
""
]
] | The notion of a Brain-Computer Interface system is the acquisition of signals from the brain, processing them, and translating them into commands. The study concentrated on a specific sort of brain signal known as Motor Imagery EEG signals, which are activated in the brain without any external stimulus of the needed motor activities in relation to the signal. The signals are further processed using complicated signal processing methods such as wavelet-based denoising and Independent Component Analysis (ICA) based dimensionality reduction approach. To extract the characteristics from the processed data, both signal processing techniques such as Short-Time Fourier Transforms (STFT) and a probabilistic approach such as Gramian Angular Field theory are used. Furthermore, the gathered feature signals are analyzed and converted into noteworthy commands by Deep Learning algorithms, which can be a mix of complicated Deep Learning algorithm families such as CNN and RNN. The weights of the model trained on a particular subject are further reused for multiple subjects, which results in an elevated accuracy rate in translating the Motor Imagery EEG signals into the relevant motor actions. |
2003.04398 | Alexandru Hening | Alexandru Hening and Yao Li | Stationary distributions of persistent ecological systems | 48 pages, 14 figures | null | null | null | q-bio.PE cs.NA math.NA math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We analyze ecological systems that are influenced by random environmental
fluctuations. We first provide general conditions which ensure that the species
coexist and the system converges to a unique invariant probability measure
(stationary distribution). Since it is usually impossible to characterize this
invariant probability measure analytically, we develop a powerful method for
numerically approximating invariant probability measures. This allows us to
shed light upon how the various parameters of the ecosystem impact the
stationary distribution.
We analyze different types of environmental fluctuations. At first we study
ecosystems modeled by stochastic differential equations. In the second setting
we look at piecewise deterministic Markov processes. These are processes where
one follows a system of differential equations for a random time, after which
the environmental state changes, and one follows a different set of
differential equations -- this procedure then gets repeated indefinitely.
Finally, we look at stochastic differential equations with switching, which
take into account both the white noise fluctuations and the random
environmental switches.
As applications of our theoretical and numerical analysis, we look at
competitive Lotka--Volterra, Beddington-DeAngelis predator-prey, and
rock-paper-scissors dynamics. We highlight new biological insights by analyzing
the stationary distributions of the ecosystems and by seeing how various types
of environmental fluctuations influence the long term fate of populations.
| [
{
"created": "Mon, 9 Mar 2020 20:24:36 GMT",
"version": "v1"
},
{
"created": "Tue, 18 May 2021 14:03:46 GMT",
"version": "v2"
}
] | 2021-05-19 | [
[
"Hening",
"Alexandru",
""
],
[
"Li",
"Yao",
""
]
] | We analyze ecological systems that are influenced by random environmental fluctuations. We first provide general conditions which ensure that the species coexist and the system converges to a unique invariant probability measure (stationary distribution). Since it is usually impossible to characterize this invariant probability measure analytically, we develop a powerful method for numerically approximating invariant probability measures. This allows us to shed light upon how the various parameters of the ecosystem impact the stationary distribution. We analyze different types of environmental fluctuations. At first we study ecosystems modeled by stochastic differential equations. In the second setting we look at piecewise deterministic Markov processes. These are processes where one follows a system of differential equations for a random time, after which the environmental state changes, and one follows a different set of differential equations -- this procedure then gets repeated indefinitely. Finally, we look at stochastic differential equations with switching, which take into account both the white noise fluctuations and the random environmental switches. As applications of our theoretical and numerical analysis, we look at competitive Lotka--Volterra, Beddington-DeAngelis predator-prey, and rock-paper-scissors dynamics. We highlight new biological insights by analyzing the stationary distributions of the ecosystems and by seeing how various types of environmental fluctuations influence the long term fate of populations. |
2407.20992 | Mattia Miotto | Greta Grassmann, Lorenzo Di Rienzo, Giancarlo Ruocco, Mattia Miotto,
Edoardo Milanetti | Compact assessment of molecular surface complementarities enhances
neural network-aided prediction of key binding residues | 15 pages, 5 figures, 1 table | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Predicting interactions between biomolecules, such as protein-protein
complexes, remains a challenging problem. Despite the many advancements made so
far, the performance of docking protocols depends heavily on their
capability to identify binding regions. In this context, we present a novel
approach that builds upon our previous works modeling protein surface patches
via sets of orthogonal polynomials to identify regions of high
shape/electrostatic complementarity. By incorporating another key binding
property, such as the balance between hydrophilic and hydrophobic
contributions, we define new binding matrices that serve as effective inputs
for training a neural network. Our approach also allows for the quantitative
definition of a typical binding site area - approximately 10\AA~in radius -
where hydrophobic contribution and shape complementarity, which reflects the
Lennard-Jones interaction, are maximized. Using this new architecture, CIRNet
(Core Interacting Residues Network), we achieve an accuracy of approximately
0.82 in identifying pairs of core interacting residues on a balanced dataset.
In a blind search for core interacting residues, CIRNet distinguishes these
from decoys with a ROC AUC of 0.72. This protocol can enhance docking
algorithms by rescaling the proposed poses. When applied to the top ten models
from three popular docking servers, CIRNet improves docking outcomes, reducing
the average RMSD between the refined poses and the native state by up to
58%.
| [
{
"created": "Tue, 30 Jul 2024 17:28:19 GMT",
"version": "v1"
}
] | 2024-07-31 | [
[
"Grassmann",
"Greta",
""
],
[
"Di Rienzo",
"Lorenzo",
""
],
[
"Ruocco",
"Giancarlo",
""
],
[
"Miotto",
"Mattia",
""
],
[
"Milanetti",
"Edoardo",
""
]
] | Predicting interactions between biomolecules, such as protein-protein complexes, remains a challenging problem. Despite the many advancements made so far, the performance of docking protocols depends heavily on their capability to identify binding regions. In this context, we present a novel approach that builds upon our previous works modeling protein surface patches via sets of orthogonal polynomials to identify regions of high shape/electrostatic complementarity. By incorporating another key binding property, such as the balance between hydrophilic and hydrophobic contributions, we define new binding matrices that serve as effective inputs for training a neural network. Our approach also allows for the quantitative definition of a typical binding site area - approximately 10\AA~in radius - where hydrophobic contribution and shape complementarity, which reflects the Lennard-Jones interaction, are maximized. Using this new architecture, CIRNet (Core Interacting Residues Network), we achieve an accuracy of approximately 0.82 in identifying pairs of core interacting residues on a balanced dataset. In a blind search for core interacting residues, CIRNet distinguishes these from decoys with a ROC AUC of 0.72. This protocol can enhance docking algorithms by rescaling the proposed poses. When applied to the top ten models from three popular docking servers, CIRNet improves docking outcomes, reducing the average RMSD between the refined poses and the native state by up to 58%. |
q-bio/0604011 | Ashok Palaniappan | Ashok Palaniappan | A Surprising Clarification of the Mechanism of Ion-channel
Voltage-Gating | Indication of alternative mechanisms | null | null | null | q-bio.BM q-bio.NC q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An intense controversy has surrounded the mechanism of voltage-gating in ion
channels. We interpreted the two leading models of voltage-gating with respect
to the thermodynamic energetics of membrane insertion of the voltage-sensing
'module' from a comprehensive set of potassium channels. KvAP is an archaeal
voltage-gated potassium channel whose x-ray structure was the basis for
determining the general mechanism of voltage-gating. The free energy of
membrane insertion of the KvAP voltage sensor was revealed to be a single
outlier. This was due to its unusual sequence that facilitated large gating
movements in its native lipid membrane. This degree of free energy was the
least typical of the other voltage sensors, including the Shaker potassium
channel. We inferred that the two leading models of voltage-gating referred to
alternative mechanisms of voltage-gating: each is applicable to an independent
set of ion channels. The large motion of the voltage-sensor during gating
proposed by the KvAP-paddle model of gating is unlikely to be mirrored by the
majority of ion channels whose voltage sensors are not located at the
membrane-cytoplasm interface in the channel closed state.
| [
{
"created": "Sun, 9 Apr 2006 07:19:50 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Sep 2006 06:44:50 GMT",
"version": "v2"
},
{
"created": "Mon, 3 Sep 2007 07:04:02 GMT",
"version": "v3"
},
{
"created": "Fri, 28 Sep 2007 10:34:22 GMT",
"version": "v4"
},
{
"created": "Wed, 20 Apr 2011 07:23:30 GMT",
"version": "v5"
}
] | 2011-04-21 | [
[
"Palaniappan",
"Ashok",
""
]
] | An intense controversy has surrounded the mechanism of voltage-gating in ion channels. We interpreted the two leading models of voltage-gating with respect to the thermodynamic energetics of membrane insertion of the voltage-sensing 'module' from a comprehensive set of potassium channels. KvAP is an archaeal voltage-gated potassium channel whose x-ray structure was the basis for determining the general mechanism of voltage-gating. The free energy of membrane insertion of the KvAP voltage sensor was revealed to be a single outlier. This was due to its unusual sequence that facilitated large gating movements in its native lipid membrane. This degree of free energy was the least typical of the other voltage sensors, including the Shaker potassium channel. We inferred that the two leading models of voltage-gating referred to alternative mechanisms of voltage-gating: each is applicable to an independent set of ion channels. The large motion of the voltage-sensor during gating proposed by the KvAP-paddle model of gating is unlikely to be mirrored by the majority of ion channels whose voltage sensors are not located at the membrane-cytoplasm interface in the channel closed state. |
2009.01901 | Joseph Vallino | Joseph J. Vallino and Ioannis Tsakalakis | Phytoplankton temporal strategies increase entropy production in a
marine food web model | 39 pp. including Supplementary Material, 6 Figures | Entropy 2020, 22(11), 1249, 25 pp | 10.3390/e22111249 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We develop a trait-based model founded on the hypothesis that biological
systems evolve and organize to maximize entropy production by dissipating
chemical and electromagnetic potentials over longer time scales than abiotic
processes by implementing temporal strategies. A marine food web consisting of
phytoplankton, bacteria and consumer functional groups is used to explore how
temporal strategies, or the lack thereof, change entropy production in a
shallow pond that receives a continuous flow of reduced organic carbon plus
inorganic nitrogen and illumination from solar radiation with diel and seasonal
dynamics. Results show that a temporal strategy that employs an explicit
circadian clock produces more entropy than a passive strategy that uses
internal carbon storage or a balanced growth strategy that requires
phytoplankton to grow with fixed stoichiometry. When the community is forced to
operate at high specific growth rates near 2 d-1, the optimization-guided model
selects for phytoplankton ecotypes that exhibit complementarity for winter versus
summer environmental conditions to increase entropy production. We also present
a new type of trait-based modeling where trait values are determined by
maximizing entropy production rather than by random selection.
| [
{
"created": "Thu, 3 Sep 2020 19:42:38 GMT",
"version": "v1"
}
] | 2020-11-17 | [
[
"Vallino",
"Joseph J.",
""
],
[
"Tsakalakis",
"Ioannis",
""
]
] | We develop a trait-based model founded on the hypothesis that biological systems evolve and organize to maximize entropy production by dissipating chemical and electromagnetic potentials over longer time scales than abiotic processes by implementing temporal strategies. A marine food web consisting of phytoplankton, bacteria and consumer functional groups is used to explore how temporal strategies, or the lack thereof, change entropy production in a shallow pond that receives a continuous flow of reduced organic carbon plus inorganic nitrogen and illumination from solar radiation with diel and seasonal dynamics. Results show that a temporal strategy that employs an explicit circadian clock produces more entropy than a passive strategy that uses internal carbon storage or a balanced growth strategy that requires phytoplankton to grow with fixed stoichiometry. When the community is forced to operate at high specific growth rates near 2 d-1, the optimization-guided model selects for phytoplankton ecotypes that exhibit complementarity for winter versus summer environmental conditions to increase entropy production. We also present a new type of trait-based modeling where trait values are determined by maximizing entropy production rather than by random selection. |
q-bio/0703039 | Daniel Birch | Daniel A. Birch, Yue-Kin Tsang, William R. Young | Bounding biomass in the Fisher equation | 32 Pages, 13 Figures | Physical Review E 75, 066304 (2007) | 10.1103/PhysRevE.75.066304 | null | q-bio.PE nlin.CD physics.chem-ph | null | The FKPP equation with a variable growth rate and advection by an
incompressible velocity field is considered as a model for plankton dispersed
by ocean currents. If the average growth rate is negative then the model has a
survival-extinction transition; the location of this transition in the
parameter space is constrained using variational arguments and delimited by
simulations. The statistical steady state reached when the system is in the
survival region of parameter space is characterized by integral constraints and
upper and lower bounds on the biomass and productivity that follow from
variational arguments and direct inequalities. In the limit of
zero-decorrelation time the velocity field is shown to act as Fickian diffusion
with an eddy diffusivity much larger than the molecular diffusivity and this
allows a one-dimensional model to predict the biomass, productivity and
extinction transitions. All results are illustrated with a simple growth and
stirring model.
| [
{
"created": "Sat, 17 Mar 2007 21:00:58 GMT",
"version": "v1"
}
] | 2007-09-04 | [
[
"Birch",
"Daniel A.",
""
],
[
"Tsang",
"Yue-Kin",
""
],
[
"Young",
"William R.",
""
]
] | The FKPP equation with a variable growth rate and advection by an incompressible velocity field is considered as a model for plankton dispersed by ocean currents. If the average growth rate is negative then the model has a survival-extinction transition; the location of this transition in the parameter space is constrained using variational arguments and delimited by simulations. The statistical steady state reached when the system is in the survival region of parameter space is characterized by integral constraints and upper and lower bounds on the biomass and productivity that follow from variational arguments and direct inequalities. In the limit of zero-decorrelation time the velocity field is shown to act as Fickian diffusion with an eddy diffusivity much larger than the molecular diffusivity and this allows a one-dimensional model to predict the biomass, productivity and extinction transitions. All results are illustrated with a simple growth and stirring model. |
2006.02720 | Lucia Marucci | Lucia Marucci, Matteo Barberis, Jonathan Karr, Oliver Ray, Paul R.
Race, Miguel de Souza Andrade, Claire Grierson, Stefan Andreas Hoffmann,
Sophie Landon, Elibio Rech, Joshua Rees-Garbutt, Richard Seabrook, William
Shaw, Christopher Woods | Computer-aided whole-cell design: taking a holistic approach by
integrating synthetic with systems biology | null | null | null | null | q-bio.QM q-bio.MN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Computer-aided design for synthetic biology promises to accelerate the
rational and robust engineering of biological systems; it requires both
detailed and quantitative mathematical and experimental models of the processes
to (re)design, and software and tools for genetic engineering and DNA assembly.
Ultimately, the increased precision in the design phase will have a dramatic
impact on the production of designer cells and organisms with bespoke functions
and increased modularity. Computer-aided design strategies require quantitative
representations of cells, able to capture multiscale processes and link
genotypes to phenotypes. Here, we present a perspective on how whole-cell,
multiscale models could transform design-build-test-learn cycles in synthetic
biology. We show how these models could significantly aid in the design and
learn phases while reducing experimental testing by presenting case studies
spanning from genome minimization to cell-free systems, and we discuss several
challenges for the realization of our vision. The possibility to describe and
build in silico whole-cells offers an opportunity to develop increasingly
automatized, precise and accessible computer-aided design tools and strategies
throughout novel interdisciplinary collaborations.
| [
{
"created": "Thu, 4 Jun 2020 09:19:52 GMT",
"version": "v1"
}
] | 2020-06-05 | [
[
"Marucci",
"Lucia",
""
],
[
"Barberis",
"Matteo",
""
],
[
"Karr",
"Jonathan",
""
],
[
"Ray",
"Oliver",
""
],
[
"Race",
"Paul R.",
""
],
[
"Andrade",
"Miguel de Souza",
""
],
[
"Grierson",
"Claire",
""
],
[
"Hoffmann",
"Stefan Andreas",
""
],
[
"Landon",
"Sophie",
""
],
[
"Rech",
"Elibio",
""
],
[
"Rees-Garbutt",
"Joshua",
""
],
[
"Seabrook",
"Richard",
""
],
[
"Shaw",
"William",
""
],
[
"Woods",
"Christopher",
""
]
] | Computer-aided design for synthetic biology promises to accelerate the rational and robust engineering of biological systems; it requires both detailed and quantitative mathematical and experimental models of the processes to (re)design, and software and tools for genetic engineering and DNA assembly. Ultimately, the increased precision in the design phase will have a dramatic impact on the production of designer cells and organisms with bespoke functions and increased modularity. Computer-aided design strategies require quantitative representations of cells, able to capture multiscale processes and link genotypes to phenotypes. Here, we present a perspective on how whole-cell, multiscale models could transform design-build-test-learn cycles in synthetic biology. We show how these models could significantly aid in the design and learn phases while reducing experimental testing by presenting case studies spanning from genome minimization to cell-free systems, and we discuss several challenges for the realization of our vision. The possibility to describe and build in silico whole-cells offers an opportunity to develop increasingly automatized, precise and accessible computer-aided design tools and strategies throughout novel interdisciplinary collaborations. |
2401.06987 | Dimitri Loutchko | Dimitri Loutchko, Yuki Sughiyama, Tetsuya J. Kobayashi | Cramer-Rao bound and absolute sensitivity in chemical reaction networks | 21 pages, 3 figures | null | null | null | q-bio.MN cs.IT math.IT physics.bio-ph physics.chem-ph | http://creativecommons.org/licenses/by/4.0/ | Chemical reaction networks (CRN) comprise an important class of models to
understand biological functions such as cellular information processing, the
robustness and control of metabolic pathways, circadian rhythms, and many more.
However, any CRN describing a certain function does not act in isolation but is
a part of a much larger network and as such is constantly subject to external
changes. In [Shinar, Alon, and Feinberg. "Sensitivity and robustness in
chemical reaction networks." SIAM J App Math (2009): 977-998.], the responses
of CRN to changes in the linear conserved quantities, called sensitivities,
were studied, and the question of how to construct absolute, i.e.,
basis-independent, sensitivities was raised. In this article, by applying
information geometric methods, such a construction is provided. The idea is to
track how concentration changes in a particular chemical propagate to changes
of all the other chemicals within a steady state. This is encoded in the matrix
of absolute sensitivities. A linear algebraic characterization of the matrix of
absolute sensitivities for quasi-thermostatic CRN is derived via a Cramer-Rao
bound for CRN, which is based on the analogy between quasi-thermostatic
steady states and the exponential family of probability distributions.
| [
{
"created": "Sat, 13 Jan 2024 06:00:45 GMT",
"version": "v1"
}
] | 2024-01-17 | [
[
"Loutchko",
"Dimitri",
""
],
[
"Sughiyama",
"Yuki",
""
],
[
"Kobayashi",
"Tetsuya J.",
""
]
] | Chemical reaction networks (CRN) comprise an important class of models to understand biological functions such as cellular information processing, the robustness and control of metabolic pathways, circadian rhythms, and many more. However, any CRN describing a certain function does not act in isolation but is a part of a much larger network and as such is constantly subject to external changes. In [Shinar, Alon, and Feinberg. "Sensitivity and robustness in chemical reaction networks." SIAM J App Math (2009): 977-998.], the responses of CRN to changes in the linear conserved quantities, called sensitivities, were studied, and the question of how to construct absolute, i.e., basis-independent, sensitivities was raised. In this article, by applying information geometric methods, such a construction is provided. The idea is to track how concentration changes in a particular chemical propagate to changes of all the other chemicals within a steady state. This is encoded in the matrix of absolute sensitivities. A linear algebraic characterization of the matrix of absolute sensitivities for quasi-thermostatic CRN is derived via a Cramer-Rao bound for CRN, which is based on the analogy between quasi-thermostatic steady states and the exponential family of probability distributions. |
1911.06085 | Daniel Backhaus | Daniel Backhaus, Ralf Engbert, Lars Oliver Martin Rothkegel, Hans Arne
Trukenbrod | Task-dependence in scene perception: Head unrestrained viewing using
mobile eye-tracking | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Real-world scene perception is typically studied in the laboratory using
static picture viewing with restrained head position. Consequently, the
transfer of results obtained in this paradigm to real-world scenarios has been
questioned. The advancement of mobile eye-trackers and the progress in image
processing, however, permit a more natural experimental setup that, at the same
time, maintains the high experimental control from the standard laboratory
setting. We investigated eye movements while participants were standing in
front of a projector screen and explored images under four specific task
instructions. Eye movements were recorded with a mobile eye-tracking device and
raw gaze data was transformed from head-centered into image-centered
coordinates. We observed differences between tasks in temporal and spatial
eye-movement parameters and found that the bias to fixate images near the
center differed between tasks. Our results demonstrate that current mobile
eye-tracking technology and a highly controlled design support the study of
fine-scaled task dependencies in an experimental setting that permits more
natural viewing behavior than the static picture viewing paradigm.
| [
{
"created": "Thu, 14 Nov 2019 13:21:46 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Nov 2019 14:05:44 GMT",
"version": "v2"
},
{
"created": "Fri, 13 Dec 2019 12:56:06 GMT",
"version": "v3"
}
] | 2019-12-16 | [
[
"Backhaus",
"Daniel",
""
],
[
"Engbert",
"Ralf",
""
],
[
"Rothkegel",
"Lars Oliver Martin",
""
],
[
"Trukenbrod",
"Hans Arne",
""
]
] ] | Real-world scene perception is typically studied in the laboratory using static picture viewing with restrained head position. Consequently, the transfer of results obtained in this paradigm to real-world scenarios has been questioned. The advancement of mobile eye-trackers and the progress in image processing, however, permit a more natural experimental setup that, at the same time, maintains the high experimental control from the standard laboratory setting. We investigated eye movements while participants were standing in front of a projector screen and explored images under four specific task instructions. Eye movements were recorded with a mobile eye-tracking device and raw gaze data was transformed from head-centered into image-centered coordinates. We observed differences between tasks in temporal and spatial eye-movement parameters and found that the bias to fixate images near the center differed between tasks. Our results demonstrate that current mobile eye-tracking technology and a highly controlled design support the study of fine-scaled task dependencies in an experimental setting that permits more natural viewing behavior than the static picture viewing paradigm. |
2401.07279 | Rebecca Crossley | Rebecca M. Crossley, Kevin J. Painter, Tommaso Lorenzi, Philip K.
Maini, Ruth E. Baker | Phenotypic switching mechanisms determine the structure of cell
migration into extracellular matrix under the `go-or-grow' hypothesis | 35 pages, 12 figures | null | null | null | q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | A fundamental feature of collective cell migration is phenotypic
heterogeneity which, for example, influences tumour progression and relapse.
While current mathematical models often consider discrete phenotypic
structuring of the cell population, in-line with the `go-or-grow' hypothesis
\cite{hatzikirou2012go, stepien2018traveling}, they regularly overlook the role
that the environment may play in determining the cells' phenotype during
migration. Comparing a previously studied volume-filling model for a
homogeneous population of generalist cells that can proliferate, move and
degrade extracellular matrix (ECM) \cite{crossley2023travelling} to a novel
model for a heterogeneous population comprising two distinct sub-populations of
specialist cells that can either move and degrade ECM or proliferate, this
study explores how different hypothetical phenotypic switching mechanisms
affect the speed and structure of the invading cell populations. Through a
continuum model derived from its individual-based counterpart, insights into
the influence of the ECM and the impact of phenotypic switching on migrating
cell populations emerge. Notably, specialist cell populations that cannot
switch phenotype show reduced invasiveness compared to generalist cell
populations, while implementing different forms of switching significantly
alters the structure of migrating cell fronts. This key result suggests that
the structure of an invading cell population could be used to infer the
underlying mechanisms governing phenotypic switching.
| [
{
"created": "Sun, 14 Jan 2024 12:29:38 GMT",
"version": "v1"
},
{
"created": "Mon, 10 Jun 2024 22:15:20 GMT",
"version": "v2"
}
] | 2024-06-12 | [
[
"Crossley",
"Rebecca M.",
""
],
[
"Painter",
"Kevin J.",
""
],
[
"Lorenzi",
"Tommaso",
""
],
[
"Maini",
"Philip K.",
""
],
[
"Baker",
"Ruth E.",
""
]
] | A fundamental feature of collective cell migration is phenotypic heterogeneity which, for example, influences tumour progression and relapse. While current mathematical models often consider discrete phenotypic structuring of the cell population, in-line with the `go-or-grow' hypothesis \cite{hatzikirou2012go, stepien2018traveling}, they regularly overlook the role that the environment may play in determining the cells' phenotype during migration. Comparing a previously studied volume-filling model for a homogeneous population of generalist cells that can proliferate, move and degrade extracellular matrix (ECM) \cite{crossley2023travelling} to a novel model for a heterogeneous population comprising two distinct sub-populations of specialist cells that can either move and degrade ECM or proliferate, this study explores how different hypothetical phenotypic switching mechanisms affect the speed and structure of the invading cell populations. Through a continuum model derived from its individual-based counterpart, insights into the influence of the ECM and the impact of phenotypic switching on migrating cell populations emerge. Notably, specialist cell populations that cannot switch phenotype show reduced invasiveness compared to generalist cell populations, while implementing different forms of switching significantly alters the structure of migrating cell fronts. This key result suggests that the structure of an invading cell population could be used to infer the underlying mechanisms governing phenotypic switching. |
2003.07105 | Hon-Cheong So | Shitao Rao, Liangying Yin, Yong Xiang, Hon-Cheong So | Analysis of genetic differences between psychiatric disorders: Exploring
pathways and cell-types/tissues involved and ability to differentiate the
disorders by polygenic scores | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Although displaying genetic correlations, psychiatric disorders are
clinically defined as categorical entities as they each have distinguishing
clinical features and may involve different treatments. Identifying
differential genetic variations between these disorders may reveal how the
disorders differ biologically and help to guide more personalized treatment.
Here we presented a comprehensive analysis to identify genetic markers
differentially associated with various psychiatric disorders/traits based on
GWAS summary statistics, covering 18 psychiatric traits/disorders and 26
comparisons. We also conducted comprehensive analysis to unravel the genes,
pathways and SNP functional categories involved, and the cell types and tissues
implicated. We also assessed how well one could distinguish between psychiatric
disorders by polygenic risk scores (PRS).
SNP-based heritabilities (h2SNP) were significantly larger than zero for most
comparisons. Based on current GWAS data, PRS have mostly modest power to
distinguish between psychiatric disorders. For example, we estimated that AUC
for distinguishing schizophrenia from major depressive disorder (MDD), bipolar
disorder (BPD) from MDD and schizophrenia from BPD were 0.694, 0.602 and 0.618
respectively, while the maximum AUC (based on h2SNP) were 0.763, 0.749 and
0.726 respectively. We also uncovered differences in each pair of studied
traits in terms of their differences in genetic correlation with comorbid
traits. For example, clinically-defined MDD appeared to be more strongly
genetically correlated with other psychiatric disorders and heart disease, when
compared to non-clinically-defined depression in UK Biobank.
Our findings highlight genetic differences between psychiatric disorders and
the mechanisms involved. PRS may aid differential diagnosis of selected
psychiatric disorders in the future with larger GWAS samples.
| [
{
"created": "Mon, 16 Mar 2020 10:46:29 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Mar 2020 01:08:05 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Jun 2020 01:15:13 GMT",
"version": "v3"
},
{
"created": "Fri, 21 May 2021 01:18:09 GMT",
"version": "v4"
}
] | 2021-05-24 | [
[
"Rao",
"Shitao",
""
],
[
"Yin",
"Liangying",
""
],
[
"Xiang",
"Yong",
""
],
[
"So",
"Hon-Cheong",
""
]
] ] | Although displaying genetic correlations, psychiatric disorders are clinically defined as categorical entities as they each have distinguishing clinical features and may involve different treatments. Identifying differential genetic variations between these disorders may reveal how the disorders differ biologically and help to guide more personalized treatment. Here we presented a comprehensive analysis to identify genetic markers differentially associated with various psychiatric disorders/traits based on GWAS summary statistics, covering 18 psychiatric traits/disorders and 26 comparisons. We also conducted comprehensive analysis to unravel the genes, pathways and SNP functional categories involved, and the cell types and tissues implicated. We also assessed how well one could distinguish between psychiatric disorders by polygenic risk scores (PRS). SNP-based heritabilities (h2SNP) were significantly larger than zero for most comparisons. Based on current GWAS data, PRS have mostly modest power to distinguish between psychiatric disorders. For example, we estimated that AUC for distinguishing schizophrenia from major depressive disorder (MDD), bipolar disorder (BPD) from MDD and schizophrenia from BPD were 0.694, 0.602 and 0.618 respectively, while the maximum AUC (based on h2SNP) were 0.763, 0.749 and 0.726 respectively. We also uncovered differences in each pair of studied traits in terms of their differences in genetic correlation with comorbid traits. For example, clinically-defined MDD appeared to be more strongly genetically correlated with other psychiatric disorders and heart disease, when compared to non-clinically-defined depression in UK Biobank. Our findings highlight genetic differences between psychiatric disorders and the mechanisms involved. PRS may aid differential diagnosis of selected psychiatric disorders in the future with larger GWAS samples. |
q-bio/0509008 | Tibor Antal | Tibor Antal and Istvan Scheuring | Fixation of strategies for an evolutionary game in finite populations | 22 pages, 2 eps figures | Bulletin of Mathematical Biology 68, 1923 (2006) | 10.1007/s11538-006-9061-4 | null | q-bio.PE cond-mat.stat-mech | null | A stochastic evolutionary dynamics of two strategies given by 2 x 2 matrix
games is studied in finite populations. We focus on stochastic properties of
fixation: how a strategy represented by a single individual wins over the
entire population. The process is discussed in the framework of a random walk
with arbitrary hopping rates. The time of fixation is found to be identical for
both strategies in any particular game. The asymptotic behavior of the fixation
time and fixation probabilities in the large population size limit is also
discussed. We show that fixation is fast when there is at least one pure
evolutionarily stable strategy (ESS) in the infinite population size limit, while
fixation is slow when the ESS is the coexistence of the two strategies.
| [
{
"created": "Thu, 8 Sep 2005 13:14:46 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Antal",
"Tibor",
""
],
[
"Scheuring",
"Istvan",
""
]
] ] | A stochastic evolutionary dynamics of two strategies given by 2 x 2 matrix games is studied in finite populations. We focus on stochastic properties of fixation: how a strategy represented by a single individual wins over the entire population. The process is discussed in the framework of a random walk with arbitrary hopping rates. The time of fixation is found to be identical for both strategies in any particular game. The asymptotic behavior of the fixation time and fixation probabilities in the large population size limit is also discussed. We show that fixation is fast when there is at least one pure evolutionarily stable strategy (ESS) in the infinite population size limit, while fixation is slow when the ESS is the coexistence of the two strategies. |