Each row of the dataset has the following fields: id, submitter, authors, title, comments, journal-ref, doi, report-no, categories, license, orig_abstract, versions, update_date, authors_parsed, and abstract. The comments, journal-ref, doi, and report-no fields may be null; report-no and license take values from small fixed sets; versions and authors_parsed are lists. In the rows shown below, abstract is a verbatim copy of orig_abstract.
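As a sketch of how the nested fields in this schema can be unpacked, here is a minimal, hand-assembled example; the record dict below is abridged to the structured fields and its values are copied from the first row:

```python
# A hand-copied, abridged sample record in this schema; only the fields
# with nested structure are shown.
record = {
    "id": "2308.07122",
    "title": "Evolving division of labor in a response threshold model",
    "categories": "q-bio.PE nlin.AO nlin.CD",
    "versions": [{"created": "Mon, 14 Aug 2023 13:07:37 GMT", "version": "v1"}],
    # authors_parsed entries are [last, first, suffix]; some rows append
    # an affiliation as a fourth element.
    "authors_parsed": [
        ["Fontanari", "José F.", ""],
        ["de Oliveira", "Viviane M.", ""],
        ["Campos", "Paulo R. A.", ""],
    ],
}

# categories is a single space-separated string; by arXiv convention the
# first entry is the primary category.
categories = record["categories"].split()
primary = categories[0]

# Rebuild "First Last" display names from the parsed author entries.
authors = [f"{a[1]} {a[0]}".strip() for a in record["authors_parsed"]]

# versions is chronological, so the last entry is the latest revision.
latest = record["versions"][-1]["version"]

print(primary)   # q-bio.PE
print(authors)   # ['José F. Fontanari', 'Viviane M. de Oliveira', 'Paulo R. A. Campos']
print(latest)    # v1
```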
---
id: 2308.07122
submitter: Jose Fontanari
authors: Jos\'e F. Fontanari, Viviane M. de Oliveira and Paulo R. A. Campos
title: Evolving division of labor in a response threshold model
comments: null
journal-ref: Ecological Complexity 58 (2024) 101083
doi: 10.1016/j.ecocom.2024.101083
report-no: null
categories: q-bio.PE nlin.AO nlin.CD
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
The response threshold model explains the emergence of division of labor
(i.e., task specialization) in an unstructured population by assuming that the
individuals have different propensities to work on different tasks. The
incentive to attend to a particular task increases when the task is left
unattended and decreases when individuals work on it. Here we derive mean-field
equations for the stimulus dynamics and show that they exhibit complex
attractors through period-doubling bifurcation cascades when the noise
disrupting the thresholds is small. In addition, we show how the fixed
threshold can be set to ensure specialization in both the transient and
equilibrium regimes of the stimulus dynamics. However, a complete explanation
of the emergence of division of labor requires that we address the question of
where the threshold variation comes from, starting from a homogeneous
population. We then study a structured population scenario, where the
population is divided into a large number of independent groups of equal size,
and the fitness of a group is proportional to the weighted mean work performed
on the tasks during a fixed period of time. Using a winner-take-all strategy to
model group competition and assuming an initial homogeneous metapopulation, we
find that a substantial fraction of workers specialize in each task, without
the need to penalize task switching.
versions: v1 (Mon, 14 Aug 2023 13:07:37 GMT)
update_date: 2024-04-29
authors_parsed: [["Fontanari", "José F.", ""], ["de Oliveira", "Viviane M.", ""], ["Campos", "Paulo R. A.", ""]]
---
id: 2210.12991
submitter: Emilio Dorigatti
authors: Ingo Ziegler, Bolei Ma, Ercong Nie, Bernd Bischl, David R\"ugamer, Benjamin Schubert, Emilio Dorigatti
title: What cleaves? Is proteasomal cleavage prediction reaching a ceiling?
comments: 15 pages, 1 figure
journal-ref: null
doi: null
report-no: null
categories: q-bio.QM
license: http://creativecommons.org/licenses/by-sa/4.0/
abstract:
Epitope vaccines are a promising direction to enable precision treatment for
cancer, autoimmune diseases, and allergies. Effectively designing such vaccines
requires accurate prediction of proteasomal cleavage in order to ensure that
the epitopes in the vaccine are presented to T cells by the major
histocompatibility complex (MHC). While direct identification of proteasomal
cleavage \emph{in vitro} is cumbersome and low throughput, it is possible to
implicitly infer cleavage events from the termini of MHC-presented epitopes,
which can be detected in large amounts thanks to recent advances in
high-throughput MHC ligandomics. Inferring cleavage events in such a way
provides an inherently noisy signal which can be tackled with new developments
in the field of deep learning that supposedly make it possible to learn
predictors from noisy labels. Inspired by such innovations, we sought to
modernize proteasomal cleavage predictors by benchmarking a wide range of
recent methods, including LSTMs, transformers, CNNs, and denoising methods, on
a recently introduced cleavage dataset. We found that increasing model scale
and complexity appeared to deliver limited performance gains, as several
methods reached about 88.5% AUC on C-terminal and 79.5% AUC on N-terminal
cleavage prediction. This suggests that the noise and/or complexity of
proteasomal cleavage and the subsequent biological processes of the antigen
processing pathway are the major limiting factors for predictive performance
rather than the specific modeling approach used. While biological complexity
can be tackled by more data and better models, noise and randomness inherently
limit the maximum achievable predictive performance. All our datasets and
experiments are available at
https://github.com/ziegler-ingo/cleavage_prediction.
versions: v1 (Mon, 24 Oct 2022 07:26:55 GMT), v2 (Tue, 25 Oct 2022 10:15:59 GMT)
update_date: 2022-10-26
authors_parsed: [["Ziegler", "Ingo", ""], ["Ma", "Bolei", ""], ["Nie", "Ercong", ""], ["Bischl", "Bernd", ""], ["Rügamer", "David", ""], ["Schubert", "Benjamin", ""], ["Dorigatti", "Emilio", ""]]
---
id: 1810.13162
submitter: Federica Bubba
authors: Federica Bubba (LJLL), Camille Pouchol, Nathalie Ferrand (INSERM), Guillaume Vidal (CRPP), Luis Almeida (LJLL), Beno{\i}t Perthame (LJLL), Mich\`ele Sabbah (D\'epartement Oncologie - H\'ematologie)
title: A chemotaxis-based explanation of spheroid formation in 3D cultures of breast cancer cells
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.CB
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Three-dimensional cultures of cells are gaining popularity as an in vitro
improvement over 2D Petri dishes. In many such experiments, cells have been
found to organize in aggregates. We present new results of three-dimensional in
vitro cultures of breast cancer cells exhibiting patterns. Understanding their
formation is of particular interest in the context of cancer since metastases
have been shown to be created by cells moving in clusters. In this paper, we
propose that the main mechanism which leads to the emergence of patterns is
chemotaxis, i.e., oriented movement of cells towards high concentration zones
of a signal emitted by the cells themselves. Studying a Keller-Segel PDE system
to model chemotactical auto-organization of cells, we prove that it is subject
to Turing instability under a time-dependent condition. This result is
illustrated by two-dimensional simulations of the model showing spheroidal
patterns. They are qualitatively compared to the biological results and their
variability is discussed both theoretically and numerically.
versions: v1 (Wed, 31 Oct 2018 08:53:30 GMT)
update_date: 2018-11-01
authors_parsed: [["Bubba", "Federica", "", "LJLL"], ["Pouchol", "Camille", "", "INSERM"], ["Ferrand", "Nathalie", "", "INSERM"], ["Vidal", "Guillaume", "", "CRPP"], ["Almeida", "Luis", "", "LJLL"], ["Perthame", "Benoît", "", "LJLL"], ["Sabbah", "Michèle", "", "Département Oncologie - Hématologie"]]
---
id: 1711.09273
submitter: Stuart Hagler
authors: Stuart Hagler
title: A General Optimal Control Model of Human Movement Patterns II: Rapid, Targeted Hand Movements (Fitts Law)
comments: 25 pages, 2 figures
journal-ref: null
doi: null
report-no: null
categories: q-bio.QM q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Rapid, targeted hand movements exhibit a regular movement pattern described
by Fitts law. We develop a model of these movements in which this movement
pattern results from an optimal control model describing rapid hand movements
and a utility model describing the speed/accuracy trade-off between moving the
hand rapidly to the target and hitting the target accurately. The optimal
control model is constructed using a principled approach in which we forbid the
muscle forces to exhibit any discontinuities and require the cost to be
expressed in terms of a psychophysical representation of the movement. This
yields a yank-control or jerk-control model of the movement which exhibits two
constants of the motion that are closely related to the energy and momentum in
classical mechanics. We force the optimal control model to obey Fitts law by
requiring that a particular relationship hold between the constants of the
motion and the size of the target, and show that the resulting model compares
well to a standard expression of Fitts law obtained empirically from
observations of computer mouse movements. We then show how this relationship
may be obtained from simple models of movement accuracy and
the speed/accuracy trade-off. We use the movement accuracy model to analyze
observed differences in computer mouse movement patterns between older adults
with mild cognitive impairment and intact older adults. We conclude by looking
at how a subject might carry out in practice the optimization implicit in
resolving the speed/accuracy trade-off in our model.
versions: v1 (Sat, 25 Nov 2017 18:59:40 GMT), v2 (Sun, 9 Dec 2018 20:46:51 GMT)
update_date: 2018-12-11
authors_parsed: [["Hagler", "Stuart", ""]]
---
id: 1807.07825
submitter: {\L}ukasz Mioduszewski MSc
authors: {\L}ukasz Mioduszewski and Marek Cieplak
title: Disordered peptide chains in an {\alpha}-C-based coarse-grained model
comments: 20 pages, 9 figures, 2 tables. Published in PCCP, 2018, 20, 19057-19070
journal-ref: Physical Chemistry Chemical Physics, 2018, 20, 19057-19070
doi: 10.1039/C8CP03309A
report-no: null
categories: q-bio.BM
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
We construct a one-bead-per-residue coarse-grained dynamical model to
describe intrinsically disordered proteins at significantly longer timescales
than in the all-atom models. In this model, inter-residue contacts form and
disappear during the course of the time evolution. The contacts may arise
between the sidechains, the backbones or the sidechains and backbones of the
interacting residues. The model yields results that are consistent with many
all-atom and experimental data on these systems. We demonstrate that the
geometrical properties of various homopeptides differ substantially in this
model. In particular, the average radius of gyration scales with the sequence
length in a residue-dependent manner.
versions: v1 (Fri, 20 Jul 2018 13:05:34 GMT)
update_date: 2018-07-23
authors_parsed: [["Mioduszewski", "Łukasz", ""], ["Cieplak", "Marek", ""]]
---
id: 1812.09414
submitter: Tilo Schwalger
authors: Valentin Schmutz, Wulfram Gerstner and Tilo Schwalger
title: Mesoscopic population equations for spiking neural networks with synaptic short-term plasticity
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Coarse-graining microscopic models of biological neural networks to obtain
mesoscopic models of neural activities is an essential step towards multi-scale
models of the brain. Here, we extend a recent theory for mesoscopic population
dynamics with static synapses to the case of dynamic synapses exhibiting
short-term plasticity (STP). Under the assumption that spike arrivals at
synapses have Poisson statistics, we derive analytically stochastic mean-field
dynamics for the effective synaptic coupling between finite-size populations
undergoing Tsodyks-Markram STP. The novel mean-field equations account for both
finite number of synapses and correlations between the neurotransmitter release
probability and the fraction of available synaptic resources. Comparisons with
Monte Carlo simulations of the microscopic model show that in both feedforward
and recurrent networks the mesoscopic mean-field model accurately reproduces
stochastic realizations of the total synaptic input into a postsynaptic neuron
and accounts for stochastic switches between Up and Down states as well as for
population spikes. The extended mesoscopic population theory of spiking neural
networks with STP may be useful for a systematic reduction of detailed
biophysical models of cortical microcircuits to efficient and mathematically
tractable mean-field models.
versions: v1 (Fri, 21 Dec 2018 23:36:11 GMT)
update_date: 2018-12-27
authors_parsed: [["Schmutz", "Valentin", ""], ["Gerstner", "Wulfram", ""], ["Schwalger", "Tilo", ""]]
---
id: 1003.1993
submitter: Andre X. C. N. Valente
authors: Andr\'e X. C. N. Valente, Jorge A. B. Sousa, Tiago F. Outeiro, Lino Ferreira
title: A stem-cell ageing hypothesis on the origin of Parkinson's disease
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.CB q-bio.OT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
A transcriptome-wide blood expression dataset of Parkinson's disease (PD)
patients and controls was analyzed under the hypothesis-rich mathematical
framework. The analysis pointed towards differential expression in blood cells
in many of the processes known or predicted to be disrupted in PD. We suggest
that circulating blood cells in PD patients can be in a full-blown
PD-expression state. We put forward the hypothesis that sporadic PD can
originate as a case of hematopoietic stem cell/differentiation process
expression program defect and suggest this research direction deserves further
investigation.
versions: v1 (Mon, 8 Mar 2010 19:56:30 GMT), v2 (Sat, 27 Mar 2010 23:44:27 GMT)
update_date: 2010-03-30
authors_parsed: [["Valente", "André X. C. N.", ""], ["Sousa", "Jorge A. B.", ""], ["Outeiro", "Tiago F.", ""], ["Ferreira", "Lino", ""]]
---
id: 2404.00365
submitter: Malay Banerjee
authors: Samiran Ghosh, Malay Banerjee, Vitaly Volpert
title: An age-distributed immuno-epidemiological model with information-based vaccination decision
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE math.DS
license: http://creativecommons.org/licenses/by/4.0/
abstract:
A new age-distributed immuno-epidemiological model with information-based
vaccine uptake suggested in this work represents a system of
integro-differential equations for the numbers of susceptible individuals,
infected individuals, vaccinated individuals and recovered individuals. This
model describes the influence of vaccination decision on epidemic progression
in different age groups. We prove the existence and uniqueness of a positive
solution using fixed point theory. In the particular case of an age-independent
model, we determine the final size of the epidemic, that is, the limiting
number of susceptible individuals at asymptotically large time. Numerical
simulations show that information-based vaccine acceptance can significantly
influence epidemic progression. Though the initial stage of the epidemic is the
same for all memory kernels, as the epidemic progresses and more information
about the disease becomes available, further progression strongly depends on
the memory effect. A short-range memory kernel appears to be
more effective in restraining epidemic outbreaks because it allows for more
responsive and adaptive vaccination decisions based on the most recent
information about the disease.
versions: v1 (Sat, 30 Mar 2024 13:35:52 GMT)
update_date: 2024-04-02
authors_parsed: [["Ghosh", "Samiran", ""], ["Banerjee", "Malay", ""], ["Volpert", "Vitaly", ""]]
---
id: 1709.04702
submitter: Krzysztof Bartoszek
authors: Krzysztof Bartoszek
title: Trait evolution with jumps: illusionary normality
comments: http://kkzmbm.mimuw.edu.pl/?pageId=4&sprawId=23
journal-ref: Proceedings of the XXIII National Conference on Applications of Mathematics in Biology and Medicine. 2017, pp. 23-28
doi: null
report-no: null
categories: q-bio.PE math.PR stat.AP
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Phylogenetic comparative methods for real-valued traits usually make use of
stochastic processes whose trajectories are continuous. This is despite the
biological intuition that evolution is punctuated rather than gradual. On the
other hand, there have been a number of recent proposals of evolutionary models
with jump components. However, as we are only beginning to understand the
behaviour of branching Ornstein-Uhlenbeck (OU) processes, the asymptotics of
branching OU processes with jumps is an even greater unknown. In this work we
build on a previous study concerning OU-with-jumps evolution on a pure birth
tree. We introduce an extinction component and explore, via simulations, its
effects on the weak convergence of such a process. Furthermore, we use this
work to illustrate the simulation and graphic generation possibilities of
the mvSLOUCH package.
versions: v1 (Thu, 14 Sep 2017 11:14:07 GMT), v2 (Fri, 22 Sep 2017 09:02:41 GMT)
update_date: 2017-09-25
authors_parsed: [["Bartoszek", "Krzysztof", ""]]
---
id: 1601.00987
submitter: Sarah Muldoon
authors: Sarah Feldt Muldoon, Fabio Pasqualetti, Shi Gu, Matthew Cieslak, Scott T. Grafton, Jean M. Vettel, and Danielle S. Bassett
title: Stimulation-based control of dynamic brain networks
comments: 54 pages, 10 figures, includes Supplementary Information
journal-ref: null
doi: null
report-no: null
categories: q-bio.NC cs.SY
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
The ability to modulate brain states using targeted stimulation is
increasingly being employed to treat neurological disorders and to enhance
human performance. Despite the growing interest in brain stimulation as a form
of neuromodulation, much remains unknown about the network-level impact of
these focal perturbations. To study the system-wide impact of regional
stimulation, we employ a data-driven computational model of nonlinear brain
dynamics to systematically explore the effects of targeted stimulation.
Validating predictions from network control theory, we uncover the relationship
between regional controllability and the focal versus global impact of
stimulation, and we relate these findings to differences in the underlying
network architecture. Finally, by mapping brain regions to cognitive systems,
we observe that the default mode system imparts large global change despite
being highly constrained by structural connectivity. This work forms an
important step towards the development of personalized stimulation protocols
for medical treatment or performance enhancement.
versions: v1 (Tue, 5 Jan 2016 21:33:24 GMT)
update_date: 2016-01-07
authors_parsed: [["Muldoon", "Sarah Feldt", ""], ["Pasqualetti", "Fabio", ""], ["Gu", "Shi", ""], ["Cieslak", "Matthew", ""], ["Grafton", "Scott T.", ""], ["Vettel", "Jean M.", ""], ["Bassett", "Danielle S.", ""]]
---
id: 2206.00495
submitter: Eleonora Vercesi
authors: Stefano Gualandi and Giuseppe Toscani and Eleonora Vercesi
title: A kinetic description of the body size distributions of species
comments: null
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE cs.MA
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract:
In this paper, by resorting to classical methods of statistical mechanics, we
build a kinetic model able to reproduce the observed statistical weight
distribution of many diverse species. The kinetic description of the time
variations of the weight distribution is based on elementary interactions that
describe in a qualitative and quantitative way successive evolutionary updates,
and determine explicit equilibrium distributions. Numerical fittings on
a population of mammalian eutherians of the order Chiroptera illustrate the
effectiveness of the approach.
versions: v1 (Wed, 1 Jun 2022 13:49:58 GMT)
update_date: 2022-06-02
authors_parsed: [["Gualandi", "Stefano", ""], ["Toscani", "Giuseppe", ""], ["Vercesi", "Eleonora", ""]]
---
id: 1012.5887
submitter: Peter Waddell
authors: Peter J. Waddell, Ishita Khan, Xi Tan, and Sunghwan Yoo
title: A Unified Framework for Trees, Multi-Dimensional Scaling and Planar Graphs
comments: 14 pages, 4 figures and 1 table
journal-ref: null
doi: null
report-no: null
categories: q-bio.PE q-bio.GN
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Least squares trees, multi-dimensional scaling and Neighbor Nets are all
different and popular ways of visualizing multi-dimensional data. The method of
flexi-Weighted Least Squares (fWLS) is a powerful method of fitting
phylogenetic trees, when the exact form of errors is unknown. Here, both
polynomial and exponential weights are used to model errors. The exact same
models are implemented for multi-dimensional scaling to yield flexi-Weighted
MDS, including as special cases methods such as the Sammon Stress function.
Here we apply all these methods to population genetic data looking at the
relationships of "Abrahams Children" encompassing Arabs and now widely
dispersed populations of Jews, in relation to an African outgroup and a variety
of European populations. Trees, MDS and Neighbor Nets of this data are compared
within a common likelihood framework and the strengths and weaknesses of each
method are explored. Because the errors in this type of data can be complex,
for example, due to unexpected genetic transfer, we use a residual resampling
method to assess the robustness of trees and the Neighbor Net. Despite the
Neighbor Net fitting best by all criteria except BIC, its structure is ill
defined following residual resampling. In contrast, fWLS trees are favored by
BIC and retain considerable strong internal structure following residual
resampling. This structure clearly separates various European and Middle
Eastern populations, yet it is clear all of the models have errors much larger
than expected by sampling variance alone.
|
[
{
"created": "Wed, 29 Dec 2010 08:33:00 GMT",
"version": "v1"
}
] |
2010-12-30
|
[
[
"Waddell",
"Peter J.",
""
],
[
"Khan",
"Ishita",
""
],
[
"Tan",
"Xi",
""
],
[
"Yoo",
"Sunghwan",
""
]
] |
Least squares trees, multi-dimensional scaling and Neighbor Nets are all different and popular ways of visualizing multi-dimensional data. The method of flexi-Weighted Least Squares (fWLS) is a powerful method of fitting phylogenetic trees, when the exact form of errors is unknown. Here, both polynomial and exponential weights are used to model errors. The exact same models are implemented for multi-dimensional scaling to yield flexi-Weighted MDS, including as special cases methods such as the Sammon Stress function. Here we apply all these methods to population genetic data looking at the relationships of "Abrahams Children" encompassing Arabs and now widely dispersed populations of Jews, in relation to an African outgroup and a variety of European populations. Trees, MDS and Neighbor Nets of this data are compared within a common likelihood framework and the strengths and weaknesses of each method are explored. Because the errors in this type of data can be complex, for example, due to unexpected genetic transfer, we use a residual resampling method to assess the robustness of trees and the Neighbor Net. Despite the Neighbor Net fitting best by all criteria except BIC, its structure is ill defined following residual resampling. In contrast, fWLS trees are favored by BIC and retain considerable strong internal structure following residual resampling. This structure clearly separates various European and Middle Eastern populations, yet it is clear all of the models have errors much larger than expected by sampling variance alone.
|
1507.08367
|
Marisa Eisenberg
|
Jeremy P D'Silva, Marisa C. Eisenberg
|
Modeling Spatial Invasion of Ebola in West Africa
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The 2014-2015 Ebola Virus Disease (EVD) epidemic in West Africa was the
largest ever recorded, representing a fundamental shift in Ebola epidemiology
with unprecedented spatiotemporal complexity. We developed spatial transmission
models using a gravity-model framework to explain spatiotemporal dynamics of
EVD in West Africa at both the national and district-level scales, and to
compare effectiveness of local interventions (e.g. local quarantine) and
long-range interventions (e.g. border-closures). Incorporating spatial
interactions, the gravity model successfully captures the multiple waves of
epidemic growth observed in Guinea. Model simulations indicate that
local-transmission reductions were most effective in Liberia, while long-range
transmission was dominant in Sierra Leone. The model indicates the presence of
spatial herd protection, wherein intervention in one region has a protective
effect on surrounding regions. The district-level intervention analysis
indicates the presence of intervention-amplifying regions, which provide
above-expected levels of reduction in cases and deaths beyond their borders.
The gravity-modeling approach accurately captured the spatial spread patterns
of EVD at both country and district levels, and helps to identify the most
effective locales for intervention. This model structure and intervention
analysis provides information that can be used by public health policymakers to
assist planning and response efforts for future epidemics.
|
[
{
"created": "Thu, 30 Jul 2015 03:40:25 GMT",
"version": "v1"
},
{
"created": "Wed, 25 May 2016 14:19:25 GMT",
"version": "v2"
}
] |
2016-05-26
|
[
[
"D'Silva",
"Jeremy P",
""
],
[
"Eisenberg",
"Marisa C.",
""
]
] |
The 2014-2015 Ebola Virus Disease (EVD) epidemic in West Africa was the largest ever recorded, representing a fundamental shift in Ebola epidemiology with unprecedented spatiotemporal complexity. We developed spatial transmission models using a gravity-model framework to explain spatiotemporal dynamics of EVD in West Africa at both the national and district-level scales, and to compare effectiveness of local interventions (e.g. local quarantine) and long-range interventions (e.g. border-closures). Incorporating spatial interactions, the gravity model successfully captures the multiple waves of epidemic growth observed in Guinea. Model simulations indicate that local-transmission reductions were most effective in Liberia, while long-range transmission was dominant in Sierra Leone. The model indicates the presence of spatial herd protection, wherein intervention in one region has a protective effect on surrounding regions. The district-level intervention analysis indicates the presence of intervention-amplifying regions, which provide above-expected levels of reduction in cases and deaths beyond their borders. The gravity-modeling approach accurately captured the spatial spread patterns of EVD at both country and district levels, and helps to identify the most effective locales for intervention. This model structure and intervention analysis provides information that can be used by public health policymakers to assist planning and response efforts for future epidemics.
|
2303.06194
|
Ghazwan Hasan
|
Qutaiba Shuaib Al-Nema, Ghazwan Qasim Hasan, Omar Abdulazeez Alhamd
|
A high yield method for protoplast isolation and ease detection of rol B
and C genes in the hairy roots of cauliflower (Brassica oleracea L.)
inoculated with Agrobacterium rhizogenes
|
6 pages, 3 figures, 5 tables
|
Vol 8, 2022, 415-420
|
10.33640/2405-609X.3255
| null |
q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Protoplasts represent a unique experimental system for the circulation and
formation of genetically modified plants. Here, protoplasts were isolated from
genetically modified hairy root tissues of Brassica oleracea L. induced by the
Agrobacterium rhizogenes strain (ATCC13332). The concentration of enzyme
solutions utilized for protoplast isolation was 1.5 % Cellulase YC and 0.1 %
Pectolyase Y23 in 13% mannitol solution, which resulted in high efficiency of
isolation within 8 hours, in which the protoplast yield was 2 x 10^4 cells ml^-1
and the percentage of viability was 72%. Each protoplast has one nucleus with a
nucleation of 48%. A polymerase chain reaction (PCR) assay verified the
presence of rol B and rol C genes in hairy root tissues by detaching a single
bundle of DNA replication from these roots using a specific pair of primers.
The current study demonstrated that A. rhizogenes strain (ATCC13332) is a
vector for the incorporation of T-DNA genes into cauliflower plants, as well as
the success of the hairy roots retention of rol B and rol C genes transferred
to it.
|
[
{
"created": "Fri, 10 Mar 2023 19:59:18 GMT",
"version": "v1"
}
] |
2023-03-14
|
[
[
"Al-Nema",
"Qutaiba Shuaib",
""
],
[
"Hasan",
"Ghazwan Qasim",
""
],
[
"Alhamd",
"Omar Abdulazeez",
""
]
] |
Protoplasts represent a unique experimental system for the circulation and formation of genetically modified plants. Here, protoplasts were isolated from genetically modified hairy root tissues of Brassica oleracea L. induced by the Agrobacterium rhizogenes strain (ATCC13332). The concentration of enzyme solutions utilized for protoplast isolation was 1.5 % Cellulase YC and 0.1 % Pectolyase Y23 in 13% mannitol solution, which resulted in high efficiency of isolation within 8 hours, in which the protoplast yield was 2 x 10^4 cells ml^-1 and the percentage of viability was 72%. Each protoplast has one nucleus with a nucleation of 48%. A polymerase chain reaction (PCR) assay verified the presence of rol B and rol C genes in hairy root tissues by detaching a single bundle of DNA replication from these roots using a specific pair of primers. The current study demonstrated that A. rhizogenes strain (ATCC13332) is a vector for the incorporation of T-DNA genes into cauliflower plants, as well as the success of the hairy roots retention of rol B and rol C genes transferred to it.
|
1303.4401
|
Andrea Velenich
|
Andrea Velenich and Jeff Gore
|
The strength of genetic interactions scales weakly with the mutational
effects
|
11 pages, 6 figures + Supplementary Material
|
Genome Biology 2013, 14:R76
|
10.1186/gb-2013-14-7-r76
| null |
q-bio.GN q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Genetic interactions pervade every aspect of biology, from evolutionary
theory where they determine the accessibility of evolutionary paths, to
medicine where they contribute to complex genetic diseases. Until very
recently, studies on epistatic interactions have been based on a handful of
mutations, providing at best anecdotal evidence about the frequency and the
typical strength of genetic interactions. In this study we analyze the publicly
available Data Repository of Yeast Genetic INteractions (DRYGIN), which
contains the growth rates of over five million double gene knockout mutants. We
discuss a geometric definition of epistasis which reveals a simple and
surprisingly weak scaling law for the characteristic strength of genetic
interactions as a function of the effects of the mutations being combined. We
then utilize this scaling to quantify the roughness of naturally occurring
fitness landscapes. Finally, we show how the observed roughness differs from
what is predicted by Fisher's geometric model of epistasis and discuss its
consequences on the evolutionary dynamics. Although epistatic interactions
between specific genes remain largely unpredictable, the statistical properties
of an ensemble of interactions can display conspicuous regularities and be
described by simple mathematical laws. By exploiting the amount of data
produced by modern high-throughput techniques it is now possible to thoroughly
test the predictions of theoretical models of genetic interactions and to build
informed computational models of evolution on realistic fitness landscapes.
|
[
{
"created": "Mon, 18 Mar 2013 20:01:04 GMT",
"version": "v1"
}
] |
2014-01-20
|
[
[
"Velenich",
"Andrea",
""
],
[
"Gore",
"Jeff",
""
]
] |
Genetic interactions pervade every aspect of biology, from evolutionary theory where they determine the accessibility of evolutionary paths, to medicine where they contribute to complex genetic diseases. Until very recently, studies on epistatic interactions have been based on a handful of mutations, providing at best anecdotal evidence about the frequency and the typical strength of genetic interactions. In this study we analyze the publicly available Data Repository of Yeast Genetic INteractions (DRYGIN), which contains the growth rates of over five million double gene knockout mutants. We discuss a geometric definition of epistasis which reveals a simple and surprisingly weak scaling law for the characteristic strength of genetic interactions as a function of the effects of the mutations being combined. We then utilize this scaling to quantify the roughness of naturally occurring fitness landscapes. Finally, we show how the observed roughness differs from what is predicted by Fisher's geometric model of epistasis and discuss its consequences on the evolutionary dynamics. Although epistatic interactions between specific genes remain largely unpredictable, the statistical properties of an ensemble of interactions can display conspicuous regularities and be described by simple mathematical laws. By exploiting the amount of data produced by modern high-throughput techniques it is now possible to thoroughly test the predictions of theoretical models of genetic interactions and to build informed computational models of evolution on realistic fitness landscapes.
|
2401.10009
|
Haiping Huang
|
Junbin Qiu and Haiping Huang
|
An optimization-based equilibrium measure describes non-equilibrium
steady state dynamics: application to edge of chaos
|
21 pages, 9 figures, revised version 2
| null | null | null |
q-bio.NC cond-mat.stat-mech cs.NE
|
http://creativecommons.org/licenses/by/4.0/
|
Understanding neural dynamics is a central topic in machine learning,
non-linear physics and neuroscience. However, the dynamics is non-linear,
stochastic and particularly non-gradient, i.e., the driving force can not be
written as gradient of a potential. These features make analytic studies very
challenging. The common tool is the path integral approach or dynamical
mean-field theory, but the drawback is that one has to solve the
integro-differential or dynamical mean-field equations, which is
computationally expensive and has no closed form solutions in general. From the
aspect of associated Fokker-Planck equation, the steady state solution is
generally unknown. Here, we treat searching for the steady states as an
optimization problem, and construct an approximate potential related to the
speed of the dynamics, and find that searching for the ground state of this
potential is equivalent to running an approximate stochastic gradient dynamics
or Langevin dynamics. Only in the zero temperature limit, the distribution of
the original steady states can be achieved. The resultant stationary state of
the dynamics follows exactly the canonical Boltzmann measure. Within this
framework, the quenched disorder intrinsic in the neural networks can be
averaged out by applying the replica method, which leads naturally to order
parameters for the non-equilibrium steady states. Our theory reproduces the
well-known result of edge-of-chaos, and further the order parameters
characterizing the continuous transition are derived, and the order parameters
are explained as fluctuations and responses of the steady states. Our method
thus opens the door to analytically study the steady state landscape of the
deterministic or stochastic high dimensional dynamics.
|
[
{
"created": "Thu, 18 Jan 2024 14:25:32 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Jun 2024 06:29:08 GMT",
"version": "v2"
}
] |
2024-06-10
|
[
[
"Qiu",
"Junbin",
""
],
[
"Huang",
"Haiping",
""
]
] |
Understanding neural dynamics is a central topic in machine learning, non-linear physics and neuroscience. However, the dynamics is non-linear, stochastic and particularly non-gradient, i.e., the driving force can not be written as gradient of a potential. These features make analytic studies very challenging. The common tool is the path integral approach or dynamical mean-field theory, but the drawback is that one has to solve the integro-differential or dynamical mean-field equations, which is computationally expensive and has no closed form solutions in general. From the aspect of associated Fokker-Planck equation, the steady state solution is generally unknown. Here, we treat searching for the steady states as an optimization problem, and construct an approximate potential related to the speed of the dynamics, and find that searching for the ground state of this potential is equivalent to running an approximate stochastic gradient dynamics or Langevin dynamics. Only in the zero temperature limit, the distribution of the original steady states can be achieved. The resultant stationary state of the dynamics follows exactly the canonical Boltzmann measure. Within this framework, the quenched disorder intrinsic in the neural networks can be averaged out by applying the replica method, which leads naturally to order parameters for the non-equilibrium steady states. Our theory reproduces the well-known result of edge-of-chaos, and further the order parameters characterizing the continuous transition are derived, and the order parameters are explained as fluctuations and responses of the steady states. Our method thus opens the door to analytically study the steady state landscape of the deterministic or stochastic high dimensional dynamics.
|
2401.14442
|
Talip Ucar
|
Talip Ucar, Aubin Ramon, Dino Oglic, Rebecca Croasdale-Wood, Tom
Diethe, Pietro Sormanni
|
Improving Antibody Humanness Prediction using Patent Data
|
ICML 2024, 14 pages, 6 figures, Code:
https://github.com/AstraZeneca/SelfPAD
| null | null | null |
q-bio.QM cs.LG stat.ML
|
http://creativecommons.org/licenses/by/4.0/
|
We investigate the potential of patent data for improving the antibody
humanness prediction using a multi-stage, multi-loss training process.
Humanness serves as a proxy for the immunogenic response to antibody
therapeutics, one of the major causes of attrition in drug discovery and a
challenging obstacle for their use in clinical settings. We pose the initial
learning stage as a weakly-supervised contrastive-learning problem, where each
antibody sequence is associated with possibly multiple identifiers of function
and the objective is to learn an encoder that groups them according to their
patented properties. We then freeze a part of the contrastive encoder and
continue training it on the patent data using the cross-entropy loss to predict
the humanness score of a given antibody sequence. We illustrate the utility of
the patent data and our approach by performing inference on three different
immunogenicity datasets, unseen during training. Our empirical results
demonstrate that the learned model consistently outperforms the alternative
baselines and establishes new state-of-the-art on five out of six inference
tasks, irrespective of the used metric.
|
[
{
"created": "Thu, 25 Jan 2024 16:04:17 GMT",
"version": "v1"
},
{
"created": "Wed, 31 Jan 2024 07:12:52 GMT",
"version": "v2"
},
{
"created": "Sat, 8 Jun 2024 07:14:03 GMT",
"version": "v3"
}
] |
2024-06-11
|
[
[
"Ucar",
"Talip",
""
],
[
"Ramon",
"Aubin",
""
],
[
"Oglic",
"Dino",
""
],
[
"Croasdale-Wood",
"Rebecca",
""
],
[
"Diethe",
"Tom",
""
],
[
"Sormanni",
"Pietro",
""
]
] |
We investigate the potential of patent data for improving the antibody humanness prediction using a multi-stage, multi-loss training process. Humanness serves as a proxy for the immunogenic response to antibody therapeutics, one of the major causes of attrition in drug discovery and a challenging obstacle for their use in clinical settings. We pose the initial learning stage as a weakly-supervised contrastive-learning problem, where each antibody sequence is associated with possibly multiple identifiers of function and the objective is to learn an encoder that groups them according to their patented properties. We then freeze a part of the contrastive encoder and continue training it on the patent data using the cross-entropy loss to predict the humanness score of a given antibody sequence. We illustrate the utility of the patent data and our approach by performing inference on three different immunogenicity datasets, unseen during training. Our empirical results demonstrate that the learned model consistently outperforms the alternative baselines and establishes new state-of-the-art on five out of six inference tasks, irrespective of the used metric.
|
2003.14150
|
Anis Koubaa
|
Anis Koubaa
|
Understanding the COVID19 Outbreak: A Comparative Data Analytics and
Study
|
RIOTU Lab Technical Report
| null | null |
RT-2020-01
|
q-bio.PE cs.SI
|
http://creativecommons.org/licenses/by/4.0/
|
The Coronavirus, also known as the COVID-19 virus, has emerged in Wuhan China
since late November 2019. Since that time, it has been spreading at large-scale
until today all around the world. It is currently recognized as the world's
most viral and severe epidemic spread in the last twenty years, as compared to
Ebola 2014, MERS 2012, and SARS 2003. Despite being still in the middle of the
outbreak, there is an urgent need to understand the impact of COVID-19. The
objective is to clarify how it was spread so fast in a short time worldwide in
unprecedented fashion. This paper represents a first initiative to achieve this
goal, and it provides a comprehensive analytical study about the Coronavirus.
The contribution of this paper consists in providing descriptive and predictive
models that give insights into COVID-19 impact through the analysis of
extensive data updated daily for the outbreak in all countries. We aim at
answering several open questions: How does COVID-19 spread around the world?
What is its impact in terms of confirmed and death cases at the continent,
region, and country levels? How does its severity compare with other epidemic
outbreaks, including Ebola 2014, MERS 2012, and SARS 2003? Is there a
correlation between the number of confirmed cases and death cases? We present a
comprehensive analytics visualization to address the questions mentioned above.
To the best of our knowledge, this is the first systematic analytical paper
that paves the way towards a better understanding of COVID-19. The analytical
dashboards and collected data of this study are available online [1].
|
[
{
"created": "Sun, 29 Mar 2020 10:33:24 GMT",
"version": "v1"
}
] |
2020-04-01
|
[
[
"Koubaa",
"Anis",
""
]
] |
The Coronavirus, also known as the COVID-19 virus, has emerged in Wuhan China since late November 2019. Since that time, it has been spreading at large-scale until today all around the world. It is currently recognized as the world's most viral and severe epidemic spread in the last twenty years, as compared to Ebola 2014, MERS 2012, and SARS 2003. Despite being still in the middle of the outbreak, there is an urgent need to understand the impact of COVID-19. The objective is to clarify how it was spread so fast in a short time worldwide in unprecedented fashion. This paper represents a first initiative to achieve this goal, and it provides a comprehensive analytical study about the Coronavirus. The contribution of this paper consists in providing descriptive and predictive models that give insights into COVID-19 impact through the analysis of extensive data updated daily for the outbreak in all countries. We aim at answering several open questions: How does COVID-19 spread around the world? What is its impact in terms of confirmed and death cases at the continent, region, and country levels? How does its severity compare with other epidemic outbreaks, including Ebola 2014, MERS 2012, and SARS 2003? Is there a correlation between the number of confirmed cases and death cases? We present a comprehensive analytics visualization to address the questions mentioned above. To the best of our knowledge, this is the first systematic analytical paper that paves the way towards a better understanding of COVID-19. The analytical dashboards and collected data of this study are available online [1].
|
2309.06388
|
Chunyan Ao
|
Chunyan Ao, Zhichao Xiao, Lixin Guan, Liang Yu
|
Computational Approaches for Predicting Drug-Disease Associations: A
Comprehensive Review
|
34 page, 5 figures, 2 tables
| null | null | null |
q-bio.QM cs.LG
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent decades, traditional drug research and development have been facing
challenges such as high cost, long timelines, and high risks. To address these
issues, many computational approaches have been suggested for predicting the
relationship between drugs and diseases through drug repositioning, aiming to
reduce the cost, development cycle, and risks associated with developing new
drugs. Researchers have explored different computational methods to predict
drug-disease associations, including drug side effects-disease associations,
drug-target associations, and miRNA-disease associations. In this comprehensive
review, we focus on recent advances in predicting drug-disease association
methods for drug repositioning. We first categorize these methods into several
groups, including neural network-based algorithms, matrix-based algorithms,
recommendation algorithms, link-based reasoning algorithms, and text mining and
semantic reasoning. Then, we compare the prediction performance of existing
drug-disease association prediction algorithms. Lastly, we delve into the
present challenges and future prospects concerning drug-disease associations.
|
[
{
"created": "Sun, 10 Sep 2023 11:34:29 GMT",
"version": "v1"
}
] |
2023-09-13
|
[
[
"Ao",
"Chunyan",
""
],
[
"Xiao",
"Zhichao",
""
],
[
"Guan",
"Lixin",
""
],
[
"Yu",
"Liang",
""
]
] |
In recent decades, traditional drug research and development have been facing challenges such as high cost, long timelines, and high risks. To address these issues, many computational approaches have been suggested for predicting the relationship between drugs and diseases through drug repositioning, aiming to reduce the cost, development cycle, and risks associated with developing new drugs. Researchers have explored different computational methods to predict drug-disease associations, including drug side effects-disease associations, drug-target associations, and miRNA-disease associations. In this comprehensive review, we focus on recent advances in predicting drug-disease association methods for drug repositioning. We first categorize these methods into several groups, including neural network-based algorithms, matrix-based algorithms, recommendation algorithms, link-based reasoning algorithms, and text mining and semantic reasoning. Then, we compare the prediction performance of existing drug-disease association prediction algorithms. Lastly, we delve into the present challenges and future prospects concerning drug-disease associations.
|
2406.05248
|
Paul Taylor
|
Richard C. Reynolds, Daniel R. Glen, Gang Chen, Ziad S. Saad, Robert
W. Cox, Paul A. Taylor
|
Processing, evaluating and understanding FMRI data with afni_proc.py
|
52 pages, 10 figures, 6 tables
| null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by/4.0/
|
FMRI data are noisy, complicated to acquire, and typically go through many
steps of processing before they are used in a study or clinical practice. Being
able to visualize and understand the data from the start through the completion
of processing, while being confident that each intermediate step was
successful, is challenging. AFNI's "afni_proc.py" is a tool to create and run a
processing pipeline for FMRI data. With its flexible features, "afni_proc.py"
allows users to both control and evaluate their processing at a detailed level.
It has been designed to keep users informed about all processing steps: it does
not just process the data, but first outputs a fully commented processing
script that the users can read, query, interpret and refer back to. Having this
full provenance is important for being able to understand each step of
processing; it also promotes transparency and reproducibility by keeping the
record of individual-level processing and modeling specifics in a single,
shareable place. Additionally, "afni_proc.py" creates pipelines that contain
several automatic self-checks for potential problems during runtime. The output
directory contains a dictionary of relevant quantities that can be
programmatically queried for potential issues and a systematic, interactive
quality control (QC) HTML. All of these features help users evaluate and
understand their data and processing in detail. We describe these and other
aspects of "afni_proc.py" here using a set of task-based and resting state FMRI
example commands.
|
[
{
"created": "Fri, 7 Jun 2024 20:14:52 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Jun 2024 02:20:54 GMT",
"version": "v2"
}
] |
2024-06-12
|
[
[
"Reynolds",
"Richard C.",
""
],
[
"Glen",
"Daniel R.",
""
],
[
"Chen",
"Gang",
""
],
[
"Saad",
"Ziad S.",
""
],
[
"Cox",
"Robert W.",
""
],
[
"Taylor",
"Paul A.",
""
]
] |
FMRI data are noisy, complicated to acquire, and typically go through many steps of processing before they are used in a study or clinical practice. Being able to visualize and understand the data from the start through the completion of processing, while being confident that each intermediate step was successful, is challenging. AFNI's "afni_proc.py" is a tool to create and run a processing pipeline for FMRI data. With its flexible features, "afni_proc.py" allows users to both control and evaluate their processing at a detailed level. It has been designed to keep users informed about all processing steps: it does not just process the data, but first outputs a fully commented processing script that the users can read, query, interpret and refer back to. Having this full provenance is important for being able to understand each step of processing; it also promotes transparency and reproducibility by keeping the record of individual-level processing and modeling specifics in a single, shareable place. Additionally, "afni_proc.py" creates pipelines that contain several automatic self-checks for potential problems during runtime. The output directory contains a dictionary of relevant quantities that can be programmatically queried for potential issues and a systematic, interactive quality control (QC) HTML. All of these features help users evaluate and understand their data and processing in detail. We describe these and other aspects of "afni_proc.py" here using a set of task-based and resting state FMRI example commands.
|
1204.0831
|
Iaroslav Ispolatov
|
Michael Doebeli and Iaroslav Ispolatov
|
Symmetric competition as a general model for single-species adaptive
dynamics
|
26 pages, 1 figure
|
J. Math. Biol. (2013) 67: 169
|
10.1007/s00285-012-0547-4
| null |
q-bio.PE cond-mat.stat-mech
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Adaptive dynamics is a widely used framework for modeling long-term evolution
of continuous phenotypes. It is based on invasion fitness functions, which
determine selection gradients and the canonical equation of adaptive dynamics.
Even though the derivation of the adaptive dynamics from a given invasion
fitness function is general and model-independent, the derivation of the
invasion fitness function itself requires specification of an underlying
ecological model. Therefore, evolutionary insights gained from adaptive
dynamics models are generally model-dependent. Logistic models for symmetric,
frequency-dependent competition are widely used in this context. Such models
have the property that the selection gradients derived from them are gradients
of scalar functions, which reflects a certain gradient property of the
corresponding invasion fitness function. We show that any adaptive dynamics
model that is based on an invasion fitness function with this gradient
property can be transformed into a generalized symmetric competition model.
This provides a precise delineation of the generality of results derived from
competition models. Roughly speaking, to understand the adaptive dynamics of
the class of models satisfying a certain gradient condition, one only needs a
complete understanding of the adaptive dynamics of symmetric,
frequency-dependent competition. We show how this result can be applied to
a number of basic issues in evolutionary theory.
|
[
{
"created": "Tue, 3 Apr 2012 23:11:09 GMT",
"version": "v1"
}
] |
2017-02-07
|
[
[
"Doebeli",
"Michael",
""
],
[
"Ispolatov",
"Iaroslav",
""
]
] |
Adaptive dynamics is a widely used framework for modeling long-term evolution of continuous phenotypes. It is based on invasion fitness functions, which determine selection gradients and the canonical equation of adaptive dynamics. Even though the derivation of the adaptive dynamics from a given invasion fitness function is general and model-independent, the derivation of the invasion fitness function itself requires specification of an underlying ecological model. Therefore, evolutionary insights gained from adaptive dynamics models are generally model-dependent. Logistic models for symmetric, frequency-dependent competition are widely used in this context. Such models have the property that the selection gradients derived from them are gradients of scalar functions, which reflects a certain gradient property of the corresponding invasion fitness function. We show that any adaptive dynamics model that is based on an invasion fitness function with this gradient property can be transformed into a generalized symmetric competition model. This provides a precise delineation of the generality of results derived from competition models. Roughly speaking, to understand the adaptive dynamics of the class of models satisfying a certain gradient condition, one only needs a complete understanding of the adaptive dynamics of symmetric, frequency-dependent competition. We show how this result can be applied to a number of basic issues in evolutionary theory.
|
1204.2255
|
Tijana Milenkovic
|
Ryan W. Solava, Ryan P. Michaels, Tijana Milenkovic
|
Identifying edge clusters in networks via edge graphlet degree vectors
(edge-GDVs) and edge-GDV-similarities
| null | null | null | null |
q-bio.MN cs.DM cs.SI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Inference of new biological knowledge, e.g., prediction of protein function,
from protein-protein interaction (PPI) networks has received attention in the
post-genomic era. A popular strategy has been to cluster the network into
functionally coherent groups of proteins and predict protein function from the
clusters. Traditionally, network research has focused on clustering of nodes.
However, why favor nodes over edges, when clustering of edges may be preferred?
For example, nodes belong to multiple functional groups, but clustering of
nodes typically cannot capture the group overlap, while clustering of edges
can. Clustering of adjacent edges that share many neighbors was proposed
recently, outperforming different node clustering methods. However, since some
biological processes can have characteristic "signatures" throughout the
network, not just locally, it may be of interest to consider edges that are not
necessarily adjacent. Hence, we design a sensitive measure of the "topological
similarity" of edges that can deal with edges that are not necessarily
adjacent. We cluster edges that are similar according to our measure in
different baker's yeast PPI networks, outperforming existing node and edge
clustering approaches.
|
[
{
"created": "Tue, 10 Apr 2012 19:32:06 GMT",
"version": "v1"
}
] |
2012-04-11
|
[
[
"Solava",
"Ryan W.",
""
],
[
"Michaels",
"Ryan P.",
""
],
[
"Milenkovic",
"Tijana",
""
]
] |
Inference of new biological knowledge, e.g., prediction of protein function, from protein-protein interaction (PPI) networks has received attention in the post-genomic era. A popular strategy has been to cluster the network into functionally coherent groups of proteins and predict protein function from the clusters. Traditionally, network research has focused on clustering of nodes. However, why favor nodes over edges, when clustering of edges may be preferred? For example, nodes belong to multiple functional groups, but clustering of nodes typically cannot capture the group overlap, while clustering of edges can. Clustering of adjacent edges that share many neighbors was proposed recently, outperforming different node clustering methods. However, since some biological processes can have characteristic "signatures" throughout the network, not just locally, it may be of interest to consider edges that are not necessarily adjacent. Hence, we design a sensitive measure of the "topological similarity" of edges that can deal with edges that are not necessarily adjacent. We cluster edges that are similar according to our measure in different baker's yeast PPI networks, outperforming existing node and edge clustering approaches.
|
1305.1573
|
Alexander Mathis
|
Alexander Mathis, Andreas V.M. Herz, Martin B. Stemmler
|
Multi-Scale Codes in the Nervous System: The Problem of Noise
Correlations and the Ambiguity of Periodic Scales
|
11 pages, 9 figures
|
Phys. Rev. E 88, 022713 2013
|
10.1103/PhysRevE.88.022713
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Encoding information about continuous variables using noisy computational
units is a challenge; nonetheless, asymptotic theory shows that combining
multiple periodic scales for coding can be highly precise despite the
corrupting influence of noise (Mathis et al., Phys. Rev. Lett. 2012). Indeed,
cortex seems to use such stochastic multi-scale periodic `grid codes' to
represent position accurately. We show here how these codes can be read out
without taking the asymptotic limit; even on short time scales, the precision
of neuronal grid codes scales exponentially in the number N of neurons. Does
this finding also hold for neurons that are not statistically independent? To
assess the extent to which biological grid codes are subject to statistical
dependencies, we analyze the noise correlations between pairs of grid code
neurons in behaving rodents. We find that if the grids of the two neurons align
and have the same length scale, the noise correlations between the neurons can
reach 0.8. For increasing mismatches between the grids of the two neurons, the
noise correlations fall rapidly. Incorporating such correlations into a
population coding model reveals that the correlations lessen the resolution,
but the exponential scaling of resolution with N is unaffected.
|
[
{
"created": "Tue, 7 May 2013 16:24:41 GMT",
"version": "v1"
},
{
"created": "Tue, 28 May 2013 13:33:40 GMT",
"version": "v2"
}
] |
2013-08-22
|
[
[
"Mathis",
"Alexander",
""
],
[
"Herz",
"Andreas V. M.",
""
],
[
"Stemmler",
"Martin B.",
""
]
] |
Encoding information about continuous variables using noisy computational units is a challenge; nonetheless, asymptotic theory shows that combining multiple periodic scales for coding can be highly precise despite the corrupting influence of noise (Mathis et al., Phys. Rev. Lett. 2012). Indeed, cortex seems to use such stochastic multi-scale periodic `grid codes' to represent position accurately. We show here how these codes can be read out without taking the asymptotic limit; even on short time scales, the precision of neuronal grid codes scales exponentially in the number N of neurons. Does this finding also hold for neurons that are not statistically independent? To assess the extent to which biological grid codes are subject to statistical dependencies, we analyze the noise correlations between pairs of grid code neurons in behaving rodents. We find that if the grids of the two neurons align and have the same length scale, the noise correlations between the neurons can reach 0.8. For increasing mismatches between the grids of the two neurons, the noise correlations fall rapidly. Incorporating such correlations into a population coding model reveals that the correlations lessen the resolution, but the exponential scaling of resolution with N is unaffected.
|
2106.04089
|
David Clark
|
David G. Clark, L. F. Abbott, SueYeon Chung
|
Credit Assignment Through Broadcasting a Global Error Vector
|
20 pages, 6 figures; expanded references and discussion
| null | null | null |
q-bio.NC cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Backpropagation (BP) uses detailed, unit-specific feedback to train deep
neural networks (DNNs) with remarkable success. That biological neural circuits
appear to perform credit assignment, but cannot implement BP, implies the
existence of other powerful learning algorithms. Here, we explore the extent to
which a globally broadcast learning signal, coupled with local weight updates,
enables training of DNNs. We present both a learning rule, called global
error-vector broadcasting (GEVB), and a class of DNNs, called vectorized
nonnegative networks (VNNs), in which this learning rule operates. VNNs have
vector-valued units and nonnegative weights past the first layer. The GEVB
learning rule generalizes three-factor Hebbian learning, updating each weight
by an amount proportional to the inner product of the presynaptic activation
and a globally broadcast error vector when the postsynaptic unit is active. We
prove that these weight updates are matched in sign to the gradient, enabling
accurate credit assignment. Moreover, at initialization, these updates are
exactly proportional to the gradient in the limit of infinite network width.
GEVB matches the performance of BP in VNNs, and in some cases outperforms
direct feedback alignment (DFA) applied in conventional networks. Unlike DFA,
GEVB successfully trains convolutional layers. Altogether, our theoretical and
empirical results point to a surprisingly powerful role for a global learning
signal in training DNNs.
|
[
{
"created": "Tue, 8 Jun 2021 04:08:46 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Oct 2021 20:31:37 GMT",
"version": "v2"
}
] |
2021-11-01
|
[
[
"Clark",
"David G.",
""
],
[
"Abbott",
"L. F.",
""
],
[
"Chung",
"SueYeon",
""
]
] |
Backpropagation (BP) uses detailed, unit-specific feedback to train deep neural networks (DNNs) with remarkable success. That biological neural circuits appear to perform credit assignment, but cannot implement BP, implies the existence of other powerful learning algorithms. Here, we explore the extent to which a globally broadcast learning signal, coupled with local weight updates, enables training of DNNs. We present both a learning rule, called global error-vector broadcasting (GEVB), and a class of DNNs, called vectorized nonnegative networks (VNNs), in which this learning rule operates. VNNs have vector-valued units and nonnegative weights past the first layer. The GEVB learning rule generalizes three-factor Hebbian learning, updating each weight by an amount proportional to the inner product of the presynaptic activation and a globally broadcast error vector when the postsynaptic unit is active. We prove that these weight updates are matched in sign to the gradient, enabling accurate credit assignment. Moreover, at initialization, these updates are exactly proportional to the gradient in the limit of infinite network width. GEVB matches the performance of BP in VNNs, and in some cases outperforms direct feedback alignment (DFA) applied in conventional networks. Unlike DFA, GEVB successfully trains convolutional layers. Altogether, our theoretical and empirical results point to a surprisingly powerful role for a global learning signal in training DNNs.
|
q-bio/0702007
|
Joel Miller
|
Joel C. Miller
|
Predicting the size and probability of epidemics in a population with
heterogeneous infectiousness and susceptibility
|
5 pages, 3 figures. Submitted to Physical Review Letters
| null |
10.1103/PhysRevE.76.010101
|
LA-UR-06-8193
|
q-bio.QM q-bio.PE
| null |
We analytically address disease outbreaks in large, random networks with
heterogeneous infectivity and susceptibility. The transmissibility $T_{uv}$
(the probability that infection of $u$ causes infection of $v$) depends on the
infectivity of $u$ and the susceptibility of $v$. Initially a single node is
infected, following which a large-scale epidemic may or may not occur. We use a
generating function approach to study how heterogeneity affects the probability
that an epidemic occurs and, if one occurs, its attack rate (the fraction
infected). For fixed average transmissibility, we find upper and lower bounds
on these. An epidemic is most likely if infectivity is homogeneous and least
likely if the variance of infectivity is maximized. Similarly, the attack rate
is largest if susceptibility is homogeneous and smallest if the variance is
maximized. We further show that heterogeneity in infectious period is
important, contrary to assumptions of previous studies. We confirm our
theoretical predictions by simulation. Our results have implications for
control strategy design and identification of populations at higher risk from
an epidemic.
|
[
{
"created": "Tue, 6 Feb 2007 01:17:33 GMT",
"version": "v1"
}
] |
2009-11-13
|
[
[
"Miller",
"Joel C.",
""
]
] |
We analytically address disease outbreaks in large, random networks with heterogeneous infectivity and susceptibility. The transmissibility $T_{uv}$ (the probability that infection of $u$ causes infection of $v$) depends on the infectivity of $u$ and the susceptibility of $v$. Initially a single node is infected, following which a large-scale epidemic may or may not occur. We use a generating function approach to study how heterogeneity affects the probability that an epidemic occurs and, if one occurs, its attack rate (the fraction infected). For fixed average transmissibility, we find upper and lower bounds on these. An epidemic is most likely if infectivity is homogeneous and least likely if the variance of infectivity is maximized. Similarly, the attack rate is largest if susceptibility is homogeneous and smallest if the variance is maximized. We further show that heterogeneity in infectious period is important, contrary to assumptions of previous studies. We confirm our theoretical predictions by simulation. Our results have implications for control strategy design and identification of populations at higher risk from an epidemic.
|
1708.01751
|
Hector Zenil
|
Hector Zenil and Peter Minary
|
Training-free Measures Based on Algorithmic Probability Identify High
Nucleosome Occupancy in DNA Sequences
|
8 pages main text (4 figures), 12 total with Supplementary (1 figure)
| null | null | null |
q-bio.QM cs.IT math.IT q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce and study a set of training-free methods of
information-theoretic and algorithmic complexity nature applied to DNA
sequences to identify their potential capabilities to determine nucleosomal
binding sites. We test our measures on well-studied genomic sequences of
different sizes drawn from different sources. The measures reveal the known in
vivo versus in vitro predictive discrepancies and uncover their potential to
pinpoint (high) nucleosome occupancy. We explore different possible signals
within and beyond the nucleosome length and find that complexity indices are
informative of nucleosome occupancy. We compare against the gold standard
(Kaplan model) and find similar and complementary results, the main
difference being that our approach is based on sequence complexity. For
example, for high occupancy, complexity-based scores outperform the Kaplan
model in predicting binding, representing a significant advancement in
predicting the highest nucleosome occupancy with a training-free approach.
|
[
{
"created": "Sat, 5 Aug 2017 11:09:46 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Aug 2017 14:00:56 GMT",
"version": "v2"
},
{
"created": "Tue, 16 Oct 2018 20:29:14 GMT",
"version": "v3"
}
] |
2018-10-18
|
[
[
"Zenil",
"Hector",
""
],
[
"Minary",
"Peter",
""
]
] |
We introduce and study a set of training-free methods of information-theoretic and algorithmic complexity nature applied to DNA sequences to identify their potential capabilities to determine nucleosomal binding sites. We test our measures on well-studied genomic sequences of different sizes drawn from different sources. The measures reveal the known in vivo versus in vitro predictive discrepancies and uncover their potential to pinpoint (high) nucleosome occupancy. We explore different possible signals within and beyond the nucleosome length and find that complexity indices are informative of nucleosome occupancy. We compare against the gold standard (Kaplan model) and find similar and complementary results, the main difference being that our approach is based on sequence complexity. For example, for high occupancy, complexity-based scores outperform the Kaplan model in predicting binding, representing a significant advancement in predicting the highest nucleosome occupancy with a training-free approach.
|
1905.04224
|
Minhaj Nur Alam
|
Minhaj Alam, David Le, Jennifer I. Lim, R.V.P. Chan, and Xincheng Yao
|
Supervised machine learning based multi-task artificial intelligence
classification of retinopathies
|
Supplemental material attached at the end
|
https://www.mdpi.com/2077-0383/8/6/872
|
10.3390/jcm8060872
| null |
q-bio.QM eess.IV q-bio.TO
|
http://creativecommons.org/licenses/by/4.0/
|
Artificial intelligence (AI) classification holds promise as a novel and
affordable screening tool for clinical management of ocular diseases. Rural and
underserved areas, which suffer from a lack of access to experienced
ophthalmologists, may particularly benefit from this technology. Quantitative
optical coherence tomography angiography (OCTA) imaging provides excellent
capability to identify subtle vascular distortions, which are useful for
classifying retinovascular diseases. However, application of AI for
differentiation and classification of multiple eye diseases is not yet
established. In this study, we demonstrate supervised machine learning based
multi-task OCTA classification. We sought 1) to differentiate normal from
diseased ocular conditions, 2) to differentiate different ocular disease
conditions from each other, and 3) to stage the severity of each ocular
condition. Quantitative OCTA features, including blood vessel tortuosity (BVT),
blood vascular caliber (BVC), vessel perimeter index (VPI), blood vessel
density (BVD), foveal avascular zone (FAZ) area (FAZ-A), and FAZ contour
irregularity (FAZ-CI) were fully automatically extracted from the OCTA images.
A stepwise backward elimination approach was employed to identify sensitive
OCTA features and optimal-feature-combinations for the multi-task
classification. For proof-of-concept demonstration, diabetic retinopathy (DR)
and sickle cell retinopathy (SCR) were used to validate the supervised machine
learning classifier. The presented AI classification methodology is applicable
and can be readily extended to other ocular diseases, holding promise to enable
a mass-screening platform for clinical deployment and telemedicine.
|
[
{
"created": "Fri, 10 May 2019 15:47:31 GMT",
"version": "v1"
}
] |
2019-06-19
|
[
[
"Alam",
"Minhaj",
""
],
[
"Le",
"David",
""
],
[
"Lim",
"Jennifer I.",
""
],
[
"Chan",
"R. V. P.",
""
],
[
"Yao",
"Xincheng",
""
]
] |
Artificial intelligence (AI) classification holds promise as a novel and affordable screening tool for clinical management of ocular diseases. Rural and underserved areas, which suffer from a lack of access to experienced ophthalmologists, may particularly benefit from this technology. Quantitative optical coherence tomography angiography (OCTA) imaging provides excellent capability to identify subtle vascular distortions, which are useful for classifying retinovascular diseases. However, application of AI for differentiation and classification of multiple eye diseases is not yet established. In this study, we demonstrate supervised machine learning based multi-task OCTA classification. We sought 1) to differentiate normal from diseased ocular conditions, 2) to differentiate different ocular disease conditions from each other, and 3) to stage the severity of each ocular condition. Quantitative OCTA features, including blood vessel tortuosity (BVT), blood vascular caliber (BVC), vessel perimeter index (VPI), blood vessel density (BVD), foveal avascular zone (FAZ) area (FAZ-A), and FAZ contour irregularity (FAZ-CI) were fully automatically extracted from the OCTA images. A stepwise backward elimination approach was employed to identify sensitive OCTA features and optimal-feature-combinations for the multi-task classification. For proof-of-concept demonstration, diabetic retinopathy (DR) and sickle cell retinopathy (SCR) were used to validate the supervised machine learning classifier. The presented AI classification methodology is applicable and can be readily extended to other ocular diseases, holding promise to enable a mass-screening platform for clinical deployment and telemedicine.
|
1604.02919
|
Silvia Grigolon
|
Silvia Grigolon, Francesca Di Patti, Andrea De Martino, Enzo Marinari
|
Noise Processing by MicroRNA-Mediated Circuits: the Incoherent
Feed-Forward Loop, Revisited
|
25 pages (Main Text and Supplementary Information), 5 figures
|
Heliyon 2 (2016) e00095
|
10.1016/j.heliyon.2016.e00095
| null |
q-bio.MN cond-mat.stat-mech physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The intrinsic stochasticity of gene expression is usually mitigated in higher
eukaryotes by post-transcriptional regulation channels that stabilise the
output layer, most notably protein levels. The discovery of small non-coding
RNAs (miRNAs) in specific motifs of the genetic regulatory network has led to
identifying noise buffering as the possible key function they exert in
regulation. Recent in vitro and in silico studies have corroborated this
hypothesis. It is however also known that miRNA-mediated noise reduction is
hampered by transcriptional bursting in simple topologies. Here, using
stochastic simulations validated by analytical calculations based on van
Kampen's expansion, we revisit the noise-buffering capacity of the
miRNA-mediated Incoherent Feed Forward Loop (IFFL), a small module that is
widespread in the gene regulatory networks of higher eukaryotes, in order to
account for the effects of intermittency in the transcriptional activity of the
modulator gene. We show that bursting considerably alters the circuit's ability
to control static protein noise. By comparing with other regulatory
architectures, we find that direct transcriptional regulation significantly
outperforms the IFFL in a broad range of kinetic parameters. This suggests
that, under pulsatile inputs, static noise reduction may be less important than
dynamical aspects of noise and information processing in characterising the
performance of regulatory elements.
|
[
{
"created": "Mon, 11 Apr 2016 12:42:53 GMT",
"version": "v1"
}
] |
2016-04-12
|
[
[
"Grigolon",
"Silvia",
""
],
[
"Di Patti",
"Francesca",
""
],
[
"De Martino",
"Andrea",
""
],
[
"Marinari",
"Enzo",
""
]
] |
The intrinsic stochasticity of gene expression is usually mitigated in higher eukaryotes by post-transcriptional regulation channels that stabilise the output layer, most notably protein levels. The discovery of small non-coding RNAs (miRNAs) in specific motifs of the genetic regulatory network has led to identifying noise buffering as the possible key function they exert in regulation. Recent in vitro and in silico studies have corroborated this hypothesis. It is however also known that miRNA-mediated noise reduction is hampered by transcriptional bursting in simple topologies. Here, using stochastic simulations validated by analytical calculations based on van Kampen's expansion, we revisit the noise-buffering capacity of the miRNA-mediated Incoherent Feed Forward Loop (IFFL), a small module that is widespread in the gene regulatory networks of higher eukaryotes, in order to account for the effects of intermittency in the transcriptional activity of the modulator gene. We show that bursting considerably alters the circuit's ability to control static protein noise. By comparing with other regulatory architectures, we find that direct transcriptional regulation significantly outperforms the IFFL in a broad range of kinetic parameters. This suggests that, under pulsatile inputs, static noise reduction may be less important than dynamical aspects of noise and information processing in characterising the performance of regulatory elements.
|
1304.5404
|
Corentin Briat Dr
|
Ankit Gupta, Corentin Briat and Mustafa Khammash
|
A scalable computational framework for establishing long-term behavior
of stochastic reaction networks
|
31 pages, 9 figures
| null |
10.1371/journal.pcbi.1003669
| null |
q-bio.MN cs.SY math.OC math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Reaction networks are systems in which the populations of a finite number of
species evolve through predefined interactions. Such networks are found as
modeling tools in many biological disciplines such as biochemistry, ecology,
epidemiology, immunology, systems biology and synthetic biology. It is now
well-established that, for small population sizes, stochastic models for
biochemical reaction networks are necessary to capture randomness in the
interactions. The tools for analyzing such models, however, still lag far
behind their deterministic counterparts. In this paper, we bridge this gap by
developing a constructive framework for examining the long-term behavior and
stability properties of the reaction dynamics in a stochastic setting. In
particular, we address the problems of determining ergodicity of the reaction
dynamics, which is analogous to having a globally attracting fixed point for
deterministic dynamics. We also examine when the statistical moments of the
underlying process remain bounded with time and when they converge to their
steady state values. The framework we develop relies on a blend of ideas from
probability theory, linear algebra and optimization theory. We demonstrate that
the stability properties of a wide class of biological networks can be assessed
from our sufficient theoretical conditions that can be recast as efficient and
scalable linear programs, well-known for their tractability. It is notably
shown that the computational complexity is often linear in the number of
species. We illustrate the validity, the efficiency and the wide applicability
of our results on several reaction networks arising in biochemistry, systems
biology, epidemiology and ecology. The biological implications of the results
as well as an example of a non-ergodic biological network are also discussed.
|
[
{
"created": "Fri, 19 Apr 2013 12:59:26 GMT",
"version": "v1"
},
{
"created": "Mon, 4 Nov 2013 05:52:53 GMT",
"version": "v2"
},
{
"created": "Mon, 5 May 2014 14:27:13 GMT",
"version": "v3"
}
] |
2015-06-15
|
[
[
"Gupta",
"Ankit",
""
],
[
"Briat",
"Corentin",
""
],
[
"Khammash",
"Mustafa",
""
]
] |
Reaction networks are systems in which the populations of a finite number of species evolve through predefined interactions. Such networks are found as modeling tools in many biological disciplines such as biochemistry, ecology, epidemiology, immunology, systems biology and synthetic biology. It is now well-established that, for small population sizes, stochastic models for biochemical reaction networks are necessary to capture randomness in the interactions. The tools for analyzing such models, however, still lag far behind their deterministic counterparts. In this paper, we bridge this gap by developing a constructive framework for examining the long-term behavior and stability properties of the reaction dynamics in a stochastic setting. In particular, we address the problems of determining ergodicity of the reaction dynamics, which is analogous to having a globally attracting fixed point for deterministic dynamics. We also examine when the statistical moments of the underlying process remain bounded with time and when they converge to their steady state values. The framework we develop relies on a blend of ideas from probability theory, linear algebra and optimization theory. We demonstrate that the stability properties of a wide class of biological networks can be assessed from our sufficient theoretical conditions that can be recast as efficient and scalable linear programs, well-known for their tractability. It is notably shown that the computational complexity is often linear in the number of species. We illustrate the validity, the efficiency and the wide applicability of our results on several reaction networks arising in biochemistry, systems biology, epidemiology and ecology. The biological implications of the results as well as an example of a non-ergodic biological network are also discussed.
|
0711.0175
|
Razvan Radulescu M.D.
|
Razvan Tudor Radulescu
|
The insulin-RB synapse in health and disease: cellular rocket science
|
12 pages
| null | null | null |
q-bio.BM q-bio.SC
| null |
The time has come for a survey of our knowledge of the physical interaction
between the growth-promoting insulin molecule and the retinoblastoma tumor
suppressor protein (RB). Theoretical and experimental observations over the
past 15 years reviewed here indicate that the insulin-RB dimer may represent an
essential molecular crossroads involved in major physiological and pathological
conditions. Within this system, the putative tumor suppressor insulin-degrading
enzyme (IDE) should be an important modulator. Perhaps most remarkably, the
abstraction of this encounter between insulin and RB, two growth-regulatory
giants acting either in concert or against each other depending on the
respective cellular requirements, reveals that Nature may compute in
controlling cell fate and we could follow in its footsteps towards developing
more efficient therapeutics as well as novel technical devices.
|
[
{
"created": "Thu, 1 Nov 2007 19:53:44 GMT",
"version": "v1"
}
] |
2007-11-02
|
[
[
"Radulescu",
"Razvan Tudor",
""
]
] |
The time has come for a survey of our knowledge of the physical interaction between the growth-promoting insulin molecule and the retinoblastoma tumor suppressor protein (RB). Theoretical and experimental observations over the past 15 years reviewed here indicate that the insulin-RB dimer may represent an essential molecular crossroads involved in major physiological and pathological conditions. Within this system, the putative tumor suppressor insulin-degrading enzyme (IDE) should be an important modulator. Perhaps most remarkably, the abstraction of this encounter between insulin and RB, two growth-regulatory giants acting either in concert or against each other depending on the respective cellular requirements, reveals that Nature may compute in controlling cell fate and we could follow in its footsteps towards developing more efficient therapeutics as well as novel technical devices.
|
1810.12062
|
Max Falkenberg McGillivray
|
Max Falkenberg, Andrew J. Ford, Anthony C. Li, Alberto Ciacci,
Nicholas S. Peters, Kim Christensen
|
Unified Mechanism of Atrial Fibrillation in a Simple Model
| null |
Phys. Rev. E 100, 062406 (2019)
|
10.1103/PhysRevE.100.062406
| null |
q-bio.TO physics.med-ph
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
The mechanism of atrial fibrillation (AF) is poorly understood, resulting in
disappointing success rates of ablative treatment. Different mechanisms defined
largely by different atrial activation patterns have been proposed and,
arguably, this dispute has slowed the progress of AF research. Recent clinical
evidence suggests a unifying mechanism based on sustained re-entrant circuits
in the complex atrial architecture. Here, we present a simple computational
model showing spontaneous emergence of AF that strongly supports, and gives a
theoretical explanation for, the clinically observed diversity of activation.
We show that the difference in surface activation patterns is a direct
consequence of the thickness of the discrete network of heart muscle cells
through which electrical signals percolate to reach the imaged surface. The
model naturally follows the clinical spectrum of AF spanning sinus rhythm,
paroxysmal and persistent AF as the decoupling of myocardial cells results in
the lattice approaching the percolation threshold. This allows the model to
make additional predictions beyond the current clinical understanding, showing
that for paroxysmal AF re-entrant circuits emerge near the endocardium, but in
persistent AF they emerge deeper in the bulk of the atrial wall where
endocardial ablation is less effective. If clinically confirmed, this may
explain the lower success rate of ablation in long-lasting persistent AF.
|
[
{
"created": "Mon, 29 Oct 2018 11:32:52 GMT",
"version": "v1"
}
] |
2020-01-27
|
[
[
"Falkenberg",
"Max",
""
],
[
"Ford",
"Andrew J.",
""
],
[
"Li",
"Anthony C.",
""
],
[
"Ciacci",
"Alberto",
""
],
[
"Peters",
"Nicholas S.",
""
],
[
"Christensen",
"Kim",
""
]
] |
The mechanism of atrial fibrillation (AF) is poorly understood, resulting in disappointing success rates of ablative treatment. Different mechanisms defined largely by different atrial activation patterns have been proposed and, arguably, this dispute has slowed the progress of AF research. Recent clinical evidence suggests a unifying mechanism based on sustained re-entrant circuits in the complex atrial architecture. Here, we present a simple computational model showing spontaneous emergence of AF that strongly supports, and gives a theoretical explanation for, the clinically observed diversity of activation. We show that the difference in surface activation patterns is a direct consequence of the thickness of the discrete network of heart muscle cells through which electrical signals percolate to reach the imaged surface. The model naturally follows the clinical spectrum of AF spanning sinus rhythm, paroxysmal and persistent AF as the decoupling of myocardial cells results in the lattice approaching the percolation threshold. This allows the model to make additional predictions beyond the current clinical understanding, showing that for paroxysmal AF re-entrant circuits emerge near the endocardium, but in persistent AF they emerge deeper in the bulk of the atrial wall where endocardial ablation is less effective. If clinically confirmed, this may explain the lower success rate of ablation in long-lasting persistent AF.
|
1211.0104
|
Suman Kumar Banik
|
Alok Kumar Maity, Arnab Bandyopadhyay, Sudip Chattopadhyay,
Jyotipratim Ray Chaudhuri, Ralf Metzler, Pinaki Chaudhury and Suman K Banik
|
Quantification of noise in the bifunctionality-induced
post-translational modification
|
Revised version, 7 pages, 5 figures
|
Phys Rev E 88 (2013) 032716
|
10.1103/PhysRevE.88.032716
| null |
q-bio.SC physics.bio-ph q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a generic analytical scheme for the quantification of fluctuations
due to bifunctionality-induced signal transduction within the members of a
bacterial two-component system. The proposed model takes into account
post-translational modifications in terms of elementary phosphotransfer
kinetics. Sources of fluctuations due to the autophosphorylation, kinase and
phosphatase activity of the sensor kinase have been considered in the model
via Langevin equations, which are then solved within the framework of the
linear noise approximation. The resultant analytical expressions of the
phosphorylated response regulators are then used to quantify the noise profile
of biologically motivated single and branched pathways. Enhancement and
reduction of noise in terms of extra phosphate outflux and influx,
respectively, have been analyzed for the branched system. Furthermore, the
role of fluctuations of the network output in the regulation of a promoter
with random activation/deactivation dynamics has been analyzed.
|
[
{
"created": "Thu, 1 Nov 2012 06:19:23 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Sep 2013 05:35:52 GMT",
"version": "v2"
}
] |
2013-10-02
|
[
[
"Maity",
"Alok Kumar",
""
],
[
"Bandyopadhyay",
"Arnab",
""
],
[
"Chattopadhyay",
"Sudip",
""
],
[
"Chaudhuri",
"Jyotipratim Ray",
""
],
[
"Metzler",
"Ralf",
""
],
[
"Chaudhury",
"Pinaki",
""
],
[
"Banik",
"Suman K",
""
]
] |
We present a generic analytical scheme for the quantification of fluctuations due to bifunctionality-induced signal transduction within the members of a bacterial two-component system. The proposed model takes into account post-translational modifications in terms of elementary phosphotransfer kinetics. Sources of fluctuations due to the autophosphorylation, kinase and phosphatase activity of the sensor kinase have been considered in the model via Langevin equations, which are then solved within the framework of the linear noise approximation. The resultant analytical expressions of the phosphorylated response regulators are then used to quantify the noise profile of biologically motivated single and branched pathways. Enhancement and reduction of noise in terms of extra phosphate outflux and influx, respectively, have been analyzed for the branched system. Furthermore, the role of fluctuations of the network output in the regulation of a promoter with random activation/deactivation dynamics has been analyzed.
|
2212.02864
|
Adnan Ferdous Ashrafi
|
Mehedi Hasan Sarkar, Adnan Ferdous Ashrafi
|
Genetic Sequence compression using Machine Learning and Arithmetic
Encoding Decoding Techniques
|
6 page, 4 figures, 3 tables, accepted at ICCIT 2022
|
2022 25th International Conference on Computer and Information
Technology (ICCIT), 2022, pp. 1038-1043
|
10.1109/ICCIT57492.2022.10055899
| null |
q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
We live in a period where bioinformatics is rapidly expanding: a significant
quantity of genomic data has been produced as a result of the advancement of
high-throughput genome sequencing technology, raising concerns about the costs
associated with data storage and transmission. The question of how to properly
compress data from genomic sequences is still open. Many researchers have
previously proposed compression methods for DNA sequences, both with and
without machine learning. Extending previous research, we propose a new
architecture, a modified DeepDNA, and a new methodology that deploys a
double-based strategy for the compression of DNA sequences. We validated the
results by experimenting on datasets of three sizes: 100, 243 and 356. The
experimental outcomes highlight our improved approach's superiority over
existing approaches for analyzing human mitochondrial genome data, such as
DeepDNA.
|
[
{
"created": "Tue, 6 Dec 2022 10:15:31 GMT",
"version": "v1"
}
] |
2023-03-10
|
[
[
"Sarkar",
"Mehedi Hasan",
""
],
[
"Ashrafi",
"Adnan Ferdous",
""
]
] |
We live in a period where bioinformatics is rapidly expanding: a significant quantity of genomic data has been produced as a result of the advancement of high-throughput genome sequencing technology, raising concerns about the costs associated with data storage and transmission. The question of how to properly compress data from genomic sequences is still open. Many researchers have previously proposed compression methods for DNA sequences, both with and without machine learning. Extending previous research, we propose a new architecture, a modified DeepDNA, and a new methodology that deploys a double-based strategy for the compression of DNA sequences. We validated the results by experimenting on datasets of three sizes: 100, 243 and 356. The experimental outcomes highlight our improved approach's superiority over existing approaches for analyzing human mitochondrial genome data, such as DeepDNA.
|
q-bio/0405007
|
Andrea Pagnani
|
M. Leone and A. Pagnani
|
Predicting protein functions with message passing algorithms
|
12 pages, 9 eps figures, 1 additional html table
|
Bioinformatics 21: 239-247 (2005).
|
10.1093/bioinformatics/bth491
| null |
q-bio.QM cond-mat.dis-nn
| null |
Motivation: In the last few years a growing interest in biology has been
shifting towards the problem of optimal information extraction from the huge
amount of data generated via large scale and high-throughput techniques. One of
the most relevant issues has recently become that of correctly and reliably
predicting the functions of observed but still functionally undetermined
proteins starting from information coming from the network of co-observed
proteins of known functions.
Method: The method proposed in this article is based on a message passing
algorithm known as Belief Propagation, which takes as input the network of
protein physical interactions and a catalog of known protein functions, and
returns the probabilities for each unclassified protein of having one chosen
function. The implementation of the algorithm allows for fast on-line analysis,
and can be easily generalized to more complex graph topologies taking into
account hyper-graphs, {\em i.e.} complexes of more than two interacting
proteins.
|
[
{
"created": "Fri, 7 May 2004 11:05:17 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Leone",
"M.",
""
],
[
"Pagnani",
"A.",
""
]
] |
Motivation: In the last few years a growing interest in biology has been shifting towards the problem of optimal information extraction from the huge amount of data generated via large scale and high-throughput techniques. One of the most relevant issues has recently become that of correctly and reliably predicting the functions of observed but still functionally undetermined proteins starting from information coming from the network of co-observed proteins of known functions. Method: The method proposed in this article is based on a message passing algorithm known as Belief Propagation, which takes as input the network of protein physical interactions and a catalog of known protein functions, and returns the probabilities for each unclassified protein of having one chosen function. The implementation of the algorithm allows for fast on-line analysis, and can be easily generalized to more complex graph topologies taking into account hyper-graphs, {\em i.e.} complexes of more than two interacting proteins.
|
0901.3914
|
Yoichiro Mori
|
Yoichiro Mori
|
From Three-Dimensional Electrophysiology to the Cable Model: an
Asymptotic Study
| null | null | null | null |
q-bio.NC physics.bio-ph q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cellular electrophysiology is often modeled using the cable equations. The
cable model can only be used when ionic concentration effects and three
dimensional geometry effects are negligible. The Poisson model, in which the
electrostatic potential satisfies the Poisson equation and the ionic
concentrations satisfy the drift-diffusion equation, is a system of equations
that can incorporate such effects. The Poisson model is unfortunately
prohibitively expensive for numerical computation because of the presence of
thin space charge layers at internal membrane boundaries. As a computationally
efficient and biophysically natural alternative, we introduce the
electroneutral model in which the Poisson equation is replaced by the
electroneutrality condition and the presence of the space charge layer is
incorporated in boundary conditions at the membrane interfaces. We use matched
asymptotics and numerical computations to show that the electroneutral model
provides an excellent approximation to the Poisson model. Further asymptotic
calculations illuminate the relationship of the electroneutral or Poisson
models with the cable model, and reveal the presence of a hierarchy of
electrophysiology models.
|
[
{
"created": "Sun, 25 Jan 2009 18:34:09 GMT",
"version": "v1"
}
] |
2009-01-27
|
[
[
"Mori",
"Yoichiro",
""
]
] |
Cellular electrophysiology is often modeled using the cable equations. The cable model can only be used when ionic concentration effects and three dimensional geometry effects are negligible. The Poisson model, in which the electrostatic potential satisfies the Poisson equation and the ionic concentrations satisfy the drift-diffusion equation, is a system of equations that can incorporate such effects. The Poisson model is unfortunately prohibitively expensive for numerical computation because of the presence of thin space charge layers at internal membrane boundaries. As a computationally efficient and biophysically natural alternative, we introduce the electroneutral model in which the Poisson equation is replaced by the electroneutrality condition and the presence of the space charge layer is incorporated in boundary conditions at the membrane interfaces. We use matched asymptotics and numerical computations to show that the electroneutral model provides an excellent approximation to the Poisson model. Further asymptotic calculations illuminate the relationship of the electroneutral or Poisson models with the cable model, and reveal the presence of a hierarchy of electrophysiology models.
|
2001.10351
|
Ali Demirci
|
Ali Demirci, Ayse Peker Dobie, Ayse Humeyra Bilge, Semra Ahmetolan
|
Unexpected parameter ranges of the 2009 A(H1N1) epidemic for Istanbul
and the Netherlands
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The data of the 2009 A(H1N1) epidemic in Istanbul, Turkey are unique in that
they include not only the hospitalization but also the fatality information
recorded during the pandemic. The analysis of these data displayed an
unexpected time shift between the hospital referrals and fatalities. This time
shift, which does not conform to the SIR and SEIR models, was explained by
multi-stage SIR and SEIR models [21]. In this study we prove that the delay
for these models is half of the infectious period within a quadratic
approximation, and we determine the epidemic parameters $R_0$, $T$ and $I_0$
of the 2009 A(H1N1) Istanbul and Netherlands epidemics. These epidemic
parameters were estimated by comparing the normalized cumulative fatality data
with the solutions of the SIR model. Two different error criteria, the $L_2$
norms of the error over the whole observation period and over the initial
portion of the data, were used in order to obtain the best-fitting models. It
was observed that, with respect to both criteria, the parameters of "good"
models were agglomerated along a line in the $T$-$R_0$ plane, instead of being
scattered uniformly around a "best" model. As this fact indicates the existence
of a nearly invariant quantity, interval estimates for the parameters were
given. As the initial phase of the epidemics was less influenced by the
effects of medical interventions, the error norm based on the initial portion
of the data was preferred. However, the presented parameter ranges are well out
of the range for the usual influenza epidemic parameter values. To confirm our
observations on the Istanbul data, the same error criteria were also used for
the 2009 A(H1N1) epidemic in the Netherlands, whose population density is
similar to Istanbul's. As in the Istanbul case, the parameter ranges do not
match the usual influenza epidemic parameter values.
|
[
{
"created": "Tue, 28 Jan 2020 14:26:02 GMT",
"version": "v1"
}
] |
2020-01-29
|
[
[
"Demirci",
"Ali",
""
],
[
"Dobie",
"Ayse Peker",
""
],
[
"Bilge",
"Ayse Humeyra",
""
],
[
"Ahmetolan",
"Semra",
""
]
] |
The data of the 2009 A(H1N1) epidemic in Istanbul, Turkey are unique in that they include not only the hospitalization but also the fatality information recorded during the pandemic. The analysis of these data displayed an unexpected time shift between the hospital referrals and fatalities. This time shift, which does not conform to the SIR and SEIR models, was explained by multi-stage SIR and SEIR models [21]. In this study we prove that the delay for these models is half of the infectious period within a quadratic approximation, and we determine the epidemic parameters $R_0$, $T$ and $I_0$ of the 2009 A(H1N1) Istanbul and Netherlands epidemics. These epidemic parameters were estimated by comparing the normalized cumulative fatality data with the solutions of the SIR model. Two different error criteria, the $L_2$ norms of the error over the whole observation period and over the initial portion of the data, were used in order to obtain the best-fitting models. It was observed that, with respect to both criteria, the parameters of "good" models were agglomerated along a line in the $T$-$R_0$ plane, instead of being scattered uniformly around a "best" model. As this fact indicates the existence of a nearly invariant quantity, interval estimates for the parameters were given. As the initial phase of the epidemics was less influenced by the effects of medical interventions, the error norm based on the initial portion of the data was preferred. However, the presented parameter ranges are well out of the range of the usual influenza epidemic parameter values. To confirm our observations on the Istanbul data, the same error criteria were also used for the 2009 A(H1N1) epidemic in the Netherlands, whose population density is similar to Istanbul's. As in the Istanbul case, the parameter ranges do not match the usual influenza epidemic parameter values.
|
1212.5010
|
Tomas Tokar
|
Tomas Tokar, Zdenko Turcan, Jozef Ulicny
|
Boolean network-based model of the Bcl-2 family mediated MOMP regulation
| null | null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mitochondrial outer membrane permeabilization (MOMP) is one of the most
important points in the majority of apoptotic signaling cascades. The decision
mechanism controlling whether MOMP occurs is formed by an interplay between
members of the Bcl-2 family. To understand the role of individual members of
this family in MOMP regulation, we constructed a Boolean network-based
mathematical model of the interactions between the Bcl-2 proteins. Results of
computational simulations reveal the existence of potentially malign
configurations of activities of the Bcl-2 proteins that block the occurrence
of MOMP independently of the incoming stimuli. Our results suggest a role for
the antiapoptotic protein Mcl-1 in relation to these configurations. We
demonstrate the importance of Bid and Bim for the activation of the effectors
Bax and Bak, and the irreversibility of this activation. The model further
shows the distinct requirements for effector activation, where the
antiapoptotic protein Bcl-w is seemingly a key factor preventing Bax
activation. We believe that this work may help to better describe the
functioning of the Bcl-2 regulation of MOMP and hopefully contribute to
anti-cancer drug development research.
|
[
{
"created": "Thu, 20 Dec 2012 12:45:22 GMT",
"version": "v1"
}
] |
2012-12-21
|
[
[
"Tokar",
"Tomas",
""
],
[
"Turcan",
"Zdenko",
""
],
[
"Ulicny",
"Jozef",
""
]
] |
Mitochondrial outer membrane permeabilization (MOMP) is one of the most important points in the majority of apoptotic signaling cascades. The decision mechanism controlling whether MOMP occurs is formed by an interplay between members of the Bcl-2 family. To understand the role of individual members of this family in MOMP regulation, we constructed a Boolean network-based mathematical model of the interactions between the Bcl-2 proteins. Results of computational simulations reveal the existence of potentially malign configurations of activities of the Bcl-2 proteins that block the occurrence of MOMP independently of the incoming stimuli. Our results suggest a role for the antiapoptotic protein Mcl-1 in relation to these configurations. We demonstrate the importance of Bid and Bim for the activation of the effectors Bax and Bak, and the irreversibility of this activation. The model further shows the distinct requirements for effector activation, where the antiapoptotic protein Bcl-w is seemingly a key factor preventing Bax activation. We believe that this work may help to better describe the functioning of the Bcl-2 regulation of MOMP and hopefully contribute to anti-cancer drug development research.
|
2102.08209
|
Masoumeh Zareh
|
Masoumeh Zareh, Mohammad Hossein Manshaei, and Sayed Jalal Zahabi
|
Modeling the Hallucinating Brain: A Generative Adversarial Framework
| null | null | null | null |
q-bio.NC cs.LG
|
http://creativecommons.org/publicdomain/zero/1.0/
|
This paper looks into the modeling of hallucination in the human brain.
Hallucinations are known to be causally associated with malfunctions in the
interaction of different areas of the brain involved in perception. Focusing
on visual hallucination and its underlying causes, we identify an adversarial
mechanism between the different parts of the brain that are responsible for
visual perception. We then show how the characterized adversarial interactions
in the brain can be modeled by a generative adversarial network.
|
[
{
"created": "Tue, 9 Feb 2021 14:30:14 GMT",
"version": "v1"
}
] |
2021-02-17
|
[
[
"Zareh",
"Masoumeh",
""
],
[
"Manshaei",
"Mohammad Hossein",
""
],
[
"Zahabi",
"Sayed Jalal",
""
]
] |
This paper looks into the modeling of hallucination in the human brain. Hallucinations are known to be causally associated with malfunctions in the interaction of different areas of the brain involved in perception. Focusing on visual hallucination and its underlying causes, we identify an adversarial mechanism between the different parts of the brain that are responsible for visual perception. We then show how the characterized adversarial interactions in the brain can be modeled by a generative adversarial network.
|
0910.2084
|
Suman Kumar Banik
|
Suman K Banik, Andrew T Fenley and Rahul V Kulkarni
|
A model for signal transduction during quorum sensing in \emph{Vibrio
harveyi}
|
18 pages, 5 figures, IOP style files included
|
Phys. Biol. 6 (2009) 046008
|
10.1088/1478-3975/6/4/046008
| null |
q-bio.CB q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We present a framework for analyzing luminescence regulation during quorum
sensing in the bioluminescent bacterium \emph{Vibrio harveyi}. Using a
simplified model for signal transduction in the quorum sensing pathway, we
identify key dimensionless parameters that control the system's response. These
parameters are estimated using experimental data on luminescence phenotypes for
different mutant strains. The corresponding model predictions are consistent
with results from other experiments which did not serve as inputs for
determining model parameters. Furthermore, the proposed framework leads to
novel testable predictions for luminescence phenotypes and for responses of the
network to different perturbations.
|
[
{
"created": "Mon, 12 Oct 2009 05:37:45 GMT",
"version": "v1"
}
] |
2009-10-22
|
[
[
"Banik",
"Suman K",
""
],
[
"Fenley",
"Andrew T",
""
],
[
"Kulkarni",
"Rahul V",
""
]
] |
We present a framework for analyzing luminescence regulation during quorum sensing in the bioluminescent bacterium \emph{Vibrio harveyi}. Using a simplified model for signal transduction in the quorum sensing pathway, we identify key dimensionless parameters that control the system's response. These parameters are estimated using experimental data on luminescence phenotypes for different mutant strains. The corresponding model predictions are consistent with results from other experiments which did not serve as inputs for determining model parameters. Furthermore, the proposed framework leads to novel testable predictions for luminescence phenotypes and for responses of the network to different perturbations.
|
1911.04374
|
Laura-Jayne Gardiner
|
Laura-Jayne Gardiner, Anna Paola Carrieri, Jenny Wilshaw, Stephen
Checkley, Edward O Pyzer-Knapp and Ritesh Krishna
|
Combining human cell line transcriptome analysis and Bayesian inference
to build trustworthy machine learning models for prediction of animal
toxicity in drug development
|
Machine Learning for Health (ML4H) at NeurIPS 2019 - Extended
Abstract
| null | null | null |
q-bio.GN q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Biomedical data, particularly in the field of genomics, has characteristics
which make it challenging for machine learning applications - it can be sparse,
high dimensional and noisy. Biomedical applications also present challenges to
model selection - whilst powerful, accurate predictions are necessary, they
alone are not sufficient for a model to be deemed useful. Due to the nature of
the predictions, a model must also be trustworthy and transparent, empowering a
practitioner with confidence that its use is appropriate and reliable. In this
paper, we propose that this can be achieved through the use of judiciously
built feature sets coupled with Bayesian models, specifically Gaussian
processes. We apply Gaussian processes to drug discovery, using inexpensive
transcriptomic profiles from human cell lines to predict animal kidney and
liver toxicity after treatment with specific chemical compounds. This approach
has the potential to reduce invasive and expensive animal testing during
clinical trials if in vitro human cell line analysis can accurately predict
model animal phenotypes. We compare results across a range of feature sets and
models, to highlight model importance for medical applications.
|
[
{
"created": "Mon, 11 Nov 2019 16:32:02 GMT",
"version": "v1"
},
{
"created": "Tue, 12 Nov 2019 14:06:44 GMT",
"version": "v2"
}
] |
2019-11-13
|
[
[
"Gardiner",
"Laura-Jayne",
""
],
[
"Carrieri",
"Anna Paola",
""
],
[
"Wilshaw",
"Jenny",
""
],
[
"Checkley",
"Stephen",
""
],
[
"Pyzer-Knapp",
"Edward O",
""
],
[
"Krishna",
"Ritesh",
""
]
] |
Biomedical data, particularly in the field of genomics, has characteristics which make it challenging for machine learning applications - it can be sparse, high dimensional and noisy. Biomedical applications also present challenges to model selection - whilst powerful, accurate predictions are necessary, they alone are not sufficient for a model to be deemed useful. Due to the nature of the predictions, a model must also be trustworthy and transparent, empowering a practitioner with confidence that its use is appropriate and reliable. In this paper, we propose that this can be achieved through the use of judiciously built feature sets coupled with Bayesian models, specifically Gaussian processes. We apply Gaussian processes to drug discovery, using inexpensive transcriptomic profiles from human cell lines to predict animal kidney and liver toxicity after treatment with specific chemical compounds. This approach has the potential to reduce invasive and expensive animal testing during clinical trials if in vitro human cell line analysis can accurately predict model animal phenotypes. We compare results across a range of feature sets and models, to highlight model importance for medical applications.
|
2009.11121
|
Cecilia Jarne
|
C. Jarne, F A. G\'omez Albarrac\'in, M. Caruso
|
An algorithm to represent inbreeding trees
| null | null |
10.1016/j.physa.2021.125894
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent work has proven the existence of extreme inbreeding in a European
ancestry sample taken from the contemporary UK population \cite{nature_01}.
This result brings our attention again to a mathematical problem related to
inbreeding family trees and diversity. Groups with a finite number of
individuals can give rise to a variety of genetic relationships. In previous
works \cite{PhysRevE.92.052132, PhysRevE.90.022125, JARNE20191}, we addressed
the issue of building inbreeding trees for biparental reproduction using
Markovian models. Here, we extend these studies by presenting an algorithm to
generate and represent inbreeding trees with no overlapping generations. We
explicitly assume a two-gender reproductive scheme, and we pay particular
attention to the links between nodes. We show that even for a simple case with
a relatively small number of nodes in the tree, there is a large number of
possible ways to rearrange the links between generations. We present an
open-source Python code to generate the tree graph, the adjacency matrix, and
the histogram of the links for each different tree representation. We show how
this mapping reflects the difference between tree realizations, and how
valuable information may be extracted upon inspection of these matrices. The
algorithm includes a feature to average several tree realizations, obtain the
connectivity distribution, and calculate its mean value. We used this feature
to compare trees with different numbers of generations and nodes. The code
presented here, available on GitHub, may be easily modified to be applied to
other areas of interest involving connections between individuals, to extend
the study by adding more characteristics to the different nodes, etc.
|
[
{
"created": "Mon, 21 Sep 2020 20:24:30 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Feb 2021 14:21:41 GMT",
"version": "v2"
}
] |
2021-09-08
|
[
[
"Jarne",
"C.",
""
],
[
"Albarracín",
"F A. Gómez",
""
],
[
"Caruso",
"M.",
""
]
] |
Recent work has proven the existence of extreme inbreeding in a European ancestry sample taken from the contemporary UK population \cite{nature_01}. This result brings our attention again to a mathematical problem related to inbreeding family trees and diversity. Groups with a finite number of individuals can give rise to a variety of genetic relationships. In previous works \cite{PhysRevE.92.052132, PhysRevE.90.022125, JARNE20191}, we addressed the issue of building inbreeding trees for biparental reproduction using Markovian models. Here, we extend these studies by presenting an algorithm to generate and represent inbreeding trees with no overlapping generations. We explicitly assume a two-gender reproductive scheme, and we pay particular attention to the links between nodes. We show that even for a simple case with a relatively small number of nodes in the tree, there is a large number of possible ways to rearrange the links between generations. We present an open-source Python code to generate the tree graph, the adjacency matrix, and the histogram of the links for each different tree representation. We show how this mapping reflects the difference between tree realizations, and how valuable information may be extracted upon inspection of these matrices. The algorithm includes a feature to average several tree realizations, obtain the connectivity distribution, and calculate its mean value. We used this feature to compare trees with different numbers of generations and nodes. The code presented here, available on GitHub, may be easily modified to be applied to other areas of interest involving connections between individuals, to extend the study by adding more characteristics to the different nodes, etc.
|
1612.02807
|
Thierry Mora
|
Ulisse Ferrari, Tomoyuki Obuchi, Thierry Mora
|
Random versus maximum entropy models of neural population activity
| null |
Phys. Rev. E 95, 042321 (2017)
|
10.1103/PhysRevE.95.042321
| null |
q-bio.NC cond-mat.dis-nn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
The principle of maximum entropy provides a useful method for inferring
statistical mechanics models from observations in correlated systems, and is
widely used in a variety of fields where accurate data are available. While the
assumptions underlying maximum entropy are intuitive and appealing, its
adequacy for describing complex empirical data has been little studied in
comparison to alternative approaches. Here data from the collective spiking
activity of retinal neurons is reanalysed. The accuracy of the maximum entropy
distribution constrained by mean firing rates and pairwise correlations is
compared to a random ensemble of distributions constrained by the same
observables. In general, maximum entropy approximates the true distribution
better than the typical or mean distribution from that ensemble. This advantage
improves with population size, with groups as small as 8 being almost always
better described by maximum entropy. Failure of maximum entropy to outperform
random models is found to be associated with strong correlations in the
population.
|
[
{
"created": "Thu, 8 Dec 2016 20:44:59 GMT",
"version": "v1"
}
] |
2017-06-02
|
[
[
"Ferrari",
"Ulisse",
""
],
[
"Obuchi",
"Tomoyuki",
""
],
[
"Mora",
"Thierry",
""
]
] |
The principle of maximum entropy provides a useful method for inferring statistical mechanics models from observations in correlated systems, and is widely used in a variety of fields where accurate data are available. While the assumptions underlying maximum entropy are intuitive and appealing, its adequacy for describing complex empirical data has been little studied in comparison to alternative approaches. Here data from the collective spiking activity of retinal neurons is reanalysed. The accuracy of the maximum entropy distribution constrained by mean firing rates and pairwise correlations is compared to a random ensemble of distributions constrained by the same observables. In general, maximum entropy approximates the true distribution better than the typical or mean distribution from that ensemble. This advantage improves with population size, with groups as small as 8 being almost always better described by maximum entropy. Failure of maximum entropy to outperform random models is found to be associated with strong correlations in the population.
|
1509.05904
|
Shai Carmi
|
Shai Carmi, James Xue, and Itsik Pe'er
|
A note on the distribution of admixture segment lengths and ancestry
proportions under pulse and two-wave admixture models
|
12 pages, 3 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Admixed populations are formed by the merging of two or more ancestral
populations, and the ancestry of each locus in an admixed genome derives from
either source. Consider a simple "pulse" admixture model, where populations A
and B merged t generations ago without subsequent gene flow. We derive the
distribution of the proportion of an admixed chromosome that has A (or B)
ancestry, as a function of the chromosome length L, t, and the initial
contribution of the A source, m. We demonstrate that these results can be used
for inference of the admixture parameters. For more complex admixture models,
we derive an expression in Laplace space for the distribution of ancestry
proportions that depends on having the distribution of the lengths of segments
of each ancestry. We obtain explicit results for the special case of a
"two-wave" admixture model, where population A contributed additional migrants
in one of the generations between the present and the initial admixture event.
Specifically, we derive formulas for the distribution of A and B segment
lengths and numerical results for the distribution of ancestry proportions. We
show that for recent admixture, data generated under a two-wave model can
hardly be distinguished from that generated under a pulse model.
|
[
{
"created": "Sat, 19 Sep 2015 15:42:53 GMT",
"version": "v1"
}
] |
2015-09-22
|
[
[
"Carmi",
"Shai",
""
],
[
"Xue",
"James",
""
],
[
"Pe'er",
"Itsik",
""
]
] |
Admixed populations are formed by the merging of two or more ancestral populations, and the ancestry of each locus in an admixed genome derives from either source. Consider a simple "pulse" admixture model, where populations A and B merged t generations ago without subsequent gene flow. We derive the distribution of the proportion of an admixed chromosome that has A (or B) ancestry, as a function of the chromosome length L, t, and the initial contribution of the A source, m. We demonstrate that these results can be used for inference of the admixture parameters. For more complex admixture models, we derive an expression in Laplace space for the distribution of ancestry proportions that depends on having the distribution of the lengths of segments of each ancestry. We obtain explicit results for the special case of a "two-wave" admixture model, where population A contributed additional migrants in one of the generations between the present and the initial admixture event. Specifically, we derive formulas for the distribution of A and B segment lengths and numerical results for the distribution of ancestry proportions. We show that for recent admixture, data generated under a two-wave model can hardly be distinguished from that generated under a pulse model.
|
1311.2435
|
Jo\~ao Batista
|
Jo\~ao Barroso-Batista, Ana Sousa, Marta Louren\c{c}o, Marie-Louise
Bergman, Jocelyne Demengeot, Karina B. Xavier and Isabel Gordo
|
The first steps of adaptation of Escherichia coli to the gut are
dominated by soft sweeps
|
46 pages with figures (5 in the main text, 4 in supplementary
materials) and tables (7 in supplementary materials). Submitted to PLOS
Genetics
|
PLoS Genet 10(3): e1004182 (2014)
|
10.1371/journal.pgen.1004182
| null |
q-bio.PE
|
http://creativecommons.org/licenses/by-nc-sa/3.0/
|
The accumulation of adaptive mutations is essential for survival in novel
environments. However, in clonal populations with a high mutational supply, the
power of natural selection is expected to be limited. This is due to clonal
interference - the competition of clones carrying different beneficial
mutations - which leads to the loss of many small effect mutations and fixation
of large effect ones. If interference is abundant, then mechanisms for
horizontal transfer of genes, which allow the immediate combination of
beneficial alleles in a single background, are expected to evolve. However, the
relevance of interference in natural complex environments, such as the gut, is
poorly known. To address this issue, we studied the invasion of beneficial
mutations responsible for Escherichia coli's adaptation to the mouse gut and
demonstrate the pervasiveness of clonal interference. The observed dynamics of
change in frequency of beneficial mutations are consistent with soft sweeps,
where a similar adaptive mutation arises repeatedly on different haplotypes
without reaching fixation. The genetic basis of the adaptive mutations revealed
a striking parallelism in independently evolving populations. This was mainly
characterized by the insertion of transposable elements in both coding and
regulatory regions of a few genes. Interestingly in most populations, we
observed a complete phenotypic sweep without loss of genetic variation. The
intense clonal interference during adaptation to the gut environment, here
demonstrated, may be important for our understanding of the levels of strain
diversity of E. coli inhabiting the human gut microbiota and of its
recombination rate.
|
[
{
"created": "Mon, 11 Nov 2013 13:24:52 GMT",
"version": "v1"
}
] |
2014-04-04
|
[
[
"Barroso-Batista",
"João",
""
],
[
"Sousa",
"Ana",
""
],
[
"Lourenço",
"Marta",
""
],
[
"Bergman",
"Marie-Louise",
""
],
[
"Demengeot",
"Jocelyne",
""
],
[
"Xavier",
"Karina B.",
""
],
[
"Gordo",
"Isabel",
""
]
] |
The accumulation of adaptive mutations is essential for survival in novel environments. However, in clonal populations with a high mutational supply, the power of natural selection is expected to be limited. This is due to clonal interference - the competition of clones carrying different beneficial mutations - which leads to the loss of many small effect mutations and fixation of large effect ones. If interference is abundant, then mechanisms for horizontal transfer of genes, which allow the immediate combination of beneficial alleles in a single background, are expected to evolve. However, the relevance of interference in natural complex environments, such as the gut, is poorly known. To address this issue, we studied the invasion of beneficial mutations responsible for Escherichia coli's adaptation to the mouse gut and demonstrate the pervasiveness of clonal interference. The observed dynamics of change in frequency of beneficial mutations are consistent with soft sweeps, where a similar adaptive mutation arises repeatedly on different haplotypes without reaching fixation. The genetic basis of the adaptive mutations revealed a striking parallelism in independently evolving populations. This was mainly characterized by the insertion of transposable elements in both coding and regulatory regions of a few genes. Interestingly in most populations, we observed a complete phenotypic sweep without loss of genetic variation. The intense clonal interference during adaptation to the gut environment, here demonstrated, may be important for our understanding of the levels of strain diversity of E. coli inhabiting the human gut microbiota and of its recombination rate.
|
1112.1393
|
Sara Walker
|
Marcelo Gleiser, Bradley J. Nelson and Sara Imari Walker
|
Chiral Polymerization in Open Systems From Chiral-Selective Reaction
Rates
|
15 pages, 6 figures, accepted for publication in Origins of Life and
Evolution of Biospheres
| null |
10.1007/s11084-012-9274-5
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate the possibility that prebiotic homochirality can be achieved
exclusively through chiral-selective reaction rate parameters without any other
explicit mechanism for chiral bias. Specifically, we examine an open network of
polymerization reactions, where the reaction rates can have chiral-selective
values. The reactions are neither autocatalytic nor do they contain explicit
enantiomeric cross-inhibition terms. We are thus investigating how rare a set
of chiral-selective reaction rates needs to be in order to generate a
reasonable amount of chiral bias. We quantify our results adopting a
statistical approach: varying both the mean value and the rms dispersion of the
relevant reaction rates, we show that moderate to high levels of chiral excess
can be achieved with fairly small chiral bias, below 10%. Considering the
various unknowns related to prebiotic chemical networks in early Earth and the
dependence of reaction rates to environmental properties such as temperature
and pressure variations, we argue that homochirality could have been achieved
from moderate amounts of chiral selectivity in the reaction rates.
|
[
{
"created": "Tue, 6 Dec 2011 20:20:13 GMT",
"version": "v1"
},
{
"created": "Mon, 5 Mar 2012 07:04:08 GMT",
"version": "v2"
},
{
"created": "Thu, 26 Apr 2012 17:24:47 GMT",
"version": "v3"
}
] |
2015-06-03
|
[
[
"Gleiser",
"Marcelo",
""
],
[
"Nelson",
"Bradley J.",
""
],
[
"Walker",
"Sara Imari",
""
]
] |
We investigate the possibility that prebiotic homochirality can be achieved exclusively through chiral-selective reaction rate parameters without any other explicit mechanism for chiral bias. Specifically, we examine an open network of polymerization reactions, where the reaction rates can have chiral-selective values. The reactions are neither autocatalytic nor do they contain explicit enantiomeric cross-inhibition terms. We are thus investigating how rare a set of chiral-selective reaction rates needs to be in order to generate a reasonable amount of chiral bias. We quantify our results adopting a statistical approach: varying both the mean value and the rms dispersion of the relevant reaction rates, we show that moderate to high levels of chiral excess can be achieved with fairly small chiral bias, below 10%. Considering the various unknowns related to prebiotic chemical networks in early Earth and the dependence of reaction rates to environmental properties such as temperature and pressure variations, we argue that homochirality could have been achieved from moderate amounts of chiral selectivity in the reaction rates.
|
1702.07038
|
Isabel Gauthier Isabel Gauthier
|
Isabel Gauthier
|
The Quest for the FFA led to the Expertise Account of its Specialization
| null | null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This article is written in response to a Progressions article by Kanwisher in
the Journal of Neuroscience, The Quest for the FFA and Where It Led (Kanwisher,
2017). I reflect on the extensive research program dedicated to the study of
how and why perceptual expertise explains the many ways that faces are special,
a research program which both predates and follows the Kanwisher (1997)
landmark article where the fusiform face area (FFA) is named. The expertise
account suggests that the FFA is an area recruited by expertise in individuating
objects that are perceptually similar because they share a configuration of
parts. While Kanwisher (2017) discussed the expertise account only very briefly
and only to dismiss it, there is strong and replicable evidence that responses
in the FFA are highly sensitive to experience with non-face objects. I point
out that Kanwisher was well positioned to present these findings in their
historical context as she participated in the design of the first fMRI study on
car and bird expertise, as well as the first replication of this finding.
Perhaps most relevant to readers interested in the neural bases of face
processing, it is important to distinguish studies that describe the phenomenon
of face-selectivity from those that test an explanation for this phenomenon. In
the Progressions article, arguments for a face-dedicated processing module that
is not the result of our experience with faces are provided without attention
to a great deal of expertise research which is directly inconsistent with that
claim. The claim also lacks more direct support, as face-selective responses in
the visual system are not found in infants and children and face-selective
activity in FFA does not appear to be heritable.
|
[
{
"created": "Wed, 22 Feb 2017 23:11:00 GMT",
"version": "v1"
}
] |
2017-02-24
|
[
[
"Gauthier",
"Isabel",
""
]
] |
This article is written in response to a Progressions article by Kanwisher in the Journal of Neuroscience, The Quest for the FFA and Where It Led (Kanwisher, 2017). I reflect on the extensive research program dedicated to the study of how and why perceptual expertise explains the many ways that faces are special, a research program which both predates and follows the Kanwisher (1997) landmark article where the fusiform face area (FFA) is named. The expertise account suggests that the FFA is an area recruited by expertise in individuating objects that are perceptually similar because they share a configuration of parts. While Kanwisher (2017) discussed the expertise account only very briefly and only to dismiss it, there is strong and replicable evidence that responses in the FFA are highly sensitive to experience with non-face objects. I point out that Kanwisher was well positioned to present these findings in their historical context as she participated in the design of the first fMRI study on car and bird expertise, as well as the first replication of this finding. Perhaps most relevant to readers interested in the neural bases of face processing, it is important to distinguish studies that describe the phenomenon of face-selectivity from those that test an explanation for this phenomenon. In the Progressions article, arguments for a face-dedicated processing module that is not the result of our experience with faces are provided without attention to a great deal of expertise research which is directly inconsistent with that claim. The claim also lacks more direct support, as face-selective responses in the visual system are not found in infants and children and face-selective activity in FFA does not appear to be heritable.
|
1504.00043
|
Sattar Taheri-Araghi
|
Suckjoon Jun and Sattar Taheri-Araghi
|
Cell-size maintenance: universal strategy revealed
| null |
Trends in Microbiology, Vol. 23, No. 1, 4-6, 2015
|
10.1016/j.tim.2014.12.001
| null |
q-bio.CB physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
How cells maintain a stable size has fascinated scientists since the
beginning of modern biology, but has remained largely mysterious. Recently,
however, the ability to analyze single bacteria in real time has provided new,
important quantitative insights into this long-standing question in cell
biology.
|
[
{
"created": "Tue, 31 Mar 2015 21:13:06 GMT",
"version": "v1"
}
] |
2015-04-02
|
[
[
"Jun",
"Suckjoon",
""
],
[
"Taheri-Araghi",
"Sattar",
""
]
] |
How cells maintain a stable size has fascinated scientists since the beginning of modern biology, but has remained largely mysterious. Recently, however, the ability to analyze single bacteria in real time has provided new, important quantitative insights into this long-standing question in cell biology.
|
1508.05707
|
Lucilla de Arcangelis
|
Vittorio Capano, Hans J. Herrmann and Lucilla de Arcangelis
|
Optimal percentage of inhibitory synapses in multi-task learning
|
5 pages, 5 figures
|
SCIENTIFIC REPORTS vol 5 page 9895 (2015)
|
10.1038/srep09895
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Performing more tasks in parallel is a typical feature of complex brains.
These are characterized by the coexistence of excitatory and inhibitory
synapses, whose percentage in mammals is measured to have a typical value of
20-30\%. Here we investigate parallel learning of multiple Boolean rules in
neuronal networks. We find that multi-task learning results from the
alternation of learning and forgetting of the individual rules. Interestingly,
a fraction of 30\% inhibitory synapses optimizes the overall performance,
carving a complex backbone supporting information transmission with a minimal
shortest path length. We show that 30\% inhibitory synapses is the percentage
maximizing the learning performance since it guarantees, at the same time, the
network excitability necessary to express the response and the variability
required to confine the employment of resources.
|
[
{
"created": "Mon, 24 Aug 2015 07:27:05 GMT",
"version": "v1"
}
] |
2015-08-25
|
[
[
"Capano",
"Vittorio",
""
],
[
"Herrmann",
"Hans J.",
""
],
[
"de Arcangelis",
"Lucilla",
""
]
] |
Performing more tasks in parallel is a typical feature of complex brains. These are characterized by the coexistence of excitatory and inhibitory synapses, whose percentage in mammals is measured to have a typical value of 20-30\%. Here we investigate parallel learning of multiple Boolean rules in neuronal networks. We find that multi-task learning results from the alternation of learning and forgetting of the individual rules. Interestingly, a fraction of 30\% inhibitory synapses optimizes the overall performance, carving a complex backbone supporting information transmission with a minimal shortest path length. We show that 30\% inhibitory synapses is the percentage maximizing the learning performance since it guarantees, at the same time, the network excitability necessary to express the response and the variability required to confine the employment of resources.
|
1807.00668
|
Andrew McMurry
|
Andrew J McMurry (1), Richen Zhang (1), Alex Foxman (2), Lawrence
Reiter (2), Ronny Schnel (2), DeLeys Brandman (1) ((1) Medal, Inc, (2)
NACORS, LLC - National Accountable Care Organization Research Services)
|
Using routinely collected patient data to support clinical trials
research in accountable care organizations
|
11 pages including cover, 4 figures, 2 tables
| null | null | null |
q-bio.QM cs.CY cs.IR
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Background: More than half (57%) of pharma clinical research spend is in
support of clinical trials. One reason is that Electronic Health Record (EHR)
systems and HIPAA privacy rules often limit how broadly patient information can
be shared, resulting in laborious human efforts to manually collect,
de-identify, and summarize patient information for use in clinical studies.
Purpose: Conduct feasibility study for a Rheumatoid Arthritis (RA) clinical
trial in an Accountable Care Organization. Measure prevalence of RA and related
conditions matching study criteria. Evaluate automation of patient
de-identification and summarization to support patient cohort development for
clinical studies.
Methods: Collect original clinical documentation directly from the provider
EHR system and extract clinical concepts necessary for matching study criteria.
Automatically de-identify Protected Health Information (PHI) to protect patient
privacy and promote sharing. Leverage existing physician expert knowledge
sources to enable analysis of patient populations.
Results: Prevalence of RA was four percent (4%) in the study population (mean
age 53 years, 52% female, 48% male). Clinical documentation for 3500 patients
was extracted from three (3) EHR systems. Grouped diagnosis codes revealed
high prevalence of diabetes and diseases of the circulatory system, as
expected. De-identification accurately removed 99% of PHI identifiers with 99%
sensitivity and 99% specificity.
Conclusions: Results suggest the approach can improve automation and
accelerate planning and construction of new clinical studies in the ACO
setting. De-identification accuracy was better than previously approved
requirements defined by four (4) hospital Institutional Review Boards.
|
[
{
"created": "Mon, 25 Jun 2018 22:13:30 GMT",
"version": "v1"
}
] |
2018-07-03
|
[
[
"McMurry",
"Andrew J",
""
],
[
"Zhang",
"Richen",
""
],
[
"Foxman",
"Alex",
""
],
[
"Reiter",
"Lawrence",
""
],
[
"Schnel",
"Ronny",
""
],
[
"Brandman",
"DeLeys",
""
]
] |
Background: More than half (57%) of pharma clinical research spend is in support of clinical trials. One reason is that Electronic Health Record (EHR) systems and HIPAA privacy rules often limit how broadly patient information can be shared, resulting in laborious human efforts to manually collect, de-identify, and summarize patient information for use in clinical studies. Purpose: Conduct feasibility study for a Rheumatoid Arthritis (RA) clinical trial in an Accountable Care Organization. Measure prevalence of RA and related conditions matching study criteria. Evaluate automation of patient de-identification and summarization to support patient cohort development for clinical studies. Methods: Collect original clinical documentation directly from the provider EHR system and extract clinical concepts necessary for matching study criteria. Automatically de-identify Protected Health Information (PHI) to protect patient privacy and promote sharing. Leverage existing physician expert knowledge sources to enable analysis of patient populations. Results: Prevalence of RA was four percent (4%) in the study population (mean age 53 years, 52% female, 48% male). Clinical documentation for 3500 patients was extracted from three (3) EHR systems. Grouped diagnosis codes revealed high prevalence of diabetes and diseases of the circulatory system, as expected. De-identification accurately removed 99% of PHI identifiers with 99% sensitivity and 99% specificity. Conclusions: Results suggest the approach can improve automation and accelerate planning and construction of new clinical studies in the ACO setting. De-identification accuracy was better than previously approved requirements defined by four (4) hospital Institutional Review Boards.
|
1310.8139
|
Ido Kanter
|
Roni Vardi, Amir Goldental, Shoshana Guberman, Alexander Kalmanovich,
Hagar Marmari and Ido Kanter
|
Sudden synchrony leaps accompanied by frequency multiplications in
neuronal activity
|
23 pages, 3 figures
|
Front. Neural Circuits, 30 October 2013
|
10.3389/fncir.2013.00176
| null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A classical view of neural coding relies on temporal firing synchrony among
functional groups of neurons; however the underlying mechanism remains an
enigma. Here we experimentally demonstrate a mechanism where time-lags among
neuronal spiking leap from several tens of milliseconds to nearly zero-lag
synchrony. It also allows sudden leaps out of synchrony, hence forming short
epochs of synchrony. Our results are based on an experimental procedure where
conditioned stimulations were enforced on circuits of neurons embedded within a
large-scale network of cortical cells in vitro and are corroborated by
simulations of neuronal populations. The underlying biological mechanisms are
the unavoidable increase of the neuronal response latency to ongoing
stimulations and temporal or spatial summation required to generate evoked
spikes. These sudden leaps in and out of synchrony may be accompanied by
multiplications of the neuronal firing frequency, hence offering reliable
information-bearing indicators which may bridge between the two principal
neuronal coding paradigms.
|
[
{
"created": "Wed, 30 Oct 2013 13:14:09 GMT",
"version": "v1"
}
] |
2013-10-31
|
[
[
"Vardi",
"Roni",
""
],
[
"Goldental",
"Amir",
""
],
[
"Guberman",
"Shoshana",
""
],
[
"Kalmanovich",
"Alexander",
""
],
[
"Marmari",
"Hagar",
""
],
[
"Kanter",
"Ido",
""
]
] |
A classical view of neural coding relies on temporal firing synchrony among functional groups of neurons; however the underlying mechanism remains an enigma. Here we experimentally demonstrate a mechanism where time-lags among neuronal spiking leap from several tens of milliseconds to nearly zero-lag synchrony. It also allows sudden leaps out of synchrony, hence forming short epochs of synchrony. Our results are based on an experimental procedure where conditioned stimulations were enforced on circuits of neurons embedded within a large-scale network of cortical cells in vitro and are corroborated by simulations of neuronal populations. The underlying biological mechanisms are the unavoidable increase of the neuronal response latency to ongoing stimulations and temporal or spatial summation required to generate evoked spikes. These sudden leaps in and out of synchrony may be accompanied by multiplications of the neuronal firing frequency, hence offering reliable information-bearing indicators which may bridge between the two principal neuronal coding paradigms.
|
2210.03231
|
Qiang Li
|
Qiang Li, Greg Ver Steeg, Shujian Yu, Jesus Malo
|
Functional Connectome of the Human Brain with Total Correlation
|
22 pages, 13 figures
|
Entropy 2022, 24(12), 1725;
|
10.3390/e24121725
| null |
q-bio.NC math.ST stat.TH
|
http://creativecommons.org/licenses/by/4.0/
|
Recent studies proposed the use of Total Correlation to describe functional
connectivity among brain regions as a multivariate alternative to conventional
pair-wise measures such as correlation or mutual information. In this work we
build on this idea to infer a large scale (whole brain) connectivity network
based on Total Correlation and show the possibility of using such networks as
biomarkers of brain alterations. In particular, this work uses
Correlation Explanation (CorEx) to estimate Total Correlation. First, we prove
that CorEx estimates of total correlation and clustering results are trustworthy
compared to ground truth values. Second, the inferred large scale connectivity
network extracted from the more extensive open fMRI datasets is consistent with
existing neuroscience studies but, interestingly, can estimate additional
relations beyond pair-wise regions. And finally, we show how the connectivity
graphs based on Total Correlation can also be an effective tool to aid in the
discovery of brain diseases.
|
[
{
"created": "Thu, 6 Oct 2022 21:56:30 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Oct 2022 16:09:16 GMT",
"version": "v2"
},
{
"created": "Wed, 12 Oct 2022 14:37:35 GMT",
"version": "v3"
},
{
"created": "Mon, 14 Nov 2022 10:57:22 GMT",
"version": "v4"
}
] |
2022-11-28
|
[
[
"Li",
"Qiang",
""
],
[
"Steeg",
"Greg Ver",
""
],
[
"Yu",
"Shujian",
""
],
[
"Malo",
"Jesus",
""
]
] |
Recent studies proposed the use of Total Correlation to describe functional connectivity among brain regions as a multivariate alternative to conventional pair-wise measures such as correlation or mutual information. In this work we build on this idea to infer a large scale (whole brain) connectivity network based on Total Correlation and show the possibility of using such networks as biomarkers of brain alterations. In particular, this work uses Correlation Explanation (CorEx) to estimate Total Correlation. First, we prove that CorEx estimates of total correlation and clustering results are trustworthy compared to ground truth values. Second, the inferred large scale connectivity network extracted from the more extensive open fMRI datasets is consistent with existing neuroscience studies but, interestingly, can estimate additional relations beyond pair-wise regions. And finally, we show how the connectivity graphs based on Total Correlation can also be an effective tool to aid in the discovery of brain diseases.
|
q-bio/0611040
|
Gerhard Schmid
|
G. Schmid, I. Goychuk and P. Hanggi
|
Capacitance fluctuations causing channel noise reduction in stochastic
Hodgkin-Huxley systems
|
18 pages
|
Physical Biology 3, 248-254 (2006)
|
10.1088/1478-3975/3/4/002
| null |
q-bio.NC q-bio.SC
| null |
Voltage-dependent ion channels determine the electric properties of axonal
cell membranes. They not only allow the passage of ions through the cell
membrane but also contribute to an additional charging of the cell membrane
resulting in the so-called capacitance loading. The switching of the channel
gates between an open and a closed configuration is intrinsically related to
the movement of gating charge within the cell membrane. At the beginning of an
action potential the transient gating current is opposite to the direction of
the current of sodium ions through the membrane. Therefore, the excitability is
expected to become reduced due to the influence of a gating current. Our
stochastic Hodgkin-Huxley like modeling takes into account both the channel
noise -- i.e. the fluctuations of the number of open ion channels -- and the
capacitance fluctuations that result from the dynamics of the gating charge. We
investigate the spiking dynamics of membrane patches of variable size and
analyze the statistics of the spontaneous spiking. As a main result, we find
that the gating currents yield a drastic reduction of the spontaneous spiking
rate for sufficiently large ion channel clusters. Consequently, this
demonstrates a prominent mechanism for channel noise reduction.
|
[
{
"created": "Fri, 10 Nov 2006 13:19:55 GMT",
"version": "v1"
}
] |
2009-11-13
|
[
[
"Schmid",
"G.",
""
],
[
"Goychuk",
"I.",
""
],
[
"Hanggi",
"P.",
""
]
] |
Voltage-dependent ion channels determine the electric properties of axonal cell membranes. They not only allow the passage of ions through the cell membrane but also contribute to an additional charging of the cell membrane resulting in the so-called capacitance loading. The switching of the channel gates between an open and a closed configuration is intrinsically related to the movement of gating charge within the cell membrane. At the beginning of an action potential the transient gating current is opposite to the direction of the current of sodium ions through the membrane. Therefore, the excitability is expected to become reduced due to the influence of a gating current. Our stochastic Hodgkin-Huxley like modeling takes into account both the channel noise -- i.e. the fluctuations of the number of open ion channels -- and the capacitance fluctuations that result from the dynamics of the gating charge. We investigate the spiking dynamics of membrane patches of variable size and analyze the statistics of the spontaneous spiking. As a main result, we find that the gating currents yield a drastic reduction of the spontaneous spiking rate for sufficiently large ion channel clusters. Consequently, this demonstrates a prominent mechanism for channel noise reduction.
|
2104.02174
|
Saumya Yashmohini Sahai
|
Saumya Yashmohini Sahai, Saket Gurukar, Wasiur R. KhudaBukhsh,
Srinivasan Parthasarathy, Grzegorz A. Rempala
|
A Machine Learning Model for Nowcasting Epidemic Incidence
| null | null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Due to delay in reporting, the daily national and statewide COVID-19
incidence counts are often unreliable and need to be estimated from recent
data. This process is known in economics as nowcasting. We describe in this
paper a simple random forest statistical model for nowcasting the COVID-19
daily new infection counts based on historic data along with a set of simple
covariates, such as the currently reported infection counts, day of the week,
and time since first reporting. We apply the model to adjust the daily
infection counts in Ohio, and show that the predictions from this simple
data-driven method compare favorably both in quality and computational burden
to those obtained from the state-of-the-art hierarchical Bayesian model
employing a complex statistical algorithm.
|
[
{
"created": "Mon, 5 Apr 2021 22:28:38 GMT",
"version": "v1"
}
] |
2021-04-07
|
[
[
"Sahai",
"Saumya Yashmohini",
""
],
[
"Gurukar",
"Saket",
""
],
[
"KhudaBukhsh",
"Wasiur R.",
""
],
[
"Parthasarathy",
"Srinivasan",
""
],
[
"Rempala",
"Grzegorz A.",
""
]
] |
Due to delay in reporting, the daily national and statewide COVID-19 incidence counts are often unreliable and need to be estimated from recent data. This process is known in economics as nowcasting. We describe in this paper a simple random forest statistical model for nowcasting the COVID-19 daily new infection counts based on historic data along with a set of simple covariates, such as the currently reported infection counts, day of the week, and time since first reporting. We apply the model to adjust the daily infection counts in Ohio, and show that the predictions from this simple data-driven method compare favorably both in quality and computational burden to those obtained from the state-of-the-art hierarchical Bayesian model employing a complex statistical algorithm.
|
1805.00570
|
Ruth Collins
|
Cecil Barnett-Neefs and Ruth N. Collins
|
Identification of a complete YPT1 Rab GTPase sequence from the fungal
pathogen Colletotrichum incanum
|
4 figures, 1 Appendix
| null | null | null |
q-bio.GN
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Colletotrichum represents a genus of fungal species primarily known as plant
pathogens with severe economic impacts in temperate, subtropical and tropical
climates. Consensus taxonomy and classification systems for Colletotrichum
species have been undergoing revision as high resolution genomic data becomes
available. Here we propose an alternative annotation that provides a complete
sequence for a Colletotrichum YPT1 gene homolog using the whole genome shotgun
sequence of Colletotrichum incanum isolated from soybean crops in Illinois,
USA.
|
[
{
"created": "Tue, 1 May 2018 22:21:54 GMT",
"version": "v1"
}
] |
2018-05-03
|
[
[
"Barnett-Neefs",
"Cecil",
""
],
[
"Collins",
"Ruth N.",
""
]
] |
Colletotrichum represents a genus of fungal species primarily known as plant pathogens with severe economic impacts in temperate, subtropical and tropical climates. Consensus taxonomy and classification systems for Colletotrichum species have been undergoing revision as high resolution genomic data becomes available. Here we propose an alternative annotation that provides a complete sequence for a Colletotrichum YPT1 gene homolog using the whole genome shotgun sequence of Colletotrichum incanum isolated from soybean crops in Illinois, USA.
|
2205.13493
|
Valentin Schmutz
|
Shuqi Wang, Valentin Schmutz, Guillaume Bellec, Wulfram Gerstner
|
Mesoscopic modeling of hidden spiking neurons
|
23 pages, 7 figures
| null | null | null |
q-bio.NC cs.LG stat.ML
|
http://creativecommons.org/licenses/by-nc-nd/4.0/
|
Can we use spiking neural networks (SNN) as generative models of
multi-neuronal recordings, while taking into account that most neurons are
unobserved? Modeling the unobserved neurons with large pools of hidden spiking
neurons leads to severely underconstrained problems that are hard to tackle
with maximum likelihood estimation. In this work, we use coarse-graining and
mean-field approximations to derive a bottom-up, neuronally-grounded latent
variable model (neuLVM), where the activity of the unobserved neurons is
reduced to a low-dimensional mesoscopic description. In contrast to previous
latent variable models, neuLVM can be explicitly mapped to a recurrent,
multi-population SNN, giving it a transparent biological interpretation. We
show, on synthetic spike trains, that a few observed neurons are sufficient for
neuLVM to perform efficient model inversion of large SNNs, in the sense that it
can recover connectivity parameters, infer single-trial latent population
activity, reproduce ongoing metastable dynamics, and generalize when subjected
to perturbations mimicking photo-stimulation.
|
[
{
"created": "Thu, 26 May 2022 17:04:39 GMT",
"version": "v1"
},
{
"created": "Sat, 7 Jan 2023 16:25:16 GMT",
"version": "v2"
}
] |
2023-01-10
|
[
[
"Wang",
"Shuqi",
""
],
[
"Schmutz",
"Valentin",
""
],
[
"Bellec",
"Guillaume",
""
],
[
"Gerstner",
"Wulfram",
""
]
] |
Can we use spiking neural networks (SNN) as generative models of multi-neuronal recordings, while taking into account that most neurons are unobserved? Modeling the unobserved neurons with large pools of hidden spiking neurons leads to severely underconstrained problems that are hard to tackle with maximum likelihood estimation. In this work, we use coarse-graining and mean-field approximations to derive a bottom-up, neuronally-grounded latent variable model (neuLVM), where the activity of the unobserved neurons is reduced to a low-dimensional mesoscopic description. In contrast to previous latent variable models, neuLVM can be explicitly mapped to a recurrent, multi-population SNN, giving it a transparent biological interpretation. We show, on synthetic spike trains, that a few observed neurons are sufficient for neuLVM to perform efficient model inversion of large SNNs, in the sense that it can recover connectivity parameters, infer single-trial latent population activity, reproduce ongoing metastable dynamics, and generalize when subjected to perturbations mimicking photo-stimulation.
|
q-bio/0609019
|
Bruce Ayati
|
Bruce P. Ayati and Isaac Klapper
|
A Multiscale Model of Biofilm as a Senescence-Structured Fluid
| null |
Multiscale Modeling & Simulation, Vol. 6, No. 2, pp. 347-365, 2007
|
10.1137/060669796
| null |
q-bio.CB
| null |
We derive a physiologically structured multiscale model for biofilm
development. The model has components on two spatial scales, which induce
different time scales into the problem. The macroscopic behavior of the system
is modeled using growth-induced flow in a domain with a moving boundary.
Cell-level processes are incorporated into the model using a so-called
physiologically structured variable to represent cell senescence, which in turn
affects cell division and mortality. We present computational results for our
models, which shed light on the combined role that senescence and the
biofilm state play in the defense strategy of bacteria.
|
[
{
"created": "Wed, 13 Sep 2006 18:47:29 GMT",
"version": "v1"
}
] |
2023-02-14
|
[
[
"Ayati",
"Bruce P.",
""
],
[
"Klapper",
"Isaac",
""
]
] |
We derive a physiologically structured multiscale model for biofilm development. The model has components on two spatial scales, which induce different time scales into the problem. The macroscopic behavior of the system is modeled using growth-induced flow in a domain with a moving boundary. Cell-level processes are incorporated into the model using a so-called physiologically structured variable to represent cell senescence, which in turn affects cell division and mortality. We present computational results for our models, which shed light on the combined role that senescence and the biofilm state play in the defense strategy of bacteria.
|
q-bio/0508016
|
Veit Schw\"ammle
|
V. Schw\"ammle, K. Luz-Burgoa, J. S. S\'a Martins and S. Moss de
Oliveira
|
Phase transition in a mean-field model for sympatric speciation
|
accepted for Physica A
| null |
10.1016/j.physa.2006.01.076
| null |
q-bio.PE
| null |
We introduce an analytical model for population dynamics with intra-specific
competition, mutation and assortative mating as basic ingredients. The set of
equations that describes the time evolution of population size in a mean-field
approximation may be decoupled. We find a phase transition leading to sympatric
speciation as a parameter that quantifies competition strength is varied. This
transition, previously found in a computational model, turns out to be of first
order.
|
[
{
"created": "Mon, 15 Aug 2005 20:15:14 GMT",
"version": "v1"
}
] |
2015-06-26
|
[
[
"Schwämmle",
"V.",
""
],
[
"Luz-Burgoa",
"K.",
""
],
[
"Martins",
"J. S. Sá",
""
],
[
"de Oliveira",
"S. Moss",
""
]
] |
We introduce an analytical model for population dynamics with intra-specific competition, mutation and assortative mating as basic ingredients. The set of equations that describes the time evolution of population size in a mean-field approximation may be decoupled. We find a phase transition leading to sympatric speciation as a parameter that quantifies competition strength is varied. This transition, previously found in a computational model, turns out to be of first order.
|
2309.07356
|
Jinzhi Lei
|
Rongsheng Huang, Qiaojun Situ, Jinzhi Lei
|
Dynamics of cell-type transition mediated by epigenetic modifications
|
34 pages, 12 figures
| null | null | null |
q-bio.QM math-ph math.MP q-bio.CB
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Maintaining tissue homeostasis requires appropriate regulation of stem cell
differentiation. The Waddington landscape posits that gene circuits in a cell
form a potential landscape of different cell types, wherein cells follow
attractors of the probability landscape to develop into distinct cell types.
However, how adult stem cells achieve a delicate balance between self-renewal
and differentiation remains unclear. We propose that random inheritance of
epigenetic states plays a pivotal role in stem cell differentiation and present
a hybrid model of stem cell differentiation induced by epigenetic
modifications. Our comprehensive model integrates gene regulation networks,
epigenetic state inheritance, and cell regeneration, encompassing multi-scale
dynamics ranging from transcription regulation to cell population. Through
model simulations, we demonstrate that random inheritance of epigenetic states
during cell divisions can spontaneously induce cell differentiation,
dedifferentiation, and transdifferentiation. Furthermore, we investigate the
influences of interfering with epigenetic modifications and introducing
additional transcription factors on the probabilities of dedifferentiation and
transdifferentiation, revealing the underlying mechanism of cell reprogramming.
This \textit{in silico} model provides valuable insights into the intricate
mechanism governing stem cell differentiation and cell reprogramming and offers
a promising path to enhance the field of regenerative medicine.
|
[
{
"created": "Wed, 13 Sep 2023 23:54:56 GMT",
"version": "v1"
}
] |
2023-09-15
|
[
[
"Huang",
"Rongsheng",
""
],
[
"Situ",
"Qiaojun",
""
],
[
"Lei",
"Jinzhi",
""
]
] |
Maintaining tissue homeostasis requires appropriate regulation of stem cell differentiation. The Waddington landscape posits that gene circuits in a cell form a potential landscape of different cell types, wherein cells follow attractors of the probability landscape to develop into distinct cell types. However, how adult stem cells achieve a delicate balance between self-renewal and differentiation remains unclear. We propose that random inheritance of epigenetic states plays a pivotal role in stem cell differentiation and present a hybrid model of stem cell differentiation induced by epigenetic modifications. Our comprehensive model integrates gene regulation networks, epigenetic state inheritance, and cell regeneration, encompassing multi-scale dynamics ranging from transcription regulation to cell population. Through model simulations, we demonstrate that random inheritance of epigenetic states during cell divisions can spontaneously induce cell differentiation, dedifferentiation, and transdifferentiation. Furthermore, we investigate the influences of interfering with epigenetic modifications and introducing additional transcription factors on the probabilities of dedifferentiation and transdifferentiation, revealing the underlying mechanism of cell reprogramming. This \textit{in silico} model provides valuable insights into the intricate mechanism governing stem cell differentiation and cell reprogramming and offers a promising path to enhance the field of regenerative medicine.
|
2402.12405
|
Meng Xiao
|
Cong Li, Meng Xiao, Pengfei Wang, Guihai Feng, Xin Li, Yuanchun Zhou
|
scInterpreter: Training Large Language Models to Interpret scRNA-seq
Data for Cell Type Annotation
|
4 pages, submitted to FCS
| null | null | null |
q-bio.GN cs.AI
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Despite the inherent limitations of existing Large Language Models in
directly reading and interpreting single-cell omics data, they demonstrate
significant potential and flexibility as foundation models. This research
focuses on how to train and adapt the Large Language Model with the capability
to interpret and distinguish cell types in single-cell RNA sequencing data. Our
preliminary research results indicate that these foundational models excel in
accurately categorizing known cell types, demonstrating the potential of
Large Language Models as effective tools for uncovering new biological
insights.
|
[
{
"created": "Sun, 18 Feb 2024 05:39:00 GMT",
"version": "v1"
}
] |
2024-02-21
|
[
[
"Li",
"Cong",
""
],
[
"Xiao",
"Meng",
""
],
[
"Wang",
"Pengfei",
""
],
[
"Feng",
"Guihai",
""
],
[
"Li",
"Xin",
""
],
[
"Zhou",
"Yuanchun",
""
]
] |
Despite the inherent limitations of existing Large Language Models in directly reading and interpreting single-cell omics data, they demonstrate significant potential and flexibility as foundation models. This research focuses on how to train and adapt the Large Language Model with the capability to interpret and distinguish cell types in single-cell RNA sequencing data. Our preliminary research results indicate that these foundational models excel in accurately categorizing known cell types, demonstrating the potential of Large Language Models as effective tools for uncovering new biological insights.
|
1606.03592
|
Robert Leech
|
Peter J. Hellyer, Claudia Clopath, Angie A. Kehagia, Federico E.
Turkheimer, Robert Leech
|
Balanced activation in a simple embodied neural simulation
|
26 pages, 7 figures, associated github repository:
https://github.com/c3nl-neuraldynamics/Avatar
| null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In recent years, there have been many computational simulations of
spontaneous neural dynamics. Here, we explore a model of spontaneous neural
dynamics and allow it to control a virtual agent moving in a simple
environment. This setup generates interesting brain-environment feedback
interactions that rapidly destabilize neural and behavioral dynamics and
suggest the need for homeostatic mechanisms. We investigate roles for both
local homeostatic plasticity (local inhibition adjusting over time to balance
excitatory input) as well as macroscopic task negative activity (that
compensates for task positive, sensory input) in regulating both neural
activity and resulting behavior (trajectories through the environment). Our
results suggest complementary functional roles for both local homeostatic
plasticity and balanced activity across brain regions in maintaining neural and
behavioral dynamics. These findings suggest important functional roles for
homeostatic systems in maintaining neural and behavioral dynamics and suggest a
novel functional role for frequently reported macroscopic task-negative
patterns of activity (e.g., the default mode network).
|
[
{
"created": "Sat, 11 Jun 2016 13:39:43 GMT",
"version": "v1"
},
{
"created": "Thu, 18 Aug 2016 09:52:31 GMT",
"version": "v2"
}
] |
2016-08-19
|
[
[
"Hellyer",
"Peter J.",
""
],
[
"Clopath",
"Claudia",
""
],
[
"Kehagia",
"Angie A.",
""
],
[
"Turkheimer",
"Federico E.",
""
],
[
"Leech",
"Robert",
""
]
] |
In recent years, there have been many computational simulations of spontaneous neural dynamics. Here, we explore a model of spontaneous neural dynamics and allow it to control a virtual agent moving in a simple environment. This setup generates interesting brain-environment feedback interactions that rapidly destabilize neural and behavioral dynamics and suggest the need for homeostatic mechanisms. We investigate roles for both local homeostatic plasticity (local inhibition adjusting over time to balance excitatory input) as well as macroscopic task negative activity (that compensates for task positive, sensory input) in regulating both neural activity and resulting behavior (trajectories through the environment). Our results suggest complementary functional roles for both local homeostatic plasticity and balanced activity across brain regions in maintaining neural and behavioral dynamics. These findings suggest important functional roles for homeostatic systems in maintaining neural and behavioral dynamics and suggest a novel functional role for frequently reported macroscopic task-negative patterns of activity (e.g., the default mode network).
|
1703.10481
|
Kumar Sankar Ray
|
Mandrita Mondal and Kumar S. Ray
|
DNA Tweezers Based on Semantics of DNA Strand Graph
|
22 pages, 11 figures. arXiv admin note: substantial text overlap with
arXiv:1702.05383
| null | null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Because of the limitations of classical silicon-based computational
technology, several alternatives to traditional methods, in the form of
unconventional computing, have been proposed. In this paper we focus on DNA
computing, which shows promise for its massive parallelism, potential for
information storage, speed, and energy efficiency. We describe how syllogistic
reasoning by DNA tweezers can be represented by the semantics of process
calculus and the DNA strand graph. Syllogism is an essential ingredient in the
commonsense reasoning of an individual. This paper presents a procedure to
deduce a precise conclusion from a set of propositions by using formal language
theory in the form of process calculus and the expressive power of the DNA
strand graph.
|
[
{
"created": "Wed, 29 Mar 2017 09:03:27 GMT",
"version": "v1"
}
] |
2017-03-31
|
[
[
"Mondal",
"Mandrita",
""
],
[
"Ray",
"Kumar S.",
""
]
] |
Because of the limitations of classical silicon-based computational technology, several alternatives to traditional methods, in the form of unconventional computing, have been proposed. In this paper we focus on DNA computing, which shows promise for its massive parallelism, potential for information storage, speed, and energy efficiency. We describe how syllogistic reasoning by DNA tweezers can be represented by the semantics of process calculus and the DNA strand graph. Syllogism is an essential ingredient in the commonsense reasoning of an individual. This paper presents a procedure to deduce a precise conclusion from a set of propositions by using formal language theory in the form of process calculus and the expressive power of the DNA strand graph.
|
2004.14290
|
Maria McGee
|
Maria P McGee, Michael Morykwas, Mary Kerns, Anirudh Vashisht, Ashok N
Hegde and Louis Argenta
|
Interstitial cells and neurons respond to variations in hydration
| null | null | null | null |
q-bio.TO
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Dehydration and alterations in brain interstitial fluid are associated with
cognitive dysfunction. We now explore whether changes in matrix hydration are a
possible common signal for modulation of water-transfer rates and neuron
function.
|
[
{
"created": "Mon, 27 Apr 2020 12:40:19 GMT",
"version": "v1"
}
] |
2020-04-30
|
[
[
"McGee",
"Maria P",
""
],
[
"Morykwas",
"Michael",
""
],
[
"Kerns",
"Mary",
""
],
[
"Vashisht",
"Anirudh",
""
],
[
"Hegde",
"Ashok N",
""
],
[
"Argenta",
"Louis",
""
]
] |
Dehydration and alterations in brain interstitial fluid are associated with cognitive dysfunction. We now explore whether changes in matrix hydration are a possible common signal for modulation of water-transfer rates and neuron function.
|
1908.07960
|
Niv DeMalach
|
DeMalach Niv, Po-Ju Ke, Tadashi Fukami
|
The effects of ecological selection on species diversity and trait
distribution: predictions and an empirical test
| null | null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ecological selection is a major driver of community assembly. Selection is
classified as stabilizing when species with intermediate trait values gain the
highest reproductive success, whereas selection is considered directional when
fitness is highest for species with extreme trait values. Previous studies have
investigated the effects of different selection types on trait distribution,
but the effects of selection on species diversity have remained unclear. Here,
we propose a framework for inferring the type and strength of selection by
studying species diversity and trait distribution together against null
expectations. We use a simulation model to confirm our prediction that
directional selection should lead to lower species diversity than stabilizing
selection despite a similar effect on trait community-weighted variance. We
apply the framework to a mesocosm system of annual plants to test whether
differences in species diversity between two habitats that vary in productivity
are related to differences in selection on seed mass. We show that, in both
habitats, species diversity was lower than the null expectation, but that
species diversity was lower in the more productive habitat. We attribute this
difference to strong directional selection for large-seeded species in the
productive habitat as indicated by trait community-weighted-mean being higher
and community-weighted variance being lower than the null expectations. In the
less productive habitat, we found that community-weighted variance was higher
than expected by chance, suggesting that seed mass could be a driver of niche
partitioning under such conditions. Altogether, our results suggest that
viewing species diversity and trait distribution as interrelated patterns
driven by the same process, ecological selection, is helpful in understanding
community assembly.
|
[
{
"created": "Wed, 21 Aug 2019 16:05:43 GMT",
"version": "v1"
},
{
"created": "Wed, 12 May 2021 10:24:18 GMT",
"version": "v2"
}
] |
2021-05-13
|
[
[
"DeMalach",
"Niv",
""
],
[
"Ke",
"Po-Ju",
""
],
[
"Fukami",
"Tadashi",
""
]
] |
Ecological selection is a major driver of community assembly. Selection is classified as stabilizing when species with intermediate trait values gain the highest reproductive success, whereas selection is considered directional when fitness is highest for species with extreme trait values. Previous studies have investigated the effects of different selection types on trait distribution, but the effects of selection on species diversity have remained unclear. Here, we propose a framework for inferring the type and strength of selection by studying species diversity and trait distribution together against null expectations. We use a simulation model to confirm our prediction that directional selection should lead to lower species diversity than stabilizing selection despite a similar effect on trait community-weighted variance. We apply the framework to a mesocosm system of annual plants to test whether differences in species diversity between two habitats that vary in productivity are related to differences in selection on seed mass. We show that, in both habitats, species diversity was lower than the null expectation, but that species diversity was lower in the more productive habitat. We attribute this difference to strong directional selection for large-seeded species in the productive habitat as indicated by trait community-weighted-mean being higher and community-weighted variance being lower than the null expectations. In the less productive habitat, we found that community-weighted variance was higher than expected by chance, suggesting that seed mass could be a driver of niche partitioning under such conditions. Altogether, our results suggest that viewing species diversity and trait distribution as interrelated patterns driven by the same process, ecological selection, is helpful in understanding community assembly.
|
2403.10478
|
Michael Brocidiacono
|
Michael Brocidiacono, Konstantin I. Popov, Alexander Tropsha
|
An Improved Metric and Benchmark for Assessing the Performance of
Virtual Screening Models
|
10 pages, 4 figures, and 4 tables. The source code is available at
https://github.com/molecularmodelinglab/bigbind
| null | null | null |
q-bio.QM
|
http://creativecommons.org/licenses/by/4.0/
|
Structure-based virtual screening (SBVS) is a key workflow in computational
drug discovery. SBVS models are assessed by measuring the enrichment of known
active molecules over decoys in retrospective screens. However, the standard
formula for enrichment cannot estimate model performance on very large
libraries. Additionally, current screening benchmarks cannot easily be used
with machine learning (ML) models due to data leakage. We propose an improved
formula for calculating VS enrichment and introduce the BayesBind benchmarking
set composed of protein targets that are structurally dissimilar to those in
the BigBind training set. We assess current models on this benchmark and find
that none perform appreciably better than a KNN baseline.
|
[
{
"created": "Fri, 15 Mar 2024 17:09:02 GMT",
"version": "v1"
}
] |
2024-03-18
|
[
[
"Brocidiacono",
"Michael",
""
],
[
"Popov",
"Konstantin I.",
""
],
[
"Tropsha",
"Alexander",
""
]
] |
Structure-based virtual screening (SBVS) is a key workflow in computational drug discovery. SBVS models are assessed by measuring the enrichment of known active molecules over decoys in retrospective screens. However, the standard formula for enrichment cannot estimate model performance on very large libraries. Additionally, current screening benchmarks cannot easily be used with machine learning (ML) models due to data leakage. We propose an improved formula for calculating VS enrichment and introduce the BayesBind benchmarking set composed of protein targets that are structurally dissimilar to those in the BigBind training set. We assess current models on this benchmark and find that none perform appreciably better than a KNN baseline.
|
1401.4956
|
Nicola Palmieri
|
Nicola Palmieri, Carolin Kosiol, Christian Schl\"otterer
|
The life cycle of Drosophila orphan genes
|
47 pages, 19 figures
| null |
10.7554/elife.01311
| null |
q-bio.GN
|
http://creativecommons.org/licenses/by/3.0/
|
Orphans are genes restricted to a single phylogenetic lineage and emerge at
high rates. While this predicts an accumulation of genes, the gene number has
remained remarkably constant through evolution. This paradox has not yet been
resolved. Because orphan genes have been mainly analyzed over long evolutionary
time scales, orphan loss has remained unexplored. Here we study the patterns of
orphan turnover among close relatives in the Drosophila obscura group. We show
that orphans are not only emerging at a high rate, but that they are also
rapidly lost. Interestingly, recently emerged orphans are more likely to be
lost than older ones. Furthermore, highly expressed orphans with a strong
male-bias are more likely to be retained. Since both lost and retained orphans
show similar evolutionary signatures of functional conservation, we propose
that orphan loss is not driven by high rates of sequence evolution, but
reflects lineage specific functional requirements.
|
[
{
"created": "Mon, 20 Jan 2014 16:03:15 GMT",
"version": "v1"
}
] |
2014-01-23
|
[
[
"Palmieri",
"Nicola",
""
],
[
"Kosiol",
"Carolin",
""
],
[
"Schlötterer",
"Christian",
""
]
] |
Orphans are genes restricted to a single phylogenetic lineage and emerge at high rates. While this predicts an accumulation of genes, the gene number has remained remarkably constant through evolution. This paradox has not yet been resolved. Because orphan genes have been mainly analyzed over long evolutionary time scales, orphan loss has remained unexplored. Here we study the patterns of orphan turnover among close relatives in the Drosophila obscura group. We show that orphans are not only emerging at a high rate, but that they are also rapidly lost. Interestingly, recently emerged orphans are more likely to be lost than older ones. Furthermore, highly expressed orphans with a strong male-bias are more likely to be retained. Since both lost and retained orphans show similar evolutionary signatures of functional conservation, we propose that orphan loss is not driven by high rates of sequence evolution, but reflects lineage specific functional requirements.
|
q-bio/0406028
|
Long Wang
|
Long Wang
|
Aggregation of foraging swarms
| null | null | null | null |
q-bio.CB
| null |
In this paper we consider a continuous-time anisotropic swarm model with an
attraction/repulsion function and study its aggregation properties. It is shown
that the swarm members will aggregate and eventually form a cohesive cluster of
finite size around the swarm center. We also study the swarm cohesiveness when
the motion of each agent is a combination of the inter-individual interactions
and the interaction of the agent with the external environment. Moreover, we extend
our results to more general attraction/repulsion functions. The model in this
paper is more general than isotropic swarms and our results provide further
insight into the effect of the interaction pattern on individual motion in a
swarm system.
|
[
{
"created": "Mon, 14 Jun 2004 18:56:27 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Wang",
"Long",
""
]
] |
In this paper we consider a continuous-time anisotropic swarm model with an attraction/repulsion function and study its aggregation properties. It is shown that the swarm members will aggregate and eventually form a cohesive cluster of finite size around the swarm center. We also study the swarm cohesiveness when the motion of each agent is a combination of the inter-individual interactions and the interaction of the agent with the external environment. Moreover, we extend our results to more general attraction/repulsion functions. The model in this paper is more general than isotropic swarms and our results provide further insight into the effect of the interaction pattern on individual motion in a swarm system.
|
1601.06943
|
Simone Pigolotti
|
Simone Pigolotti, Roberto Benzi
|
Competition between fast- and slow-diffusing species in non-homogeneous
environments
|
11 pages, 6 figures, accepted for publication in J. Theo. Biol
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We study an individual-based model in which two spatially-distributed
species, characterized by different diffusivities, compete for resources. We
consider three different ecological settings. In the first, diffusing faster
has a cost in terms of reproduction rate. In the second case, resources are not
uniformly distributed in space. In the third case, the two species are
transported by a fluid flow. In all these cases, as the parameters are varied, we
observe a transition from a regime in which diffusing faster confers an
effective selective advantage to one in which it constitutes a disadvantage. We
analytically estimate the magnitude of this advantage (or disadvantage) and
test it by measuring fixation probabilities in simulations of the
individual-based model. Our results provide a framework to quantify
evolutionary pressure for increased or decreased dispersal in a given
environment.
|
[
{
"created": "Tue, 26 Jan 2016 09:30:01 GMT",
"version": "v1"
}
] |
2016-01-27
|
[
[
"Pigolotti",
"Simone",
""
],
[
"Benzi",
"Roberto",
""
]
] |
We study an individual-based model in which two spatially-distributed species, characterized by different diffusivities, compete for resources. We consider three different ecological settings. In the first, diffusing faster has a cost in terms of reproduction rate. In the second case, resources are not uniformly distributed in space. In the third case, the two species are transported by a fluid flow. In all these cases, as the parameters are varied, we observe a transition from a regime in which diffusing faster confers an effective selective advantage to one in which it constitutes a disadvantage. We analytically estimate the magnitude of this advantage (or disadvantage) and test it by measuring fixation probabilities in simulations of the individual-based model. Our results provide a framework to quantify evolutionary pressure for increased or decreased dispersal in a given environment.
|
2303.06060
|
Liwei Huang
|
Liwei Huang, Zhengyu Ma, Liutao Yu, Huihui Zhou, Yonghong Tian
|
Deep Spiking Neural Networks with High Representation Similarity Model
Visual Pathways of Macaque and Mouse
|
Accepted by Proceedings of the 37th AAAI Conference on Artificial
Intelligence (AAAI-23)
| null | null | null |
q-bio.NC cs.NE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Deep artificial neural networks (ANNs) play a major role in modeling the
visual pathways of primates and rodents. However, they highly simplify the
computational properties of neurons compared to their biological counterparts.
Instead, Spiking Neural Networks (SNNs) are more biologically plausible models
since spiking neurons encode information with time sequences of spikes, just
like biological neurons do. However, there is a lack of studies on visual
pathways with deep SNN models. In this study, we model the visual cortex with
deep SNNs for the first time, and also with a wide range of state-of-the-art
deep CNNs and ViTs for comparison. Using three similarity metrics, we conduct
neural representation similarity experiments on three neural datasets collected
from two species under three types of stimuli. Based on extensive similarity
analyses, we further investigate the functional hierarchy and mechanisms across
species. Almost all similarity scores of SNNs are higher than their
counterparts of CNNs with an average of 6.6%. Depths of the layers with the
highest similarity scores exhibit little differences across mouse cortical
regions, but vary significantly across macaque regions, suggesting that the
visual processing structure of mice is more regionally homogeneous than that of
macaques. Besides, the multi-branch structures observed in some top mouse
brain-like neural networks provide computational evidence of parallel
processing streams in mice, and the different performance in fitting macaque
neural representations under different stimuli exhibits the functional
specialization of information processing in macaques. Taken together, our study
demonstrates that SNNs could serve as promising candidates to better model and
explain the functional hierarchy and mechanisms of the visual system.
|
[
{
"created": "Thu, 9 Mar 2023 13:07:30 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Mar 2023 10:12:45 GMT",
"version": "v2"
},
{
"created": "Wed, 5 Apr 2023 12:29:38 GMT",
"version": "v3"
},
{
"created": "Fri, 12 May 2023 09:16:39 GMT",
"version": "v4"
},
{
"created": "Mon, 22 May 2023 04:03:46 GMT",
"version": "v5"
}
] |
2023-05-23
|
[
[
"Huang",
"Liwei",
""
],
[
"Ma",
"Zhengyu",
""
],
[
"Yu",
"Liutao",
""
],
[
"Zhou",
"Huihui",
""
],
[
"Tian",
"Yonghong",
""
]
] |
Deep artificial neural networks (ANNs) play a major role in modeling the visual pathways of primates and rodents. However, they highly simplify the computational properties of neurons compared to their biological counterparts. Instead, Spiking Neural Networks (SNNs) are more biologically plausible models, since spiking neurons encode information with time sequences of spikes, just like biological neurons do. However, there is a lack of studies on visual pathways with deep SNN models. In this study, we model the visual cortex with deep SNNs for the first time, and also with a wide range of state-of-the-art deep CNNs and ViTs for comparison. Using three similarity metrics, we conduct neural representation similarity experiments on three neural datasets collected from two species under three types of stimuli. Based on extensive similarity analyses, we further investigate the functional hierarchy and mechanisms across species. Almost all similarity scores of SNNs are higher than their CNN counterparts, by 6.6% on average. The depths of the layers with the highest similarity scores exhibit little difference across mouse cortical regions, but vary significantly across macaque regions, suggesting that the visual processing structure of mice is more regionally homogeneous than that of macaques. In addition, the multi-branch structures observed in some top mouse brain-like neural networks provide computational evidence of parallel processing streams in mice, and the differing performance in fitting macaque neural representations under different stimuli reflects the functional specialization of information processing in macaques. Taken together, our study demonstrates that SNNs could serve as promising candidates to better model and explain the functional hierarchy and mechanisms of the visual system.
|
1904.01903
|
Gerhard Wolber
|
David Schaller, Szymon Pach and Gerhard Wolber
|
PyRod -- Tracing Water Molecules in Molecular Dynamics Simulations
| null |
J.Chem.Inf.Model. (2019) 2818-2829
|
10.1021/acs.jcim.9b00281
| null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Ligands entering a protein binding pocket essentially compete with water
molecules for binding to the protein. Hence, the location and thermodynamic
properties of water molecules in protein structures have gained increased
attention in the drug design community. Including the corresponding data in 3D
pharmacophore modeling is essential for efficient high-throughput virtual
screening. Here, we present PyRod, a free and open-source Python software that
allows for visualization of pharmacophoric binding pocket characteristics,
identification of hot spots for ligand binding, and subsequent generation of
pharmacophore features for virtual screening. The implemented routines analyze
the protein environment of water molecules in molecular dynamics (MD)
simulations and can differentiate between hydrogen-bonded waters as well as
waters in a protein environment of hydrophobic, charged, or aromatic atom
groups. The gathered information is further processed to generate dynamic
molecular interaction fields (dMIFs) for visualization and pharmacophoric
features for virtual screening. The software was applied to five
therapeutically relevant drug targets, and the generated pharmacophores were
evaluated using DUD-E benchmarking sets. The best-performing pharmacophore was
found for the HIV-1 protease, with an early enrichment factor of 54.6. PyRod
adds a new perspective to structure-based screening campaigns by providing
easy-to-interpret dMIFs and purely protein-based 3D pharmacophores that are
solely based on tracing water molecules in MD simulations. Since structural
information about co-crystallized ligands is not needed, screening campaigns
can be conducted even when little or no ligand information is available. PyRod
is freely available at https://github.com/schallerdavid/pyrod.
|
[
{
"created": "Wed, 3 Apr 2019 10:36:49 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Aug 2019 12:02:25 GMT",
"version": "v2"
}
] |
2019-08-28
|
[
[
"Schaller",
"David",
""
],
[
"Pach",
"Szymon",
""
],
[
"Wolber",
"Gerhard",
""
]
] |
Ligands entering a protein binding pocket essentially compete with water molecules for binding to the protein. Hence, the location and thermodynamic properties of water molecules in protein structures have gained increased attention in the drug design community. Including the corresponding data in 3D pharmacophore modeling is essential for efficient high-throughput virtual screening. Here, we present PyRod, a free and open-source Python software that allows for visualization of pharmacophoric binding pocket characteristics, identification of hot spots for ligand binding, and subsequent generation of pharmacophore features for virtual screening. The implemented routines analyze the protein environment of water molecules in molecular dynamics (MD) simulations and can differentiate between hydrogen-bonded waters as well as waters in a protein environment of hydrophobic, charged, or aromatic atom groups. The gathered information is further processed to generate dynamic molecular interaction fields (dMIFs) for visualization and pharmacophoric features for virtual screening. The software was applied to five therapeutically relevant drug targets, and the generated pharmacophores were evaluated using DUD-E benchmarking sets. The best-performing pharmacophore was found for the HIV-1 protease, with an early enrichment factor of 54.6. PyRod adds a new perspective to structure-based screening campaigns by providing easy-to-interpret dMIFs and purely protein-based 3D pharmacophores that are solely based on tracing water molecules in MD simulations. Since structural information about co-crystallized ligands is not needed, screening campaigns can be conducted even when little or no ligand information is available. PyRod is freely available at https://github.com/schallerdavid/pyrod.
|
1310.4547
|
Mike Steel Prof.
|
Joshua I Smith, Mike Steel, Wim Hordijk
|
Autocatalytic sets in a partitioned biochemical network
|
28 pages, 8 figures
| null | null | null |
q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In previous work, RAF theory has been developed as a tool for making
theoretical progress on the origin of life question, providing insight into the
structure and occurrence of self-sustaining and collectively autocatalytic sets
within catalytic polymer networks. We present here an extension in which there
are two "independent" polymer sets, where catalysis occurs within and between
the sets, but there are no reactions combining polymers from both sets. Such an
extension reflects the interaction between nucleic acids and peptides observed
in modern cells and proposed forms of early life.
|
[
{
"created": "Wed, 16 Oct 2013 23:36:14 GMT",
"version": "v1"
}
] |
2013-10-18
|
[
[
"Smith",
"Joshua I",
""
],
[
"Steel",
"Mike",
""
],
[
"Hordijk",
"Wim",
""
]
] |
In previous work, RAF theory has been developed as a tool for making theoretical progress on the origin of life question, providing insight into the structure and occurrence of self-sustaining and collectively autocatalytic sets within catalytic polymer networks. We present here an extension in which there are two "independent" polymer sets, where catalysis occurs within and between the sets, but there are no reactions combining polymers from both sets. Such an extension reflects the interaction between nucleic acids and peptides observed in modern cells and proposed forms of early life.
|
2103.12173
|
Michael Thorne
|
Michael Thorne
|
Tipping Cycles
| null | null | null | null |
q-bio.PE math.CA
|
http://creativecommons.org/licenses/by/4.0/
|
Ecological systems are studied using many different approaches and
mathematical tools. One approach, based on the Jacobian of Lotka-Volterra type
models, has been a staple of mathematical ecology for years, leading to many
ideas, such as those concerning system stability. Instability in such methods is
determined by the presence of an eigenvalue of the community matrix lying in
the right half plane. The coefficients of the characteristic polynomial derived
from community matrices contain information related to the specific matrix
elements that play a greater destabilising role. Yet the destabilising
circuits, or cycles, constructed by multiplying these elements together, form
only a subset of all the feedback loops comprising a given system. This paper
looks at the destabilising feedback loops in predator-prey, mutualistic and
competitive systems in terms of sets of the matrix elements to explore how sign
structure affects how the elements contribute to instability. This leads to
quite rich combinatorial structure among the destabilising cycle sets as set
size grows within the coefficients of the characteristic polynomial.
|
[
{
"created": "Mon, 22 Mar 2021 20:43:39 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Mar 2021 08:37:03 GMT",
"version": "v2"
},
{
"created": "Thu, 24 Jun 2021 10:18:29 GMT",
"version": "v3"
}
] |
2021-06-25
|
[
[
"Thorne",
"Michael",
""
]
] |
Ecological systems are studied using many different approaches and mathematical tools. One approach, based on the Jacobian of Lotka-Volterra type models, has been a staple of mathematical ecology for years, leading to many ideas, such as those concerning system stability. Instability in such methods is determined by the presence of an eigenvalue of the community matrix lying in the right half plane. The coefficients of the characteristic polynomial derived from community matrices contain information related to the specific matrix elements that play a greater destabilising role. Yet the destabilising circuits, or cycles, constructed by multiplying these elements together, form only a subset of all the feedback loops comprising a given system. This paper looks at the destabilising feedback loops in predator-prey, mutualistic and competitive systems in terms of sets of the matrix elements to explore how sign structure affects how the elements contribute to instability. This leads to quite rich combinatorial structure among the destabilising cycle sets as set size grows within the coefficients of the characteristic polynomial.
|
2310.01100
|
Houwen Xin
|
Lizhi Xin, Kevin Xin, Houwen Xin
|
A computational model for synaptic message transmission
| null | null | null | null |
q-bio.NC
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
A computational model incorporating insights from quantum theory is proposed
to describe and explain synaptic message transmission. We propose that,
together, neurotransmitters and their corresponding receptors function as a
physical "quantum decision tree" that "decides" whether to excite or inhibit
the synapse. When a neurotransmitter binds to its corresponding receptor, it is
the equivalent of randomly choosing among different "strategies"; a "strategy"
has two actions to take: excite or inhibit the synapse with a certain
probability. Genetic programming can be applied to learn the observed data
sequence and thereby simulate synaptic message transmission.
|
[
{
"created": "Mon, 2 Oct 2023 11:19:57 GMT",
"version": "v1"
}
] |
2023-10-03
|
[
[
"Xin",
"Lizhi",
""
],
[
"Xin",
"Kevin",
""
],
[
"Xin",
"Houwen",
""
]
] |
A computational model incorporating insights from quantum theory is proposed to describe and explain synaptic message transmission. We propose that, together, neurotransmitters and their corresponding receptors function as a physical "quantum decision tree" that "decides" whether to excite or inhibit the synapse. When a neurotransmitter binds to its corresponding receptor, it is the equivalent of randomly choosing among different "strategies"; a "strategy" has two actions to take: excite or inhibit the synapse with a certain probability. Genetic programming can be applied to learn the observed data sequence and thereby simulate synaptic message transmission.
|
2004.06311
|
Weijie Pang
|
Weijie Pang
|
Public Health Policy: COVID-19 Epidemic and SEIR Model with Asymptomatic
Viral Carriers
|
17 pages, 10 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We measure the effects of different public health regulations on the spread
of COVID-19, based on a SEIRA model -- a SEIR model that includes asymptomatic
transmission. The cumulative confirmed cases and deaths show a nonlinear
positive relationship with the asymptomatic rate. Based on this model, we
analyze the inhibitory effects on COVID-19 of three types of public health
policies, i.e., isolation of laboratory-confirmed cases, general personal
protection, and quarantine (lock-down). The simulations show that isolation
has limited effect on asymptomatic viral carriers. General personal protection
and quarantine have similar effects when their percentages of participants are
the same. When the total proportion of asymptomatic, mildly symptomatic, and
neglected patients is 40%, relying on the isolation policy alone may lead to
75% more infections, compared with general personal protection or quarantine
with an efficiency of 80%. Finally, we provide seven recommendations for
public health intervention before and during an airborne epidemic (COVID-19).
|
[
{
"created": "Tue, 14 Apr 2020 05:33:41 GMT",
"version": "v1"
}
] |
2020-04-15
|
[
[
"Pang",
"Weijie",
""
]
] |
We measure the effects of different public health regulations on the spread of COVID-19, based on a SEIRA model -- a SEIR model that includes asymptomatic transmission. The cumulative confirmed cases and deaths show a nonlinear positive relationship with the asymptomatic rate. Based on this model, we analyze the inhibitory effects on COVID-19 of three types of public health policies, i.e., isolation of laboratory-confirmed cases, general personal protection, and quarantine (lock-down). The simulations show that isolation has limited effect on asymptomatic viral carriers. General personal protection and quarantine have similar effects when their percentages of participants are the same. When the total proportion of asymptomatic, mildly symptomatic, and neglected patients is 40%, relying on the isolation policy alone may lead to 75% more infections, compared with general personal protection or quarantine with an efficiency of 80%. Finally, we provide seven recommendations for public health intervention before and during an airborne epidemic (COVID-19).
|
2009.10014
|
Madhav Marathe
|
Aniruddha Adiga, Devdatt Dubhashi, Bryan Lewis, Madhav Marathe,
Srinivasan Venkatramanan, Anil Vullikanti
|
Models for COVID-19 Pandemic: A Comparative Analysis
| null | null | null | null |
q-bio.PE cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The COVID-19 pandemic represents an unprecedented global health crisis in the
last 100 years. Its economic, social and health impact continues to grow and is
likely to end up as one of the worst global disasters since the 1918 pandemic
and the World Wars. Mathematical models have played an important role in the
ongoing crisis; they have been used to inform public policies and have been
instrumental in many of the social distancing measures that were instituted
worldwide.
In this article we review some of the important mathematical models used to
support the ongoing planning and response efforts. These models differ in their
use, their mathematical form and their scope.
|
[
{
"created": "Mon, 21 Sep 2020 16:42:00 GMT",
"version": "v1"
}
] |
2020-09-22
|
[
[
"Adiga",
"Aniruddha",
""
],
[
"Dubhashi",
"Devdatt",
""
],
[
"Lewis",
"Bryan",
""
],
[
"Marathe",
"Madhav",
""
],
[
"Venkatramanan",
"Srinivasan",
""
],
[
"Vullikanti",
"Anil",
""
]
] |
The COVID-19 pandemic represents an unprecedented global health crisis in the last 100 years. Its economic, social and health impact continues to grow and is likely to end up as one of the worst global disasters since the 1918 pandemic and the World Wars. Mathematical models have played an important role in the ongoing crisis; they have been used to inform public policies and have been instrumental in many of the social distancing measures that were instituted worldwide. In this article we review some of the important mathematical models used to support the ongoing planning and response efforts. These models differ in their use, their mathematical form and their scope.
|
1409.0590
|
Christoph Adami
|
Christoph Adami
|
Information-theoretic considerations concerning the origin of life
|
10 pages, one figure. Expanded discussion of experiments with
biopolymers
|
Origins of Life and Evolution of the Biospheres 45 (2015) 9439
|
10.1007/s11084-015-9439-0
| null |
q-bio.PE cs.IT math.IT nlin.AO q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Research investigating the origins of life usually focuses on exploring
possible life-bearing chemistries in the pre-biotic Earth, or else on synthetic
approaches. Little work has been done exploring fundamental issues concerning
the spontaneous emergence of life using only concepts (such as information and
evolution) that are divorced from any particular chemistry. Here, I advocate
studying the probability of spontaneous molecular self-replication as a
function of the information contained in the replicator, and the environmental
conditions that might enable this emergence. I show that (under certain
simplifying assumptions) the probability to discover a self-replicator by
chance depends exponentially on the rate of formation of the monomers. If the
rate at which monomers are formed is somewhat similar to the rate at which they
would occur in a self-replicating polymer, the likelihood to discover such a
replicator by chance is increased by many orders of magnitude. I document such
an increase in searches for a self-replicator within the digital life system
avida.
|
[
{
"created": "Tue, 2 Sep 2014 02:02:39 GMT",
"version": "v1"
},
{
"created": "Sat, 22 Nov 2014 18:09:39 GMT",
"version": "v2"
}
] |
2015-06-24
|
[
[
"Adami",
"Christoph",
""
]
] |
Research investigating the origins of life usually focuses on exploring possible life-bearing chemistries in the pre-biotic Earth, or else on synthetic approaches. Little work has been done exploring fundamental issues concerning the spontaneous emergence of life using only concepts (such as information and evolution) that are divorced from any particular chemistry. Here, I advocate studying the probability of spontaneous molecular self-replication as a function of the information contained in the replicator, and the environmental conditions that might enable this emergence. I show that (under certain simplifying assumptions) the probability to discover a self-replicator by chance depends exponentially on the rate of formation of the monomers. If the rate at which monomers are formed is somewhat similar to the rate at which they would occur in a self-replicating polymer, the likelihood to discover such a replicator by chance is increased by many orders of magnitude. I document such an increase in searches for a self-replicator within the digital life system avida.
|
1003.2879
|
Nicolas Brodu
|
Nicolas Brodu, Fabien Lotte, Anatole L\'ecuyer
|
Exploring Two Novel Features for EEG-based Brain-Computer Interfaces:
Multifractal Cumulants and Predictive Complexity
|
Updated with more subjects. Separated out the band-power comparisons
in a companion article after reviewer feedback. Source code and companion
article are available at
http://nicolas.brodu.numerimoire.net/en/recherche/publications
| null | null | null |
q-bio.NC physics.data-an
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we introduce two new features for the design of
electroencephalography (EEG) based Brain-Computer Interfaces (BCI): one feature
based on multifractal cumulants, and one feature based on the predictive
complexity of the EEG time series. The multifractal cumulants feature measures
the signal regularity, while the predictive complexity measures the difficulty
to predict the future of the signal based on its past, hence its degree of
complexity. We have conducted an evaluation of the performance of these two
novel features on EEG data corresponding to motor-imagery. We also compared
them to the most successful features used in the BCI field, namely the
Band-Power features. We evaluated these three kinds of features and their
combinations on EEG signals from 13 subjects. Results obtained show that our
novel features can lead to BCI designs with improved classification
performance, notably when using and combining the three kinds of feature
(band-power, multifractal cumulants, predictive complexity) together.
|
[
{
"created": "Mon, 15 Mar 2010 10:20:14 GMT",
"version": "v1"
},
{
"created": "Sat, 18 Sep 2010 15:52:09 GMT",
"version": "v2"
}
] |
2010-09-21
|
[
[
"Brodu",
"Nicolas",
""
],
[
"Lotte",
"Fabien",
""
],
[
"Lécuyer",
"Anatole",
""
]
] |
In this paper, we introduce two new features for the design of electroencephalography (EEG) based Brain-Computer Interfaces (BCI): one feature based on multifractal cumulants, and one feature based on the predictive complexity of the EEG time series. The multifractal cumulants feature measures the signal regularity, while the predictive complexity measures the difficulty to predict the future of the signal based on its past, hence its degree of complexity. We have conducted an evaluation of the performance of these two novel features on EEG data corresponding to motor-imagery. We also compared them to the most successful features used in the BCI field, namely the Band-Power features. We evaluated these three kinds of features and their combinations on EEG signals from 13 subjects. Results obtained show that our novel features can lead to BCI designs with improved classification performance, notably when using and combining the three kinds of feature (band-power, multifractal cumulants, predictive complexity) together.
|
1209.0985
|
Leonid Belous
|
Yu. V. Rubin, L. F. Belous
|
Molecular structure and interactions of nucleic acid components in
nanoparticles: ab initio calculations
|
8 pages, 4 figures; http://www.ujp.bitp.kiev.ua
|
Ukrainian Journal of Physics, 2012, Vol. 57, no. 7, pp. 723-731
| null | null |
q-bio.BM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Self-associates of nucleic acid components (stacking trimers and tetramers of
the base pairs of nucleic acids) and short fragments of nucleic acids are
nanoparticles (the linear sizes of these particles exceed 10 A). Modern
quantum-mechanical methods and software allow one to perform ab initio
calculations of systems consisting of 150-200 atoms with sufficiently large
basis sets (for example, 6-31G*). The aim of this work is to reveal the
peculiarities of the molecular and electronic structures, as well as the energy
features, of nanoparticles of nucleic acid components.
|
[
{
"created": "Wed, 5 Sep 2012 14:02:49 GMT",
"version": "v1"
}
] |
2012-09-06
|
[
[
"Rubin",
"Yu. V.",
""
],
[
"Belous",
"L. F.",
""
]
] |
Self-associates of nucleic acid components (stacking trimers and tetramers of the base pairs of nucleic acids) and short fragments of nucleic acids are nanoparticles (the linear sizes of these particles exceed 10 A). Modern quantum-mechanical methods and software allow one to perform ab initio calculations of systems consisting of 150-200 atoms with sufficiently large basis sets (for example, 6-31G*). The aim of this work is to reveal the peculiarities of the molecular and electronic structures, as well as the energy features, of nanoparticles of nucleic acid components.
|
1304.6098
|
Melissa Wilson Sayres
|
Melissa A. Wilson Sayres
|
Timing of ancient human Y lineage depends on the mutation rate: A
comment on Mendez et al
|
5 pages, 1 table
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Mendez et al. recently report the identification of a Y chromosome lineage
from an African American that is an outgroup to all other known Y haplotypes,
and report a time to most recent common ancestor, TMRCA, for human Y lineages
that is substantially longer than any previous estimate. The identification of
a novel Y haplotype is always exciting, and this haplotype, in particular, is
unique in its basal position on the Y haplotype tree. However, at 338 (237-581)
thousand years ago (kya), the extremely ancient TMRCA reported by Mendez et al.
is inconsistent with the known human fossil record (which estimates the age of
anatomically modern humans at 195 +- 5 kya), with estimates from mtDNA (176.6
+- 11.3 kya, and 204.9 (116.8-295.7) kya) and with population genetic theory.
The inflated TMRCA can quite easily be attributed to the extremely low Y
chromosome mutation rate used by the authors.
|
[
{
"created": "Mon, 22 Apr 2013 20:26:19 GMT",
"version": "v1"
}
] |
2013-04-24
|
[
[
"Sayres",
"Melissa A. Wilson",
""
]
] |
Mendez et al. recently report the identification of a Y chromosome lineage from an African American that is an outgroup to all other known Y haplotypes, and report a time to most recent common ancestor, TMRCA, for human Y lineages that is substantially longer than any previous estimate. The identification of a novel Y haplotype is always exciting, and this haplotype, in particular, is unique in its basal position on the Y haplotype tree. However, at 338 (237-581) thousand years ago (kya), the extremely ancient TMRCA reported by Mendez et al. is inconsistent with the known human fossil record (which estimates the age of anatomically modern humans at 195 +- 5 kya), with estimates from mtDNA (176.6 +- 11.3 kya, and 204.9 (116.8-295.7) kya) and with population genetic theory. The inflated TMRCA can quite easily be attributed to the extremely low Y chromosome mutation rate used by the authors.
|
1206.0362
|
Philippe Robert S.
|
Vincent Fromion and Emanuele Leoncini and Philippe Robert
|
Stochastic Gene Expression in Cells: A Point Process Approach
| null | null | null | null |
q-bio.QM math.PR
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
This paper investigates the stochastic fluctuations of the number of copies
of a given protein in a cell. This problem has already been addressed in the
past, and closed-form expressions of the mean and variance have been obtained
for a simplified stochastic model of gene expression. These results were
obtained under the assumption that the durations of all the protein production
steps are exponentially distributed. In such a case, a Markovian approach (via
Fokker-Planck equations) is used to derive analytic formulas for the mean and
the variance of the number of proteins at equilibrium. This assumption is
however not totally satisfactory from a modeling point of view, since the
distribution of the duration of some steps is more likely to be Gaussian, if
not almost deterministic. In such a setting, Markovian methods can no longer
be used. A finer characterization of the fluctuations of the number of
proteins is therefore of primary interest to understand the general economy
of the cell. In this paper, we propose a new approach, based on marked Poisson
point processes, which allows us to remove the exponential assumption. It is
applied in the framework of the classical three-stage models of the
literature: transcription, translation and degradation. The interest of the
method is shown by recovering the classical results under the assumption that
all the durations are exponentially distributed, but also by deriving new
analytic formulas when some of the distributions are no longer exponential.
Our results show in particular that the exponential assumption may,
surprisingly, significantly underestimate the variance of the number of
proteins when some steps are in fact not exponentially distributed. This
counter-intuitive result stresses the importance of the statistical assumptions
in the protein production process.
|
[
{
"created": "Sat, 2 Jun 2012 10:47:00 GMT",
"version": "v1"
}
] |
2012-06-05
|
[
[
"Fromion",
"Vincent",
""
],
[
"Leoncini",
"Emanuele",
""
],
[
"Robert",
"Philippe",
""
]
] |
This paper investigates the stochastic fluctuations of the number of copies of a given protein in a cell. This problem has already been addressed in the past, and closed-form expressions of the mean and variance have been obtained for a simplified stochastic model of gene expression. These results were obtained under the assumption that the durations of all the protein production steps are exponentially distributed. In such a case, a Markovian approach (via Fokker-Planck equations) is used to derive analytic formulas for the mean and the variance of the number of proteins at equilibrium. This assumption is however not totally satisfactory from a modeling point of view, since the distribution of the duration of some steps is more likely to be Gaussian, if not almost deterministic. In such a setting, Markovian methods can no longer be used. A finer characterization of the fluctuations of the number of proteins is therefore of primary interest to understand the general economy of the cell. In this paper, we propose a new approach, based on marked Poisson point processes, which allows us to remove the exponential assumption. It is applied in the framework of the classical three-stage models of the literature: transcription, translation and degradation. The interest of the method is shown by recovering the classical results under the assumption that all the durations are exponentially distributed, but also by deriving new analytic formulas when some of the distributions are no longer exponential. Our results show in particular that the exponential assumption may, surprisingly, significantly underestimate the variance of the number of proteins when some steps are in fact not exponentially distributed. This counter-intuitive result stresses the importance of the statistical assumptions in the protein production process.
|
2003.03223
|
Andrea Gianotti Prof
|
Mattia Di Nunzio, Gianfranco Picone, Federica Pasini, Elena Chiarello,
Maria Fiorenza Caboni, Francesco Capozzi, Andrea Gianotti, Alessandra Bordoni
|
Olive oil by-product as functional ingredient in bakery products
| null |
Food Research International, 131, 108940 (2020)
|
10.1016/j.foodres.2019.108940
| null |
q-bio.TO q-bio.SC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
By-products represent a major disposal problem for the food industry, but
they are also promising sources of bioactive compounds. Olive pomace, one of
the main by-products of olive oil production, is a potential low-cost,
phenol-rich ingredient for the formulation of functional food. In this study,
bakery products enriched with defatted olive pomace powder and their
conventional counterparts were chemically characterized and in vitro digested.
The bioaccessible fractions were supplemented to cultured human intestinal
cells exposed to an inflammatory stimulus, and the anti-inflammatory effect and
metabolome modification were evaluated. Although in all bakery products the
enrichment with olive pomace significantly increased the total phenolic
content, this increase was paralleled by an enhanced anti-inflammatory activity
only in conventionally fermented bread. Therefore, while confirming olive oil
by-products as functional ingredients for bakery food enrichment, our data
highlight that changes in chemical composition cannot predict changes in
functionality. Functionality should be first evaluated in biological in vitro
systems, and then confirmed in human intervention studies.
|
[
{
"created": "Fri, 28 Feb 2020 11:08:34 GMT",
"version": "v1"
}
] |
2020-06-29
|
[
[
"Di Nunzio",
"Mattia",
""
],
[
"Picone",
"Gianfranco",
""
],
[
"Pasini",
"Federica",
""
],
[
"Chiarello",
"Elena",
""
],
[
"Caboni",
"Maria Fiorenza",
""
],
[
"Capozzi",
"Francesco",
""
],
[
"Gianotti",
"Andrea",
""
],
[
"Bordoni",
"Alessandra",
""
]
] |
By-products represent a major disposal problem for the food industry, but they are also promising sources of bioactive compounds. Olive pomace, one of the main by-products of olive oil production, is a potential low-cost, phenol-rich ingredient for the formulation of functional food. In this study, bakery products enriched with defatted olive pomace powder and their conventional counterparts were chemically characterized and in vitro digested. The bioaccessible fractions were supplemented to cultured human intestinal cells exposed to an inflammatory stimulus, and the anti-inflammatory effect and metabolome modification were evaluated. Although in all bakery products the enrichment with olive pomace significantly increased the total phenolic content, this increase was paralleled by an enhanced anti-inflammatory activity only in conventionally fermented bread. Therefore, while confirming olive oil by-products as functional ingredients for bakery food enrichment, our data highlight that changes in chemical composition cannot predict changes in functionality. Functionality should be first evaluated in biological in vitro systems, and then confirmed in human intervention studies.
|
1709.10483
|
Alexandra Badea
|
Robert J Anderson, James J Cook, Natalie A Delpratt, John C Nouls, Bin
Gu, James O McNamara, Brian B Avants, G Allan Johnson, Alexandra Badea
|
Small Animal Multivariate Brain Analysis (SAMBA): A High Throughput
Pipeline with a Validation Framework
|
48 pages, 9 Figures, 3 Tables, 1 Suppl Table, 7 SupplementaryTables
| null | null | null |
q-bio.QM q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
While many neuroscience questions aim to understand the human brain, much
current knowledge has been gained using animal models, which replicate genetic,
structural, and connectivity aspects of the human brain. While voxel-based
analysis (VBA) of preclinical magnetic resonance images is widely used, a
thorough examination of the statistical robustness, stability, and error rates
is hindered by high computational demands of processing large arrays, and the
many parameters involved. Thus, workflows are often based on intuition or
experience, while preclinical validation studies remain scarce. To increase
throughput and reproducibility of quantitative small animal brain studies, we
have developed a publicly shared, high throughput VBA pipeline in a
high-performance computing environment, called SAMBA. The increased
computational efficiency allowed large multidimensional arrays to be processed
in 1-3 days, a task that previously took ~1 month. To quantify the variability
and reliability of preclinical VBA in rodent models, we propose a validation
framework consisting of morphological phantoms, and four metrics. This
addresses several sources that impact VBA results, including registration and
template construction strategies. We have used this framework to inform the VBA
workflow parameters in a VBA study for a mouse model of epilepsy. We also
present initial efforts towards standardizing small animal neuroimaging data
in a similar fashion to human neuroimaging. We conclude that verifying the
accuracy of VBA merits attention, and should be the focus of a broader effort
within the community. The proposed framework promotes consistent quality
assurance of VBA in preclinical neuroimaging; facilitating the creation and
communication of robust results.
|
[
{
"created": "Fri, 29 Sep 2017 16:28:46 GMT",
"version": "v1"
},
{
"created": "Mon, 21 May 2018 13:50:16 GMT",
"version": "v2"
}
] |
2018-05-22
|
[
[
"Anderson",
"Robert J",
""
],
[
"Cook",
"James J",
""
],
[
"Delpratt",
"Natalie A",
""
],
[
"Nouls",
"John C",
""
],
[
"Gu",
"Bin",
""
],
[
"McNamara",
"James O",
""
],
[
"Avants",
"Brian B",
""
],
[
"Johnson",
"G Allan",
""
],
[
"Badea",
"Alexandra",
""
]
] |
While many neuroscience questions aim to understand the human brain, much current knowledge has been gained using animal models, which replicate genetic, structural, and connectivity aspects of the human brain. While voxel-based analysis (VBA) of preclinical magnetic resonance images is widely used, a thorough examination of the statistical robustness, stability, and error rates is hindered by high computational demands of processing large arrays, and the many parameters involved. Thus, workflows are often based on intuition or experience, while preclinical validation studies remain scarce. To increase throughput and reproducibility of quantitative small animal brain studies, we have developed a publicly shared, high throughput VBA pipeline in a high-performance computing environment, called SAMBA. The increased computational efficiency allowed large multidimensional arrays to be processed in 1-3 days, a task that previously took ~1 month. To quantify the variability and reliability of preclinical VBA in rodent models, we propose a validation framework consisting of morphological phantoms, and four metrics. This addresses several sources that impact VBA results, including registration and template construction strategies. We have used this framework to inform the VBA workflow parameters in a VBA study for a mouse model of epilepsy. We also present initial efforts towards standardizing small animal neuroimaging data in a similar fashion to human neuroimaging. We conclude that verifying the accuracy of VBA merits attention, and should be the focus of a broader effort within the community. The proposed framework promotes consistent quality assurance of VBA in preclinical neuroimaging; facilitating the creation and communication of robust results.
|
1912.11356
|
Muhammad Nabeel Asim
|
Muhammad Nabeel Asima, Muhammad Imran Malik, Andreas Dengela, Sheraz
Ahmed
|
A Robust and Precise ConvNet for small non-coding RNA classification
(RPC-snRC)
|
34 pages
| null | null | null |
q-bio.GN cs.LG
|
http://creativecommons.org/licenses/by/4.0/
|
Functional or non-coding RNAs are attracting more attention as they are now
potentially considered valuable resources in the development of new drugs
intended to cure several human diseases. The identification of drugs targeting
the regulatory circuits of functional RNAs depends on knowing their family,
a task which is known as RNA sequence classification. State-of-the-art small
non-coding RNA classification methodologies take secondary structural
features as input. However, such feature extraction approaches only take
global characteristics into account and completely overlook the correlative
effects of local structures. Furthermore, secondary-structure-based
approaches incorporate a high-dimensional feature space, which proves
computationally expensive. This paper proposes a novel Robust and Precise
ConvNet (RPC-snRC) methodology which classifies small non-coding RNA
sequences into their relevant families by utilizing the primary sequence of
RNAs. The RPC-snRC methodology learns hierarchical representations of
features by utilizing positional and occurrence information of nucleotides.
To avoid exploding and vanishing gradient problems, we use an approach
similar to DenseNet, in which gradients can flow directly from subsequent
layers to previous layers. In order to assess the effectiveness of deeper
architectures for small non-coding RNA classification, we also adapted two
ResNet architectures with different numbers of layers. Experimental results
on a benchmark small non-coding RNA dataset show that our proposed
methodology not only outperforms existing small non-coding RNA
classification approaches by a significant margin of 10% but also outshines
the adapted ResNet architectures.
|
[
{
"created": "Mon, 23 Dec 2019 08:33:42 GMT",
"version": "v1"
}
] |
2019-12-25
|
[
[
"Asima",
"Muhammad Nabeel",
""
],
[
"Malik",
"Muhammad Imran",
""
],
[
"Dengela",
"Andreas",
""
],
[
"Ahmed",
"Sheraz",
""
]
] |
Functional or non-coding RNAs are attracting more attention as they are now potentially considered valuable resources in the development of new drugs intended to cure several human diseases. The identification of drugs targeting the regulatory circuits of functional RNAs depends on knowing their family, a task which is known as RNA sequence classification. State-of-the-art small non-coding RNA classification methodologies take secondary structural features as input. However, such feature extraction approaches only take global characteristics into account and completely overlook the correlative effects of local structures. Furthermore, secondary-structure-based approaches incorporate a high-dimensional feature space, which proves computationally expensive. This paper proposes a novel Robust and Precise ConvNet (RPC-snRC) methodology which classifies small non-coding RNA sequences into their relevant families by utilizing the primary sequence of RNAs. The RPC-snRC methodology learns hierarchical representations of features by utilizing positional and occurrence information of nucleotides. To avoid exploding and vanishing gradient problems, we use an approach similar to DenseNet, in which gradients can flow directly from subsequent layers to previous layers. In order to assess the effectiveness of deeper architectures for small non-coding RNA classification, we also adapted two ResNet architectures with different numbers of layers. Experimental results on a benchmark small non-coding RNA dataset show that our proposed methodology not only outperforms existing small non-coding RNA classification approaches by a significant margin of 10% but also outshines the adapted ResNet architectures.
|
1705.10516
|
Daniel Hoffmann
|
Farnoush Farahpour, Mohammadkarim Saeedghalati, Verena Brauer, Daniel
Hoffmann
|
Trade-off shapes diversity in eco-evolutionary dynamics
| null | null |
10.7554/eLife.36273
| null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We introduce an Interaction and Trade-off based Eco-Evolutionary Model
(ITEEM), in which species are competing for common resources in a well-mixed
system, and their evolution in interaction trait space is subject to a
life-history trade-off between replication rate and competitive ability. We
demonstrate that the strength of the trade-off has a fundamental impact on
eco-evolutionary dynamics, as it imposes four phases of diversity, including a
sharp phase transition. Despite its minimalism, ITEEM produces without further
ad hoc features a remarkable range of observed patterns of eco-evolutionary
dynamics. Most notably we find self-organization towards structured communities
with high and sustainable diversity, in which competing species form
interaction cycles similar to rock-paper-scissors games.
|
[
{
"created": "Tue, 30 May 2017 09:23:00 GMT",
"version": "v1"
}
] |
2018-09-07
|
[
[
"Farahpour",
"Farnoush",
""
],
[
"Saeedghalati",
"Mohammadkarim",
""
],
[
"Brauer",
"Verena",
""
],
[
"Hoffmann",
"Daniel",
""
]
] |
We introduce an Interaction and Trade-off based Eco-Evolutionary Model (ITEEM), in which species are competing for common resources in a well-mixed system, and their evolution in interaction trait space is subject to a life-history trade-off between replication rate and competitive ability. We demonstrate that the strength of the trade-off has a fundamental impact on eco-evolutionary dynamics, as it imposes four phases of diversity, including a sharp phase transition. Despite its minimalism, ITEEM produces without further ad hoc features a remarkable range of observed patterns of eco-evolutionary dynamics. Most notably we find self-organization towards structured communities with high and sustainable diversity, in which competing species form interaction cycles similar to rock-paper-scissors games.
|
1702.00031
|
Lee Worden
|
Lee Worden, Travis C. Porco
|
Products of Compartmental Models in Epidemiology
| null | null | null | null |
q-bio.PE q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In this paper, we show that many structured epidemic models may be described
using a straightforward product structure. Such products, derived from products
of directed graphs, may represent useful refinements including geographic and
demographic structure, age structure, gender, risk groups, or immunity status.
Extension to multi-strain dynamics, i.e. pathogen heterogeneity, is also shown
to be feasible in this framework. Systematic use of such products may aid in
model development and exploration, can yield insight, and could form the basis
of a systematic approach to numerical structural sensitivity analysis.
|
[
{
"created": "Tue, 31 Jan 2017 19:33:20 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Jun 2017 23:40:49 GMT",
"version": "v2"
}
] |
2017-06-30
|
[
[
"Worden",
"Lee",
""
],
[
"Porco",
"Travis C.",
""
]
] |
In this paper, we show that many structured epidemic models may be described using a straightforward product structure. Such products, derived from products of directed graphs, may represent useful refinements including geographic and demographic structure, age structure, gender, risk groups, or immunity status. Extension to multi-strain dynamics, i.e. pathogen heterogeneity, is also shown to be feasible in this framework. Systematic use of such products may aid in model development and exploration, can yield insight, and could form the basis of a systematic approach to numerical structural sensitivity analysis.
|
0902.3147
|
Claude Pasquier
|
Claude Pasquier (IBDC)
|
Biological data integration using Semantic Web technologies
| null |
Biochimie 90, 4 (2008) 584-94
|
10.1016/j.biochi.2008.02.007
| null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Current research in biology heavily depends on the availability and efficient
use of information. In order to build new knowledge, various sources of
biological data must often be combined. Semantic Web technologies, which
provide a common framework allowing data to be shared and reused between
applications, can be applied to the management of disseminated biological data.
However, due to some specificities of biological data, the application of these
technologies to life science constitutes a real challenge. Through a use case
of biological data integration, we show in this paper that current Semantic Web
technologies are starting to mature and can be applied to the development of
large applications. However, in order to get the best from these technologies,
improvements are needed both at the level of tool performance and knowledge
modeling.
|
[
{
"created": "Wed, 18 Feb 2009 14:05:06 GMT",
"version": "v1"
}
] |
2009-02-19
|
[
[
"Pasquier",
"Claude",
"",
"IBDC"
]
] |
Current research in biology heavily depends on the availability and efficient use of information. In order to build new knowledge, various sources of biological data must often be combined. Semantic Web technologies, which provide a common framework allowing data to be shared and reused between applications, can be applied to the management of disseminated biological data. However, due to some specificities of biological data, the application of these technologies to life science constitutes a real challenge. Through a use case of biological data integration, we show in this paper that current Semantic Web technologies are starting to mature and can be applied to the development of large applications. However, in order to get the best from these technologies, improvements are needed both at the level of tool performance and knowledge modeling.
|
0704.1811
|
Samarth Swarup
|
Samarth Swarup and Les Gasser
|
Unifying Evolutionary and Network Dynamics
|
11 pages, 12 figures, Accepted for publication in Physical Review E
| null |
10.1103/PhysRevE.75.066114
| null |
q-bio.QM q-bio.PE
| null |
Many important real-world networks manifest "small-world" properties such as
scale-free degree distributions, small diameters, and clustering. The most
common model of growth for these networks is "preferential attachment", where
nodes acquire new links with probability proportional to the number of links
they already have. We show that preferential attachment is a special case of
the process of molecular evolution. We present a new single-parameter model of
network growth that unifies varieties of preferential attachment with the
quasispecies equation (which models molecular evolution), and also with the
Erdos-Renyi random graph model. We suggest some properties of evolutionary
models that might be applied to the study of networks. We also derive the form
of the degree distribution resulting from our algorithm, and we show through
simulations that the process also models aspects of network growth. The
unification allows mathematical machinery developed for evolutionary dynamics
to be applied in the study of network dynamics, and vice versa.
|
[
{
"created": "Fri, 13 Apr 2007 19:56:37 GMT",
"version": "v1"
}
] |
2009-11-13
|
[
[
"Swarup",
"Samarth",
""
],
[
"Gasser",
"Les",
""
]
] |
Many important real-world networks manifest "small-world" properties such as scale-free degree distributions, small diameters, and clustering. The most common model of growth for these networks is "preferential attachment", where nodes acquire new links with probability proportional to the number of links they already have. We show that preferential attachment is a special case of the process of molecular evolution. We present a new single-parameter model of network growth that unifies varieties of preferential attachment with the quasispecies equation (which models molecular evolution), and also with the Erdos-Renyi random graph model. We suggest some properties of evolutionary models that might be applied to the study of networks. We also derive the form of the degree distribution resulting from our algorithm, and we show through simulations that the process also models aspects of network growth. The unification allows mathematical machinery developed for evolutionary dynamics to be applied in the study of network dynamics, and vice versa.
|
0908.4508
|
Peter Csermely
|
Gabor I. Simko, David Gyurko, Daniel V. Veres, Tibor Nanasi, Peter
Csermely
|
Network strategies to understand the aging process and help age-related
drug design
|
an invited paper to Genome Medicine with 8 pages, 2 figures, 1 table
and 46 references
|
Genome Medicine (2009) 1, 90
|
10.1186/gm90
| null |
q-bio.MN physics.bio-ph q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Recent studies have demonstrated that network approaches are highly
appropriate tools to understand the extreme complexity of the aging process.
The generality of the network concept helps to define and study the aging of
technological, social networks and ecosystems, which may give novel concepts to
cure age-related diseases. The current review focuses on the role of
protein-protein interaction networks (interactomes) in aging. Hubs and
inter-modular elements of both interactomes and signaling networks are key
regulators of the aging process. Aging induces an increase in the permeability
of several cellular compartments, such as the cell nucleus, introducing gross
changes in the representation of network structures. The large overlap between
aging genes and genes of age-related major diseases makes drugs which aid
healthy aging promising candidates for the prevention and treatment of
age-related diseases, such as cancer, atherosclerosis, diabetes and
neurodegenerative disorders. We also discuss a number of possible research
options to further explore the potential of the network concept in this
important field, and show that multi-target drugs (representing
"magic-buckshots" instead of the traditional "magic bullets") may become an
especially useful class of age-related future drugs.
|
[
{
"created": "Mon, 31 Aug 2009 11:51:36 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Sep 2009 20:30:40 GMT",
"version": "v2"
}
] |
2009-09-28
|
[
[
"Simko",
"Gabor I.",
""
],
[
"Gyurko",
"David",
""
],
[
"Veres",
"Daniel V.",
""
],
[
"Nanasi",
"Tibor",
""
],
[
"Csermely",
"Peter",
""
]
] |
Recent studies have demonstrated that network approaches are highly appropriate tools to understand the extreme complexity of the aging process. The generality of the network concept helps to define and study the aging of technological, social networks and ecosystems, which may give novel concepts to cure age-related diseases. The current review focuses on the role of protein-protein interaction networks (interactomes) in aging. Hubs and inter-modular elements of both interactomes and signaling networks are key regulators of the aging process. Aging induces an increase in the permeability of several cellular compartments, such as the cell nucleus, introducing gross changes in the representation of network structures. The large overlap between aging genes and genes of age-related major diseases makes drugs which aid healthy aging promising candidates for the prevention and treatment of age-related diseases, such as cancer, atherosclerosis, diabetes and neurodegenerative disorders. We also discuss a number of possible research options to further explore the potential of the network concept in this important field, and show that multi-target drugs (representing "magic-buckshots" instead of the traditional "magic bullets") may become an especially useful class of age-related future drugs.
|
2005.07856
|
Kai Guo
|
Zhihan Wang, Kai Guo, Pan Gao, Qinqin Pu, Min Wu, Changlong Li and
Junguk Hur
|
Identification of Repurposable Drugs and Adverse Drug Reactions for
Various Courses of COVID-19 Based on Single-Cell RNA Sequencing Data
| null | null | null | null |
q-bio.GN
|
http://creativecommons.org/licenses/by/4.0/
|
Coronavirus disease 2019 (COVID-19) has impacted almost every part of human
life worldwide, posing a massive threat to human health. There is no specific
drug for COVID-19, highlighting the urgent need for the development of
effective therapeutics. To identify potentially repurposable drugs, we employed
a systematic approach to mine candidates from U.S. FDA-approved drugs and
preclinical small-molecule compounds by integrating the gene expression
perturbation data for chemicals from the Library of Integrated Network-Based
Cellular Signatures project with a publicly available single-cell RNA
sequencing dataset from mild and severe COVID-19 patients. We identified 281
FDA-approved drugs that have the potential to be effective against SARS-CoV-2
infection, 16 of which are currently undergoing clinical trials to evaluate
their efficacy against COVID-19. We experimentally tested the inhibitory
effects of tyrphostin-AG-1478 and brefeldin-a on the replication of the
single-stranded ribonucleic acid (ssRNA) virus influenza A virus. In
conclusion, we have identified a list of repurposable anti-SARS-CoV-2 drugs
using a systems biology approach.
|
[
{
"created": "Sat, 16 May 2020 03:30:17 GMT",
"version": "v1"
},
{
"created": "Fri, 4 Dec 2020 16:25:20 GMT",
"version": "v2"
}
] |
2020-12-07
|
[
[
"Wang",
"Zhihan",
""
],
[
"Guo",
"Kai",
""
],
[
"Gao",
"Pan",
""
],
[
"Pu",
"Qinqin",
""
],
[
"Wu",
"Min",
""
],
[
"Li",
"Changlong",
""
],
[
"Hur",
"Junguk",
""
]
] |
Coronavirus disease 2019 (COVID-19) has impacted almost every part of human life worldwide, posing a massive threat to human health. There is no specific drug for COVID-19, highlighting the urgent need for the development of effective therapeutics. To identify potentially repurposable drugs, we employed a systematic approach to mine candidates from U.S. FDA-approved drugs and preclinical small-molecule compounds by integrating the gene expression perturbation data for chemicals from the Library of Integrated Network-Based Cellular Signatures project with a publicly available single-cell RNA sequencing dataset from mild and severe COVID-19 patients. We identified 281 FDA-approved drugs that have the potential to be effective against SARS-CoV-2 infection, 16 of which are currently undergoing clinical trials to evaluate their efficacy against COVID-19. We experimentally tested the inhibitory effects of tyrphostin-AG-1478 and brefeldin-a on the replication of the single-stranded ribonucleic acid (ssRNA) virus influenza A virus. In conclusion, we have identified a list of repurposable anti-SARS-CoV-2 drugs using a systems biology approach.
|
2105.07390
|
Saurav Mandal PhD
|
Saurav Mandal, Akshansh Gupta and Waribam Pratibha Chanu
|
Survival prediction of head and neck squamous cell carcinoma using
machine learning models
|
3 figures
| null | null | null |
q-bio.GN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Head and Neck Squamous Cell Carcinoma (HNSCC) is one of the most distressing
cancer types, leading to acute pain and affecting speech and primary survival
functions such as swallowing and breathing. The morbidity and mortality of
HNSCC patients have not significantly improved even though there have been
advancements in surgical and radiotherapy treatments. The high mortality may
be attributed to the complexity of and significant variation in clinical
outcomes. Therefore, it is important to increase the accuracy of predicting
the outcome of cancer survival. Few cancer survival prediction models for
HNSCC have been proposed so far. In this study, genomic data (whole exome
sequencing) are integrated with clinical data to improve the performance of
the prediction model. The somatic mutations of every patient are processed
using the Multifractal Detrended Fluctuation Analysis (MFDFA) algorithm, and
the parameter values of the Fractal Dimension (Dq) are included along with
clinical data for cancer survival prediction. Feature ranking shows that the
newly engineered feature is one of the most important features in the
prediction model. In order to improve the performance index of the models,
hyperparameters were also tuned in all the classifiers considered. 10-fold
cross validation is implemented, and XGBoost (98% AUROC, 94% precision, and
93% recall) proves to be the best classifier, followed by Random Forest (93%
AUROC, 93% precision, and 93% recall), Support Vector Machine (84% AUROC, 79%
precision, and 79% recall), and Logistic Regression (80% AUROC, 77%
precision, and 76% recall).
|
[
{
"created": "Sun, 16 May 2021 09:34:29 GMT",
"version": "v1"
}
] |
2021-05-18
|
[
[
"Mandal",
"Saurav",
""
],
[
"Gupta",
"Akshansh",
""
],
[
"Chanu",
"Waribam Pratibha",
""
]
] |
Head and Neck Squamous Cell Carcinoma (HNSCC) is one of the most distressing cancer types, leading to acute pain and affecting speech and primary survival functions such as swallowing and breathing. The morbidity and mortality of HNSCC patients have not significantly improved even though there have been advancements in surgical and radiotherapy treatments. The high mortality may be attributed to the complexity of and significant variation in clinical outcomes. Therefore, it is important to increase the accuracy of predicting the outcome of cancer survival. Few cancer survival prediction models for HNSCC have been proposed so far. In this study, genomic data (whole exome sequencing) are integrated with clinical data to improve the performance of the prediction model. The somatic mutations of every patient are processed using the Multifractal Detrended Fluctuation Analysis (MFDFA) algorithm, and the parameter values of the Fractal Dimension (Dq) are included along with clinical data for cancer survival prediction. Feature ranking shows that the newly engineered feature is one of the most important features in the prediction model. In order to improve the performance index of the models, hyperparameters were also tuned in all the classifiers considered. 10-fold cross validation is implemented, and XGBoost (98% AUROC, 94% precision, and 93% recall) proves to be the best classifier, followed by Random Forest (93% AUROC, 93% precision, and 93% recall), Support Vector Machine (84% AUROC, 79% precision, and 79% recall), and Logistic Regression (80% AUROC, 77% precision, and 76% recall).
|
1711.09015
|
Dayun Yan
|
Dayun Yan, Jonathan H. Sherman, Jerome Canady, Barry Trink, Michael
Keidar
|
The cellular ROS-scavenging function, a key factor determining the
specific vulnerability of cancer cells to cold atmospheric plasma in vitro
| null | null | null | null |
q-bio.CB physics.bio-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Cold atmospheric plasma (CAP) has shown its promising application in cancer
treatment both in vitro and in vivo. However, the anti-cancer mechanism is
still largely unknown. CAP may kill cancer cells via triggering the rise of
intracellular ROS, DNA damage, mitochondrial damage, or cellular membrane
damage. While the specific vulnerability of cancer cells to CAP has been
observed, the underlying mechanism of such cell-based specific vulnerability
to CAP is completely unknown. Here, through a comparison of CAP treatment and
H2O2 treatment on 10 different cancer cell lines in vitro, we observed that
the H2O2 consumption speed of cancer cells was strongly correlated with the
cytotoxicity of CAP treatment on cancer cells. Cancer cells that clear
extracellular H2O2 more quickly are more resistant to the cytotoxicity of CAP
treatment. This finding strongly indicates that the anti-oxidant system in
cancer cells plays a key role in the specific vulnerability of cancer cells
to CAP treatment in vitro.
|
[
{
"created": "Fri, 24 Nov 2017 15:18:52 GMT",
"version": "v1"
}
] |
2017-11-27
|
[
[
"Yan",
"Dayun",
""
],
[
"Sherman",
"Jonathan H.",
""
],
[
"Canady",
"Jerome",
""
],
[
"Trink",
"Barry",
""
],
[
"Keidar",
"Michael",
""
]
] |
Cold atmospheric plasma (CAP) has shown its promising application in cancer treatment both in vitro and in vivo. However, the anti-cancer mechanism is still largely unknown. CAP may kill cancer cells via triggering the rise of intracellular ROS, DNA damage, mitochondrial damage, or cellular membrane damage. While the specific vulnerability of cancer cells to CAP has been observed, the underlying mechanism of such cell-based specific vulnerability to CAP is completely unknown. Here, through a comparison of CAP treatment and H2O2 treatment on 10 different cancer cell lines in vitro, we observed that the H2O2 consumption speed of cancer cells was strongly correlated with the cytotoxicity of CAP treatment on cancer cells. Cancer cells that clear extracellular H2O2 more quickly are more resistant to the cytotoxicity of CAP treatment. This finding strongly indicates that the anti-oxidant system in cancer cells plays a key role in the specific vulnerability of cancer cells to CAP treatment in vitro.
|
1111.6353
|
Kevin Lin
|
Kevin K. Lin, Kyle C. A. Wedgwood, Stephen Coombes, Lai-Sang Young
|
Limitations of perturbative techniques in the analysis of rhythms and
oscillations
| null | null | null | null |
q-bio.NC
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Perturbation theory is an important tool in the analysis of oscillators and
their response to external stimuli. It is predicated on the assumption that the
perturbations in question are "sufficiently weak", an assumption that is not
always valid when perturbative methods are applied. In this paper, we identify
a number of concrete dynamical scenarios in which a standard perturbative
technique, based on the infinitesimal phase response curve (PRC), is shown to
give different predictions than the full model. Shear-induced chaos, i.e.,
chaotic behavior that results from the amplification of small perturbations by
underlying shear, is missed entirely by the PRC. We show also that the presence
of "sticky" phase-space structures tends to cause perturbative techniques to
overestimate the frequencies and regularity of the oscillations. The phenomena
we describe can all be observed in a simple 2D neuron model, which we choose
for illustration as the PRC is widely used in mathematical neuroscience.
|
[
{
"created": "Mon, 28 Nov 2011 06:28:53 GMT",
"version": "v1"
},
{
"created": "Wed, 18 Jan 2012 04:02:23 GMT",
"version": "v2"
}
] |
2012-01-19
|
[
[
"Lin",
"Kevin K.",
""
],
[
"Wedgwood",
"Kyle C. A.",
""
],
[
"Coombes",
"Stephen",
""
],
[
"Young",
"Lai-Sang",
""
]
] |
Perturbation theory is an important tool in the analysis of oscillators and their response to external stimuli. It is predicated on the assumption that the perturbations in question are "sufficiently weak", an assumption that is not always valid when perturbative methods are applied. In this paper, we identify a number of concrete dynamical scenarios in which a standard perturbative technique, based on the infinitesimal phase response curve (PRC), is shown to give different predictions than the full model. Shear-induced chaos, i.e., chaotic behavior that results from the amplification of small perturbations by underlying shear, is missed entirely by the PRC. We show also that the presence of "sticky" phase-space structures tends to cause perturbative techniques to overestimate the frequencies and regularity of the oscillations. The phenomena we describe can all be observed in a simple 2D neuron model, which we choose for illustration as the PRC is widely used in mathematical neuroscience.
|
1610.02864
|
Roshan Prizak
|
Tamar Friedlander, Roshan Prizak, Nicholas H. Barton, and Ga\v{s}per
Tka\v{c}ik
|
Evolution of new regulatory functions on biophysically realistic fitness
landscapes
| null | null |
10.1038/s41467-017-00238-8
| null |
q-bio.PE q-bio.MN
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Regulatory networks consist of interacting molecules with a high degree of
mutual chemical specificity. How can these molecules evolve when their function
depends on maintenance of interactions with cognate partners and simultaneous
avoidance of deleterious "crosstalk" with non-cognate molecules? Although
physical models of molecular interactions provide a framework in which
co-evolution of network components can be analyzed, most theoretical studies
have focused on the evolution of individual alleles, neglecting the network. In
contrast, we study the elementary step in the evolution of gene regulatory
networks: duplication of a transcription factor followed by selection for TFs
to specialize their inputs as well as the regulation of their downstream genes.
We show how to coarse grain the complete, biophysically realistic
genotype-phenotype map for this process into macroscopic functional outcomes
and quantify the probability of attaining each. We determine which evolutionary
and biophysical parameters bias evolutionary trajectories towards fast
emergence of new functions and show that this can be greatly facilitated by the
availability of "promiscuity-promoting" mutations that affect TF specificity.
|
[
{
"created": "Mon, 10 Oct 2016 11:47:01 GMT",
"version": "v1"
},
{
"created": "Mon, 27 Feb 2017 10:31:33 GMT",
"version": "v2"
}
] |
2017-11-01
|
[
[
"Friedlander",
"Tamar",
""
],
[
"Prizak",
"Roshan",
""
],
[
"Barton",
"Nicholas H.",
""
],
[
"Tkačik",
"Gašper",
""
]
] |
Regulatory networks consist of interacting molecules with a high degree of mutual chemical specificity. How can these molecules evolve when their function depends on maintenance of interactions with cognate partners and simultaneous avoidance of deleterious "crosstalk" with non-cognate molecules? Although physical models of molecular interactions provide a framework in which co-evolution of network components can be analyzed, most theoretical studies have focused on the evolution of individual alleles, neglecting the network. In contrast, we study the elementary step in the evolution of gene regulatory networks: duplication of a transcription factor followed by selection for TFs to specialize their inputs as well as the regulation of their downstream genes. We show how to coarse grain the complete, biophysically realistic genotype-phenotype map for this process into macroscopic functional outcomes and quantify the probability of attaining each. We determine which evolutionary and biophysical parameters bias evolutionary trajectories towards fast emergence of new functions and show that this can be greatly facilitated by the availability of "promiscuity-promoting" mutations that affect TF specificity.
|
q-bio/0404036
|
Lutz Brusch
|
Lutz Brusch, Gianaurelio Cuniberti, Martin Bertau
|
Model evaluation for glycolytic oscillations in yeast biotransformations
of xenobiotics
| null |
Biophysical Chemistry 109, 413-426 (2004)
|
10.1016/j.bpc.2003.12.004
| null |
q-bio.MN q-bio.QM
| null |
Anaerobic glycolysis in yeast perturbed by the reduction of xenobiotic
ketones is studied numerically in two models which possess the same topology
but different levels of complexity. By comparing both models' predictions for
concentrations and fluxes as well as steady or oscillatory temporal behavior we
address the question of which phenomena require which minimum level of model
abstraction. While mean concentrations and fluxes are predicted in agreement by
both models we observe different domains of oscillatory behavior in parameter
space. Generic properties of the glycolytic response to ketones are discussed.
|
[
{
"created": "Mon, 26 Apr 2004 15:36:01 GMT",
"version": "v1"
}
] |
2007-05-23
|
[
[
"Brusch",
"Lutz",
""
],
[
"Cuniberti",
"Gianaurelio",
""
],
[
"Bertau",
"Martin",
""
]
] |
Anaerobic glycolysis in yeast perturbed by the reduction of xenobiotic ketones is studied numerically in two models which possess the same topology but different levels of complexity. By comparing both models' predictions for concentrations and fluxes as well as steady or oscillatory temporal behavior we address the question of which phenomena require which minimum level of model abstraction. While mean concentrations and fluxes are predicted in agreement by both models we observe different domains of oscillatory behavior in parameter space. Generic properties of the glycolytic response to ketones are discussed.
|
2005.08012
|
Ian Craig
|
L.E. Olivier, I.K. Craig
|
An epidemiological model for the spread of COVID-19: A South African
case study
|
14 pages, 13 Figures, 1 Table
| null | null | null |
q-bio.PE physics.soc-ph
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
An epidemiological model is developed for the spread of COVID-19 in South
Africa. A variant of the classical compartmental SEIR model, called the SEIQRDP
model, is used. As South Africa is still in the early phases of the global
COVID-19 pandemic with the confirmed infectious cases not having peaked, the
SEIQRDP model is first parameterized on data for Germany, Italy, and South
Korea - countries for which the numbers of infectious cases are well past their
peaks. Good fits are achieved with reasonable predictions of where the number
of COVID-19 confirmed cases, deaths, and recovered cases will end up and by
when. South African data for the period from 23 March to 8 May 2020 is then
used to obtain SEIQRDP model parameters. It is found that the model fits the
initial disease progression well, but that the long-term predictive capability
of the model is rather poor. The South African SEIQRDP model is subsequently
recalculated with the basic reproduction number constrained to reported values.
The resulting model fits the data well, and long-term predictions appear to be
reasonable. The South African SEIQRDP model predicts that the peak in the
number of confirmed infectious individuals will occur at the end of October
2020, and that the total number of deaths will range from about 10,000 to
90,000, with a nominal value of about 22,000. All of these predictions are
heavily dependent on the disease control measures in place, and the adherence
to these measures. These predictions are further shown to be particularly
sensitive to parameters used to determine the basic reproduction number. The
future aim is to use a feedback control approach together with the South
African SEIQRDP model to determine the epidemiological impact of varying
lockdown levels proposed by the South African Government.
|
[
{
"created": "Sat, 16 May 2020 15:11:27 GMT",
"version": "v1"
},
{
"created": "Fri, 22 May 2020 14:47:27 GMT",
"version": "v2"
}
] |
2020-05-25
|
[
[
"Olivier",
"L. E.",
""
],
[
"Craig",
"I. K.",
""
]
] |
An epidemiological model is developed for the spread of COVID-19 in South Africa. A variant of the classical compartmental SEIR model, called the SEIQRDP model, is used. As South Africa is still in the early phases of the global COVID-19 pandemic with the confirmed infectious cases not having peaked, the SEIQRDP model is first parameterized on data for Germany, Italy, and South Korea - countries for which the numbers of infectious cases are well past their peaks. Good fits are achieved with reasonable predictions of where the number of COVID-19 confirmed cases, deaths, and recovered cases will end up and by when. South African data for the period from 23 March to 8 May 2020 is then used to obtain SEIQRDP model parameters. It is found that the model fits the initial disease progression well, but that the long-term predictive capability of the model is rather poor. The South African SEIQRDP model is subsequently recalculated with the basic reproduction number constrained to reported values. The resulting model fits the data well, and long-term predictions appear to be reasonable. The South African SEIQRDP model predicts that the peak in the number of confirmed infectious individuals will occur at the end of October 2020, and that the total number of deaths will range from about 10,000 to 90,000, with a nominal value of about 22,000. All of these predictions are heavily dependent on the disease control measures in place, and the adherence to these measures. These predictions are further shown to be particularly sensitive to parameters used to determine the basic reproduction number. The future aim is to use a feedback control approach together with the South African SEIQRDP model to determine the epidemiological impact of varying lockdown levels proposed by the South African Government.
|
1710.09030
|
Kevin Supakkul
|
Kevin Supakkul
|
Using Positional Heel-marker Data to More Accurately Calculate Stride
Length for Treadmill Walking: A Step Length Approach
| null | null | null | null |
q-bio.QM
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
Treadmill walking is a convenient tool for studying the human gait; however,
a common gait parameter, stride length, can be difficult to calculate directly
because relevant reference points continually move backwards. Although there is
no direct calculation of stride length itself, we can use positional
heel-marker data to directly determine a similar parameter, step length, and we
can sum two step lengths to result in one stride length. This proposed method
of calculation is simple but seems to be unexplored in other literature, so
this paper displays the details of the calculation. Our experimental results
differed from the expected values by 2.2% and had a very low standard
deviation, suggesting that this method is viable for practical use. The ability
to calculate stride length for treadmill walking using heel-marker data may
allow for quick and accurate gait calculations that further contribute to the
versatility of heel data as a tool for gait analysis.
|
[
{
"created": "Wed, 25 Oct 2017 00:42:20 GMT",
"version": "v1"
}
] |
2017-10-26
|
[
[
"Supakkul",
"Kevin",
""
]
] |
Treadmill walking is a convenient tool for studying the human gait; however, a common gait parameter, stride length, can be difficult to calculate directly because relevant reference points continually move backwards. Although there is no direct calculation of stride length itself, we can use positional heel-marker data to directly determine a similar parameter, step length, and we can sum two step lengths to result in one stride length. This proposed method of calculation is simple but seems to be unexplored in other literature, so this paper displays the details of the calculation. Our experimental results differed from the expected values by 2.2% and had a very low standard deviation, suggesting that this method is viable for practical use. The ability to calculate stride length for treadmill walking using heel-marker data may allow for quick and accurate gait calculations that further contribute to the versatility of heel data as a tool for gait analysis.
|
2108.08306
|
Muhammad Dawood
|
Muhammad Dawood, Kim Branson, Nasir M. Rajpoot, Fayyaz ul Amir Afsar
Minhas
|
ALBRT: Cellular Composition Prediction in Routine Histology Images
|
11 pages, 5 figures
| null | null | null |
q-bio.QM eess.IV
|
http://creativecommons.org/licenses/by-nc-sa/4.0/
|
Cellular composition prediction, i.e., predicting the presence and counts of
different types of cells in the tumor microenvironment from a digitized image
of a Hematoxylin and Eosin (H&E) stained tissue section, can be used for various
tasks in computational pathology such as the analysis of cellular topology and
interactions, subtype prediction, survival analysis, etc. In this work, we
propose an image-based cellular composition predictor (ALBRT) which can
accurately predict the presence and counts of different types of cells in a
given image patch. ALBRT, by its contrastive-learning inspired design, learns a
compact and rotation-invariant feature representation that is then used for
cellular composition prediction of different cell types. It offers significant
improvement over existing state-of-the-art approaches for cell classification
and counting. The patch-level feature representation learned by ALBRT is
transferrable for cellular composition analysis over novel datasets and can
also be utilized for downstream prediction tasks in CPath as well. The code and
the inference webserver for the proposed method are available at the URL:
https://github.com/engrodawood/ALBRT.
|
[
{
"created": "Wed, 18 Aug 2021 15:41:06 GMT",
"version": "v1"
},
{
"created": "Thu, 26 Aug 2021 11:01:02 GMT",
"version": "v2"
}
] |
2021-08-27
|
[
[
"Dawood",
"Muhammad",
""
],
[
"Branson",
"Kim",
""
],
[
"Rajpoot",
"Nasir M.",
""
],
[
"Minhas",
"Fayyaz ul Amir Afsar",
""
]
] |
Cellular composition prediction, i.e., predicting the presence and counts of different types of cells in the tumor microenvironment from a digitized image of a Hematoxylin and Eosin (H&E) stained tissue section, can be used for various tasks in computational pathology such as the analysis of cellular topology and interactions, subtype prediction, survival analysis, etc. In this work, we propose an image-based cellular composition predictor (ALBRT) which can accurately predict the presence and counts of different types of cells in a given image patch. ALBRT, by its contrastive-learning inspired design, learns a compact and rotation-invariant feature representation that is then used for cellular composition prediction of different cell types. It offers significant improvement over existing state-of-the-art approaches for cell classification and counting. The patch-level feature representation learned by ALBRT is transferrable for cellular composition analysis over novel datasets and can also be utilized for downstream prediction tasks in CPath as well. The code and the inference webserver for the proposed method are available at the URL: https://github.com/engrodawood/ALBRT.
|
1810.01243
|
Melpomeni Kalofonou
|
Mohammed Khwaja, Melpomeni Kalofonou and Chris Toumazou
|
A Deep Autoencoder System for Differentiation of Cancer Types Based on
DNA Methylation State
| null | null | null | null |
q-bio.QM cs.LG stat.ML
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
A Deep Autoencoder based content retrieval algorithm is proposed for
prediction and differentiation of cancer types based on the presence of
epigenetic patterns of DNA methylation identified in genetic regions known as
CpG islands. The developed deep learning system uses a CpG island state
classification sub-system to complete sets of missing/incomplete island data in
given human cell lines, and is then pipelined with an intricate set of
statistical and signal processing methods to accurately predict the presence of
cancer and further differentiate the type and cell of origin in the event of a
positive result. The proposed system was trained with previously reported data
derived from four case groups of cancer cell lines, achieving overall
Sensitivity of 88.24%, Specificity of 83.33%, Accuracy of 84.75% and Matthews
Correlation Coefficient of 0.687. The ability to predict and differentiate
cancer types using epigenetic events as the identifying patterns was
demonstrated in previously reported data sets from breast, lung, lymphoblastic
leukemia and urological cancer cell lines, allowing the pipelined system to be
robust and adjustable to other cancer cell lines or epigenetic events.
|
[
{
"created": "Tue, 2 Oct 2018 13:44:37 GMT",
"version": "v1"
},
{
"created": "Fri, 5 Oct 2018 14:17:49 GMT",
"version": "v2"
}
] |
2018-10-08
|
[
[
"Khwaja",
"Mohammed",
""
],
[
"Kalofonou",
"Melpomeni",
""
],
[
"Toumazou",
"Chris",
""
]
] |
A Deep Autoencoder based content retrieval algorithm is proposed for prediction and differentiation of cancer types based on the presence of epigenetic patterns of DNA methylation identified in genetic regions known as CpG islands. The developed deep learning system uses a CpG island state classification sub-system to complete sets of missing/incomplete island data in given human cell lines, and is then pipelined with an intricate set of statistical and signal processing methods to accurately predict the presence of cancer and further differentiate the type and cell of origin in the event of a positive result. The proposed system was trained with previously reported data derived from four case groups of cancer cell lines, achieving overall Sensitivity of 88.24%, Specificity of 83.33%, Accuracy of 84.75% and Matthews Correlation Coefficient of 0.687. The ability to predict and differentiate cancer types using epigenetic events as the identifying patterns was demonstrated in previously reported data sets from breast, lung, lymphoblastic leukemia and urological cancer cell lines, allowing the pipelined system to be robust and adjustable to other cancer cell lines or epigenetic events.
|
1908.07837
|
Md Abdul Kuddus Mr
|
Md Abdul Kuddus, Michael T. Meehan, Adeshina I. Adekunle, Lisa J.
White, Emma S. McBryde
|
Mathematical analysis of a two-strain disease model with amplification
|
22 pages, 11 figures
| null | null | null |
q-bio.PE
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
We investigate a two-strain disease model with amplification to simulate the
prevalence of drug-susceptible (s) and drug-resistant (m) disease strains. We
model the emergence of drug resistance as a consequence of inadequate
treatment, i.e. amplification. We perform a dynamical analysis of the resulting
system and find that the model contains three equilibrium points: a
disease-free equilibrium; a mono-existent disease-endemic equilibrium with
respect to the drug-resistant strain; and a co-existent disease-endemic
equilibrium where both the drug-susceptible and drug-resistant strains persist.
We found two basic reproduction numbers: one associated with the
drug-susceptible strain, $R_{0s}$; the other with the drug-resistant strain,
$R_{0m}$. We showed that at least one of the strains can spread in a population
if max($R_{0s}$, $R_{0m}$) > 1 (epidemic). Furthermore, we showed that if
$R_{0m}$ > max($R_{0s}$, 1), the drug-susceptible strain dies out but the
drug-resistant strain persists in the population; however, if $R_{0s}$ >
max($R_{0m}$, 1), then both the drug-susceptible and drug-resistant strains
persist in the population. We conducted a local stability analysis of the
system equilibrium points using the Routh-Hurwitz conditions and a global
stability analysis using appropriate Lyapunov functions. Sensitivity analysis
was used to identify the most important model parameters through the partial
rank correlation coefficient (PRCC) method. We found that the contact rate of
both strains had the largest influence on prevalence. We also investigated the
impact of amplification and treatment rates of both strains on the equilibrium
prevalence of infection; results suggest that poor-quality treatment makes
coexistence more likely but increases the relative abundance of resistant
infections.
|
[
{
"created": "Mon, 19 Aug 2019 23:02:47 GMT",
"version": "v1"
}
] |
2019-08-22
|
[
[
"Kuddus",
"Md Abdul",
""
],
[
"Meehan",
"Michael T.",
""
],
[
"Adekunle",
"Adeshina I.",
""
],
[
"White",
"Lisa J.",
""
],
[
"McBryde",
"Emma S.",
""
]
] |
We investigate a two-strain disease model with amplification to simulate the prevalence of drug-susceptible (s) and drug-resistant (m) disease strains. We model the emergence of drug resistance as a consequence of inadequate treatment, i.e. amplification. We perform a dynamical analysis of the resulting system and find that the model contains three equilibrium points: a disease-free equilibrium; a mono-existent disease-endemic equilibrium with respect to the drug-resistant strain; and a co-existent disease-endemic equilibrium where both the drug-susceptible and drug-resistant strains persist. We found two basic reproduction numbers: one associated with the drug-susceptible strain, $R_{0s}$; the other with the drug-resistant strain, $R_{0m}$. We showed that at least one of the strains can spread in a population if max($R_{0s}$, $R_{0m}$) > 1 (epidemic). Furthermore, we showed that if $R_{0m}$ > max($R_{0s}$, 1), the drug-susceptible strain dies out but the drug-resistant strain persists in the population; however, if $R_{0s}$ > max($R_{0m}$, 1), then both the drug-susceptible and drug-resistant strains persist in the population. We conducted a local stability analysis of the system equilibrium points using the Routh-Hurwitz conditions and a global stability analysis using appropriate Lyapunov functions. Sensitivity analysis was used to identify the most important model parameters through the partial rank correlation coefficient (PRCC) method. We found that the contact rate of both strains had the largest influence on prevalence. We also investigated the impact of amplification and treatment rates of both strains on the equilibrium prevalence of infection; results suggest that poor-quality treatment makes coexistence more likely but increases the relative abundance of resistant infections.
|
2208.12032
|
Corey Maley
|
Corey J. Maley
|
How (and Why) to Think that the Brain is Literally a Computer
| null |
Frontiers in Computer Science, Section: Theoretical Computer
Science, 2022
|
10.3389/fcomp.2022.970396
| null |
q-bio.NC cs.AI
|
http://creativecommons.org/licenses/by/4.0/
|
The relationship between brains and computers is often taken to be merely
metaphorical. However, genuine computational systems can be implemented in
virtually any media; thus, one can take seriously the view that brains
literally compute. But without empirical criteria for what makes a physical
system genuinely a computational one, computation remains a matter of
perspective, especially for natural systems (e.g., brains) that were not
explicitly designed and engineered to be computers. Considerations from real
examples of physical computers (both analog and digital, contemporary and
historical) make clear what those empirical criteria must be. Finally, applying
those criteria to the brain shows how we can view the brain as a computer
(probably an analog one at that), which, in turn, illuminates how that claim is
both informative and falsifiable.
|
[
{
"created": "Wed, 24 Aug 2022 15:38:10 GMT",
"version": "v1"
}
] |
2022-08-26
|
[
[
"Maley",
"Corey J.",
""
]
] |
The relationship between brains and computers is often taken to be merely metaphorical. However, genuine computational systems can be implemented in virtually any media; thus, one can take seriously the view that brains literally compute. But without empirical criteria for what makes a physical system genuinely a computational one, computation remains a matter of perspective, especially for natural systems (e.g., brains) that were not explicitly designed and engineered to be computers. Considerations from real examples of physical computers (both analog and digital, contemporary and historical) make clear what those empirical criteria must be. Finally, applying those criteria to the brain shows how we can view the brain as a computer (probably an analog one at that), which, in turn, illuminates how that claim is both informative and falsifiable.
|
1609.04973
|
Tae-Rin Lee
|
Tae-Rin Lee, Sung Sic Yoo, Jiho Yang
|
Generalized Plasma Skimming Model for Cells and Drug Carriers in the
Microvasculature
| null | null |
10.1007/s10237-016-0832-z
| null |
q-bio.TO physics.flu-dyn
|
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
|
In microvascular transport, where both blood and drug carriers are involved,
plasma skimming plays a key role in changing hematocrit level and drug carrier
concentration in capillary beds after continuous vessel bifurcation in the
microvasculature. While there have been numerous studies on modeling the plasma
skimming of blood, previous works have lacked consideration of its interaction
with drug carriers. In this paper, a generalized plasma skimming model is
suggested to predict the redistributions of both the cells and drug carriers at
each bifurcation. In order to examine its applicability, this new model was
applied on a single bifurcation system to predict the redistribution of red
blood cells and drug carriers. Furthermore, this model was tested at
microvascular network level under different plasma skimming conditions for
predicting the concentration of drug carriers. Based on these results, the
applicability of this generalized plasma skimming model is fully discussed and
future works along with the model's limitations are summarized.
|
[
{
"created": "Fri, 16 Sep 2016 09:46:20 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Sep 2016 05:38:14 GMT",
"version": "v2"
},
{
"created": "Tue, 20 Sep 2016 07:37:01 GMT",
"version": "v3"
}
] |
2016-09-23
|
[
[
"Lee",
"Tae-Rin",
""
],
[
"Yoo",
"Sung Sic",
""
],
[
"Yang",
"Jiho",
""
]
] |
In microvascular transport, where both blood and drug carriers are involved, plasma skimming plays a key role in changing hematocrit level and drug carrier concentration in capillary beds after continuous vessel bifurcation in the microvasculature. While there have been numerous studies on modeling the plasma skimming of blood, previous works have lacked consideration of its interaction with drug carriers. In this paper, a generalized plasma skimming model is suggested to predict the redistributions of both the cells and drug carriers at each bifurcation. In order to examine its applicability, this new model was applied on a single bifurcation system to predict the redistribution of red blood cells and drug carriers. Furthermore, this model was tested at microvascular network level under different plasma skimming conditions for predicting the concentration of drug carriers. Based on these results, the applicability of this generalized plasma skimming model is fully discussed and future works along with the model's limitations are summarized.
|