| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2403.18850 | Jonito Aerts Arguelles | Jonito Aerts Argu\"elles | Are Colors Quanta of Light for Human Vision? A Quantum Cognition Study
of Visual Perception | 28 pages, 4 figures. arXiv admin note: text overlap with
arXiv:2208.03726 | null | null | null | q-bio.NC cs.AI quant-ph | http://creativecommons.org/licenses/by/4.0/ | We study the phenomenon of categorical perception within the quantum
measurement process. The mechanism underlying this phenomenon consists in
dilating stimuli being perceived to belong to different categories and
contracting stimuli being perceived to belong to the same category. We show
that, due to the naturally different way in determining the distance between
pure states compared to the distance between density states, the phenomenon of
categorical perception is rooted in the structure of the quantum measurement
process itself. We apply our findings to the situation of visual perception of
colors and argue that it is possible to consider colors as light quanta for
human visual perception in a similar way as photons are light quanta for
physical measurements of light frequencies. In our approach we see perception
as a complex encounter between the existing physical reality, the stimuli, and
the reality expected by the perceiver, resulting in the experience of the
percepts. We investigate what that means for the situation of two colors, which
we call Light and Dark, given our findings on categorical perception within the
quantum measurement process.
| [
{
"created": "Thu, 14 Mar 2024 21:10:07 GMT",
"version": "v1"
}
] | 2024-03-29 | [
[
"Arguëlles",
"Jonito Aerts",
""
]
] | We study the phenomenon of categorical perception within the quantum measurement process. The mechanism underlying this phenomenon consists in dilating stimuli being perceived to belong to different categories and contracting stimuli being perceived to belong to the same category. We show that, due to the naturally different way in determining the distance between pure states compared to the distance between density states, the phenomenon of categorical perception is rooted in the structure of the quantum measurement process itself. We apply our findings to the situation of visual perception of colors and argue that it is possible to consider colors as light quanta for human visual perception in a similar way as photons are light quanta for physical measurements of light frequencies. In our approach we see perception as a complex encounter between the existing physical reality, the stimuli, and the reality expected by the perceiver, resulting in the experience of the percepts. We investigate what that means for the situation of two colors, which we call Light and Dark, given our findings on categorical perception within the quantum measurement process. |
1710.08263 | Max Falkenberg McGillivray | Max Falkenberg McGillivray, William Cheng, Nicholas S. Peters, Kim
Christensen | Machine learning methods for locating re-entrant drivers from
electrograms in a model of atrial fibrillation | null | R. Soc. open sci.5: 172434 2018 | 10.1098/rsos.172434 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mapping resolution has recently been identified as a key limitation in
successfully locating the drivers of atrial fibrillation. Using a simple
cellular automata model of atrial fibrillation, we demonstrate a method by
which re-entrant drivers can be located quickly and accurately using a
collection of indirect electrogram measurements. The method proposed employs
simple, out of the box machine learning algorithms to correlate characteristic
electrogram gradients with the displacement of an electrogram recording from a
re-entrant driver. Such a method is less sensitive to local fluctuations in
electrical activity. As a result, the method successfully locates 95.4% of
drivers in tissues containing a single driver, and 94.8% (92.5%) for the first
(second) driver in tissues containing two drivers of atrial fibrillation.
Additionally, we demonstrate how the technique can be applied to tissues with
an arbitrary number of drivers. Extending the technique for use in clinical
practice could alleviate the limitations in current ablation techniques that
arise from limited mapping resolution.
| [
{
"created": "Mon, 23 Oct 2017 13:37:21 GMT",
"version": "v1"
}
] | 2020-01-27 | [
[
"McGillivray",
"Max Falkenberg",
""
],
[
"Cheng",
"William",
""
],
[
"Peters",
"Nicholas S.",
""
],
[
"Christensen",
"Kim",
""
]
] | Mapping resolution has recently been identified as a key limitation in successfully locating the drivers of atrial fibrillation. Using a simple cellular automata model of atrial fibrillation, we demonstrate a method by which re-entrant drivers can be located quickly and accurately using a collection of indirect electrogram measurements. The method proposed employs simple, out of the box machine learning algorithms to correlate characteristic electrogram gradients with the displacement of an electrogram recording from a re-entrant driver. Such a method is less sensitive to local fluctuations in electrical activity. As a result, the method successfully locates 95.4% of drivers in tissues containing a single driver, and 94.8% (92.5%) for the first (second) driver in tissues containing two drivers of atrial fibrillation. Additionally, we demonstrate how the technique can be applied to tissues with an arbitrary number of drivers. Extending the technique for use in clinical practice could alleviate the limitations in current ablation techniques that arise from limited mapping resolution. |
1907.08801 | Mauricio Barahona | Amadeus Maes, Mauricio Barahona, Claudia Clopath | Learning spatiotemporal signals using a recurrent spiking network that
discretizes time | To appear in Plos Computational Biology | null | 10.1371/journal.pcbi.1007606 | null | q-bio.NC cs.NE nlin.AO physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning to produce spatiotemporal sequences is a common task that the brain
has to solve. The same neural substrate may be used by the brain to produce
different sequential behaviours. The way the brain learns and encodes such
tasks remains unknown as current computational models do not typically use
realistic biologically-plausible learning. Here, we propose a model where a
spiking recurrent network of excitatory and inhibitory biophysical neurons
drives a read-out layer: the dynamics of the driver recurrent network is
trained to encode time which is then mapped through the read-out neurons to
encode another dimension, such as space or a phase. Different spatiotemporal
patterns can be learned and encoded through the synaptic weights to the
read-out neurons that follow common Hebbian learning rules. We demonstrate that
the model is able to learn spatiotemporal dynamics on time scales that are
behaviourally relevant and we show that the learned sequences are robustly
replayed during a regime of spontaneous activity.
| [
{
"created": "Sat, 20 Jul 2019 11:54:20 GMT",
"version": "v1"
},
{
"created": "Thu, 19 Dec 2019 05:40:07 GMT",
"version": "v2"
}
] | 2020-07-01 | [
[
"Maes",
"Amadeus",
""
],
[
"Barahona",
"Mauricio",
""
],
[
"Clopath",
"Claudia",
""
]
] | Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neural substrate may be used by the brain to produce different sequential behaviours. The way the brain learns and encodes such tasks remains unknown as current computational models do not typically use realistic biologically-plausible learning. Here, we propose a model where a spiking recurrent network of excitatory and inhibitory biophysical neurons drives a read-out layer: the dynamics of the driver recurrent network is trained to encode time which is then mapped through the read-out neurons to encode another dimension, such as space or a phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights to the read-out neurons that follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on time scales that are behaviourally relevant and we show that the learned sequences are robustly replayed during a regime of spontaneous activity. |
1409.6939 | David Tourigny | David S. Tourigny | Two-component feedback loops and deformed mechanics | To appear in Physics Letters A | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is shown that a general two-component feedback loop can be viewed as a
deformed Hamiltonian system. Some of the implications of using ideas from
theoretical physics to study biological processes are discussed.
| [
{
"created": "Wed, 24 Sep 2014 13:12:49 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Nov 2014 18:13:15 GMT",
"version": "v2"
}
] | 2014-12-01 | [
[
"Tourigny",
"David S.",
""
]
] | It is shown that a general two-component feedback loop can be viewed as a deformed Hamiltonian system. Some of the implications of using ideas from theoretical physics to study biological processes are discussed. |
2203.03857 | D. Vijayan Senthilkumar | Snehasish Roy Chowdhury, Ramesh Arumugam, Wei Zou, V. K. Chandrasekar
and D. V. Senthilkumar | Role of limiting dispersal on metacommunity stability and persistence | Accepted for Publication in Physical Review E | null | 10.1103/PhysRevE.105.034309 | null | q-bio.PE nlin.AO | http://creativecommons.org/licenses/by/4.0/ | The role of dispersal on the stability and synchrony of a metacommunity is a
topic of considerable interest in theoretical ecology. Dispersal is known to
promote both synchrony, which enhances the likelihood of extinction, and
spatial heterogeneity, which favors the persistence of the population. Several
efforts have been made to understand the effect of diverse variants of
dispersal in spatially distributed ecological communities. Although
environmental change strongly affects dispersal, the effects of controlled
dispersal on metacommunity stability and persistence remain unknown.
We study the influence of limiting immigration using a two-patch
prey-predator metacommunity at both local and spatial scales. We find that the
spread of the inhomogeneous stable steady states (asynchronous states)
decreases monotonically upon limiting the predator dispersal. Nevertheless, at
the local scale, the spread of the inhomogeneous steady states increases up to
a critical value of the limiting factor, favoring the metacommunity
persistence, and then starts decreasing for further decrease in the limiting
factor with varying local interaction. Interestingly, limiting the prey
dispersal promotes inhomogeneous steady states in a large region of the
parameter space, thereby increasing the metacommunity persistence both at
spatial and local scales. Further, we show similar qualitative dynamics in an
entire class of complex networks consisting of a large number of patches. We
also deduce various bifurcation curves and stability conditions for the
inhomogeneous steady states, which we find to agree well with the simulation
results. Thus, our findings on the effect of the limiting dispersal can help to
develop conservation measures for ecological communities.
| [
{
"created": "Tue, 8 Mar 2022 05:21:20 GMT",
"version": "v1"
}
] | 2022-04-06 | [
[
"Chowdhury",
"Snehasish Roy",
""
],
[
"Arumugam",
"Ramesh",
""
],
[
"Zou",
"Wei",
""
],
[
"Chandrasekar",
"V. K.",
""
],
[
"Senthilkumar",
"D. V.",
""
]
] | The role of dispersal on the stability and synchrony of a metacommunity is a topic of considerable interest in theoretical ecology. Dispersal is known to promote both synchrony, which enhances the likelihood of extinction, and spatial heterogeneity, which favors the persistence of the population. Several efforts have been made to understand the effect of diverse variants of dispersal in spatially distributed ecological communities. Although environmental change strongly affects dispersal, the effects of controlled dispersal on metacommunity stability and persistence remain unknown. We study the influence of limiting immigration using a two-patch prey-predator metacommunity at both local and spatial scales. We find that the spread of the inhomogeneous stable steady states (asynchronous states) decreases monotonically upon limiting the predator dispersal. Nevertheless, at the local scale, the spread of the inhomogeneous steady states increases up to a critical value of the limiting factor, favoring the metacommunity persistence, and then starts decreasing for further decrease in the limiting factor with varying local interaction. Interestingly, limiting the prey dispersal promotes inhomogeneous steady states in a large region of the parameter space, thereby increasing the metacommunity persistence both at spatial and local scales. Further, we show similar qualitative dynamics in an entire class of complex networks consisting of a large number of patches. We also deduce various bifurcation curves and stability conditions for the inhomogeneous steady states, which we find to agree well with the simulation results. Thus, our findings on the effect of the limiting dispersal can help to develop conservation measures for ecological communities. |
2301.06645 | Yixiang Wu | Shanshan Chen, Yixiang Wu | Analysis of a Reaction-Diffusion Susceptible-Infected-Susceptible
Epidemic Patch Model Incorporating Movement Inside and Among Patches | null | null | null | null | q-bio.PE math.AP math.DS | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose and analyze a reaction-diffusion
susceptible-infected-susceptible (SIS) epidemic patch model. The individuals
are assumed to reside in different patches, where they are able to move inside
and among the patches. The movement of individuals inside the patches is
described by diffusion terms, and the movement pattern among patches is modeled
by an essentially nonnegative matrix. We define a basic reproduction number
$\mathcal{R}_0$ for the model and show that it is a threshold value for disease
extinction versus persistence. The monotone dependence of $\mathcal{R}_0$ on
the movement rates of infected individuals is proved when the dispersal pattern
is symmetric or non-symmetric. Numerical simulations are performed to
illustrate the impact of the movement of individuals inside and among patches
on the transmission of the disease.
| [
{
"created": "Tue, 17 Jan 2023 00:28:06 GMT",
"version": "v1"
}
] | 2023-01-18 | [
[
"Chen",
"Shanshan",
""
],
[
"Wu",
"Yixiang",
""
]
] | In this paper, we propose and analyze a reaction-diffusion susceptible-infected-susceptible (SIS) epidemic patch model. The individuals are assumed to reside in different patches, where they are able to move inside and among the patches. The movement of individuals inside the patches is described by diffusion terms, and the movement pattern among patches is modeled by an essentially nonnegative matrix. We define a basic reproduction number $\mathcal{R}_0$ for the model and show that it is a threshold value for disease extinction versus persistence. The monotone dependence of $\mathcal{R}_0$ on the movement rates of infected individuals is proved when the dispersal pattern is symmetric or non-symmetric. Numerical simulations are performed to illustrate the impact of the movement of individuals inside and among patches on the transmission of the disease. |
1404.5046 | Susmita Roy | Susmita Roy and Biman Bagchi | A Comparative Study of Protein Unfolding in Aqueous Urea and DMSO
Solutions: Surface Polarity, Solvent Specificity and Sequence of Secondary
Structure Melting | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Elucidation of possible pathways between folded (native) and unfolded states
of a protein is a challenging task, as the intermediates are often hard to
detect. Here we alter the solvent environment in a controlled manner by
choosing two different co-solvents of water, urea and dimethyl sulphoxide
(DMSO), and study unfolding of four different proteins to understand the
respective sequence of melting by computer simulation methods. We indeed find
interesting differences in the sequence of melting of alpha-helices and
beta-sheets in these two solvents. For example, in 8M urea solution, the
beta-sheet parts of a protein are found to unfold preferentially, followed by
the unfolding of alpha-helices. In contrast, 8M DMSO solution unfolds
alpha-helices first, followed by the separation of beta-sheets for the majority
of proteins. The sequence of unfolding events in four different alpha/beta
proteins and also in chicken villin head piece (HP-36), in both urea and DMSO
solutions, demonstrates that the
unfolding pathways are determined jointly by relative exposure of polar and
non-polar residues of a protein and the mode of molecular action of a solvent
on that protein.
| [
{
"created": "Sun, 20 Apr 2014 15:22:22 GMT",
"version": "v1"
}
] | 2014-04-22 | [
[
"Roy",
"Susmita",
""
],
[
"Bagchi",
"Biman",
""
]
] | Elucidation of possible pathways between folded (native) and unfolded states of a protein is a challenging task, as the intermediates are often hard to detect. Here we alter the solvent environment in a controlled manner by choosing two different co-solvents of water, urea and dimethyl sulphoxide (DMSO), and study unfolding of four different proteins to understand the respective sequence of melting by computer simulation methods. We indeed find interesting differences in the sequence of melting of alpha-helices and beta-sheets in these two solvents. For example, in 8M urea solution, the beta-sheet parts of a protein are found to unfold preferentially, followed by the unfolding of alpha-helices. In contrast, 8M DMSO solution unfolds alpha-helices first, followed by the separation of beta-sheets for the majority of proteins. The sequence of unfolding events in four different alpha/beta proteins and also in chicken villin head piece (HP-36), in both urea and DMSO solutions, demonstrates that the unfolding pathways are determined jointly by relative exposure of polar and non-polar residues of a protein and the mode of molecular action of a solvent on that protein. |
1709.05022 | John Medaglia | John D. Medaglia, Denise Y. Harvey, Nicole White, Danielle S. Bassett,
Roy H. Hamilton | Network Controllability in the IFG Relates to Controlled Language
Variability and Susceptibility to TMS | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In language production, humans are confronted with considerable word
selection demands. Often, we must select a word from among similar, acceptable,
and competing alternative words in order to construct a sentence that conveys
an intended meaning. In recent years, the left inferior frontal gyrus (LIFG)
has been identified as critical to this ability. Despite a recent emphasis on
network approaches to understanding language, how the LIFG interacts with the
brain's complex networks to facilitate controlled language performance remains
unknown. Here, we take a novel approach to understand word selection as a
network control process in the brain. Using an anatomical brain network derived
from high-resolution diffusion spectrum imaging (DSI), we computed network
controllability underlying the site of transcranial magnetic stimulation in the
LIFG between administrations of two word selection tasks. We find that a
statistic that quantifies the LIFG's theoretically predicted control of
difficult-to-reach states explains vulnerability to TMS in language tasks that
vary in response (cognitive control) demands: open-response (word generation)
vs. closed-response (number naming) tasks. Moreover, we find that a statistic
that quantifies the LIFG's theoretically predicted control of communication
across modules in the human connectome explains TMS-induced changes in
open-response language task performance only. These findings establish a link
between network controllability, cognitive function, and TMS effects.
| [
{
"created": "Fri, 15 Sep 2017 00:59:08 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Jan 2018 20:04:46 GMT",
"version": "v2"
},
{
"created": "Tue, 30 Jan 2018 20:53:30 GMT",
"version": "v3"
}
] | 2018-02-01 | [
[
"Medaglia",
"John D.",
""
],
[
"Harvey",
"Denise Y.",
""
],
[
"White",
"Nicole",
""
],
[
"Bassett",
"Danielle S.",
""
],
[
"Hamilton",
"Roy H.",
""
]
] | In language production, humans are confronted with considerable word selection demands. Often, we must select a word from among similar, acceptable, and competing alternative words in order to construct a sentence that conveys an intended meaning. In recent years, the left inferior frontal gyrus (LIFG) has been identified as critical to this ability. Despite a recent emphasis on network approaches to understanding language, how the LIFG interacts with the brain's complex networks to facilitate controlled language performance remains unknown. Here, we take a novel approach to understand word selection as a network control process in the brain. Using an anatomical brain network derived from high-resolution diffusion spectrum imaging (DSI), we computed network controllability underlying the site of transcranial magnetic stimulation in the LIFG between administrations of two word selection tasks. We find that a statistic that quantifies the LIFG's theoretically predicted control of difficult-to-reach states explains vulnerability to TMS in language tasks that vary in response (cognitive control) demands: open-response (word generation) vs. closed-response (number naming) tasks. Moreover, we find that a statistic that quantifies the LIFG's theoretically predicted control of communication across modules in the human connectome explains TMS-induced changes in open-response language task performance only. These findings establish a link between network controllability, cognitive function, and TMS effects. |
1707.05193 | Birgitta Dresp-Langley | Birgitta Dresp-Langley | A temporal access code to consciousness? | 39 pages, 4 figures | Neural Plasticity, 2009, Article ID 482696 | 10.1155/2009/482696 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While questions of a functional localization of consciousness in the brain
have been the subject of myriad studies, the idea of a temporal access code as
a specific brain mechanism for consciousness has remained a neglected
possibility. Dresp-Langley and Durup (2009, 2012) proposed a theoretical
approach in terms of a temporal access mechanism for consciousness based on its
two universally recognized properties. Consciousness is limited in processing
capacity and described by a unique processing stream across a single dimension,
which is time. The time ordering function of conscious states is highlighted
and neurobiological theories of the temporal brain activities likely to
underlie such function are discussed, and the properties of the code model are
then introduced. Spatial information is integrated into provisory topological
maps at non-conscious levels through adaptive resonant matching, but does not
form part of the temporal access code as such. The latter, de-correlated from
the spatial code, operates without any need for firing synchrony on the sole
basis of temporal coincidence probabilities in dedicated resonant circuits
through progressively non-arbitrary selection of specific temporal activity
patterns in the continuously developing brain.
| [
{
"created": "Wed, 12 Jul 2017 16:01:55 GMT",
"version": "v1"
}
] | 2022-03-15 | [
[
"Dresp-Langley",
"Birgitta",
""
]
] | While questions of a functional localization of consciousness in the brain have been the subject of myriad studies, the idea of a temporal access code as a specific brain mechanism for consciousness has remained a neglected possibility. Dresp-Langley and Durup (2009, 2012) proposed a theoretical approach in terms of a temporal access mechanism for consciousness based on its two universally recognized properties. Consciousness is limited in processing capacity and described by a unique processing stream across a single dimension, which is time. The time ordering function of conscious states is highlighted and neurobiological theories of the temporal brain activities likely to underlie such function are discussed, and the properties of the code model are then introduced. Spatial information is integrated into provisory topological maps at non-conscious levels through adaptive resonant matching, but does not form part of the temporal access code as such. The latter, de-correlated from the spatial code, operates without any need for firing synchrony on the sole basis of temporal coincidence probabilities in dedicated resonant circuits through progressively non-arbitrary selection of specific temporal activity patterns in the continuously developing brain. |
2209.06479 | David Saakian | R. Poghosyan, R. Zadourian, David B. Saakian | The non-perturbative phenomenon for the Crow Kimura model with
stochastic resetting | 5 pages, 7 figures, submitted to the Journal of the Physical Society
of Japan | null | null | null | q-bio.PE cond-mat.stat-mech math.PR | http://creativecommons.org/licenses/by/4.0/ | We consider the Crow Kimura model, modified via stochastic resetting. There
are two principally different situations. First, when resetting makes the
system jump to the low-fitness state, everything is rather simple: the
solution is a slight modification of that of the standard Crow-Kimura model.
Second, when there is resetting to the high-fitness state, there is a
phenomenon that is non-perturbative in the resetting probability: even a
minimal resetting probability drastically changes the solution. We find two
subphases in this phase.
| [
{
"created": "Wed, 14 Sep 2022 08:20:21 GMT",
"version": "v1"
}
] | 2022-09-15 | [
[
"Poghosyan",
"R.",
""
],
[
"Zadourian",
"R.",
""
],
[
"Saakian",
"David B.",
""
]
] | We consider the Crow-Kimura model, modified via stochastic resetting. There are two principally different situations. First, when resetting makes the system jump to the low-fitness state, everything is rather simple: the solution is a slight modification of that of the standard Crow-Kimura model. Second, when there is resetting to the high-fitness state, there is a phenomenon that is non-perturbative in the resetting probability: even a minimal resetting probability drastically changes the solution. We find two subphases in this phase. |
1207.6422 | Ling Xue Ms | Ling Xue, Caterina Scoglio | The network level reproduction number for infectious diseases with both
vertical and horizontal transmission | null | Mathematical Biosciences 2013 | 10.1016/j.mbs.2013.02.004 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A wide range of infectious diseases are both vertically and horizontally
transmitted. Such diseases are spatially transmitted via multiple species in
heterogeneous environments, typically described by complex meta-population
models. The reproduction number is a critical metric predicting whether the
disease can invade the meta-population system. This paper presents the
reproduction number for a generic disease vertically and horizontally
transmitted among multiple species in heterogeneous networks, where nodes are
locations, and links reflect outgoing or incoming movement flows. The
metapopulation model for vertically and horizontally transmitted diseases is
gradually formulated from two-species, two-node network models. We derive an
explicit expression of the reproduction number, which is the spectral radius of
a matrix reduced in size with respect to the original next generation matrix.
The reproduction number is shown to be a function of vertical and horizontal
transmission parameters, and the lower bound is the reproduction number for
horizontal transmission. As an application, the reproduction number and its
bounds are derived for the Rift Valley fever zoonosis, in which livestock,
mosquitoes, and humans are the involved species. By computing the reproduction
number for different scenarios through numerical simulations, we find that the
reproduction number is affected by livestock movement rates only when
parameters are heterogeneous across nodes. To summarize, our study contributes
the reproduction number for vertically and horizontally transmitted diseases in
heterogeneous networks. This explicit expression is easily adaptable to
specific infectious diseases, affording insights into disease evolution.
| [
{
"created": "Thu, 26 Jul 2012 22:00:43 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Feb 2013 16:27:02 GMT",
"version": "v2"
}
] | 2013-03-05 | [
[
"Xue",
"Ling",
""
],
[
"Scoglio",
"Caterina",
""
]
] | A wide range of infectious diseases are both vertically and horizontally transmitted. Such diseases are spatially transmitted via multiple species in heterogeneous environments, typically described by complex meta-population models. The reproduction number is a critical metric predicting whether the disease can invade the meta-population system. This paper presents the reproduction number for a generic disease vertically and horizontally transmitted among multiple species in heterogeneous networks, where nodes are locations, and links reflect outgoing or incoming movement flows. The metapopulation model for vertically and horizontally transmitted diseases is gradually formulated from two-species, two-node network models. We derive an explicit expression of the reproduction number, which is the spectral radius of a matrix reduced in size with respect to the original next generation matrix. The reproduction number is shown to be a function of vertical and horizontal transmission parameters, and the lower bound is the reproduction number for horizontal transmission. As an application, the reproduction number and its bounds are derived for the Rift Valley fever zoonosis, in which livestock, mosquitoes, and humans are the involved species. By computing the reproduction number for different scenarios through numerical simulations, we find that the reproduction number is affected by livestock movement rates only when parameters are heterogeneous across nodes. To summarize, our study contributes the reproduction number for vertically and horizontally transmitted diseases in heterogeneous networks. This explicit expression is easily adaptable to specific infectious diseases, affording insights into disease evolution. |
1810.01682 | Paula Villa Mart\'in Dr. | Paula Villa Mart\'in, Jorge Hidalgo, Rafael Rubio de Casas, and Miguel
A. Mu\~noz | Eco-evolutionary Model of Rapid Phenotypic Diversification in
Species-Rich Communities | null | PLoS computational biology, 2016, vol. 12, no 10, p. e1005139 | 10.1371/journal.pcbi.1005139 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Evolutionary and ecosystem dynamics are often treated as different processes
--operating at separate timescales-- even if evidence reveals that rapid
evolutionary changes can feed back into ecological interactions. A recent
long-term field experiment has explicitly shown that communities of competing
plant species can experience very fast phenotypic diversification, and that
this gives rise to enhanced complementarity in resource exploitation and to
enlarged ecosystem-level productivity. Here, we build on progress made in
recent years in the integration of eco-evolutionary dynamics, and present a
computational approach aimed at describing these empirical findings in detail.
In particular we model a community of organisms of different but similar
species evolving in time through mechanisms of birth, competition, sexual
reproduction, descent with modification, and death. Based on simple rules, this
model provides a rationalization for the emergence of rapid phenotypic
diversification in species-rich communities. Furthermore, it also leads to
non-trivial predictions about long-term phenotypic change and ecological
interactions. Our results illustrate that the presence of highly specialized,
non-competing species leads to very stable communities and reveals that
phenotypically equivalent species occupying the same niche may emerge and
coexist for very long times. Thus, the framework presented here provides a
simple approach --complementing existing theories, but specifically devised to
account for the specificities of the recent empirical findings for plant
communities-- to explain the collective emergence of diversification at a
community level, and paves the way to further scrutinize the intimate
entanglement of ecological and evolutionary processes, especially in
species-rich communities.
| [
{
"created": "Wed, 3 Oct 2018 10:49:40 GMT",
"version": "v1"
}
] | 2018-10-04 | [
[
"Martín",
"Paula Villa",
""
],
[
"Hidalgo",
"Jorge",
""
],
[
"de Casas",
"Rafael Rubio",
""
],
[
"Muñoz",
"Miguel A.",
""
]
] | Evolutionary and ecosystem dynamics are often treated as different processes --operating at separate timescales-- even if evidence reveals that rapid evolutionary changes can feed back into ecological interactions. A recent long-term field experiment has explicitly shown that communities of competing plant species can experience very fast phenotypic diversification, and that this gives rise to enhanced complementarity in resource exploitation and to enlarged ecosystem-level productivity. Here, we build on progress made in recent years in the integration of eco-evolutionary dynamics, and present a computational approach aimed at describing these empirical findings in detail. In particular we model a community of organisms of different but similar species evolving in time through mechanisms of birth, competition, sexual reproduction, descent with modification, and death. Based on simple rules, this model provides a rationalization for the emergence of rapid phenotypic diversification in species-rich communities. Furthermore, it also leads to non-trivial predictions about long-term phenotypic change and ecological interactions. Our results illustrate that the presence of highly specialized, non-competing species leads to very stable communities and reveals that phenotypically equivalent species occupying the same niche may emerge and coexist for very long times. Thus, the framework presented here provides a simple approach --complementing existing theories, but specifically devised to account for the specificities of the recent empirical findings for plant communities-- to explain the collective emergence of diversification at a community level, and paves the way to further scrutinize the intimate entanglement of ecological and evolutionary processes, especially in species-rich communities. |
2202.06583 | Fiona Macfarlane | Fiona R Macfarlane, Xinran Ruan, Tommaso Lorenzi | Individual-based and continuum models of phenotypically heterogeneous
growing cell populations | null | null | null | null | q-bio.PE math.AP | http://creativecommons.org/licenses/by/4.0/ | Existing studies comparing individual-based models of growing cell
populations and their continuum counterparts have mainly focused on homogeneous
populations, in which all cells have the same phenotypic characteristics.
However, significant intercellular phenotypic variability is commonly observed
in cellular systems. Therefore, we develop here an individual-based model for
the growth of phenotypically heterogeneous cell populations. In this model, the
phenotypic state of each cell is described by a structuring variable that
captures intercellular variability in cell proliferation and migration rates.
The model tracks the spatial evolutionary dynamics of single cells, which
undergo pressure-dependent proliferation, heritable phenotypic changes and
directional movement in response to pressure differentials. We formally show
that the continuum limit of this model comprises a non-local partial
differential equation for the cell population density, which generalises
earlier models of growing cell populations. Results of the individual-based
model illustrate how proliferation-migration tradeoffs shaping the evolution of
single cells can lead to the formation of travelling waves at the population
level where highly-mobile cells locally dominate at the invasive front, while
more-proliferative cells are found at the rear. We demonstrate that there is an
excellent quantitative agreement between these results and the results of
numerical simulations and formal travelling-wave analysis of the continuum
model, when sufficiently large cell numbers are considered. We provide
numerical evidence of scenarios in which the predictions of the two models may
differ due to demographic stochasticity, which cannot be captured by the
continuum model. This indicates the importance of integrating individual-based
and continuum approaches when modelling the growth of phenotypically
heterogeneous cell populations.
| [
{
"created": "Mon, 14 Feb 2022 09:53:40 GMT",
"version": "v1"
},
{
"created": "Tue, 15 Feb 2022 17:52:53 GMT",
"version": "v2"
},
{
"created": "Thu, 10 Mar 2022 15:26:44 GMT",
"version": "v3"
},
{
"created": "Fri, 3 Jun 2022 14:34:27 GMT",
"version": "v4"
}
] | 2022-06-06 | [
[
"Macfarlane",
"Fiona R",
""
],
[
"Ruan",
"Xinran",
""
],
[
"Lorenzi",
"Tommaso",
""
]
] | Existing studies comparing individual-based models of growing cell populations and their continuum counterparts have mainly focused on homogeneous populations, in which all cells have the same phenotypic characteristics. However, significant intercellular phenotypic variability is commonly observed in cellular systems. Therefore, we develop here an individual-based model for the growth of phenotypically heterogeneous cell populations. In this model, the phenotypic state of each cell is described by a structuring variable that captures intercellular variability in cell proliferation and migration rates. The model tracks the spatial evolutionary dynamics of single cells, which undergo pressure-dependent proliferation, heritable phenotypic changes and directional movement in response to pressure differentials. We formally show that the continuum limit of this model comprises a non-local partial differential equation for the cell population density, which generalises earlier models of growing cell populations. Results of the individual-based model illustrate how proliferation-migration tradeoffs shaping the evolution of single cells can lead to the formation of travelling waves at the population level where highly-mobile cells locally dominate at the invasive front, while more-proliferative cells are found at the rear. We demonstrate that there is an excellent quantitative agreement between these results and the results of numerical simulations and formal travelling-wave analysis of the continuum model, when sufficiently large cell numbers are considered. We provide numerical evidence of scenarios in which the predictions of the two models may differ due to demographic stochasticity, which cannot be captured by the continuum model. This indicates the importance of integrating individual-based and continuum approaches when modelling the growth of phenotypically heterogeneous cell populations. |
1801.00257 | Martin Frasch | Martin G. Frasch, Silvia Lobmaier, Tamara Stampalija, Paula Desplats,
Mar\'ia Eugenia Pallar\'es, Ver\'onica Pastor, Marcela Brocco, Hau-tieng Wu,
Jay Schulkin, Christophe Herry, Andrew Seely, Gerlinde A.S. Metz, Yoram
Louzoun, Marta Antonelli | Non-invasive biomarkers of fetal brain development reflecting prenatal
stress: an integrative multi-scale multi-species perspective on data
collection and analysis | Focused review, 13 pages, 5 figures | null | 10.1016/j.neubiorev.2018.05.026 | null | q-bio.QM q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Prenatal stress (PS) impacts early postnatal behavioural and cognitive
development. This process of 'fetal programming' is mediated by the effects of
the prenatal experience on the developing hypothalamic-pituitary-adrenal (HPA)
axis and autonomic nervous system (ANS). The HPA axis is a dynamic system
regulating homeostasis, especially the stress response, and is highly sensitive
to adverse early life experiences. We review the evidence for the effects of PS
on fetal programming of the HPA axis and the ANS. We derive a multi-scale
multi-species approach to devising preclinical and clinical studies to identify
early non-invasively available pre- and postnatal biomarkers of these
programming effects. Such an approach would identify adverse postnatal brain
developmental trajectories, a prerequisite for designing therapeutic
interventions. The multiple scales include the biomarkers reflecting changes in
the brain epigenome, metabolome, microbiome and the ANS activity gauged via an
array of advanced non-invasively obtainable properties of fetal heart rate
fluctuations. The proposed framework has the potential to reveal mechanistic
links between maternal stress during pregnancy and changes across these
physiological scales. Such biomarkers may hence be useful as early and
non-invasive predictors of neurodevelopmental trajectories influenced by the
PS. We conclude that studies into PS effects must be conducted on multiple
scales derived from concerted observations in multiple animal models and human
cohorts performed in an interactive and iterative manner and deploying machine
learning for data synthesis, identification and validation of the best
non-invasive biomarkers.
| [
{
"created": "Sun, 31 Dec 2017 09:20:09 GMT",
"version": "v1"
}
] | 2019-01-01 | [
[
"Frasch",
"Martin G.",
""
],
[
"Lobmaier",
"Silvia",
""
],
[
"Stampalija",
"Tamara",
""
],
[
"Desplats",
"Paula",
""
],
[
"Pallarés",
"María Eugenia",
""
],
[
"Pastor",
"Verónica",
""
],
[
"Brocco",
"Marcela",
... | Prenatal stress (PS) impacts early postnatal behavioural and cognitive development. This process of 'fetal programming' is mediated by the effects of the prenatal experience on the developing hypothalamic-pituitary-adrenal (HPA) axis and autonomic nervous system (ANS). The HPA axis is a dynamic system regulating homeostasis, especially the stress response, and is highly sensitive to adverse early life experiences. We review the evidence for the effects of PS on fetal programming of the HPA axis and the ANS. We derive a multi-scale multi-species approach to devising preclinical and clinical studies to identify early non-invasively available pre- and postnatal biomarkers of these programming effects. Such approach would identify adverse postnatal brain developmental trajectories, a prerequisite for designing therapeutic interventions. The multiple scales include the biomarkers reflecting changes in the brain epigenome, metabolome, microbiome and the ANS activity gauged via an array of advanced non-invasively obtainable properties of fetal heart rate fluctuations. The proposed framework has the potential to reveal mechanistic links between maternal stress during pregnancy and changes across these physiological scales. Such biomarkers may hence be useful as early and non-invasive predictors of neurodevelopmental trajectories influenced by the PS. We conclude that studies into PS effects must be conducted on multiple scales derived from concerted observations in multiple animal models and human cohorts performed in an interactive and iterative manner and deploying machine learning for data synthesis, identification and validation of the best non-invasive biomarkers. |
q-bio/0507044 | Paul Fran\c{c}ois | Paul Francois and Vincent Hakim | A core genetic module : the Mixed Feedback Loop | To be published in Physical Review E | null | 10.1103/PhysRevE.72.031908 | null | q-bio.MN q-bio.OT | null | The so-called Mixed Feedback Loop (MFL) is a small two-gene network where
protein A regulates the transcription of protein B and the two proteins form a
heterodimer. It has been found to be over-represented in statistical
analyses of gene and protein interaction databases and to lie at
the core of several computer-generated genetic networks. Here, we propose and
mathematically study a model of the MFL and show that, by itself, it can serve
both as a bistable switch and as a clock (an oscillator) depending on kinetic
parameters. The MFL phase diagram as well as a detailed description of the
nonlinear oscillation regime are presented and some biological examples are
discussed. The results emphasize the role of protein interactions in the
function of genetic modules and the usefulness of modelling RNA dynamics
explicitly.
| [
{
"created": "Thu, 28 Jul 2005 16:17:42 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Francois",
"Paul",
""
],
[
"Hakim",
"Vincent",
""
]
] | The so-called Mixed Feedback Loop (MFL) is a small two-gene network where protein A regulates the transcription of protein B and the two proteins form a heterodimer. It has been found to be statistically over-represented in statistical analyses of gene and protein interaction databases and to lie at the core of several computer-generated genetic networks. Here, we propose and mathematically study a model of the MFL and show that, by itself, it can serve both as a bistable switch and as a clock (an oscillator) depending on kinetic parameters. The MFL phase diagram as well as a detailed description of the nonlinear oscillation regime are presented and some biological examples are discussed. The results emphasize the role of protein interactions in the function of genetic modules and the usefulness of modelling RNA dynamics explicitly. |
1904.12964 | Maria Martinez Salazar | M.B.Mart\'inez Salazar, J.Delgado Dom\'inguez, J.Silva Estrada,
C.Gonz\'alez Bonilla, I.Becker | Vaccination with Leishmania mexicana LPG induces PD-1 in CD8+ and PD-L2
in macrophages thereby suppressing the immune response: A model to assess
vaccine efficacy | 7 pages, 5 figures | Vaccine Volume 32, Issue 11, 5 March 2014, Pages 1259-1265 | 10.1016/j.vaccine.2014.01.016 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Leishmania lipophosphoglycan is a molecule that has been used as a vaccine
candidate, with contradictory results. Since unsuccessful protection could be
related to suppressed T cell responses, we analyzed the expression of
inhibitory receptor PD-1 in CD8+ and CD4+ lymphocytes and its ligand PD-L2 in
macrophages of BALB/c mice immunized with various doses of Leishmania mexicana
LPG and re-stimulated in vitro with different concentrations of LPG.
Vaccination with LPG enhanced the expression of PD-1 in CD8+ cells. Activation
molecules CD137 were reduced in CD8+ cells from vaccinated mice. In vitro
re-stimulation enhanced PD-L2 expression in macrophages of healthy mice in a
dose-dependent fashion. The expression of PD-1, PD-L2, and CD137 is modulated
according to the amount of LPG used during immunization and in vitro
re-stimulation. We analyzed the expression of these molecules in mice infected
with 1 x 10^4 or 1 x 10^5 L. mexicana promastigotes and re-stimulated in vitro
with LPG. Infection with 1 x 10^5 parasites increased the PD-1 expression in
CD8+ and diminished PD-L2 in macrophages. When these CD8+ cells were
re-stimulated in vitro with LPG, simulating a second exposure to parasite
antigens, PD-1 expression increased significantly more, in a dose-dependent
fashion. We conclude that CD8+ T lymphocytes and macrophages express inhibition
molecules according to the concentrations of Leishmania LPG and to the parasite
load. Vaccination with increased amounts of LPG or infections with higher
parasite numbers induces enhanced expression of PD-1 and functional
inactivation of CD8+ cells, which can have critical consequences in
leishmaniasis since these cells are crucial for disease control.
| [
{
"created": "Mon, 29 Apr 2019 21:51:34 GMT",
"version": "v1"
}
] | 2019-05-01 | [
[
"Salazar",
"M. B. Martínez",
""
],
[
"Domínguez",
"J. Delgado",
""
],
[
"Estrada",
"J. Silva",
""
],
[
"Bonilla",
"C. González",
""
],
[
"Becker",
"I.",
""
]
] | Leishmania lipophosphoglycan is a molecule that has been used as a vaccine candidate, with contradictory results. Since unsuccessful protection could be related to suppressed T cell responses, we analyzed the expression of inhibitory receptor PD-1 in CD8+ and CD4+ lymphocytes and its ligand PD-L2 in macrophages of BALB/c mice immunized with various doses of Leishmania mexicana LPG and re-stimulated in vitro with different concentrations of LPG. Vaccination with LPG enhanced the expression of PD-1 in CD8+ cells. Activation molecules CD137 were reduced in CD8+ cells from vaccinated mice. In vitro re-stimulation enhanced PD-L2 expression in macrophages of healthy mice in a dose-dependent fashion. The expression of PD-1, PD-L2, and CD137 is modulated according to the amount of LPG used during immunization and in vitro re-stimulation. We analyzed the expression of these molecules in mice infected with 1 x 10^4 or 1 x 10^5 L. mexicana promastigotes and re-stimulated in vitro with LPG. Infection with 1 x 10^5 parasites increased the PD-1 expression in CD8+ and diminished PD-L2 in macrophages. When these CD8+ cells were re-stimulated in vitro with LPG, simulating a second exposure to parasite antigens, PD-1 expression increased significantly more, in a dose-dependent fashion. We conclude that CD8+ T lymphocytes and macrophages express inhibition molecules according to the concentrations of Leishmania LPG and to the parasite load. Vaccination with increased amounts of LPG or infections with higher parasite numbers induces enhanced expression of PD-1 and functional inactivation of CD8+ cells, which can have critical consequences in leishmaniasis since these cells are crucial for disease control. |
1205.4981 | Cristian Arsene | Cristian G. Arsene, Dirk Schulze, J\"urgen Kratzsch and Andr\'e
Henrion | High Sensitivity Mass Spectrometric Quantification of Serum Growth
Hormone by Amphiphilic Peptide Conjugation | null | J Mass Spectrom 47(2012)1554 | 10.1002/jms.3094 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Amphiphilic peptide conjugation affords a significant increase in sensitivity
with protein quantification by electrospray-ionization mass spectrometry. This
has been demonstrated here for human growth hormone in serum using
N-(3-iodopropyl)-N,N,N-dimethyloctylammonium iodide (IPDOA-iodide) as
derivatizing reagent. The signal enhancement achieved in comparison to the
method without derivatization enables extension of the applicable concentration
range down to the very low concentrations as encountered with clinical glucose
suppression tests for patients with acromegaly. The method has been validated
using a set of serum samples spiked with known amounts of recombinant 22 kDa
growth hormone in the range of 0.48 to 7.65 µg/L. The coefficient of
variation (CV) calculated, based on the deviation of results from the expected
concentrations, was 3.5% and the limit of quantification (LoQ) was determined
as 0.4 µg/L. The potential of the method as a tool in clinical practice has
been demonstrated with patient samples of about 1 µg/L.
| [
{
"created": "Tue, 22 May 2012 17:07:05 GMT",
"version": "v1"
}
] | 2012-12-06 | [
[
"Arsene",
"Cristian G.",
""
],
[
"Schulze",
"Dirk",
""
],
[
"Kratzsch",
"Jürgen",
""
],
[
"Henrion",
"André",
""
]
] | Amphiphilic peptide conjugation affords a significant increase in sensitivity with protein quantification by electrospray-ionization mass spectrometry. This has been demonstrated here for human growth hormone in serum using N-(3-iodopropyl)-N,N,N-dimethyloctylammonium iodide (IPDOA-iodide) as derivatizing reagent. The signal enhancement achieved in comparison to the method without derivatization enables extension of the applicable concentration range down to the very low concentrations as encountered with clinical glucose suppression tests for patients with acromegaly. The method has been validated using a set of serum samples spiked with known amounts of recombinant 22 kDa growth hormone in the range of 0.48 to 7.65 µg/L. The coefficient of variation (CV) calculated, based on the deviation of results from the expected concentrations, was 3.5% and the limit of quantification (LoQ) was determined as 0.4 µg/L. The potential of the method as a tool in clinical practice has been demonstrated with patient samples of about 1 µg/L. |
1302.5803 | Jiang Zhang | Jiang Zhang, Lingfei Wu | Allometry and Dissipation of Ecological Flow Networks | 35 pages, 13 figures | PLoS ONE 8(9): e72525, 2013 | 10.1371/journal.pone.0072525 | null | q-bio.PE physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An ecological flow network is a weighted directed graph in which nodes are
species, edges are "who eats whom" relationships and weights are rates of
energy or nutrients transfer between species. Allometric scaling is a
ubiquitous feature for flow systems like river basins, vascular networks and
food webs. By "ecological network analysis" method, we can reveal the hidden
allometry directly on the original flow networks without cutting edges. On the
other hand, dissipation law, which is another significant scaling relationship
between the energy dissipation (respiration) and the throughflow of any species
is also discovered on the collected flow networks. Interestingly, the exponents
of allometric law ($\eta$) and the dissipation law ($\gamma$) have a strong
connection for both empirical and simulated flow networks. The dissipation law
exponent $\gamma$ rather than the topology of the network is the most important
ingredient to the allometric exponent $\eta$. By reinterpreting $\eta$ as the
inequality of species impacts (direct and indirect influences) to the whole
network along all energy flow pathways but not the energy transportation
efficiency, we found that as $\gamma$ increases, the relative energy loss of
large nodes (with high throughflow) increases, $\eta$ decreases, and the
inequality of the whole flow network as well as the relative importance of
large species decreases. Therefore, flow structure and thermodynamic constraint
are connected.
| [
{
"created": "Sat, 23 Feb 2013 14:18:51 GMT",
"version": "v1"
}
] | 2013-09-24 | [
[
"Zhang",
"Jiang",
""
],
[
"Wu",
"Lingfei",
""
]
] | An ecological flow network is a weighted directed graph in which nodes are species, edges are "who eats whom" relationships and weights are rates of energy or nutrients transfer between species. Allometric scaling is a ubiquitous feature for flow systems like river basins, vascular networks and food webs. By "ecological network analysis" method, we can reveal the hidden allometry directly on the original flow networks without cutting edges. On the other hand, dissipation law, which is another significant scaling relationship between the energy dissipation (respiration) and the throughflow of any species is also discovered on the collected flow networks. Interestingly, the exponents of allometric law ($\eta$) and the dissipation law ($\gamma$) have a strong connection for both empirical and simulated flow networks. The dissipation law exponent $\gamma$ rather than the topology of the network is the most important ingredient to the allometric exponent $\eta$. By reinterpreting $\eta$ as the inequality of species impacts (direct and indirect influences) to the whole network along all energy flow pathways but not the energy transportation efficiency, we found that as $\gamma$ increases, the relative energy loss of large nodes (with high throughflow) increases, $\eta$ decreases, and the inequality of the whole flow network as well as the relative importance of large species decreases. Therefore, flow structure and thermodynamic constraint are connected. |
2005.09622 | John Nardini | John T. Nardini, John H. Lagergren, Andrea Hawkins-Daarud, Lee Curtin,
Bethan Morris, Erica M. Rutter, Kristin R. Swanson, Kevin B. Flores | Learning Equations from Biological Data with Limited Time Samples | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Equation learning methods present a promising tool to aid scientists in the
modeling process for biological data. Previous equation learning studies have
demonstrated that these methods can infer models from rich datasets; however,
the performance of these methods in the presence of common challenges from
biological data has not been thoroughly explored. We present an equation
learning methodology comprised of data denoising, equation learning, model
selection and post-processing steps that infers a dynamical systems model from
noisy spatiotemporal data. The performance of this methodology is thoroughly
investigated in the face of several common challenges presented by biological
data, namely, sparse data sampling, large noise levels, and heterogeneity
between datasets. We find that this methodology can accurately infer the
correct underlying equation and predict unobserved system dynamics from a small
number of time samples when the data is sampled over a time interval exhibiting
both linear and nonlinear dynamics. Our findings suggest that equation learning
methods can be used for model discovery and selection in many areas of biology
when an informative dataset is used. We focus on glioblastoma multiforme
modeling as a case study in this work to highlight how these results are
informative for data-driven modeling-based tumor invasion predictions.
| [
{
"created": "Tue, 19 May 2020 17:51:21 GMT",
"version": "v1"
}
] | 2020-05-20 | [
[
"Nardini",
"John T.",
""
],
[
"Lagergren",
"John H.",
""
],
[
"Hawkins-Daarud",
"Andrea",
""
],
[
"Curtin",
"Lee",
""
],
[
"Morris",
"Bethan",
""
],
[
"Rutter",
"Erica M.",
""
],
[
"Swanson",
"Kristin R.",
... | Equation learning methods present a promising tool to aid scientists in the modeling process for biological data. Previous equation learning studies have demonstrated that these methods can infer models from rich datasets, however, the performance of these methods in the presence of common challenges from biological data has not been thoroughly explored. We present an equation learning methodology comprised of data denoising, equation learning, model selection and post-processing steps that infers a dynamical systems model from noisy spatiotemporal data. The performance of this methodology is thoroughly investigated in the face of several common challenges presented by biological data, namely, sparse data sampling, large noise levels, and heterogeneity between datasets. We find that this methodology can accurately infer the correct underlying equation and predict unobserved system dynamics from a small number of time samples when the data is sampled over a time interval exhibiting both linear and nonlinear dynamics. Our findings suggest that equation learning methods can be used for model discovery and selection in many areas of biology when an informative dataset is used. We focus on glioblastoma multiforme modeling as a case study in this work to highlight how these results are informative for data-driven modeling-based tumor invasion predictions. |
2406.12949 | Florian Schunck | Florian Schunck, Bernhard Kodritsch, Wibke Busch, Martin Krauss,
Andreas Focks | Integrating time-resolved $nrf2$ gene-expression data into a full GUTS
model as a proxy for toxicodynamic damage in zebrafish embryo | null | null | null | null | q-bio.QM math.DS stat.AP | http://creativecommons.org/licenses/by/4.0/ | The immense production of the chemical industry requires an improved
predictive risk assessment that can handle constantly evolving challenges while
reducing the dependency of risk assessment on animal testing. Integrating
'omics data into mechanistic models offers a promising solution by linking
cellular processes triggered after chemical exposure with observed effects in
the organism. With the emerging availability of time-resolved RNA data, the
goal of integrating gene expression data into mechanistic models can be
approached. We propose a biologically anchored TKTD model, which describes key
processes that link the gene expression level of the stress regulator $nrf2$ to
detoxification and lethality by associating toxicodynamic damage with $nrf2$
expression. Fitting such a model to complex datasets consisting of multiple
endpoints required the combination of methods from molecular biology,
mechanistic dynamic systems modeling and Bayesian inference. In this study we
successfully integrate time-resolved gene expression data into TKTD models, and
thus provide a method for assessing the influence of molecular markers on
survival. This novel method was used to test whether $nrf2$ can be applied to
predict lethality in zebrafish embryos. With the presented approach we outline
a method to successively approach the goal of a predictive risk assessment
based on molecular data.
| [
{
"created": "Tue, 18 Jun 2024 12:28:38 GMT",
"version": "v1"
}
] | 2024-06-21 | [
[
"Schunck",
"Florian",
""
],
[
"Kodritsch",
"Bernhard",
""
],
[
"Busch",
"Wibke",
""
],
[
"Krauss",
"Martin",
""
],
[
"Focks",
"Andreas",
""
]
] | The immense production of the chemical industry requires an improved predictive risk assessment that can handle constantly evolving challenges while reducing the dependency of risk assessment on animal testing. Integrating 'omics data into mechanistic models offers a promising solution by linking cellular processes triggered after chemical exposure with observed effects in the organism. With the emerging availability of time-resolved RNA data, the goal of integrating gene expression data into mechanistic models can be approached. We propose a biologically anchored TKTD model, which describes key processes that link the gene expression level of the stress regulator $nrf2$ to detoxification and lethality by associating toxicodynamic damage with $nrf2$ expression. Fitting such a model to complex datasets consisting of multiple endpoints required the combination of methods from molecular biology, mechanistic dynamic systems modeling and Bayesian inference. In this study we successfully integrate time-resolved gene expression data into TKTD models, and thus provide a method for assessing the influence of molecular markers on survival. This novel method was used to test whether $nrf2$ can be applied to predict lethality in zebrafish embryos. With the presented approach we outline a method to successively approach the goal of a predictive risk assessment based on molecular data. |
2408.07637 | Weishun Zhong | Weishun Zhong, Mikhail Katkov, Misha Tsodyks | Hierarchical Working Memory and a New Magic Number | 16 pages, 7 figures | null | null | null | q-bio.NC cond-mat.dis-nn cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The extremely limited working memory span, typically around four items,
contrasts sharply with our everyday experience of processing much larger
streams of sensory information concurrently. This disparity suggests that
working memory can organize information into compact representations such as
chunks, yet the underlying neural mechanisms remain largely unknown. Here, we
propose a recurrent neural network model for chunking within the framework of
the synaptic theory of working memory. We show that by selectively
suppressing groups of stimuli, the network can maintain and retrieve the
stimuli in chunks, hence exceeding the basic capacity. Moreover, we show that
our model can dynamically construct hierarchical representations within working
memory through hierarchical chunking. A consequence of this proposed mechanism
is a new limit on the number of items that can be stored and subsequently
retrieved from working memory, depending only on the basic working memory
capacity when chunking is not invoked. Predictions from our model were
confirmed by analyzing single-unit responses in epileptic patients and memory
experiments with verbal material. Our work provides a novel conceptual and
analytical framework for understanding the on-the-fly organization of
information in the brain that is crucial for cognition.
| [
{
"created": "Wed, 14 Aug 2024 16:03:47 GMT",
"version": "v1"
}
] | 2024-08-15 | [
[
"Zhong",
"Weishun",
""
],
[
"Katkov",
"Mikhail",
""
],
[
"Tsodyks",
"Misha",
""
]
] | The extremely limited working memory span, typically around four items, contrasts sharply with our everyday experience of processing much larger streams of sensory information concurrently. This disparity suggests that working memory can organize information into compact representations such as chunks, yet the underlying neural mechanisms remain largely unknown. Here, we propose a recurrent neural network model for chunking within the framework of the synaptic theory of working memory. We showed that by selectively suppressing groups of stimuli, the network can maintain and retrieve the stimuli in chunks, hence exceeding the basic capacity. Moreover, we show that our model can dynamically construct hierarchical representations within working memory through hierarchical chunking. A consequence of this proposed mechanism is a new limit on the number of items that can be stored and subsequently retrieved from working memory, depending only on the basic working memory capacity when chunking is not invoked. Predictions from our model were confirmed by analyzing single-unit responses in epileptic patients and memory experiments with verbal material. Our work provides a novel conceptual and analytical framework for understanding the on-the-fly organization of information in the brain that is crucial for cognition. |
1808.06083 | Radhakrishnan Chandrashekar Dr. | Hamid-Reza Rastegar-Sedehi, Chandrashekar Radhakrishnan, Samer
Intissar Nehme, Liev Birman, Paula Velasquez, Tim Byrnes | Non-equilibrium time dynamics of genetic evolution | Published in Physical Review E | Physical Review E 98, 022403 (2018) | 10.1103/PhysRevE.98.022403 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biological systems are typically highly open, non-equilibrium systems that
are very challenging to understand from a statistical mechanics perspective.
While statistical treatments of evolutionary biological systems have a long and
rich history, examination of the time-dependent non-equilibrium dynamics has
been less studied. In this paper we first derive a generalized master equation
in the genotype space for diploid organisms incorporating the processes of
selection, mutation, recombination, and reproduction. The master equation is
defined in terms of continuous time and can handle an arbitrary number of gene
loci and alleles, and can be defined in terms of an absolute population or
probabilities. We examine and analytically solve several prototypical cases
which illustrate the interplay of the various processes and discuss the
timescales of their evolution. The entropy production during the evolution
towards steady state is calculated and we find that it agrees with predictions
from non-equilibrium statistical mechanics where it is large when the
population distribution evolves towards a more viable genotype. The stability
of the non-equilibrium steady state is confirmed using the Glansdorff-Prigogine
criterion.
| [
{
"created": "Sat, 18 Aug 2018 13:12:52 GMT",
"version": "v1"
}
] | 2018-08-21 | [
[
"Rastegar-Sedehi",
"Hamid-Reza",
""
],
[
"Radhakrishnan",
"Chandrashekar",
""
],
[
"Nehme",
"Samer Intissar",
""
],
[
"Birman",
"Liev",
""
],
[
"Velasquez",
"Paula",
""
],
[
"Byrnes",
"Tim",
""
]
] | Biological systems are typically highly open, non-equilibrium systems that are very challenging to understand from a statistical mechanics perspective. While statistical treatments of evolutionary biological systems have a long and rich history, examination of the time-dependent non-equilibrium dynamics has been less studied. In this paper we first derive a generalized master equation in the genotype space for diploid organisms incorporating the processes of selection, mutation, recombination, and reproduction. The master equation is defined in terms of continuous time and can handle an arbitrary number of gene loci and alleles, and can be defined in terms of an absolute population or probabilities. We examine and analytically solve several prototypical cases which illustrate the interplay of the various processes and discuss the timescales of their evolution. The entropy production during the evolution towards steady state is calculated and we find that it agrees with predictions from non-equilibrium statistical mechanics where it is large when the population distribution evolves towards a more viable genotype. The stability of the non-equilibrium steady state is confirmed using the Glansdorff-Prigogine criterion. |
2006.06829 | Amin Dehghani | Amin Dehghani, Hamid Soltanian-Zadeh, Gholam-Ali Hossein-Zadeh | Probing fMRI brain connectivity and activity changes during emotion
regulation by EEG neurofeedback | 40 pages, 6 figures, 4 tables | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neurofeedback is a non-invasive brain training with long-term medical and
non-medical applications. Despite the existence of several emotion regulation
studies using neurofeedback, further investigation is needed to understand
interactions of the brain regions involved in the process. We implemented EEG
neurofeedback with simultaneous fMRI using a modified happiness-inducing task
through autobiographical memories to upregulate positive emotion. The results
showed increased activity of prefrontal, occipital, parietal, and limbic
regions and increased functional connectivity between prefrontal, parietal,
limbic system, and insula in the experimental group. New connectivity links
were identified by comparing the functional connectivity of different
experimental conditions within the experimental group and between the
experimental and control groups. The proposed multimodal approach quantified
the changes in the brain activity (up to 1.9% increase) and connectivity
(FDR-corrected for multiple comparison, q = 0.05) during emotion regulation
in/between prefrontal, parietal, limbic, and insula regions. Psychometric
assessments confirmed significant changes in positive and negative mood states
by neurofeedback with a p-value smaller than 0.002 in the experimental group.
This study quantifies the effects of EEG neurofeedback in changing functional
connectivity of all brain regions involved in emotion regulation. For the brain
regions involved in emotion regulation, we found significant BOLD and
functional connectivity increases due to neurofeedback in the experimental
group but no learning effect was observed in the control group. The results
reveal the neurobiological substrate of emotion regulation by the EEG
neurofeedback and separate the effect of the neurofeedback and the recall of
the autobiographical memories.
| [
{
"created": "Thu, 11 Jun 2020 21:16:59 GMT",
"version": "v1"
}
] | 2020-06-19 | [
[
"Dehghani",
"Amin",
""
],
[
"Soltanian-Zadeh",
"Hamid",
""
],
[
"Hossein-Zadeh",
"Gholam-Ali",
""
]
] | Neurofeedback is a non-invasive brain training with long-term medical and non-medical applications. Despite the existence of several emotion regulation studies using neurofeedback, further investigation is needed to understand interactions of the brain regions involved in the process. We implemented EEG neurofeedback with simultaneous fMRI using a modified happiness-inducing task through autobiographical memories to upregulate positive emotion. The results showed increased activity of prefrontal, occipital, parietal, and limbic regions and increased functional connectivity between prefrontal, parietal, limbic system, and insula in the experimental group. New connectivity links were identified by comparing the functional connectivity of different experimental conditions within the experimental group and between the experimental and control groups. The proposed multimodal approach quantified the changes in the brain activity (up to 1.9% increase) and connectivity (FDR-corrected for multiple comparison, q = 0.05) during emotion regulation in/between prefrontal, parietal, limbic, and insula regions. Psychometric assessments confirmed significant changes in positive and negative mood states by neurofeedback with a p-value smaller than 0.002 in the experimental group. This study quantifies the effects of EEG neurofeedback in changing functional connectivity of all brain regions involved in emotion regulation. For the brain regions involved in emotion regulation, we found significant BOLD and functional connectivity increases due to neurofeedback in the experimental group but no learning effect was observed in the control group. The results reveal the neurobiological substrate of emotion regulation by the EEG neurofeedback and separate the effect of the neurofeedback and the recall of the autobiographical memories. |
2106.04085 | Gerardo L. Febres Dr. | Gerardo L. Febres | Assessing the impact of social activity permissiveness on the COVID-19
infection curve of several countries | 17 pages, 6 Figures, 3 Tables, Figures for 38 countries in the
Appendix | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | This document aims to estimate and describe the effects of the social
distancing measures implemented in several countries with the expectation of
controlling the spread of COVID-19. The procedure relies on the classic
Susceptible-Infected-Removed (SIR) model, which is modified to incorporate a
permissiveness index, representing the isolation achieved by the social
distancing and the future development of vaccination campaigns and allowing the
math model to reproduce more than one infection wave. The adjusted SIR models
are used to study the compromise between the economy's reactivation and the
resulting infection spreading increase. The document presents a
graphical-abacus that describes the convenience of progressively relaxing social
distancing measures while a feasible vaccination campaign develops.
| [
{
"created": "Tue, 8 Jun 2021 03:48:01 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Jun 2021 05:13:31 GMT",
"version": "v2"
}
] | 2021-06-28 | [
[
"Febres",
"Gerardo L.",
""
]
] | This document aims to estimate and describe the effects of the social distancing measures implemented in several countries with the expectation of controlling the spread of COVID-19. The procedure relies on the classic Susceptible-Infected-Removed (SIR) model, which is modified to incorporate a permissiveness index, representing the isolation achieved by the social distancing and the future development of vaccination campaigns and allowing the math model to reproduce more than one infection wave. The adjusted SIR models are used to study the compromise between the economy's reactivation and the resulting infection spreading increase. The document presents a graphical-abacus that describes the convenience of progressively relaxing social distancing measures while a feasible vaccination campaign develops. |
1803.08474 | Joachim Krug | Joachim Krug | Population Genetics and Evolution | 18 pages, 7 figures | Lecture notes of the 49th IFF spring school "Physics of Life"
(Forschungszentrum J\"ulich, 2018) | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | These lecture notes introduce key concepts of mathematical population
genetics within the most elementary setting and describe a few recent
applications to microbial evolution experiments. Pointers to the literature for
further reading are provided, and some of the derivations are left as exercises
for the reader.
| [
{
"created": "Thu, 22 Mar 2018 17:20:37 GMT",
"version": "v1"
}
] | 2018-03-23 | [
[
"Krug",
"Joachim",
""
]
] | These lecture notes introduce key concepts of mathematical population genetics within the most elementary setting and describe a few recent applications to microbial evolution experiments. Pointers to the literature for further reading are provided, and some of the derivations are left as exercises for the reader. |
1210.3294 | Barbara Engelhardt | Christopher D Brown, Lara M Mangravite, Barbara E Engelhardt | Integrative modeling of eQTLs and cis-regulatory elements suggest
mechanisms underlying cell type specificity of eQTLs | 25 pages, 7 figures, 3 tables | null | 10.1371/journal.pgen.1003649 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetic variants in cis-regulatory elements or trans-acting regulators
commonly influence the quantity and spatiotemporal distribution of gene
transcription. Recent interest in expression quantitative trait locus (eQTL)
mapping has paralleled the adoption of genome-wide association studies (GWAS)
for the analysis of complex traits and disease in humans. Under the hypothesis
that many GWAS associations tag non-coding SNPs with small effects, and that
these SNPs exert phenotypic control by modifying gene expression, it has become
common to interpret GWAS associations using eQTL data. To exploit the
mechanistic interpretability of eQTL-GWAS comparisons, an improved
understanding of the genetic architecture and cell type specificity of eQTLs is
required. We address this need by performing an eQTL analysis in four parts:
first we identified eQTLs from eleven studies on seven cell types; next we
quantified cell type specific eQTLs across the studies; then we integrated eQTL
data with cis-regulatory element (CRE) data sets from the ENCODE project;
finally we built a classifier to predict cell type specific eQTLs. Consistent
with prior studies, we demonstrate that allelic heterogeneity is pervasive at
cis-eQTLs and that cis-eQTLs are often cell type specific. Within and between
cell type eQTL replication is associated with eQTL SNP overlap with hundreds of
cell type specific CRE element classes, including enhancer, promoter, and
repressive chromatin marks, regions of open chromatin, and many classes of DNA
binding proteins. Using a random forest classifier including 526 CRE data sets
as features, we successfully predict the cell type specificity of eQTL SNPs in
the absence of gene expression data from the cell type of interest. We
anticipate that such integrative, predictive modeling will improve our ability
to understand the mechanistic basis of human complex phenotypic variation.
| [
{
"created": "Thu, 11 Oct 2012 16:44:01 GMT",
"version": "v1"
}
] | 2015-11-12 | [
[
"Brown",
"Christopher D",
""
],
[
"Mangravite",
"Lara M",
""
],
[
"Engelhardt",
"Barbara E",
""
]
] | Genetic variants in cis-regulatory elements or trans-acting regulators commonly influence the quantity and spatiotemporal distribution of gene transcription. Recent interest in expression quantitative trait locus (eQTL) mapping has paralleled the adoption of genome-wide association studies (GWAS) for the analysis of complex traits and disease in humans. Under the hypothesis that many GWAS associations tag non-coding SNPs with small effects, and that these SNPs exert phenotypic control by modifying gene expression, it has become common to interpret GWAS associations using eQTL data. To exploit the mechanistic interpretability of eQTL-GWAS comparisons, an improved understanding of the genetic architecture and cell type specificity of eQTLs is required. We address this need by performing an eQTL analysis in four parts: first we identified eQTLs from eleven studies on seven cell types; next we quantified cell type specific eQTLs across the studies; then we integrated eQTL data with cis-regulatory element (CRE) data sets from the ENCODE project; finally we built a classifier to predict cell type specific eQTLs. Consistent with prior studies, we demonstrate that allelic heterogeneity is pervasive at cis-eQTLs and that cis-eQTLs are often cell type specific. Within and between cell type eQTL replication is associated with eQTL SNP overlap with hundreds of cell type specific CRE element classes, including enhancer, promoter, and repressive chromatin marks, regions of open chromatin, and many classes of DNA binding proteins. Using a random forest classifier including 526 CRE data sets as features, we successfully predict the cell type specificity of eQTL SNPs in the absence of gene expression data from the cell type of interest. We anticipate that such integrative, predictive modeling will improve our ability to understand the mechanistic basis of human complex phenotypic variation. |
2312.15270 | Indrajit Ghosh | I. Ghosh, S. Gupta, S. Rana | Anticipating dengue outbreaks using a novel hybrid ARIMA-ARNN model with
exogenous variables | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Dengue incidence forecasting using hybrid models has been surging in the data
rich world. Hybridization of statistical time series forecasting models and
machine learning models are explored for dengue forecasting with different
degrees of success. In this paper, we propose a multivariate expansion of the
hybrid ARIMA-ARNN model. The main motivation is to propose a novel
hybridization and apply it to dengue outbreak prediction. The asymptotic
stationarity of the proposed model has been established. We check the
forecasting capability and robustness of the forecasts through numerical
experiments. State-of-the-art forecasting models for multivariate time series
data are compared with the proposed model using accuracy metrics. Dengue
incidence data from San Juan and Iquitos are utilized along with rainfall as an
exogenous variable. Results indicate that the proposed model improves the
ARIMAX forecasts in some situations and closely follows it otherwise. The
theoretical as well as experimental results reinforce that the proposed model
has the potential to act as a candidate for early warning of dengue outbreaks.
The proposed model can be readily generalized to incorporate more exogenous
variables and also applied to other time series forecasting problems wherever
exogenous variable(s) are available.
| [
{
"created": "Sat, 23 Dec 2023 14:37:35 GMT",
"version": "v1"
}
] | 2023-12-27 | [
[
"Ghosh",
"I.",
""
],
[
"Gupta",
"S.",
""
],
[
"Rana",
"S.",
""
]
] | Dengue incidence forecasting using hybrid models has been surging in the data rich world. Hybridization of statistical time series forecasting models and machine learning models are explored for dengue forecasting with different degrees of success. In this paper, we propose a multivariate expansion of the hybrid ARIMA-ARNN model. The main motivation is to propose a novel hybridization and apply it to dengue outbreak prediction. The asymptotic stationarity of the proposed model has been established. We check the forecasting capability and robustness of the forecasts through numerical experiments. State-of-the-art forecasting models for multivariate time series data are compared with the proposed model using accuracy metrics. Dengue incidence data from San Juan and Iquitos are utilized along with rainfall as an exogenous variable. Results indicate that the proposed model improves the ARIMAX forecasts in some situations and closely follows it otherwise. The theoretical as well as experimental results reinforce that the proposed model has the potential to act as a candidate for early warning of dengue outbreaks. The proposed model can be readily generalized to incorporate more exogenous variables and also applied to other time series forecasting problems wherever exogenous variable(s) are available. |
1605.04221 | Francesca Mastrogiuseppe | Francesca Mastrogiuseppe, Srdjan Ostojic | Intrinsically-generated fluctuating activity in excitatory-inhibitory
networks | null | PLOS Computational Biology 13(4): e1005498 (2017) | 10.1371/journal.pcbi.1005498 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recurrent networks of non-linear units display a variety of dynamical regimes
depending on the structure of their synaptic connectivity. A particularly
remarkable phenomenon is the appearance of strongly fluctuating, chaotic
activity in networks of deterministic, but randomly connected rate units. How
this type of intrinsically generated fluctuations appears in more realistic
networks of spiking neurons has been a long standing question. To ease the
comparison between rate and spiking networks, recent works investigated the
dynamical regimes of randomly-connected rate networks with segregated
excitatory and inhibitory populations, and firing rates constrained to be
positive. These works derived general dynamical mean field (DMF) equations
describing the fluctuating dynamics, but solved these equations only in the
case of purely inhibitory networks. Using a simplified excitatory-inhibitory
architecture in which DMF equations are more easily tractable, here we show
that the presence of excitation qualitatively modifies the fluctuating activity
compared to purely inhibitory networks. In presence of excitation,
intrinsically generated fluctuations induce a strong increase in mean firing
rates, a phenomenon that is much weaker in purely inhibitory networks.
Excitation moreover induces two different fluctuating regimes: for moderate
overall coupling, recurrent inhibition is sufficient to stabilize fluctuations,
for strong coupling, firing rates are stabilized solely by the upper bound
imposed on activity, even if inhibition is stronger than excitation. These
results extend to more general network architectures, and to rate networks
receiving noisy inputs mimicking spiking activity. Finally, we show that
signatures of the second dynamical regime appear in networks of
integrate-and-fire neurons.
| [
{
"created": "Fri, 13 May 2016 15:43:59 GMT",
"version": "v1"
},
{
"created": "Tue, 21 Mar 2017 13:44:01 GMT",
"version": "v2"
},
{
"created": "Tue, 9 May 2017 13:47:00 GMT",
"version": "v3"
}
] | 2017-05-10 | [
[
"Mastrogiuseppe",
"Francesca",
""
],
[
"Ostojic",
"Srdjan",
""
]
] | Recurrent networks of non-linear units display a variety of dynamical regimes depending on the structure of their synaptic connectivity. A particularly remarkable phenomenon is the appearance of strongly fluctuating, chaotic activity in networks of deterministic, but randomly connected rate units. How this type of intrinsically generated fluctuations appears in more realistic networks of spiking neurons has been a long standing question. To ease the comparison between rate and spiking networks, recent works investigated the dynamical regimes of randomly-connected rate networks with segregated excitatory and inhibitory populations, and firing rates constrained to be positive. These works derived general dynamical mean field (DMF) equations describing the fluctuating dynamics, but solved these equations only in the case of purely inhibitory networks. Using a simplified excitatory-inhibitory architecture in which DMF equations are more easily tractable, here we show that the presence of excitation qualitatively modifies the fluctuating activity compared to purely inhibitory networks. In presence of excitation, intrinsically generated fluctuations induce a strong increase in mean firing rates, a phenomenon that is much weaker in purely inhibitory networks. Excitation moreover induces two different fluctuating regimes: for moderate overall coupling, recurrent inhibition is sufficient to stabilize fluctuations, for strong coupling, firing rates are stabilized solely by the upper bound imposed on activity, even if inhibition is stronger than excitation. These results extend to more general network architectures, and to rate networks receiving noisy inputs mimicking spiking activity. Finally, we show that signatures of the second dynamical regime appear in networks of integrate-and-fire neurons. |
q-bio/0701026 | Amelie Mathieu | Amelie Mathieu (MAS, AMAP), Paul-Henry Courn\`ede (MAS), Philippe De
Reffye (AMAP, INRIA Rocquencourt) | The Influence of Photosynthesis on the Number of Metamers per Growth
Unit in GreenLab Model | null | 4TH International Workshop on Functional-Structural Plant Models,
France (06/2004) | null | null | q-bio.TO math.DS q-bio.QM | null | GreenLab Model is a functional-structural plant growth model that combines
both organogenesis (at each cycle, new organs are created with respect to
genetic rules) and photosynthesis (organs are filled with the biomass produced
by the leaves photosynthesis). Our new developments of the model concern the
retroaction of photosynthesis on organogenesis. We present here the first step
towards the total representation of this retroaction, where the influence of
available biomass on the number of metamers in new growth units is modelled.
The theory is introduced and applied to a Corner model tree. Different
interesting behaviours are pointed out.
| [
{
"created": "Wed, 17 Jan 2007 19:51:03 GMT",
"version": "v1"
}
] | 2016-08-14 | [
[
"Mathieu",
"Amelie",
"",
"MAS, AMAP"
],
[
"Cournède",
"Paul-Henry",
"",
"MAS"
],
[
"De Reffye",
"Philippe",
"",
"AMAP, INRIA Rocquencourt"
]
] | GreenLab Model is a functional-structural plant growth model that combines both organogenesis (at each cycle, new organs are created with respect to genetic rules) and photosynthesis (organs are filled with the biomass produced by the leaves photosynthesis). Our new developments of the model concern the retroaction of photosynthesis on organogenesis. We present here the first step towards the total representation of this retroaction, where the influence of available biomass on the number of metamers in new growth units is modelled. The theory is introduced and applied to a Corner model tree. Different interesting behaviours are pointed out. |
2207.04832 | Lazaros Mitskopoulos | Lazaros Mitskopoulos, Theoklitos Amvrosiadis and Arno Onken | Mixed vine copula flows for flexible modelling of neural dependencies | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Recordings of complex neural population responses provide a unique
opportunity for advancing our understanding of neural information processing at
multiple scales and improving performance of brain computer interfaces.
However, most existing analytical techniques fall short of capturing the
complexity of interactions within the concerted population activity. Vine
copula-based approaches have been shown to be successful at addressing complex
high-order dependencies within the population, disentangled from the
single-neuron statistics. However, most applications have focused on parametric
copulas which bear the risk of misspecifying dependence structures. In order to
avoid this risk, we adopted a fully non-parametric approach for the
single-neuron margins and copulas by using Neural Spline Flows (NSF). We
validated the NSF framework on simulated data of continuous and discrete type
with various forms of dependency structures and with different dimensionality.
Overall, NSFs performed similarly to existing non-parametric estimators, while
allowing for considerably faster and more flexible sampling which also enables
faster Monte Carlo estimation of copula entropy. Moreover, our framework was
able to capture low and higher order heavy tail dependencies in neuronal
responses recorded in the mouse primary visual cortex during a visual learning
task while the animal was navigating a virtual reality environment. These
findings highlight an often ignored aspect of complexity in coordinated
neuronal activity which can be important for understanding and deciphering
collective neural dynamics for neurotechnological applications.
| [
{
"created": "Mon, 11 Jul 2022 12:59:13 GMT",
"version": "v1"
}
] | 2022-07-12 | [
[
"Mitskopoulos",
"Lazaros",
""
],
[
"Amvrosiadis",
"Theoklitos",
""
],
[
"Onken",
"Arno",
""
]
] | Recordings of complex neural population responses provide a unique opportunity for advancing our understanding of neural information processing at multiple scales and improving performance of brain computer interfaces. However, most existing analytical techniques fall short of capturing the complexity of interactions within the concerted population activity. Vine copula-based approaches have been shown to be successful at addressing complex high-order dependencies within the population, disentangled from the single-neuron statistics. However, most applications have focused on parametric copulas which bear the risk of misspecifying dependence structures. In order to avoid this risk, we adopted a fully non-parametric approach for the single-neuron margins and copulas by using Neural Spline Flows (NSF). We validated the NSF framework on simulated data of continuous and discrete type with various forms of dependency structures and with different dimensionality. Overall, NSFs performed similarly to existing non-parametric estimators, while allowing for considerably faster and more flexible sampling which also enables faster Monte Carlo estimation of copula entropy. Moreover, our framework was able to capture low and higher order heavy tail dependencies in neuronal responses recorded in the mouse primary visual cortex during a visual learning task while the animal was navigating a virtual reality environment. These findings highlight an often ignored aspect of complexity in coordinated neuronal activity which can be important for understanding and deciphering collective neural dynamics for neurotechnological applications. |
1912.13094 | Natasha Blitvi\'c | Natasha Blitvi\'c and Vicente I. Fernandez | Aging a little: The optimality of limited senescence in Escherichia coli | 29 pages (manuscript + supplemental materials). This work grew out of
arXiv:1901.04080 v1, which previously discussed both combinatorics and
biology. The updated arXiv:1901.04080 v2 now focuses solely on the
combinatorial aspects, while this manuscript focuses on the biological
application | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies have shown that even in the absence of extrinsic stress, the
morphologically symmetrically dividing model bacteria Escherichia coli do not
generate offspring with equal reproductive fitness. Instead, daughter cells
exhibit asymmetric division times that converge to two distinct growth states.
This represents a limited senescence / rejuvenation process derived from
asymmetric division that is stable for hundreds of generations. It remains
unclear why the bacteria do not continue the senescence beyond this asymptote.
Although there are inherent fitness benefits for heterogeneity in population
growth rates, the two growth equilibria are surprisingly similar, differing by
a few percent. In this work we derive an explicit model for the growth of a
bacterial population with two growth equilibria, based on a generalized
Fibonacci recurrence, in order to quantify the fitness benefit of a limited
senescence process and examine costs associated with asymmetry that could
generate the observed behavior. We find that with simple saturating effects of
asymmetric partitioning of subcellular components, two distinct but similar
growth states may be optimal while providing evolutionarily significant fitness
advantages.
| [
{
"created": "Mon, 30 Dec 2019 21:48:32 GMT",
"version": "v1"
}
] | 2020-01-01 | [
[
"Blitvić",
"Natasha",
""
],
[
"Fernandez",
"Vicente I.",
""
]
] | Recent studies have shown that even in the absence of extrinsic stress, the morphologically symmetrically dividing model bacteria Escherichia coli do not generate offspring with equal reproductive fitness. Instead, daughter cells exhibit asymmetric division times that converge to two distinct growth states. This represents a limited senescence / rejuvenation process derived from asymmetric division that is stable for hundreds of generations. It remains unclear why the bacteria do not continue the senescence beyond this asymptote. Although there are inherent fitness benefits for heterogeneity in population growth rates, the two growth equilibria are surprisingly similar, differing by a few percent. In this work we derive an explicit model for the growth of a bacterial population with two growth equilibria, based on a generalized Fibonacci recurrence, in order to quantify the fitness benefit of a limited senescence process and examine costs associated with asymmetry that could generate the observed behavior. We find that with simple saturating effects of asymmetric partitioning of subcellular components, two distinct but similar growth states may be optimal while providing evolutionarily significant fitness advantages. |
1407.6903 | Roc\'io Espada | Roc\'io Espada, R. Gonzalo Parra, Thierry Mora, Aleksandra M. Walczak,
and Diego Ferreiro | Capturing coevolutionary signals in repeat proteins | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The analysis of correlations of amino acid occurrences in globular proteins
has led to the development of statistical tools that can identify native
contacts -- portions of the chains that come to close distance in folded
structural ensembles. Here we introduce a statistical coupling analysis for
repeat proteins -- natural systems for which the identification of domains
remains challenging. We show that the inherent translational symmetry of repeat
protein sequences introduces a strong bias in the pair correlations at
precisely the length scale of the repeat-unit. Equalizing for this bias reveals
true co-evolutionary signals from which local native-contacts can be
identified. Importantly, parameter values obtained for all other interactions
are not significantly affected by the equalization. We quantify the robustness
of the procedure and assign confidence levels to the interactions, identifying
the minimum number of sequences needed to extract evolutionary information in
several repeat protein families. The overall procedure can be used to
reconstruct the interactions at long distances, identifying the characteristics
of the strongest couplings in each family, and can be applied to any system
that appears translationally symmetric.
| [
{
"created": "Fri, 25 Jul 2014 14:01:39 GMT",
"version": "v1"
}
] | 2014-07-28 | [
[
"Espada",
"Rocío",
""
],
[
"Parra",
"R. Gonzalo",
""
],
[
"Mora",
"Thierry",
""
],
[
"Walczak",
"Aleksandra M.",
""
],
[
"Ferreiro",
"Diego",
""
]
] | The analysis of correlations of amino acid occurrences in globular proteins has led to the development of statistical tools that can identify native contacts -- portions of the chains that come to close distance in folded structural ensembles. Here we introduce a statistical coupling analysis for repeat proteins -- natural systems for which the identification of domains remains challenging. We show that the inherent translational symmetry of repeat protein sequences introduces a strong bias in the pair correlations at precisely the length scale of the repeat-unit. Equalizing for this bias reveals true co-evolutionary signals from which local native-contacts can be identified. Importantly, parameter values obtained for all other interactions are not significantly affected by the equalization. We quantify the robustness of the procedure and assign confidence levels to the interactions, identifying the minimum number of sequences needed to extract evolutionary information in several repeat protein families. The overall procedure can be used to reconstruct the interactions at long distances, identifying the characteristics of the strongest couplings in each family, and can be applied to any system that appears translationally symmetric. |
1211.4037 | Steven Frank | Steven A. Frank | Natural selection. V. How to read the fundamental equations of
evolutionary change in terms of information theory | null | Frank, S. A. 2012. Natural selection. V. How to read the
fundamental equations of evolutionary change in terms of information theory.
Journal of Evolutionary Biology 25:2377-2396 | 10.1111/jeb.12010 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The equations of evolutionary change by natural selection are commonly
expressed in statistical terms. Fisher's fundamental theorem emphasizes the
variance in fitness. Quantitative genetics expresses selection with covariances
and regressions. Population genetic equations depend on genetic variances. How
can we read those statistical expressions with respect to the meaning of
natural selection? One possibility is to relate the statistical expressions to
the amount of information that populations accumulate by selection. However,
the connection between selection and information theory has never been
compelling. Here, I show the correct relations between statistical expressions
for selection and information theory expressions for selection. Those relations
link selection to the fundamental concepts of entropy and information in the
theories of physics, statistics, and communication. We can now read the
equations of selection in terms of their natural meaning. Selection causes
populations to accumulate information about the environment.
| [
{
"created": "Fri, 16 Nov 2012 21:13:31 GMT",
"version": "v1"
}
] | 2012-11-20 | [
[
"Frank",
"Steven A.",
""
]
] | The equations of evolutionary change by natural selection are commonly expressed in statistical terms. Fisher's fundamental theorem emphasizes the variance in fitness. Quantitative genetics expresses selection with covariances and regressions. Population genetic equations depend on genetic variances. How can we read those statistical expressions with respect to the meaning of natural selection? One possibility is to relate the statistical expressions to the amount of information that populations accumulate by selection. However, the connection between selection and information theory has never been compelling. Here, I show the correct relations between statistical expressions for selection and information theory expressions for selection. Those relations link selection to the fundamental concepts of entropy and information in the theories of physics, statistics, and communication. We can now read the equations of selection in terms of their natural meaning. Selection causes populations to accumulate information about the environment. |
q-bio/0609043 | Anton Zilman | A. Zilman, S. DiTalia, B. T. Chait, M. P Rout, M. O. Magnasco | Efficiency, selectivity and robustness of the nuclear pore complex
transport | 38 pages, six figures | null | 10.1371/journal.pcbi.0030125 | null | q-bio.CB | null | All materials enter or exit the cell nucleus through nuclear pore complexes
(NPCs), efficient transport devices that combine high selectivity and
throughput. A central feature of this transport is the binding of
cargo-carrying soluble transport factors to flexible, unstructured
proteinaceous filaments called FG-nups that line the NPC. We have modeled the
dynamics of transport factors and their interaction with the flexible FG-nups
as diffusion in an effective potential, using both analytical theory and
computer simulations. We show that specific binding of transport factors to the
FG-nups facilitates transport and provides the mechanism of selectivity. We
show that the high selectivity of transport can be accounted for by competition
for both binding sites and space inside the NPC, which selects for transport
factors over other macromolecules that interact only non-specifically with the
NPC. We also show that transport is relatively insensitive to changes in the
number and distribution of FG-nups in the NPC, due mainly to their flexibility;
this accounts for recent experiments where up to half of the total mass of the
NPC has been deleted, without abolishing the transport. Notably, we demonstrate
that previously established physical and structural properties of the NPC can
account for observed features of nucleocytoplasmic transport. Finally, our
results suggest strategies for creation of artificial nano-molecular sorting
devices.
| [
{
"created": "Tue, 26 Sep 2006 13:36:02 GMT",
"version": "v1"
}
] | 2015-06-26 | [
[
"Zilman",
"A.",
""
],
[
"DiTalia",
"S.",
""
],
[
"Chait",
"B. T.",
""
],
[
"Rout",
"M. P",
""
],
[
"Magnasco",
"M. O.",
""
]
] | All materials enter or exit the cell nucleus through nuclear pore complexes (NPCs), efficient transport devices that combine high selectivity and throughput. A central feature of this transport is the binding of cargo-carrying soluble transport factors to flexible, unstructured proteinaceous filaments called FG-nups that line the NPC. We have modeled the dynamics of transport factors and their interaction with the flexible FG-nups as diffusion in an effective potential, using both analytical theory and computer simulations. We show that specific binding of transport factors to the FG-nups facilitates transport and provides the mechanism of selectivity. We show that the high selectivity of transport can be accounted for by competition for both binding sites and space inside the NPC, which selects for transport factors over other macromolecules that interact only non-specifically with the NPC. We also show that transport is relatively insensitive to changes in the number and distribution of FG-nups in the NPC, due mainly to their flexibility; this accounts for recent experiments where up to half of the total mass of the NPC has been deleted, without abolishing the transport. Notably, we demonstrate that previously established physical and structural properties of the NPC can account for observed features of nucleocytoplasmic transport. Finally, our results suggest strategies for creation of artificial nano-molecular sorting devices. |
2404.18775 | Sarah Marzen | Sarah Marzen | Resource-rational reinforcement learning and sensorimotor causal states,
and resource-rational maximiners | 12 pages | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | We propose a new computational-level objective function for theoretical
biology and theoretical neuroscience that combines: reinforcement learning, the
study of learning with feedback via rewards; rate-distortion theory, a branch
of information theory that deals with compressing signals to retain relevant
information; and computational mechanics, the study of minimal sufficient
statistics of prediction also known as causal states. We highlight why this
proposal is likely only an approximation, but is likely to be an interesting
one, and propose a new algorithm for evaluating it to obtain the newly-coined
``reward-rate manifold''. The performance of real and artificial agents in
partially observable environments can be newly benchmarked using these
reward-rate manifolds. Finally, we describe experiments that can probe whether
or not biological organisms are resource-rational reinforcement learners, using
as an example maximin strategies, as bacteria have been shown to be approximate
maximiners -- doing their best in the worst-case environment, regardless of
what is actually happening.
| [
{
"created": "Mon, 29 Apr 2024 15:10:02 GMT",
"version": "v1"
},
{
"created": "Thu, 27 Jun 2024 20:56:28 GMT",
"version": "v2"
}
] | 2024-07-01 | [
[
"Marzen",
"Sarah",
""
]
] | We propose a new computational-level objective function for theoretical biology and theoretical neuroscience that combines: reinforcement learning, the study of learning with feedback via rewards; rate-distortion theory, a branch of information theory that deals with compressing signals to retain relevant information; and computational mechanics, the study of minimal sufficient statistics of prediction also known as causal states. We highlight why this proposal is likely only an approximation, but is likely to be an interesting one, and propose a new algorithm for evaluating it to obtain the newly-coined ``reward-rate manifold''. The performance of real and artificial agents in partially observable environments can be newly benchmarked using these reward-rate manifolds. Finally, we describe experiments that can probe whether or not biological organisms are resource-rational reinforcement learners, using as an example maximin strategies, as bacteria have been shown to be approximate maximiners -- doing their best in the worst-case environment, regardless of what is actually happening. |
2210.02881 | Lin Li | Lin Li, Esther Gupta, John Spaeth, Leslie Shing, Tristan Bepler,
Rajmonda Sulo Caceres | Antibody Representation Learning for Drug Discovery | null | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Therapeutic antibody development has become an increasingly popular approach
for drug development. To date, antibody therapeutics are largely developed
using large scale experimental screens of antibody libraries containing
hundreds of millions of antibody sequences. The high cost and difficulty of
developing therapeutic antibodies create a pressing need for computational
methods to predict antibody properties and create bespoke designs. However, the
relationship between antibody sequence and activity is a complex physical
process and traditional iterative design approaches rely on large scale assays
and random mutagenesis. Deep learning methods have emerged as a promising way
to learn antibody property predictors, but predicting antibody properties and
target-specific activities depends critically on the choice of antibody
representations and data linking sequences to properties is often limited.
Existing works have not yet investigated the value, limitations and
opportunities of these methods in application to antibody-based drug discovery.
In this paper, we present results on a novel SARS-CoV-2 antibody binding
dataset and an additional benchmark dataset. We compare three classes of
models: conventional statistical sequence models, supervised learning on each
dataset independently, and fine-tuning an antibody specific pre-trained
language model. Experimental results suggest that self-supervised pretraining
of feature representation consistently offers significant improvement over
previous approaches. We also investigate the impact of data size on the model
performance, and discuss challenges and opportunities that the machine learning
community can address to advance in silico engineering and design of
therapeutic antibodies.
| [
{
"created": "Wed, 5 Oct 2022 13:48:41 GMT",
"version": "v1"
}
] | 2022-10-07 | [
[
"Li",
"Lin",
""
],
[
"Gupta",
"Esther",
""
],
[
"Spaeth",
"John",
""
],
[
"Shing",
"Leslie",
""
],
[
"Bepler",
"Tristan",
""
],
[
"Caceres",
"Rajmonda Sulo",
""
]
] | Therapeutic antibody development has become an increasingly popular approach for drug development. To date, antibody therapeutics are largely developed using large scale experimental screens of antibody libraries containing hundreds of millions of antibody sequences. The high cost and difficulty of developing therapeutic antibodies create a pressing need for computational methods to predict antibody properties and create bespoke designs. However, the relationship between antibody sequence and activity is a complex physical process and traditional iterative design approaches rely on large scale assays and random mutagenesis. Deep learning methods have emerged as a promising way to learn antibody property predictors, but predicting antibody properties and target-specific activities depends critically on the choice of antibody representations and data linking sequences to properties is often limited. Existing works have not yet investigated the value, limitations and opportunities of these methods in application to antibody-based drug discovery. In this paper, we present results on a novel SARS-CoV-2 antibody binding dataset and an additional benchmark dataset. We compare three classes of models: conventional statistical sequence models, supervised learning on each dataset independently, and fine-tuning an antibody specific pre-trained language model. Experimental results suggest that self-supervised pretraining of feature representation consistently offers significant improvement over previous approaches. We also investigate the impact of data size on the model performance, and discuss challenges and opportunities that the machine learning community can address to advance in silico engineering and design of therapeutic antibodies.
0704.1169 | Dirson Jian Li | Dirson Jian Li, Shengli Zhang | Holographic bound and protein linguistics | 4 pages, 4 figures. A trial application of holographic bound in life
science | null | null | null | q-bio.GN hep-th q-bio.QM | null | The holographic bound in physics constrains the complexity of life. The
finite storage capability of information in the observable universe requires
the protein linguistics in the evolution of life. We find that the evolution of
genetic code determines the variance of amino acid frequencies and genomic GC
content among species. The elegant linguistic mechanism is confirmed by the
experimental observations based on all known entire proteomes.
| [
{
"created": "Tue, 10 Apr 2007 12:04:28 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Li",
"Dirson Jian",
""
],
[
"Zhang",
"Shengli",
""
]
] | The holographic bound in physics constrains the complexity of life. The finite storage capability of information in the observable universe requires the protein linguistics in the evolution of life. We find that the evolution of genetic code determines the variance of amino acid frequencies and genomic GC content among species. The elegant linguistic mechanism is confirmed by the experimental observations based on all known entire proteomes. |
0708.1341 | David Hsu | David Hsu, Murielle Hsu, He Huang and Erwin B. Montgomery, Jr | An algorithm for detecting oscillatory behavior in discretized data: the
damped-oscillator oscillator detector | 20 pages, 6 figures | null | null | null | q-bio.QM q-bio.NC | null | We present a simple algorithm for detecting oscillatory behavior in discrete
data. The data is used as an input driving force acting on a set of simulated
damped oscillators. By monitoring the energy of the simulated oscillators, we
can detect oscillatory behavior in data. In application to in vivo deep brain
basal ganglia recordings, we found sharp peaks in the spectrum at 20 and 70 Hz.
The algorithm is also compared to the conventional fast Fourier transform and
circular statistics techniques using computer generated model data, and is
found to be comparable to or better than fast Fourier transform in test cases.
Circular statistics performed poorly in our tests.
| [
{
"created": "Thu, 9 Aug 2007 21:14:11 GMT",
"version": "v1"
}
] | 2012-08-27 | [
[
"Hsu",
"David",
""
],
[
"Hsu",
"Murielle",
""
],
[
"Huang",
"He",
""
],
[
"Montgomery,",
"Erwin B.",
"Jr"
]
] | We present a simple algorithm for detecting oscillatory behavior in discrete data. The data is used as an input driving force acting on a set of simulated damped oscillators. By monitoring the energy of the simulated oscillators, we can detect oscillatory behavior in data. In application to in vivo deep brain basal ganglia recordings, we found sharp peaks in the spectrum at 20 and 70 Hz. The algorithm is also compared to the conventional fast Fourier transform and circular statistics techniques using computer generated model data, and is found to be comparable to or better than fast Fourier transform in test cases. Circular statistics performed poorly in our tests. |
q-bio/0404033 | David R. Bickel | David R. Bickel | Probabilities of spurious connections in gene networks: Application to
expression time series | Like q-bio.GN/0404032, this was rejected in March 2004 because it was
submitted to the math archive. The only modification is a corrected reference
to q-bio.GN/0404032, which was not modified at all | Bioinformatics 21, 1121-1128 (2005) | 10.1093/bioinformatics/bti140 | null | q-bio.GN q-bio.CB | null | Motivation: The reconstruction of gene networks from gene expression
microarrays is gaining popularity as methods improve and as more data become
available. The reliability of such networks could be judged by the probability
that a connection between genes is spurious, resulting from chance fluctuations
rather than from a true biological relationship. Results: Unlike the false
discovery rate and positive false discovery rate, the decisive false discovery
rate (dFDR) is exactly equal to a conditional probability without assuming
independence or the randomness of hypothesis truth values. This property is
useful not only in the common application to the detection of differential gene
expression, but also in determining the probability of a spurious connection in
a reconstructed gene network. Estimators of the dFDR can estimate each of three
probabilities: 1. The probability that two genes that appear to be associated
with each other lack such association. 2. The probability that a time ordering
observed for two associated genes is misleading. 3. The probability that a time
ordering observed for two genes is misleading, either because they are not
associated or because they are associated without a lag in time. The first
probability applies to both static and dynamic gene networks, and the other two
only apply to dynamic gene networks. Availability: Cross-platform software for
network reconstruction, probability estimation, and plotting is free from
http://www.davidbickel.com as R functions and a Java application.
| [
{
"created": "Fri, 23 Apr 2004 17:29:46 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Bickel",
"David R.",
""
]
] | Motivation: The reconstruction of gene networks from gene expression microarrays is gaining popularity as methods improve and as more data become available. The reliability of such networks could be judged by the probability that a connection between genes is spurious, resulting from chance fluctuations rather than from a true biological relationship. Results: Unlike the false discovery rate and positive false discovery rate, the decisive false discovery rate (dFDR) is exactly equal to a conditional probability without assuming independence or the randomness of hypothesis truth values. This property is useful not only in the common application to the detection of differential gene expression, but also in determining the probability of a spurious connection in a reconstructed gene network. Estimators of the dFDR can estimate each of three probabilities: 1. The probability that two genes that appear to be associated with each other lack such association. 2. The probability that a time ordering observed for two associated genes is misleading. 3. The probability that a time ordering observed for two genes is misleading, either because they are not associated or because they are associated without a lag in time. The first probability applies to both static and dynamic gene networks, and the other two only apply to dynamic gene networks. Availability: Cross-platform software for network reconstruction, probability estimation, and plotting is free from http://www.davidbickel.com as R functions and a Java application. |
2007.05401 | Sally Sisi Qu | Sisi Qu, Mengmeng Xu, Bernard Ghanem, Jesper Tegner | Learning Heat Diffusion for Network Alignment | 4 Pages, 2 figures | Presented at the ICML 2020 Workshop on Computational Biology (WCB) | null | null | q-bio.QM physics.soc-ph q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Networks are abundant in the life sciences. Outstanding challenges include
how to characterize similarities between networks, and in extension how to
integrate information across networks. Yet, network alignment remains a core
algorithmic problem. Here, we present a novel learning algorithm called
evolutionary heat diffusion-based network alignment (EDNA) to address this
challenge. EDNA uses the diffusion signal as a proxy for computing node
similarities between networks. Comparing EDNA with state-of-the-art algorithms
on a popular protein-protein interaction network dataset, using four different
evaluation metrics, we achieve (i) the most accurate alignments, (ii) increased
robustness against noise, and (iii) superior scaling capacity. The EDNA
algorithm is versatile in that other available network alignments/embeddings
can be used as an initial baseline alignment, and then EDNA works as a wrapper
around them by running the evolutionary diffusion on top of them. In
conclusion, EDNA outperforms state-of-the-art methods for network alignment,
thus setting the stage for large-scale comparison and integration of networks.
| [
{
"created": "Fri, 10 Jul 2020 14:06:53 GMT",
"version": "v1"
}
] | 2020-07-13 | [
[
"Qu",
"Sisi",
""
],
[
"Xu",
"Mengmeng",
""
],
[
"Ghanem",
"Bernard",
""
],
[
"Tegner",
"Jesper",
""
]
] | Networks are abundant in the life sciences. Outstanding challenges include how to characterize similarities between networks, and in extension how to integrate information across networks. Yet, network alignment remains a core algorithmic problem. Here, we present a novel learning algorithm called evolutionary heat diffusion-based network alignment (EDNA) to address this challenge. EDNA uses the diffusion signal as a proxy for computing node similarities between networks. Comparing EDNA with state-of-the-art algorithms on a popular protein-protein interaction network dataset, using four different evaluation metrics, we achieve (i) the most accurate alignments, (ii) increased robustness against noise, and (iii) superior scaling capacity. The EDNA algorithm is versatile in that other available network alignments/embeddings can be used as an initial baseline alignment, and then EDNA works as a wrapper around them by running the evolutionary diffusion on top of them. In conclusion, EDNA outperforms state-of-the-art methods for network alignment, thus setting the stage for large-scale comparison and integration of networks. |
2404.04726 | Hui Xue PhD | Azaan Rehman, Alexander Zhovmer, Ryo Sato, Yosuke Mukoyama, Jiji Chen,
Alberto Rissone, Rosa Puertollano, Harshad Vishwasrao, Hari Shroff, Christian
A. Combs, Hui Xue | Convolutional Neural Network Transformer (CNNT) for Fluorescence
Microscopy image Denoising with Improved Generalization and Fast Adaptation | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Deep neural networks have been applied to improve the image quality of
fluorescence microscopy imaging. Previous methods are based on convolutional
neural networks (CNNs) which generally require more time-consuming training of
separate models for each new imaging experiment, impairing the applicability
and generalization. Once the model is trained (typically with tens to hundreds
of image pairs) it can then be used to enhance new images that are like the
training data. In this study, we proposed a novel imaging-transformer based
model, Convolutional Neural Network Transformer (CNNT), to outperform the CNN
networks for image denoising. In our scheme we have trained a single CNNT based
backbone model from pairwise high-low SNR images for one type of fluorescence
microscope (instant structured illumination, iSim). Fast adaptation to new
applications was achieved by fine-tuning the backbone on only 5-10 sample pairs
per new experiment. Results show the CNNT backbone and fine-tuning scheme
significantly reduces the training time and improves the image quality,
outperforming training separate models using CNN approaches such as RCAN and
Noise2Fast. Here we show three examples of the efficacy of this approach on
denoising wide-field, two-photon and confocal fluorescence data. In the
confocal experiment, which is a 5 by 5 tiled acquisition, the fine-tuned CNNT
model reduces the scan time from one hour to eight minutes, with improved
quality.
| [
{
"created": "Sat, 6 Apr 2024 20:34:14 GMT",
"version": "v1"
}
] | 2024-04-09 | [
[
"Rehman",
"Azaan",
""
],
[
"Zhovmer",
"Alexander",
""
],
[
"Sato",
"Ryo",
""
],
[
"Mukoyama",
"Yosuke",
""
],
[
"Chen",
"Jiji",
""
],
[
"Rissone",
"Alberto",
""
],
[
"Puertollano",
"Rosa",
""
],
[
"... | Deep neural networks have been applied to improve the image quality of fluorescence microscopy imaging. Previous methods are based on convolutional neural networks (CNNs) which generally require more time-consuming training of separate models for each new imaging experiment, impairing the applicability and generalization. Once the model is trained (typically with tens to hundreds of image pairs) it can then be used to enhance new images that are like the training data. In this study, we proposed a novel imaging-transformer based model, Convolutional Neural Network Transformer (CNNT), to outperform the CNN networks for image denoising. In our scheme we have trained a single CNNT based backbone model from pairwise high-low SNR images for one type of fluorescence microscope (instant structured illumination, iSim). Fast adaptation to new applications was achieved by fine-tuning the backbone on only 5-10 sample pairs per new experiment. Results show the CNNT backbone and fine-tuning scheme significantly reduces the training time and improves the image quality, outperforming training separate models using CNN approaches such as RCAN and Noise2Fast. Here we show three examples of the efficacy of this approach on denoising wide-field, two-photon and confocal fluorescence data. In the confocal experiment, which is a 5 by 5 tiled acquisition, the fine-tuned CNNT model reduces the scan time from one hour to eight minutes, with improved quality.
2109.06230 | Thomas Kirby | Thomas G. Kirby, Julie C. Blackwood (Williams College) | Potential of the Sterile Insect Technique for Control of Deer Ticks,
$\textit{Ixodes scapularis}$ | 32 pages, 5 figures | null | 10.36934/TR2021_251 | null | q-bio.PE math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The deer tick, $\textit{Ixodes scapularis}$, is a vector for numerous human
diseases, including Lyme disease, anaplasmosis, and babesiosis. Concern is
rising in the US and abroad as the population and range of this species grow
and new diseases emerge. Herein I consider the potential for control of
$\textit{I. scapularis}$ using the Sterile Insect Technique (SIT), which acts
by reducing net fertility through release of sterile males. I construct a
population model with density-dependent and -independent growth, migration, and
an Allee effect (decline of the population when it is small), and use this
model to simulate sterile tick release in both single- and multi-patch
frameworks. I test two key concerns with implementing $\textit{I. scapularis}$
SIT: that the ticks' lengthy life course could make control take too long and
that low migration might mean sterile males need thorough manual dispersal to
all parts of the control area. Results suggest that typical $\textit{I.
scapularis}$ SIT programs will take about eight years, a prediction near the
normal range for the technique, but that thorough distribution of sterile ticks
over the control area is indeed critical, increasing expense substantially by
necessitating aerial release. With particularly high rearing costs also
expected for $\textit{I. scapularis}$, the latter finding suggests that
cost-effectiveness improvements to aerial release may be a prerequisite to
$\textit{I. scapularis}$ SIT.
| [
{
"created": "Mon, 13 Sep 2021 18:04:39 GMT",
"version": "v1"
}
] | 2021-12-21 | [
[
"Kirby",
"Thomas G.",
"",
"Williams College"
],
[
"Blackwood",
"Julie C.",
"",
"Williams College"
]
] | The deer tick, $\textit{Ixodes scapularis}$, is a vector for numerous human diseases, including Lyme disease, anaplasmosis, and babesiosis. Concern is rising in the US and abroad as the population and range of this species grow and new diseases emerge. Herein I consider the potential for control of $\textit{I. scapularis}$ using the Sterile Insect Technique (SIT), which acts by reducing net fertility through release of sterile males. I construct a population model with density-dependent and -independent growth, migration, and an Allee effect (decline of the population when it is small), and use this model to simulate sterile tick release in both single- and multi-patch frameworks. I test two key concerns with implementing $\textit{I. scapularis}$ SIT: that the ticks' lengthy life course could make control take too long and that low migration might mean sterile males need thorough manual dispersal to all parts of the control area. Results suggest that typical $\textit{I. scapularis}$ SIT programs will take about eight years, a prediction near the normal range for the technique, but that thorough distribution of sterile ticks over the control area is indeed critical, increasing expense substantially by necessitating aerial release. With particularly high rearing costs also expected for $\textit{I. scapularis}$, the latter finding suggests that cost-effectiveness improvements to aerial release may be a prerequisite to $\textit{I. scapularis}$ SIT. |
1305.4366 | Xiang Zhou | Xiang Zhou and Matthew Stephens | Efficient Algorithms for Multivariate Linear Mixed Models in Genome-wide
Association Studies | null | null | null | null | q-bio.QM stat.AP stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multivariate linear mixed models (mvLMMs) have been widely used in many areas
of genetics, and have attracted considerable recent interest in genome-wide
association studies (GWASs). However, fitting mvLMMs is computationally
non-trivial, and no existing method is computationally practical for performing
the likelihood ratio test (LRT) for mvLMMs in GWAS settings with moderate
sample size n. The existing software MTMM perform an approximate LRT for two
phenotypes, and as we find, its p values can substantially understate the
significance of associations. Here, we present novel computationally-efficient
algorithms for fitting mvLMMs, and computing the LRT in GWAS settings. After a
single initial eigen-decomposition (with complexity O(n^3)) the algorithms i)
reduce computational complexity (per iteration of the optimizer) from cubic to
linear in n; and ii) in GWAS analyses, reduce per-marker complexity from cubic
to quadratic in n. These innovations make it practical to compute the LRT for
mvLMMs in GWASs for tens of thousands of samples and a moderate number of
phenotypes (~2-10). With simulations, we show that the LRT provides correct
control for type I error. With both simulations and real data we find that the
LRT is more powerful than the approximate LRT from MTMM, and illustrate the
benefits of analyzing more than two phenotypes. The method is implemented in
the GEMMA software package, freely available at
http://stephenslab.uchicago.edu/software.html
| [
{
"created": "Sun, 19 May 2013 14:53:48 GMT",
"version": "v1"
},
{
"created": "Wed, 11 Sep 2013 21:10:44 GMT",
"version": "v2"
}
] | 2013-09-13 | [
[
"Zhou",
"Xiang",
""
],
[
"Stephens",
"Matthew",
""
]
] | Multivariate linear mixed models (mvLMMs) have been widely used in many areas of genetics, and have attracted considerable recent interest in genome-wide association studies (GWASs). However, fitting mvLMMs is computationally non-trivial, and no existing method is computationally practical for performing the likelihood ratio test (LRT) for mvLMMs in GWAS settings with moderate sample size n. The existing software MTMM performs an approximate LRT for two phenotypes, and as we find, its p values can substantially understate the significance of associations. Here, we present novel computationally-efficient algorithms for fitting mvLMMs, and computing the LRT in GWAS settings. After a single initial eigen-decomposition (with complexity O(n^3)) the algorithms i) reduce computational complexity (per iteration of the optimizer) from cubic to linear in n; and ii) in GWAS analyses, reduce per-marker complexity from cubic to quadratic in n. These innovations make it practical to compute the LRT for mvLMMs in GWASs for tens of thousands of samples and a moderate number of phenotypes (~2-10). With simulations, we show that the LRT provides correct control for type I error. With both simulations and real data we find that the LRT is more powerful than the approximate LRT from MTMM, and illustrate the benefits of analyzing more than two phenotypes. The method is implemented in the GEMMA software package, freely available at http://stephenslab.uchicago.edu/software.html |
2307.07528 | C\'edric Walker | Cedric Walker, Tasneem Talawalla, Robert Toth, Akhil Ambekar, Kien
Rea, Oswin Chamian, Fan Fan, Sabina Berezowska, Sven Rottenberg, Anant
Madabhushi, Marie Maillard, Laura Barisoni, Hugo Mark Horlings, Andrew
Janowczyk | PatchSorter: A High Throughput Deep Learning Digital Pathology Tool for
Object Labeling | The submission includes 15 pages, 8 figures, 1 table, and 30
references. It is a new submission | null | null | null | q-bio.QM cs.AI cs.CV cs.HC eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The discovery of patterns associated with diagnosis, prognosis, and therapy
response in digital pathology images often requires intractable labeling of
large quantities of histological objects. Here we release an open-source
labeling tool, PatchSorter, which integrates deep learning with an intuitive
web interface. Using >100,000 objects, we demonstrate a >7x improvement in
labels per second over unaided labeling, with minimal impact on labeling
accuracy, thus enabling high-throughput labeling of large datasets.
| [
{
"created": "Thu, 13 Jul 2023 09:32:42 GMT",
"version": "v1"
}
] | 2023-07-18 | [
[
"Walker",
"Cedric",
""
],
[
"Talawalla",
"Tasneem",
""
],
[
"Toth",
"Robert",
""
],
[
"Ambekar",
"Akhil",
""
],
[
"Rea",
"Kien",
""
],
[
"Chamian",
"Oswin",
""
],
[
"Fan",
"Fan",
""
],
[
"Berezowska... | The discovery of patterns associated with diagnosis, prognosis, and therapy response in digital pathology images often requires intractable labeling of large quantities of histological objects. Here we release an open-source labeling tool, PatchSorter, which integrates deep learning with an intuitive web interface. Using >100,000 objects, we demonstrate a >7x improvement in labels per second over unaided labeling, with minimal impact on labeling accuracy, thus enabling high-throughput labeling of large datasets. |
2407.04680 | Tommaso Tosato | Tommaso Tosato, Pascal Jr Tikeng Notsawo, Saskia Helbling, Irina Rish,
Guillaume Dumas | Lost in Translation: The Algorithmic Gap Between LMs and the Brain | null | null | null | null | q-bio.NC cs.AI cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Language Models (LMs) have achieved impressive performance on various
linguistic tasks, but their relationship to human language processing in the
brain remains unclear. This paper examines the gaps and overlaps between LMs
and the brain at different levels of analysis, emphasizing the importance of
looking beyond input-output behavior to examine and compare the internal
processes of these systems. We discuss how insights from neuroscience, such as
sparsity, modularity, internal states, and interactive learning, can inform the
development of more biologically plausible language models. Furthermore, we
explore the role of scaling laws in bridging the gap between LMs and human
cognition, highlighting the need for efficiency constraints analogous to those
in biological systems. By developing LMs that more closely mimic brain
function, we aim to advance both artificial intelligence and our understanding
of human cognition.
| [
{
"created": "Fri, 5 Jul 2024 17:43:16 GMT",
"version": "v1"
}
] | 2024-07-08 | [
[
"Tosato",
"Tommaso",
""
],
[
"Notsawo",
"Pascal Jr Tikeng",
""
],
[
"Helbling",
"Saskia",
""
],
[
"Rish",
"Irina",
""
],
[
"Dumas",
"Guillaume",
""
]
] | Language Models (LMs) have achieved impressive performance on various linguistic tasks, but their relationship to human language processing in the brain remains unclear. This paper examines the gaps and overlaps between LMs and the brain at different levels of analysis, emphasizing the importance of looking beyond input-output behavior to examine and compare the internal processes of these systems. We discuss how insights from neuroscience, such as sparsity, modularity, internal states, and interactive learning, can inform the development of more biologically plausible language models. Furthermore, we explore the role of scaling laws in bridging the gap between LMs and human cognition, highlighting the need for efficiency constraints analogous to those in biological systems. By developing LMs that more closely mimic brain function, we aim to advance both artificial intelligence and our understanding of human cognition. |
1801.05116 | Roman Sandler | Roman A. Sandler, Kunling Geng, Dong Song, Robert E. Hampson, Mark R.
Witcher, Sam A. Deadwyler, Theodore W. Berger, Vasilis Z. Marmarelis | Designing Patient-Specific Optimal Neurostimulation Patterns for Seizure
Suppression | null | Neural.Computation 30.5 (2018) 1180-1208 | 10.1162/neco_a_01075 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neurostimulation is a promising therapy for abating epileptic seizures.
However, it is extremely difficult to identify optimal stimulation patterns
experimentally. In this study human recordings are used to develop a functional
24 neuron network statistical model of hippocampal connectivity and dynamics.
Spontaneous seizure-like activity is induced in-silico in this reconstructed
neuronal network. The network is then used as a testbed to design and validate
a wide range of neurostimulation patterns. Commonly used periodic trains were
not able to permanently abate seizures at any frequency. A simulated annealing
global optimization algorithm was then used to identify an optimal stimulation
pattern which successfully abated 92% of seizures. Finally, in a fully
responsive, or "closed-loop" neurostimulation paradigm, the optimal stimulation
successfully prevented the network from entering the seizure state. We propose
that the framework presented here for algorithmically identifying
patient-specific neurostimulation patterns can greatly increase the efficacy of
neurostimulation devices for seizures.
| [
{
"created": "Tue, 16 Jan 2018 04:59:25 GMT",
"version": "v1"
}
] | 2018-06-07 | [
[
"Sandler",
"Roman A.",
""
],
[
"Geng",
"Kunling",
""
],
[
"Song",
"Dong",
""
],
[
"Hampson",
"Robert E.",
""
],
[
"Witcher",
"Mark R.",
""
],
[
"Deadwyler",
"Sam A.",
""
],
[
"Berger",
"Theodore W.",
""
]... | Neurostimulation is a promising therapy for abating epileptic seizures. However, it is extremely difficult to identify optimal stimulation patterns experimentally. In this study human recordings are used to develop a functional 24 neuron network statistical model of hippocampal connectivity and dynamics. Spontaneous seizure-like activity is induced in-silico in this reconstructed neuronal network. The network is then used as a testbed to design and validate a wide range of neurostimulation patterns. Commonly used periodic trains were not able to permanently abate seizures at any frequency. A simulated annealing global optimization algorithm was then used to identify an optimal stimulation pattern which successfully abated 92% of seizures. Finally, in a fully responsive, or "closed-loop" neurostimulation paradigm, the optimal stimulation successfully prevented the network from entering the seizure state. We propose that the framework presented here for algorithmically identifying patient-specific neurostimulation patterns can greatly increase the efficacy of neurostimulation devices for seizures. |
1303.6667 | Mateusz Sikora | Mateusz Sikora, Marek Cieplak | Formation of Cystine Slipknots in Dimeric Proteins | null | PLoS ONE 8(3): e57443 2013 | 10.1371/journal.pone.0057443 | null | q-bio.BM physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider mechanical stability of dimeric and monomeric proteins with the
cystine knot motif. A structure based dynamical model is used to demonstrate
that all dimeric and some monomeric proteins of this kind should have
considerable resistance to stretching that is significantly larger than that of
titin. The mechanisms of the large mechanostability are elucidated. In most
cases, it originates from the induced formation of one or two cystine
slipknots. Since there are four termini in a dimer, there are several ways of
selecting two of them to pull by. We show that in the cystine knot systems,
there is strong anisotropy in mechanostability and force patterns related to
the selection. We show that the thermodynamic stability of the dimers is
enhanced compared to the constituting monomers whereas mechanostability is
either lower or higher.
| [
{
"created": "Tue, 26 Mar 2013 21:26:11 GMT",
"version": "v1"
}
] | 2013-03-28 | [
[
"Sikora",
"Mateusz",
""
],
[
"Cieplak",
"Marek",
""
]
] | We consider mechanical stability of dimeric and monomeric proteins with the cystine knot motif. A structure based dynamical model is used to demonstrate that all dimeric and some monomeric proteins of this kind should have considerable resistance to stretching that is significantly larger than that of titin. The mechanisms of the large mechanostability are elucidated. In most cases, it originates from the induced formation of one or two cystine slipknots. Since there are four termini in a dimer, there are several ways of selecting two of them to pull by. We show that in the cystine knot systems, there is strong anisotropy in mechanostability and force patterns related to the selection. We show that the thermodynamic stability of the dimers is enhanced compared to the constituting monomers whereas mechanostability is either lower or higher. |
q-bio/0609038 | Eric Vigoda | Daniel Stefankovic and Eric Vigoda | Phylogeny of Mixture Models: Robustness of Maximum Likelihood and
Non-identifiable Distributions | 3 figures | null | null | null | q-bio.PE | null | We address phylogenetic reconstruction when the data is generated from a
mixture distribution. Such topics have gained considerable attention in the
biological community with the clear evidence of heterogeneity of mutation
rates. In our work, we consider data coming from a mixture of trees which share
a common topology, but differ in their edge weights (i.e., branch lengths). We
first show the pitfalls of popular methods, including maximum likelihood and
Markov chain Monte Carlo algorithms. We then determine in which evolutionary
models, reconstructing the tree topology, under a mixture distribution, is
(im)possible. We prove that every model whose transition matrices can be
parameterized by an open set of multi-linear polynomials, either has
non-identifiable mixture distributions, in which case reconstruction is
impossible in general, or there exist linear tests which identify the topology.
This duality theorem relies on our notion of linear tests and uses ideas from
convex programming duality. Linear tests are closely related to linear
invariants, which were first introduced by Lake, and are natural from an
algebraic geometry perspective.
| [
{
"created": "Mon, 25 Sep 2006 17:11:38 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Stefankovic",
"Daniel",
""
],
[
"Vigoda",
"Eric",
""
]
] | We address phylogenetic reconstruction when the data is generated from a mixture distribution. Such topics have gained considerable attention in the biological community with the clear evidence of heterogeneity of mutation rates. In our work, we consider data coming from a mixture of trees which share a common topology, but differ in their edge weights (i.e., branch lengths). We first show the pitfalls of popular methods, including maximum likelihood and Markov chain Monte Carlo algorithms. We then determine in which evolutionary models, reconstructing the tree topology, under a mixture distribution, is (im)possible. We prove that every model whose transition matrices can be parameterized by an open set of multi-linear polynomials, either has non-identifiable mixture distributions, in which case reconstruction is impossible in general, or there exist linear tests which identify the topology. This duality theorem relies on our notion of linear tests and uses ideas from convex programming duality. Linear tests are closely related to linear invariants, which were first introduced by Lake, and are natural from an algebraic geometry perspective. |
1505.08009 | Tom Nye | S. Cherlin, T. M. W. Nye, S. E. Heaps, R. J. Boys, T. A. Williams, T.
M. Embley | The effect of non-reversibility on inferring rooted phylogenies | 8 figures, 6 tables | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Most phylogenetic models assume that the evolutionary process is stationary
and reversible. As a result, the root of the tree cannot be inferred as part of
the analysis because the likelihood of the data does not depend on the position
of the root. Yet defining the root of a phylogenetic tree is a key component of
phylogenetic inference because it provides a point of reference for polarising
ancestor/descendant relationships and therefore interpreting the tree. In this
paper we investigate the effect of relaxing the reversibility assumption and
allowing the position of the root to be another unknown in the model. We
propose two hierarchical models that are centred on a reversible model but
perturbed to allow non-reversibility. The models differ in the degree of
structure imposed on the perturbations. The analysis is performed in the
Bayesian framework using Markov chain Monte Carlo methods. We illustrate the
performance of the two non-reversible models in analyses of simulated data sets
using two types of topological priors. We then apply the models to a real
biological data set, the radiation of polyploid yeasts, for which there is a
robust biological opinion about the root position. Finally we apply the models
to a second biological data set for which the rooted tree is controversial: the
ribosomal tree of life. We compare the two non-reversible models and conclude
that both are useful in inferring the position of the root from real biological
data sets.
| [
{
"created": "Fri, 29 May 2015 12:11:04 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Feb 2017 11:56:41 GMT",
"version": "v2"
}
] | 2017-02-21 | [
[
"Cherlin",
"S.",
""
],
[
"Nye",
"T. M. W.",
""
],
[
"Heaps",
"S. E.",
""
],
[
"Boys",
"R. J.",
""
],
[
"Williams",
"T. A.",
""
],
[
"Embley",
"T. M.",
""
]
] | Most phylogenetic models assume that the evolutionary process is stationary and reversible. As a result, the root of the tree cannot be inferred as part of the analysis because the likelihood of the data does not depend on the position of the root. Yet defining the root of a phylogenetic tree is a key component of phylogenetic inference because it provides a point of reference for polarising ancestor/descendant relationships and therefore interpreting the tree. In this paper we investigate the effect of relaxing the reversibility assumption and allowing the position of the root to be another unknown in the model. We propose two hierarchical models that are centred on a reversible model but perturbed to allow non-reversibility. The models differ in the degree of structure imposed on the perturbations. The analysis is performed in the Bayesian framework using Markov chain Monte Carlo methods. We illustrate the performance of the two non-reversible models in analyses of simulated data sets using two types of topological priors. We then apply the models to a real biological data set, the radiation of polyploid yeasts, for which there is a robust biological opinion about the root position. Finally we apply the models to a second biological data set for which the rooted tree is controversial: the ribosomal tree of life. We compare the two non-reversible models and conclude that both are useful in inferring the position of the root from real biological data sets. |
1209.4722 | Hiroyoshi Miyakawa | Hiroyoshi Miyakawa (1) and Toru Aonishi (2) ((1) Laboratory of
Cellular Neurobiology, School of Life Sciences, Tokyo University of Pharmacy
and Life Sciences, Tokyo, Japan, (2) Department of Computational Intelligence
and Systems Science, Tokyo Institute of Technology, Yokohama, Japan) | Apparent extracellular current density and extracellular space: basis
for the current source density analysis in neural tissue | 21 pages, 5 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article provides a theoretical basis for relating macroscopic electrical
signals recorded from biological tissue, such as electroencephalogram (EEG) and
local field potential (LFP), to the electrophysiological processes at the
cellular level in a manner consistent with Maxwell's equations. Concepts of the
apparent extracellular current density and the apparent extracellular space
with apparent permittivity and conductivity are introduced from the
conservation of current and Gauss's theorem. A general equation for the current
source density (CSD) analysis is derived for biological tissue with
frequency-dependent apparent permittivity and conductivity. An intuitive
account of the apparent extracellular space is given to relate the concept to
the dielectric dispersion of biological tissue.
| [
{
"created": "Fri, 21 Sep 2012 07:00:13 GMT",
"version": "v1"
}
] | 2012-09-24 | [
[
"Miyakawa",
"Hiroyoshi",
""
],
[
"Aonishi",
"Toru",
""
]
] | This article provides a theoretical basis for relating macroscopic electrical signals recorded from biological tissue, such as electroencephalogram (EEG) and local field potential (LFP), to the electrophysiological processes at the cellular level in a manner consistent with Maxwell's equations. Concepts of the apparent extracellular current density and the apparent extracellular space with apparent permittivity and conductivity are introduced from the conservation of current and Gauss's theorem. A general equation for the current source density (CSD) analysis is derived for biological tissue with frequency-dependent apparent permittivity and conductivity. An intuitive account of the apparent extracellular space is given to relate the concept to the dielectric dispersion of biological tissue. |
2207.06678 | Aaron Wang | Aaron Wang | Deep Learning Methods for Protein Family Classification on PDB
Sequencing Data | null | null | null | null | q-bio.QM cs.LG q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | Composed of amino acid chains that influence how they fold and thus dictating
their function and features, proteins are a class of macromolecules that play a
central role in major biological processes and are required for the structure,
function, and regulation of the body's tissues. Understanding protein functions
is vital to the development of therapeutics and precision medicine, and hence
the ability to classify proteins and their functions based on measurable
features is crucial; indeed, the automatic inference of a protein's properties
from its sequence of amino acids, known as its primary structure, remains an
important open problem within the field of bioinformatics, especially given the
recent advancements in sequencing technologies and the extensive number of
known but uncategorized proteins with unknown properties. In this work, we
demonstrate and compare the performance of several deep learning frameworks,
including novel bi-directional LSTM and convolutional models, on widely
available sequencing data from the Protein Data Bank (PDB) of the Research
Collaboratory for Structural Bioinformatics (RCSB), as well as benchmark this
performance against classical machine learning approaches, including k-nearest
neighbors and multinomial regression classifiers, trained on experimental data.
Our results show that our deep learning models deliver superior performance to
classical machine learning methods, with the convolutional architecture
providing the most impressive inference performance.
| [
{
"created": "Thu, 14 Jul 2022 06:11:32 GMT",
"version": "v1"
}
] | 2022-07-15 | [
[
"Wang",
"Aaron",
""
]
] | Composed of amino acid chains that influence how they fold and thus dictate their function and features, proteins are a class of macromolecules that play a central role in major biological processes and are required for the structure, function, and regulation of the body's tissues. Understanding protein functions is vital to the development of therapeutics and precision medicine, and hence the ability to classify proteins and their functions based on measurable features is crucial; indeed, the automatic inference of a protein's properties from its sequence of amino acids, known as its primary structure, remains an important open problem within the field of bioinformatics, especially given the recent advancements in sequencing technologies and the extensive number of known but uncategorized proteins with unknown properties. In this work, we demonstrate and compare the performance of several deep learning frameworks, including novel bi-directional LSTM and convolutional models, on widely available sequencing data from the Protein Data Bank (PDB) of the Research Collaboratory for Structural Bioinformatics (RCSB), as well as benchmark this performance against classical machine learning approaches, including k-nearest neighbors and multinomial regression classifiers, trained on experimental data. Our results show that our deep learning models deliver superior performance to classical machine learning methods, with the convolutional architecture providing the most impressive inference performance. |
1509.05219 | Pierre Casadebaig | Pierre Casadebaig, Bangyou Zheng, Scott Chapman, Neil Huth, Robert
Faivre, Karine Chenu | Assessment of the potential impacts of plant traits across environments
by combining global sensitivity analysis and dynamic modeling in wheat | 22 pages, 8 figures. This work has been submitted to PLoS One | PLoS ONE 11 (2016) e0146385 | 10.1371/journal.pone.0146385 | null | q-bio.QM q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A crop can be viewed as a complex system with outputs (e.g. yield) that are
affected by inputs of genetic, physiology, pedo-climatic and management
information. Application of numerical methods for model exploration assists in
evaluating the most influential inputs, provided the simulation model is
a credible description of the biological system. A sensitivity analysis was
used to assess the simulated impact on yield of a suite of traits involved in
major processes of crop growth and development, and to evaluate how the
simulated value of such traits varies across environments and in relation to
other traits (which can be interpreted as a virtual change in genetic
background). The study focused on wheat in Australia, with an emphasis on
adaptation to low rainfall conditions. A large set of traits (90) was evaluated
in a wide target population of environments (4 sites x 125 years), management
practices (3 sowing dates x 2 N fertilization) and $CO_2$ (2 levels). The
Morris sensitivity analysis method was used to sample the parameter space and
reduce computational requirements, while maintaining a realistic representation
of the targeted trait x environment x management landscape ($\sim$ 82 million
individual simulations in total). The patterns of parameter x environment x
management interactions were investigated for the most influential parameters,
considering a potential genetic range of +/- 20% compared to a reference. Main
(i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity
indices calculated for most of APSIM-Wheat parameters allowed the identification
of 42 parameters substantially impacting yield in most target environments.
Among these, a subset of parameters related to phenology, resource acquisition,
resource use efficiency and biomass allocation were identified as potential
candidates for crop (and model) improvement.
| [
{
"created": "Thu, 17 Sep 2015 11:55:25 GMT",
"version": "v1"
}
] | 2016-01-26 | [
[
"Casadebaig",
"Pierre",
""
],
[
"Zheng",
"Bangyou",
""
],
[
"Chapman",
"Scott",
""
],
[
"Huth",
"Neil",
""
],
[
"Faivre",
"Robert",
""
],
[
"Chenu",
"Karine",
""
]
] | A crop can be viewed as a complex system with outputs (e.g. yield) that are affected by inputs of genetic, physiology, pedo-climatic and management information. Application of numerical methods for model exploration assists in evaluating the most influential inputs, provided the simulation model is a credible description of the biological system. A sensitivity analysis was used to assess the simulated impact on yield of a suite of traits involved in major processes of crop growth and development, and to evaluate how the simulated value of such traits varies across environments and in relation to other traits (which can be interpreted as a virtual change in genetic background). The study focused on wheat in Australia, with an emphasis on adaptation to low rainfall conditions. A large set of traits (90) was evaluated in a wide target population of environments (4 sites x 125 years), management practices (3 sowing dates x 2 N fertilization) and $CO_2$ (2 levels). The Morris sensitivity analysis method was used to sample the parameter space and reduce computational requirements, while maintaining a realistic representation of the targeted trait x environment x management landscape ($\sim$ 82 million individual simulations in total). The patterns of parameter x environment x management interactions were investigated for the most influential parameters, considering a potential genetic range of +/- 20% compared to a reference. Main (i.e. linear) and interaction (i.e. non-linear and interaction) sensitivity indices calculated for most of APSIM-Wheat parameters allowed the identification of 42 parameters substantially impacting yield in most target environments. Among these, a subset of parameters related to phenology, resource acquisition, resource use efficiency and biomass allocation were identified as potential candidates for crop (and model) improvement. |
1411.2549 | Yong Kong | Yong Kong | Distributions of positive signals in pyrosequencing | 19 pages, 2 figures | Journal of mathematical biology 69 (1), 39-54, 2014 | 10.1007/s00285-013-0691-5 | null | q-bio.GN cs.DM math.CO stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pyrosequencing is one of the important next-generation sequencing
technologies. We derive the distribution of the number of positive signals in
pyrograms of this sequencing technology as a function of flow cycle numbers and
nucleotide probabilities of the target sequences. As for the distribution of
sequence length, we also derive the distribution of positive signals for the
fixed flow cycle model. Explicit formulas are derived for the mean and variance
of the distributions. A simple result for the mean of the distribution is that
the mean number of positive signals in a pyrogram is approximately twice the
number of flow cycles, regardless of nucleotide probabilities. The statistical
distributions will be useful for instrument and software development for
pyrosequencing and other related platforms.
| [
{
"created": "Thu, 6 Nov 2014 03:18:19 GMT",
"version": "v1"
}
] | 2024-05-28 | [
[
"Kong",
"Yong",
""
]
] | Pyrosequencing is one of the important next-generation sequencing technologies. We derive the distribution of the number of positive signals in pyrograms of this sequencing technology as a function of flow cycle numbers and nucleotide probabilities of the target sequences. As for the distribution of sequence length, we also derive the distribution of positive signals for the fixed flow cycle model. Explicit formulas are derived for the mean and variance of the distributions. A simple result for the mean of the distribution is that the mean number of positive signals in a pyrogram is approximately twice the number of flow cycles, regardless of nucleotide probabilities. The statistical distributions will be useful for instrument and software development for pyrosequencing and other related platforms. |
1910.08433 | Melissa McGuirl | Melissa R. McGuirl, Alexandria Volkening, Bj\"orn Sandstede | Topological data analysis of zebrafish patterns | null | null | 10.1073/pnas.1917763117 | null | q-bio.QM math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-organized pattern behavior is ubiquitous throughout nature, from fish
schooling to collective cell dynamics during organism development.
Qualitatively these patterns display impressive consistency, yet variability
inevitably exists within pattern-forming systems on both microscopic and
macroscopic scales. Quantifying variability and measuring pattern features can
inform the underlying agent interactions and allow for predictive analyses.
Nevertheless, current methods for analyzing patterns that arise from collective
behavior only capture macroscopic features, or rely on either manual inspection
or smoothing algorithms that lose the underlying agent-based nature of the
data. Here we introduce methods based on topological data analysis and
interpretable machine learning for quantifying both agent-level features and
global pattern attributes on a large scale. Because the zebrafish is a model
organism for skin pattern formation, we focus specifically on analyzing its
skin patterns as a means of illustrating our approach. Using a recent
agent-based model, we simulate thousands of wild-type and mutant zebrafish
patterns and apply our methodology to better understand pattern variability in
zebrafish. Our methodology is able to quantify the differential impact of
stochasticity in cell interactions on wild-type and mutant patterns, and we use
our methods to predict stripe and spot statistics as a function of varying
cellular communication. Our work provides a new approach to automatically
quantifying biological patterns and analyzing agent-based dynamics so that we
can now answer critical questions in pattern formation at a much larger scale.
| [
{
"created": "Fri, 18 Oct 2019 14:16:34 GMT",
"version": "v1"
}
] | 2022-06-08 | [
[
"McGuirl",
"Melissa R.",
""
],
[
"Volkening",
"Alexandria",
""
],
[
"Sandstede",
"Björn",
""
]
] | Self-organized pattern behavior is ubiquitous throughout nature, from fish schooling to collective cell dynamics during organism development. Qualitatively these patterns display impressive consistency, yet variability inevitably exists within pattern-forming systems on both microscopic and macroscopic scales. Quantifying variability and measuring pattern features can inform the underlying agent interactions and allow for predictive analyses. Nevertheless, current methods for analyzing patterns that arise from collective behavior only capture macroscopic features, or rely on either manual inspection or smoothing algorithms that lose the underlying agent-based nature of the data. Here we introduce methods based on topological data analysis and interpretable machine learning for quantifying both agent-level features and global pattern attributes on a large scale. Because the zebrafish is a model organism for skin pattern formation, we focus specifically on analyzing its skin patterns as a means of illustrating our approach. Using a recent agent-based model, we simulate thousands of wild-type and mutant zebrafish patterns and apply our methodology to better understand pattern variability in zebrafish. Our methodology is able to quantify the differential impact of stochasticity in cell interactions on wild-type and mutant patterns, and we use our methods to predict stripe and spot statistics as a function of varying cellular communication. Our work provides a new approach to automatically quantifying biological patterns and analyzing agent-based dynamics so that we can now answer critical questions in pattern formation at a much larger scale. |
0912.2241 | Stefan Grosskinsky | Adnan Ali and Stefan Grosskinsky | Pattern formation through genetic drift at expanding population fronts | 17 pages with 9 figures | Adv. Compl. Sys. 13(3), 349-366 (2010) | 10.1142/S0219525910002578 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the nature of genetic drift acting at the leading edge of
range expansions, building on recent results in [Hallatschek et al., Proc.\
Natl.\ Acad.\ Sci., \textbf{104}(50): 19926 - 19930 (2007)]. A well-mixed
population of two fluorescently labeled microbial species is grown in a
circular geometry. As the population expands, a coarsening process driven by
genetic drift gives rise to sectoring patterns with fractal boundaries, which
show a non-trivial asymptotic distribution. Using simplified lattice-based
Monte Carlo simulations as a generic caricature of the above experiment, we
present detailed numerical results to establish a model for sector boundaries
as time-changed Brownian motions. This is used to derive a general one-to-one
mapping of sector statistics between circular and linear geometries, which
leads to a full understanding of the sectoring patterns in terms of
annihilating diffusions.
| [
{
"created": "Fri, 11 Dec 2009 14:36:28 GMT",
"version": "v1"
},
{
"created": "Fri, 13 Aug 2010 12:52:35 GMT",
"version": "v2"
}
] | 2010-08-16 | [
[
"Ali",
"Adnan",
""
],
[
"Grosskinsky",
"Stefan",
""
]
] | We investigate the nature of genetic drift acting at the leading edge of range expansions, building on recent results in [Hallatschek et al., Proc.\ Natl.\ Acad.\ Sci., \textbf{104}(50): 19926 - 19930 (2007)]. A well-mixed population of two fluorescently labeled microbial species is grown in a circular geometry. As the population expands, a coarsening process driven by genetic drift gives rise to sectoring patterns with fractal boundaries, which show a non-trivial asymptotic distribution. Using simplified lattice-based Monte Carlo simulations as a generic caricature of the above experiment, we present detailed numerical results to establish a model for sector boundaries as time-changed Brownian motions. This is used to derive a general one-to-one mapping of sector statistics between circular and linear geometries, which leads to a full understanding of the sectoring patterns in terms of annihilating diffusions.
1901.07000 | Stefan Ruschel | Lai-Sang Young, Stefan Ruschel, Serhiy Yanchuk, Tiago Pereira | Consequences of delays and imperfect implementation of isolation in
epidemic control | 21 pages, 3 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For centuries isolation has been the main control strategy of unforeseen
epidemic outbreaks. When implemented in full and without delay, isolation is
very effective. However, flawless implementation is seldom feasible in
practice. We present an epidemic model called SIQ with an isolation protocol,
focusing on the consequences of delays and incomplete identification of
infected hosts. The continuum limit of this model is a system of Delay
Differential Equations, the analysis of which reveals clearly the dependence of
epidemic evolution on model parameters including disease reproductive number,
isolation probability, speed of identification of infected hosts and recovery
rates. Our model offers estimates on minimum response capabilities needed to
curb outbreaks, and predictions of endemic states when containment fails.
Critical response capability is expressed explicitly in terms of parameters
that are easy to obtain, to assist in the evaluation of funding priorities
involving preparedness and epidemics management.
| [
{
"created": "Mon, 21 Jan 2019 17:12:55 GMT",
"version": "v1"
}
] | 2019-01-23 | [
[
"Young",
"Lai-Sang",
""
],
[
"Ruschel",
"Stefan",
""
],
[
"Yanchuk",
"Serhiy",
""
],
[
"Pereira",
"Tiago",
""
]
] | For centuries isolation has been the main control strategy of unforeseen epidemic outbreaks. When implemented in full and without delay, isolation is very effective. However, flawless implementation is seldom feasible in practice. We present an epidemic model called SIQ with an isolation protocol, focusing on the consequences of delays and incomplete identification of infected hosts. The continuum limit of this model is a system of Delay Differential Equations, the analysis of which reveals clearly the dependence of epidemic evolution on model parameters including disease reproductive number, isolation probability, speed of identification of infected hosts and recovery rates. Our model offers estimates on minimum response capabilities needed to curb outbreaks, and predictions of endemic states when containment fails. Critical response capability is expressed explicitly in terms of parameters that are easy to obtain, to assist in the evaluation of funding priorities involving preparedness and epidemics management.
1310.0035 | Michael Elliot | Michael G. Elliot | Identical inferences about correlated evolution arise from ancestral
state reconstruction and independent contrasts | 31 pages including supplementary information, 2 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inferences about the evolution of continuous traits based on reconstruction
of ancestral states have often been considered more error-prone than analysis of
independent contrasts. Here we show that both methods in fact yield identical
estimators for the correlation coefficient and regression gradient of
correlated traits, indicating that reconstructed ancestral states are a valid
source of information about correlated evolution. We show that the independent
contrast associated with a pair of sibling nodes on a phylogenetic tree can be
expressed in terms of the maximum likelihood ancestral state function at those
nodes and their common parent. This expression gives rise to novel formulae for
independent contrasts for any model of evolution admitting of a local
likelihood function. We thus derive new formulae for independent contrasts
applicable to traits evolving under directional drift, and use simulated data
to show that these directional contrasts provide better estimates of
evolutionary model parameters than standard independent contrasts, when traits
in fact evolve with a directional tendency.
| [
{
"created": "Mon, 30 Sep 2013 20:10:18 GMT",
"version": "v1"
}
] | 2013-10-02 | [
[
"Elliot",
"Michael G.",
""
]
] | Inferences about the evolution of continuous traits based on reconstruction of ancestral states have often been considered more error-prone than analysis of independent contrasts. Here we show that both methods in fact yield identical estimators for the correlation coefficient and regression gradient of correlated traits, indicating that reconstructed ancestral states are a valid source of information about correlated evolution. We show that the independent contrast associated with a pair of sibling nodes on a phylogenetic tree can be expressed in terms of the maximum likelihood ancestral state function at those nodes and their common parent. This expression gives rise to novel formulae for independent contrasts for any model of evolution admitting of a local likelihood function. We thus derive new formulae for independent contrasts applicable to traits evolving under directional drift, and use simulated data to show that these directional contrasts provide better estimates of evolutionary model parameters than standard independent contrasts, when traits in fact evolve with a directional tendency.
q-bio/0310034 | Gavin E. Crooks | Gavin E. Crooks, Steven E. Brenner | Protein secondary structure: Entropy, correlations and prediction | 8 pages, 5 figures | Bioinformatics 20:1603-1611 (2004) | 10.1093/bioinformatics/bth132 | null | q-bio.BM | null | Is protein secondary structure primarily determined by local interactions
between residues closely spaced along the amino acid backbone, or by non-local
tertiary interactions? To answer this question we have measured the entropy
densities of primary structure and secondary structure sequences, and the local
inter-sequence mutual information density. We find that the important
inter-sequence interactions are short ranged, that correlations between
neighboring amino acids are essentially uninformative, and that only 1/4 of the
total information needed to determine the secondary structure is available from
local inter-sequence correlations. Since the remaining information must come
from non-local interactions, this observation supports the view that the
majority of most proteins fold via a cooperative process where secondary and
tertiary structure form concurrently. To provide a more direct comparison to
existing secondary structure prediction methods, we construct a simple hidden
Markov model (HMM) of the sequences. This HMM achieves a prediction accuracy
comparable to other single sequence secondary structure prediction algorithms,
and can extract almost all of the inter-sequence mutual information. This
suggests that these algorithms are almost optimal, and that we should not
expect a dramatic improvement in prediction accuracy. However, local
correlations between secondary and primary structure are probably of
under-appreciated importance in many tertiary structure prediction methods,
such as threading.
| [
{
"created": "Wed, 29 Oct 2003 06:48:11 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Crooks",
"Gavin E.",
""
],
[
"Brenner",
"Steven E.",
""
]
] | Is protein secondary structure primarily determined by local interactions between residues closely spaced along the amino acid backbone, or by non-local tertiary interactions? To answer this question we have measured the entropy densities of primary structure and secondary structure sequences, and the local inter-sequence mutual information density. We find that the important inter-sequence interactions are short ranged, that correlations between neighboring amino acids are essentially uninformative, and that only 1/4 of the total information needed to determine the secondary structure is available from local inter-sequence correlations. Since the remaining information must come from non-local interactions, this observation supports the view that the majority of most proteins fold via a cooperative process where secondary and tertiary structure form concurrently. To provide a more direct comparison to existing secondary structure prediction methods, we construct a simple hidden Markov model (HMM) of the sequences. This HMM achieves a prediction accuracy comparable to other single sequence secondary structure prediction algorithms, and can extract almost all of the inter-sequence mutual information. This suggests that these algorithms are almost optimal, and that we should not expect a dramatic improvement in prediction accuracy. However, local correlations between secondary and primary structure are probably of under-appreciated importance in many tertiary structure prediction methods, such as threading. |
0708.1928 | Katja Taute | Katja M. Taute, Francesco Pampaloni, Erwin Frey and Ernst-Ludwig
Florin | Microtubule dynamics depart from wormlike chain model | 4 pages, 4 figures. Updated content, added reference, corrected typos | Physical Review Letters (2008), 100:028102. | 10.1103/PhysRevLett.100.028102 | null | q-bio.BM | null | Thermal shape fluctuations of grafted microtubules were studied using high
resolution particle tracking of attached fluorescent beads. First mode
relaxation times were extracted from the mean square displacement in the
transverse coordinate. For microtubules shorter than 10 um, the relaxation
times were found to follow an L^2 dependence instead of L^4 as expected from
the standard wormlike chain model. This length dependence is shown to result
from a complex length dependence of the bending stiffness which can be
understood as a result of the molecular architecture of microtubules. For
microtubules shorter than 5 um, high drag coefficients indicate contributions
from internal friction to the fluctuation dynamics.
| [
{
"created": "Tue, 14 Aug 2007 17:15:07 GMT",
"version": "v1"
},
{
"created": "Tue, 11 Sep 2007 18:48:05 GMT",
"version": "v2"
}
] | 2008-09-09 | [
[
"Taute",
"Katja M.",
""
],
[
"Pampaloni",
"Francesco",
""
],
[
"Frey",
"Erwin",
""
],
[
"Florin",
"Ernst-Ludwig",
""
]
] | Thermal shape fluctuations of grafted microtubules were studied using high resolution particle tracking of attached fluorescent beads. First mode relaxation times were extracted from the mean square displacement in the transverse coordinate. For microtubules shorter than 10 um, the relaxation times were found to follow an L^2 dependence instead of L^4 as expected from the standard wormlike chain model. This length dependence is shown to result from a complex length dependence of the bending stiffness which can be understood as a result of the molecular architecture of microtubules. For microtubules shorter than 5 um, high drag coefficients indicate contributions from internal friction to the fluctuation dynamics. |
1903.07783 | Kai Qiao | Kai Qiao, Jian Chen, Linyuan Wang, Chi Zhang, Lei Zeng, Li Tong, Bin
Yan | Category decoding of visual stimuli from human brain activity using a
bidirectional recurrent neural network to simulate bidirectional information
flows in human visual cortices | submitted to the Frontiers in neuroscience | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, visual encoding and decoding based on functional magnetic resonance
imaging (fMRI) have realized many achievements with the rapid development of
deep network computation. Despite the hierarchically similar representations of
deep networks and human vision, visual information flows from primary visual
cortices to higher visual cortices and vice versa, in bottom-up and
top-down manners, respectively. Inspired by the bidirectional information
flows, we proposed a bidirectional recurrent neural network (BRNN)-based method
to decode the categories from fMRI data. The forward and backward directions in
the BRNN module characterized the bottom-up and top-down manners, respectively.
The proposed method regarded the selected voxels of each visual cortex region
(V1, V2, V3, V4, and LO) as one node in the sequence fed into the BRNN module
and combined the output of the BRNN module to decode the categories with the
subsequent fully connected layer. This new method allows the efficient
utilization of hierarchical information representations and bidirectional
information flows in human visual cortices. Experimental results demonstrated
that our method improved the accuracy of three-level category decoding compared
with other methods, which implicitly validated the hierarchical and bidirectional
human visual representations. Comparative analysis revealed that the category
representations of human visual cortices were hierarchical, distributed,
complementary, and correlative.
| [
{
"created": "Tue, 19 Mar 2019 01:08:13 GMT",
"version": "v1"
}
] | 2019-03-20 | [
[
"Qiao",
"Kai",
""
],
[
"Chen",
"Jian",
""
],
[
"Wang",
"Linyuan",
""
],
[
"Zhang",
"Chi",
""
],
[
"Zeng",
"Lei",
""
],
[
"Tong",
"Li",
""
],
[
"Yan",
"Bin",
""
]
] | Recently, visual encoding and decoding based on functional magnetic resonance imaging (fMRI) have realized many achievements with the rapid development of deep network computation. Despite the hierarchically similar representations of deep networks and human vision, visual information flows from primary visual cortices to higher visual cortices and vice versa, in bottom-up and top-down manners, respectively. Inspired by the bidirectional information flows, we proposed a bidirectional recurrent neural network (BRNN)-based method to decode the categories from fMRI data. The forward and backward directions in the BRNN module characterized the bottom-up and top-down manners, respectively. The proposed method regarded the selected voxels of each visual cortex region (V1, V2, V3, V4, and LO) as one node in the sequence fed into the BRNN module and combined the output of the BRNN module to decode the categories with the subsequent fully connected layer. This new method allows the efficient utilization of hierarchical information representations and bidirectional information flows in human visual cortices. Experimental results demonstrated that our method improved the accuracy of three-level category decoding compared with other methods, which implicitly validated the hierarchical and bidirectional human visual representations. Comparative analysis revealed that the category representations of human visual cortices were hierarchical, distributed, complementary, and correlative.
1406.7474 | Erfan Khaji Mr. | Erfan Khaji, Mahsa Mortazavi | On the Optimization of non-Dense Metabolic Networks in non-Equilibrium
State Utilizing 2D-Lattice Simulation | 10 Figures, 9 Pages | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling and optimization of metabolic networks has been one of the hottest
topics in computational systems biology in recent years. However, the
complexity and uncertainty of these networks, together with the lack of
necessary data, have motivated efforts to design and use more capable models
that fit realistic conditions. In this paper, instead of optimizing networks in
equilibrium conditions, the optimization of dynamic networks in non-equilibrium
states with low numbers of molecules has been studied using a 2-D lattice
simulation. A prototype network has been simulated with this approach and
optimized using the Particle Swarm Algorithm; the results are presented along
with the relevant plots.
| [
{
"created": "Sun, 29 Jun 2014 08:41:30 GMT",
"version": "v1"
}
] | 2014-07-01 | [
[
"Khaji",
"Erfan",
""
],
[
"Mortazavi",
"Mahsa",
""
]
] | Modeling and optimization of metabolic networks has been one of the hottest topics in computational systems biology in recent years. However, the complexity and uncertainty of these networks, together with the lack of necessary data, have motivated efforts to design and use more capable models that fit realistic conditions. In this paper, instead of optimizing networks in equilibrium conditions, the optimization of dynamic networks in non-equilibrium states with low numbers of molecules has been studied using a 2-D lattice simulation. A prototype network has been simulated with this approach and optimized using the Particle Swarm Algorithm; the results are presented along with the relevant plots.
q-bio/0702049 | Anne Shiu | Jason Morton, Lior Pachter, Anne Shiu, Bernd Sturmfels | The Cyclohedron Test for Finding Periodic Genes in Time Course
Expression Studies | Revision consists of reorganization and further statistical
discussion; 19 pages, 4 figures | null | null | null | q-bio.QM stat.AP | null | The problem of finding periodically expressed genes from time course
microarray experiments is at the center of numerous efforts to identify the
molecular components of biological clocks. We present a new approach to this
problem based on the cyclohedron test, which is a rank test inspired by recent
advances in algebraic combinatorics. The test has the advantage of being robust
to measurement errors, and can be used to ascertain the significance of
top-ranked genes. We apply the test to recently published measurements of gene
expression during mouse somitogenesis and find 32 genes that collectively are
significant. Among these are previously identified periodic genes involved in
the Notch/FGF and Wnt signaling pathways, as well as novel candidate genes that
may play a role in regulating the segmentation clock. These results confirm
that there is an abundance of exceptionally periodic genes expressed during
somitogenesis. The emphasis of this paper is on the statistics and
combinatorics that underlie the cyclohedron test and its implementation within
a multiple testing framework.
| [
{
"created": "Fri, 23 Feb 2007 19:39:57 GMT",
"version": "v1"
},
{
"created": "Fri, 23 Feb 2007 21:05:19 GMT",
"version": "v2"
},
{
"created": "Tue, 22 May 2007 17:43:12 GMT",
"version": "v3"
}
] | 2007-05-23 | [
[
"Morton",
"Jason",
""
],
[
"Pachter",
"Lior",
""
],
[
"Shiu",
"Anne",
""
],
[
"Sturmfels",
"Bernd",
""
]
] | The problem of finding periodically expressed genes from time course microarray experiments is at the center of numerous efforts to identify the molecular components of biological clocks. We present a new approach to this problem based on the cyclohedron test, which is a rank test inspired by recent advances in algebraic combinatorics. The test has the advantage of being robust to measurement errors, and can be used to ascertain the significance of top-ranked genes. We apply the test to recently published measurements of gene expression during mouse somitogenesis and find 32 genes that collectively are significant. Among these are previously identified periodic genes involved in the Notch/FGF and Wnt signaling pathways, as well as novel candidate genes that may play a role in regulating the segmentation clock. These results confirm that there is an abundance of exceptionally periodic genes expressed during somitogenesis. The emphasis of this paper is on the statistics and combinatorics that underlie the cyclohedron test and its implementation within a multiple testing framework.
2002.12541 | Kevin S. McLoughlin | Kevin S. McLoughlin, Claire G. Jeong, Thomas D. Sweitzer, Amanda J.
Minnich, Margaret J. Tse, Brian J. Bennion, Jonathan E. Allen, Stacie
Calad-Thomson, Thomas S. Rush, James M. Brase | Machine Learning Models to Predict Inhibition of the Bile Salt Export
Pump | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Drug-induced liver injury (DILI) is the most common cause of acute liver
failure and a frequent reason for withdrawal of candidate drugs during
preclinical and clinical testing. An important type of DILI is cholestatic
liver injury, caused by buildup of bile salts within hepatocytes; it is
frequently associated with inhibition of bile salt transporters, such as the
bile salt export pump (BSEP). Reliable in silico models to predict BSEP
inhibition directly from chemical structures would significantly reduce costs
during drug discovery and could help avoid injury to patients. Unfortunately,
models published to date have been insufficiently accurate to encourage wide
adoption. We report our development of classification and regression models for
BSEP inhibition with substantially improved performance over previously
published models. Our model development leveraged the ATOM Modeling PipeLine
(AMPL) developed by the ATOM Consortium, which enabled us to train and evaluate
thousands of candidate models. In the course of model development, we assessed
a variety of schemes for chemical featurization, dataset partitioning and class
labeling, and identified those producing models that generalized best to novel
chemical entities. Our best performing classification model was a neural
network with ROC AUC = 0.88 on our internal test dataset and 0.89 on an
independent external compound set. Our best regression model, the first ever
reported for predicting BSEP IC50s, yielded a test set $R^2 = 0.56$ and mean
absolute error 0.37, corresponding to a mean 2.3-fold error in predicted IC50s,
comparable to experimental variation. These models will thus be useful as
inputs to mechanistic predictions of DILI and as part of computational
pipelines for drug discovery.
| [
{
"created": "Fri, 28 Feb 2020 04:37:44 GMT",
"version": "v1"
}
] | 2020-03-02 | [
[
"McLoughlin",
"Kevin S.",
""
],
[
"Jeong",
"Claire G.",
""
],
[
"Sweitzer",
"Thomas D.",
""
],
[
"Minnich",
"Amanda J.",
""
],
[
"Tse",
"Margaret J.",
""
],
[
"Bennion",
"Brian J.",
""
],
[
"Allen",
"Jonathan E... | Drug-induced liver injury (DILI) is the most common cause of acute liver failure and a frequent reason for withdrawal of candidate drugs during preclinical and clinical testing. An important type of DILI is cholestatic liver injury, caused by buildup of bile salts within hepatocytes; it is frequently associated with inhibition of bile salt transporters, such as the bile salt export pump (BSEP). Reliable in silico models to predict BSEP inhibition directly from chemical structures would significantly reduce costs during drug discovery and could help avoid injury to patients. Unfortunately, models published to date have been insufficiently accurate to encourage wide adoption. We report our development of classification and regression models for BSEP inhibition with substantially improved performance over previously published models. Our model development leveraged the ATOM Modeling PipeLine (AMPL) developed by the ATOM Consortium, which enabled us to train and evaluate thousands of candidate models. In the course of model development, we assessed a variety of schemes for chemical featurization, dataset partitioning and class labeling, and identified those producing models that generalized best to novel chemical entities. Our best performing classification model was a neural network with ROC AUC = 0.88 on our internal test dataset and 0.89 on an independent external compound set. Our best regression model, the first ever reported for predicting BSEP IC50s, yielded a test set $R^2 = 0.56$ and mean absolute error 0.37, corresponding to a mean 2.3-fold error in predicted IC50s, comparable to experimental variation. These models will thus be useful as inputs to mechanistic predictions of DILI and as part of computational pipelines for drug discovery. |
2403.08685 | Zan Ahmad | Zan Ahmad, Minglang Yin, Yashil Sukurdeep, Noam Rotenberg, Eugene
Kholmovski, Natalia A. Trayanova | Elastic shape analysis computations for clustering left atrial appendage
geometries of atrial fibrillation patients | Submitted as a conference paper to MICCAI 2024 | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Morphological variations in the left atrial appendage (LAA) are associated
with different levels of ischemic stroke risk for patients with atrial
fibrillation (AF). Studying LAA morphology can elucidate mechanisms behind this
association and lead to the development of advanced stroke risk stratification
tools. However, current categorical descriptions of LAA morphologies are
qualitative and inconsistent across studies, which impedes advancements in our
understanding of stroke pathogenesis in AF. To mitigate these issues, we
introduce a quantitative pipeline that combines elastic shape analysis with
unsupervised learning for the categorization of LAA morphology in AF patients.
As part of our pipeline, we compute pairwise elastic distances between LAA
meshes from a cohort of 20 AF patients, and leverage these distances to cluster
our shape data. We demonstrate that our method clusters LAA morphologies based
on distinctive shape features, overcoming the innate inconsistencies of current
LAA categorization systems, and paving the way for improved stroke risk metrics
using objective LAA shape groups.
| [
{
"created": "Wed, 13 Mar 2024 16:42:05 GMT",
"version": "v1"
}
] | 2024-03-14 | [
[
"Ahmad",
"Zan",
""
],
[
"Yin",
"Minglang",
""
],
[
"Sukurdeep",
"Yashil",
""
],
[
"Rotenberg",
"Noam",
""
],
[
"Kholmovski",
"Eugene",
""
],
[
"Trayanova",
"Natalia A.",
""
]
] | Morphological variations in the left atrial appendage (LAA) are associated with different levels of ischemic stroke risk for patients with atrial fibrillation (AF). Studying LAA morphology can elucidate mechanisms behind this association and lead to the development of advanced stroke risk stratification tools. However, current categorical descriptions of LAA morphologies are qualitative and inconsistent across studies, which impedes advancements in our understanding of stroke pathogenesis in AF. To mitigate these issues, we introduce a quantitative pipeline that combines elastic shape analysis with unsupervised learning for the categorization of LAA morphology in AF patients. As part of our pipeline, we compute pairwise elastic distances between LAA meshes from a cohort of 20 AF patients, and leverage these distances to cluster our shape data. We demonstrate that our method clusters LAA morphologies based on distinctive shape features, overcoming the innate inconsistencies of current LAA categorization systems, and paving the way for improved stroke risk metrics using objective LAA shape groups. |
1512.02574 | Andreas Daffertshofer | Robert Ton, Gustavo Deco, Morten L Kringelbach, Mark Woolrich, and
Andreas Daffertshofer | Distinct criticality of phase and amplitude dynamics in the resting
brain | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Converging research suggests that the resting brain operates at the cusp of
dynamic instability signified by scale-free temporal correlations. We asked if
the scaling properties of these correlations differ between amplitude and phase
fluctuations, which may reflect different aspects of cortical functioning.
Using source-reconstructed magneto-encephalographic signals, we found power-law
scaling for the collective amplitude and for phase synchronization, both
capturing whole-brain activity. The temporal changes of the amplitude comprise
slow, persistent memory processes, whereas phase synchronization exhibits less
temporally structured and more complex correlations, indicating a fast and
flexible coding. This distinct temporal scaling supports the idea of different
roles of amplitude and phase in cortical functioning.
| [
{
"created": "Tue, 8 Dec 2015 18:28:06 GMT",
"version": "v1"
}
] | 2015-12-09 | [
[
"Ton",
"Robert",
""
],
[
"Deco",
"Gustavo",
""
],
[
"Kringelbach",
"Morten L",
""
],
[
"Woolrich",
"Mark",
""
],
[
"Daffertshofer",
"Andreas",
""
]
] | Converging research suggests that the resting brain operates at the cusp of dynamic instability signified by scale-free temporal correlations. We asked if the scaling properties of these correlations differ between amplitude and phase fluctuations, which may reflect different aspects of cortical functioning. Using source-reconstructed magneto-encephalographic signals, we found power-law scaling for the collective amplitude and for phase synchronization, both capturing whole-brain activity. The temporal changes of the amplitude comprise slow, persistent memory processes, whereas phase synchronization exhibits less temporally structured and more complex correlations, indicating a fast and flexible coding. This distinct temporal scaling supports the idea of different roles of amplitude and phase in cortical functioning. |
2308.04464 | Mohammad Heydari | Ali Bayat, Mohammad Heydari, Amir Albadvi | Analysis of Insect-Plant Interactions Affected by Mining Operations, A
Graph Mining Approach | 9 pages, 16 figures | null | null | null | q-bio.PE cs.SI | http://creativecommons.org/licenses/by/4.0/ | The decline in ecological connections signifies the potential extinction of
species, which can be attributed to disruptions and alterations. The decrease
in interconnections among species reflects their susceptibility to changes. For
example, certain insects and plants that rely on exclusive interactions with a
limited number of species, or even a specific species, face the risk of
extinction if they lose these crucial connections. Currently, mining activities
pose significant harm to natural ecosystems, resulting in various adverse
environmental impacts. In this study, we utilized network science techniques to
analyze the ecosystem in a graph-based structure, aiming to conserve the
ecosystem affected by mining operations in the northern region of Scotland. The
research encompasses identifying the most vital members of the network,
establishing criteria for identifying communities within the network,
comparing and evaluating them, using models to predict secondary extinctions
that occur when a species is removed from the network, and assessing the extent
of network damage. Our study's novelty is utilizing network science approaches
to investigate the biological data related to interactions between insects and
plants.
| [
{
"created": "Tue, 8 Aug 2023 02:53:16 GMT",
"version": "v1"
},
{
"created": "Mon, 18 Sep 2023 20:01:04 GMT",
"version": "v2"
},
{
"created": "Sun, 8 Oct 2023 12:33:38 GMT",
"version": "v3"
}
] | 2023-10-10 | [
[
"Bayat",
"Ali",
""
],
[
"Heydari",
"Mohammad",
""
],
[
"Albadvi",
"Amir",
""
]
] | The decline in ecological connections signifies the potential extinction of species, which can be attributed to disruptions and alterations. The decrease in interconnections among species reflects their susceptibility to changes. For example, certain insects and plants that rely on exclusive interactions with a limited number of species, or even a specific species, face the risk of extinction if they lose these crucial connections. Currently, mining activities pose significant harm to natural ecosystems, resulting in various adverse environmental impacts. In this study, we utilized network science techniques to analyze the ecosystem in a graph-based structure, aiming to conserve the ecosystem affected by mining operations in the northern region of Scotland. The research encompasses identifying the most vital members of the network, establishing criteria for identifying communities within the network, comparing and evaluating them, using models to predict secondary extinctions that occur when a species is removed from the network, and assessing the extent of network damage. Our study's novelty is utilizing network science approaches to investigate the biological data related to interactions between insects and plants. |
0704.1362 | J.H. van Hateren | J. H. van Hateren | Fast recursive filters for simulating nonlinear dynamic systems | 20 pages, 8 figures, 1 table. A comparison with 4th-order Runge-Kutta
integration shows that the new algorithm is 1-2 orders of magnitude faster.
The paper is in press now at Neural Computation | Neural Computation 20:1821-1846 (2008) | null | null | q-bio.QM q-bio.NC | null | A fast and accurate computational scheme for simulating nonlinear dynamic
systems is presented. The scheme assumes that the system can be represented by
a combination of components of only two different types: first-order low-pass
filters and static nonlinearities. The parameters of these filters and
nonlinearities may depend on system variables, and the topology of the system
may be complex, including feedback. Several examples taken from neuroscience
are given: phototransduction, photopigment bleaching, and spike generation
according to the Hodgkin-Huxley equations. The scheme uses two slightly
different forms of autoregressive filters, with an implicit delay of zero for
feedforward control and an implicit delay of half a sample distance for
feedback control. On a fairly complex model of the macaque retinal horizontal
cell it computes, for a given level of accuracy, 1-2 orders of magnitude faster
than 4th-order Runge-Kutta. The computational scheme has minimal memory
requirements, and is also suited for computation on a stream processor, such as
a GPU (Graphical Processing Unit).
| [
{
"created": "Wed, 11 Apr 2007 07:48:11 GMT",
"version": "v1"
},
{
"created": "Thu, 30 Aug 2007 07:50:06 GMT",
"version": "v2"
}
] | 2008-06-20 | [
[
"van Hateren",
"J. H.",
""
]
] | A fast and accurate computational scheme for simulating nonlinear dynamic systems is presented. The scheme assumes that the system can be represented by a combination of components of only two different types: first-order low-pass filters and static nonlinearities. The parameters of these filters and nonlinearities may depend on system variables, and the topology of the system may be complex, including feedback. Several examples taken from neuroscience are given: phototransduction, photopigment bleaching, and spike generation according to the Hodgkin-Huxley equations. The scheme uses two slightly different forms of autoregressive filters, with an implicit delay of zero for feedforward control and an implicit delay of half a sample distance for feedback control. On a fairly complex model of the macaque retinal horizontal cell it computes, for a given level of accuracy, 1-2 orders of magnitude faster than 4th-order Runge-Kutta. The computational scheme has minimal memory requirements, and is also suited for computation on a stream processor, such as a GPU (Graphical Processing Unit). |
2201.01818 | Diego Martinez | Diego A. Martinez, Carol L. Ponce-de-Leon and Carlos Vilchez | Meta-analysis of commercial-scale trials as a means to improve
decision-making processes in the poultry industry: a phytogenic feed additive
case study | null | International Journal of Poultry Science (2020) 19(11) | 10.3923/ijps.2020.513.523 | null | q-bio.QM stat.ME | http://creativecommons.org/licenses/by/4.0/ | Background and Objective: In the current study, we sought to determine the
value of a meta-analysis to improve decision-making processes related to
nutrition in the poultry industry. To this end, nine commercial size
experiments were conducted to test the effect of a phytogenic feed additive and
three approaches were applied to the data. Materials and Methods: In all
experiments, 1-day-old male Cobb 500 chicks were used and fed corn-soybean meal
diets. Two dietary treatments were tested: T1, control diet and T2, control
diet + feed additive at a 0.05% inclusion rate. The experimental units were
broiler houses (7 experiments), floor pens (1 experiment) and cages (1
experiment). The response variables were final body weight, feed intake, feed
conversion ratio, mortality and production efficiency. Analyses of variance of
data from each and all the experiments were performed using SAS under
completely randomized non-blocked or blocked designs, respectively. The
meta-analyses were performed in R programming language. Results: No
statistically significant effects were found in the evaluated variables in any
of the independent experiments (p>0.12), nor following the application of a
block design (p>0.08). The meta-analyses showed no statistically significant
global effects in terms of final body weight (p>0.19), feed intake (p>0.23),
mortality (p>0.09), or European Production Efficiency Factor (p>0.08); however,
a positive global effect was found with respect to feed conversion ratio
(p<0.046). Conclusion: This meta-analysis demonstrated that the phytogenic feed
additive improved the efficiency of birds to convert feed to body weight (35 g
less feed per 1 kg of body weight obtained). Thus, the use of meta-analyses in
commercial-scale poultry trials can increase statistical power and as a result,
help to detect statistical differences if they exist.
| [
{
"created": "Wed, 5 Jan 2022 21:01:31 GMT",
"version": "v1"
}
] | 2022-01-07 | [
[
"Martinez",
"Diego A.",
""
],
[
"Ponce-de-Leon",
"Carol L.",
""
],
[
"Vilchez",
"Carlos",
""
]
] | Background and Objective: In the current study, we sought to determine the value of a meta-analysis to improve decision-making processes related to nutrition in the poultry industry. To this end, nine commercial size experiments were conducted to test the effect of a phytogenic feed additive and three approaches were applied to the data. Materials and Methods: In all experiments, 1-day-old male Cobb 500 chicks were used and fed corn-soybean meal diets. Two dietary treatments were tested: T1, control diet and T2, control diet + feed additive at a 0.05% inclusion rate. The experimental units were broiler houses (7 experiments), floor pens (1 experiment) and cages (1 experiment). The response variables were final body weight, feed intake, feed conversion ratio, mortality and production efficiency. Analyses of variance of data from each and all the experiments were performed using SAS under completely randomized non-blocked or blocked designs, respectively. The meta-analyses were performed in R programming language. Results: No statistically significant effects were found in the evaluated variables in any of the independent experiments (p>0.12), nor following the application of a block design (p>0.08). The meta-analyses showed no statistically significant global effects in terms of final body weight (p>0.19), feed intake (p>0.23), mortality (p>0.09), or European Production Efficiency Factor (p>0.08); however, a positive global effect was found with respect to feed conversion ratio (p<0.046). Conclusion: This meta-analysis demonstrated that the phytogenic feed additive improved the efficiency of birds to convert feed to body weight (35 g less feed per 1 kg of body weight obtained). Thus, the use of meta-analyses in commercial-scale poultry trials can increase statistical power and as a result, help to detect statistical differences if they exist. |
2001.09481 | Mostafa Akhavansafar | Mostafa Akhavan Safar, Babak Teimourpour, Mehrdad Kargari | A Network Science Approach to Driver Gene Detection In Human Regulatory
Network Using Genes Influence Evaluation | null | Journal of Biomedical Informatics,2020,103661,ISSN 1532-0464 | 10.1016/j.jbi.2020.103661 | null | q-bio.MN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Cancer arises from a disorder in the cellular regulatory
mechanism, which causes cellular malformation. The genes that initiate the
malformation are called cancer driver genes (CDGs). Numerous computational
methods based on the concept of mutation have been introduced to identify
CDGs. Given how abnormalities spread through human cells during tumor
development, CDGs are likely to be genes with high influence in the network,
which makes the concept of influence diffusion important for their
identification. A method based on influence maximization was recently
developed for identifying cancer driver genes. One of the challenges in these
types of networks is determining the strength of the regulatory interactions
along the edges. The current study developed a technique to identify cancer
driver genes and to predict the impact of regulatory interactions in a
transcriptional regulatory network. This technique utilizes the concept of
influence diffusion and optimizes the Hyperlink-Induced Topic Search algorithm
based on influence diffusion. The results suggest the better performance of
our proposed technique than the other computational and network-based
approaches.
| [
{
"created": "Sun, 26 Jan 2020 16:30:05 GMT",
"version": "v1"
}
] | 2020-12-16 | [
[
"Safar",
"Mostafa Akhavan",
""
],
[
"Teimourpour",
"Babak",
""
],
[
"Kargari",
"Mehrdad",
""
]
] | Cancer arises from a disorder in the cellular regulatory mechanism, which causes cellular malformation. The genes that initiate the malformation are called cancer driver genes (CDGs). Numerous computational methods based on the concept of mutation have been introduced to identify CDGs. Given how abnormalities spread through human cells during tumor development, CDGs are likely to be genes with high influence in the network, which makes the concept of influence diffusion important for their identification. A method based on influence maximization was recently developed for identifying cancer driver genes. One of the challenges in these types of networks is determining the strength of the regulatory interactions along the edges. The current study developed a technique to identify cancer driver genes and to predict the impact of regulatory interactions in a transcriptional regulatory network. This technique utilizes the concept of influence diffusion and optimizes the Hyperlink-Induced Topic Search algorithm based on influence diffusion. The results suggest the better performance of our proposed technique than the other computational and network-based approaches. |
1803.10221 | Mojtaba Aliakbarzadeh | Mojtaba Aliakbarzadeh and Kirsty Kitto | Preparation and Measurement in Quantum Memory Models | published in Journal of Mathematical Psychology | null | 10.1016/j.jmp.2018.03.002 | null | q-bio.NC quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantum Cognition has delivered a number of models for semantic memory, but
to date these have tended to assume pure states and projective measurement.
Here we relax these assumptions. A quantum inspired model of human word
association experiments will be extended using a density matrix representation
of human memory and a POVM based upon non-ideal measurements. Our formulation
allows for a consideration of key terms like measurement and contextuality
within a rigorous modern approach. This approach both provides new conceptual
advances and suggests new experimental protocols.
| [
{
"created": "Mon, 26 Mar 2018 08:14:08 GMT",
"version": "v1"
}
] | 2018-03-29 | [
[
"Aliakbarzadeh",
"Mojtaba",
""
],
[
"Kitto",
"Kirsty",
""
]
] | Quantum Cognition has delivered a number of models for semantic memory, but to date these have tended to assume pure states and projective measurement. Here we relax these assumptions. A quantum inspired model of human word association experiments will be extended using a density matrix representation of human memory and a POVM based upon non-ideal measurements. Our formulation allows for a consideration of key terms like measurement and contextuality within a rigorous modern approach. This approach both provides new conceptual advances and suggests new experimental protocols. |
1406.1603 | Robert Prevedel | Tina Schr\"odel, Robert Prevedel, Karin Aumayr, Manuel Zimmer and
Alipasha Vaziri | Brain-wide 3D imaging of neuronal activity in Caenorhabditis elegans
with sculpted light | 28 pages, 5 figures, plus Supplementary Information (24 pages, 10
figures) | Nature Methods 10, 1013-1020 (2013) | 10.1038/nmeth.2637 | null | q-bio.NC physics.bio-ph physics.optics | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent efforts in neuroscience research seek to obtain detailed anatomical
neuronal wiring maps as well as information on how neurons in these networks
engage in dynamic activities. Although the entire connectivity map of the
nervous system of C. elegans has been known for more than 25 years, this
knowledge has not been sufficient to predict all functional connections
underlying behavior. To approach this goal, we developed a two-photon technique
for brain-wide calcium imaging in C. elegans using wide-field temporal focusing
(WF-TEFO). Pivotal to our results was the use of a nuclear-localized,
genetically encoded calcium indicator (NLS-GCaMP5K) that permits unambiguous
discrimination of individual neurons within the densely-packed head ganglia of
C. elegans. We demonstrate near-simultaneous recording of activity of up to 70%
of all head neurons. In combination with a lab-on-a-chip device for stimulus
delivery, this method provides an enabling platform for establishing functional
maps of neuronal networks.
| [
{
"created": "Fri, 6 Jun 2014 07:46:02 GMT",
"version": "v1"
}
] | 2014-06-09 | [
[
"Schrödel",
"Tina",
""
],
[
"Prevedel",
"Robert",
""
],
[
"Aumayr",
"Karin",
""
],
[
"Zimmer",
"Manuel",
""
],
[
"Vaziri",
"Alipasha",
""
]
] | Recent efforts in neuroscience research seek to obtain detailed anatomical neuronal wiring maps as well as information on how neurons in these networks engage in dynamic activities. Although the entire connectivity map of the nervous system of C. elegans has been known for more than 25 years, this knowledge has not been sufficient to predict all functional connections underlying behavior. To approach this goal, we developed a two-photon technique for brain-wide calcium imaging in C. elegans using wide-field temporal focusing (WF-TEFO). Pivotal to our results was the use of a nuclear-localized, genetically encoded calcium indicator (NLS-GCaMP5K) that permits unambiguous discrimination of individual neurons within the densely-packed head ganglia of C. elegans. We demonstrate near-simultaneous recording of activity of up to 70% of all head neurons. In combination with a lab-on-a-chip device for stimulus delivery, this method provides an enabling platform for establishing functional maps of neuronal networks. |
2206.07187 | Peter D. Harrington | Peter D. Harrington, Danielle L. Cantrell, Michael G. G. Foreman, Ming
Guo, and Mark A. Lewis | Calculating the timing and probability of arrival for sea lice
dispersing between salmon farms | 32 pages, 5 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Sea lice are a threat to the health of both wild and farmed salmon and an
economic burden for salmon farms. With a free-living larval stage, sea lice can
disperse tens of kilometers in the ocean between salmon farms, leading to
connected sea lice populations that are difficult to control in isolation. In
this paper we develop a simple analytical model for the dispersal of sea lice
between two salmon farms. From the model we calculate the arrival time
distribution of sea lice dispersing between farms, as well as the level of
cross-infection of sea lice. We also use numerical flows from a hydrodynamic
model, coupled with a particle tracking model, to directly calculate the
arrival time of sea lice dispersing between two farms in the Broughton
Archipelago, BC, in order to fit our analytical model and find realistic
parameter estimates. Using the parametrized analytical model we show that there
is often an intermediate inter-farm spacing that maximizes the level of
cross-infection between farms, and that increased temperatures will lead to
increased levels of cross-infection.
| [
{
"created": "Tue, 14 Jun 2022 22:07:59 GMT",
"version": "v1"
}
] | 2022-06-16 | [
[
"Harrington",
"Peter D.",
""
],
[
"Cantrell",
"Danielle L.",
""
],
[
"Foreman",
"Michael G. G.",
""
],
[
"Guo",
"Ming",
""
],
[
"Lewis",
"Mark A.",
""
]
] | Sea lice are a threat to the health of both wild and farmed salmon and an economic burden for salmon farms. With a free-living larval stage, sea lice can disperse tens of kilometers in the ocean between salmon farms, leading to connected sea lice populations that are difficult to control in isolation. In this paper we develop a simple analytical model for the dispersal of sea lice between two salmon farms. From the model we calculate the arrival time distribution of sea lice dispersing between farms, as well as the level of cross-infection of sea lice. We also use numerical flows from a hydrodynamic model, coupled with a particle tracking model, to directly calculate the arrival time of sea lice dispersing between two farms in the Broughton Archipelago, BC, in order to fit our analytical model and find realistic parameter estimates. Using the parametrized analytical model we show that there is often an intermediate inter-farm spacing that maximizes the level of cross-infection between farms, and that increased temperatures will lead to increased levels of cross-infection. |
2311.08654 | Erfan Mohammadi | Narges Ramezani, Erfan Mohammadi | The Role of Public Health in the Fight Against Cancer: Awareness,
Prevention, and Early Detection | null | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Cancer, as a complex and devastating disease, poses a great public health
challenge on a global scale. Its far-reaching impact necessitates a dedicated
branch of medicine, oncology, which specializes in the prevention, diagnosis,
and treatment of cancer. In this paper, we aim to offer a comprehensive
overview of the field of oncology, delving into its rich history, the numerous
forms of cancer, diagnostic methods, and treatment options. By exploring
recent advances in oncology, including precision medicine, immunotherapy, and
the integration of technology, we shed light on the promising trends in the
field. However, it is essential to acknowledge the persistent challenges that
remain, such as the high costs of treatment and the emergence of drug
resistance. Despite these challenges, the ultimate goal of oncology remains
unwavering: to provide the best possible outcomes for cancer patients, using
both curative and palliative treatment strategies. As our knowledge of this
complex disease continues to evolve, we must prioritize prevention and early
detection and address disparities in access to care. By fostering
collaboration and working together, we can significantly improve the lives of
the tens of millions of individuals affected by cancer around the world.
| [
{
"created": "Wed, 15 Nov 2023 02:28:42 GMT",
"version": "v1"
},
{
"created": "Sun, 17 Dec 2023 19:29:46 GMT",
"version": "v2"
}
] | 2023-12-19 | [
[
"Ramezani",
"Narges",
""
],
[
"Mohammadi",
"Erfan",
""
]
] | Cancer, as a complex and devastating disease, poses a great public health challenge on a global scale. Its far-reaching impact necessitates a dedicated branch of medicine, oncology, which specializes in the prevention, diagnosis, and treatment of cancer. In this paper, we aim to offer a comprehensive overview of the field of oncology, delving into its rich history, the numerous forms of cancer, diagnostic methods, and treatment options. By exploring recent advances in oncology, including precision medicine, immunotherapy, and the integration of technology, we shed light on the promising trends in the field. However, it is essential to acknowledge the persistent challenges that remain, such as the high costs of treatment and the emergence of drug resistance. Despite these challenges, the ultimate goal of oncology remains unwavering: to provide the best possible outcomes for cancer patients, using both curative and palliative treatment strategies. As our knowledge of this complex disease continues to evolve, we must prioritize prevention and early detection and address disparities in access to care. By fostering collaboration and working together, we can significantly improve the lives of the tens of millions of individuals affected by cancer around the world. |
q-bio/0311025 | L. E. Jones | Laura E. Jones and Stephen P. Ellner | Evolutionary tradeoff and equilibrium in an aquatic predator-prey system | 30 pages including 8 figures, 2 tables and an Appendix; to appear in
Bulletin of Mathematical Biology. Revised three Figures, added references and
expanded Section 5 | Bulletin of Mathematical Biology (66) 1547-1573 (2004) | 10.1016/j.bulm.2004.02.006 | null | q-bio.PE q-bio.QM | null | Due to the conventional distinction between ecological (rapid) and
evolutionary (slow) timescales, ecological and population models to date have
typically ignored the effects of evolution. Yet the potential for rapid
evolutionary change has been recently established and may be critical to
understanding how populations adapt to changing environments. In this paper we
examine the relationship between ecological and evolutionary dynamics, focusing
on a well-studied experimental aquatic predator-prey system (Fussmann et al.
2000; Shertzer et al. 2002; Yoshida et al. 2003). Major properties of
predator-prey cycles in this system are determined by ongoing evolutionary
dynamics in the prey population. Under some conditions, however, the
populations tend to apparently stable steady-state densities. These are the
subject of the present paper. We examine a previously developed model for the
system, to determine how evolution shapes properties of the equilibria, in
particular the number and identity of coexisting prey genotypes. We then apply
these results to explore how evolutionary dynamics can shape the responses of
the system to "management": externally imposed alterations in conditions.
Specifically, we compare the behavior of the system including evolutionary
dynamics, with predictions that would be made if the potential for rapid
evolutionary change is neglected. Finally, we posit some simple experiments to
verify our prediction that evolution can have significant qualitative effects
on observed population-level responses to changing conditions.
| [
{
"created": "Tue, 18 Nov 2003 18:14:15 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Mar 2004 21:22:03 GMT",
"version": "v2"
}
] | 2012-11-20 | [
[
"Jones",
"Laura E.",
""
],
[
"Ellner",
"Stephen P.",
""
]
] | Due to the conventional distinction between ecological (rapid) and evolutionary (slow) timescales, ecological and population models to date have typically ignored the effects of evolution. Yet the potential for rapid evolutionary change has been recently established and may be critical to understanding how populations adapt to changing environments. In this paper we examine the relationship between ecological and evolutionary dynamics, focusing on a well-studied experimental aquatic predator-prey system (Fussmann et al. 2000; Shertzer et al. 2002; Yoshida et al. 2003). Major properties of predator-prey cycles in this system are determined by ongoing evolutionary dynamics in the prey population. Under some conditions, however, the populations tend to apparently stable steady-state densities. These are the subject of the present paper. We examine a previously developed model for the system, to determine how evolution shapes properties of the equilibria, in particular the number and identity of coexisting prey genotypes. We then apply these results to explore how evolutionary dynamics can shape the responses of the system to "management": externally imposed alterations in conditions. Specifically, we compare the behavior of the system including evolutionary dynamics, with predictions that would be made if the potential for rapid evolutionary change is neglected. Finally, we posit some simple experiments to verify our prediction that evolution can have significant qualitative effects on observed population-level responses to changing conditions. |
2212.11021 | Rolf Bader | Rolf Bader | Impulse Pattern Formulation (IPF) Brain Model | null | null | null | null | q-bio.NC nlin.AO | http://creativecommons.org/licenses/by/4.0/ | A new brain model is introduced, based on the Impulse Pattern Formulation
(IPF) already established for modeling and understanding musical instrument and
rhythm perception and production. It assumes the brain works with impulses,
neural bursts, ejected from an arbitrary reference point in the brain, arriving
at other reflecting brain regions, and returning to the reference point delayed
and damped. A plasticity model is suggested to adjust reflection strength in
time. The model is systematically studied with 50 reflection points by varying
the amount of excitatory vs. inhibitory neurons, the presence or absence of
plasticity or external sensory input, and the strength of the input and
plasticity in terms of system adaptation to an input or to the system itself.
The Brain IPF shows adaptation to an external stimulus, which is stronger
without plasticity, showing the active brain not being a simple passive
\emph{tabula rasa}. A relation of 10-20\% of inhibitory vs. excitatory neurons,
as found in the brain, shows a maximum adaptation to an external stimulus
compared to all other relations, pointing to an optimum of this relation
concerning adaptation. When assuming strong brain periodicities only up to
about 100 Hz, the reflection strength of the model is highest for delays of
around 300 ms, corresponding to Event-Related Potential (ERP) timescales of
brain potentials most often found roughly between 100 - 400 ms. The mean
convergence times of the model correspond to short-time memory time scales with
a mean of five seconds for converging IPFs. The Brain IPF is computationally
very cheap, highly flexible, and with musical instruments already found to be
of high predictive precision. Therefore, in future studies, the Brain IPF might
be a model able to understand very large systems composed of an ensemble of
brains as well as cultural artifacts and ecological entities.
| [
{
"created": "Wed, 21 Dec 2022 14:03:27 GMT",
"version": "v1"
},
{
"created": "Mon, 23 Jan 2023 10:50:20 GMT",
"version": "v2"
}
] | 2023-01-24 | [
[
"Bader",
"Rolf",
""
]
] | A new brain model is introduced, based on the Impulse Pattern Formulation (IPF) already established for modeling and understanding musical instrument and rhythm perception and production. It assumes the brain works with impulses, neural bursts, ejected from an arbitrary reference point in the brain, arriving at other reflecting brain regions, and returning to the reference point delayed and damped. A plasticity model is suggested to adjust reflection strength in time. The model is systematically studied with 50 reflection points by varying the amount of excitatory vs. inhibitory neurons, the presence or absence of plasticity or external sensory input, and the strength of the input and plasticity in terms of system adaptation to an input or to the system itself. The Brain IPF shows adaptation to an external stimulus, which is stronger without plasticity, showing the active brain not being a simple passive \emph{tabula rasa}. A relation of 10-20\% of inhibitory vs. excitatory neurons, as found in the brain, shows a maximum adaptation to an external stimulus compared to all other relations, pointing to an optimum of this relation concerning adaptation. When assuming strong brain periodicities only up to about 100 Hz, the reflection strength of the model is highest for delays of around 300 ms, corresponding to Event-Related Potential (ERP) timescales of brain potentials most often found roughly between 100 - 400 ms. The mean convergence times of the model correspond to short-time memory time scales with a mean of five seconds for converging IPFs. The Brain IPF is computationally very cheap, highly flexible, and with musical instruments already found to be of high predictive precision. Therefore, in future studies, the Brain IPF might be a model able to understand very large systems composed of an ensemble of brains as well as cultural artifacts and ecological entities. |
q-bio/0610025 | Kai-Yeung Lau | Kai-Yeung Lau (UCSF), Surya Ganguli (UCSF), Chao Tang (UCSF) | Function Constrains Network Architecture and Dynamics: A Case Study on
the Yeast Cell Cycle Boolean Network | 10 pages, 11 figures, 2 tables Accepted for Physical Review E | null | 10.1103/PhysRevE.75.051907 | null | q-bio.MN | null | We develop a general method to explore how the function performed by a
biological network can constrain both its structural and dynamical network
properties. This approach is orthogonal to prior studies which examine the
functional consequences of a given structural feature, for example a scale free
architecture. A key step is to construct an algorithm that allows us to
efficiently sample from a maximum entropy distribution on the space of Boolean
dynamical networks constrained to perform a specific function, or cascade of
gene expression. Such a distribution can act as a "functional null model" to
test the significance of any given network feature, and can aid in revealing
underlying evolutionary selection pressures on various network properties.
Although our methods are general, we illustrate them in an analysis of the
yeast cell cycle cascade. This analysis uncovers strong constraints on the
architecture of the cell cycle regulatory network as well as significant
selection pressures on this network to maintain ordered and convergent
dynamics, possibly at the expense of sacrificing robustness to structural
perturbations.
| [
{
"created": "Fri, 13 Oct 2006 18:44:39 GMT",
"version": "v1"
},
{
"created": "Tue, 24 Apr 2007 04:36:32 GMT",
"version": "v2"
}
] | 2009-11-13 | [
[
"Lau",
"Kai-Yeung",
"",
"UCSF"
],
[
"Ganguli",
"Surya",
"",
"UCSF"
],
[
"Tang",
"Chao",
"",
"UCSF"
]
] | We develop a general method to explore how the function performed by a biological network can constrain both its structural and dynamical network properties. This approach is orthogonal to prior studies which examine the functional consequences of a given structural feature, for example a scale free architecture. A key step is to construct an algorithm that allows us to efficiently sample from a maximum entropy distribution on the space of boolean dynamical networks constrained to perform a specific function, or cascade of gene expression. Such a distribution can act as a "functional null model" to test the significance of any given network feature, and can aid in revealing underlying evolutionary selection pressures on various network properties. Although our methods are general, we illustrate them in an analysis of the yeast cell cycle cascade. This analysis uncovers strong constraints on the architecture of the cell cycle regulatory network as well as significant selection pressures on this network to maintain ordered and convergent dynamics, possibly at the expense of sacrificing robustness to structural perturbations. |
1702.06939 | Angelika Manhart | Pierre Degond, Angelika Manhart and Hui Yu | An age-structured continuum model for myxobacteria | null | null | null | null | q-bio.CB math.AP physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Myxobacteria are social bacteria that can glide in 2D and form
counter-propagating, interacting waves. Here we present a novel age-structured,
continuous macroscopic model for the movement of myxobacteria. The derivation
is based on microscopic interaction rules that can be formulated as a
particle-based model and set within the SOH (Self-Organized Hydrodynamics)
framework. The strength of this combined approach is that microscopic knowledge
or data can be incorporated easily into the particle model, whilst the
continuous model allows for easy numerical analysis of the different effects.
However, we found that the derived macroscopic model lacks a diffusion term in
the density equations, which is necessary to control the number of waves,
indicating that a higher order approximation during the derivation is crucial.
Upon ad-hoc addition of the diffusion term, we found very good agreement
between the age-structured model and the biology. In particular we analyzed the
influence of a refractory (insensitivity) period following a reversal of
movement. Our analysis reveals that the refractory period is not necessary for
wave formation, but essential to wave synchronization, indicating separate
molecular mechanisms.
| [
{
"created": "Wed, 22 Feb 2017 18:52:34 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Apr 2018 14:58:38 GMT",
"version": "v2"
}
] | 2018-04-06 | [
[
"Degond",
"Pierre",
""
],
[
"Manhart",
"Angelika",
""
],
[
"Yu",
"Hui",
""
]
] | Myxobacteria are social bacteria that can glide in 2D and form counter-propagating, interacting waves. Here we present a novel age-structured, continuous macroscopic model for the movement of myxobacteria. The derivation is based on microscopic interaction rules that can be formulated as a particle-based model and set within the SOH (Self-Organized Hydrodynamics) framework. The strength of this combined approach is that microscopic knowledge or data can be incorporated easily into the particle model, whilst the continuous model allows for easy numerical analysis of the different effects. However, we found that the derived macroscopic model lacks a diffusion term in the density equations, which is necessary to control the number of waves, indicating that a higher order approximation during the derivation is crucial. Upon ad-hoc addition of the diffusion term, we found very good agreement between the age-structured model and the biology. In particular we analyzed the influence of a refractory (insensitivity) period following a reversal of movement. Our analysis reveals that the refractory period is not necessary for wave formation, but essential to wave synchronization, indicating separate molecular mechanisms. |
1202.3834 | Oscar Westesson | Oscar Westesson, Ian Holmes | Developing and applying heterogeneous phylogenetic models with XRate | 34 pages, 3 figures, glossary of XRate model terminology | null | 10.1371/journal.pone.0036898 | null | q-bio.QM q-bio.GN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling sequence evolution on phylogenetic trees is a useful technique in
computational biology. Especially powerful are models which take account of the
heterogeneous nature of sequence evolution according to the "grammar" of the
encoded gene features. However, beyond a modest level of model complexity,
manual coding of models becomes prohibitively labor-intensive. We demonstrate,
via a set of case studies, the new built-in model-prototyping capabilities of
XRate (macros and Scheme extensions). These features allow rapid implementation
of phylogenetic models which would have previously been far more
labor-intensive. XRate's new capabilities for lineage-specific models,
ancestral sequence reconstruction, and improved annotation output are also
discussed. XRate's flexible model-specification capabilities and computational
efficiency make it well-suited to developing and prototyping phylogenetic
grammar models. XRate is available as part of the DART software package:
http://biowiki.org/DART .
| [
{
"created": "Fri, 17 Feb 2012 03:33:30 GMT",
"version": "v1"
}
] | 2015-06-04 | [
[
"Westesson",
"Oscar",
""
],
[
"Holmes",
"Ian",
""
]
] | Modeling sequence evolution on phylogenetic trees is a useful technique in computational biology. Especially powerful are models which take account of the heterogeneous nature of sequence evolution according to the "grammar" of the encoded gene features. However, beyond a modest level of model complexity, manual coding of models becomes prohibitively labor-intensive. We demonstrate, via a set of case studies, the new built-in model-prototyping capabilities of XRate (macros and Scheme extensions). These features allow rapid implementation of phylogenetic models which would have previously been far more labor-intensive. XRate's new capabilities for lineage-specific models, ancestral sequence reconstruction, and improved annotation output are also discussed. XRate's flexible model-specification capabilities and computational efficiency make it well-suited to developing and prototyping phylogenetic grammar models. XRate is available as part of the DART software package: http://biowiki.org/DART . |
1501.05391 | Aditya Hernowo | Aditya T. Hernowo, Doety Prins, Heidi A. Baseler, Tina Plank, Andre
Gouws, Johanna M.M. Hooymans, Antony B. Morland, Mark W. Greenlee and Frans
W. Cornelissen | Morphometric analyses of the visual pathways in macular degeneration | appears in Cortex (2013) | null | 10.1016/j.cortex.2013.01.003 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Introduction. Macular degeneration (MD) causes central visual field loss.
When field defects occur in both eyes and overlap, parts of the visual pathways
are no longer stimulated. Previous reports have shown that this affects the
grey matter of the primary visual cortex, but possible effects on the preceding
visual pathway structures have not been fully established. Method. In this
multicentre study, we used high-resolution anatomical magnetic resonance
imaging and voxel-based morphometry to investigate the visual pathway
structures up to the primary visual cortex of patients with age-related macular
degeneration (AMD) and juvenile macular degeneration (JMD). Results. Compared
to age-matched healthy controls, in patients with JMD we found volumetric
reductions in the optic nerves, the chiasm, the lateral geniculate bodies, the
optic radiations and the visual cortex. In patients with AMD we found
volumetric reductions in the lateral geniculate bodies, the optic radiations
and the visual cortex. An unexpected finding was that AMD, but not JMD, was
associated with a reduction in frontal white matter volume. Conclusion. MD is
associated with degeneration of structures along the visual pathways. A
reduction in frontal white matter volume only present in the AMD patients may
constitute a neural correlate of the previously reported association between AMD
and mild cognitive impairment.
Keywords: macular degeneration - visual pathway - visual field - voxel-based
morphometry
| [
{
"created": "Thu, 22 Jan 2015 04:34:45 GMT",
"version": "v1"
}
] | 2015-01-23 | [
[
"Hernowo",
"Aditya T.",
""
],
[
"Prins",
"Doety",
""
],
[
"Baseler",
"Heidi A.",
""
],
[
"Plank",
"Tina",
""
],
[
"Gouws",
"Andre",
""
],
[
"Hooymans",
"Johanna M. M.",
""
],
[
"Morland",
"Antony B.",
""
... | Introduction. Macular degeneration (MD) causes central visual field loss. When field defects occur in both eyes and overlap, parts of the visual pathways are no longer stimulated. Previous reports have shown that this affects the grey matter of the primary visual cortex, but possible effects on the preceding visual pathway structures have not been fully established. Method. In this multicentre study, we used high-resolution anatomical magnetic resonance imaging and voxel-based morphometry to investigate the visual pathway structures up to the primary visual cortex of patients with age-related macular degeneration (AMD) and juvenile macular degeneration (JMD). Results. Compared to age-matched healthy controls, in patients with JMD we found volumetric reductions in the optic nerves, the chiasm, the lateral geniculate bodies, the optic radiations and the visual cortex. In patients with AMD we found volumetric reductions in the lateral geniculate bodies, the optic radiations and the visual cortex. An unexpected finding was that AMD, but not JMD, was associated with a reduction in frontal white matter volume. Conclusion. MD is associated with degeneration of structures along the visual pathways. A reduction in frontal white matter volume only present in the AMD patients may constitute a neural correlate of previously reported association between AMD and mild cognitive impairment. Keywords: macular degeneration - visual pathway - visual field - voxel-based morphometry |
1601.07192 | Ke-Fei Cao | Huan Wang, Chuan-Yun Xu, Jing-Bo Hu, Ke-Fei Cao | A complex network analysis of hypertension-related genes | 13 pages, 6 figures, 3 tables; a revised version of an article
published in Physica A: Statistical Mechanics and its Applications | Physica A 394 (2014) 166--176 | 10.1016/j.physa.2013.09.054 | null | q-bio.MN physics.bio-ph physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, a network of hypertension-related genes is constructed by
analyzing the correlations of gene expression data among the Dahl
salt-sensitive rat and two consomic rat strains. The numerical calculations
show that this sparse and assortative network has small-world and scale-free
properties. Further, 16 key hub genes (Col4a1, Lcn2, Cdk4, etc.) are determined
by introducing an integrated centrality and have been confirmed by
biological/medical research to play important roles in hypertension.
| [
{
"created": "Wed, 27 Jan 2016 02:44:05 GMT",
"version": "v1"
}
] | 2016-01-28 | [
[
"Wang",
"Huan",
""
],
[
"Xu",
"Chuan-Yun",
""
],
[
"Hu",
"Jing-Bo",
""
],
[
"Cao",
"Ke-Fei",
""
]
] | In this paper, a network of hypertension-related genes is constructed by analyzing the correlations of gene expression data among the Dahl salt-sensitive rat and two consomic rat strains. The numerical calculations show that this sparse and assortative network has small-world and scale-free properties. Further, 16 key hub genes (Col4a1, Lcn2, Cdk4, etc.) are determined by introducing an integrated centrality and have been confirmed by biological/medical research to play important roles in hypertension. |
2005.05320 | Alex Viguerie PhD | Alex Viguerie, Guillermo Lorenzo, Ferdinando Auricchio, Davide Baroli,
Thomas J.R. Hughes, Alessia Patton, Alessandro Reali, Thomas E. Yankeelov,
Alessandro Veneziani | Simulating the spread of COVID-19 via spatially-resolved
susceptible-exposed-infected-recovered-deceased (SEIRD) model with
heterogeneous diffusion | 7 pages, 3 figures | null | null | null | q-bio.PE cs.NA math.NA physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present an early version of a
Susceptible-Exposed-Infected-Recovered-Deceased (SEIRD) mathematical model
based on partial differential equations coupled with a heterogeneous diffusion
model. The model describes the spatio-temporal spread of the COVID-19 pandemic,
and aims to capture dynamics also based on human habits and geographical
features. To test the model, we compare the outputs generated by a
finite-element solver with measured data over the Italian region of Lombardy,
which has been heavily impacted by this crisis between February and April 2020.
Our results show a strong qualitative agreement between the simulated forecast
of the spatio-temporal COVID-19 spread in Lombardy and epidemiological data
collected at the municipality level. Additional simulations exploring
alternative scenarios for the relaxation of lockdown restrictions suggest that
reopening strategies should account for local population densities and the
specific dynamics of the contagion. Thus, we argue that data-driven simulations
of our model could ultimately inform health authorities to design effective
pandemic-arresting measures and anticipate the geographical allocation of
crucial medical resources.
| [
{
"created": "Mon, 11 May 2020 14:43:36 GMT",
"version": "v1"
}
] | 2020-05-13 | [
[
"Viguerie",
"Alex",
""
],
[
"Lorenzo",
"Guillermo",
""
],
[
"Auricchio",
"Ferdinando",
""
],
[
"Baroli",
"Davide",
""
],
[
"Hughes",
"Thomas J. R.",
""
],
[
"Patton",
"Alessia",
""
],
[
"Reali",
"Alessandro",
... | We present an early version of a Susceptible-Exposed-Infected-Recovered-Deceased (SEIRD) mathematical model based on partial differential equations coupled with a heterogeneous diffusion model. The model describes the spatio-temporal spread of the COVID-19 pandemic, and aims to capture dynamics also based on human habits and geographical features. To test the model, we compare the outputs generated by a finite-element solver with measured data over the Italian region of Lombardy, which has been heavily impacted by this crisis between February and April 2020. Our results show a strong qualitative agreement between the simulated forecast of the spatio-temporal COVID-19 spread in Lombardy and epidemiological data collected at the municipality level. Additional simulations exploring alternative scenarios for the relaxation of lockdown restrictions suggest that reopening strategies should account for local population densities and the specific dynamics of the contagion. Thus, we argue that data-driven simulations of our model could ultimately inform health authorities to design effective pandemic-arresting measures and anticipate the geographical allocation of crucial medical resources. |
2004.09428 | Joshua Welsh | Joshua A. Welsh, Edwin van der Pol, Britta A. Bettin, David R. F.
Carter, An Hendrix, Metka Lenassi, Marc-Andr\'e Langlois, Alicia Llorente,
Arthur S. van de Nes, Rienk Nieuwland, Vera Tang, Lili Wang, Kenneth W.
Witwer, Jennifer C. Jones | Towards defining reference materials for extracellular vesicle size,
concentration, refractive index and epitope abundance | 30 pages, 6 figures, 2 tables | null | 10.1080/20013078.2020.1816641 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Accurate characterization of extracellular vesicles (EVs) is critical to
explore their diagnostic and therapeutic applications. As the EV research field
has developed, so too have the techniques used to characterize them. The
development of reference materials is required for the standardization of these
techniques. This work, initiated from the ISEV 2017 Biomarker Workshop in
Birmingham, UK, and with further discussion during the ISEV 2019
Standardization Workshop in Ghent, Belgium, sets out to elucidate which
reference materials are required and which are currently available to
standardize commonly used analysis platforms for characterizing EV size,
concentration, refractive index, and epitope expression. Due to their
predominant use, a particular focus is placed on the optical methods
nanoparticle tracking analysis and flow cytometry.
| [
{
"created": "Mon, 20 Apr 2020 16:32:04 GMT",
"version": "v1"
}
] | 2020-09-29 | [
[
"Welsh",
"Joshua A.",
""
],
[
"van der Pol",
"Edwin",
""
],
[
"Bettin",
"Britta A.",
""
],
[
"Carter",
"David R. F.",
""
],
[
"Hendrix",
"An",
""
],
[
"Lenassi",
"Metka",
""
],
[
"Langlois",
"Marc-André",
"... | Accurate characterization of extracellular vesicles (EVs) is critical to explore their diagnostic and therapeutic applications. As the EV research field has developed, so too have the techniques used to characterize them. The development of reference materials is required for the standardization of these techniques. This work, initiated from the ISEV 2017 Biomarker Workshop in Birmingham, UK, and with further discussion during the ISEV 2019 Standardization Workshop in Ghent, Belgium, sets out to elucidate which reference materials are required and which are currently available to standardize commonly used analysis platforms for characterizing EV size, concentration, refractive index, and epitope expression. Due to their predominant use, a particular focus is placed on the optical methods nanoparticle tracking analysis and flow cytometry. |
2311.08703 | Gi-Hwan Shin | Gi-Hwan Shin and Young-Seok Kweon and Heon-Gyu Kwak and Ha-Na Jo and
Seong-Whan Lee | Impact of Nap on Performance in Different Working Memory Tasks Using EEG | Submitted to 2024 12th IEEE International Winter Conference on
Brain-Computer Interface | null | null | null | q-bio.NC cs.HC | http://creativecommons.org/licenses/by/4.0/ | Electroencephalography (EEG) has been widely used to study the relationship
between naps and working memory, yet the effects of naps on distinct working
memory tasks remain unclear. Here, participants performed word-pair and
visuospatial working memory tasks in pre- and post-nap sessions. We found marked
differences in accuracy and reaction time between tasks performed pre- and
post-nap. In order to identify the impact of naps on performance in each
working memory task, we employed clustering to classify participants as high-
or low-performers. Analysis of sleep architecture revealed significant
variations in sleep onset latency and rapid eye movement (REM) proportion. In
addition, the two groups exhibited prominent differences, especially in the
delta power of the Non-REM 3 stage linked to memory. Our results emphasize the
interplay between nap-related neural activity and working memory, underlining
specific EEG markers associated with cognitive performance.
| [
{
"created": "Wed, 15 Nov 2023 05:09:09 GMT",
"version": "v1"
}
] | 2023-11-16 | [
[
"Shin",
"Gi-Hwan",
""
],
[
"Kweon",
"Young-Seok",
""
],
[
"Kwak",
"Heon-Gyu",
""
],
[
"Jo",
"Ha-Na",
""
],
[
"Lee",
"Seong-Whan",
""
]
] | Electroencephalography (EEG) has been widely used to study the relationship between naps and working memory, yet the effects of naps on distinct working memory tasks remain unclear. Here, participants performed word-pair and visuospatial working memory tasks pre- and post-nap sessions. We found marked differences in accuracy and reaction time between tasks performed pre- and post-nap. In order to identify the impact of naps on performance in each working memory task, we employed clustering to classify participants as high- or low-performers. Analysis of sleep architecture revealed significant variations in sleep onset latency and rapid eye movement (REM) proportion. In addition, the two groups exhibited prominent differences, especially in the delta power of the Non-REM 3 stage linked to memory. Our results emphasize the interplay between nap-related neural activity and working memory, underlining specific EEG markers associated with cognitive performance. |
2109.14766 | Soha Hassoun | Xinmeng Li, Li-ping Liu, Soha Hassoun | Boost-RS: Boosted Embeddings for Recommender Systems and its Application
to Enzyme-Substrate Interaction Prediction | 9 pages; 2 figures | null | null | null | q-bio.QM cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Despite experimental and curation efforts, the extent of enzyme promiscuity
on substrates continues to be largely unexplored and under-documented.
Recommender systems (RS), which are currently unexplored for the
enzyme-substrate interaction prediction problem, can be utilized to provide
enzyme recommendations for substrates, and vice versa. The performance of
Collaborative-Filtering (CF) recommender systems however hinges on the quality
of embedding vectors of users and items (enzymes and substrates in our case).
Importantly, enhancing CF embeddings with heterogeneous auxiliary data,
especially relational data (e.g., hierarchical, pairwise, or groupings), remains
a challenge. We propose an innovative general RS framework, termed Boost-RS,
that enhances RS performance by "boosting" embedding vectors through auxiliary
data. Specifically, Boost-RS is trained and dynamically tuned on multiple
relevant auxiliary learning tasks. Boost-RS utilizes contrastive learning tasks
to exploit relational data. To show the efficacy of Boost-RS for the
enzyme-substrate interaction prediction problem, we apply the Boost-RS
framework to several baseline CF models. We show that each of our auxiliary
tasks boosts learning of the embedding vectors, and that contrastive learning
using Boost-RS outperforms attribute concatenation and multi-label learning. We
also show that Boost-RS outperforms similarity-based models. Ablation studies
and visualization of learned representations highlight the importance of using
contrastive learning on some of the auxiliary data in boosting the embedding
vectors.
| [
{
"created": "Tue, 28 Sep 2021 19:21:28 GMT",
"version": "v1"
}
] | 2021-10-01 | [
[
"Li",
"Xinmeng",
""
],
[
"Liu",
"Li-ping",
""
],
[
"Hassoun",
"Soha",
""
]
] | Despite experimental and curation efforts, the extent of enzyme promiscuity on substrates continues to be largely unexplored and under-documented. Recommender systems (RS), which are currently unexplored for the enzyme-substrate interaction prediction problem, can be utilized to provide enzyme recommendations for substrates, and vice versa. The performance of Collaborative-Filtering (CF) recommender systems however hinges on the quality of embedding vectors of users and items (enzymes and substrates in our case). Importantly, enhancing CF embeddings with heterogeneous auxiliary data, especially relational data (e.g., hierarchical, pairwise, or groupings), remains a challenge. We propose an innovative general RS framework, termed Boost-RS, that enhances RS performance by "boosting" embedding vectors through auxiliary data. Specifically, Boost-RS is trained and dynamically tuned on multiple relevant auxiliary learning tasks. Boost-RS utilizes contrastive learning tasks to exploit relational data. To show the efficacy of Boost-RS for the enzyme-substrate interaction prediction problem, we apply the Boost-RS framework to several baseline CF models. We show that each of our auxiliary tasks boosts learning of the embedding vectors, and that contrastive learning using Boost-RS outperforms attribute concatenation and multi-label learning. We also show that Boost-RS outperforms similarity-based models. Ablation studies and visualization of learned representations highlight the importance of using contrastive learning on some of the auxiliary data in boosting the embedding vectors. |
1803.01651 | Francesc Rossell\'o | Tom\'as M. Coronado, Arnau Mir, Francesc Rossell\'o, Gabriel Valiente | A balance index for phylogenetic trees based on rooted quartets | 38 pages, 12 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We define a new balance index for rooted phylogenetic trees based on the
symmetry of the evolutive history of every set of 4 leaves. This index makes
sense for multifurcating trees and it can be computed in time linear in the
number of leaves. We determine its maximum and minimum values for arbitrary and
bifurcating trees, and we provide exact formulas for its expected value and
variance on bifurcating trees under Ford's $\alpha$-model and Aldous'
$\beta$-model and on arbitrary trees under the $\alpha$-$\gamma$-model.
| [
{
"created": "Mon, 5 Mar 2018 13:22:20 GMT",
"version": "v1"
},
{
"created": "Fri, 22 Mar 2019 08:47:00 GMT",
"version": "v2"
}
] | 2019-03-25 | [
[
"Coronado",
"Tomás M.",
""
],
[
"Mir",
"Arnau",
""
],
[
"Rosselló",
"Francesc",
""
],
[
"Valiente",
"Gabriel",
""
]
] | We define a new balance index for rooted phylogenetic trees based on the symmetry of the evolutive history of every set of 4 leaves. This index makes sense for multifurcating trees and it can be computed in time linear in the number of leaves. We determine its maximum and minimum values for arbitrary and bifurcating trees, and we provide exact formulas for its expected value and variance on bifurcating trees under Ford's $\alpha$-model and Aldous' $\beta$-model and on arbitrary trees under the $\alpha$-$\gamma$-model. |
1004.4186 | Ma\"el Mont\'evil | Francis Bailly, Giuseppe Longo, Ma\"el Mont\'evil | A 2-dimensional Geometry for Biological Time | Presented in an invited Lecture, conference "Biologie e selezioni
naturali", Florence, December 4-8, 2009 | Progress in Biophysics and Molecular Biology 106 (3): 474-484
(2011), ISSN: 0079-6107 | 10.1016/j.pbiomolbio.2011.02.001 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper proposes an abstract mathematical frame for describing some
features of biological time. The key point is that usual physical (linear)
representation of time is insufficient, in our view, for understanding key
phenomena of life, such as rhythms, both physical (circadian, seasonal ...) and
properly biological (heart beating, respiration, metabolic ...). In particular,
the role of biological rhythms does not seem to have any counterpart in
mathematical formalization of physical clocks, which are based on frequencies
along the usual (possibly thermodynamical, thus oriented) time. We then suggest
a functional representation of biological time by a 2-dimensional manifold as a
mathematical frame for accommodating autonomous biological rhythms. The
"visual" representation of rhythms so obtained, in particular heart beatings,
will provide, by a few examples, hints towards possible applications of our
approach to the understanding of interspecific differences or intraspecific
pathologies. The 3-dimensional embedding space, needed for purely mathematical
reasons, allows us to introduce a suitable extra-dimension for "representation
time", with a cognitive significance.
| [
{
"created": "Fri, 23 Apr 2010 17:25:17 GMT",
"version": "v1"
},
{
"created": "Thu, 9 Sep 2010 10:59:53 GMT",
"version": "v2"
},
{
"created": "Fri, 6 Apr 2012 01:40:49 GMT",
"version": "v3"
}
] | 2012-04-09 | [
[
"Bailly",
"Francis",
""
],
[
"Longo",
"Giuseppe",
""
],
[
"Montévil",
"Maël",
""
]
] | This paper proposes an abstract mathematical frame for describing some features of biological time. The key point is that usual physical (linear) representation of time is insufficient, in our view, for understanding key phenomena of life, such as rhythms, both physical (circadian, seasonal ...) and properly biological (heart beating, respiration, metabolic ...). In particular, the role of biological rhythms does not seem to have any counterpart in mathematical formalization of physical clocks, which are based on frequencies along the usual (possibly thermodynamical, thus oriented) time. We then suggest a functional representation of biological time by a 2-dimensional manifold as a mathematical frame for accommodating autonomous biological rhythms. The "visual" representation of rhythms so obtained, in particular heart beatings, will provide, by a few examples, hints towards possible applications of our approach to the understanding of interspecific differences or intraspecific pathologies. The 3-dimensional embedding space, needed for purely mathematical reasons, allows us to introduce a suitable extra-dimension for "representation time", with a cognitive significance. |
1604.04683 | John Medaglia | John D. Medaglia, Fabio Pasqualetti, Roy H. Hamilton, Sharon L.
Thompson-Schill, and Danielle S. Bassett | Brain and Cognitive Reserve: Translation via Network Control Theory | null | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traditional approaches to understanding the brain's resilience to
neuropathology have identified neurophysiological variables, often described as
brain or cognitive 'reserve,' associated with better outcomes. However,
mechanisms of function and resilience in large-scale brain networks remain
poorly understood. Dynamic network theory may provide a basis for substantive
advances in understanding functional resilience in the human brain. In this
perspective, we describe recent theoretical approaches from network control
theory as a framework for investigating network level mechanisms underlying
cognitive function and the dynamics of neuroplasticity in the human brain. We
describe the theoretical opportunities offered by the application of network
control theory at the level of the human connectome to understand cognitive
resilience and inform translational intervention.
| [
{
"created": "Sat, 16 Apr 2016 02:50:00 GMT",
"version": "v1"
},
{
"created": "Tue, 17 Jan 2017 13:56:10 GMT",
"version": "v2"
}
] | 2017-01-18 | [
[
"Medaglia",
"John D.",
""
],
[
"Pasqualetti",
"Fabio",
""
],
[
"Hamilton",
"Roy H.",
""
],
[
"Thompson-Schill",
"Sharon L.",
""
],
[
"Bassett",
"Danielle S.",
""
]
] | Traditional approaches to understanding the brain's resilience to neuropathology have identified neurophysiological variables, often described as brain or cognitive 'reserve,' associated with better outcomes. However, mechanisms of function and resilience in large-scale brain networks remain poorly understood. Dynamic network theory may provide a basis for substantive advances in understanding functional resilience in the human brain. In this perspective, we describe recent theoretical approaches from network control theory as a framework for investigating network level mechanisms underlying cognitive function and the dynamics of neuroplasticity in the human brain. We describe the theoretical opportunities offered by the application of network control theory at the level of the human connectome to understand cognitive resilience and inform translational intervention. |
1601.08121 | Vitaly Ganusov | Vitaly V. Ganusov | Physical and mathematical modeling in experimental papers: achieving
robustness of mathematical modeling studies | response to a recent paper Moebius and Laan Cell 2015 | null | null | null | q-bio.QM | http://creativecommons.org/publicdomain/zero/1.0/ | Development of several alternative mathematical models for the biological
system in question and discrimination between such models using experimental
data is the best way to reach robust conclusions. Models which challenge existing
theories are more valuable than models which support such theories.
| [
{
"created": "Thu, 28 Jan 2016 03:04:37 GMT",
"version": "v1"
}
] | 2016-02-01 | [
[
"Ganusov",
"Vitaly V.",
""
]
] | Development of several alternative mathematical models for the biological system in question and discrimination between such models using experimental data is the best way to reach robust conclusions. Models which challenge existing theories are more valuable than models which support such theories. |
1403.6426 | Elmar Peise | Elmar Peise (1), Diego Fabregat-Traver (1), Paolo Bientinesi (1) ((1)
AICES, RWTH Aachen) | High Performance Solutions for Big-data GWAS | Submitted to Parallel Computing. arXiv admin note: substantial text
overlap with arXiv:1304.2272 | null | null | AICES-2013/12-01 | q-bio.GN cs.CE cs.MS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to associate complex traits with genetic polymorphisms, genome-wide
association studies process huge datasets involving tens of thousands of
individuals genotyped for millions of polymorphisms. When handling these
datasets, which exceed the main memory of contemporary computers, one faces two
distinct challenges: 1) Millions of polymorphisms and thousands of phenotypes
come at the cost of hundreds of gigabytes of data, which can only be kept in
secondary storage; 2) the relatedness of the test population is represented by
a relationship matrix, which, for large populations, can only fit in the
combined main memory of a distributed architecture. In this paper, by using
distributed resources such as Cloud or clusters, we address both challenges:
The genotype and phenotype data is streamed from secondary storage using a
double buffering technique, while the relationship matrix is kept across the
main memory of a distributed memory system. With the help of these solutions,
we develop separate algorithms for studies involving only one or a multitude of
traits. We show that these algorithms sustain high performance and allow the
analysis of enormous datasets.
| [
{
"created": "Tue, 25 Mar 2014 17:21:55 GMT",
"version": "v1"
}
] | 2014-03-26 | [
[
"Peise",
"Elmar",
""
],
[
"Fabregat-Traver",
"Diego",
""
],
[
"Bientinesi",
"Paolo",
""
]
] | In order to associate complex traits with genetic polymorphisms, genome-wide association studies process huge datasets involving tens of thousands of individuals genotyped for millions of polymorphisms. When handling these datasets, which exceed the main memory of contemporary computers, one faces two distinct challenges: 1) Millions of polymorphisms and thousands of phenotypes come at the cost of hundreds of gigabytes of data, which can only be kept in secondary storage; 2) the relatedness of the test population is represented by a relationship matrix, which, for large populations, can only fit in the combined main memory of a distributed architecture. In this paper, by using distributed resources such as Cloud or clusters, we address both challenges: The genotype and phenotype data is streamed from secondary storage using a double buffering technique, while the relationship matrix is kept across the main memory of a distributed memory system. With the help of these solutions, we develop separate algorithms for studies involving only one or a multitude of traits. We show that these algorithms sustain high performance and allow the analysis of enormous datasets. |
1402.5060 | Philippe Marcq | Olivier Cochet-Escartin, Jonas Ranft, Pascal Silberzan, Philippe Marcq | Border forces and friction control epithelial closure dynamics | 44 pages, 17 figures | null | 10.1016/j.bpj.2013.11.015 | null | q-bio.CB q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epithelization, the process whereby an epithelium covers a cell-free surface,
is not only central to wound healing but also pivotal in embryonic
morphogenesis, regeneration, and cancer. In the context of wound healing, the
epithelization mechanisms differ depending on the sizes and geometries of the
wounds as well as on the cell type, while a unified theoretical description is
still lacking. Here, we used a barrier-based protocol that allows for making
large arrays of well-controlled circular model wounds within an epithelium at
confluence, without injuring any cells. We propose a physical model that takes
into account border forces, friction with the substrate, and tissue rheology.
Despite the presence of a contractile actomyosin cable at the periphery of the
wound, epithelization was mostly driven by border protrusive activity. Closure
dynamics was quantified by an epithelization coefficient $D = \sigma_p/\xi$
defined as the ratio of the border protrusive stress $\sigma_p$ to the friction
coefficient $\xi$ between epithelium and substrate. The same assay and model
showed a high sensitivity to the RasV12 mutation on human epithelial cells,
demonstrating the general applicability of the approach and its potential to
quantitatively characterize metastatic transformations.
| [
{
"created": "Thu, 20 Feb 2014 16:31:15 GMT",
"version": "v1"
}
] | 2014-02-21 | [
[
"Cochet-Escartin",
"Olivier",
""
],
[
"Ranft",
"Jonas",
""
],
[
"Silberzan",
"Pascal",
""
],
[
"Marcq",
"Philippe",
""
]
] | Epithelization, the process whereby an epithelium covers a cell-free surface, is not only central to wound healing but also pivotal in embryonic morphogenesis, regeneration, and cancer. In the context of wound healing, the epithelization mechanisms differ depending on the sizes and geometries of the wounds as well as on the cell type, while a unified theoretical description is still lacking. Here, we used a barrier-based protocol that allows for making large arrays of well-controlled circular model wounds within an epithelium at confluence, without injuring any cells. We propose a physical model that takes into account border forces, friction with the substrate, and tissue rheology. Despite the presence of a contractile actomyosin cable at the periphery of the wound, epithelization was mostly driven by border protrusive activity. Closure dynamics was quantified by an epithelization coefficient $D = \sigma_p/\xi$ defined as the ratio of the border protrusive stress $\sigma_p$ to the friction coefficient $\xi$ between epithelium and substrate. The same assay and model showed a high sensitivity to the RasV12 mutation on human epithelial cells, demonstrating the general applicability of the approach and its potential to quantitatively characterize metastatic transformations. |
1503.09147 | Jaline Gerardin | Jaline Gerardin, Andre Lin Ouedraogo, Kevin A. McCarthy, Bocar
Kouyate, Philip A. Eckhoff, Edward A. Wenger | Characterization of the infectious reservoir of malaria with an
agent-based model calibrated to age-stratified parasite densities and
infectiousness | submitted to Malaria Journal on March 31, 2015 | Malaria Journal 2015, 14:231 | 10.1186/s12936-015-0751-y | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Elimination of malaria can only be achieved through removal of all
vectors or complete depletion of the infectious reservoir in humans.
Mechanistic models can be built to synthesize diverse observations from the
field collected under a variety of conditions and subsequently used to query
the infectious reservoir in great detail. Methods: The EMOD model of malaria
transmission was calibrated to prevalence, incidence, asexual parasite density,
gametocyte density, infection duration, and infectiousness data from 9 study
sites. The infectious reservoir was characterized by diagnostic detection limit
and age group over a range of transmission intensities with and without case
management and vector control. Mass screen-and-treat drug campaigns were tested
for likelihood of achieving elimination. Results: The composition of the
infectious reservoir by diagnostic threshold is similar over a range of
transmission intensities, and higher intensity settings are biased toward
infections in children. Recent ramp-ups in case management and use of
insecticide-treated bednets reduce the infectious reservoir and shift the
composition toward submicroscopic infections. Mass campaigns with antimalarial
drugs are highly effective at interrupting transmission if deployed shortly
after ITN campaigns. Conclusions: Low-density infections comprise a substantial
portion of the infectious reservoir. Proper timing of vector control, seasonal
variation in transmission intensity, and mass drug campaigns allow lingering
population immunity to help drive a region toward elimination.
| [
{
"created": "Tue, 31 Mar 2015 17:58:33 GMT",
"version": "v1"
}
] | 2015-07-13 | [
[
"Gerardin",
"Jaline",
""
],
[
"Ouedraogo",
"Andre Lin",
""
],
[
"McCarthy",
"Kevin A.",
""
],
[
"Kouyate",
"Bocar",
""
],
[
"Eckhoff",
"Philip A.",
""
],
[
"Wenger",
"Edward A.",
""
]
] | Background: Elimination of malaria can only be achieved through removal of all vectors or complete depletion of the infectious reservoir in humans. Mechanistic models can be built to synthesize diverse observations from the field collected under a variety of conditions and subsequently used to query the infectious reservoir in great detail. Methods: The EMOD model of malaria transmission was calibrated to prevalence, incidence, asexual parasite density, gametocyte density, infection duration, and infectiousness data from 9 study sites. The infectious reservoir was characterized by diagnostic detection limit and age group over a range of transmission intensities with and without case management and vector control. Mass screen-and-treat drug campaigns were tested for likelihood of achieving elimination. Results: The composition of the infectious reservoir by diagnostic threshold is similar over a range of transmission intensities, and higher intensity settings are biased toward infections in children. Recent ramp-ups in case management and use of insecticide-treated bednets reduce the infectious reservoir and shift the composition toward submicroscopic infections. Mass campaigns with antimalarial drugs are highly effective at interrupting transmission if deployed shortly after ITN campaigns. Conclusions: Low-density infections comprise a substantial portion of the infectious reservoir. Proper timing of vector control, seasonal variation in transmission intensity, and mass drug campaigns allow lingering population immunity to help drive a region toward elimination. |
1802.10401 | Caroline Gr\"onwall | Katy A. Lloyd, Johanna Steen, Khaled Amara, Philip J. Titcombe, Lena
Israelsson, Susanna L. Lundstrom, Diana Zhou, Roman A. Zubarev, Evan Reed,
Luca Piccoli, Cem Gabay, Antonio Lanzavecchia, Dominique Baeten, Karin
Lundberg, Daniel L. Mueller, Lars Klareskog, Vivianne Malmstrom, and Caroline
Gronwall | Variable domain N-linked glycosylation and negative surface charge are
key features of monoclonal ACPA: implications for B-cell selection | null | null | 10.1002/eji.201747446 | null | q-bio.BM q-bio.CB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Autoreactive B cells have a central role in the pathogenesis of rheumatoid
arthritis (RA), and recent findings have proposed that anti-citrullinated
protein autoantibodies (ACPA) may be directly pathogenic. Herein, we
demonstrate the frequency of variable-region glycosylation in single-cell
cloned mAbs. A total of 14 ACPA mAbs were evaluated for predicted N-linked
glycosylation motifs in silico and compared to 452 highly-mutated mAbs from RA
patients and controls. Variable region N-linked motifs (N-X-S/T) were
strikingly prevalent within ACPA (100%) compared to somatically hypermutated
(SHM) RA bone marrow plasma cells (21%), and synovial plasma cells from
seropositive (39%) and seronegative RA (7%). When normalized for SHM, ACPA
still had a significantly higher frequency of N-linked motifs compared to all
studied mAbs including highly-mutated HIV broadly-neutralizing and
malaria-associated mAbs. The Fab glycans of ACPA-mAbs were highly sialylated,
contributed to altered charge, but did not influence antigen binding. The
analysis revealed evidence of unusual B-cell selection pressure and
SHM-mediated decreases in surface charge and isoelectric point in ACPA. It is
still unknown how these distinct features of anti-citrulline immunity may have
an impact on pathogenesis. However, it is evident that they offer selective
advantages for ACPA+ B cells, possibly also through non-antigen driven
mechanisms.
| [
{
"created": "Wed, 28 Feb 2018 13:33:25 GMT",
"version": "v1"
}
] | 2018-03-01 | [
[
"Lloyd",
"Katy A.",
""
],
[
"Steen",
"Johanna",
""
],
[
"Amara",
"Khaled",
""
],
[
"Titcombe",
"Philip J.",
""
],
[
"Israelsson",
"Lena",
""
],
[
"Lundstrom",
"Susanna L.",
""
],
[
"Zhou",
"Diana",
""
],
... | Autoreactive B cells have a central role in the pathogenesis of rheumatoid arthritis (RA), and recent findings have proposed that anti-citrullinated protein autoantibodies (ACPA) may be directly pathogenic. Herein, we demonstrate the frequency of variable-region glycosylation in single-cell cloned mAbs. A total of 14 ACPA mAbs were evaluated for predicted N-linked glycosylation motifs in silico and compared to 452 highly-mutated mAbs from RA patients and controls. Variable region N-linked motifs (N-X-S/T) were strikingly prevalent within ACPA (100%) compared to somatically hypermutated (SHM) RA bone marrow plasma cells (21%), and synovial plasma cells from seropositive (39%) and seronegative RA (7%). When normalized for SHM, ACPA still had a significantly higher frequency of N-linked motifs compared to all studied mAbs including highly-mutated HIV broadly-neutralizing and malaria-associated mAbs. The Fab glycans of ACPA-mAbs were highly sialylated, contributed to altered charge, but did not influence antigen binding. The analysis revealed evidence of unusual B-cell selection pressure and SHM-mediated decreases in surface charge and isoelectric point in ACPA. It is still unknown how these distinct features of anti-citrulline immunity may have an impact on pathogenesis. However, it is evident that they offer selective advantages for ACPA+ B cells, possibly also through non-antigen driven mechanisms. |
1709.10296 | Thu Thuy Nguyen | Paul Loubet (1), Charles Burdet (1,2), William Vindrios, Nathalie
Grall (1), Michel Wolff, Yazdan Yazdanpanah (1), Antoine Andremont (1),
Xavier Duval (1), Fran\c{c}ois-Xavier Lescure (1) ((1) IAME, (2) DEBRC) | Cefazolin versus anti-staphylococcal penicillins for treatment of
methicillin-susceptible Staphylococcus aureus bacteraemia: a narrative review | null | Clinical Microbiology and Infection, Wiley, 2017 | 10.1016/j.cmi.2017.07.003 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anti-staphylococcal penicillins (ASPs) are recommended as first-line agents
in methicillin-susceptible Staphylococcus aureus (MSSA) bacteraemia. Concerns
about their safety profile have contributed to the increased use of cefazolin.
The comparative clinical effectiveness and safety profile of cefazolin versus
ASPs for such infections remain unclear. Furthermore, uncertainty persists
concerning the use of cefazolin due to controversies over its efficacy in deep
MSSA infections and its possible negative ecological impact.
| [
{
"created": "Fri, 29 Sep 2017 09:20:07 GMT",
"version": "v1"
}
] | 2017-10-02 | [
[
"Loubet",
"Paul",
"",
"IAME"
],
[
"Burdet",
"Charles",
"",
"IAME",
"DEBRC"
],
[
"Vindrios",
"William",
"",
"IAME"
],
[
"Grall",
"Nathalie",
"",
"IAME"
],
[
"Wolff",
"Michel",
"",
"IAME"
],
[
"Yazdanpanah",
... | Anti-staphylococcal penicillins (ASPs) are recommended as first-line agents in methicillin-susceptible Staphylococcus aureus (MSSA) bacteraemia. Concerns about their safety profile have contributed to the increased use of cefazolin. The comparative clinical effectiveness and safety profile of cefazolin versus ASPs for such infections remain unclear. Furthermore, uncertainty persists concerning the use of cefazolin due to controversies over its efficacy in deep MSSA infections and its possible negative ecological impact. |
1411.1190 | Bo Chen | Bo Chen and Pietro Perona | Towards an optimal decision strategy of visual search | 19 pages, 6 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Searching for objects amongst clutter is a key ability of visual systems.
Speed and accuracy are often crucial: how can the visual system trade off these
competing quantities for optimal performance in different tasks? How does the
trade-off depend on target appearance and scene complexity? We show that the
optimal tradeoff strategy may be cast as the solution to a partially observable
Markov decision process (POMDP) and computed by a dynamic programming
procedure. However, this procedure is computationally intensive when the visual
scene becomes too cluttered. Therefore, we also conjecture an optimal strategy
that scales to a large number of clutters. Our conjecture applies to homogeneous
visual search and to a special case of heterogeneous search where the
signal-to-noise ratio differs across locations. Using the conjecture we show
that two existing decision mechanisms for analyzing human data, namely
diffusion-to-bound and maximum-of-output, are sub-optimal; the optimal strategy
instead employs two scaled diffusions.
| [
{
"created": "Wed, 5 Nov 2014 08:53:19 GMT",
"version": "v1"
}
] | 2014-11-06 | [
[
"Chen",
"Bo",
""
],
[
"Perona",
"Pietro",
""
]
] | Searching for objects amongst clutter is a key ability of visual systems. Speed and accuracy are often crucial: how can the visual system trade off these competing quantities for optimal performance in different tasks? How does the trade-off depend on target appearance and scene complexity? We show that the optimal tradeoff strategy may be cast as the solution to a partially observable Markov decision process (POMDP) and computed by a dynamic programming procedure. However, this procedure is computationally intensive when the visual scene becomes too cluttered. Therefore, we also conjecture an optimal strategy that scales to a large number of clutters. Our conjecture applies to homogeneous visual search and to a special case of heterogeneous search where the signal-to-noise ratio differs across locations. Using the conjecture we show that two existing decision mechanisms for analyzing human data, namely diffusion-to-bound and maximum-of-output, are sub-optimal; the optimal strategy instead employs two scaled diffusions. |
1107.1577 | Thierry Rabilloud | Mireille Chevallet (BBSI), Catherine Aude-Garcia (BBSI), C\'ecile
Lelong (BBSI), Serge M. Cand\'eias (BBSI), Sylvie Luche (BBSI), V\'eronique
Collin-Faure (BBSI), Sarah Triboulet (BBSI), D\'evy Diallo (BBSI), H\'el\`ene
Diemer (IPHC-DSA), Alain van Dorsselaer (IPHC-DSA), Thierry Rabilloud (BBSI) | Effects of nanoparticles on murine macrophages | Nanosafe2010: International Conference on Safe Production and Use of
Nanomaterials 16-18 November 2010, Grenoble, France, Grenoble : France (2010) | null | 10.1088/1742-6596/304/1/012034 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Metallic nanoparticles are more and more widely used in an increasing number
of applications. Consequently, they are more and more present in the
environment, and the risk that they may represent for human health must be
evaluated. This requires to increase our knowledge of the cellular responses to
nanoparticles. In this context, macrophages appear as an attractive system.
They play a major role in eliminating foreign matter, e.g. pathogens or
infectious agents, by phagocytosis and inflammatory responses, and are thus
highly likely to react to nanoparticles. We have decided to study their
responses to nanoparticles by a combination of classical and wide-scope
approaches such as proteomics. The long term goal of this study is the better
understanding of the responses of macrophages to nanoparticles, and thus to
help to assess their possible impact on human health. We chose as a model
system bone marrow-derived macrophages and studied the effect of commonly used
nanoparticles such as TiO2 and Cu. Classical responses of macrophages were
characterized and proteomic approaches based on 2D gels of whole cell extracts
were used. Preliminary proteomic data resulting from whole cell extracts showed
different effects for TiO2-NPs and Cu-NPs. Modifications of the expression of
several proteins involved in different pathways such as, for example, signal
transduction, endosome-lysosome pathway, Krebs cycle, oxidative stress response
have been underscored. These first results validate our proteomics approach and
open a wide new field of investigation into the impact of NPs on macrophages.
| [
{
"created": "Fri, 8 Jul 2011 08:46:15 GMT",
"version": "v1"
}
] | 2011-07-11 | [
[
"Chevallet",
"Mireille",
"",
"BBSI"
],
[
"Aude-Garcia",
"Catherine",
"",
"BBSI"
],
[
"Lelong",
"Cécile",
"",
"BBSI"
],
[
"Candéias",
"Serge M.",
"",
"BBSI"
],
[
"Luche",
"Sylvie",
"",
"BBSI"
],
[
"Collin-Faure"... | Metallic nanoparticles are increasingly used in a growing number of applications. Consequently, they are increasingly present in the environment, and the risk they may pose to human health must be evaluated. This requires increasing our knowledge of the cellular responses to nanoparticles. In this context, macrophages appear as an attractive system. They play a major role in eliminating foreign matter, e.g. pathogens or infectious agents, by phagocytosis and inflammatory responses, and are thus highly likely to react to nanoparticles. We have decided to study their responses to nanoparticles by a combination of classical and wide-scope approaches such as proteomics. The long term goal of this study is the better understanding of the responses of macrophages to nanoparticles, and thus to help to assess their possible impact on human health. We chose as a model system bone marrow-derived macrophages and studied the effect of commonly used nanoparticles such as TiO2 and Cu. Classical responses of macrophages were characterized and proteomic approaches based on 2D gels of whole cell extracts were used. Preliminary proteomic data resulting from whole cell extracts showed different effects for TiO2-NPs and Cu-NPs. Modifications of the expression of several proteins involved in different pathways such as, for example, signal transduction, endosome-lysosome pathway, Krebs cycle, oxidative stress response have been underscored. These first results validate our proteomics approach and open a wide new field of investigation into the impact of NPs on macrophages. |
2302.10348 | Gianni De Fabritiis | Pablo Herrera-Nieto, Adri\`a P\'erez and Gianni De Fabritiis | Binding-and-folding recognition of an intrinsically disordered protein
using online learning molecular dynamics | null | null | null | null | q-bio.BM cs.LG physics.comp-ph | http://creativecommons.org/licenses/by/4.0/ | Intrinsically disordered proteins participate in many biological processes by
folding upon binding with other proteins. However, coupled folding and binding
processes are not well understood from an atomistic point of view. One of the
main questions is whether folding occurs prior to or after binding. Here we use
a novel unbiased high-throughput adaptive sampling approach to reconstruct the
binding and folding between the disordered transactivation domain of
\mbox{c-Myb} and the KIX domain of the CREB-binding protein. The reconstructed
long-term dynamical process highlights the binding of a short stretch of amino
acids on \mbox{c-Myb} as a folded $\alpha$-helix. Leucine residues, especially
Leu298 to Leu302, establish initial native contacts that prime the binding and
folding of the rest of the peptide, with a mixture of conformational selection
on the N-terminal region and an induced fit of the C-terminal region.
| [
{
"created": "Mon, 20 Feb 2023 22:30:35 GMT",
"version": "v1"
}
] | 2023-02-22 | [
[
"Herrera-Nieto",
"Pablo",
""
],
[
"Pérez",
"Adrià",
""
],
[
"De Fabritiis",
"Gianni",
""
]
] | Intrinsically disordered proteins participate in many biological processes by folding upon binding with other proteins. However, coupled folding and binding processes are not well understood from an atomistic point of view. One of the main questions is whether folding occurs prior to or after binding. Here we use a novel unbiased high-throughput adaptive sampling approach to reconstruct the binding and folding between the disordered transactivation domain of \mbox{c-Myb} and the KIX domain of the CREB-binding protein. The reconstructed long-term dynamical process highlights the binding of a short stretch of amino acids on \mbox{c-Myb} as a folded $\alpha$-helix. Leucine residues, especially Leu298 to Leu302, establish initial native contacts that prime the binding and folding of the rest of the peptide, with a mixture of conformational selection on the N-terminal region and an induced fit of the C-terminal region. |
1108.0356 | Emilio N.M. Cirillo | D. Andreucci, D. Bellaveglia, E.N.M. Cirillo, S. Marconi | Monte Carlo study of gating and selection in potassium channels | null | Physical Review E 84, 021920, 2011 | 10.1103/PhysRevE.84.021920 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study of selection and gating in potassium channels is a very important
issue in modern biology. Indeed such structures are known in all types of cells
in all organisms where they play many important functional roles. The mechanism
of gating and selection of ionic species is not clearly understood. In this
paper we study a model in which gating is obtained via an affinity-switching
selectivity filter. We discuss the dependence of selectivity and efficiency on
the cytosolic ionic concentration and on the typical pore open state duration.
We demonstrate that a simple modification of the way in which the selectivity
filter is modeled yields larger channel efficiency.
| [
{
"created": "Mon, 1 Aug 2011 16:29:18 GMT",
"version": "v1"
}
] | 2015-05-30 | [
[
"Andreucci",
"D.",
""
],
[
"Bellaveglia",
"D.",
""
],
[
"Cirillo",
"E. N. M.",
""
],
[
"Marconi",
"S.",
""
]
] | The study of selection and gating in potassium channels is a very important issue in modern biology. Indeed such structures are known in all types of cells in all organisms where they play many important functional roles. The mechanism of gating and selection of ionic species is not clearly understood. In this paper we study a model in which gating is obtained via an affinity-switching selectivity filter. We discuss the dependence of selectivity and efficiency on the cytosolic ionic concentration and on the typical pore open state duration. We demonstrate that a simple modification of the way in which the selectivity filter is modeled yields larger channel efficiency. |
1810.11393 | Jo\~ao Sacramento | Jo\~ao Sacramento, Rui Ponte Costa, Yoshua Bengio, Walter Senn | Dendritic cortical microcircuits approximate the backpropagation
algorithm | To appear in Advances in Neural Information Processing Systems 31
(NIPS 2018). 12 pages, 3 figures, 9 pages of supplementary material (2
supplementary figures) | null | null | null | q-bio.NC cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has seen remarkable developments over the last years, many of
them inspired by neuroscience. However, the main learning mechanism behind
these advances - error backpropagation - appears to be at odds with
neurobiology. Here, we introduce a multilayer neuronal network model with
simplified dendritic compartments in which error-driven synaptic plasticity
adapts the network towards a global desired output. In contrast to previous
work our model does not require separate phases and synaptic learning is driven
by local dendritic prediction errors continuously in time. Such errors
originate at apical dendrites and occur due to a mismatch between predictive
input from lateral interneurons and activity from actual top-down feedback.
Through the use of simple dendritic compartments and different cell-types our
model can represent both error and normal activity within a pyramidal neuron.
We demonstrate the learning capabilities of the model in regression and
classification tasks, and show analytically that it approximates the error
backpropagation algorithm. Moreover, our framework is consistent with recent
observations of learning between brain areas and the architecture of cortical
microcircuits. Overall, we introduce a novel view of learning on dendritic
cortical circuits and on how the brain may solve the long-standing synaptic
credit assignment problem.
| [
{
"created": "Fri, 26 Oct 2018 15:40:58 GMT",
"version": "v1"
}
] | 2018-10-29 | [
[
"Sacramento",
"João",
""
],
[
"Costa",
"Rui Ponte",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Senn",
"Walter",
""
]
] | Deep learning has seen remarkable developments over the last years, many of them inspired by neuroscience. However, the main learning mechanism behind these advances - error backpropagation - appears to be at odds with neurobiology. Here, we introduce a multilayer neuronal network model with simplified dendritic compartments in which error-driven synaptic plasticity adapts the network towards a global desired output. In contrast to previous work our model does not require separate phases and synaptic learning is driven by local dendritic prediction errors continuously in time. Such errors originate at apical dendrites and occur due to a mismatch between predictive input from lateral interneurons and activity from actual top-down feedback. Through the use of simple dendritic compartments and different cell-types our model can represent both error and normal activity within a pyramidal neuron. We demonstrate the learning capabilities of the model in regression and classification tasks, and show analytically that it approximates the error backpropagation algorithm. Moreover, our framework is consistent with recent observations of learning between brain areas and the architecture of cortical microcircuits. Overall, we introduce a novel view of learning on dendritic cortical circuits and on how the brain may solve the long-standing synaptic credit assignment problem. |
2405.12941 | Lancelot Da Costa | Lars Sandved-Smith, Lancelot Da Costa | Metacognitive particles, mental action and the sense of agency | 13 pages, 4 figures | null | null | null | q-bio.NC nlin.AO physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper articulates metacognition using the language of statistical
physics and Bayesian mechanics. Metacognitive beliefs, defined as beliefs about
beliefs, find a natural description within this formalism, which allows us to
define the dynamics of 'metacognitive particles', i.e., systems possessing
metacognitive beliefs. We further unpack this typology of metacognitive systems
by distinguishing passive and active metacognitive particles, where active
particles are endowed with the capacity for mental actions that update the
parameters of other beliefs. We provide arguments for the necessity of this
architecture in the emergence of a subjective sense of agency and the
experience of being separate from the environment. The motivation is to pave
the way towards a mathematical and physical understanding of cognition -- and
higher forms thereof -- furthering the study and formalization of cognitive
science in the language of mathematical physics.
| [
{
"created": "Tue, 21 May 2024 17:14:10 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Jun 2024 10:09:03 GMT",
"version": "v2"
}
] | 2024-06-17 | [
[
"Sandved-Smith",
"Lars",
""
],
[
"Da Costa",
"Lancelot",
""
]
] | This paper articulates metacognition using the language of statistical physics and Bayesian mechanics. Metacognitive beliefs, defined as beliefs about beliefs, find a natural description within this formalism, which allows us to define the dynamics of 'metacognitive particles', i.e., systems possessing metacognitive beliefs. We further unpack this typology of metacognitive systems by distinguishing passive and active metacognitive particles, where active particles are endowed with the capacity for mental actions that update the parameters of other beliefs. We provide arguments for the necessity of this architecture in the emergence of a subjective sense of agency and the experience of being separate from the environment. The motivation is to pave the way towards a mathematical and physical understanding of cognition -- and higher forms thereof -- furthering the study and formalization of cognitive science in the language of mathematical physics. |
1209.3330 | Christoph Adami | Randal S. Olson, Arend Hintze, Fred C. Dyer, David B. Knoester, and
Christoph Adami | Predator confusion is sufficient to evolve swarming behavior | 11 pages, 6 figures. Supplementary information (including video files
S1 and S5) in ancillary material. Videos S2-S4 are available from the authors
upon request | J. Royal Society Interface 10 (2013) 20130305 | 10.1098/rsif.2013.0305 | null | q-bio.PE cs.NE nlin.AO q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Swarming behaviors in animals have been extensively studied due to their
implications for the evolution of cooperation, social cognition, and
predator-prey dynamics. An important goal of these studies is discerning which
evolutionary pressures favor the formation of swarms. One hypothesis is that
swarms arise because the presence of multiple moving prey in swarms causes
confusion for attacking predators, but it remains unclear how important this
selective force is. Using an evolutionary model of a predator-prey system, we
show that predator confusion provides a sufficient selection pressure to evolve
swarming behavior in prey. Furthermore, we demonstrate that the evolutionary
effect of predator confusion on prey could in turn exert pressure on the
structure of the predator's visual field, favoring the frontally oriented,
high-resolution visual systems commonly observed in predators that feed on
swarming animals. Finally, we provide evidence that when prey evolve swarming
in response to predator confusion, there is a change in the shape of the
functional response curve describing the predator's consumption rate as prey
density increases. Thus, we show that a relatively simple perceptual
constraint--predator confusion--could have pervasive evolutionary effects on
prey behavior, predator sensory mechanisms, and the ecological interactions
between predators and prey.
| [
{
"created": "Fri, 14 Sep 2012 21:31:18 GMT",
"version": "v1"
},
{
"created": "Sat, 26 Jan 2013 00:36:56 GMT",
"version": "v2"
},
{
"created": "Wed, 3 Apr 2013 19:56:18 GMT",
"version": "v3"
}
] | 2015-03-13 | [
[
"Olson",
"Randal S.",
""
],
[
"Hintze",
"Arend",
""
],
[
"Dyer",
"Fred C.",
""
],
[
"Knoester",
"David B.",
""
],
[
"Adami",
"Christoph",
""
]
] | Swarming behaviors in animals have been extensively studied due to their implications for the evolution of cooperation, social cognition, and predator-prey dynamics. An important goal of these studies is discerning which evolutionary pressures favor the formation of swarms. One hypothesis is that swarms arise because the presence of multiple moving prey in swarms causes confusion for attacking predators, but it remains unclear how important this selective force is. Using an evolutionary model of a predator-prey system, we show that predator confusion provides a sufficient selection pressure to evolve swarming behavior in prey. Furthermore, we demonstrate that the evolutionary effect of predator confusion on prey could in turn exert pressure on the structure of the predator's visual field, favoring the frontally oriented, high-resolution visual systems commonly observed in predators that feed on swarming animals. Finally, we provide evidence that when prey evolve swarming in response to predator confusion, there is a change in the shape of the functional response curve describing the predator's consumption rate as prey density increases. Thus, we show that a relatively simple perceptual constraint--predator confusion--could have pervasive evolutionary effects on prey behavior, predator sensory mechanisms, and the ecological interactions between predators and prey. |