| id | submitter | authors | title | comments | journal-ref | doi | report-no | categories | license | orig_abstract | versions | update_date | authors_parsed | abstract |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0808.3511 | K. P. Unnikrishnan | P.S. Sastry (Indian Institute of Science), and K.P. Unnikrishnan
(General Motors Research) | Conditional probability based significance tests for sequential patterns
in multi-neuronal spike trains | 35 pages, 7 figures | null | null | null | q-bio.NC cond-mat.dis-nn cs.DB q-bio.QM stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we consider the problem of detecting statistically significant
sequential patterns in multi-neuronal spike trains. These patterns are
characterized by ordered sequences of spikes from different neurons with
specific delays between spikes. We have previously proposed a data mining
scheme to efficiently discover such patterns which are frequent in the sense
that the count of non-overlapping occurrences of the pattern in the data stream
is above a threshold. Here we propose a method to determine the statistical
significance of these repeating patterns and to set the thresholds
automatically. The novelty of our approach is that we use a compound null
hypothesis that includes not only models of independent neurons but also models
where neurons have weak dependencies. The strength of interaction among the
neurons is represented in terms of certain pair-wise conditional probabilities.
We specify our null hypothesis by putting an upper bound on all such
conditional probabilities. We construct a probabilistic model that captures the
counting process and use this to calculate the mean and variance of the count
for any pattern. Using this we derive a test of significance for rejecting such
a null hypothesis. This also allows us to rank-order different significant
patterns. We illustrate the effectiveness of our approach using spike trains
generated from a non-homogeneous Poisson model with embedded dependencies.
| [
{
"created": "Tue, 26 Aug 2008 13:28:43 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Aug 2008 02:15:45 GMT",
"version": "v2"
}
] | 2008-08-28 | [
[
"Sastry",
"P. S.",
"",
"Indian Institute of Science"
],
[
"Unnikrishnan",
"K. P.",
"",
"General Motors Research"
]
] | In this paper we consider the problem of detecting statistically significant sequential patterns in multi-neuronal spike trains. These patterns are characterized by ordered sequences of spikes from different neurons with specific delays between spikes. We have previously proposed a data mining scheme to efficiently discover such patterns which are frequent in the sense that the count of non-overlapping occurrences of the pattern in the data stream is above a threshold. Here we propose a method to determine the statistical significance of these repeating patterns and to set the thresholds automatically. The novelty of our approach is that we use a compound null hypothesis that includes not only models of independent neurons but also models where neurons have weak dependencies. The strength of interaction among the neurons is represented in terms of certain pair-wise conditional probabilities. We specify our null hypothesis by putting an upper bound on all such conditional probabilities. We construct a probabilistic model that captures the counting process and use this to calculate the mean and variance of the count for any pattern. Using this we derive a test of significance for rejecting such a null hypothesis. This also allows us to rank-order different significant patterns. We illustrate the effectiveness of our approach using spike trains generated from a non-homogeneous Poisson model with embedded dependencies. |
1710.04173 | Sabine Ploux Dr. | Sabine Ploux, Rui Wang, ZhengFeng Zhong, Hai Zhao, Yang Xin and
Bao-Liang Lu | Structural Stability of Lexical Semantic Spaces: Nouns in Chinese and
French | 17 pages, 4 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many studies in the neurosciences have dealt with the semantic processing of
words or categories, but few have looked into the semantic organization of the
lexicon thought as a system. The present study was designed to try to move
towards this goal, using both electrophysiological and corpus-based data, and
to compare two languages from different families: French and Mandarin Chinese.
We conducted an EEG-based semantic-decision experiment using 240 words from
eight categories (clothing, parts of a house, tools, vehicles,
fruits/vegetables, animals, body parts, and people) as the material. A
data-analysis method (correspondence analysis) commonly used in computational
linguistics was applied to the electrophysiological signals.
The present cross-language comparison indicated stability for the following
aspects of the languages' lexical semantic organizations: (1) the
living/nonliving distinction, which showed up as a main factor for both
languages; (2) greater dispersion of the living categories as compared to the
nonliving ones; (3) prototypicality of the \emph{animals} category within the
living categories, and with respect to the living/nonliving distinction; and
(4) the existence of a person-centered reference gradient. Our
electrophysiological analysis indicated stability of the networks at play in
each of these processes. Stability was also observed in the data taken from
word usage in the languages (synonyms and associated words obtained from
textual corpora).
| [
{
"created": "Wed, 11 Oct 2017 16:59:12 GMT",
"version": "v1"
}
] | 2017-10-12 | [
[
"Ploux",
"Sabine",
""
],
[
"Wang",
"Rui",
""
],
[
"Zhong",
"ZhengFeng",
""
],
[
"Zhao",
"Hai",
""
],
[
"Xin",
"Yang",
""
],
[
"Lu",
"Bao-Liang",
""
]
] | Many studies in the neurosciences have dealt with the semantic processing of words or categories, but few have looked into the semantic organization of the lexicon thought of as a system. The present study was designed to try to move towards this goal, using both electrophysiological and corpus-based data, and to compare two languages from different families: French and Mandarin Chinese. We conducted an EEG-based semantic-decision experiment using 240 words from eight categories (clothing, parts of a house, tools, vehicles, fruits/vegetables, animals, body parts, and people) as the material. A data-analysis method (correspondence analysis) commonly used in computational linguistics was applied to the electrophysiological signals. The present cross-language comparison indicated stability for the following aspects of the languages' lexical semantic organizations: (1) the living/nonliving distinction, which showed up as a main factor for both languages; (2) greater dispersion of the living categories as compared to the nonliving ones; (3) prototypicality of the \emph{animals} category within the living categories, and with respect to the living/nonliving distinction; and (4) the existence of a person-centered reference gradient. Our electrophysiological analysis indicated stability of the networks at play in each of these processes. Stability was also observed in the data taken from word usage in the languages (synonyms and associated words obtained from textual corpora). |
1303.4229 | Nicolas Innocenti | Nicolas Innocenti and Erik Aurell | Lognormality and oscillations in the coverage of high-throughput
transcriptomic data towards gene ends | null | J. Stat. Mech. (2013) P10013 | 10.1088/1742-5468/2013/10/P10013 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-throughput transcriptomics experiments have reached the stage where the
count of the number of reads alignable to a given position can be treated as an
almost-continuous signal. This allows to ask questions of
biophysical/biotechnical nature, but which may still have biological
implications. Here we show that when sequencing RNA fragments from one end, as
it is the case on most platforms, an oscillation in the read count is observed
at the other end. We further show that these oscillations can be well described
by Kolmogorov's 1941 broken stick model. We investigate how the model can be
used to improve predictions of gene ends (3' transcript ends) but conclude that
with present data the improvement is only marginal. The results highlight
subtle effects in high-throughput transcriptomics experiments which do not have
a biological origin, but which may still be used to obtain biological
information.
| [
{
"created": "Mon, 18 Mar 2013 12:36:53 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jul 2013 20:34:33 GMT",
"version": "v2"
},
{
"created": "Wed, 28 Aug 2013 15:03:56 GMT",
"version": "v3"
}
] | 2014-10-02 | [
[
"Innocenti",
"Nicolas",
""
],
[
"Aurell",
"Erik",
""
]
] | High-throughput transcriptomics experiments have reached the stage where the count of the number of reads alignable to a given position can be treated as an almost-continuous signal. This allows one to ask questions of a biophysical/biotechnical nature which may still have biological implications. Here we show that when sequencing RNA fragments from one end, as is the case on most platforms, an oscillation in the read count is observed at the other end. We further show that these oscillations can be well described by Kolmogorov's 1941 broken stick model. We investigate how the model can be used to improve predictions of gene ends (3' transcript ends) but conclude that with present data the improvement is only marginal. The results highlight subtle effects in high-throughput transcriptomics experiments which do not have a biological origin, but which may still be used to obtain biological information. |
1904.08182 | Carolin Loos | Carolin Loos, Jan Hasenauer | Mathematical modeling of variability in intracellular signaling | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Cellular signaling is essential in information processing and decision
making. Therefore, a variety of experimental approaches have been developed to
study signaling on bulk and single-cell level. Single-cell measurements of
signaling molecules demonstrated a substantial cell-to-cell variability,
raising questions about its causes and mechanisms and about how cell
populations cope with or exploit cellular heterogeneity. To gain insights from
single-cell signaling data, analysis and modeling approaches have been
introduced. This review discusses these modeling approaches, with a focus on
recent advances in the development and calibration of mechanistic models.
Additionally, it outlines current and future challenges.
| [
{
"created": "Wed, 17 Apr 2019 11:08:06 GMT",
"version": "v1"
}
] | 2019-04-18 | [
[
"Loos",
"Carolin",
""
],
[
"Hasenauer",
"Jan",
""
]
] | Cellular signaling is essential in information processing and decision making. Therefore, a variety of experimental approaches have been developed to study signaling at the bulk and single-cell levels. Single-cell measurements of signaling molecules have demonstrated substantial cell-to-cell variability, raising questions about its causes and mechanisms and about how cell populations cope with or exploit cellular heterogeneity. To gain insights from single-cell signaling data, analysis and modeling approaches have been introduced. This review discusses these modeling approaches, with a focus on recent advances in the development and calibration of mechanistic models. Additionally, it outlines current and future challenges. |
1501.06530 | David Bardos | David C. Bardos, Gurutzeta Guillera-Arroita and Brendan A. Wintle | Covariate influence in spatially autocorrelated occupancy and abundance
data | References updated | null | null | null | q-bio.QM stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The autologistic model and related auto-models, commonly applied as
autocovariate regression, offer distinct advantages for analysing spatially
autocorrelated ecological data. However, comparative studies by Carl and K\"uhn
(Ecol. Model., 2007, 207, 159), Dormann (Ecol. Model., 2007, 207, 234), Dormann
et al. (Ecography, 2007, 30, 609) and Beale et al. (Ecol. Lett., 2010, 13, 246)
concluded that autocovariate regression yields anomalous covariate parameter
estimates. The last three studies were based on erroneous numerical evidence,
due to violation of conditions (Besag, J. R. Stat. Soc., Ser. B, 1974, 36, 192)
for auto-model validity. Here we show that after correcting these technical
errors, a more fundamental conceptual error remains: the comparative studies
are founded on a mathematically incorrect notion of bias, involving direct
comparison of parameter estimates across models differing in mathematical
structure. We develop a set of simulation-based measures of covariate influence
that are directly comparable across models and apply them to examples from the
abovementioned studies. We find that in these cases, the effect of auto-model
parameters is similar to (and consistent with) corresponding linear model
effects, due to a phenomenon within auto-models that we refer to as "covariate
amplification". Thus, simple comparison of parameter magnitudes between
structurally different models can be highly misleading. We demonstrate that the
recent critique of auto-models is entirely unfounded. Correctly applied and
interpreted, autocovariate regression provides a practical approach to
inference for spatially autocorrelated species distribution or abundance data,
while overcoming well-known limitations of generalized linear models.
| [
{
"created": "Mon, 26 Jan 2015 19:21:43 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Jan 2015 17:00:12 GMT",
"version": "v2"
}
] | 2015-01-28 | [
[
"Bardos",
"David C.",
""
],
[
"Guillera-Arroita",
"Gurutzeta",
""
],
[
"Wintle",
"Brendan A.",
""
]
] | The autologistic model and related auto-models, commonly applied as autocovariate regression, offer distinct advantages for analysing spatially autocorrelated ecological data. However, comparative studies by Carl and K\"uhn (Ecol. Model., 2007, 207, 159), Dormann (Ecol. Model., 2007, 207, 234), Dormann et al. (Ecography, 2007, 30, 609) and Beale et al. (Ecol. Lett., 2010, 13, 246) concluded that autocovariate regression yields anomalous covariate parameter estimates. The last three studies were based on erroneous numerical evidence, due to violation of conditions (Besag, J. R. Stat. Soc., Ser. B, 1974, 36, 192) for auto-model validity. Here we show that after correcting these technical errors, a more fundamental conceptual error remains: the comparative studies are founded on a mathematically incorrect notion of bias, involving direct comparison of parameter estimates across models differing in mathematical structure. We develop a set of simulation-based measures of covariate influence that are directly comparable across models and apply them to examples from the abovementioned studies. We find that in these cases, the effect of auto-model parameters is similar to (and consistent with) corresponding linear model effects, due to a phenomenon within auto-models that we refer to as "covariate amplification". Thus, simple comparison of parameter magnitudes between structurally different models can be highly misleading. We demonstrate that the recent critique of auto-models is entirely unfounded. Correctly applied and interpreted, autocovariate regression provides a practical approach to inference for spatially autocorrelated species distribution or abundance data, while overcoming well-known limitations of generalized linear models. |
1907.01588 | Elahe Arani | Elahe Arani, Sofia Triantafillou and Konrad P. Kording | Reverse engineering neural networks from many partial recordings | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Much of neuroscience aims at reverse engineering the brain, but we only
record a small number of neurons at a time. We do not currently know if reverse
engineering the brain requires us to simultaneously record most neurons or if
multiple recordings from smaller subsets suffice. This is made even more
important by the development of novel techniques that allow recording from
selected subsets of neurons, e.g. using optical techniques. To get at this
question, we analyze a neural network, trained on the MNIST dataset, using only
partial recordings and characterize the dependency of the quality of our
reverse engineering on the number of simultaneously recorded "neurons". We find
that reverse engineering of the nonlinear neural network is meaningfully
possible if a sufficiently large number of neurons is simultaneously recorded
but that this number can be considerably smaller than the number of neurons.
Moreover, recording many times from small random subsets of neurons yields
surprisingly good performance. Application in neuroscience suggests to
approximate the I/O function of an actual neural system, we need to record from
a much larger number of neurons. The kind of scaling analysis we perform here
can, and arguably should be used to calibrate approaches that can dramatically
scale up the size of recorded data sets in neuroscience.
| [
{
"created": "Tue, 2 Jul 2019 19:15:30 GMT",
"version": "v1"
}
] | 2019-07-04 | [
[
"Arani",
"Elahe",
""
],
[
"Triantafillou",
"Sofia",
""
],
[
"Kording",
"Konrad P.",
""
]
] | Much of neuroscience aims at reverse engineering the brain, but we only record a small number of neurons at a time. We do not currently know if reverse engineering the brain requires us to simultaneously record most neurons or if multiple recordings from smaller subsets suffice. This is made even more important by the development of novel techniques that allow recording from selected subsets of neurons, e.g. using optical techniques. To get at this question, we analyze a neural network, trained on the MNIST dataset, using only partial recordings and characterize the dependency of the quality of our reverse engineering on the number of simultaneously recorded "neurons". We find that reverse engineering of the nonlinear neural network is meaningfully possible if a sufficiently large number of neurons is simultaneously recorded but that this number can be considerably smaller than the number of neurons. Moreover, recording many times from small random subsets of neurons yields surprisingly good performance. Application in neuroscience suggests that, to approximate the I/O function of an actual neural system, we need to record from a much larger number of neurons. The kind of scaling analysis we perform here can, and arguably should, be used to calibrate approaches that can dramatically scale up the size of recorded data sets in neuroscience. |
1105.1117 | Alfonso P\'erez-Escudero | Alfonso P\'erez-Escudero, Gonzalo G. de Polavieja | Collective Animal Behavior from Bayesian Estimation and Probability
Matching | 19 pages, including Supplemental Figures and Supplemental Text. In
press in PLoS Computational Biology | PLoS Comput Biol 7(11): e1002282 (2011) | 10.1371/journal.pcbi.1002282 | null | q-bio.QM cs.SI nlin.AO physics.data-an physics.soc-ph q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Animals living in groups make movement decisions that depend, among other
factors, on social interactions with other group members. Our present
understanding of social rules in animal collectives is mainly based on
empirical fits to observations, with less emphasis in obtaining
first-principles approaches that allow their derivation. Here we show that
patterns of collective decisions can be derived from the basic ability of
animals to make probabilistic estimations in the presence of uncertainty. We
build a decision-making model with two stages: Bayesian estimation and
probabilistic matching. In the first stage, each animal makes a Bayesian
estimation of which behavior is best to perform taking into account personal
information about the environment and social information collected by observing
the behaviors of other animals. In the probability matching stage, each animal
chooses a behavior with a probability equal to the Bayesian-estimated
probability that this behavior is the most appropriate one. This model derives
very simple rules of interaction in animal collectives that depend only on two
types of reliability parameters, one that each animal assigns to the other
animals and another given by the quality of the non-social information. We test
our model by obtaining theoretically a rich set of observed collective patterns
of decisions in three-spined sticklebacks, Gasterosteus aculeatus, a shoaling
fish species. The quantitative link shown between probabilistic estimation and
collective rules of behavior allows a better contact with other fields such as
foraging, mate selection, neurobiology and psychology, and gives predictions
for experiments directly testing the relationship between estimation and
collective behavior.
| [
{
"created": "Thu, 5 May 2011 16:32:59 GMT",
"version": "v1"
},
{
"created": "Mon, 7 Nov 2011 13:59:44 GMT",
"version": "v2"
}
] | 2011-11-22 | [
[
"Pérez-Escudero",
"Alfonso",
""
],
[
"de Polavieja",
"Gonzalo G.",
""
]
] | Animals living in groups make movement decisions that depend, among other factors, on social interactions with other group members. Our present understanding of social rules in animal collectives is mainly based on empirical fits to observations, with less emphasis on obtaining first-principles approaches that allow their derivation. Here we show that patterns of collective decisions can be derived from the basic ability of animals to make probabilistic estimations in the presence of uncertainty. We build a decision-making model with two stages: Bayesian estimation and probabilistic matching. In the first stage, each animal makes a Bayesian estimation of which behavior is best to perform, taking into account personal information about the environment and social information collected by observing the behaviors of other animals. In the probability matching stage, each animal chooses a behavior with a probability equal to the Bayesian-estimated probability that this behavior is the most appropriate one. This model derives very simple rules of interaction in animal collectives that depend only on two types of reliability parameters, one that each animal assigns to the other animals and another given by the quality of the non-social information. We test our model by obtaining theoretically a rich set of observed collective patterns of decisions in three-spined sticklebacks, Gasterosteus aculeatus, a shoaling fish species. The quantitative link shown between probabilistic estimation and collective rules of behavior allows better contact with other fields such as foraging, mate selection, neurobiology and psychology, and gives predictions for experiments directly testing the relationship between estimation and collective behavior. |
2110.11501 | Rui Ponte Costa | Joseph Pemberton and Ellen Boven and Richard Apps and Rui Ponte Costa | Cortico-cerebellar networks as decoupling neural interfaces | To appear in Advances in Neural Information Processing Systems 35
(NeurIPS 2021); 15 pages and 5 figures in the main manuscript; 8 pages and 8
figures in the supplementary material | null | null | null | q-bio.NC cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The brain solves the credit assignment problem remarkably well. For credit to
be assigned across neural networks they must, in principle, wait for specific
neural computations to finish. How the brain deals with this inherent locking
problem has remained unclear. Deep learning methods suffer from similar locking
constraints both on the forward and feedback phase. Recently, decoupled neural
interfaces (DNIs) were introduced as a solution to the forward and feedback
locking problems in deep networks. Here we propose that a specialised brain
region, the cerebellum, helps the cerebral cortex solve similar locking
problems akin to DNIs. To demonstrate the potential of this framework we
introduce a systems-level model in which a recurrent cortical network receives
online temporal feedback predictions from a cerebellar module. We test this
cortico-cerebellar recurrent neural network (ccRNN) model on a number of
sensorimotor (line and digit drawing) and cognitive tasks (pattern recognition
and caption generation) that have been shown to be cerebellar-dependent. In all
tasks, we observe that ccRNNs facilitates learning while reducing ataxia-like
behaviours, consistent with classical experimental observations. Moreover, our
model also explains recent behavioural and neuronal observations while making
several testable predictions across multiple levels. Overall, our work offers a
novel perspective on the cerebellum as a brain-wide decoupling machine for
efficient credit assignment and opens a new avenue between deep learning and
neuroscience.
| [
{
"created": "Thu, 21 Oct 2021 22:02:38 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Oct 2021 12:57:34 GMT",
"version": "v2"
}
] | 2021-10-29 | [
[
"Pemberton",
"Joseph",
""
],
[
"Boven",
"Ellen",
""
],
[
"Apps",
"Richard",
""
],
[
"Costa",
"Rui Ponte",
""
]
] | The brain solves the credit assignment problem remarkably well. For credit to be assigned across neural networks they must, in principle, wait for specific neural computations to finish. How the brain deals with this inherent locking problem has remained unclear. Deep learning methods suffer from similar locking constraints both on the forward and feedback phase. Recently, decoupled neural interfaces (DNIs) were introduced as a solution to the forward and feedback locking problems in deep networks. Here we propose that a specialised brain region, the cerebellum, helps the cerebral cortex solve similar locking problems akin to DNIs. To demonstrate the potential of this framework we introduce a systems-level model in which a recurrent cortical network receives online temporal feedback predictions from a cerebellar module. We test this cortico-cerebellar recurrent neural network (ccRNN) model on a number of sensorimotor (line and digit drawing) and cognitive tasks (pattern recognition and caption generation) that have been shown to be cerebellar-dependent. In all tasks, we observe that ccRNNs facilitate learning while reducing ataxia-like behaviours, consistent with classical experimental observations. Moreover, our model also explains recent behavioural and neuronal observations while making several testable predictions across multiple levels. Overall, our work offers a novel perspective on the cerebellum as a brain-wide decoupling machine for efficient credit assignment and opens a new avenue between deep learning and neuroscience. |
1812.02121 | Cinzia Soresina | Sara Pasquali and Cinzia Soresina and Gianni Gilioli | The effects of fecundity, mortality and distribution of the initial
condition in phenological models | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pest phenological models describe the cumulative flux of the individuals into
each stage of the life cycle of a stage-structured population. Phenological
models are widely used tools in pest control decision making. Despite the fact
that these models do not provide information on population abundance, they
share some advantages with respect to the more sophisticated and complex
demographic models. The main advantage is that they do not require data
collection to define the initial conditions of model simulation, reducing the
effort for field sampling and the high uncertainty affecting sample estimates.
Phenological models are often built considering the developmental rate function
only. To the aim of adding more realism to phenological models, in this paper
we explore the consequences of improving these models taking into consideration
three additional elements: the age distribution of individuals which exit from
the overwintering phase, the age- and temperature-dependent profile of the
fecundity rate function and the consideration of a temperature-dependent
mortality rate function. Numerical simulations are performed to investigate the
effect of these elements with respect to phenological models considering
development rate functions only. To further test the implications of different
models formulation, we compare results obtained from different phenological
models to the case study of the codling moth (Cydia pomonella) a primary pest
of the apple orchard. The results obtained from model comparison are discussed
in view of their potential application in pest control decision support.
| [
{
"created": "Wed, 5 Dec 2018 17:20:46 GMT",
"version": "v1"
}
] | 2018-12-06 | [
[
"Pasquali",
"Sara",
""
],
[
"Soresina",
"Cinzia",
""
],
[
"Gilioli",
"Gianni",
""
]
] | Pest phenological models describe the cumulative flux of the individuals into each stage of the life cycle of a stage-structured population. Phenological models are widely used tools in pest control decision making. Despite the fact that these models do not provide information on population abundance, they share some advantages with respect to the more sophisticated and complex demographic models. The main advantage is that they do not require data collection to define the initial conditions of model simulation, reducing the effort for field sampling and the high uncertainty affecting sample estimates. Phenological models are often built considering the developmental rate function only. With the aim of adding more realism to phenological models, in this paper we explore the consequences of improving these models by taking into consideration three additional elements: the age distribution of individuals that exit from the overwintering phase, the age- and temperature-dependent profile of the fecundity rate function and a temperature-dependent mortality rate function. Numerical simulations are performed to investigate the effect of these elements with respect to phenological models considering development rate functions only. To further test the implications of different model formulations, we compare results obtained from different phenological models on the case study of the codling moth (Cydia pomonella), a primary pest of apple orchards. The results obtained from model comparison are discussed in view of their potential application in pest control decision support. |
1811.12314 | Zhiqin Xu | Zhi-Qin John Xu, Douglas Zhou, David Cai | Swift Two-sample Test on High-dimensional Neural Spiking Data | 10 pages, 6 figures | null | null | null | q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To understand how neural networks process information, it is important to
investigate how neural network dynamics varies with respect to different
stimuli. One challenging task is to design efficient statistical approaches to
analyze multiple spike train data obtained from a short recording time. Based
on the development of high-dimensional statistical methods, it is able to deal
with data whose dimension is much larger than the sample size. However, these
methods often require statistically independent samples to start with, while
neural data are correlated over consecutive sampling time bins. We develop an
approach to pretreat neural data to become independent samples over time by
transferring the correlation of dynamics for each neuron in different sampling
time bins into the correlation of dynamics among different dimensions within
each sampling time bin. We verify the method using simulation data generated
from Integrate-and-fire neuron network models and a large-scale network model
of primary visual cortex within a short time, e.g., a few seconds. Our method
may allow experimenters to take advantage of developments in statistical
methods when analyzing high-dimensional neural data.
| [
{
"created": "Sun, 11 Nov 2018 20:40:41 GMT",
"version": "v1"
}
] | 2018-11-30 | [
[
"Xu",
"Zhi-Qin John",
""
],
[
"Zhou",
"Douglas",
""
],
[
"Cai",
"David",
""
]
] | To understand how neural networks process information, it is important to investigate how neural network dynamics varies with respect to different stimuli. One challenging task is to design efficient statistical approaches to analyze multiple spike train data obtained from a short recording time. Thanks to the development of high-dimensional statistical methods, it is now possible to deal with data whose dimension is much larger than the sample size. However, these methods often require statistically independent samples to start with, while neural data are correlated over consecutive sampling time bins. We develop an approach to pretreat neural data to become independent samples over time by transferring the correlation of dynamics for each neuron in different sampling time bins into the correlation of dynamics among different dimensions within each sampling time bin. We verify the method using simulation data generated from Integrate-and-fire neuron network models and a large-scale network model of primary visual cortex within a short time, e.g., a few seconds. Our method may allow experimenters to take advantage of developments in statistical methods when analyzing high-dimensional neural data. |
2109.02228 | Christopher Nottingham | Christopher D. Nottingham and Russell B. Millar | spatialSim: multi-species spatiotemporal size-structured operating model
for management strategy evaluation | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Spatiotemporal processes have the potential to be one of the most influential
factors governing how fisheries targeting sedentary species respond to
harvesting. Despite this, management strategy evaluation often fails to account
for space or does so at low resolutions due to compute constraints. In this
paper, a multi-species spatiotemporal size-structured operating model for
sedentary species is presented. The model combines a spatially continuous
Gaussian Markov Random Field model of the population dynamics with an areal
harvesting model that supports preferential targeting and site selection
constraints (e.g., economic constraints). This approach is very compute
efficient, which makes it feasible to simulate realistic fisher dynamics and
catch data at true spatial scale (e.g., the swept area of a dredge). The New
Zealand surfclam fishery was used as a case study to demonstrate the
versatility of the operating model and to showcase the simulation of localized
depletion, which was manifest in the generation of realistic
catch-per-unit-effort data that were uncorrelated with the trends in population
abundance. The model is available as part of the open-source R package
spatialSim.
| [
{
"created": "Mon, 6 Sep 2021 03:44:17 GMT",
"version": "v1"
}
] | 2021-09-07 | [
[
"Nottingham",
"Christopher D.",
""
],
[
"Millar",
"Russell B.",
""
]
] | Spatiotemporal processes have the potential to be one of the most influential factors governing how fisheries targeting sedentary species respond to harvesting. Despite this, management strategy evaluation often fails to account for space or does so at low resolutions due to compute constraints. In this paper, a multi-species spatiotemporal size-structured operating model for sedentary species is presented. The model combines a spatially continuous Gaussian Markov Random Field model of the population dynamics with an areal harvesting model that supports preferential targeting and site selection constraints (e.g., economic constraints). This approach is very compute efficient, which makes it feasible to simulate realistic fisher dynamics and catch data at true spatial scale (e.g., the swept area of a dredge). The New Zealand surfclam fishery was used as a case study to demonstrate the versatility of the operating model and to showcase the simulation of localized depletion, which was manifest in the generation of realistic catch-per-unit-effort data that were uncorrelated with the trends in population abundance. The model is available as part of the open-source R package spatialSim. |
1407.3543 | Peter Gawthrop | Peter Gawthrop, Henrik Gollee and Ian Loram | Intermittent Control in Man and Machine | null | null | null | null | q-bio.QM cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Intermittent control has a long history in the physiological literature and
there is strong experimental evidence that some human control systems are
intermittent. Intermittent control has also appeared in various forms in the
engineering literature. This article discusses a particular mathematical model
of Event-driven Intermittent Control which brings together engineering and
physiological insights and builds on and extends previous work in this area.
Illustrative examples of the properties of Intermittent Control in a
physiological context are given together with suggestions for future research
directions in both physiology and engineering.
| [
{
"created": "Mon, 14 Jul 2014 06:07:37 GMT",
"version": "v1"
}
] | 2014-07-23 | [
[
"Gawthrop",
"Peter",
""
],
[
"Gollee",
"Henrik",
""
],
[
"Loram",
"Ian",
""
]
] | Intermittent control has a long history in the physiological literature and there is strong experimental evidence that some human control systems are intermittent. Intermittent control has also appeared in various forms in the engineering literature. This article discusses a particular mathematical model of Event-driven Intermittent Control which brings together engineering and physiological insights and builds on and extends previous work in this area. Illustrative examples of the properties of Intermittent Control in a physiological context are given together with suggestions for future research directions in both physiology and engineering. |
0709.0225 | Jonas Cremer | Jonas Cremer, Tobias Reichenbach, Erwin Frey | Anomalous finite-size effects in the Battle of the Sexes | 8 pages, 5 figures. To appear in the ECCS '07 issue, Eur. Phys. J. B
(2008) | Eur. Phys. J. B 63, 373-380 (2008) | 10.1140/epjb/e2008-00036-x | LMU-ASC 67/07 | q-bio.PE cond-mat.stat-mech physics.bio-ph | null | The Battle of the Sexes describes asymmetric conflicts in mating behavior of
males and females. Males can be philanderers or faithful, while females are
either fast or coy, leading to a cyclic dynamics. The adjusted replicator
equation predicts stable coexistence of all four strategies. In this situation,
we consider the effects of fluctuations stemming from a finite population size.
We show that they unavoidably lead to extinction of two strategies in the
population. However, the typical time until extinction grows strongly
with increasing system size. In the meantime, a quasi-stationary probability
distribution forms that is anomalously flat in the vicinity of the coexistence
state. This behavior originates in a vanishing linear deterministic drift near
the fixed point. We provide numerical data as well as an analytical approach to
the mean extinction time and the quasi-stationary probability distribution.
| [
{
"created": "Mon, 3 Sep 2007 13:21:44 GMT",
"version": "v1"
}
] | 2008-08-31 | [
[
"Cremer",
"Jonas",
""
],
[
"Reichenbach",
"Tobias",
""
],
[
"Frey",
"Erwin",
""
]
] | The Battle of the Sexes describes asymmetric conflicts in mating behavior of males and females. Males can be philanderers or faithful, while females are either fast or coy, leading to a cyclic dynamics. The adjusted replicator equation predicts stable coexistence of all four strategies. In this situation, we consider the effects of fluctuations stemming from a finite population size. We show that they unavoidably lead to extinction of two strategies in the population. However, the typical time until extinction grows strongly with increasing system size. In the meantime, a quasi-stationary probability distribution forms that is anomalously flat in the vicinity of the coexistence state. This behavior originates in a vanishing linear deterministic drift near the fixed point. We provide numerical data as well as an analytical approach to the mean extinction time and the quasi-stationary probability distribution. |
2407.21087 | Efrat Monsonego Ornan | Gal Becker, Jerome Nicolas Janssen, Rotem Kalev-Altman, Dana Meilich,
Astar Shitrit, Svetlana Penn, Ram Reifen and Efrat Monsonego Ornan | Plant and insect proteins support optimal bone growth and development;
Evidences from a pre-clinical model | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | By 2050, the global population will exceed 9 billion, demanding a 70%
increase in food production. Animal proteins alone may not suffice, and their
production contributes to global warming. Alternative proteins such as legumes, algae, and
insects are being explored, but their health impacts are largely unknown. For
this, three-week-old rats were fed diets containing 20% protein from various
sources for six weeks. A casein-based control diet was compared to soy isolate,
spirulina powder, chickpea isolate, chickpea flour, and fly larvae powder.
Except for spirulina, alternative protein groups showed comparable growth
patterns to the casein group. Morphological and mechanical tests of femur bones
matched growth patterns. Caecal 16S analysis highlighted the impact on gut
microbiota diversity. Chickpea flour showed significantly lower
$\alpha$-diversity compared with the casein and chickpea isolate groups, while
chickpea flour had the greatest distinction in $\beta$-diversity. Alternative
protein sources supported optimal growth, but quality and health implications
require further exploration.
| [
{
"created": "Tue, 30 Jul 2024 15:03:54 GMT",
"version": "v1"
}
] | 2024-08-01 | [
[
"Becker",
"Gal",
""
],
[
"Janssen",
"Jerome Nicolas",
""
],
[
"Kalev-Altman",
"Rotem",
""
],
[
"Meilich",
"Dana",
""
],
[
"Shitrit",
"Astar",
""
],
[
"Penn",
"Svetlana",
""
],
[
"Reifen",
"Ram",
""
],
[
"Ornan",
"Efrat Monsonego",
""
]
] | By 2050, the global population will exceed 9 billion, demanding a 70% increase in food production. Animal proteins alone may not suffice, and their production contributes to global warming. Alternative proteins such as legumes, algae, and insects are being explored, but their health impacts are largely unknown. For this, three-week-old rats were fed diets containing 20% protein from various sources for six weeks. A casein-based control diet was compared to soy isolate, spirulina powder, chickpea isolate, chickpea flour, and fly larvae powder. Except for spirulina, alternative protein groups showed comparable growth patterns to the casein group. Morphological and mechanical tests of femur bones matched growth patterns. Caecal 16S analysis highlighted the impact on gut microbiota diversity. Chickpea flour showed significantly lower $\alpha$-diversity compared with the casein and chickpea isolate groups, while chickpea flour had the greatest distinction in $\beta$-diversity. Alternative protein sources supported optimal growth, but quality and health implications require further exploration. |
q-bio/0311001 | Rajesh Karmakar | Indrani Bose and Rajesh Karmakar | Mathematical models of haploinsufficiency | 13 pages, 5 figures | null | null | null | q-bio.OT cond-mat.stat-mech q-bio.QM | null | We study simple mathematical models of gene expression to explore the
possible origins of haploinsufficiency (HI). In a diploid organism, each gene
exists in two copies and when one of these is mutated, the amount of proteins
synthesized is reduced and may fall below a threshold level for the onset of
some desired activity. This can give rise to HI, a manifestation of which is in
the form of a disease. We consider both deterministic and stochastic models of
gene expression and suggest possible scenarios for the occurrence of HI in the
two cases. In the stochastic case, random fluctuations around the mean protein
level give rise to a finite probability that the protein level falls below a
threshold. Increased gene copy number and faster gene expression kinetics
reduce the variance around the mean protein level. The difference between slow
and fast gene expression kinetics, as regards response to a signaling gradient,
is further pointed out. The majority of results reported in the paper are
derived analytically.
| [
{
"created": "Mon, 3 Nov 2003 11:06:52 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Bose",
"Indrani",
""
],
[
"Karmakar",
"Rajesh",
""
]
] | We study simple mathematical models of gene expression to explore the possible origins of haploinsufficiency (HI). In a diploid organism, each gene exists in two copies and when one of these is mutated, the amount of proteins synthesized is reduced and may fall below a threshold level for the onset of some desired activity. This can give rise to HI, a manifestation of which is in the form of a disease. We consider both deterministic and stochastic models of gene expression and suggest possible scenarios for the occurrence of HI in the two cases. In the stochastic case, random fluctuations around the mean protein level give rise to a finite probability that the protein level falls below a threshold. Increased gene copy number and faster gene expression kinetics reduce the variance around the mean protein level. The difference between slow and fast gene expression kinetics, as regards response to a signaling gradient, is further pointed out. The majority of results reported in the paper are derived analytically. |
2307.14550 | Kristina Crona | Kristina Crona | Irreversible evolution, obstacles in fitness landscapes and persistent
drug resistance | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | We use fitness graphs, or directed cube graphs, for analyzing evolutionary
reversibility. The main application is antimicrobial drug resistance.
Reversible drug resistance has been observed both clinically and
experimentally. If drug resistance depends on a single point mutation, then a
possible scenario is that the mutation reverts back to the wild-type codon
after the drug has been discontinued, so that susceptibility is fully restored.
In general, a drug pause does not automatically imply fast elimination of drug
resistance. Also if drug resistance is reversible, the threshold concentration
for reverse evolution may be lower than for forward evolution. For a
theoretical understanding of evolutionary reversibility, including threshold
asymmetries, it is necessary to analyze obstacles in fitness landscapes. We
compare local and global obstacles, obstacles for forward and reverse
evolution, and conjecture that favorable landscapes for forward evolution
correlate with evolution being reversible. Both suboptimal peaks and plateaus
are analyzed with some observations on the impact of redundancy and
dimensionality. Our findings are compared with laboratory studies on
irreversible malarial drug resistance.
| [
{
"created": "Thu, 27 Jul 2023 00:17:48 GMT",
"version": "v1"
}
] | 2023-07-28 | [
[
"Crona",
"Kristina",
""
]
] | We use fitness graphs, or directed cube graphs, for analyzing evolutionary reversibility. The main application is antimicrobial drug resistance. Reversible drug resistance has been observed both clinically and experimentally. If drug resistance depends on a single point mutation, then a possible scenario is that the mutation reverts back to the wild-type codon after the drug has been discontinued, so that susceptibility is fully restored. In general, a drug pause does not automatically imply fast elimination of drug resistance. Also if drug resistance is reversible, the threshold concentration for reverse evolution may be lower than for forward evolution. For a theoretical understanding of evolutionary reversibility, including threshold asymmetries, it is necessary to analyze obstacles in fitness landscapes. We compare local and global obstacles, obstacles for forward and reverse evolution, and conjecture that favorable landscapes for forward evolution correlate with evolution being reversible. Both suboptimal peaks and plateaus are analyzed with some observations on the impact of redundancy and dimensionality. Our findings are compared with laboratory studies on irreversible malarial drug resistance. |
2301.08785 | Rene Warren | Lauren Coombe, Ren\'e L. Warren, Johnathan Wong, Vladimir Nikolic,
Inanc Birol | ntLink: a toolkit for de novo genome assembly scaffolding and mapping
using long reads | 23 pages, 2 figures | Current Protocols, 3(4), e733 (2023) | 10.1002/cpz1.733 | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | With the increasing affordability and accessibility of genome sequencing
data, de novo genome assembly is an important first step to a wide variety of
downstream studies and analyses. Therefore, bioinformatics tools that enable
the generation of high-quality genome assemblies in a computationally efficient
manner are essential. Recent developments in long-read sequencing technologies
have greatly benefited genome assembly work, including scaffolding, by
providing long-range evidence that can aid in resolving the challenging
repetitive regions of complex genomes. ntLink is a flexible and
resource-efficient genome scaffolding tool that utilizes long-read sequencing
data to improve upon draft genome assemblies built from any sequencing
technologies, including the same long reads. Instead of using read alignments
to identify candidate joins, ntLink utilizes minimizer-based mappings to infer
how input sequences should be ordered and oriented into scaffolds. Recent
improvements to ntLink have added important features such as overlap detection,
gap-filling and in-code scaffolding iterations. Here, we present three basic
protocols demonstrating how to use each of these new features to yield highly
contiguous genome assemblies, while still maintaining ntLink's proven
computational efficiency. Further, as we illustrate in the alternate protocols,
the lightweight minimizer-based mappings that enable ntLink scaffolding can
also be utilized for other downstream applications, such as misassembly
detection. With its modularity and multiple modes of execution, ntLink has
broad benefit to the genomics community, from genome scaffolding and beyond.
ntLink is an open-source project and is freely available from
https://github.com/bcgsc/ntLink.
| [
{
"created": "Fri, 20 Jan 2023 20:08:00 GMT",
"version": "v1"
}
] | 2023-06-09 | [
[
"Coombe",
"Lauren",
""
],
[
"Warren",
"René L.",
""
],
[
"Wong",
"Johnathan",
""
],
[
"Nikolic",
"Vladimir",
""
],
[
"Birol",
"Inanc",
""
]
] | With the increasing affordability and accessibility of genome sequencing data, de novo genome assembly is an important first step to a wide variety of downstream studies and analyses. Therefore, bioinformatics tools that enable the generation of high-quality genome assemblies in a computationally efficient manner are essential. Recent developments in long-read sequencing technologies have greatly benefited genome assembly work, including scaffolding, by providing long-range evidence that can aid in resolving the challenging repetitive regions of complex genomes. ntLink is a flexible and resource-efficient genome scaffolding tool that utilizes long-read sequencing data to improve upon draft genome assemblies built from any sequencing technologies, including the same long reads. Instead of using read alignments to identify candidate joins, ntLink utilizes minimizer-based mappings to infer how input sequences should be ordered and oriented into scaffolds. Recent improvements to ntLink have added important features such as overlap detection, gap-filling and in-code scaffolding iterations. Here, we present three basic protocols demonstrating how to use each of these new features to yield highly contiguous genome assemblies, while still maintaining ntLink's proven computational efficiency. Further, as we illustrate in the alternate protocols, the lightweight minimizer-based mappings that enable ntLink scaffolding can also be utilized for other downstream applications, such as misassembly detection. With its modularity and multiple modes of execution, ntLink has broad benefit to the genomics community, from genome scaffolding and beyond. ntLink is an open-source project and is freely available from https://github.com/bcgsc/ntLink. |
2407.12051 | Zhiyuan Peng | Zhiyuan Peng, Yuanbo Tang, Yang Li | Dy-mer: An Explainable DNA Sequence Representation Scheme using Sparse
Recovery | null | null | null | null | q-bio.GN cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | DNA sequences encode vital genetic and biological information, yet these
variable-length sequences cannot serve as input to common data mining
algorithms. Hence, various representation schemes have been developed to
transform DNA sequences into fixed-length numerical representations. However,
these schemes face difficulties in learning high-quality representations due to
the complexity and sparsity of DNA data. Additionally, DNA sequences are
inherently noisy because of mutations. While several existing schemes are
effective, they often lack semantic structure, making it
difficult for biologists to validate and leverage the results. To address these
challenges, we propose \textbf{Dy-mer}, an explainable and robust DNA
representation scheme based on sparse recovery. Leveraging the underlying
semantic structure of DNA, we modify the traditional sparse recovery to capture
recurring patterns indicative of biological functions by representing frequent
K-mers as basis vectors and reconstructing each DNA sequence through simple
concatenation. Experimental results demonstrate that \textbf{Dy-mer} achieves
state-of-the-art performance in DNA promoter classification, yielding a
remarkable \textbf{13\%} increase in accuracy. Moreover, its inherent
explainability facilitates DNA clustering and motif detection, enhancing its
utility in biological research.
| [
{
"created": "Sat, 6 Jul 2024 15:08:31 GMT",
"version": "v1"
}
] | 2024-07-18 | [
[
"Peng",
"Zhiyuan",
""
],
[
"Tang",
"Yuanbo",
""
],
[
"Li",
"Yang",
""
]
] | DNA sequences encode vital genetic and biological information, yet these variable-length sequences cannot serve as input to common data mining algorithms. Hence, various representation schemes have been developed to transform DNA sequences into fixed-length numerical representations. However, these schemes face difficulties in learning high-quality representations due to the complexity and sparsity of DNA data. Additionally, DNA sequences are inherently noisy because of mutations. While several existing schemes are effective, they often lack semantic structure, making it difficult for biologists to validate and leverage the results. To address these challenges, we propose \textbf{Dy-mer}, an explainable and robust DNA representation scheme based on sparse recovery. Leveraging the underlying semantic structure of DNA, we modify the traditional sparse recovery to capture recurring patterns indicative of biological functions by representing frequent K-mers as basis vectors and reconstructing each DNA sequence through simple concatenation. Experimental results demonstrate that \textbf{Dy-mer} achieves state-of-the-art performance in DNA promoter classification, yielding a remarkable \textbf{13\%} increase in accuracy. Moreover, its inherent explainability facilitates DNA clustering and motif detection, enhancing its utility in biological research. |
0806.3048 | Dejan Stokic | Dejan Stokic, Rudolf Hanel, Stefan Thurner | A fast and efficient gene-network reconstruction method from multiple
over-expression experiments | 10 pages, 3 figures | null | null | null | q-bio.MN q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reverse engineering of gene regulatory networks presents one of the big
challenges in systems biology. Gene regulatory networks are usually inferred
from a set of single-gene over-expressions and/or knockout experiments.
Functional relationships between genes are retrieved either from the steady
state gene expressions or from respective time series. We present a novel
algorithm for gene network reconstruction on the basis of steady-state
gene-chip data from over-expression experiments. The algorithm is based on a
straightforward solution of a linear gene-dynamics equation, where
experimental data is fed in as a first predictor for the solution. We compare
the algorithm's performance with the NIR algorithm, both on the well-known
E. coli experimental data and on in silico experiments. We show the superiority of
the proposed algorithm in the number of correctly reconstructed links and
discuss computational time and robustness. The proposed algorithm is not
limited by combinatorial explosion problems and can be used in principle for
large networks of thousands of genes.
| [
{
"created": "Wed, 18 Jun 2008 16:57:10 GMT",
"version": "v1"
}
] | 2008-06-19 | [
[
"Stokic",
"Dejan",
""
],
[
"Hanel",
"Rudolf",
""
],
[
"Thurner",
"Stefan",
""
]
] | Reverse engineering of gene regulatory networks presents one of the big challenges in systems biology. Gene regulatory networks are usually inferred from a set of single-gene over-expressions and/or knockout experiments. Functional relationships between genes are retrieved either from the steady state gene expressions or from respective time series. We present a novel algorithm for gene network reconstruction on the basis of steady-state gene-chip data from over-expression experiments. The algorithm is based on a straightforward solution of a linear gene-dynamics equation, where experimental data is fed in as a first predictor for the solution. We compare the algorithm's performance with the NIR algorithm, both on the well-known E. coli experimental data and on in silico experiments. We show the superiority of the proposed algorithm in the number of correctly reconstructed links and discuss computational time and robustness. The proposed algorithm is not limited by combinatorial explosion problems and can be used in principle for large networks of thousands of genes. |
2003.06122 | Waradon Sungnak | Waradon Sungnak, Ni Huang, Christophe B\'ecavin, Marijn Berg, HCA Lung
Biological Network | SARS-CoV-2 Entry Genes Are Most Highly Expressed in Nasal Goblet and
Ciliated Cells within Human Airways | null | Nature Medicine, 2020 | 10.1038/s41591-020-0868-6 | null | q-bio.CB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The SARS-CoV-2 coronavirus, the etiologic agent responsible for COVID-19
coronavirus disease, is a global threat. To better understand viral tropism, we
assessed the RNA expression of the coronavirus receptor, ACE2, as well as the
viral S protein priming protease TMPRSS2 thought to govern viral entry in
single-cell RNA-sequencing (scRNA-seq) datasets from healthy individuals
generated by the Human Cell Atlas consortium. We found that ACE2, as well as
the protease TMPRSS2, are differentially expressed in respiratory and gut
epithelial cells. In-depth analysis of epithelial cells in the respiratory tree
reveals that nasal epithelial cells, specifically goblet/secretory cells and
ciliated cells, display the highest ACE2 expression of all the epithelial cells
analyzed. The skewed expression of viral receptors/entry-associated proteins
towards the upper airway may be correlated with enhanced transmissivity.
Finally, we showed that many of the top genes associated with ACE2 airway
epithelial expression are innate immune-associated, antiviral genes, highly
enriched in the nasal epithelial cells. This association with immune pathways
might have clinical implications for the course of infection and viral
pathology, and highlights the specific significance of nasal epithelia in viral
infection. Our findings underscore the importance of the availability of the
Human Cell Atlas as a reference dataset. In this instance, analysis of the
compendium of data points to a particularly relevant role for nasal goblet and
ciliated cells as early viral targets and potential reservoirs of SARS-CoV-2
infection. This, in turn, serves as a biological framework for dissecting viral
transmission and developing clinical strategies for prevention and therapy.
| [
{
"created": "Fri, 13 Mar 2020 05:29:24 GMT",
"version": "v1"
}
] | 2020-04-27 | [
[
"Sungnak",
"Waradon",
""
],
[
"Huang",
"Ni",
""
],
[
"Bécavin",
"Christophe",
""
],
[
"Berg",
"Marijn",
""
],
[
"Network",
"HCA Lung Biological",
""
]
] | The SARS-CoV-2 coronavirus, the etiologic agent responsible for COVID-19 coronavirus disease, is a global threat. To better understand viral tropism, we assessed the RNA expression of the coronavirus receptor, ACE2, as well as the viral S protein priming protease TMPRSS2 thought to govern viral entry in single-cell RNA-sequencing (scRNA-seq) datasets from healthy individuals generated by the Human Cell Atlas consortium. We found that ACE2, as well as the protease TMPRSS2, are differentially expressed in respiratory and gut epithelial cells. In-depth analysis of epithelial cells in the respiratory tree reveals that nasal epithelial cells, specifically goblet/secretory cells and ciliated cells, display the highest ACE2 expression of all the epithelial cells analyzed. The skewed expression of viral receptors/entry-associated proteins towards the upper airway may be correlated with enhanced transmissivity. Finally, we showed that many of the top genes associated with ACE2 airway epithelial expression are innate immune-associated, antiviral genes, highly enriched in the nasal epithelial cells. This association with immune pathways might have clinical implications for the course of infection and viral pathology, and highlights the specific significance of nasal epithelia in viral infection. Our findings underscore the importance of the availability of the Human Cell Atlas as a reference dataset. In this instance, analysis of the compendium of data points to a particularly relevant role for nasal goblet and ciliated cells as early viral targets and potential reservoirs of SARS-CoV-2 infection. This, in turn, serves as a biological framework for dissecting viral transmission and developing clinical strategies for prevention and therapy. |
2002.08501 | Mark Byrne | Mark Byrne | Simple post-translational circadian clock models from selective
sequestration | 8 pages, submitted to JBR | null | null | null | q-bio.MN q-bio.BM q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is possible that there are post-translational circadian oscillators that
continue functioning in the absence of negative feedback transcriptional
repression in many cell types from diverse organisms. Apart from the KaiABC
system from cyanobacteria, the molecular components and interactions required
to create in-vitro ("test-tube") circadian oscillations in different cell types
are currently unknown. Inspired by the KaiABC system, I provide
"proof-of-principle" mathematical models that a protein with 2 (or more)
modification sites which selectively sequesters an effector/cofactor molecule
can function as a circadian time-keeper. The 2-site mechanism can be
implemented using two relatively simple coupled non-linear ODEs in terms of
site occupancy; the models do not require overly special fine-tuning of
parameters for generating stable limit cycle oscillations.
| [
{
"created": "Thu, 20 Feb 2020 00:02:24 GMT",
"version": "v1"
},
{
"created": "Fri, 21 Feb 2020 05:03:42 GMT",
"version": "v2"
}
] | 2020-02-24 | [
[
"Byrne",
"Mark",
""
]
] | It is possible that there are post-translational circadian oscillators that continue functioning in the absence of negative feedback transcriptional repression in many cell types from diverse organisms. Apart from the KaiABC system from cyanobacteria, the molecular components and interactions required to create in-vitro ("test-tube") circadian oscillations in different cell types are currently unknown. Inspired by the KaiABC system, I provide "proof-of-principle" mathematical models that a protein with 2 (or more) modification sites which selectively sequesters an effector/cofactor molecule can function as a circadian time-keeper. The 2-site mechanism can be implemented using two relatively simple coupled non-linear ODEs in terms of site occupancy; the models do not require overly special fine-tuning of parameters for generating stable limit cycle oscillations. |
2206.02249 | Adedapo Alabi | Adedapo Alabi, Dieter Vanderelst and Ali Minai | Rapid Learning of Spatial Representations for Goal-Directed Navigation
Based on a Novel Model of Hippocampal Place Fields | null | null | null | null | q-bio.NC cs.NE | http://creativecommons.org/licenses/by/4.0/ | The discovery of place cells and other spatially modulated neurons in the
hippocampal complex of rodents has been crucial to elucidating the neural basis
of spatial cognition. More recently, the replay of neural sequences encoding
previously experienced trajectories has been observed during consummatory
behavior potentially with implications for quick memory consolidation and
behavioral planning. Several promising models for robotic navigation and
reinforcement learning have been proposed based on these and previous findings.
Most of these models, however, use carefully engineered neural networks and are
tested in simple environments. In this paper, we develop a self-organized model
incorporating place cells and replay, and demonstrate its utility for rapid
one-shot learning in non-trivial environments with obstacles.
| [
{
"created": "Sun, 5 Jun 2022 19:50:33 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Jun 2022 13:19:40 GMT",
"version": "v2"
},
{
"created": "Sat, 14 Jan 2023 00:25:40 GMT",
"version": "v3"
}
] | 2023-01-18 | [
[
"Alabi",
"Adedapo",
""
],
[
"Vanderelst",
"Dieter",
""
],
[
"Minai",
"Ali",
""
]
] | The discovery of place cells and other spatially modulated neurons in the hippocampal complex of rodents has been crucial to elucidating the neural basis of spatial cognition. More recently, the replay of neural sequences encoding previously experienced trajectories has been observed during consummatory behavior potentially with implications for quick memory consolidation and behavioral planning. Several promising models for robotic navigation and reinforcement learning have been proposed based on these and previous findings. Most of these models, however, use carefully engineered neural networks and are tested in simple environments. In this paper, we develop a self-organized model incorporating place cells and replay, and demonstrate its utility for rapid one-shot learning in non-trivial environments with obstacles. |
q-bio/0505002 | Elchanan Mossel | Elchanan Mossel, Eric Vigoda | Limitations of Markov chain Monte Carlo algorithms for Bayesian
Inference of phylogeny | Published at http://dx.doi.org/10.1214/105051600000000538 in the
Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute
of Mathematical Statistics (http://www.imstat.org) | Annals of Applied Probability 2006, Vol. 16, No. 4, 2215-2234 | 10.1214/105051600000000538 | IMS-AAP-AAP0205 | q-bio.PE q-bio.GN | null | Markov chain Monte Carlo algorithms play a key role in the Bayesian approach
to phylogenetic inference. In this paper, we present the first theoretical work
analyzing the rate of convergence of several Markov chains widely used in
phylogenetic inference. We analyze simple, realistic examples where these
Markov chains fail to converge quickly. In particular, the data studied are
generated from a pair of trees, under a standard evolutionary model. We prove
that many of the popular Markov chains take exponentially long to reach their
stationary distribution. Our construction is pertinent since it is well known
that phylogenetic trees for genes may differ within a single organism. Our
results shed a cautionary light on phylogenetic analysis using Bayesian
inference and highlight future directions for potential theoretical work.
| [
{
"created": "Mon, 2 May 2005 18:41:17 GMT",
"version": "v1"
},
{
"created": "Tue, 24 May 2005 02:43:48 GMT",
"version": "v2"
},
{
"created": "Thu, 22 Dec 2005 19:33:12 GMT",
"version": "v3"
},
{
"created": "Mon, 12 Jun 2006 21:29:57 GMT",
"version": "v4"
},
{
"created": "Wed, 14 Feb 2007 13:24:01 GMT",
"version": "v5"
}
] | 2007-05-23 | [
[
"Mossel",
"Elchanan",
""
],
[
"Vigoda",
"Eric",
""
]
] | Markov chain Monte Carlo algorithms play a key role in the Bayesian approach to phylogenetic inference. In this paper, we present the first theoretical work analyzing the rate of convergence of several Markov chains widely used in phylogenetic inference. We analyze simple, realistic examples where these Markov chains fail to converge quickly. In particular, the data studied are generated from a pair of trees, under a standard evolutionary model. We prove that many of the popular Markov chains take exponentially long to reach their stationary distribution. Our construction is pertinent since it is well known that phylogenetic trees for genes may differ within a single organism. Our results shed a cautionary light on phylogenetic analysis using Bayesian inference and highlight future directions for potential theoretical work. |
0807.3350 | Bradly Alicea | Bradly Alicea | Evaluating Intraspecific Variation and Interspecific Diversity:
comparing humans and fish species | Masters thesis | null | null | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The analysis of eight molecular datasets involving human and teleost examples
along with morphological samples from several groups of Neotropical electric
fish (Order: Gymnotiformes) was used in this thesis to test the dynamics of
both intraspecific variation and interspecific diversity. In terms of
investigating molecular interspecific diversity among humans, two experimental
exercises were performed. A cladistic exchange experiment tested for the extent
of discontinuity and interbreeding between H. sapiens and Neanderthal
populations. As part of the same question, another experimental exercise tested
the amount of molecular variance resulting from simulations which treated
Neanderthals as being either a local population of modern humans or as a
distinct subspecies. Finally, comparisons of hominid populations over time with
fish species helped to define what constitutes taxonomically relevant
differences between morphological populations as expressed among both trait
size ranges and through growth patterns that begin during ontogeny. Compared to
the subdivision found within selected teleost species, H. sapiens molecular
data exhibited little variation and discontinuity between geographical regions.
Results of the two experimental exercises concluded that Neanderthals exhibit
taxonomic distance from modern H. sapiens. However, this distance was not so
great as to exclude the possibility of interbreeding between the two
subspecific groups. Finally, a series of characters were analyzed among species
of Neotropical electric fish. These analyses were compared with hominid
examples to determine what constituted taxonomically relevant differences
between populations as expressed among specific morphometric traits that
develop during the juvenile phase.
| [
{
"created": "Tue, 22 Jul 2008 01:29:45 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Nov 2013 07:45:08 GMT",
"version": "v2"
}
] | 2013-12-02 | [
[
"Alicea",
"Bradly",
""
]
] | The analysis of eight molecular datasets involving human and teleost examples along with morphological samples from several groups of Neotropical electric fish (Order: Gymnotiformes) was used in this thesis to test the dynamics of both intraspecific variation and interspecific diversity. In terms of investigating molecular interspecific diversity among humans, two experimental exercises were performed. A cladistic exchange experiment tested for the extent of discontinuity and interbreeding between H. sapiens and Neanderthal populations. As part of the same question, another experimental exercise tested the amount of molecular variance resulting from simulations which treated Neanderthals as being either a local population of modern humans or as a distinct subspecies. Finally, comparisons of hominid populations over time with fish species helped to define what constitutes taxonomically relevant differences between morphological populations as expressed among both trait size ranges and through growth patterns that begin during ontogeny. Compared to the subdivision found within selected teleost species, H. sapiens molecular data exhibited little variation and discontinuity between geographical regions. Results of the two experimental exercises concluded that Neanderthals exhibit taxonomic distance from modern H. sapiens. However, this distance was not so great as to exclude the possibility of interbreeding between the two subspecific groups. Finally, a series of characters were analyzed among species of Neotropical electric fish. These analyses were compared with hominid examples to determine what constituted taxonomically relevant differences between populations as expressed among specific morphometric traits that develop during the juvenile phase. |
2206.06691 | Mrinal Kanti Pal | Mrinal Kanti Pal, Swarup Poria | Effect of Non-local Grazing on Dry-land Vegetation Dynamics | 13 pages, 6 figures | null | null | null | q-bio.PE math.DS nlin.PS | http://creativecommons.org/licenses/by/4.0/ | Dry-land ecosystem has turned into a matter of grave concern, due to growing
threat of land degradation and bioproductivity-loss. Self-organized vegetation
patterns are a remarkable characteristic of these ecosystems; apart from being
visually captivating, patterns modulate the system-response to increasing
environmental stress. Empirical studies hinted that herbivory is one of the key
regulatory mechanisms behind pattern formation and overall ecosystem
functioning. However, most of the mathematical models have taken a mean-field
strategy to grazing; foraging has been considered to be independent of spatial
distribution of vegetation. To this end, an extended version of the celebrated
plant-water model due to Klausmeier, has been taken as the base here. To
encompass the effect of heterogeneous vegetation distribution on foraging
intensity and subsequent impact on entire ecosystem, grazing is considered here
to depend on spatially weighted average vegetation density, instead of density
at a particular point. Moreover, varying influence of vegetation at any
location over grazing elsewhere, is incorporated by choosing a suitable averaging
function. A comprehensive analysis demonstrates that inclusion of spatial
non-locality, alters the understanding of system dynamics significantly. The
grazing ecosystem is found to be more resilient to increasing aridity than it
was anticipated to be in earlier studies on non-local grazing. The
system-response to rising environmental pressure is also observed to vary
depending on the grazer. Obtained results also suggest the possibility of
multi-stability due to the history-dependence of system-response. Overall, this
work indicates that the spatial heterogeneity in grazing intensity has a
decisive role to play in the functioning of water-limited ecosystems.
| [
{
"created": "Tue, 14 Jun 2022 08:46:45 GMT",
"version": "v1"
},
{
"created": "Sat, 24 Sep 2022 14:00:18 GMT",
"version": "v2"
}
] | 2022-09-27 | [
[
"Pal",
"Mrinal Kanti",
""
],
[
"Poria",
"Swarup",
""
]
] | Dry-land ecosystem has turned into a matter of grave concern, due to growing threat of land degradation and bioproductivity-loss. Self-organized vegetation patterns are a remarkable characteristic of these ecosystems; apart from being visually captivating, patterns modulate the system-response to increasing environmental stress. Empirical studies hinted that herbivory is one of the key regulatory mechanisms behind pattern formation and overall ecosystem functioning. However, most of the mathematical models have taken a mean-field strategy to grazing; foraging has been considered to be independent of spatial distribution of vegetation. To this end, an extended version of the celebrated plant-water model due to Klausmeier, has been taken as the base here. To encompass the effect of heterogeneous vegetation distribution on foraging intensity and subsequent impact on entire ecosystem, grazing is considered here to depend on spatially weighted average vegetation density, instead of density at a particular point. Moreover, varying influence of vegetation at any location over grazing elsewhere, is incorporated by choosing a suitable averaging function. A comprehensive analysis demonstrates that inclusion of spatial non-locality, alters the understanding of system dynamics significantly. The grazing ecosystem is found to be more resilient to increasing aridity than it was anticipated to be in earlier studies on non-local grazing. The system-response to rising environmental pressure is also observed to vary depending on the grazer. Obtained results also suggest the possibility of multi-stability due to the history-dependence of system-response. Overall, this work indicates that the spatial heterogeneity in grazing intensity has a decisive role to play in the functioning of water-limited ecosystems. |
1710.00495 | Diwakar Shukla | Zahra Shamsi, Kevin J. Cheng and Diwakar Shukla | REinforcement learning based Adaptive samPling: REAPing Rewards by
Exploring Protein Conformational Landscapes | null | null | null | null | q-bio.BM physics.bio-ph physics.chem-ph | http://creativecommons.org/licenses/by/4.0/ | One of the key limitations of Molecular Dynamics simulations is the
computational intractability of sampling protein conformational landscapes
associated with either large system size or long timescales. To overcome this
bottleneck, we present the REinforcement learning based Adaptive samPling
(REAP) algorithm that aims to efficiently sample conformational space by
learning the relative importance of each reaction coordinate as it samples the
landscape. To achieve this, the algorithm uses concepts from the field of
reinforcement learning, a subset of machine learning, which rewards sampling
along important degrees of freedom and disregards others that do not facilitate
exploration or exploitation. We demonstrate the effectiveness of REAP by
comparing the sampling to long continuous MD simulations and least-counts
adaptive sampling on two model landscapes (L-shaped and circular), and
realistic systems such as alanine dipeptide and Src kinase. In all four
systems, the REAP algorithm consistently demonstrates its ability to explore
conformational space faster than the other two methods when comparing the
expected values of the landscape discovered for a given amount of time. The key
advantage of REAP is on-the-fly estimation of the importance of collective
variables, which makes it particularly useful for systems with limited
structural information.
| [
{
"created": "Mon, 2 Oct 2017 05:57:07 GMT",
"version": "v1"
},
{
"created": "Thu, 5 Jul 2018 20:16:49 GMT",
"version": "v2"
}
] | 2018-07-09 | [
[
"Shamsi",
"Zahra",
""
],
[
"Cheng",
"Kevin J.",
""
],
[
"Shukla",
"Diwakar",
""
]
] | One of the key limitations of Molecular Dynamics simulations is the computational intractability of sampling protein conformational landscapes associated with either large system size or long timescales. To overcome this bottleneck, we present the REinforcement learning based Adaptive samPling (REAP) algorithm that aims to efficiently sample conformational space by learning the relative importance of each reaction coordinate as it samples the landscape. To achieve this, the algorithm uses concepts from the field of reinforcement learning, a subset of machine learning, which rewards sampling along important degrees of freedom and disregards others that do not facilitate exploration or exploitation. We demonstrate the effectiveness of REAP by comparing the sampling to long continuous MD simulations and least-counts adaptive sampling on two model landscapes (L-shaped and circular), and realistic systems such as alanine dipeptide and Src kinase. In all four systems, the REAP algorithm consistently demonstrates its ability to explore conformational space faster than the other two methods when comparing the expected values of the landscape discovered for a given amount of time. The key advantage of REAP is on-the-fly estimation of the importance of collective variables, which makes it particularly useful for systems with limited structural information. |
1604.07457 | Panqu Wang | Panqu Wang, Garrison Cottrell | Modeling the Contribution of Central Versus Peripheral Vision in Scene,
Object, and Face Recognition | CogSci 2016 Conference Paper | null | null | null | q-bio.NC cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is commonly believed that the central visual field is important for
recognizing objects and faces, and the peripheral region is useful for scene
recognition. However, the relative importance of central versus peripheral
information for object, scene, and face recognition is unclear. In a behavioral
study, Larson and Loschky (2009) investigated this question by measuring the
scene recognition accuracy as a function of visual angle, and demonstrated that
peripheral vision was indeed more useful in recognizing scenes than central
vision. In this work, we modeled and replicated the result of Larson and
Loschky (2009), using deep convolutional neural networks. Having fit the data
for scenes, we used the model to predict future data for large-scale scene
recognition as well as for objects and faces. Our results suggest that the
relative order of importance of using central visual field information is face
recognition>object recognition>scene recognition, and vice-versa for peripheral
information.
| [
{
"created": "Mon, 25 Apr 2016 22:01:50 GMT",
"version": "v1"
}
] | 2016-04-27 | [
[
"Wang",
"Panqu",
""
],
[
"Cottrell",
"Garrison",
""
]
] | It is commonly believed that the central visual field is important for recognizing objects and faces, and the peripheral region is useful for scene recognition. However, the relative importance of central versus peripheral information for object, scene, and face recognition is unclear. In a behavioral study, Larson and Loschky (2009) investigated this question by measuring the scene recognition accuracy as a function of visual angle, and demonstrated that peripheral vision was indeed more useful in recognizing scenes than central vision. In this work, we modeled and replicated the result of Larson and Loschky (2009), using deep convolutional neural networks. Having fit the data for scenes, we used the model to predict future data for large-scale scene recognition as well as for objects and faces. Our results suggest that the relative order of importance of using central visual field information is face recognition>object recognition>scene recognition, and vice-versa for peripheral information. |
1706.06089 | Alberto Sorrentino | Filippo Rossi, Gianvittorio Luria, Sara Sommariva and Alberto
Sorrentino | Bayesian multi--dipole localization and uncertainty quantification from
simultaneous EEG and MEG recordings | 4 pages, 3 figures -- conference paper from EMBEC 2017, Tampere,
Finland | null | 10.1007/978-981-10-5122-7_211 | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We deal with estimation of multiple dipoles from combined MEG and EEG
time--series. We use a sequential Monte Carlo algorithm to characterize the
posterior distribution of the number of dipoles and their locations. By
considering three test cases, we show that using the combined data the method
can localize sources that are not easily (or not at all) visible with either of
the two individual data alone. In addition, the posterior distribution from
combined data exhibits a lower variance, i.e. lower uncertainty, than the
posterior from a single device.
| [
{
"created": "Mon, 19 Jun 2017 17:59:44 GMT",
"version": "v1"
}
] | 2017-06-20 | [
[
"Rossi",
"Filippo",
""
],
[
"Luria",
"Gianvittorio",
""
],
[
"Sommariva",
"Sara",
""
],
[
"Sorrentino",
"Alberto",
""
]
] | We deal with estimation of multiple dipoles from combined MEG and EEG time--series. We use a sequential Monte Carlo algorithm to characterize the posterior distribution of the number of dipoles and their locations. By considering three test cases, we show that using the combined data the method can localize sources that are not easily (or not at all) visible with either of the two individual data alone. In addition, the posterior distribution from combined data exhibits a lower variance, i.e. lower uncertainty, than the posterior from a single device. |
2206.07015 | Shuke Zhang | Shuke Zhang, Yanzhao Jin, Tianmeng Liu, Qi Wang, Zhaohui Zhang,
Shuliang Zhao, Bo Shan | SS-GNN: A Simple-Structured Graph Neural Network for Affinity Prediction | null | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Efficient and effective drug-target binding affinity (DTBA) prediction is a
challenging task due to the limited computational resources in practical
applications and is a crucial basis for drug screening. Inspired by the good
representation ability of graph neural networks (GNNs), we propose a
simple-structured GNN model named SS-GNN to accurately predict DTBA. By
constructing a single undirected graph based on a distance threshold to
represent protein-ligand interactions, the scale of the graph data is greatly
reduced. Moreover, ignoring covalent bonds in the protein further reduces the
computational cost of the model. The GNN-MLP module takes the latent feature
extraction of atoms and edges in the graph as two mutually independent
processes. We also develop an edge-based atom-pair feature aggregation method
to represent complex interactions and a graph pooling-based method to predict
the binding affinity of the complex. We achieve state-of-the-art prediction
performance using a simple model (with only 0.6M parameters) without
introducing complicated geometric feature descriptions. SS-GNN achieves
Pearson's Rp=0.853 on the PDBbind v2016 core set, outperforming
state-of-the-art GNN-based methods by 5.2%. Moreover, the simplified model
structure and concise data processing procedure improve the prediction
efficiency of the model. For a typical protein-ligand complex, affinity
prediction takes only 0.2 ms. All codes are freely accessible at
https://github.com/xianyuco/SS-GNN.
| [
{
"created": "Wed, 25 May 2022 04:47:13 GMT",
"version": "v1"
}
] | 2022-06-15 | [
[
"Zhang",
"Shuke",
""
],
[
"Jin",
"Yanzhao",
""
],
[
"Liu",
"Tianmeng",
""
],
[
"Wang",
"Qi",
""
],
[
"Zhang",
"Zhaohui",
""
],
[
"Zhao",
"Shuliang",
""
],
[
"Shan",
"Bo",
""
]
] | Efficient and effective drug-target binding affinity (DTBA) prediction is a challenging task due to the limited computational resources in practical applications and is a crucial basis for drug screening. Inspired by the good representation ability of graph neural networks (GNNs), we propose a simple-structured GNN model named SS-GNN to accurately predict DTBA. By constructing a single undirected graph based on a distance threshold to represent protein-ligand interactions, the scale of the graph data is greatly reduced. Moreover, ignoring covalent bonds in the protein further reduces the computational cost of the model. The GNN-MLP module takes the latent feature extraction of atoms and edges in the graph as two mutually independent processes. We also develop an edge-based atom-pair feature aggregation method to represent complex interactions and a graph pooling-based method to predict the binding affinity of the complex. We achieve state-of-the-art prediction performance using a simple model (with only 0.6M parameters) without introducing complicated geometric feature descriptions. SS-GNN achieves Pearson's Rp=0.853 on the PDBbind v2016 core set, outperforming state-of-the-art GNN-based methods by 5.2%. Moreover, the simplified model structure and concise data processing procedure improve the prediction efficiency of the model. For a typical protein-ligand complex, affinity prediction takes only 0.2 ms. All codes are freely accessible at https://github.com/xianyuco/SS-GNN. |
1605.02992 | David Richards | David M. Richards, Robert G. Endres | Target shape dependence in a simple model of receptor-mediated
endocytosis and phagocytosis | 18 pages, 5 figures | Proc Natl Acad Sci USA 113(22):6113-6118 (2016) | 10.1073/pnas.1521974113 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phagocytosis and receptor-mediated endocytosis are vitally important particle
uptake mechanisms in many cell types, ranging from single-cell organisms to
immune cells. In both processes, engulfment by the cell depends critically on
both particle shape and orientation. However, most previous theoretical work
has focused only on spherical particles and hence disregards the wide-ranging
particle shapes occurring in nature, such as those of bacteria. Here, by
implementing a simple model in one and two dimensions, we compare and contrast
receptor-mediated endocytosis and phagocytosis for a range of biologically
relevant shapes, including spheres, ellipsoids, capped cylinders, and
hourglasses. We find a whole range of different engulfment behaviors with some
ellipsoids engulfing faster than spheres, and that phagocytosis is able to
engulf a greater range of target shapes than other types of endocytosis.
Further, the 2D model can explain why some nonspherical particles engulf
fastest (not at all) when presented to the membrane tip-first (lying flat). Our
work reveals how some bacteria may avoid being internalized simply because of
their shape, and suggests shapes for optimal drug delivery.
| [
{
"created": "Tue, 10 May 2016 13:17:29 GMT",
"version": "v1"
}
] | 2016-06-02 | [
[
"Richards",
"David M.",
""
],
[
"Endres",
"Robert G.",
""
]
] | Phagocytosis and receptor-mediated endocytosis are vitally important particle uptake mechanisms in many cell types, ranging from single-cell organisms to immune cells. In both processes, engulfment by the cell depends critically on both particle shape and orientation. However, most previous theoretical work has focused only on spherical particles and hence disregards the wide-ranging particle shapes occurring in nature, such as those of bacteria. Here, by implementing a simple model in one and two dimensions, we compare and contrast receptor-mediated endocytosis and phagocytosis for a range of biologically relevant shapes, including spheres, ellipsoids, capped cylinders, and hourglasses. We find a whole range of different engulfment behaviors with some ellipsoids engulfing faster than spheres, and that phagocytosis is able to engulf a greater range of target shapes than other types of endocytosis. Further, the 2D model can explain why some nonspherical particles engulf fastest (not at all) when presented to the membrane tip-first (lying flat). Our work reveals how some bacteria may avoid being internalized simply because of their shape, and suggests shapes for optimal drug delivery. |
2008.12020 | Sayantari Ghosh | Sayantari Ghosh and Saumik Bhattacharya | A Data-driven Understanding of COVID-19 Dynamics Using Sequential
Genetic Algorithm Based Probabilistic Cellular Automata | 27 pages, 9 figures | null | null | null | q-bio.QM cs.LG cs.NE nlin.CG physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | COVID-19 pandemic is severely impacting the lives of billions across the
globe. Even after taking massive protective measures like nation-wide
lockdowns, discontinuation of international flight services, rigorous testing
etc., the infection spreading is still growing steadily, causing thousands of
deaths and a serious socio-economic crisis. Thus, the identification of the major
factors of this infection spreading dynamics is becoming crucial to minimize
impact and lifetime of COVID-19 and any future pandemic. In this work, a
probabilistic cellular automata based method has been employed to model the
infection dynamics for a significant number of different countries. This study
proposes that for an accurate data-driven modeling of this infection spread,
cellular automata provides an excellent platform, with a sequential genetic
algorithm for efficiently estimating the parameters of the dynamics. To the
best of our knowledge, this is the first attempt to understand and interpret
COVID-19 data using optimized cellular automata through a genetic algorithm. It
has been demonstrated that the proposed methodology can be flexible and robust
at the same time, and can be used to model the daily active cases, total number
of infected people and total death cases through systematic parameter
estimation. Elaborate analyses for COVID-19 statistics of forty countries from
different continents have been performed, with markedly divergent time
evolution of the infection spreading because of demographic and socioeconomic
factors. The substantial predictive power of this model has been established
with conclusions on the key players in this pandemic dynamics.
| [
{
"created": "Thu, 27 Aug 2020 09:53:21 GMT",
"version": "v1"
}
] | 2020-08-28 | [
[
"Ghosh",
"Sayantari",
""
],
[
"Bhattacharya",
"Saumik",
""
]
] | COVID-19 pandemic is severely impacting the lives of billions across the globe. Even after taking massive protective measures like nation-wide lockdowns, discontinuation of international flight services, rigorous testing etc., the infection spreading is still growing steadily, causing thousands of deaths and a serious socio-economic crisis. Thus, the identification of the major factors of this infection spreading dynamics is becoming crucial to minimize impact and lifetime of COVID-19 and any future pandemic. In this work, a probabilistic cellular automata based method has been employed to model the infection dynamics for a significant number of different countries. This study proposes that for an accurate data-driven modeling of this infection spread, cellular automata provides an excellent platform, with a sequential genetic algorithm for efficiently estimating the parameters of the dynamics. To the best of our knowledge, this is the first attempt to understand and interpret COVID-19 data using optimized cellular automata through a genetic algorithm. It has been demonstrated that the proposed methodology can be flexible and robust at the same time, and can be used to model the daily active cases, total number of infected people and total death cases through systematic parameter estimation. Elaborate analyses for COVID-19 statistics of forty countries from different continents have been performed, with markedly divergent time evolution of the infection spreading because of demographic and socioeconomic factors. The substantial predictive power of this model has been established with conclusions on the key players in this pandemic dynamics. |
2104.11844 | Brian Corneil | Sebastian J Lehmann and Brian D Corneil | Completing the puzzle: why studies in non-human primates are needed to
better understand the effects of non-invasive brain stimulation | 56 pages, 2 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Brain stimulation is a core method in neuroscience. Numerous non-invasive
brain stimulation (NIBS) techniques are currently in use in basic and clinical
research, and recent advances promise the ability to non-invasively access deep
brain structures. While encouraging, there is a surprising gap in our
understanding of precisely how NIBS perturbs neural activity throughout an
interconnected network, and how such perturbed neural activity ultimately links
to behaviour. In this review, we will consider why non-human primate (NHP)
models of NIBS are ideally situated to address this gap in knowledge, and will
consider why the oculomotor network that moves our line of sight offers a
particularly valuable platform in which to empirically test hypotheses
regarding NIBS-induced changes in brain and behaviour. NHP models of NIBS will
enable investigation of the complex, dynamic effects of brain stimulation
across multiple hierarchically interconnected brain areas, networks, and
effectors. By establishing such links between brain and behavioural output,
work in NHPs can help optimize experimental and therapeutic approaches, improve
NIBS efficacy, and reduce side-effects of NIBS.
| [
{
"created": "Sat, 24 Apr 2021 00:31:06 GMT",
"version": "v1"
},
{
"created": "Mon, 1 Nov 2021 17:37:52 GMT",
"version": "v2"
}
] | 2021-11-02 | [
[
"Lehmann",
"Sebastian J",
""
],
[
"Corneil",
"Brian D",
""
]
] | Brain stimulation is a core method in neuroscience. Numerous non-invasive brain stimulation (NIBS) techniques are currently in use in basic and clinical research, and recent advances promise the ability to non-invasively access deep brain structures. While encouraging, there is a surprising gap in our understanding of precisely how NIBS perturbs neural activity throughout an interconnected network, and how such perturbed neural activity ultimately links to behaviour. In this review, we will consider why non-human primate (NHP) models of NIBS are ideally situated to address this gap in knowledge, and will consider why the oculomotor network that moves our line of sight offers a particularly valuable platform in which to empirically test hypotheses regarding NIBS-induced changes in brain and behaviour. NHP models of NIBS will enable investigation of the complex, dynamic effects of brain stimulation across multiple hierarchically interconnected brain areas, networks, and effectors. By establishing such links between brain and behavioural output, work in NHPs can help optimize experimental and therapeutic approaches, improve NIBS efficacy, and reduce side-effects of NIBS. |
1603.00200 | Javier Buldu | David Papo, Massimiliano Zanin, Johann H. Mart\'inez, and Javier M.
Buld\'u | Beware of the Small-World neuroscientist! | null | Front. Hum. Neurosci. 10:96 (2016) | 10.3389/fnhum.2016.00096 | null | q-bio.NC physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The SW has undeniably been one of the most popular network descriptors in the
neuroscience literature. Two main reasons for its lasting popularity are its
apparent ease of computation and the intuitions it is thought to provide on how
networked systems operate. Over the last few years, some pitfalls of the SW
construct and, more generally, of network summary measures, have widely been
acknowledged.
| [
{
"created": "Tue, 1 Mar 2016 09:44:36 GMT",
"version": "v1"
}
] | 2016-03-02 | [
[
"Papo",
"David",
""
],
[
"Zanin",
"Massimiliano",
""
],
[
"Martínez",
"Johann H.",
""
],
[
"Buldú",
"Javier M.",
""
]
] | The SW has undeniably been one of the most popular network descriptors in the neuroscience literature. Two main reasons for its lasting popularity are its apparent ease of computation and the intuitions it is thought to provide on how networked systems operate. Over the last few years, some pitfalls of the SW construct and, more generally, of network summary measures, have widely been acknowledged. |
1008.1273 | Anyou Wang | Anyou Wang, Hong Li | A Systemic Receptor Network Triggered by Human cytomegalovirus Entry | 26 pages | Advances in Virology Volume 2011 (2011), Article ID 262080, 11
pages | 10.1155/2011/262080 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Virus entry is a multistep process that triggers a variety of cellular
pathways interconnecting into a complex network, yet the molecular complexity
of this network remains largely unsolved. Here, by employing a systems biology
approach, we reveal a systemic virus-entry network initiated by human
cytomegalovirus (HCMV), a widespread opportunistic pathogen. This network
contains all known interactions and functional modules (i.e. groups of
proteins) coordinately responding to HCMV entry. The number of both genes and
functional modules activated in this network dramatically declines shortly,
within 25 min post-infection. While modules annotated as receptor system, ion
transport, and immune response are continuously activated during the entire
process of HCMV entry, those for cell adhesion and skeletal movement are
specifically activated during viral early attachment, and those for immune
response during virus entry. HCMV entry requires a complex receptor network
involving different cellular components, comprising not only cell surface
receptors, but also pathway components in signal transduction, skeletal
development, immune response, endocytosis, ion transport, macromolecule
metabolism and chromatin remodeling. Interestingly, genes that function in
chromatin remodeling are the most abundant in this receptor system, suggesting
that global modulation of transcriptions is one of the most important events in
HCMV entry. Results of in silico knock out further reveal that this entire
receptor network is primarily controlled by multiple elements, such as EGFR
(Epidermal Growth Factor Receptor) and SLC10A1 (sodium/bile acid cotransporter family,
member 1). Thus, our results demonstrate that a complex systemic network, in
which components coordinate efficiently in time and space, contributes to
virus entry.
| [
{
"created": "Fri, 6 Aug 2010 20:16:33 GMT",
"version": "v1"
}
] | 2011-05-17 | [
[
"Wang",
"Anyou",
""
],
[
"Li",
"Hong",
""
]
] | Virus entry is a multistep process that triggers a variety of cellular pathways interconnecting into a complex network, yet the molecular complexity of this network remains largely unsolved. Here, by employing a systems biology approach, we reveal a systemic virus-entry network initiated by human cytomegalovirus (HCMV), a widespread opportunistic pathogen. This network contains all known interactions and functional modules (i.e. groups of proteins) coordinately responding to HCMV entry. The number of both genes and functional modules activated in this network dramatically declines shortly, within 25 min post-infection. While modules annotated as receptor system, ion transport, and immune response are continuously activated during the entire process of HCMV entry, those for cell adhesion and skeletal movement are specifically activated during viral early attachment, and those for immune response during virus entry. HCMV entry requires a complex receptor network involving different cellular components, comprising not only cell surface receptors, but also pathway components in signal transduction, skeletal development, immune response, endocytosis, ion transport, macromolecule metabolism and chromatin remodeling. Interestingly, genes that function in chromatin remodeling are the most abundant in this receptor system, suggesting that global modulation of transcriptions is one of the most important events in HCMV entry. Results of in silico knock out further reveal that this entire receptor network is primarily controlled by multiple elements, such as EGFR (Epidermal Growth Factor Receptor) and SLC10A1 (sodium/bile acid cotransporter family, member 1). Thus, our results demonstrate that a complex systemic network, in which components coordinate efficiently in time and space, contributes to virus entry. |
1701.06349 | Aparna Rai | Aparna Rai, Priodyuti Pradhan, Jyothi Nagraj, K. Lohitesh, Rajdeep
Chowdhury, Sarika Jalan | Understanding cancer complexome using networks, spectral graph theory
and multilayer framework | 25 pages, 6 figures, accepted in scientific reports (in press) | null | 10.1038/srep41676 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cancer complexome comprises a heterogeneous and multifactorial milieu that
varies in cytology, physiology, signaling mechanisms and response to therapy.
The combined framework of network theory and spectral graph theory along with
the multilayer analysis provides a comprehensive approach to analyze the
proteomic data of seven different cancers, namely, breast, oral, ovarian,
cervical, lung, colon and prostate. Our analysis demonstrates that the
protein-protein interaction networks of the normal and the cancerous tissues
associated with the seven cancers have overall similar structural and spectral
properties. However, few of these properties implicate unsystematic changes
from the normal to the disease networks depicting difference in the
interactions and highlighting changes in the complexity of different cancers.
Importantly, analysis of common proteins of all the cancer networks reveals few
proteins namely the sensors, which not only occupy significant position in all
the layers but also have direct involvement in causing cancer. The prediction
and analysis of miRNAs targeting these sensor proteins hint towards the
possible role of these proteins in tumorigenesis. This novel approach helps in
understanding cancer at the fundamental level and provides a clue to develop
promising and nascent concept of single drug therapy for multiple diseases as
well as personalized medicine.
| [
{
"created": "Mon, 23 Jan 2017 11:56:06 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Mar 2017 05:46:16 GMT",
"version": "v2"
}
] | 2017-03-03 | [
[
"Rai",
"Aparna",
""
],
[
"Pradhan",
"Priodyuti",
""
],
[
"Nagraj",
"Jyothi",
""
],
[
"Lohitesh",
"K.",
""
],
[
"Chowdhury",
"Rajdeep",
""
],
[
"Jalan",
"Sarika",
""
]
] | Cancer complexome comprises a heterogeneous and multifactorial milieu that varies in cytology, physiology, signaling mechanisms and response to therapy. The combined framework of network theory and spectral graph theory along with the multilayer analysis provides a comprehensive approach to analyze the proteomic data of seven different cancers, namely, breast, oral, ovarian, cervical, lung, colon and prostate. Our analysis demonstrates that the protein-protein interaction networks of the normal and the cancerous tissues associated with the seven cancers have overall similar structural and spectral properties. However, few of these properties implicate unsystematic changes from the normal to the disease networks depicting difference in the interactions and highlighting changes in the complexity of different cancers. Importantly, analysis of common proteins of all the cancer networks reveals few proteins namely the sensors, which not only occupy significant position in all the layers but also have direct involvement in causing cancer. The prediction and analysis of miRNAs targeting these sensor proteins hint towards the possible role of these proteins in tumorigenesis. This novel approach helps in understanding cancer at the fundamental level and provides a clue to develop promising and nascent concept of single drug therapy for multiple diseases as well as personalized medicine. |
1402.3632 | Benjamin Good | Benjamin M. Good, Salvatore Loguercio, Obi L. Griffith, Max Nanis,
Chunlei Wu, Andrew I. Su | The Cure: Making a game of gene selection for breast cancer survival
prediction | null | Good BM, Loguercio S, Griffith OL, Nanis M, Wu C, Su AI The Cure:
Design and Evaluation of a Crowdsourcing Game for Gene Selection for Breast
Cancer Survival Prediction JMIR Serious Games 2014;2(2):e7 | 10.2196/games.3350 | null | q-bio.GN q-bio.QM | http://creativecommons.org/licenses/by/3.0/ | Motivation: Molecular signatures for predicting breast cancer prognosis could
greatly improve care through personalization of treatment. Computational
analyses of genome-wide expression datasets have identified such signatures,
but these signatures leave much to be desired in terms of accuracy,
reproducibility and biological interpretability. Methods that take advantage of
structured prior knowledge (e.g. protein interaction networks) show promise in
helping to define better signatures but most knowledge remains unstructured.
Crowdsourcing via scientific discovery games is an emerging methodology that
has the potential to tap into human intelligence at scales and in modes
previously unheard of. Here, we developed and evaluated a game called The Cure
on the task of gene selection for breast cancer survival prediction. Our
central hypothesis was that knowledge linking expression patterns of specific
genes to breast cancer outcomes could be captured from game players. We
envisioned capturing knowledge both from the players' prior experience and from
their ability to interpret text related to candidate genes presented to them in
the context of the game.
Results: Between its launch in Sept. 2012 and Sept. 2013, The Cure attracted
more than 1,000 registered players who collectively played nearly 10,000 games.
Gene sets assembled through aggregation of the collected data clearly
demonstrated the accumulation of relevant expert knowledge. In terms of
predictive accuracy, these gene sets provided comparable performance to gene
sets generated using other methods including those used in commercial tests.
The Cure is available at http://genegames.org/cure/
| [
{
"created": "Sat, 15 Feb 2014 01:07:23 GMT",
"version": "v1"
}
] | 2014-07-31 | [
[
"Good",
"Benjamin M.",
""
],
[
"Loguercio",
"Salvatore",
""
],
[
"Griffith",
"Obi L.",
""
],
[
"Nanis",
"Max",
""
],
[
"Wu",
"Chunlei",
""
],
[
"Su",
"Andrew I.",
""
]
] | Motivation: Molecular signatures for predicting breast cancer prognosis could greatly improve care through personalization of treatment. Computational analyses of genome-wide expression datasets have identified such signatures, but these signatures leave much to be desired in terms of accuracy, reproducibility and biological interpretability. Methods that take advantage of structured prior knowledge (e.g. protein interaction networks) show promise in helping to define better signatures but most knowledge remains unstructured. Crowdsourcing via scientific discovery games is an emerging methodology that has the potential to tap into human intelligence at scales and in modes previously unheard of. Here, we developed and evaluated a game called The Cure on the task of gene selection for breast cancer survival prediction. Our central hypothesis was that knowledge linking expression patterns of specific genes to breast cancer outcomes could be captured from game players. We envisioned capturing knowledge both from the players' prior experience and from their ability to interpret text related to candidate genes presented to them in the context of the game. Results: Between its launch in Sept. 2012 and Sept. 2013, The Cure attracted more than 1,000 registered players who collectively played nearly 10,000 games. Gene sets assembled through aggregation of the collected data clearly demonstrated the accumulation of relevant expert knowledge. In terms of predictive accuracy, these gene sets provided comparable performance to gene sets generated using other methods including those used in commercial tests. The Cure is available at http://genegames.org/cure/ |
2209.10545 | Niloofar Aghaieabiane | Niloofar Aghaieabiane and Ioannis Koutis | SGC: A semi-supervised pipeline for gene clustering using self-training
approach in gene co-expression networks | null | null | null | null | q-bio.GN cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | A widely used approach for extracting information from gene expression data
employs the construction of a gene co-expression network and the subsequent
application of algorithms that discover network structure. In particular, a
common goal is the computational discovery of gene clusters, commonly called
modules. When applied on a novel gene expression dataset, the quality of the
computed modules can be evaluated automatically, using Gene Ontology
enrichment, a method that measures the frequencies of Gene Ontology terms in
the computed modules and evaluates their statistical likelihood. In this work
we propose SGC, a novel pipeline for gene clustering based on relatively recent
seminal work in the mathematics of spectral network theory. SGC consists of
multiple novel steps that enable the computation of highly enriched modules in
an unsupervised manner. But unlike all existing frameworks, it further
incorporates a novel step that leverages Gene Ontology information in a
semi-supervised clustering method that further improves the quality of the
computed modules. Comparing with already well-known existing frameworks, we
show that SGC results in higher enrichment in real data. In particular, in 12
real gene expression datasets, SGC outperforms in all except one.
| [
{
"created": "Wed, 21 Sep 2022 14:51:08 GMT",
"version": "v1"
}
] | 2022-09-23 | [
[
"Aghaieabiane",
"Niloofar",
""
],
[
"Koutis",
"Ioannis",
""
]
] | A widely used approach for extracting information from gene expression data employs the construction of a gene co-expression network and the subsequent application of algorithms that discover network structure. In particular, a common goal is the computational discovery of gene clusters, commonly called modules. When applied on a novel gene expression dataset, the quality of the computed modules can be evaluated automatically, using Gene Ontology enrichment, a method that measures the frequencies of Gene Ontology terms in the computed modules and evaluates their statistical likelihood. In this work we propose SGC, a novel pipeline for gene clustering based on relatively recent seminal work in the mathematics of spectral network theory. SGC consists of multiple novel steps that enable the computation of highly enriched modules in an unsupervised manner. But unlike all existing frameworks, it further incorporates a novel step that leverages Gene Ontology information in a semi-supervised clustering method that further improves the quality of the computed modules. Comparing with already well-known existing frameworks, we show that SGC results in higher enrichment in real data. In particular, in 12 real gene expression datasets, SGC outperforms in all except one. |
2305.14404 | Shuqiang Wang | Qiankun Zuo, Baiying Lei, Ning Zhong, Yi Pan, Shuqiang Wang | Brain Structure-Function Fusing Representation Learning using
Adversarial Decomposed-VAE for Analyzing MCI | null | null | null | null | q-bio.NC cs.AI cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Integrating the brain structural and functional connectivity features is of
great significance in both exploring brain science and analyzing cognitive
impairment clinically. However, it remains a challenge to effectively fuse
structural and functional features in exploring the brain network. In this
paper, a novel brain structure-function fusing-representation learning (BSFL)
model is proposed to effectively learn fused representation from diffusion
tensor imaging (DTI) and resting-state functional magnetic resonance imaging
(fMRI) for mild cognitive impairment (MCI) analysis. Specifically, the
decomposition-fusion framework is developed to first decompose the feature
space into the union of the uniform and the unique spaces for each modality,
and then adaptively fuse the decomposed features to learn MCI-related
representation. Moreover, a knowledge-aware transformer module is designed to
automatically capture local and global connectivity features throughout the
brain. Also, a uniform-unique contrastive loss is further devised to make the
decomposition more effective and enhance the complementarity of structural and
functional features. The extensive experiments demonstrate that the proposed
model achieves better performance than other competitive methods in predicting
and analyzing MCI. More importantly, the proposed model could be a potential
tool for reconstructing unified brain networks and predicting abnormal
connections during the degenerative processes in MCI.
| [
{
"created": "Tue, 23 May 2023 11:19:02 GMT",
"version": "v1"
}
] | 2023-05-25 | [
[
"Zuo",
"Qiankun",
""
],
[
"Lei",
"Baiying",
""
],
[
"Zhong",
"Ning",
""
],
[
"Pan",
"Yi",
""
],
[
"Wang",
"Shuqiang",
""
]
] | Integrating the brain structural and functional connectivity features is of great significance in both exploring brain science and analyzing cognitive impairment clinically. However, it remains a challenge to effectively fuse structural and functional features in exploring the brain network. In this paper, a novel brain structure-function fusing-representation learning (BSFL) model is proposed to effectively learn fused representation from diffusion tensor imaging (DTI) and resting-state functional magnetic resonance imaging (fMRI) for mild cognitive impairment (MCI) analysis. Specifically, the decomposition-fusion framework is developed to first decompose the feature space into the union of the uniform and the unique spaces for each modality, and then adaptively fuse the decomposed features to learn MCI-related representation. Moreover, a knowledge-aware transformer module is designed to automatically capture local and global connectivity features throughout the brain. Also, a uniform-unique contrastive loss is further devised to make the decomposition more effective and enhance the complementarity of structural and functional features. The extensive experiments demonstrate that the proposed model achieves better performance than other competitive methods in predicting and analyzing MCI. More importantly, the proposed model could be a potential tool for reconstructing unified brain networks and predicting abnormal connections during the degenerative processes in MCI. |
1907.00249 | Jude Kong | Christina P. Tadiri, Jude D. Kong, Gregor F. Fussmann, Marilyn E.
Scott, and Hao Wang | A Data-Validated Host-Parasite Model for Infectious Disease Outbreaks | * Equally contributing first authors | null | 10.3389/fevo.2019.00307 | null | q-bio.PE math.DS stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The use of model experimental systems and mathematical models is important to
further understanding of infectious disease dynamics and strategize disease
mitigation. Gyrodactylids are helminth ectoparasites of teleost fish which have
many dynamical characteristics of microparasites but offer the advantage that
they can be quantified and tracked over time, allowing further insight into
within-host and epidemic dynamics. In this paper, we design a model to describe
host-parasite dynamics of the well-studied guppy-Gyrodactylus turnbulli system,
using experimental data to estimate parameters and validate it. We estimate the
basic reproduction number (R_0) for this system. Sensitivity analysis reveals
that parasite growth rate and the rate at which the guppy mounts an immune
response have the greatest impact on outbreak peak and timing both for initial
outbreaks and on longer time scales. These findings highlight guppy population
resistance and parasite virulence as key factors in disease control, and future
work should focus on incorporating heterogeneity in host resistance into
disease models and extrapolating to other host-parasite systems.
| [
{
"created": "Sat, 29 Jun 2019 18:21:46 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Aug 2019 17:12:46 GMT",
"version": "v2"
}
] | 2019-08-30 | [
[
"Tadiri",
"Christina P.",
""
],
[
"Kong",
"Jude D.",
""
],
[
"Fussmann",
"Gregor F.",
""
],
[
"Scott",
"Marilyn E.",
""
],
[
"Wang",
"Hao",
""
]
] | The use of model experimental systems and mathematical models is important to further understanding of infectious disease dynamics and strategize disease mitigation. Gyrodactylids are helminth ectoparasites of teleost fish which have many dynamical characteristics of microparasites but offer the advantage that they can be quantified and tracked over time, allowing further insight into within-host and epidemic dynamics. In this paper, we design a model to describe host-parasite dynamics of the well-studied guppy-Gyrodactylus turnbulli system, using experimental data to estimate parameters and validate it. We estimate the basic reproduction number (R_0) for this system. Sensitivity analysis reveals that parasite growth rate and the rate at which the guppy mounts an immune response have the greatest impact on outbreak peak and timing both for initial outbreaks and on longer time scales. These findings highlight guppy population resistance and parasite virulence as key factors in disease control, and future work should focus on incorporating heterogeneity in host resistance into disease models and extrapolating to other host-parasite systems. |
1903.07308 | Toni Giorgino | Toni Giorgino, Davide Mattioni, Amal Hassan, Mario Milani, Eloise
Mastrangelo, Alberto Barbiroli, Adriaan Verhelle, Jan Gettemans, Maria Monica
Barzago, Luisa Diomede, Matteo de Rosa | Nanobody interaction unveils structure, dynamics and proteotoxicity of
the Finnish-type amyloidogenic gelsolin variant | null | Biochimica et Biophysica Acta (BBA) - Molecular Basis of Disease.
Volume 1865, Issue 3, 1 March 2019, Pages 648-660 | 10.1016/j.bbadis.2019.01.010 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AGel amyloidosis, formerly known as familial amyloidosis of the Finnish-type,
is caused by pathological aggregation of proteolytic fragments of plasma
gelsolin. So far, four mutations in the gelsolin gene have been reported as
responsible for the disease. Although D187N is the first identified variant and
the best characterized, its structure has been hitherto elusive. Exploiting a
recently-developed nanobody targeting gelsolin, we were able to stabilize the
G2 domain of the D187N protein and obtained, for the first time, its
high-resolution crystal structure. In the nanobody-stabilized conformation, the
main effect of the D187N substitution is the impairment of the calcium binding
capability, leading to a destabilization of the C-terminal tail of G2. However,
molecular dynamics simulations show that in the absence of the nanobody,
D187N-mutated G2 further misfolds, ultimately exposing its hydrophobic core and
the furin cleavage site. The nanobody's protective effect is based on the
enhancement of the thermodynamic stability of different G2 mutants (D187N,
G167R and N184K). In particular, the nanobody reduces the flexibility of
dynamic stretches, and most notably decreases the conformational entropy of the
C-terminal tail, otherwise stabilized by the presence of the Ca2+ ion. A
Caenorhabditis elegans-based assay was also applied to quantify the proteotoxic
potential of the mutants and determine whether nanobody stabilization
translates into a biologically relevant effect. Successful protection from G2
toxicity in vivo points to the use of C. elegans as a tool for investigating
the mechanisms underlying AGel amyloidosis and rapidly screening new therapeutics.
| [
{
"created": "Mon, 18 Mar 2019 08:50:30 GMT",
"version": "v1"
}
] | 2019-03-19 | [
[
"Giorgino",
"Toni",
""
],
[
"Mattioni",
"Davide",
""
],
[
"Hassan",
"Amal",
""
],
[
"Milani",
"Mario",
""
],
[
"Mastrangelo",
"Eloise",
""
],
[
"Barbiroli",
"Alberto",
""
],
[
"Verhelle",
"Adriaan",
""
],
[
"Gettemans",
"Jan",
""
],
[
"Barzago",
"Maria Monica",
""
],
[
"Diomede",
"Luisa",
""
],
[
"de Rosa",
"Matteo",
""
]
] | AGel amyloidosis, formerly known as familial amyloidosis of the Finnish-type, is caused by pathological aggregation of proteolytic fragments of plasma gelsolin. So far, four mutations in the gelsolin gene have been reported as responsible for the disease. Although D187N is the first identified variant and the best characterized, its structure has been hitherto elusive. Exploiting a recently-developed nanobody targeting gelsolin, we were able to stabilize the G2 domain of the D187N protein and obtained, for the first time, its high-resolution crystal structure. In the nanobody-stabilized conformation, the main effect of the D187N substitution is the impairment of the calcium binding capability, leading to a destabilization of the C-terminal tail of G2. However, molecular dynamics simulations show that in the absence of the nanobody, D187N-mutated G2 further misfolds, ultimately exposing its hydrophobic core and the furin cleavage site. The nanobody's protective effect is based on the enhancement of the thermodynamic stability of different G2 mutants (D187N, G167R and N184K). In particular, the nanobody reduces the flexibility of dynamic stretches, and most notably decreases the conformational entropy of the C-terminal tail, otherwise stabilized by the presence of the Ca2+ ion. A Caenorhabditis elegans-based assay was also applied to quantify the proteotoxic potential of the mutants and determine whether nanobody stabilization translates into a biologically relevant effect. Successful protection from G2 toxicity in vivo points to the use of C. elegans as a tool for investigating the mechanisms underlying AGel amyloidosis and rapidly screening new therapeutics. |
1404.7791 | Alexander Iomin | A. Iomin | Maximum therapeutic effect of glioma treatment by radio-frequency
electric field | Published in Chaos, Solitons & Fractals (2014) | null | 10.1016/j.chaos.2014.11.020 | null | q-bio.CB cond-mat.stat-mech physics.bio-ph physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An influence of a radio-frequency electric field on glioma - brain cancer
development is considered. A specific task emerging here is whether this new
medical technology is effective against invasive cells with a high motility,
when switching between migrating and proliferating phenotypes takes place. This
therapeutic effect is studied in the framework of a continuous time random
walk. It is shown that the migration proliferation dichotomy of cancer cells
leads to the weakening of the electric field treatment.
| [
{
"created": "Wed, 30 Apr 2014 16:41:56 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Dec 2014 10:22:00 GMT",
"version": "v2"
}
] | 2015-06-19 | [
[
"Iomin",
"A.",
""
]
] | An influence of a radio-frequency electric field on glioma - brain cancer development is considered. A specific task emerging here is whether this new medical technology is effective against invasive cells with a high motility, when switching between migrating and proliferating phenotypes takes place. This therapeutic effect is studied in the framework of a continuous time random walk. It is shown that the migration proliferation dichotomy of cancer cells leads to the weakening of the electric field treatment. |
2403.14741 | Alexei Vazquez | Alexei Vazquez | Selective advantage of aerobic glycolysis over oxidative phosphorylation | 4 pages, 1 table | null | null | null | q-bio.BM physics.bio-ph q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | The utilization of glycolysis in aerobic conditions has been a subject of
debate for more than a century. A hypothesis supported by previous data is that
glycolysis has a higher rate of ATP production per protein mass and per
occupied volume than oxidative phosphorylation (OxPhos). However, a recent work
by Shen et al. [14] challenges previous estimates, reporting that OxPhos has a
higher rate of ATP production per protein mass than glycolysis. Here I show
that Shen et al. [14] make a key assumption that is a subject of debate: that the
proteomic cost of OxPhos is limited to proteins in enzymes of OxPhos and the
TCA cycle. I argue that an intact mitochondrion is required for functional
OxPhos and therefore the whole mitochondrial protein content should be included
for the cost estimate of OxPhos. After doing so, glycolysis is the most
efficient pathway per protein mass or per volume fraction.
| [
{
"created": "Thu, 21 Mar 2024 17:53:31 GMT",
"version": "v1"
}
] | 2024-03-25 | [
[
"Vazquez",
"Alexei",
""
]
] | The utilization of glycolysis in aerobic conditions has been a subject of debate for more than a century. A hypothesis supported by previous data is that glycolysis has a higher rate of ATP production per protein mass and per occupied volume than oxidative phosphorylation (OxPhos). However, a recent work by Shen et al. [14] challenges previous estimates, reporting that OxPhos has a higher rate of ATP production per protein mass than glycolysis. Here I show that Shen et al. [14] make a key assumption that is a subject of debate: that the proteomic cost of OxPhos is limited to proteins in enzymes of OxPhos and the TCA cycle. I argue that an intact mitochondrion is required for functional OxPhos and therefore the whole mitochondrial protein content should be included for the cost estimate of OxPhos. After doing so, glycolysis is the most efficient pathway per protein mass or per volume fraction. |
2202.12711 | Mansoor Haider | Mansoor A. Haider, Katherine J. Pearce, Naomi C. Chesler, Nicholas A.
Hill and Mette S. Olufsen | Application and reduction of a nonlinear hyperelastic wall model
capturing ex vivo relationships between fluid pressure, area and wall
thickness in normal and hypertensive murine left pulmonary arteries | null | null | null | null | q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Pulmonary hypertension is a cardiovascular disorder manifested by elevated
arterial blood pressure together with vessel wall stiffening and thickening due
to alterations in collagen, elastin and smooth muscle cells. Hypoxia-induced
(type 3) pulmonary hypertension can be studied in animals exposed to a low
oxygen environment for prolonged time periods leading to biomechanical
alterations in vessel wall structure. This study formulates and systematically
reduces a nonlinear elastic structural wall model for a large pulmonary artery,
generating a novel pressure-area relation capturing remodeling in type 3
pulmonary hypertension. The model is calibrated using {\em ex vivo}
measurements of vessel diameter and wall thickness changes, under controlled
flow conditions, in left pulmonary arteries isolated from control and
hypertensive mice. A two-layer, hyperelastic, anisotropic model incorporating
residual stresses is formulated using the Holzapfel-Gasser-Ogden model. Complex
relations predicting vessel area and wall thickness with increasing blood
pressure are derived and calibrated using the data. Sensitivity analysis,
parameter estimation and subset selection are used to systematically reduce the
16-parameter model to one in which a much smaller subset of identifiable
parameters is estimated via solution of an inverse problem. Our final reduced
model includes a single set of three elastic moduli. Estimated ranges of these
parameters demonstrate that nonlinear stiffening is dominated by elastin in the
control animals and by collagen in the hypertensive group. The novel
pressure-area relation developed in this study has potential impact on
one-dimensional fluids network models of vessel wall remodeling in the presence
of cardiovascular disease.
| [
{
"created": "Fri, 11 Feb 2022 17:35:28 GMT",
"version": "v1"
}
] | 2022-02-28 | [
[
"Haider",
"Mansoor A.",
""
],
[
"Pearce",
"Katherine J.",
""
],
[
"Chesler",
"Naomi C.",
""
],
[
"Hill",
"Nicholas A.",
""
],
[
"Olufsen",
"Mette S.",
""
]
] | Pulmonary hypertension is a cardiovascular disorder manifested by elevated arterial blood pressure together with vessel wall stiffening and thickening due to alterations in collagen, elastin and smooth muscle cells. Hypoxia-induced (type 3) pulmonary hypertension can be studied in animals exposed to a low oxygen environment for prolonged time periods leading to biomechanical alterations in vessel wall structure. This study formulates and systematically reduces a nonlinear elastic structural wall model for a large pulmonary artery, generating a novel pressure-area relation capturing remodeling in type 3 pulmonary hypertension. The model is calibrated using {\em ex vivo} measurements of vessel diameter and wall thickness changes, under controlled flow conditions, in left pulmonary arteries isolated from control and hypertensive mice. A two-layer, hyperelastic, anisotropic model incorporating residual stresses is formulated using the Holzapfel-Gasser-Ogden model. Complex relations predicting vessel area and wall thickness with increasing blood pressure are derived and calibrated using the data. Sensitivity analysis, parameter estimation and subset selection are used to systematically reduce the 16-parameter model to one in which a much smaller subset of identifiable parameters is estimated via solution of an inverse problem. Our final reduced model includes a single set of three elastic moduli. Estimated ranges of these parameters demonstrate that nonlinear stiffening is dominated by elastin in the control animals and by collagen in the hypertensive group. The novel pressure-area relation developed in this study has potential impact on one-dimensional fluids network models of vessel wall remodeling in the presence of cardiovascular disease. |
1402.0042 | Dennis Kostka | Dennis Kostka, Tara Friedrich, Alisha K. Holloway, Katherine S.
Pollard | motifDiverge: a model for assessing the statistical significance of gene
regulatory motif divergence between two DNA sequences | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Next-generation sequencing technology enables the identification of thousands
of gene regulatory sequences in many cell types and organisms. We consider the
problem of testing if two such sequences differ in their number of binding site
motifs for a given transcription factor (TF) protein. Binding site motifs
impart regulatory function by providing TFs the opportunity to bind to genomic
elements and thereby affect the expression of nearby genes. Evolutionary
changes to such functional DNA are hypothesized to be major contributors to
phenotypic diversity within and between species; but despite the importance of
TF motifs for gene expression, no method exists to test for motif loss or gain.
Assuming that motif counts are Binomially distributed, and allowing for
dependencies between motif instances in evolutionarily related sequences, we
derive the probability mass function of the difference in motif counts between
two nucleotide sequences. We provide a method to numerically estimate this
distribution from genomic data and show through simulations that our estimator
is accurate. Finally, we introduce the R package {\tt motifDiverge} that
implements our methodology and illustrate its application to gene regulatory
enhancers identified by a mouse developmental time course experiment. While
this study was motivated by analysis of regulatory motifs, our results can be
applied to any problem involving two correlated Bernoulli trials.
| [
{
"created": "Sat, 1 Feb 2014 02:00:16 GMT",
"version": "v1"
}
] | 2014-02-04 | [
[
"Kostka",
"Dennis",
""
],
[
"Friedrich",
"Tara",
""
],
[
"Holloway",
"Alisha K.",
""
],
[
"Pollard",
"Katherine S.",
""
]
] | Next-generation sequencing technology enables the identification of thousands of gene regulatory sequences in many cell types and organisms. We consider the problem of testing if two such sequences differ in their number of binding site motifs for a given transcription factor (TF) protein. Binding site motifs impart regulatory function by providing TFs the opportunity to bind to genomic elements and thereby affect the expression of nearby genes. Evolutionary changes to such functional DNA are hypothesized to be major contributors to phenotypic diversity within and between species; but despite the importance of TF motifs for gene expression, no method exists to test for motif loss or gain. Assuming that motif counts are Binomially distributed, and allowing for dependencies between motif instances in evolutionarily related sequences, we derive the probability mass function of the difference in motif counts between two nucleotide sequences. We provide a method to numerically estimate this distribution from genomic data and show through simulations that our estimator is accurate. Finally, we introduce the R package {\tt motifDiverge} that implements our methodology and illustrate its application to gene regulatory enhancers identified by a mouse developmental time course experiment. While this study was motivated by analysis of regulatory motifs, our results can be applied to any problem involving two correlated Bernoulli trials. |
2101.08860 | Jingyi Jessica Li | Nan Miles Xi and Jingyi Jessica Li | Protocol for Executing and Benchmarking Eight Computational
Doublet-Detection Methods in Single-Cell RNA Sequencing Data Analysis | null | STAR Protocols 2(3) (2021) 100699 | 10.1016/j.xpro.2021.100699 | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The existence of doublets is a key confounder in single-cell RNA sequencing
(scRNA-seq) data analysis. Computational methods have been developed for
detecting doublets from scRNA-seq data. We developed an R package
DoubletCollection to integrate the installation and execution of eight
doublet-detection methods. DoubletCollection also provides a unified interface
to perform and visualize downstream analysis after doublet detection. Here, we
present a protocol of using DoubletCollection to benchmark doublet-detection
methods. This protocol can automatically accommodate new doublet-detection
methods in the fast-growing scRNA-seq field.
| [
{
"created": "Thu, 21 Jan 2021 21:38:36 GMT",
"version": "v1"
},
{
"created": "Thu, 3 Jun 2021 01:21:49 GMT",
"version": "v2"
},
{
"created": "Fri, 25 Jun 2021 22:30:51 GMT",
"version": "v3"
}
] | 2021-12-01 | [
[
"Xi",
"Nan Miles",
""
],
[
"Li",
"Jingyi Jessica",
""
]
] | The existence of doublets is a key confounder in single-cell RNA sequencing (scRNA-seq) data analysis. Computational methods have been developed for detecting doublets from scRNA-seq data. We developed an R package DoubletCollection to integrate the installation and execution of eight doublet-detection methods. DoubletCollection also provides a unified interface to perform and visualize downstream analysis after doublet detection. Here, we present a protocol of using DoubletCollection to benchmark doublet-detection methods. This protocol can automatically accommodate new doublet-detection methods in the fast-growing scRNA-seq field. |
1703.03441 | Indika Rajapakse | Scott Ronquist, Geoff Patterson, Markus Brown, Haiming Chen, Anthony
Bloch, Lindsey Muir, Roger Brockett, Indika Rajapakse | An Algorithm for Cellular Reprogramming | null | null | 10.1073/pnas.1712350114 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The day we understand the time evolution of subcellular elements at a level
of detail comparable to physical systems governed by Newton's laws of motion
seems far away. Even so, quantitative approaches to cellular dynamics add to
our understanding of cell biology, providing data-guided frameworks that allow
us to develop better predictions about and methods for control over specific
biological processes and system-wide cell behavior. In this paper we describe
an approach to optimizing the use of transcription factors in the context of
cellular reprogramming. We construct an approximate model for the natural
evolution of a synchronized population of fibroblasts, based on data obtained
by sampling the expression of some 22,083 genes at several times along the cell
cycle. (These data are based on a colony of cells that have been cell cycle
synchronized.) In order to arrive at a model of moderate complexity, we cluster
gene expression based on the division of the genome into topologically
associating domains (TADs) and then model the dynamics of the expression levels
of the TADs. Based on this dynamical model and known bioinformatics, we develop
a methodology for identifying the transcription factors that are the most
likely to be effective toward a specific cellular reprogramming task. The
approach used is based on a device commonly used in optimal control. From this
data-guided methodology, we identify a number of validated transcription
factors used in reprogramming and/or natural differentiation. Our findings
highlight the immense potential of dynamical models, mathematics, and
data-guided methodologies for improving methods for control over biological
processes.
| [
{
"created": "Thu, 9 Mar 2017 19:49:58 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Jul 2017 01:24:38 GMT",
"version": "v2"
}
] | 2022-06-08 | [
[
"Ronquist",
"Scott",
""
],
[
"Patterson",
"Geoff",
""
],
[
"Brown",
"Markus",
""
],
[
"Chen",
"Haiming",
""
],
[
"Bloch",
"Anthony",
""
],
[
"Muir",
"Lindsey",
""
],
[
"Brockett",
"Roger",
""
],
[
"Rajapakse",
"Indika",
""
]
] | The day we understand the time evolution of subcellular elements at a level of detail comparable to physical systems governed by Newton's laws of motion seems far away. Even so, quantitative approaches to cellular dynamics add to our understanding of cell biology, providing data-guided frameworks that allow us to develop better predictions about and methods for control over specific biological processes and system-wide cell behavior. In this paper we describe an approach to optimizing the use of transcription factors in the context of cellular reprogramming. We construct an approximate model for the natural evolution of a synchronized population of fibroblasts, based on data obtained by sampling the expression of some 22,083 genes at several times along the cell cycle. (These data are based on a colony of cells that have been cell cycle synchronized.) In order to arrive at a model of moderate complexity, we cluster gene expression based on the division of the genome into topologically associating domains (TADs) and then model the dynamics of the expression levels of the TADs. Based on this dynamical model and known bioinformatics, we develop a methodology for identifying the transcription factors that are the most likely to be effective toward a specific cellular reprogramming task. The approach used is based on a device commonly used in optimal control. From this data-guided methodology, we identify a number of validated transcription factors used in reprogramming and/or natural differentiation. Our findings highlight the immense potential of dynamical models, mathematics, and data-guided methodologies for improving methods for control over biological processes. |
1301.0209 | Fabio Lorenzo Traversa Ph.D. | Fabio Lorenzo Traversa, Yuriy V. Pershin, Massimiliano Di Ventra | Memory models of adaptive behaviour | null | IEEE Transactions on Neural Networks and Learning Systems, vol.
24, is. 9, pg. 1437 - 1448, year 2013 | 10.1109/TNNLS.2013.2261545 | null | q-bio.CB cond-mat.other nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adaptive response to a varying environment is a common feature of biological
organisms. Reproducing such features in electronic systems and circuits is of
great importance for a variety of applications. Here, we consider memory models
inspired by an intriguing ability of slime molds to both memorize the period of
temperature and humidity variations, and anticipate the next variations to
come, when appropriately trained. Effective circuit models of such behavior are
designed using i) a set of LC-contours with memristive damping, and ii) a
single memcapacitive system-based adaptive contour with memristive damping. We
consider these two approaches in detail by comparing their results and
predictions. Finally, possible biological experiments that would discriminate
between the models are discussed. In this work, we also introduce an effective
description of certain memory circuit elements.
| [
{
"created": "Wed, 2 Jan 2013 11:06:36 GMT",
"version": "v1"
},
{
"created": "Thu, 2 May 2013 00:18:00 GMT",
"version": "v2"
}
] | 2013-10-29 | [
[
"Traversa",
"Fabio Lorenzo",
""
],
[
"Pershin",
"Yuriy V.",
""
],
[
"Di Ventra",
"Massimiliano",
""
]
] | Adaptive response to a varying environment is a common feature of biological organisms. Reproducing such features in electronic systems and circuits is of great importance for a variety of applications. Here, we consider memory models inspired by an intriguing ability of slime molds to both memorize the period of temperature and humidity variations, and anticipate the next variations to come, when appropriately trained. Effective circuit models of such behavior are designed using i) a set of LC-contours with memristive damping, and ii) a single memcapacitive system-based adaptive contour with memristive damping. We consider these two approaches in detail by comparing their results and predictions. Finally, possible biological experiments that would discriminate between the models are discussed. In this work, we also introduce an effective description of certain memory circuit elements. |
2312.17513 | Jessica Dubois | Jessica Dubois (CEA, INSERM) | Multi-modal MRI sensitive to age: Focus on early brain development in
infants | null | Handbook of Pediatric Brain Imaging: Methods and Applications,
2021, 978-0128166338 | null | null | q-bio.NC eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Exploring the developing brain is a major issue in understanding what enables
children to acquire amazing abilities, and how early disruptions can lead to a
wide range of neurodevelopmental disorders. MRI plays a key role here by
providing a non-invasive way to link brain and behavioral changes. Several
modalities are used in newborns and infants to characterize the properties of
the developing brain, from growth, morphology to microstructure and functional
specialization. Recent multi-modal studies have sought to couple complementary
MRI markers to provide a more integrated view of brain development. In this
chapter, we describe successively how these approaches have made it possible to
assess the early maturation of brain tissues, to link different aspects of
structural development, and to compare structural and functional brain
development.
| [
{
"created": "Fri, 29 Dec 2023 08:20:48 GMT",
"version": "v1"
}
] | 2024-01-01 | [
[
"Dubois",
"Jessica",
"",
"CEA, INSERM"
]
] | Exploring the developing brain is a major issue in understanding what enables children to acquire amazing abilities, and how early disruptions can lead to a wide range of neurodevelopmental disorders. MRI plays a key role here by providing a non-invasive way to link brain and behavioral changes. Several modalities are used in newborns and infants to characterize the properties of the developing brain, from growth, morphology to microstructure and functional specialization. Recent multi-modal studies have sought to couple complementary MRI markers to provide a more integrated view of brain development. In this chapter, we describe successively how these approaches have made it possible to assess the early maturation of brain tissues, to link different aspects of structural development, and to compare structural and functional brain development. |
2212.08638 | Saskia De Vries | Saskia E. J. de Vries, Joshua H. Siegle, Christof Koch | Sharing Neurophysiology Data from the Allen Brain Observatory: Lessons
Learned | 20 pages, 4 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Making all data for any observation or experiment openly available is a
defining feature of empirical science (e.g., nullius in verba, the motto of the
Royal Society). It enhances transparency, reproducibility, and societal trust.
While embraced in spirit by many, in practice open data sharing remains the
exception in contemporary systems neuroscience. Here, we take stock of the
Allen Brain Observatory, an effort to share data and metadata associated with
surveys of neuronal activity in the visual system of laboratory mice. The data
from these surveys have been used to produce new discoveries, to validate
computational algorithms, and as a benchmark for comparison with other data,
resulting in over 100 publications and preprints to date. We distill some of
the lessons learned about open surveys and data reuse, including remaining
barriers to data sharing and what might be done to address these.
| [
{
"created": "Fri, 16 Dec 2022 18:30:47 GMT",
"version": "v1"
}
] | 2022-12-19 | [
[
"de Vries",
"Saskia E. J.",
""
],
[
"Siegle",
"Joshua H.",
""
],
[
"Koch",
"Christof",
""
]
] | Making all data for any observation or experiment openly available is a defining feature of empirical science (e.g., nullius in verba, the motto of the Royal Society). It enhances transparency, reproducibility, and societal trust. While embraced in spirit by many, in practice open data sharing remains the exception in contemporary systems neuroscience. Here, we take stock of the Allen Brain Observatory, an effort to share data and metadata associated with surveys of neuronal activity in the visual system of laboratory mice. The data from these surveys have been used to produce new discoveries, to validate computational algorithms, and as a benchmark for comparison with other data, resulting in over 100 publications and preprints to date. We distill some of the lessons learned about open surveys and data reuse, including remaining barriers to data sharing and what might be done to address these. |
2104.01457 | Mar\'ia Vallet-Regi | R.Diez-Orejas, L.Casarrubios, M.J. Feito, J.M.Rojo, M.Vallet-Regi,
D.Arcos, M.T.Portoles | Effects of mesoporous SiO2-CaO nanospheres on the murine peritoneal
macrophages -- Candida albicans interface | 35 pages, 6 figures | Int Immunopharmacol International Immunopharmacology. 2021 Mar
20;94:107457 | 10.1016/j.intimp.2021.107457 | null | q-bio.TO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The use of nanoparticles for intracellular drug delivery could reduce the
toxicity and side effects of the drug, but the uptake of these nanocarriers
could induce adverse effects on cells and tissues after their incorporation.
Macrophages play a central role in host defense and are responsible for in vivo
nanoparticle trafficking. Assessment of their defense capacity against
pathogenic micro-organisms after nanoparticle uptake is necessary to prevent
infections associated with nanoparticle therapies. In this study, the effects
of hollow mesoporous SiO2-CaO nanospheres labeled with fluorescein
isothiocyanate (FITC-NanoMBGs) on the function of peritoneal macrophages were
assessed by measuring their ability to phagocytize Candida albicans expressing
a red fluorescent protein. Two macrophage-fungus ratios (MOI 1 and MOI 5) were
used and two experimental strategies were carried out: a) pretreatment of
macrophages with FITC-NanoMBGs and subsequent fungal infection; b) competition
assays after simultaneous addition of fungus and nanospheres. Macrophage
pro-inflammatory phenotype markers (CD80 expression and interleukin 6
secretion) were also evaluated. Significant decreases of CD80+ macrophage
percentage and interleukin 6 secretion were observed after 30 min, indicating
that the simultaneous incorporation of NanoMBG and fungus favors the macrophage
noninflammatory phenotype. The present study evidences that the uptake of these
nanospheres in all the studied conditions does not alter the macrophage
function. Moreover, intracellular FITC-NanoMBGs induce a transitory increase of
the fungal phagocytosis by macrophages at MOI 1 and after a short time of
interaction. In the competition assays, as the intracellular fungus quantity
increased, the intracellular FITC-NanoMBG content decreased in a MOI- and
time-dependent manner...
| [
{
"created": "Sat, 3 Apr 2021 18:08:28 GMT",
"version": "v1"
}
] | 2021-04-06 | [
[
"Diez-Orejas",
"R.",
""
],
[
"Casarrubios",
"L.",
""
],
[
"Feito",
"M. J.",
""
],
[
"Rojo",
"J. M.",
""
],
[
"Vallet-Regi",
"M.",
""
],
[
"Arcos",
"D.",
""
],
[
"Portoles",
"M. T.",
""
]
] | The use of nanoparticles for intracellular drug delivery could reduce the toxicity and side effects of the drug, but the uptake of these nanocarriers could induce adverse effects on cells and tissues after their incorporation. Macrophages play a central role in host defense and are responsible for in vivo nanoparticle trafficking. Assessment of their defense capacity against pathogenic micro-organisms after nanoparticle uptake is necessary to prevent infections associated with nanoparticle therapies. In this study, the effects of hollow mesoporous SiO2-CaO nanospheres labeled with fluorescein isothiocyanate (FITC-NanoMBGs) on the function of peritoneal macrophages were assessed by measuring their ability to phagocytize Candida albicans expressing a red fluorescent protein. Two macrophage-fungus ratios (MOI 1 and MOI 5) were used and two experimental strategies were carried out: a) pretreatment of macrophages with FITC-NanoMBGs and subsequent fungal infection; b) competition assays after simultaneous addition of fungus and nanospheres. Macrophage pro-inflammatory phenotype markers (CD80 expression and interleukin 6 secretion) were also evaluated. Significant decreases of CD80+ macrophage percentage and interleukin 6 secretion were observed after 30 min, indicating that the simultaneous incorporation of NanoMBG and fungus favors the macrophage noninflammatory phenotype. The present study evidences that the uptake of these nanospheres in all the studied conditions does not alter the macrophage function. Moreover, intracellular FITC-NanoMBGs induce a transitory increase of the fungal phagocytosis by macrophages at MOI 1 and after a short time of interaction. In the competition assays, as the intracellular fungus quantity increased, the intracellular FITC-NanoMBG content decreased in a MOI- and time-dependent manner... |
q-bio/0507034 | Leonard M. Sander | Evgeniy Khain and Leonard M. Sander | Dynamics and pattern formation in invasive tumor growth | 4 pages, 4 figures | null | 10.1103/PhysRevLett.96.188103 | null | q-bio.CB | null | In this work, we study the in-vitro dynamics of the most malignant form of
the primary brain tumor: Glioblastoma Multiforme. Typically, the growing tumor
consists of the inner dense proliferating zone and the outer less dense
invasive region. Experiments with different types of cells show qualitatively
different behavior. Wild-type cells invade in a spherically symmetric manner, but
mutant cells are organized in tenuous branches. We formulate a model for this
sort of growth using two coupled reaction-diffusion equations for the cell and
nutrient concentrations. When the ratio of the nutrient and cell diffusion
coefficients exceeds some critical value, the plane propagating front becomes
unstable with respect to transversal perturbations. The instability threshold
and the full phase-plane diagram in the parameter space are determined. The
results are in good agreement with experimental findings for the two types of
cells.
| [
{
"created": "Fri, 22 Jul 2005 16:36:33 GMT",
"version": "v1"
}
] | 2009-11-11 | [
[
"Khain",
"Evgeniy",
""
],
[
"Sander",
"Leonard M.",
""
]
] | In this work, we study the in-vitro dynamics of the most malignant form of the primary brain tumor: Glioblastoma Multiforme. Typically, the growing tumor consists of the inner dense proliferating zone and the outer less dense invasive region. Experiments with different types of cells show qualitatively different behavior. Wild-type cells invade in a spherically symmetric manner, but mutant cells are organized in tenuous branches. We formulate a model for this sort of growth using two coupled reaction-diffusion equations for the cell and nutrient concentrations. When the ratio of the nutrient and cell diffusion coefficients exceeds some critical value, the plane propagating front becomes unstable with respect to transversal perturbations. The instability threshold and the full phase-plane diagram in the parameter space are determined. The results are in good agreement with experimental findings for the two types of cells. |
1505.06440 | Kimberly Glass | Marieke Lydia Kuijjer, Matthew Tung, GuoCheng Yuan, John Quackenbush,
Kimberly Glass | Estimating sample-specific regulatory networks | null | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Biological systems are driven by intricate interactions among the complex
array of molecules that comprise the cell. Many methods have been developed to
reconstruct network models of those interactions. These methods often draw on
large numbers of samples with measured gene expression profiles to infer
connections between genes (or gene products). The result is an aggregate
network model representing a single estimate for the likelihood of each
interaction, or "edge," in the network. While informative, aggregate models
fail to capture the heterogeneity that is represented in any population. Here
we propose a method to reverse engineer sample-specific networks from aggregate
network models. We demonstrate the accuracy and applicability of our approach
in several data sets, including simulated data, microarray expression data from
synchronized yeast cells, and RNA-seq data collected from human lymphoblastoid
cell lines. We show that these sample-specific networks can be used to study
changes in network topology across time and to characterize shifts in gene
regulation that may not be apparent in expression data. We believe the ability
to generate sample-specific networks will greatly facilitate the application of
network methods to the increasingly large, complex, and heterogeneous
multi-omic data sets that are currently being generated, and ultimately support
the emerging field of precision network medicine.
| [
{
"created": "Sun, 24 May 2015 13:45:51 GMT",
"version": "v1"
},
{
"created": "Thu, 28 Jun 2018 15:24:31 GMT",
"version": "v2"
}
] | 2018-06-29 | [
[
"Kuijjer",
"Marieke Lydia",
""
],
[
"Tung",
"Matthew",
""
],
[
"Yuan",
"GuoCheng",
""
],
[
"Quackenbush",
"John",
""
],
[
"Glass",
"Kimberly",
""
]
] | Biological systems are driven by intricate interactions among the complex array of molecules that comprise the cell. Many methods have been developed to reconstruct network models of those interactions. These methods often draw on large numbers of samples with measured gene expression profiles to infer connections between genes (or gene products). The result is an aggregate network model representing a single estimate for the likelihood of each interaction, or "edge," in the network. While informative, aggregate models fail to capture the heterogeneity that is represented in any population. Here we propose a method to reverse engineer sample-specific networks from aggregate network models. We demonstrate the accuracy and applicability of our approach in several data sets, including simulated data, microarray expression data from synchronized yeast cells, and RNA-seq data collected from human lymphoblastoid cell lines. We show that these sample-specific networks can be used to study changes in network topology across time and to characterize shifts in gene regulation that may not be apparent in expression data. We believe the ability to generate sample-specific networks will greatly facilitate the application of network methods to the increasingly large, complex, and heterogeneous multi-omic data sets that are currently being generated, and ultimately support the emerging field of precision network medicine. |
1610.02395 | Salvador Malo | Salvador Malo | Causal-order superposition as an enabler of free will | 6 pages | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is often argued that bottom-up causation under a physicalist, reductionist
worldview precludes free will in the libertarian sense. On the one hand, the
paradigm of classical mechanics makes determinism inescapable, while on the
other, the leading models that allow a role for quantum effects are
noncommittal regarding how conscious agents are supposed to translate
indeterminacy into self-formed choice. Recent developments, however, not only
imply that self-formed decisions are possible, but actually suggest how they
might come about. The cornerstone appears to be causality superposition rather
than quantum-state entanglement, as is usually assumed, and the natural arena
for applying these developments is (perhaps ironically) a framework that was
built without any consideration for quantum effects.
| [
{
"created": "Sat, 8 Oct 2016 00:39:25 GMT",
"version": "v1"
}
] | 2016-10-11 | [
[
"Malo",
"Salvador",
""
]
] | It is often argued that bottom-up causation under a physicalist, reductionist worldview precludes free will in the libertarian sense. On the one hand, the paradigm of classical mechanics makes determinism inescapable, while on the other, the leading models that allow a role for quantum effects are noncommittal regarding how conscious agents are supposed to translate indeterminacy into self-formed choice. Recent developments, however, not only imply that self-formed decisions are possible, but actually suggest how they might come about. The cornerstone appears to be causality superposition rather than quantum-state entanglement, as is usually assumed, and the natural arena for applying these developments is (perhaps ironically) a framework that was built without any consideration for quantum effects. |
2002.00200 | Bob Eisenberg | Robert S. Eisenberg | Energetic Controls are Essential: New and Notable Note for Biophysical
Journal | Biophysical Journal (2020) | null | 10.1016/j.bpj.2020.01.029 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mikhail Shapiro's lab investigated a promising new method for non-invasive
control of calcium currents in individual cells in the nervous system by the
selective heating of nanoparticles and showed that simple physical laws, properly
applied, explain what is happening, and so can be a foundation for constructing
improved methods and techniques. They use custom-built instrumentation and
detailed quantitative measurement to show that special surface properties are
not needed to explain their results. Their work is taken as an example of the
need for quantitative measurement in biophysics and biology in general.
Classical examples are cited and the argument is made that the success of
structural and molecular biology has hidden the need for quantitative
measurements and controls. Semiconductor and computational electronics is cited
as a science even more successful than structural and molecular biology that
depends on quantitative measurement, controls, and analysis.
| [
{
"created": "Sat, 1 Feb 2020 12:41:31 GMT",
"version": "v1"
}
] | 2020-06-16 | [
[
"Eisenberg",
"Robert S.",
""
]
] | Mikhail Shapiro's lab investigated a promising new method for non-invasive control of calcium currents in individual cells in the nervous system by the selective heating of nanoparticles and showed that simple physical laws, properly applied, explain what is happening, and so can be a foundation for constructing improved methods and techniques. They use custom-built instrumentation and detailed quantitative measurement to show that special surface properties are not needed to explain their results. Their work is taken as an example of the need for quantitative measurement in biophysics and biology in general. Classical examples are cited and the argument is made that the success of structural and molecular biology has hidden the need for quantitative measurements and controls. Semiconductor and computational electronics is cited as a science even more successful than structural and molecular biology that depends on quantitative measurement, controls, and analysis. |
2407.09100 | Polina Turishcheva | Polina Turishcheva, Paul G. Fahey, Michaela Vystr\v{c}ilov\'a, Laura
Hansel, Rachel Froebe, Kayla Ponder, Yongrong Qiu, Konstantin F. Willeke,
Mohammad Bashiri, Ruslan Baikulov, Yu Zhu, Lei Ma, Shan Yu, Tiejun Huang,
Bryan M. Li, Wolf De Wulf, Nina Kudryashova, Matthias H. Hennig, Nathalie L.
Rochefort, Arno Onken, Eric Wang, Zhiwei Ding, Andreas S. Tolias, Fabian H.
Sinz, Alexander S Ecker | Retrospective for the Dynamic Sensorium Competition for predicting
large-scale mouse primary visual cortex activity from videos | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Understanding how biological visual systems process information is
challenging because of the nonlinear relationship between visual input and
neuronal responses. Artificial neural networks allow computational
neuroscientists to create predictive models that connect biological and machine
vision. Machine learning has benefited tremendously from benchmarks that
compare different models on the same task under standardized conditions.
However, there was no standardized benchmark to identify state-of-the-art
dynamic models of the mouse visual system. To address this gap, we established
the Sensorium 2023 Benchmark Competition with dynamic input, featuring a new
large-scale dataset from the primary visual cortex of ten mice. This dataset
includes responses from 78,853 neurons to 2 hours of dynamic stimuli per
neuron, together with behavioral measurements such as running speed, pupil
dilation, and eye movements. The competition ranked models in two tracks based
on predictive performance for neuronal responses on a held-out test set: one
focusing on predicting in-domain natural stimuli and another on
out-of-distribution (OOD) stimuli to assess model generalization. As part of
the NeurIPS 2023 competition track, we received more than 160 model submissions
from 22 teams. Several new architectures for predictive models were proposed,
and the winning teams improved the previous state-of-the-art model by 50%.
Access to the dataset as well as the benchmarking infrastructure will remain
online at www.sensorium-competition.net.
| [
{
"created": "Fri, 12 Jul 2024 09:02:10 GMT",
"version": "v1"
}
] | 2024-07-15 | [
[
"Turishcheva",
"Polina",
""
],
[
"Fahey",
"Paul G.",
""
],
[
"Vystrčilová",
"Michaela",
""
],
[
"Hansel",
"Laura",
""
],
[
"Froebe",
"Rachel",
""
],
[
"Ponder",
"Kayla",
""
],
[
"Qiu",
"Yongrong",
""
],
[
"Willeke",
"Konstantin F.",
""
],
[
"Bashiri",
"Mohammad",
""
],
[
"Baikulov",
"Ruslan",
""
],
[
"Zhu",
"Yu",
""
],
[
"Ma",
"Lei",
""
],
[
"Yu",
"Shan",
""
],
[
"Huang",
"Tiejun",
""
],
[
"Li",
"Bryan M.",
""
],
[
"De Wulf",
"Wolf",
""
],
[
"Kudryashova",
"Nina",
""
],
[
"Hennig",
"Matthias H.",
""
],
[
"Rochefort",
"Nathalie L.",
""
],
[
"Onken",
"Arno",
""
],
[
"Wang",
"Eric",
""
],
[
"Ding",
"Zhiwei",
""
],
[
"Tolias",
"Andreas S.",
""
],
[
"Sinz",
"Fabian H.",
""
],
[
"Ecker",
"Alexander S",
""
]
] | Understanding how biological visual systems process information is challenging because of the nonlinear relationship between visual input and neuronal responses. Artificial neural networks allow computational neuroscientists to create predictive models that connect biological and machine vision. Machine learning has benefited tremendously from benchmarks that compare different models on the same task under standardized conditions. However, there was no standardized benchmark to identify state-of-the-art dynamic models of the mouse visual system. To address this gap, we established the Sensorium 2023 Benchmark Competition with dynamic input, featuring a new large-scale dataset from the primary visual cortex of ten mice. This dataset includes responses from 78,853 neurons to 2 hours of dynamic stimuli per neuron, together with behavioral measurements such as running speed, pupil dilation, and eye movements. The competition ranked models in two tracks based on predictive performance for neuronal responses on a held-out test set: one focusing on predicting in-domain natural stimuli and another on out-of-distribution (OOD) stimuli to assess model generalization. As part of the NeurIPS 2023 competition track, we received more than 160 model submissions from 22 teams. Several new architectures for predictive models were proposed, and the winning teams improved the previous state-of-the-art model by 50%. Access to the dataset as well as the benchmarking infrastructure will remain online at www.sensorium-competition.net. |
2007.05228 | Vo Hong Thanh | Vo Hong Thanh, Dani Korpela and Pekka Orponen | Cotranscriptional kinetic folding of RNA secondary structures including
pseudoknots | 20 pages, 15 figures | Journal of Computational Biology, 2021 | 10.1089/cmb.2020.0606 | null | q-bio.BM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Computational prediction of RNA structures is an important problem in
computational structural biology. Studies of RNA structure formation often
assume that the process starts from a fully synthesized sequence. Experimental
evidence, however, has shown that RNA folds concurrently with its elongation.
We investigate RNA secondary structure formation, including pseudoknots, that
takes into account the cotranscriptional effects. We propose a
single-nucleotide resolution kinetic model of the folding process of RNA
molecules, where the polymerase-driven elongation of an RNA strand by a new
nucleotide is included as a primitive operation, together with a stochastic
simulation method that implements this folding concurrently with the
transcriptional synthesis. Numerical case studies show that our
cotranscriptional RNA folding model can predict the formation of conformations
that are favored in actual biological systems. Our new computational tool can
thus provide quantitative predictions and offer useful insights into the
kinetics of RNA folding.
| [
{
"created": "Fri, 10 Jul 2020 08:06:15 GMT",
"version": "v1"
},
{
"created": "Tue, 1 Dec 2020 10:28:31 GMT",
"version": "v2"
},
{
"created": "Tue, 8 Dec 2020 15:17:37 GMT",
"version": "v3"
},
{
"created": "Sun, 21 Feb 2021 12:53:10 GMT",
"version": "v4"
},
{
"created": "Wed, 17 Mar 2021 11:30:47 GMT",
"version": "v5"
}
] | 2021-04-28 | [
[
"Thanh",
"Vo Hong",
""
],
[
"Korpela",
"Dani",
""
],
[
"Orponen",
"Pekka",
""
]
] | Computational prediction of RNA structures is an important problem in computational structural biology. Studies of RNA structure formation often assume that the process starts from a fully synthesized sequence. Experimental evidence, however, has shown that RNA folds concurrently with its elongation. We investigate RNA secondary structure formation, including pseudoknots, that takes into account the cotranscriptional effects. We propose a single-nucleotide resolution kinetic model of the folding process of RNA molecules, where the polymerase-driven elongation of an RNA strand by a new nucleotide is included as a primitive operation, together with a stochastic simulation method that implements this folding concurrently with the transcriptional synthesis. Numerical case studies show that our cotranscriptional RNA folding model can predict the formation of conformations that are favored in actual biological systems. Our new computational tool can thus provide quantitative predictions and offer useful insights into the kinetics of RNA folding. |
2208.04186 | Maciej Roso{\l} | Maciej Roso{\l}, Jakub S. G\k{a}sior, Iwona Walecka, Bo\.zena Werner,
Gerard Cybulski, Marcel M{\l}y\'nczak | Causality in cardiorespiratory signals in pediatric cardiac patients | Accepted for the 44th International Engineering in Medicine and
Biology Conference, EMBC 2022, organized by IEEE Engineering in Medicine and
Biology Society | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Four different Granger causality-based methods - one linear and three
nonlinear (Granger Causality, Kernel Granger Causality, large-scale Nonlinear
Granger Causality, and Neural Network Granger Causality) were used for
assessment and causal-based quantification of the respiratory sinus arrhythmia
(RSA) in the group of pediatric cardiac patients, based on the single-lead ECG
and impedance pneumography signals (the latter as the tidal volume curve
equivalent). Each method was able to detect the dependency (in terms of causal
inference) between respiratory and cardiac signals. The correlations between
quantified RSA and the demographic parameters were also studied, but the
results differ for each method.
| [
{
"created": "Fri, 5 Aug 2022 09:24:46 GMT",
"version": "v1"
}
] | 2022-08-09 | [
[
"Rosoł",
"Maciej",
""
],
[
"Gąsior",
"Jakub S.",
""
],
[
"Walecka",
"Iwona",
""
],
[
"Werner",
"Bożena",
""
],
[
"Cybulski",
"Gerard",
""
],
[
"Młyńczak",
"Marcel",
""
]
] | Four different Granger causality-based methods - one linear and three nonlinear (Granger Causality, Kernel Granger Causality, large-scale Nonlinear Granger Causality, and Neural Network Granger Causality) were used for assessment and causal-based quantification of the respiratory sinus arrhythmia (RSA) in the group of pediatric cardiac patients, based on the single-lead ECG and impedance pneumography signals (the latter as the tidal volume curve equivalent). Each method was able to detect the dependency (in terms of causal inference) between respiratory and cardiac signals. The correlations between quantified RSA and the demographic parameters were also studied, but the results differ for each method. |
2004.13017 | Karl Friston | Karl J. Friston, Thomas Parr, Peter Zeidman, Adeel Razi, Guillaume
Flandin, Jean Daunizeau, Oliver J. Hulme, Alexander J. Billig, Vladimir
Litvak, Cathy J. Price, Rosalyn J. Moran and Christian Lambert | Second waves, social distancing, and the spread of COVID-19 across
America | Technical report: 35 pages, 14 figures, 1 table | null | null | null | q-bio.PE q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | We recently described a dynamic causal model of a COVID-19 outbreak within a
single region. Here, we combine several of these (epidemic) models to create a
(pandemic) model of viral spread among regions. Our focus is on a second wave
of new cases that may result from loss of immunity--and the exchange of people
between regions--and how mortality rates can be ameliorated under different
strategic responses. In particular, we consider hard or soft social distancing
strategies predicated on national (Federal) or regional (State) estimates of
the prevalence of infection in the population. The modelling is demonstrated
using timeseries of new cases and deaths from the United States to estimate the
parameters of a factorial (compartmental) epidemiological model of each State
and, crucially, coupling between States. Using Bayesian model reduction, we
identify the effective connectivity between States that best explains the
initial phases of the outbreak in the United States. Using the ensuing
posterior parameter estimates, we then evaluate the likely outcomes of
different policies in terms of mortality, working days lost due to lockdown and
demands upon critical care. The provisional results of this modelling suggest
that social distancing and loss of immunity are the two key factors that
underwrite a return to endemic equilibrium.
| [
{
"created": "Sun, 26 Apr 2020 14:38:29 GMT",
"version": "v1"
}
] | 2020-04-29 | [
[
"Friston",
"Karl J.",
""
],
[
"Parr",
"Thomas",
""
],
[
"Zeidman",
"Peter",
""
],
[
"Razi",
"Adeel",
""
],
[
"Flandin",
"Guillaume",
""
],
[
"Daunizeau",
"Jean",
""
],
[
"Hulme",
"Oliver J.",
""
],
[
"Billig",
"Alexander J.",
""
],
[
"Litvak",
"Vladimir",
""
],
[
"Price",
"Cathy J.",
""
],
[
"Moran",
"Rosalyn J.",
""
],
[
"Lambert",
"Christian",
""
]
] | We recently described a dynamic causal model of a COVID-19 outbreak within a single region. Here, we combine several of these (epidemic) models to create a (pandemic) model of viral spread among regions. Our focus is on a second wave of new cases that may result from loss of immunity--and the exchange of people between regions--and how mortality rates can be ameliorated under different strategic responses. In particular, we consider hard or soft social distancing strategies predicated on national (Federal) or regional (State) estimates of the prevalence of infection in the population. The modelling is demonstrated using timeseries of new cases and deaths from the United States to estimate the parameters of a factorial (compartmental) epidemiological model of each State and, crucially, coupling between States. Using Bayesian model reduction, we identify the effective connectivity between States that best explains the initial phases of the outbreak in the United States. Using the ensuing posterior parameter estimates, we then evaluate the likely outcomes of different policies in terms of mortality, working days lost due to lockdown and demands upon critical care. The provisional results of this modelling suggest that social distancing and loss of immunity are the two key factors that underwrite a return to endemic equilibrium. |
2401.15727 | Francisco Torres-Ruiz | Antonio Di Crescenzo, Paola Paraggio, Patricia Rom\'an-Rom\'an,
Francisco Torres-Ruiz | Applications of the multi-sigmoidal deterministic and stochastic
logistic models for plant dynamics | 28 pages, 30 figures | Applied Mathematical Modelling, 92, 2021,884-904 | 10.1016/j.apm.2020.11.046 | null | q-bio.PE math.PR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We consider a generalization of the classical logistic growth model
introducing more than one inflection point. The growth, called multi-sigmoidal,
is firstly analyzed from a deterministic point of view in order to obtain the
main properties of the curve, such as the limit behavior, the inflection points
and the threshold-crossing-time through a fixed boundary. We also present an
application in population dynamics of plants based on real data. Then, we
define two different birth-death processes, one with linear birth and death
rates and the other with quadratic rates, and we analyze their main features.
The conditions under which the processes have a mean of multi-sigmoidal
logistic type and the first-passage-time problem are also discussed. Finally,
with the aim of obtaining a more manageable stochastic description of the
growth, we perform a scaling procedure leading to a lognormal diffusion process
with mean of multi-sigmoidal logistic type. We finally conduct a detailed
probabilistic analysis of this process
| [
{
"created": "Sun, 28 Jan 2024 18:43:50 GMT",
"version": "v1"
}
] | 2024-01-31 | [
[
"Di Crescenzo",
"Antonio",
""
],
[
"Paraggio",
"Paola",
""
],
[
"Román-Román",
"Patricia",
""
],
[
"Torres-Ruiz",
"Francisco",
""
]
] | We consider a generalization of the classical logistic growth model introducing more than one inflection point. The growth, called multi-sigmoidal, is firstly analyzed from a deterministic point of view in order to obtain the main properties of the curve, such as the limit behavior, the inflection points and the threshold-crossing-time through a fixed boundary. We also present an application in population dynamics of plants based on real data. Then, we define two different birth-death processes, one with linear birth and death rates and the other with quadratic rates, and we analyze their main features. The conditions under which the processes have a mean of multi-sigmoidal logistic type and the first-passage-time problem are also discussed. Finally, with the aim of obtaining a more manageable stochastic description of the growth, we perform a scaling procedure leading to a lognormal diffusion process with mean of multi-sigmoidal logistic type. We finally conduct a detailed probabilistic analysis of this process |
2206.04915 | \'Eric Herbert | Clara Ledoux, Florence Chapeland-Leclerc, Gwena\"el Ruprich-Robert,
C\'ecilia Bob\'ee, Christophe Lalanne, \'Eric Herbert, Pascal David | Prediction and experimental evidence of the optimisation of the angular
branching process in the thallus growth of Podospora anserina | Submitted to Scientific Report | null | 10.1038/s41598-022-16245-9 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Based upon apical growth and hyphal branching, the two main processes that
drive the growth pattern of a fungal network, we propose here a two-dimensional
simulation based on a binary-tree modelling allowing us to extract the main
characteristics of a generic thallus growth. In particular, we showed that, in
a homogeneous environment, the fungal growth can be optimized for exploration
and exploitation of its surroundings with a specific angular distribution of
apical branching. Two complementary methods of extracting angle values have
been used to confront the result of the simulation with experimental data
obtained from the thallus growth of the saprophytic filamentous fungus
Podospora anserina. Finally, we propose here a validated model that, while
being computationally low-cost, is powerful enough to test quickly multiple
conditions and constraints. It will allow in future works to deepen the
characterization of the growth dynamic of fungal network, in addition to
laboratory experiments, that could be sometimes expensive, tedious or of
limited scope.
| [
{
"created": "Fri, 10 Jun 2022 07:29:20 GMT",
"version": "v1"
}
] | 2022-07-21 | [
[
"Ledoux",
"Clara",
""
],
[
"Chapeland-Leclerc",
"Florence",
""
],
[
"Ruprich-Robert",
"Gwenaël",
""
],
[
"Bobée",
"Cécilia",
""
],
[
"Lalanne",
"Christophe",
""
],
[
"Herbert",
"Éric",
""
],
[
"David",
"Pascal",
""
]
] | Based upon apical growth and hyphal branching, the two main processes that drive the growth pattern of a fungal network, we propose here a two-dimensional simulation based on a binary-tree modelling allowing us to extract the main characteristics of a generic thallus growth. In particular, we showed that, in a homogeneous environment, the fungal growth can be optimized for exploration and exploitation of its surroundings with a specific angular distribution of apical branching. Two complementary methods of extracting angle values have been used to confront the result of the simulation with experimental data obtained from the thallus growth of the saprophytic filamentous fungus Podospora anserina. Finally, we propose here a validated model that, while being computationally low-cost, is powerful enough to test quickly multiple conditions and constraints. It will allow in future works to deepen the characterization of the growth dynamic of fungal network, in addition to laboratory experiments, that could be sometimes expensive, tedious or of limited scope. |
1701.02988 | Dmitri Parkhomchuk | Alexey A. Shadrin and Dmitri V. Parkhomchuk | Darwinian Genetic Drift | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetic drift is stochastic fluctuations of allele frequencies in a
population due to sampling effects. We consider a model of drift in an
equilibrium population, with high mutation rates: few functional mutations per
generation. Such mutation rates are common in multicellular organisms including
humans, however they are not explicitly considered in most population genetics
models. Under these assumptions the drift shows properties distinct from the
classical drift models, which ignore realistic mutation rates: i) All
(non-lethal) variants of a site have characteristic average frequencies,
which are independent of population size, however the magnitude of fluctuations
around these frequencies depends on population size. ii) There is no
"mutational meltdown" due to "low efficiency of selection" for small population
size. Population average fitness does not depend on population size. iii) Drift
(and molecular clock) can be represented as wandering by compensatory
mutations, postulate of neutral mutations is not necessary for explaining the
high rate of mutation accumulation. Our results, which adjust the meaning of
the neutral theory from the individual neutrality of the majority of mutations,
to the collective neutrality of compensatory mutations, are applicable to
investigations in phylogeny and coalescent and for GWAS design and analysis.
| [
{
"created": "Wed, 11 Jan 2017 14:36:41 GMT",
"version": "v1"
}
] | 2017-01-12 | [
[
"Shadrin",
"Alexey A.",
""
],
[
"Parkhomchuk",
"Dmitri V.",
""
]
] | Genetic drift is stochastic fluctuations of allele frequencies in a population due to sampling effects. We consider a model of drift in an equilibrium population, with high mutation rates: few functional mutations per generation. Such mutation rates are common in multicellular organisms including humans, however they are not explicitly considered in most population genetics models. Under these assumptions the drift shows properties distinct from the classical drift models, which ignore realistic mutation rates: i) All (non-lethal) variants of a site have characteristic average frequencies, which are independent of population size, however the magnitude of fluctuations around these frequencies depends on population size. ii) There is no "mutational meltdown" due to "low efficiency of selection" for small population size. Population average fitness does not depend on population size. iii) Drift (and molecular clock) can be represented as wandering by compensatory mutations, postulate of neutral mutations is not necessary for explaining the high rate of mutation accumulation. Our results, which adjust the meaning of the neutral theory from the individual neutrality of the majority of mutations, to the collective neutrality of compensatory mutations, are applicable to investigations in phylogeny and coalescent and for GWAS design and analysis. |
1304.5947 | Carsten Baldauf | Carsten Baldauf, Kevin Pagel, Stephan Warnke, Gert von Helden, Beate
Koksch, Volker Blum, Matthias Scheffler | How Cations Change Peptide Structure | 30 pages, 7 figures | Chem. Eur. J. 2013 (19) 11224-11234 | 10.1002/chem.201204554 | null | q-bio.BM physics.bio-ph physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Specific interactions between cations and proteins have a strong impact on
peptide and protein structure. We here shed light on the nature of the
underlying interactions, especially regarding the effects on the polyamide
backbone structure. To do so, we compare the conformational ensembles of model
peptides in isolation and in the presence of either Li+ or Na+ cations by
state-of-the-art density-functional theory (including van der Waals effects)
and gas-phase infrared spectroscopy. These monovalent cations have a drastic
effect on the local backbone conformation of turn-forming peptides, by
disruption of the H bonding networks and the resulting severe distortion of the
backbone conformations. In fact, Li+ and Na+ can even have different
conformational effects on the same peptide. We also assess the predictive power
of current approximate density functionals for peptide-cation systems and
compare to results from established protein force fields as well as to
high-level quantum chemistry (CCSD(T)).
| [
{
"created": "Mon, 22 Apr 2013 13:35:15 GMT",
"version": "v1"
},
{
"created": "Mon, 29 Apr 2013 11:21:38 GMT",
"version": "v2"
}
] | 2013-08-27 | [
[
"Baldauf",
"Carsten",
""
],
[
"Pagel",
"Kevin",
""
],
[
"Warnke",
"Stephan",
""
],
[
"von Helden",
"Gert",
""
],
[
"Koksch",
"Beate",
""
],
[
"Blum",
"Volker",
""
],
[
"Scheffler",
"Matthias",
""
]
] | Specific interactions between cations and proteins have a strong impact on peptide and protein structure. We here shed light on the nature of the underlying interactions, especially regarding the effects on the polyamide backbone structure. To do so, we compare the conformational ensembles of model peptides in isolation and in the presence of either Li+ or Na+ cations by state-of-the-art density-functional theory (including van der Waals effects) and gas-phase infrared spectroscopy. These monovalent cations have a drastic effect on the local backbone conformation of turn-forming peptides, by disruption of the H bonding networks and the resulting severe distortion of the backbone conformations. In fact, Li+ and Na+ can even have different conformational effects on the same peptide. We also assess the predictive power of current approximate density functionals for peptide-cation systems and compare to results from established protein force fields as well as to high-level quantum chemistry (CCSD(T)). |
1501.06947 | Vahid Salari | Vahid Salari, Felix Scholkmann, Farhad Shahbazi, Istvan Bokkon, Jack
Tuszynski | Comment on Activation of Visual Pigments by Light and Heat (Science 332,
1307-1312, 2011) | 7 pages, 3 figures | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | It is known that the Arrhenius equation, based on the Boltzmann distribution,
can model only a part (e.g. half of the activation energy) for retinal discrete
dark noise observed for vertebrate rod and cone pigments. Luo et al (Science,
332, 1307-1312, 2011) presented a new approach to explain this discrepancy by
showing that applying the Hinshelwood distribution instead of the Boltzmann
distribution in the Arrhenius equation solves the problem successfully.
However, a careful reanalysis of the methodology and results shows that the
approach of Luo et al is questionable and the results found do not solve the
problem completely.
| [
{
"created": "Tue, 27 Jan 2015 22:12:19 GMT",
"version": "v1"
},
{
"created": "Thu, 14 May 2015 21:57:52 GMT",
"version": "v2"
}
] | 2015-05-18 | [
[
"Salari",
"Vahid",
""
],
[
"Scholkmann",
"Felix",
""
],
[
"Shahbazi",
"Farhad",
""
],
[
"Bokkon",
"Istvan",
""
],
[
"Tuszynski",
"Jack",
""
]
] | It is known that the Arrhenius equation, based on the Boltzmann distribution, can model only a part (e.g. half of the activation energy) for retinal discrete dark noise observed for vertebrate rod and cone pigments. Luo et al (Science, 332, 1307-1312, 2011) presented a new approach to explain this discrepancy by showing that applying the Hinshelwood distribution instead of the Boltzmann distribution in the Arrhenius equation solves the problem successfully. However, a careful reanalysis of the methodology and results shows that the approach of Luo et al is questionable and the results found do not solve the problem completely. |
2110.15066 | Vitaly Vanchurin | Vitaly Vanchurin, Yuri I. Wolf, Eugene V. Koonin, Mikhail I.
Katsnelson | Thermodynamics of Evolution and the Origin of Life | 23 pages | null | 10.1073/pnas.2120042119 | null | q-bio.PE cond-mat.dis-nn cond-mat.stat-mech cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We outline a phenomenological theory of evolution and origin of life by
combining the formalism of classical thermodynamics with a statistical
description of learning. The maximum entropy principle constrained by the
requirement for minimization of the loss function is employed to derive a
canonical ensemble of organisms (population), the corresponding partition
function (macroscopic counterpart of fitness) and free energy (macroscopic
counterpart of additive fitness). We further define the biological counterparts
of temperature (biological temperature) as the measure of stochasticity of the
evolutionary process and of chemical potential (evolutionary potential) as the
amount of evolutionary work required to add a new trainable variable (such as
an additional gene) to the evolving system. We then develop a phenomenological
approach to the description of evolution, which involves modeling the grand
potential as a function of the biological temperature and evolutionary
potential. We demonstrate how this phenomenological approach can be used to
study the "ideal mutation" model of evolution and its generalizations. Finally,
we show that, within this thermodynamics framework, major transitions in
evolution, such as the transition from an ensemble of molecules to an ensemble
of organisms, that is, the origin of life, can be modeled as a special case of
bona fide physical phase transitions that are associated with the emergence of
a new type of grand canonical ensemble and the corresponding new level of
description
| [
{
"created": "Thu, 28 Oct 2021 12:27:33 GMT",
"version": "v1"
}
] | 2022-10-12 | [
[
"Vanchurin",
"Vitaly",
""
],
[
"Wolf",
"Yuri I.",
""
],
[
"Koonin",
"Eugene V.",
""
],
[
"Katsnelson",
"Mikhail I.",
""
]
] | We outline a phenomenological theory of evolution and origin of life by combining the formalism of classical thermodynamics with a statistical description of learning. The maximum entropy principle constrained by the requirement for minimization of the loss function is employed to derive a canonical ensemble of organisms (population), the corresponding partition function (macroscopic counterpart of fitness) and free energy (macroscopic counterpart of additive fitness). We further define the biological counterparts of temperature (biological temperature) as the measure of stochasticity of the evolutionary process and of chemical potential (evolutionary potential) as the amount of evolutionary work required to add a new trainable variable (such as an additional gene) to the evolving system. We then develop a phenomenological approach to the description of evolution, which involves modeling the grand potential as a function of the biological temperature and evolutionary potential. We demonstrate how this phenomenological approach can be used to study the "ideal mutation" model of evolution and its generalizations. Finally, we show that, within this thermodynamics framework, major transitions in evolution, such as the transition from an ensemble of molecules to an ensemble of organisms, that is, the origin of life, can be modeled as a special case of bona fide physical phase transitions that are associated with the emergence of a new type of grand canonical ensemble and the corresponding new level of description |
1712.05182 | Giuditta Franco | Giuditta Franco, Francesco Bellamoli, and Silvia Lampis | Experimental Analysis of XPCR-based protocols | 14 pages, 10 figures, experimental results, not yet published | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper reports some experimental results validating in a broader context
a variant of PCR, called XPCR, previously introduced and tested on relatively
short synthetic DNA sequences. The basic XPCR technique was confirmed to work
as expected, concatenating two genes of different lengths, while a library of
all permutations of three different genes (extracted from the bacterial strain
Burkholderia fungorum DBT1) was realized in one step by multiple XPCR. Limits
and potentialities of the protocols are discussed and tested under several
experimental conditions, showing along the way that overlap concatenation of
multiple copies of a single gene is not achievable by these procedures, due to
strand displacement phenomena. In this case, in fact, a single copy of the gene
is obtained as the sole amplification product.
| [
{
"created": "Thu, 14 Dec 2017 11:40:19 GMT",
"version": "v1"
},
{
"created": "Fri, 15 Dec 2017 14:58:32 GMT",
"version": "v2"
}
] | 2017-12-18 | [
[
"Franco",
"Giuditta",
""
],
[
"Bellamoli",
"Francesco",
""
],
[
"Lampis",
"Silvia",
""
]
] | This paper reports some experimental results validating in a broader context a variant of PCR, called XPCR, previously introduced and tested on relatively short synthetic DNA sequences. The basic XPCR technique was confirmed to work as expected, concatenating two genes of different lengths, while a library of all permutations of three different genes (extracted from the bacterial strain Burkholderia fungorum DBT1) was realized in one step by multiple XPCR. Limits and potentialities of the protocols are discussed and tested under several experimental conditions, showing along the way that overlap concatenation of multiple copies of a single gene is not achievable by these procedures, due to strand displacement phenomena. In this case, in fact, a single copy of the gene is obtained as the sole amplification product. |
1605.01421 | Stefan Widgren | Stefan Widgren, Pavol Bauer, Robin Eriksson, Stefan Engblom | SimInf: An R package for Data-driven Stochastic Disease Spread
Simulations | The manual has been updated to the latest version of SimInf (v6.0.0).
41 pages, 16 figures | J. Stat. Softw., 91(12):1--42 (2019) | 10.18637/jss.v091.i12 | null | q-bio.PE stat.AP stat.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the R package SimInf which provides an efficient and very flexible
framework to conduct data-driven epidemiological modeling in realistic large
scale disease spread simulations. The framework integrates infection dynamics
in subpopulations as continuous-time Markov chains using the Gillespie
stochastic simulation algorithm and incorporates available data such as births,
deaths and movements as scheduled events at predefined time-points. Using C
code for the numerical solvers and OpenMP to divide work over multiple
processors ensures high performance when simulating a sample outcome. One of
our design goals was to make SimInf extendable and to enable use of the numerical
solvers from other R extension packages in order to facilitate complex
epidemiological research. In this paper, we provide a technical description of
the framework and demonstrate its use on some basic examples. We also discuss
how to specify and extend the framework with user-defined models.
| [
{
"created": "Wed, 4 May 2016 20:16:20 GMT",
"version": "v1"
},
{
"created": "Wed, 14 Jun 2017 19:04:00 GMT",
"version": "v2"
},
{
"created": "Thu, 3 May 2018 16:56:46 GMT",
"version": "v3"
}
] | 2021-08-10 | [
[
"Widgren",
"Stefan",
""
],
[
"Bauer",
"Pavol",
""
],
[
"Eriksson",
"Robin",
""
],
[
"Engblom",
"Stefan",
""
]
] | We present the R package SimInf which provides an efficient and very flexible framework to conduct data-driven epidemiological modeling in realistic large scale disease spread simulations. The framework integrates infection dynamics in subpopulations as continuous-time Markov chains using the Gillespie stochastic simulation algorithm and incorporates available data such as births, deaths and movements as scheduled events at predefined time-points. Using C code for the numerical solvers and OpenMP to divide work over multiple processors ensures high performance when simulating a sample outcome. One of our design goals was to make SimInf extendable and to enable use of the numerical solvers from other R extension packages in order to facilitate complex epidemiological research. In this paper, we provide a technical description of the framework and demonstrate its use on some basic examples. We also discuss how to specify and extend the framework with user-defined models. |
1711.02448 | Rui Ponte Costa | Rui Ponte Costa, Yannis M. Assael, Brendan Shillingford, Nando de
Freitas and Tim P. Vogels | Cortical microcircuits as gated-recurrent neural networks | To appear in Advances in Neural Information Processing Systems 30
(NIPS 2017). 13 pages, 2 figures (and 1 supp. figure) | null | null | null | q-bio.NC cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cortical circuits exhibit intricate recurrent architectures that are
remarkably similar across different brain areas. Such stereotyped structure
suggests the existence of common computational principles. However, such
principles have remained largely elusive. Inspired by gated-memory networks,
namely long short-term memory networks (LSTMs), we introduce a recurrent neural
network in which information is gated through inhibitory cells that are
subtractive (subLSTM). We propose a natural mapping of subLSTMs onto known
canonical excitatory-inhibitory cortical microcircuits. Our empirical
evaluation across sequential image classification and language modelling tasks
shows that subLSTM units can achieve similar performance to LSTM units. These
results suggest that cortical circuits can be optimised to solve complex
contextual problems and propose a novel view of their computational function.
Overall, our work provides a step towards unifying recurrent networks as used in
machine learning with their biological counterparts.
| [
{
"created": "Tue, 7 Nov 2017 13:03:35 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Jan 2018 17:29:28 GMT",
"version": "v2"
}
] | 2018-01-04 | [
[
"Costa",
"Rui Ponte",
""
],
[
"Assael",
"Yannis M.",
""
],
[
"Shillingford",
"Brendan",
""
],
[
"de Freitas",
"Nando",
""
],
[
"Vogels",
"Tim P.",
""
]
] | Cortical circuits exhibit intricate recurrent architectures that are remarkably similar across different brain areas. Such stereotyped structure suggests the existence of common computational principles. However, such principles have remained largely elusive. Inspired by gated-memory networks, namely long short-term memory networks (LSTMs), we introduce a recurrent neural network in which information is gated through inhibitory cells that are subtractive (subLSTM). We propose a natural mapping of subLSTMs onto known canonical excitatory-inhibitory cortical microcircuits. Our empirical evaluation across sequential image classification and language modelling tasks shows that subLSTM units can achieve similar performance to LSTM units. These results suggest that cortical circuits can be optimised to solve complex contextual problems and propose a novel view of their computational function. Overall, our work provides a step towards unifying recurrent networks as used in machine learning with their biological counterparts. |
2108.12261 | Xilin Liu | Xilin Liu, Andrew G. Richardson | A System-on-Chip for Closed-loop Optogenetic Sleep Modulation | 8 pages, 9 figures, 2 tables | null | null | null | q-bio.NC cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Stimulation of target neuronal populations using optogenetic techniques
during specific sleep stages has begun to elucidate the mechanisms and effects
of sleep. To conduct closed-loop optogenetic sleep studies in untethered
animals, we designed a fully integrated, low-power system-on-chip (SoC) for
real-time sleep stage classification and stage-specific optical stimulation.
The SoC consists of a 4-channel analog front-end for recording polysomnography
signals, a mixed-signal machine-learning (ML) core, and a 16-channel optical
stimulation back-end. A novel ML algorithm and innovative circuit design
techniques improved the online classification performance while minimizing
power consumption. The SoC was designed and simulated in 180 nm CMOS
technology. In an evaluation using an expert labeled sleep database with 20
subjects, the SoC achieves a high sensitivity of 0.806 and a specificity of
0.947 in discriminating 5 sleep stages. Overall power consumption in continuous
operation is 97 uW.
| [
{
"created": "Sat, 17 Jul 2021 01:03:48 GMT",
"version": "v1"
}
] | 2021-08-30 | [
[
"Liu",
"Xilin",
""
],
[
"Richardson",
"Andrew G.",
""
]
] | Stimulation of target neuronal populations using optogenetic techniques during specific sleep stages has begun to elucidate the mechanisms and effects of sleep. To conduct closed-loop optogenetic sleep studies in untethered animals, we designed a fully integrated, low-power system-on-chip (SoC) for real-time sleep stage classification and stage-specific optical stimulation. The SoC consists of a 4-channel analog front-end for recording polysomnography signals, a mixed-signal machine-learning (ML) core, and a 16-channel optical stimulation back-end. A novel ML algorithm and innovative circuit design techniques improved the online classification performance while minimizing power consumption. The SoC was designed and simulated in 180 nm CMOS technology. In an evaluation using an expert labeled sleep database with 20 subjects, the SoC achieves a high sensitivity of 0.806 and a specificity of 0.947 in discriminating 5 sleep stages. Overall power consumption in continuous operation is 97 uW. |
1301.5512 | Amaury Lambert | Amaury Lambert and H\'el\`ene Morlon and Rampal S. Etienne | The reconstructed tree in the lineage-based model of protracted
speciation | 27 pages, 5 figures | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A popular line of research in evolutionary biology is the use of
time-calibrated phylogenies for the inference of diversification processes.
This requires computing the likelihood of a given ultrametric tree as the
reconstructed tree produced by a given model of diversification. Etienne &
Rosindell (2012) proposed a lineage-based model of diversification, called
protracted speciation, where species remain incipient during a random duration
before turning good species, and showed that this can explain the slowdown in
lineage accumulation observed in real phylogenies. However, they were unable to
provide a general likelihood formula. Here, we present a likelihood formula for
protracted speciation models, where rates at which species turn good or become
extinct can depend both on their age and on time. Our only restrictive
assumption is that speciation rate does not depend on species status.
Our likelihood formula utilizes a new technique, based on the contour of the
phylogenetic tree and first developed in Lambert (2010). We consider the
reconstructed trees spanned by all extant species, by all good extant species,
or by all representative species, which are either good extant species or
incipient species representative of some good extinct species. Specifically, we
prove that each of these trees is a coalescent point process, that is, a
planar, ultrametric tree where the coalescence times between two consecutive
tips are independent, identically distributed random variables. We characterize
the common distribution of these coalescence times in some, biologically
meaningful, special cases for which the likelihood reduces to an elegant
analytical formula or becomes numerically tractable.
| [
{
"created": "Wed, 23 Jan 2013 14:29:24 GMT",
"version": "v1"
}
] | 2013-01-24 | [
[
"Lambert",
"Amaury",
""
],
[
"Morlon",
"Hélène",
""
],
[
"Etienne",
"Rampal S.",
""
]
] | A popular line of research in evolutionary biology is the use of time-calibrated phylogenies for the inference of diversification processes. This requires computing the likelihood of a given ultrametric tree as the reconstructed tree produced by a given model of diversification. Etienne & Rosindell (2012) proposed a lineage-based model of diversification, called protracted speciation, where species remain incipient during a random duration before turning good species, and showed that this can explain the slowdown in lineage accumulation observed in real phylogenies. However, they were unable to provide a general likelihood formula. Here, we present a likelihood formula for protracted speciation models, where rates at which species turn good or become extinct can depend both on their age and on time. Our only restrictive assumption is that speciation rate does not depend on species status. Our likelihood formula utilizes a new technique, based on the contour of the phylogenetic tree and first developed in Lambert (2010). We consider the reconstructed trees spanned by all extant species, by all good extant species, or by all representative species, which are either good extant species or incipient species representative of some good extinct species. Specifically, we prove that each of these trees is a coalescent point process, that is, a planar, ultrametric tree where the coalescence times between two consecutive tips are independent, identically distributed random variables. We characterize the common distribution of these coalescence times in some, biologically meaningful, special cases for which the likelihood reduces to an elegant analytical formula or becomes numerically tractable. |
q-bio/0404028 | Frank Schweitzer | Robert Mach, Frank Schweitzer | Modeling Vortex Swarming In Daphnia | 24 pages including 11 multi-part figs. Major revisions compared to
version 1, new results on transition from uncorrelated rotation to vortex
swarming. Extended discussion. For related publications see
http://www.sg.ethz.ch/people/scfrank/Publications | Bulletin of Mathematical Biology, vol. 69 (2007) pp. 539-562 | 10.1007/s11538-006-9135-3 | null | q-bio.PE cond-mat.stat-mech physics.bio-ph | null | Based on experimental observations in \textit{Daphnia}, we introduce an
agent-based model for the motion of single and swarms of animals. Each agent is
described by a stochastic equation that also considers the conditions for
active biological motion. An environmental potential further reflects local
conditions for \textit{Daphnia}, such as attraction to light sources. This
model is sufficient to describe the observed cycling behavior of single
\textit{Daphnia}. To simulate vortex swarming of many \textit{Daphnia}, i.e.
the collective rotation of the swarm in one direction, we extend the model by
considering avoidance of collisions. Two different ansatzes to model such a
behavior are developed and compared. By means of computer simulations of a
multi-agent system we show that local avoidance - as a special form of
asymmetric repulsion between animals - leads to the emergence of a vortex
swarm. The transition from uncorrelated rotation of single agents to the vortex
swarming as a function of the swarm size is investigated. Eventually, some
evidence of avoidance behavior in \textit{Daphnia} is provided by comparing
experimental and simulation results for two animals.
| [
{
"created": "Thu, 22 Apr 2004 23:06:38 GMT",
"version": "v1"
},
{
"created": "Wed, 24 Nov 2004 07:19:59 GMT",
"version": "v2"
},
{
"created": "Fri, 16 Mar 2007 10:17:46 GMT",
"version": "v3"
}
] | 2007-05-23 | [
[
"Mach",
"Robert",
""
],
[
"Schweitzer",
"Frank",
""
]
] | Based on experimental observations in \textit{Daphnia}, we introduce an agent-based model for the motion of single and swarms of animals. Each agent is described by a stochastic equation that also considers the conditions for active biological motion. An environmental potential further reflects local conditions for \textit{Daphnia}, such as attraction to light sources. This model is sufficient to describe the observed cycling behavior of single \textit{Daphnia}. To simulate vortex swarming of many \textit{Daphnia}, i.e. the collective rotation of the swarm in one direction, we extend the model by considering avoidance of collisions. Two different ansatzes to model such a behavior are developed and compared. By means of computer simulations of a multi-agent system we show that local avoidance - as a special form of asymmetric repulsion between animals - leads to the emergence of a vortex swarm. The transition from uncorrelated rotation of single agents to the vortex swarming as a function of the swarm size is investigated. Eventually, some evidence of avoidance behavior in \textit{Daphnia} is provided by comparing experimental and simulation results for two animals. |
0909.0189 | Zhanghan Wu | Zhanghan Wu, Vlad Elgart, Hong Qian, Jianhua Xing | Amplification and detection of single molecule conformational
fluctuation through a protein interaction network with bimodal distributions | 9 pages, 5 figures | J. Phys. Chem. B, 2009, 113 (36), pp 12375-12381 | 10.1021/jp903548d | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A protein undergoes conformational dynamics with multiple time scales, which
results in fluctuating enzyme activities. Recent studies in single molecule
enzymology have observed this "age-old" dynamic disorder phenomenon directly.
However, the single molecule technique has its limitations. To observe this
molecular effect with real biochemical functions {\it in situ}, we propose to
couple the fluctuations in enzymatic activity to noise propagation in small
protein interaction networks, such as the zeroth-order ultrasensitive
phosphorylation-dephosphorylation cycle. We show that enzyme fluctuations can
indeed be amplified by orders of magnitude into fluctuations in the level of
substrate phosphorylation, a quantity of wide interest in cellular biology.
Enzyme conformational fluctuations sufficiently slower than the catalytic
reaction turnover rate result in a bimodal concentration distribution of the
phosphorylated substrate. In turn, this network-amplified single-enzyme
fluctuation can be used as a novel biochemical "reporter" for measuring
single-enzyme conformational fluctuation rates.
| [
{
"created": "Tue, 1 Sep 2009 14:13:09 GMT",
"version": "v1"
}
] | 2009-11-10 | [
[
"Wu",
"Zhanghan",
""
],
[
"Elgart",
"Vlad",
""
],
[
"Qian",
"Hong",
""
],
[
"Xing",
"Jianhua",
""
]
] | A protein undergoes conformational dynamics with multiple time scales, which results in fluctuating enzyme activities. Recent studies in single molecule enzymology have observed this "age-old" dynamic disorder phenomenon directly. However, the single molecule technique has its limitations. To observe this molecular effect with real biochemical functions {\it in situ}, we propose to couple the fluctuations in enzymatic activity to noise propagation in small protein interaction networks, such as the zeroth-order ultrasensitive phosphorylation-dephosphorylation cycle. We show that enzyme fluctuations can indeed be amplified by orders of magnitude into fluctuations in the level of substrate phosphorylation, a quantity of wide interest in cellular biology. Enzyme conformational fluctuations sufficiently slower than the catalytic reaction turnover rate result in a bimodal concentration distribution of the phosphorylated substrate. In turn, this network-amplified single-enzyme fluctuation can be used as a novel biochemical "reporter" for measuring single-enzyme conformational fluctuation rates. |
1909.10405 | John-Antonio Argyriadis | John-Antonio Argyriadis, Yang-Hui He, Vishnu Jejjala, Djordje Minic | Dynamics of genetic code evolution: The emergence of universality | 38 pages | Phys. Rev. E 103, 052409 (2021) | 10.1103/PhysRevE.103.052409 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the dynamics of genetic code evolution. The model of Vetsigian et
al. [1] and Vetsigian [2] uses the mechanism of horizontal gene transfer to
demonstrate convergence of the genetic code to a near universal solution. We
reproduce and analyze the algorithm as a dynamical system. All the parameters
used in the model are varied to assess their impact on convergence and
optimality score. We show that by allowing specific parameters to vary with
time, the solution exhibits attractor dynamics. Finally, we study automorphisms
of the genetic code arising due to this model. We use this to examine the
scaling of the solutions in order to re-examine universality and find that
there is a direct link to mutation rate.
| [
{
"created": "Mon, 23 Sep 2019 14:59:09 GMT",
"version": "v1"
},
{
"created": "Fri, 27 Nov 2020 17:59:57 GMT",
"version": "v2"
}
] | 2021-05-26 | [
[
"Argyriadis",
"John-Antonio",
""
],
[
"He",
"Yang-Hui",
""
],
[
"Jejjala",
"Vishnu",
""
],
[
"Minic",
"Djordje",
""
]
] | We study the dynamics of genetic code evolution. The model of Vetsigian et al. [1] and Vetsigian [2] uses the mechanism of horizontal gene transfer to demonstrate convergence of the genetic code to a near universal solution. We reproduce and analyze the algorithm as a dynamical system. All the parameters used in the model are varied to assess their impact on convergence and optimality score. We show that by allowing specific parameters to vary with time, the solution exhibits attractor dynamics. Finally, we study automorphisms of the genetic code arising due to this model. We use this to examine the scaling of the solutions in order to re-examine universality and find that there is a direct link to mutation rate. |
2202.07325 | Christopher Overton | Christopher E. Overton, Luke Webb, Uma Datta, Mike Fursman, Jo
Hardstaff, Iina Hiironen, Karthik Paranthaman, Heather Riley, James Sedgwick,
Julia Verne, Steve Willner, Lorenzo Pellis, Ian Hall | Novel methods for estimating the instantaneous and overall COVID-19 case
fatality risk among care home residents in England | null | null | 10.1371/journal.pcbi.1010554 | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The COVID-19 pandemic has had high mortality rates in the elderly and frail
worldwide, particularly in care homes. This is driven by the difficulty of
isolating care homes from the wider community, the large population sizes
within care facilities (relative to typical households), and the age/frailty of
the residents. To quantify the mortality risk posed by disease, the case
fatality risk (CFR) is an important tool. This quantifies the proportion of
cases that result in death. Throughout the pandemic, CFR amongst care home
residents in England has been monitored closely. To estimate CFR, we apply both
novel and existing methods to data on deaths in care homes, collected by Public
Health England and the Care Quality Commission. We compare these different
methods, evaluating their relative strengths and weaknesses. Using these
methods, we estimate temporal trends in the instantaneous CFR (at both daily
and weekly resolutions) and the overall CFR across the whole of England, and
dis-aggregated at regional level. We also investigate how the CFR varies based
on age and on the type of care required, dis-aggregating by whether care homes
include nursing staff and by age of residents. This work has contributed to the
summary of measures used for monitoring the UK epidemic.
| [
{
"created": "Tue, 15 Feb 2022 11:28:28 GMT",
"version": "v1"
}
] | 2023-01-11 | [
[
"Overton",
"Christopher E.",
""
],
[
"Webb",
"Luke",
""
],
[
"Datta",
"Uma",
""
],
[
"Fursman",
"Mike",
""
],
[
"Hardstaff",
"Jo",
""
],
[
"Hiironen",
"Iina",
""
],
[
"Paranthaman",
"Karthik",
""
],
[
"Riley",
"Heather",
""
],
[
"Sedgwick",
"James",
""
],
[
"Verne",
"Julia",
""
],
[
"Willner",
"Steve",
""
],
[
"Pellis",
"Lorenzo",
""
],
[
"Hall",
"Ian",
""
]
] | The COVID-19 pandemic has had high mortality rates in the elderly and frail worldwide, particularly in care homes. This is driven by the difficulty of isolating care homes from the wider community, the large population sizes within care facilities (relative to typical households), and the age/frailty of the residents. To quantify the mortality risk posed by disease, the case fatality risk (CFR) is an important tool. This quantifies the proportion of cases that result in death. Throughout the pandemic, CFR amongst care home residents in England has been monitored closely. To estimate CFR, we apply both novel and existing methods to data on deaths in care homes, collected by Public Health England and the Care Quality Commission. We compare these different methods, evaluating their relative strengths and weaknesses. Using these methods, we estimate temporal trends in the instantaneous CFR (at both daily and weekly resolutions) and the overall CFR across the whole of England, and dis-aggregated at regional level. We also investigate how the CFR varies based on age and on the type of care required, dis-aggregating by whether care homes include nursing staff and by age of residents. This work has contributed to the summary of measures used for monitoring the UK epidemic. |
2211.08522 | Michael Levin | Leo Pio-Lopez, Johanna Bischof, Jennifer V. LaPalme, and Michael Levin | The scaling of goals via homeostasis: an evolutionary simulation,
experiment and analysis | 27 pages, 11 Figures, 2 Algorithms | null | null | null | q-bio.PE cs.MA cs.NE q-bio.TO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | All cognitive agents are composite beings. Specifically, complex living
agents consist of cells, which are themselves competent sub-agents navigating
physiological and metabolic spaces. Behavior science, evolutionary
developmental biology, and the field of machine intelligence all seek an answer
to the scaling of biological cognition: what evolutionary dynamics enable
individual cells to integrate their activities to result in the emergence of a
novel, higher-level intelligence that has goals and competencies that belong to
it and not to its parts? Here, we report the results of simulations based on
the TAME framework, which proposes that evolution pivoted the collective
intelligence of cells during morphogenesis of the body into traditional
behavioral intelligence by scaling up the goal states at the center of
homeostatic processes. We tested the hypothesis that a minimal evolutionary
framework is sufficient for small, low-level setpoints of metabolic homeostasis
in cells to scale up into collectives (tissues) which solve a problem in
morphospace: the organization of a body-wide positional information axis (the
classic French Flag problem). We found that these emergent morphogenetic agents
exhibit a number of predicted features, including the use of stress propagation
dynamics to achieve their target morphology, as well as the ability to recover
from perturbation (robustness) and long-term stability (even though neither of
these was directly selected for). Moreover, we observed unexpected behavior of
sudden remodeling long after the system stabilizes. We tested this prediction
in a biological system - regenerating planaria - and observed a very similar
phenomenon. We propose that this system is a first step toward a quantitative
understanding of how evolution scales minimal goal-directed behavior
(homeostatic loops) into higher-level problem-solving agents in morphogenetic
and other spaces.
| [
{
"created": "Tue, 15 Nov 2022 21:48:44 GMT",
"version": "v1"
}
] | 2022-11-17 | [
[
"Pio-Lopez",
"Leo",
""
],
[
"Bischof",
"Johanna",
""
],
[
"LaPalme",
"Jennifer V.",
""
],
[
"Levin",
"Michael",
""
]
] | All cognitive agents are composite beings. Specifically, complex living agents consist of cells, which are themselves competent sub-agents navigating physiological and metabolic spaces. Behavior science, evolutionary developmental biology, and the field of machine intelligence all seek an answer to the scaling of biological cognition: what evolutionary dynamics enable individual cells to integrate their activities to result in the emergence of a novel, higher-level intelligence that has goals and competencies that belong to it and not to its parts? Here, we report the results of simulations based on the TAME framework, which proposes that evolution pivoted the collective intelligence of cells during morphogenesis of the body into traditional behavioral intelligence by scaling up the goal states at the center of homeostatic processes. We tested the hypothesis that a minimal evolutionary framework is sufficient for small, low-level setpoints of metabolic homeostasis in cells to scale up into collectives (tissues) which solve a problem in morphospace: the organization of a body-wide positional information axis (the classic French Flag problem). We found that these emergent morphogenetic agents exhibit a number of predicted features, including the use of stress propagation dynamics to achieve their target morphology, as well as the ability to recover from perturbation (robustness) and long-term stability (even though neither of these was directly selected for). Moreover, we observed unexpected behavior of sudden remodeling long after the system stabilizes. We tested this prediction in a biological system - regenerating planaria - and observed a very similar phenomenon. We propose that this system is a first step toward a quantitative understanding of how evolution scales minimal goal-directed behavior (homeostatic loops) into higher-level problem-solving agents in morphogenetic and other spaces. |
2208.14445 | Zahra Riahi Samani | Zahra Riahi Samani, Drew Parker, Hamed Akbari, Spyridon Bakas, Ronald
L. Wolf, Steven Brem, Ragini Verma | Artificial intelligence-based locoregional markers of brain peritumoral
microenvironment | null | null | null | null | q-bio.QM cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In malignant primary brain tumors, cancer cells infiltrate into the
peritumoral brain structures, which results in inevitable recurrence.
Quantitative assessment of infiltrative heterogeneity in the peritumoral
region, the area where biopsy or resection can be hazardous, is important for
clinical decision making. Previous work on characterizing the infiltrative
heterogeneity in the peritumoral region used various imaging modalities, but
information on extracellular free water movement restriction has been explored
only to a limited extent. Here, we derive a unique set of Artificial Intelligence (AI)-based
markers capturing the heterogeneity of tumor infiltration, by characterizing
free water movement restriction in the peritumoral region using Diffusion
Tensor Imaging (DTI)-based free water volume fraction maps. A novel voxel-wise
deep learning-based peritumoral microenvironment index (PMI) is first extracted
by leveraging the widely different water diffusivity properties of
glioblastomas and brain metastases as regions with and without infiltrations in
the peritumoral tissue. Descriptive characteristics of locoregional hubs of
uniformly high PMI values are extracted as AI-based markers to capture distinct
aspects of infiltrative heterogeneity. The proposed markers are applied to two
clinical use cases on an independent population of 275 adult-type diffuse
gliomas (CNS WHO grade 4), analyzing the duration of survival among
Isocitrate-Dehydrogenase 1 (IDH1)-wildtypes and the differences with
IDH1-mutants. Our findings provide a panel of markers as surrogates of
infiltration that capture unique insight into the underlying biology of
peritumoral microstructural heterogeneity, establishing them as biomarkers of
prognosis pertaining to survival and molecular stratification, with potential
applicability in clinical decision making.
| [
{
"created": "Mon, 29 Aug 2022 22:04:06 GMT",
"version": "v1"
}
] | 2022-09-01 | [
[
"Samani",
"Zahra Riahi",
""
],
[
"Parker",
"Drew",
""
],
[
"Akbari",
"Hamed",
""
],
[
"Bakas",
"Spyridon",
""
],
[
"Wolf",
"Ronald L.",
""
],
[
"Brem",
"Steven",
""
],
[
"Verma",
"Ragini",
""
]
] | In malignant primary brain tumors, cancer cells infiltrate into the peritumoral brain structures which results in inevitable recurrence. Quantitative assessment of infiltrative heterogeneity in the peritumoral region, the area where biopsy or resection can be hazardous, is important for clinical decision making. Previous work on characterizing the infiltrative heterogeneity in the peritumoral region used various imaging modalities, but information of extracellular free water movement restriction has been limitedly explored. Here, we derive a unique set of Artificial Intelligence (AI)-based markers capturing the heterogeneity of tumor infiltration, by characterizing free water movement restriction in the peritumoral region using Diffusion Tensor Imaging (DTI)-based free water volume fraction maps. A novel voxel-wise deep learning-based peritumoral microenvironment index (PMI) is first extracted by leveraging the widely different water diffusivity properties of glioblastomas and brain metastases as regions with and without infiltrations in the peritumoral tissue. Descriptive characteristics of locoregional hubs of uniformly high PMI values are extracted as AI-based markers to capture distinct aspects of infiltrative heterogeneity. The proposed markers are applied to two clinical use cases on an independent population of 275 adult-type diffuse gliomas (CNS WHO grade 4), analyzing the duration of survival among Isocitrate-Dehydrogenase 1 (IDH1)-wildtypes and the differences with IDH1-mutants. Our findings provide a panel of markers as surrogates of infiltration that captures unique insight about underlying biology of peritumoral microstructural heterogeneity, establishing them as biomarkers of prognosis pertaining to survival and molecular stratification, with potential applicability in clinical decision making. |
2201.13378 | Cooper Mellema | Cooper J. Mellema, Albert Montillo | Novel Machine Learning Approaches for Improving the Reproducibility and
Reliability of Functional and Effective Connectivity from Functional MRI | 23 pages, 5 figures, 3 algorithms, 2 equations, 3 tables, 5 pages
supplemental | null | null | null | q-bio.NC q-bio.QM | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Objective: New measures of human brain connectivity are needed to address
gaps in the existing measures and facilitate the study of brain function,
cognitive capacity, and identify early markers of human disease. Traditional
approaches to measure functional connectivity between pairs of brain regions in
functional MRI, such as correlation and partial correlation, fail to capture
nonlinear aspects in the regional associations. We propose a new machine
learning based measure of functional connectivity which efficiently captures
linear and nonlinear aspects. Approach: We propose two new EC measures. The
first, a machine learning based measure of effective connectivity, measures
nonlinear aspects across the entire brain. The second, Structurally Projected
Granger Causality, adapts Granger Causal connectivity to efficiently
characterize and regularize the whole brain EC connectome to respect underlying
biological structural connectivity. The proposed measures are compared to
traditional measures in terms of reproducibility and the ability to predict
individual traits in order to demonstrate these measures' internal validity. We
use four repeat scans of the same individuals from the Human Connectome Project
and measure the ability of the measures to predict individual subject
physiologic and cognitive traits. Main results: The proposed new FC measure of
ML.FC attains high reproducibility with an R squared of 0.44, while the
proposed EC measure of SP.GC attains the highest predictive power with an R
squared of 0.66. Significance: The proposed methods are highly suitable for
achieving high reproducibility and predictiveness.
| [
{
"created": "Mon, 31 Jan 2022 17:43:04 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Aug 2023 15:56:04 GMT",
"version": "v2"
}
] | 2023-08-28 | [
[
"Mellema",
"Cooper J.",
""
],
[
"Montillo",
"Albert",
""
]
] | Objective: New measures of human brain connectivity are needed to address gaps in the existing measures and facilitate the study of brain function, cognitive capacity, and identify early markers of human disease. Traditional approaches to measure functional connectivity between pairs of brain regions in functional MRI, such as correlation and partial correlation, fail to capture nonlinear aspects in the regional associations. We propose a new machine learning based measure of functional connectivity which efficiently captures linear and nonlinear aspects. Approach: We propose two new EC measures. The first, a machine learning based measure of effective connectivity, measures nonlinear aspects across the entire brain. The second, Structurally Projected Granger Causality adapts Granger Causal connectivity to efficiently characterize and regularize the whole brain EC connectome to respect underlying biological structural connectivity. The proposed measures are compared to traditional measures in terms of reproducibility and the ability to predict individual traits in order to demonstrate these measures internal validity. We use four repeat scans of the same individuals from the Human Connectome Project and measure the ability of the measures to predict individual subject physiologic and cognitive traits. Main results: The proposed new FC measure of ML.FC attains high reproducibility with an R squared of 0.44, while the proposed EC measure of SP.GC attains the highest predictive power with an R squared of 0.66. Significance: The proposed methods are highly suitable for achieving high reproducibility and predictiveness. |
1009.3656 | Efstratios Manousakis | Efstratios Manousakis | When perceptual time stands still: Long stable memory in binocular
rivalry | 7 two-column latex pages and 9 eps figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have carried out binocular rivalry experiments with a large number of
subjects to obtain high quality statistics on probability distribution of
dominance duration (PDDD) for two cases where (a) the rival stimulus is
continuously presented and (b) the rival stimulus is periodically removed, with
stimulus-on and stimulus-off intervals Ton and Toff respectively. It is shown
that the PDDD obtained for the latter case can be reproduced to a reasonable
degree of approximation by simply using the PDDD of part (a) and slicing it into
pieces of time extent Ton and by introducing intervals of length Toff between
the on-intervals where the PDDD is set to zero. This suggests that the
variables representing the perceptual state do not change significantly during
long blank intervals. We argue that these findings impose challenges to
theoretical models which aim at describing visual perception.
| [
{
"created": "Sun, 19 Sep 2010 18:36:31 GMT",
"version": "v1"
}
] | 2010-09-21 | [
[
"Manousakis",
"Efstratios",
""
]
] | We have carried out binocular rivalry experiments with a large number of subjects to obtain high quality statistics on probability distribution of dominance duration (PDDD) for two cases where (a) the rival stimulus is continuously presented and (b) the rival stimulus is periodically removed, with stimulus-on and stimulus-off intervals Ton and Toff respectively. It is shown that the PDDD obtained for the latter case can be reproduced to a reasonable degree of approximation by simply using the PDDD of part (a) and slicing it at pieces of time extent Ton and by introducing intervals of length Toff between the on-intervals where the PDDD is set to zero. This suggests that the variables representing the perceptual state do not change significantly during long blank intervals. We argue that these findings impose challenges to theoretical models which aim at describing visual perception. |
1703.06532 | Chen Jia | Chen Jia, Peng Xie, Min Chen, Michael Q. Zhang | Stochastic fluctuations can reveal the feedback signs of gene regulatory
networks at the single-molecule level | 13 pages, 5 figures | null | null | null | q-bio.MN cond-mat.stat-mech physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the relationship between spontaneous stochastic fluctuations
and the topology of the underlying gene regulatory network is of fundamental
importance for the study of single-cell stochastic gene expression. Here by
solving the analytical steady-state distribution of the protein copy number in
a general kinetic model of stochastic gene expression with nonlinear feedback
regulation, we reveal the relationship between stochastic fluctuations and
feedback topology at the single-molecule level, which provides novel insights
into how and to what extent a feedback loop can enhance or suppress molecular
fluctuations. Based on such relationship, we also develop an effective method
to extract the topological information of a gene regulatory network from
single-cell gene expression data. The theory is demonstrated by numerical
simulations and, more importantly, validated quantitatively by single-cell data
analysis of a synthetic gene circuit integrated in human kidney cells.
| [
{
"created": "Sun, 19 Mar 2017 22:46:36 GMT",
"version": "v1"
},
{
"created": "Tue, 30 May 2017 10:14:25 GMT",
"version": "v2"
},
{
"created": "Mon, 24 Jul 2017 06:53:32 GMT",
"version": "v3"
},
{
"created": "Tue, 24 Oct 2017 17:55:45 GMT",
"version": "v4"
}
] | 2017-10-25 | [
[
"Jia",
"Chen",
""
],
[
"Xie",
"Peng",
""
],
[
"Chen",
"Min",
""
],
[
"Zhang",
"Michael Q.",
""
]
] | Understanding the relationship between spontaneous stochastic fluctuations and the topology of the underlying gene regulatory network is of fundamental importance for the study of single-cell stochastic gene expression. Here by solving the analytical steady-state distribution of the protein copy number in a general kinetic model of stochastic gene expression with nonlinear feedback regulation, we reveal the relationship between stochastic fluctuations and feedback topology at the single-molecule level, which provides novel insights into how and to what extent a feedback loop can enhance or suppress molecular fluctuations. Based on such relationship, we also develop an effective method to extract the topological information of a gene regulatory network from single-cell gene expression data. The theory is demonstrated by numerical simulations and, more importantly, validated quantitatively by single-cell data analysis of a synthetic gene circuit integrated in human kidney cells. |
q-bio/0508024 | Sachin Talathi | Henry D.I. Abarbanel, Sachin Talathi | Reading Sequences of Interspike Intervals in Biological Neural Circuits | null | null | null | null | q-bio.OT | null | Sensory systems pass information about an animal's environment to higher
nervous system units through sequences of action potentials. When these action
potentials have essentially equivalent waveforms, all information is contained
in the interspike intervals (ISIs) of the spike sequence. We address the
question: How do neural circuits recognize and read these ISI sequences?
Our answer is given in terms of a biologically inspired neural circuit that
we construct using biologically realistic neurons. The essential ingredients of
the ISI Reading Unit (IRU) are (i) a tunable time delay circuit modelled after
one found in the anterior forebrain pathway of the birdsong system and (ii) a
recently observed rule for inhibitory synaptic plasticity. We present a circuit
that can both learn the ISIs of a training sequence using inhibitory synaptic
plasticity and then recognize the same ISI sequence when it is presented on
subsequent occasions. We investigate the ability of this IRU to learn in the
presence of two kinds of noise: jitter in the time of each spike and random
spikes occurring in the ideal spike sequence. We also discuss how the circuit
can be detuned by removing the selected ISI sequence and replacing it by an ISI
sequence with ISIs drawn from a probability distribution.
We have investigated realizations of the time delay circuit using
Hodgkin-Huxley conductance based neurons connected by realistic excitatory and
inhibitory synapses. Our models for the time delay circuit are tunable from
about 10 ms to 100 ms allowing one to learn and recognize ISI sequences within
that range of ISIs. ISIs down to a few ms and longer than 100 ms are possible
with other intrinsic and synaptic currents in the component neurons.
| [
{
"created": "Fri, 19 Aug 2005 17:04:25 GMT",
"version": "v1"
},
{
"created": "Fri, 19 Aug 2005 22:12:32 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Abarbanel",
"Henry D. I.",
""
],
[
"Talathi",
"Sachin",
""
]
] | Sensory systems pass information about an animal's environment to higher nervous system units through sequences of action potentials. When these action potentials have essentially equivalent waveforms, all information is contained in the interspike intervals (ISIs) of the spike sequence. We address the question: How do neural circuits recognize and read these ISI sequences? Our answer is given in terms of a biologically inspired neural circuit that we construct using biologically realistic neurons. The essential ingredients of the ISI Reading Unit (IRU) are (i) a tunable time delay circuit modelled after one found in the anterior forebrain pathway of the birdsong system and (ii) a recently observed rule for inhibitory synaptic plasticity. We present a circuit that can both learn the ISIs of a training sequence using inhibitory synaptic plasticity and then recognize the same ISI sequence when it is presented on subsequent occasions. We investigate the ability of this IRU to learn in the presence of two kinds of noise: jitter in the time of each spike and random spikes occurring in the ideal spike sequence. We also discuss how the circuit can be detuned by removing the selected ISI sequence and replacing it by an ISI sequence with ISIs drawn from a probability distribution. We have investigated realizations of the time delay circuit using Hodgkin-Huxley conductance based neurons connected by realistic excitatory and inhibitory synapses. Our models for the time delay circuit are tunable from about 10 ms to 100 ms allowing one to learn and recognize ISI sequences within that range of ISIs. ISIs down to a few ms and longer than 100 ms are possible with other intrinsic and synaptic currents in the component neurons. |
0903.0184 | Michael B\"orsch | Monika G. Dueser, Nawid Zarrabi, Daniel J. Cipriano, Stefan Ernst,
Gary D. Glick, Stanley D. Dunn, Michael Boersch | 36 degree step size of proton-driven c-ring rotation in FoF1-ATP
synthase | 8 pages, 1 figure | null | null | null | q-bio.BM q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthesis of the biological "energy currency molecule" adenosine triphosphate
ATP is accomplished by FoF1-ATP synthase. In the plasma membrane of Escherichia
coli, proton-driven rotation of a ring of 10 c subunits in the Fo motor powers
catalysis in the F1 motor. While F1 uses 120 degree stepping, Fo models predict
a step-by-step rotation of c subunits 36 degree at a time, which is here
demonstrated by single-molecule fluorescence resonance energy transfer.
| [
{
"created": "Sun, 1 Mar 2009 22:11:20 GMT",
"version": "v1"
}
] | 2009-03-03 | [
[
"Dueser",
"Monika G.",
""
],
[
"Zarrabi",
"Nawid",
""
],
[
"Cipriano",
"Daniel J.",
""
],
[
"Ernst",
"Stefan",
""
],
[
"Glick",
"Gary D.",
""
],
[
"Dunn",
"Stanley D.",
""
],
[
"Boersch",
"Michael",
""
]
] | Synthesis of the biological "energy currency molecule" adenosine triphosphate ATP is accomplished by FoF1-ATP synthase. In the plasma membrane of Escherichia coli, proton-driven rotation of a ring of 10 c subunits in the Fo motor powers catalysis in the F1 motor. While F1 uses 120 degree stepping, Fo models predict a step-by-step rotation of c subunits 36 degree at a time, which is here demonstrated by single-molecule fluorescence resonance energy transfer. |
1607.06358 | Ian Vernon Dr | Ian Vernon and Junli Liu and Michael Goldstein and James Rowe and Jen
Topping and Keith Lindsey | Bayesian uncertainty analysis for complex systems biology models:
emulation, global parameter searches and evaluation of gene functions | 26 pages, 13 figures. Version accepted by BMC systems biology | BMC Systems Biology (2018), 12(1) | 10.1186/s12918-017-0484-3 | null | q-bio.MN q-bio.CB q-bio.QM stat.AP stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Many mathematical models have now been employed across every area
of systems biology. These models increasingly involve large numbers of unknown
parameters, have complex structure which can result in substantial evaluation
time relative to the needs of the analysis, and need to be compared to observed
data. The correct analysis of such models usually requires a global parameter
search, over a high dimensional parameter space, that incorporates and respects
the most important sources of uncertainty. This can be an extremely difficult
task, but it is essential for any meaningful inference or prediction to be made
about any biological system. It hence represents a fundamental challenge for
the whole of systems biology.
Results: Bayesian statistical methodology for the uncertainty analysis of
complex models is introduced, which is designed to address the high dimensional
global parameter search problem. Bayesian emulators that mimic the systems
biology model but which are extremely fast to evaluate are embedded within an
iterative history match: an efficient method to search high dimensional spaces
within a more formal statistical setting, while incorporating major sources of
uncertainty. The approach is demonstrated via application to two models of
hormonal crosstalk in Arabidopsis root development, which have 32 rate
parameters, for which we identify the sets of rate parameter values that lead
to acceptable matches to observed trend data. The biological consequences of
the resulting comparison, including the evaluation of gene functions, are
described.
| [
{
"created": "Thu, 21 Jul 2016 15:10:57 GMT",
"version": "v1"
},
{
"created": "Fri, 12 Jan 2018 11:36:41 GMT",
"version": "v2"
}
] | 2018-01-15 | [
[
"Vernon",
"Ian",
""
],
[
"Liu",
"Junli",
""
],
[
"Goldstein",
"Michael",
""
],
[
"Rowe",
"James",
""
],
[
"Topping",
"Jen",
""
],
[
"Lindsey",
"Keith",
""
]
] | Background: Many mathematical models have now been employed across every area of systems biology. These models increasingly involve large numbers of unknown parameters, have complex structure which can result in substantial evaluation time relative to the needs of the analysis, and need to be compared to observed data. The correct analysis of such models usually requires a global parameter search, over a high dimensional parameter space, that incorporates and respects the most important sources of uncertainty. This can be an extremely difficult task, but it is essential for any meaningful inference or prediction to be made about any biological system. It hence represents a fundamental challenge for the whole of systems biology. Results: Bayesian statistical methodology for the uncertainty analysis of complex models is introduced, which is designed to address the high dimensional global parameter search problem. Bayesian emulators that mimic the systems biology model but which are extremely fast to evaluate are embedded within an iterative history match: an efficient method to search high dimensional spaces within a more formal statistical setting, while incorporating major sources of uncertainty. The approach is demonstrated via application to two models of hormonal crosstalk in Arabidopsis root development, which have 32 rate parameters, for which we identify the sets of rate parameter values that lead to acceptable matches to observed trend data. The biological consequences of the resulting comparison, including the evaluation of gene functions, are described. |
1203.1471 | Marc Robinson-Rechavi | J. Roux, M. Robinson-Rechavi | Developmental constraints on vertebrate genome evolution | null | PLoS Genetics 4 (2008) e1000311 | 10.1371/journal.pgen.1000311 | null | q-bio.PE q-bio.GN | http://creativecommons.org/licenses/by/3.0/ | Constraints in embryonic development are thought to bias the direction of
evolution by making some changes less likely, and others more likely, depending
on their consequences on ontogeny. Here, we characterize the constraints acting
on genome evolution in vertebrates. We used gene expression data from two
vertebrates: zebrafish, using a microarray experiment spanning 14 stages of
development, and mouse, using EST counts for 26 stages of development. We show
that, in both species, genes expressed early in development (1) have a more
dramatic effect of knock-out or mutation and (2) are more likely to revert to
single copy after whole genome duplication, relative to genes expressed late.
This supports high constraints on early stages of vertebrate development,
making them less open to innovations (gene gain or gene loss). Results are
robust to different sources of data: gene expression from microarrays, ESTs, or
in situ hybridizations; and mutants from directed KO, transgenic insertions,
point mutations, or morpholinos. We determine the pattern of these constraints,
which differs from the model used to describe vertebrate morphological
conservation ("hourglass" model). While morphological constraints reach a
maximum at mid-development (the "phylotypic" stage), genomic constraints appear
to decrease in a monotonous manner over developmental time.
| [
{
"created": "Tue, 6 Mar 2012 12:47:06 GMT",
"version": "v1"
}
] | 2012-03-08 | [
[
"Roux",
"J.",
""
],
[
"Robinson-Rechavi",
"M.",
""
]
] | Constraints in embryonic development are thought to bias the direction of evolution by making some changes less likely, and others more likely, depending on their consequences on ontogeny. Here, we characterize the constraints acting on genome evolution in vertebrates. We used gene expression data from two vertebrates: zebrafish, using a microarray experiment spanning 14 stages of development, and mouse, using EST counts for 26 stages of development. We show that, in both species, genes expressed early in development (1) have a more dramatic effect of knock-out or mutation and (2) are more likely to revert to single copy after whole genome duplication, relative to genes expressed late. This supports high constraints on early stages of vertebrate development, making them less open to innovations (gene gain or gene loss). Results are robust to different sources of data-gene expression from microarrays, ESTs, or in situ hybridizations; and mutants from directed KO, transgenic insertions, point mutations, or morpholinos. We determine the pattern of these constraints, which differs from the model used to describe vertebrate morphological conservation ("hourglass" model). While morphological constraints reach a maximum at mid-development (the "phylotypic" stage), genomic constraints appear to decrease in a monotonous manner over developmental time. |
0812.4341 | Vincent Breton | V. Vincent Breton (LPC-Clermont), A. L. Da Costa (LPC-Clermont), P. De
Vlieger (LPC-Clermont), L. Maigne (LPC-Clermont), D. Sarramia (LPC-Clermont),
Y.-M. Kim, D. Kim, H.Q. Nguyen, T. Solomonides, Y.-T. Wu, T. N. Hai | Innovative in silico approaches to address avian flu using grid
technology | 7 pages, submitted to Infectious Disorders - Drug Targets | Infectious Disorders - Drug Targets (2008) 7 p | null | PCCF RI 0803 | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The recent years have seen the emergence of diseases which have spread very
quickly all around the world either through human travels like SARS or animal
migration like avian flu. Among the biggest challenges raised by infectious
emerging diseases, one is related to the constant mutation of the viruses which
turns them into continuously moving targets for drug and vaccine discovery.
Another challenge is related to the early detection and surveillance of the
diseases as new cases can appear just anywhere due to the globalization of
exchanges and the circulation of people and animals around the earth, as
recently demonstrated by the avian flu epidemics. For 3 years now, a
collaboration of teams in Europe and Asia has been exploring some innovative in
silico approaches to better tackle avian flu taking advantage of the very large
computing resources available on international grid infrastructures. Grids were
used to study the impact of mutations on the effectiveness of existing drugs
against H5N1 and to find potentially new leads active on mutated strains. Grids
allow also the integration of distributed data in a completely secured way. The
paper presents how we are currently exploring how to integrate the existing
data sources towards a global surveillance network for molecular epidemiology.
| [
{
"created": "Tue, 23 Dec 2008 07:06:36 GMT",
"version": "v1"
}
] | 2008-12-24 | [
[
"Breton",
"V. Vincent",
"",
"LPC-Clermont"
],
[
"Da Costa",
"A. L.",
"",
"LPC-Clermont"
],
[
"De Vlieger",
"P.",
"",
"LPC-Clermont"
],
[
"Maigne",
"L.",
"",
"LPC-Clermont"
],
[
"Sarramia",
"D.",
"",
"LPC-Clermont"
],
[
"Kim",
"Y. -M.",
""
],
[
"Kim",
"D.",
""
],
[
"Nguyen",
"H. Q.",
""
],
[
"Solomonides",
"T.",
""
],
[
"Wu",
"Y. -T.",
""
],
[
"Hai",
"T. N.",
""
]
] | The recent years have seen the emergence of diseases which have spread very quickly all around the world either through human travels like SARS or animal migration like avian flu. Among the biggest challenges raised by infectious emerging diseases, one is related to the constant mutation of the viruses which turns them into continuously moving targets for drug and vaccine discovery. Another challenge is related to the early detection and surveillance of the diseases as new cases can appear just anywhere due to the globalization of exchanges and the circulation of people and animals around the earth, as recently demonstrated by the avian flu epidemics. For 3 years now, a collaboration of teams in Europe and Asia has been exploring some innovative in silico approaches to better tackle avian flu taking advantage of the very large computing resources available on international grid infrastructures. Grids were used to study the impact of mutations on the effectiveness of existing drugs against H5N1 and to find potentially new leads active on mutated strains. Grids allow also the integration of distributed data in a completely secured way. The paper presents how we are currently exploring how to integrate the existing data sources towards a global surveillance network for molecular epidemiology. |
1306.0701 | Daniel Lawson | Daniel John Lawson | Populations in statistical genetic modelling and inference | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | What is a population? This review considers how a population may be defined
in terms of understanding the structure of the underlying genetics of the
individuals involved. The main approach is to consider statistically
identifiable groups of randomly mating individuals, which is well defined in
theory for any type of (sexual) organism. We discuss generative models using
drift, admixture and spatial structure, and the ancestral recombination graph.
These are contrasted with statistical models for inference, principal component
analysis and other `non-parametric' methods. The relationships between these
approaches are explored with both simulated and real-data examples. The
state-of-the-art practical software tools are discussed and contrasted. We
conclude that populations are a useful theoretical construct that can be well
defined in theory and often approximately exist in practice.
| [
{
"created": "Tue, 4 Jun 2013 08:40:06 GMT",
"version": "v1"
}
] | 2013-06-05 | [
[
"Lawson",
"Daniel John",
""
]
] | What is a population? This review considers how a population may be defined in terms of understanding the structure of the underlying genetics of the individuals involved. The main approach is to consider statistically identifiable groups of randomly mating individuals, which is well defined in theory for any type of (sexual) organism. We discuss generative models using drift, admixture and spatial structure, and the ancestral recombination graph. These are contrasted with statistical models for inference, principal component analysis and other `non-parametric' methods. The relationships between these approaches are explored with both simulated and real-data examples. The state-of-the-art practical software tools are discussed and contrasted. We conclude that populations are a useful theoretical construct that can be well defined in theory and often approximately exist in practice.
1209.5439 | Mike Taylor | Michael P. Taylor and Mathew J. Wedel | Why sauropods had long necks; and why giraffes have short necks | 39 pages, 11 figures, 3 tables | PeerJ 1:e36 (2013) | 10.7717/peerj.36 | null | q-bio.TO q-bio.PE | http://creativecommons.org/licenses/by/3.0/ | The necks of the sauropod dinosaurs reached 15 m in length: six times longer
than that of the world record giraffe and five times longer than those of all
other terrestrial animals. Several anatomical features enabled this extreme
elongation, including: absolutely large body size and quadrupedal stance
providing a stable platform for a long neck; a small, light head that did not
orally process food; cervical vertebrae that were both numerous and
individually elongate; an efficient air-sac-based respiratory system; and
distinctive cervical architecture. Relevant features of sauropod cervical
vertebrae include: pneumatic chambers that enabled the bone to be positioned in
a mechanically efficient way within the envelope; and muscular attachments of
varying importance to the neural spines, epipophyses and cervical ribs. Other
long-necked tetrapods lacked important features of sauropods, preventing the
evolution of longer necks: for example, giraffes have relatively small torsos
and large, heavy heads, share the usual mammalian constraint of only seven
cervical vertebrae, and lack an air-sac system and pneumatic bones. Among
non-sauropods, their saurischian relatives the theropod dinosaurs seem to have
been best placed to evolve long necks, and indeed they probably surpassed those
of giraffes. But 150 million years of evolution did not suffice for them to
exceed a relatively modest 2.5 m.
| [
{
"created": "Mon, 24 Sep 2012 21:51:58 GMT",
"version": "v1"
}
] | 2013-02-13 | [
[
"Taylor",
"Michael P.",
""
],
[
"Wedel",
"Mathew J.",
""
]
] | The necks of the sauropod dinosaurs reached 15 m in length: six times longer than that of the world record giraffe and five times longer than those of all other terrestrial animals. Several anatomical features enabled this extreme elongation, including: absolutely large body size and quadrupedal stance providing a stable platform for a long neck; a small, light head that did not orally process food; cervical vertebrae that were both numerous and individually elongate; an efficient air-sac-based respiratory system; and distinctive cervical architecture. Relevant features of sauropod cervical vertebrae include: pneumatic chambers that enabled the bone to be positioned in a mechanically efficient way within the envelope; and muscular attachments of varying importance to the neural spines, epipophyses and cervical ribs. Other long-necked tetrapods lacked important features of sauropods, preventing the evolution of longer necks: for example, giraffes have relatively small torsos and large, heavy heads, share the usual mammalian constraint of only seven cervical vertebrae, and lack an air-sac system and pneumatic bones. Among non-sauropods, their saurischian relatives the theropod dinosaurs seem to have been best placed to evolve long necks, and indeed they probably surpassed those of giraffes. But 150 million years of evolution did not suffice for them to exceed a relatively modest 2.5 m. |
0906.0381 | Thilo Gross | Thilo Gross and Ulrike Feudel | Local dynamical equivalence of certain food webs | 20 pages, 5 figures | Ocean Dynamics 59 (2), 417-427, 2009 | 10.1007/s10236-008-0165-2 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | An important challenge in theoretical ecology is to find good, coarse-grained
representations of complex food webs. Here we use the approach of generalized
modeling to show that it may be possible to formulate a coarse-graining
algorithm that conserves the local dynamics of the model exactly. We show
examples of food webs with a different number of species that have exactly
identical local bifurcation diagrams. Based on these observations, we formulate
a conjecture governing which populations of complex food webs can be grouped
together into a single variable without changing the local dynamics. As an
illustration we use this conjecture to show that chaotic regions generically
exist in the parameter space of a class of food webs with more than three
trophic levels. While our conjecture is at present only applicable to
relatively special cases we believe that its applicability could be greatly
extended if a more sophisticated mapping of parameters were used in the model
reduction.
| [
{
"created": "Tue, 2 Jun 2009 14:57:57 GMT",
"version": "v1"
}
] | 2009-06-03 | [
[
"Gross",
"Thilo",
""
],
[
"Feudel",
"Ulrike",
""
]
] | An important challenge in theoretical ecology is to find good, coarse-grained representations of complex food webs. Here we use the approach of generalized modeling to show that it may be possible to formulate a coarse-graining algorithm that conserves the local dynamics of the model exactly. We show examples of food webs with a different number of species that have exactly identical local bifurcation diagrams. Based on these observations, we formulate a conjecture governing which populations of complex food webs can be grouped together into a single variable without changing the local dynamics. As an illustration we use this conjecture to show that chaotic regions generically exist in the parameter space of a class of food webs with more than three trophic levels. While our conjecture is at present only applicable to relatively special cases we believe that its applicability could be greatly extended if a more sophisticated mapping of parameters were used in the model reduction. |
0802.1223 | Michael E. Wall | David W. Dreisigmeyer, Jelena Stajic, Ilya Nemenman, William S.
Hlavacek, Michael E. Wall | Determinants of bistability in induction of the Escherichia coli lac
operon | 19 pages, 10 figures, First q-bio Conference on Cellular Information
Processing | null | null | LA-UR-08-0753 | q-bio.CB q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We have developed a mathematical model of regulation of expression of the
Escherichia coli lac operon, and have investigated bistability in its
steady-state induction behavior in the absence of external glucose. Numerical
analysis of equations describing regulation by artificial inducers revealed two
natural bistability parameters that can be used to control the range of inducer
concentrations over which the model exhibits bistability. By tuning these
bistability parameters, we found a family of biophysically reasonable systems
that are consistent with an experimentally determined bistable region for
induction by thio-methylgalactoside (Ozbudak et al. Nature 427:737, 2004). The
model predicts that bistability can be abolished when passive transport or
permease export becomes sufficiently large; the former case is especially
relevant to induction by isopropyl-beta, D-thiogalactopyranoside. To model
regulation by lactose, we developed similar equations in which allolactose, a
metabolic intermediate in lactose metabolism and a natural inducer of lac, is
the inducer. For biophysically reasonable parameter values, these equations
yield no bistability in response to induction by lactose; however, systems with
an unphysically small permease-dependent export effect can exhibit small
amounts of bistability for limited ranges of parameter values. These results
cast doubt on the relevance of bistability in the lac operon within the natural
context of E. coli, and help shed light on the controversy among existing
theoretical studies that address this issue. The results also suggest an
experimental approach to address the relevance of bistability in the lac operon
within the natural context of E. coli.
| [
{
"created": "Fri, 8 Feb 2008 22:21:49 GMT",
"version": "v1"
}
] | 2008-02-12 | [
[
"Dreisigmeyer",
"David W.",
""
],
[
"Stajic",
"Jelena",
""
],
[
"Nemenman",
"Ilya",
""
],
[
"Hlavacek",
"William S.",
""
],
[
"Wall",
"Michael E.",
""
]
] | We have developed a mathematical model of regulation of expression of the Escherichia coli lac operon, and have investigated bistability in its steady-state induction behavior in the absence of external glucose. Numerical analysis of equations describing regulation by artificial inducers revealed two natural bistability parameters that can be used to control the range of inducer concentrations over which the model exhibits bistability. By tuning these bistability parameters, we found a family of biophysically reasonable systems that are consistent with an experimentally determined bistable region for induction by thio-methylgalactoside (Ozbudak et al. Nature 427:737, 2004). The model predicts that bistability can be abolished when passive transport or permease export becomes sufficiently large; the former case is especially relevant to induction by isopropyl-beta, D-thiogalactopyranoside. To model regulation by lactose, we developed similar equations in which allolactose, a metabolic intermediate in lactose metabolism and a natural inducer of lac, is the inducer. For biophysically reasonable parameter values, these equations yield no bistability in response to induction by lactose; however, systems with an unphysically small permease-dependent export effect can exhibit small amounts of bistability for limited ranges of parameter values. These results cast doubt on the relevance of bistability in the lac operon within the natural context of E. coli, and help shed light on the controversy among existing theoretical studies that address this issue. The results also suggest an experimental approach to address the relevance of bistability in the lac operon within the natural context of E. coli. |
1911.07233 | Sara Clifton | Sara M. Clifton, Rachel J. Whitaker, Zoi Rapti | Temperate and chronic virus competition leads to low lysogen frequency | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The canonical bacteriophage is obligately lytic: the virus infects a
bacterium and hijacks cell functions to produce large numbers of new viruses
which burst from the cell. These viruses are well-studied, but there exist a
wide range of coexisting virus lifestyles that are less understood. Temperate
viruses exhibit both a lytic cycle and a latent (lysogenic) cycle, in which
viral genomes are integrated into the bacterial host. Meanwhile, chronic
(persistent) viruses use cell functions to produce more viruses without killing
the cell; chronic viruses may also exhibit a latent stage in addition to the
productive stage. Here, we study the ecology of these competing viral
strategies. We demonstrate the conditions under which each strategy is
dominant, which aids in control of human bacterial infections using viruses. We
find that low lysogen frequencies provide competitive advantages for both virus
types; however, chronic viruses maximize steady state density by eliminating
lysogeny entirely, while temperate viruses exhibit a non-zero `sweet spot'
lysogen frequency. Viral steady state density maximization leads to coexistence
of temperate and chronic viruses, explaining the presence of multiple viral
strategies in natural environments.
| [
{
"created": "Sun, 17 Nov 2019 13:32:23 GMT",
"version": "v1"
}
] | 2019-11-19 | [
[
"Clifton",
"Sara M.",
""
],
[
"Whitaker",
"Rachel J.",
""
],
[
"Rapti",
"Zoi",
""
]
] | The canonical bacteriophage is obligately lytic: the virus infects a bacterium and hijacks cell functions to produce large numbers of new viruses which burst from the cell. These viruses are well-studied, but there exist a wide range of coexisting virus lifestyles that are less understood. Temperate viruses exhibit both a lytic cycle and a latent (lysogenic) cycle, in which viral genomes are integrated into the bacterial host. Meanwhile, chronic (persistent) viruses use cell functions to produce more viruses without killing the cell; chronic viruses may also exhibit a latent stage in addition to the productive stage. Here, we study the ecology of these competing viral strategies. We demonstrate the conditions under which each strategy is dominant, which aids in control of human bacterial infections using viruses. We find that low lysogen frequencies provide competitive advantages for both virus types; however, chronic viruses maximize steady state density by eliminating lysogeny entirely, while temperate viruses exhibit a non-zero `sweet spot' lysogen frequency. Viral steady state density maximization leads to coexistence of temperate and chronic viruses, explaining the presence of multiple viral strategies in natural environments. |
1604.02270 | Mikhail Kolmogorov | Mikhail Kolmogorov, Eamonn Kennedy, Zhuxin Dong, Gregory Timp and
Pavel Pevzner | Single-Molecule Protein Identification by Sub-Nanopore Sensors | null | null | 10.1371/journal.pcbi.1005356 | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in top-down mass spectrometry enabled identification of
intact proteins, but this technology still faces challenges. For example,
top-down mass spectrometry suffers from a lack of sensitivity since the ion
counts for a single fragmentation event are often low. In contrast, nanopore
technology is exquisitely sensitive to single intact molecules, but it has only
been successfully applied to DNA sequencing, so far. Here, we explore the
potential of sub-nanopores for single-molecule protein identification (SMPI)
and describe an algorithm for identification of the electrical current blockade
signal (nanospectrum) resulting from the translocation of a denaturated,
linearly charged protein through a sub-nanopore. The analysis of identification
p-values suggests that the current technology is already sufficient for
matching nanospectra against small protein databases, e.g., protein
identification in bacterial proteomes.
| [
{
"created": "Fri, 8 Apr 2016 08:16:41 GMT",
"version": "v1"
},
{
"created": "Mon, 9 Jan 2017 23:24:59 GMT",
"version": "v2"
}
] | 2017-07-05 | [
[
"Kolmogorov",
"Mikhail",
""
],
[
"Kennedy",
"Eamonn",
""
],
[
"Dong",
"Zhuxin",
""
],
[
"Timp",
"Gregory",
""
],
[
"Pevzner",
"Pavel",
""
]
] | Recent advances in top-down mass spectrometry enabled identification of intact proteins, but this technology still faces challenges. For example, top-down mass spectrometry suffers from a lack of sensitivity since the ion counts for a single fragmentation event are often low. In contrast, nanopore technology is exquisitely sensitive to single intact molecules, but it has only been successfully applied to DNA sequencing, so far. Here, we explore the potential of sub-nanopores for single-molecule protein identification (SMPI) and describe an algorithm for identification of the electrical current blockade signal (nanospectrum) resulting from the translocation of a denaturated, linearly charged protein through a sub-nanopore. The analysis of identification p-values suggests that the current technology is already sufficient for matching nanospectra against small protein databases, e.g., protein identification in bacterial proteomes. |
1503.06159 | Roman Bauer | Roman Bauer, Marcus Kaiser and Elizabeth Stoll | A Computational Model Incorporating Neural Stem Cell Dynamics Reproduces
Glioma Incidence across the Lifespan in the Human Population | null | Bauer, Roman, Marcus Kaiser, and Elizabeth Stoll. "A Computational
Model Incorporating Neural Stem Cell Dynamics Reproduces Glioma Incidence
across the Lifespan in the Human Population." PloS one 9.11 (2014): e111219 | 10.1371/journal.pone.0111219 | null | q-bio.CB physics.bio-ph q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Glioma is the most common form of primary brain tumor. Demographically, the
risk of occurrence increases until old age. Here we present a novel
computational model to reproduce the probability of glioma incidence across the
lifespan. Previous mathematical models explaining glioma incidence are framed
in a rather abstract way, and do not directly relate to empirical findings. To
decrease this gap between theory and experimental observations, we incorporate
recent data on cellular and molecular factors underlying gliomagenesis. Since
evidence implicates the adult neural stem cell as the likely cell-of-origin of
glioma, we have incorporated empirically-determined estimates of neural stem
cell number, cell division rate, mutation rate and oncogenic potential into our
model. We demonstrate that our model yields results which match actual
demographic data in the human population. In particular, this model accounts
for the observed peak incidence of glioma at approximately 80 years of age,
without the need to assert differential susceptibility throughout the
population. Overall, our model supports the hypothesis that glioma is caused by
randomly-occurring oncogenic mutations within the neural stem cell population.
Based on this model, we assess the influence of the (experimentally indicated)
decrease in the number of neural stem cells and increase of cell division rate
during aging. Our model provides multiple testable predictions, and suggests
that different temporal sequences of oncogenic mutations can lead to
tumorigenesis. Finally, we conclude that four or five oncogenic mutations are
sufficient for the formation of glioma.
| [
{
"created": "Fri, 20 Mar 2015 17:04:07 GMT",
"version": "v1"
}
] | 2015-03-23 | [
[
"Bauer",
"Roman",
""
],
[
"Kaiser",
"Marcus",
""
],
[
"Stoll",
"Elizabeth",
""
]
] | Glioma is the most common form of primary brain tumor. Demographically, the risk of occurrence increases until old age. Here we present a novel computational model to reproduce the probability of glioma incidence across the lifespan. Previous mathematical models explaining glioma incidence are framed in a rather abstract way, and do not directly relate to empirical findings. To decrease this gap between theory and experimental observations, we incorporate recent data on cellular and molecular factors underlying gliomagenesis. Since evidence implicates the adult neural stem cell as the likely cell-of-origin of glioma, we have incorporated empirically-determined estimates of neural stem cell number, cell division rate, mutation rate and oncogenic potential into our model. We demonstrate that our model yields results which match actual demographic data in the human population. In particular, this model accounts for the observed peak incidence of glioma at approximately 80 years of age, without the need to assert differential susceptibility throughout the population. Overall, our model supports the hypothesis that glioma is caused by randomly-occurring oncogenic mutations within the neural stem cell population. Based on this model, we assess the influence of the (experimentally indicated) decrease in the number of neural stem cells and increase of cell division rate during aging. Our model provides multiple testable predictions, and suggests that different temporal sequences of oncogenic mutations can lead to tumorigenesis. Finally, we conclude that four or five oncogenic mutations are sufficient for the formation of glioma. |
1304.5487 | Mark Flegg | Mark B. Flegg, S. Jonathan Chapman, Likun Zheng, Radek Erban | Analysis of the two-regime method on square meshes | null | null | null | null | q-bio.QM q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The two-regime method (TRM) has been recently developed for optimizing
stochastic reaction-diffusion simulations. It is a multiscale (hybrid)
algorithm which uses stochastic reaction-diffusion models with different levels
of detail in different parts of the computational domain. The coupling
condition on the interface between different modelling regimes of the TRM was
previously derived for one-dimensional models. In this paper, the TRM is
generalized to higher dimensional reaction-diffusion systems. Coupling Brownian
dynamics models with compartment-based models on regular (square)
two-dimensional lattices is studied in detail. In this case, the interface
between different modelling regimes contain either flat parts or right-angled
corners. Both cases are studied in the paper. For flat interfaces, it is shown
that the one-dimensional theory can be used along the line perpendicular to the
TRM interface. In the direction tangential to the interface, two choices of the
TRM parameters are presented. Their applicability depends on the compartment
size and the time step used in the molecular-based regime. The two-dimensional
generalization of the TRM is also discussed in the case of corners.
| [
{
"created": "Wed, 17 Apr 2013 22:28:32 GMT",
"version": "v1"
}
] | 2013-04-22 | [
[
"Flegg",
"Mark B.",
""
],
[
"Chapman",
"S. Jonathan",
""
],
[
"Zheng",
"Likun",
""
],
[
"Erban",
"Radek",
""
]
] | The two-regime method (TRM) has been recently developed for optimizing stochastic reaction-diffusion simulations. It is a multiscale (hybrid) algorithm which uses stochastic reaction-diffusion models with different levels of detail in different parts of the computational domain. The coupling condition on the interface between different modelling regimes of the TRM was previously derived for one-dimensional models. In this paper, the TRM is generalized to higher dimensional reaction-diffusion systems. Coupling Brownian dynamics models with compartment-based models on regular (square) two-dimensional lattices is studied in detail. In this case, the interface between different modelling regimes contain either flat parts or right-angled corners. Both cases are studied in the paper. For flat interfaces, it is shown that the one-dimensional theory can be used along the line perpendicular to the TRM interface. In the direction tangential to the interface, two choices of the TRM parameters are presented. Their applicability depends on the compartment size and the time step used in the molecular-based regime. The two-dimensional generalization of the TRM is also discussed in the case of corners. |
2201.11600 | Yujiang Wang | Gabrielle M. Schroeder, Philippa J. Karoly, Matias Maturana, Mariella
Panagiotopoulou, Peter N. Taylor, Mark J. Cook, Yujiang Wang | Chronic iEEG recordings and interictal spike rate reveal multiscale
temporal modulations in seizure states | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background and Objectives: Many biological processes are modulated by rhythms
on circadian and multidien timescales. In focal epilepsy, various seizure
features, such as spread and duration, can change from one seizure to the next
within the same patient. However, the specific timescales of this variability,
as well as the specific seizure characteristics that change over time, are
unclear.
Methods: Here, in a cross-sectional observational study, we analysed
within-patient seizure variability in 10 patients with chronic intracranial EEG
recordings (185-767 days of recording time, 57-452 analysed seizures/patient).
We characterised the seizure evolutions as sequences of a finite number of
patient-specific functional seizure network states (SNSs). We then compared SNS
occurrence and duration to (1) time since implantation and (2) patient-specific
circadian and multidien cycles in interictal spike rate.
Results: In most patients, the occurrence or duration of at least one SNS was
associated with the time since implantation. Some patients had one or more SNSs
that were associated with phases of circadian and/or multidien spike rate
cycles. A given SNS's occurrence and duration were usually not associated with
the same timescale.
Discussion: Our results suggest that different time-varying factors modulate
within-patient seizure evolutions over multiple timescales, with separate
processes modulating a SNS's occurrence and duration. These findings imply that
the development of time-adaptive treatments in epilepsy must account for
several separate properties of epileptic seizures, and similar principles
likely apply to other neurological conditions.
| [
{
"created": "Thu, 27 Jan 2022 15:59:15 GMT",
"version": "v1"
},
{
"created": "Tue, 13 Jun 2023 16:09:09 GMT",
"version": "v2"
}
] | 2023-06-14 | [
[
"Schroeder",
"Gabrielle M.",
""
],
[
"Karoly",
"Philippa J.",
""
],
[
"Maturana",
"Matias",
""
],
[
"Panagiotopoulou",
"Mariella",
""
],
[
"Taylor",
"Peter N.",
""
],
[
"Cook",
"Mark J.",
""
],
[
"Wang",
"Yujiang",
""
]
] | Background and Objectives: Many biological processes are modulated by rhythms on circadian and multidien timescales. In focal epilepsy, various seizure features, such as spread and duration, can change from one seizure to the next within the same patient. However, the specific timescales of this variability, as well as the specific seizure characteristics that change over time, are unclear. Methods: Here, in a cross-sectional observational study, we analysed within-patient seizure variability in 10 patients with chronic intracranial EEG recordings (185-767 days of recording time, 57-452 analysed seizures/patient). We characterised the seizure evolutions as sequences of a finite number of patient-specific functional seizure network states (SNSs). We then compared SNS occurrence and duration to (1) time since implantation and (2) patient-specific circadian and multidien cycles in interictal spike rate. Results: In most patients, the occurrence or duration of at least one SNS was associated with the time since implantation. Some patients had one or more SNSs that were associated with phases of circadian and/or multidien spike rate cycles. A given SNS's occurrence and duration were usually not associated with the same timescale. Discussion: Our results suggest that different time-varying factors modulate within-patient seizure evolutions over multiple timescales, with separate processes modulating a SNS's occurrence and duration. These findings imply that the development of time-adaptive treatments in epilepsy must account for several separate properties of epileptic seizures, and similar principles likely apply to other neurological conditions. |
1312.5566 | Clemence Martin | Denis Michel (Irset) | Kinetic approaches to lactose operon induction and bimodality | null | Journal of Theoretical Biology 2013;325:62-75 | 10.1016/j.jtbi.2013.02.005 | null | q-bio.MN physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The quasi-equilibrium approximation is acceptable when molecular interactions
are fast enough compared to circuit dynamics, but is no longer allowed when
cellular activities are governed by rare events. A typical example is the
lactose operon (lac), one of the most famous paradigms of transcription
regulation, for which several theories still coexist to describe its behaviors.
The lac system is generally analyzed by using equilibrium constants,
contradicting single-event hypotheses long suggested by Novick and Weiner
(1957). Enzyme induction as an all-or-none phenomenon. Proc. Natl. Acad. Sci.
USA 43, 553-566) and recently refined in the study of (Choi et al., 2008. A
stochastic single-molecule event triggers phenotype switching of a bacterial
cell. Science 322, 442-446). In the present report, a lac repressor
(LacI)-mediated DNA immunoprecipitation experiment reveals that the natural
LacI-lac DNA complex built in vivo is extremely tight and long-lived compared
to the time scale of lac expression dynamics, which could functionally
disconnect the abortive expression bursts and forbid using the standard modes
of lac bistability. As alternatives, purely kinetic mechanisms are examined for
their capacity to restrict induction through: (i) widely scattered derepression
related to the arrival time variance of a predominantly backward asymmetric
random walk and (ii) an induction threshold arising in a single window of
derepression without recourse to nonlinear multimeric binding and Hill
functions. Considering the complete disengagement of the lac repressor from the
lac promoter as the probabilistic consequence of a transient stepwise
mechanism, is sufficient to explain the sigmoidal lac responses as functions of
time and of inducer concentration. This sigmoidal shape can be misleadingly
interpreted as a phenomenon of equilibrium cooperativity classically used to
explain bistability, but which has been reported to be weak in this system.
| [
{
"created": "Thu, 19 Dec 2013 14:39:32 GMT",
"version": "v1"
}
] | 2013-12-20 | [
[
"Michel",
"Denis",
"",
"Irset"
]
] | The quasi-equilibrium approximation is acceptable when molecular interactions are fast enough compared to circuit dynamics, but is no longer allowed when cellular activities are governed by rare events. A typical example is the lactose operon (lac), one of the most famous paradigms of transcription regulation, for which several theories still coexist to describe its behaviors. The lac system is generally analyzed by using equilibrium constants, contradicting single-event hypotheses long suggested by Novick and Weiner (1957). Enzyme induction as an all-or-none phenomenon. Proc. Natl. Acad. Sci. USA 43, 553-566) and recently refined in the study of (Choi et al., 2008. A stochastic single-molecule event triggers phenotype switching of a bacterial cell. Science 322, 442-446). In the present report, a lac repressor (LacI)-mediated DNA immunoprecipitation experiment reveals that the natural LacI-lac DNA complex built in vivo is extremely tight and long-lived compared to the time scale of lac expression dynamics, which could functionally disconnect the abortive expression bursts and forbid using the standard modes of lac bistability. As alternatives, purely kinetic mechanisms are examined for their capacity to restrict induction through: (i) widely scattered derepression related to the arrival time variance of a predominantly backward asymmetric random walk and (ii) an induction threshold arising in a single window of derepression without recourse to nonlinear multimeric binding and Hill functions. Considering the complete disengagement of the lac repressor from the lac promoter as the probabilistic consequence of a transient stepwise mechanism, is sufficient to explain the sigmoidal lac responses as functions of time and of inducer concentration. This sigmoidal shape can be misleadingly interpreted as a phenomenon of equilibrium cooperativity classically used to explain bistability, but which has been reported to be weak in this system. |
2312.12094 | Sheng Xu | Linglin Jing, Sheng Xu, Yifan Wang, Yuzhe Zhou, Tao Shen, Zhigang Ji,
Hui Fang, Zhen Li, Siqi Sun | CrossBind: Collaborative Cross-Modal Identification of Protein
Nucleic-Acid-Binding Residues | Accepted to AAAI-24 | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate identification of protein nucleic-acid-binding residues poses a
significant challenge with important implications for various biological
processes and drug design. Many typical computational methods for protein
analysis rely on a single model that could ignore either the semantic context
of the protein or the global 3D geometric information. Consequently, these
approaches may result in incomplete or inaccurate protein analysis. To address
the above issue, in this paper, we present CrossBind, a novel collaborative
cross-modal approach for identifying binding residues by exploiting both
protein geometric structure and its sequence prior knowledge extracted from a
large-scale protein language model. Specifically, our multi-modal approach
leverages a contrastive learning technique and atom-wise attention to capture
the positional relationships between atoms and residues, thereby incorporating
fine-grained local geometric knowledge, for better binding residue prediction.
Extensive experimental results demonstrate that our approach outperforms the
next best state-of-the-art methods, GraphSite and GraphBind, on DNA and RNA
datasets by 10.8/17.3% in terms of the harmonic mean of precision and recall
(F1-Score) and 11.9/24.8% in Matthews correlation coefficient (MCC),
respectively. We release the code at https://github.com/BEAM-Labs/CrossBind.
| [
{
"created": "Tue, 19 Dec 2023 12:17:13 GMT",
"version": "v1"
},
{
"created": "Wed, 20 Dec 2023 07:21:54 GMT",
"version": "v2"
}
] | 2023-12-21 | [
[
"Jing",
"Linglin",
""
],
[
"Xu",
"Sheng",
""
],
[
"Wang",
"Yifan",
""
],
[
"Zhou",
"Yuzhe",
""
],
[
"Shen",
"Tao",
""
],
[
"Ji",
"Zhigang",
""
],
[
"Fang",
"Hui",
""
],
[
"Li",
"Zhen",
""
],
[
"Sun",
"Siqi",
""
]
] | Accurate identification of protein nucleic-acid-binding residues poses a significant challenge with important implications for various biological processes and drug design. Many typical computational methods for protein analysis rely on a single model that could ignore either the semantic context of the protein or the global 3D geometric information. Consequently, these approaches may result in incomplete or inaccurate protein analysis. To address the above issue, in this paper, we present CrossBind, a novel collaborative cross-modal approach for identifying binding residues by exploiting both protein geometric structure and its sequence prior knowledge extracted from a large-scale protein language model. Specifically, our multi-modal approach leverages a contrastive learning technique and atom-wise attention to capture the positional relationships between atoms and residues, thereby incorporating fine-grained local geometric knowledge, for better binding residue prediction. Extensive experimental results demonstrate that our approach outperforms the next best state-of-the-art methods, GraphSite and GraphBind, on DNA and RNA datasets by 10.8/17.3% in terms of the harmonic mean of precision and recall (F1-Score) and 11.9/24.8% in Matthews correlation coefficient (MCC), respectively. We release the code at https://github.com/BEAM-Labs/CrossBind. |
0706.3101 | Dietrich Stauffer | D. Stauffer | The Penna Model of Biological Aging | 16-page invited review submitted to Bioinformatics and Biology
Insights | null | null | null | q-bio.PE | null | This review deals with computer simulation of biological ageing, particularly
with the Penna model of 1995.
| [
{
"created": "Thu, 21 Jun 2007 08:22:44 GMT",
"version": "v1"
}
] | 2007-06-22 | [
[
"Stauffer",
"D.",
""
]
] | This review deals with computer simulation of biological ageing, particularly with the Penna model of 1995. |
1807.06901 | Pablo Rodr\'iguez-S\'anchez | Pablo Rodr\'iguez-S\'anchez, Egbert H. van Nes, Marten Scheffer | Neutral competition boosts chaos in food webs | Added biodiversity measures, lacking in our previous analysis | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Similarity of competitors has been proposed to facilitate coexistence of
species because it slows down competitive exclusion, thus making it easier for
equalizing mechanisms to maintain diverse communities. On the other hand, chaos
can promote coexistence of species. Here we link these two previously unrelated
findings, by analyzing the dynamics of food web models. We show that
near-neutrality of competition of prey, in the presence of predators, increases
the chance of developing chaotic dynamics. Moreover we confirm that this
results in a higher biodiversity. Our results suggest that near-neutrality may
promote biodiversity in two ways: through reducing the rates of competitive
displacement and through promoting non-equilibrium dynamics.
| [
{
"created": "Wed, 18 Jul 2018 12:59:01 GMT",
"version": "v1"
},
{
"created": "Tue, 26 Feb 2019 13:17:36 GMT",
"version": "v2"
}
] | 2019-02-27 | [
[
"Rodríguez-Sánchez",
"Pablo",
""
],
[
"van Nes",
"Egbert H.",
""
],
[
"Scheffer",
"Marten",
""
]
] | Similarity of competitors has been proposed to facilitate coexistence of species because it slows down competitive exclusion, thus making it easier for equalizing mechanisms to maintain diverse communities. On the other hand, chaos can promote coexistence of species. Here we link these two previously unrelated findings, by analyzing the dynamics of food web models. We show that near-neutrality of competition of prey, in the presence of predators, increases the chance of developing chaotic dynamics. Moreover we confirm that this results in a higher biodiversity. Our results suggest that near-neutrality may promote biodiversity in two ways: through reducing the rates of competitive displacement and through promoting non-equilibrium dynamics. |
2003.02251 | Kathryn Link | Kathryn G. Link, Matthew G. Sorrells, Nicholas A. Danes, Keith B.
Neeves, Karin Leiderman, and Aaron L. Fogelson | A Mathematical Model of Platelet Aggregation in an Extravascular Injury
Under Flow | 35 pages, 14 figures | null | null | null | q-bio.CB math.DS q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present the first mathematical model of flow-mediated primary hemostasis
in an extravascular injury, which can track the process from initial deposition
to occlusion. The model consists of a system of ordinary differential equations
(ODE) that describe platelet aggregation (adhesion and cohesion),
soluble-agonist-dependent platelet activation, and the flow of blood through
the injury. The formation of platelet aggregates increases resistance to flow
through the injury, which is modeled using the Stokes-Brinkman equations. Data
from analogous experimental (microfluidic flow) and partial differential
equation models informed parameter values used in the ODE model description of
platelet adhesion, cohesion, and activation. This model predicts injury
occlusion under a range of flow and platelet activation conditions. Simulations
testing the effects of shear and activation rates resulted in delayed occlusion
and aggregate heterogeneity. These results validate our hypothesis that
flow-mediated dilution of activating chemical ADP hinders aggregate
development. This novel modeling framework can be extended to include more
mechanisms of platelet activation as well as the addition of the biochemical
reactions of coagulation, resulting in a computationally efficient high
throughput screening tool.
| [
{
"created": "Wed, 4 Mar 2020 18:47:01 GMT",
"version": "v1"
}
] | 2020-03-05 | [
[
"Link",
"Kathryn G.",
""
],
[
"Sorrells",
"Matthew G.",
""
],
[
"Danes",
"Nicholas A.",
""
],
[
"Neeves",
"Keith B.",
""
],
[
"Leiderman",
"Karin",
""
],
[
"Fogelson",
"Aaron L.",
""
]
] | We present the first mathematical model of flow-mediated primary hemostasis in an extravascular injury, which can track the process from initial deposition to occlusion. The model consists of a system of ordinary differential equations (ODE) that describe platelet aggregation (adhesion and cohesion), soluble-agonist-dependent platelet activation, and the flow of blood through the injury. The formation of platelet aggregates increases resistance to flow through the injury, which is modeled using the Stokes-Brinkman equations. Data from analogous experimental (microfluidic flow) and partial differential equation models informed parameter values used in the ODE model description of platelet adhesion, cohesion, and activation. This model predicts injury occlusion under a range of flow and platelet activation conditions. Simulations testing the effects of shear and activation rates resulted in delayed occlusion and aggregate heterogeneity. These results validate our hypothesis that flow-mediated dilution of activating chemical ADP hinders aggregate development. This novel modeling framework can be extended to include more mechanisms of platelet activation as well as the addition of the biochemical reactions of coagulation, resulting in a computationally efficient high throughput screening tool. |
1907.10808 | Ricardo Ugarte | Ricardo Ugarte | Approximate calculation of the binding energy between
17$\beta$-estradiol and human estrogen receptor alpha | 11 pages, 5 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Estrogen receptors (ERs) are a group of proteins activated by
17$\beta$-estradiol. The endocrine-disrupting chemicals (EDCs) mimic estrogen
action by binding directly to the ligand binding domain of ER. From this
perspective, ERs represent a good model for identifying and assessing the health
risk of potential EDCs. This ability is best reflected by the ligand-ER binding
energy. Multilayer fragment molecular orbital (MFMO) calculations were
performed which allowed us to obtain the binding energy using a calculation
scheme that considers the molecular interactions that occur on the following
model systems: the bound and free receptor, 17$\beta$-estradiol and a water
cluster. The bound and free receptor and 17$\beta$-estradiol were surrounded by
a water shell containing the same number of molecules as the water cluster. The
structures required for MFMO calculations were obtained from molecular dynamics
simulations and cluster analysis. Attractive dispersion interactions were
observed between 17$\beta$-estradiol and the binding site hydrophobic residues.
In addition, strong electrostatic interactions were found between
17$\beta$-estradiol and the following charged/polarized residues: Glu 353, His
524 and Arg 394. The FMO2-RHF/STO-3G:MP2/6-31G(d) weighted binding energy was
-67.2 kcal/mol. We hope that the model developed in this study can be useful
for identifying and assessing the health risk of potential EDCs.
| [
{
"created": "Thu, 25 Jul 2019 02:52:34 GMT",
"version": "v1"
}
] | 2019-07-26 | [
[
"Ugarte",
"Ricardo",
""
]
] | Estrogen receptors (ERs) are a group of proteins activated by 17$\beta$-estradiol. The endocrine-disrupting chemicals (EDCs) mimic estrogen action by binding directly to the ligand binding domain of ER. From this perspective, ERs represent a good model for identifying and assessing the health risk of potential EDCs. This ability is best reflected by the ligand-ER binding energy. Multilayer fragment molecular orbital (MFMO) calculations were performed which allowed us to obtain the binding energy using a calculation scheme that considers the molecular interactions that occur on the following model systems: the bound and free receptor, 17$\beta$-estradiol and a water cluster. The bound and free receptor and 17$\beta$-estradiol were surrounded by a water shell containing the same number of molecules as the water cluster. The structures required for MFMO calculations were obtained from molecular dynamics simulations and cluster analysis. Attractive dispersion interactions were observed between 17$\beta$-estradiol and the binding site hydrophobic residues. In addition, strong electrostatic interactions were found between 17$\beta$-estradiol and the following charged/polarized residues: Glu 353, His 524 and Arg 394. The FMO2-RHF/STO-3G:MP2/6-31G(d) weighted binding energy was -67.2 kcal/mol. We hope that the model developed in this study can be useful for identifying and assessing the health risk of potential EDCs. |
2107.12221 | Umberto Lucia Prof. | Umberto Lucia, Giulia Grisolia | Thermal resonance in cancer | null | null | null | null | q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | At the end of the second decade of the 20th century, Warburg showed that
cancer cells exhibit a fermentative respiration process, related to a metabolic
injury. Here, we develop an analysis of the cell process based on its heat
outflow, in order to control cancer progression. Engineering thermodynamics
represents a powerful approach for developing this analysis, and we apply its
methods to biosystems, in relation to heat outflow and its control. Cells
regulate their metabolisms by energy and ion flows, and the heat flux is
controlled by the convective interaction with their environment. We introduce
the characteristic frequency of a biosystem, its biothermodynamic
characteristic frequency, which can be evaluated by a classical heat
transfer approach. Resonance forces the natural behaviour of systems, and here
we introduce it in order to control the fluxes through the cancer cell membrane
and the cellular metabolic processes, and, consequently, the energy
available to the cancer for its growth. Experiments indicate
that the cancer growth rate can be reduced.
| [
{
"created": "Mon, 26 Jul 2021 13:59:40 GMT",
"version": "v1"
}
] | 2021-07-27 | [
[
"Lucia",
"Umberto",
""
],
[
"Grisolia",
"Giulia",
""
]
] | At the end of the second decade of the 20th century, Warburg showed that cancer cells exhibit a fermentative respiration process, related to a metabolic injury. Here, we develop an analysis of the cell process based on its heat outflow, in order to control cancer progression. Engineering thermodynamics represents a powerful approach for developing this analysis, and we apply its methods to biosystems, in relation to heat outflow and its control. Cells regulate their metabolisms by energy and ion flows, and the heat flux is controlled by the convective interaction with their environment. We introduce the characteristic frequency of a biosystem, its biothermodynamic characteristic frequency, which can be evaluated by a classical heat transfer approach. Resonance forces the natural behaviour of systems, and here we introduce it in order to control the fluxes through the cancer cell membrane and the cellular metabolic processes, and, consequently, the energy available to the cancer for its growth. Experiments indicate that the cancer growth rate can be reduced. |
2305.14356 | Rohan Agarwal | Rohan Agarwal | Creativity as Variations on a Theme: Formalizations, Evidence, and
Engineered Applications | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | There are many philosophies and theories on what creativity is and how it
works, but one popular idea is that of variations on a theme and intersection
of concepts. This literature review explores philosophical proposals of how
creativity emerges from variations on a theme, and how formalizations of these
proposals in human subject studies and computational methods result in
creativity. Specifically, the philosophical idea of intangible clouds of
concepts is analyzed with empirical studies of concept representation and
mental model formation, and mathematical formalizations of such ideas.
Empirical findings on emergent neural activity from neural network combinations
are also examined for evidence of novel, emergent ideas from the collision of
existing ones. Finally, work on human-AI co-creativity is used as a lens for
concept collision and the effectiveness of this model of creativity. This paper
also proposes directions for further research in studying creativity as
variations on a theme.
| [
{
"created": "Sun, 7 May 2023 22:42:02 GMT",
"version": "v1"
}
] | 2023-05-25 | [
[
"Agarwal",
"Rohan",
""
]
] | There are many philosophies and theories on what creativity is and how it works, but one popular idea is that of variations on a theme and intersection of concepts. This literature review explores philosophical proposals of how creativity emerges from variations on a theme, and how formalizations of these proposals in human subject studies and computational methods result in creativity. Specifically, the philosophical idea of intangible clouds of concepts is analyzed with empirical studies of concept representation and mental model formation, and mathematical formalizations of such ideas. Empirical findings on emergent neural activity from neural network combinations are also examined for evidence of novel, emergent ideas from the collision of existing ones. Finally, work on human-AI co-creativity is used as a lens for concept collision and the effectiveness of this model of creativity. This paper also proposes directions for further research in studying creativity as variations on a theme. |