id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1510.06017 | Yun S. Song | John A. Kamm, Jeffrey P. Spence, Jeffrey Chan, Yun S. Song | Two-Locus Likelihoods under Variable Population Size and Fine-Scale
Recombination Rate Estimation | 32 pages, 13 figures | null | null | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Two-locus sampling probabilities have played a central role in devising an
efficient composite likelihood method for estimating fine-scale recombination
rates. Due to mathematical and computational challenges, these sampling
probabilities are typically computed under the unrealistic assumption of a
constant population size, and simulation studies have shown that resulting
recombination rate estimates can be severely biased in certain cases of
historical population size changes. To alleviate this problem, we develop here
new methods to compute the sampling probability for variable population size
functions that are piecewise constant. Our main theoretical result, implemented
in a new software package called LDpop, is a novel formula for the sampling
probability that can be evaluated by numerically exponentiating a large but
sparse matrix. This formula can handle moderate sample sizes ($n \leq 50$) and
demographic size histories with a large number of epochs ($\mathcal{D} \geq
64$). In addition, LDpop implements an approximate formula for the sampling
probability that is reasonably accurate and scales to hundreds in sample size
($n \geq 256$). Finally, LDpop includes an importance sampler for the posterior
distribution of two-locus genealogies, based on a new result for the optimal
proposal distribution in the variable-size setting. Using our methods, we study
how a sharp population bottleneck followed by rapid growth affects the
correlation between partially linked sites. Then, through an extensive
simulation study, we show that accounting for population size changes under
such a demographic model leads to substantial improvements in fine-scale
recombination rate estimation. LDpop is freely available for download at
https://github.com/popgenmethods/ldpop
| [
{
"created": "Tue, 20 Oct 2015 19:43:49 GMT",
"version": "v1"
},
{
"created": "Sun, 10 Apr 2016 20:23:00 GMT",
"version": "v2"
}
] | 2016-04-12 | [
[
"Kamm",
"John A.",
""
],
[
"Spence",
"Jeffrey P.",
""
],
[
"Chan",
"Jeffrey",
""
],
[
"Song",
"Yun S.",
""
]
] | Two-locus sampling probabilities have played a central role in devising an efficient composite likelihood method for estimating fine-scale recombination rates. Due to mathematical and computational challenges, these sampling probabilities are typically computed under the unrealistic assumption of a constant population size, and simulation studies have shown that resulting recombination rate estimates can be severely biased in certain cases of historical population size changes. To alleviate this problem, we develop here new methods to compute the sampling probability for variable population size functions that are piecewise constant. Our main theoretical result, implemented in a new software package called LDpop, is a novel formula for the sampling probability that can be evaluated by numerically exponentiating a large but sparse matrix. This formula can handle moderate sample sizes ($n \leq 50$) and demographic size histories with a large number of epochs ($\mathcal{D} \geq 64$). In addition, LDpop implements an approximate formula for the sampling probability that is reasonably accurate and scales to hundreds in sample size ($n \geq 256$). Finally, LDpop includes an importance sampler for the posterior distribution of two-locus genealogies, based on a new result for the optimal proposal distribution in the variable-size setting. Using our methods, we study how a sharp population bottleneck followed by rapid growth affects the correlation between partially linked sites. Then, through an extensive simulation study, we show that accounting for population size changes under such a demographic model leads to substantial improvements in fine-scale recombination rate estimation. LDpop is freely available for download at https://github.com/popgenmethods/ldpop |
1208.0863 | Renato Vicente | Roberto H. Schonmann, Renato Vicente and Nestor Caticha | Altruism can proliferate through group/kin selection despite high random
gene flow | 5 pages, 2 figures. Supplementary material with 50 pages and 26
figures | PLoS ONE 8(8):e72043 (2013) | 10.1371/journal.pone.0072043 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The ways in which natural selection can allow the proliferation of
cooperative behavior have long been seen as a central problem in evolutionary
biology. Most of the literature has focused on interactions between pairs of
individuals and on linear public goods games. This emphasis led to the
conclusion that even modest levels of migration would pose a serious problem to
the spread of altruism in group structured populations. Here we challenge this
conclusion, by analyzing evolution in a framework which allows for complex
group interactions and random migration among groups. We conclude that
contingent forms of strong altruism can spread when rare under realistic group
sizes and levels of migration. Our analysis combines group-centric and
gene-centric perspectives, allows for arbitrary strength of selection, and
leads to extensions of Hamilton's rule for the spread of altruistic alleles,
applicable under broad conditions.
| [
{
"created": "Fri, 3 Aug 2012 22:38:26 GMT",
"version": "v1"
},
{
"created": "Tue, 7 Aug 2012 22:31:05 GMT",
"version": "v2"
}
] | 2013-10-01 | [
[
"Schonmann",
"Roberto H.",
""
],
[
"Vicente",
"Renato",
""
],
[
"Caticha",
"Nestor",
""
]
] | The ways in which natural selection can allow the proliferation of cooperative behavior have long been seen as a central problem in evolutionary biology. Most of the literature has focused on interactions between pairs of individuals and on linear public goods games. This emphasis led to the conclusion that even modest levels of migration would pose a serious problem to the spread of altruism in group structured populations. Here we challenge this conclusion, by analyzing evolution in a framework which allows for complex group interactions and random migration among groups. We conclude that contingent forms of strong altruism can spread when rare under realistic group sizes and levels of migration. Our analysis combines group-centric and gene-centric perspectives, allows for arbitrary strength of selection, and leads to extensions of Hamilton's rule for the spread of altruistic alleles, applicable under broad conditions. |
q-bio/0408008 | Ines Samengo | D. Oliva, I. Samengo, S. Leutgeb, S. Mizumori | A subjective distance between stimuli: quantifying the metric structure
of representations | 25 pages, 7 figures, submitted to Neural Computation | null | null | null | q-bio.NC | null | As subjects perceive the sensory world, different stimuli elicit a number of
neural representations. Here, a subjective distance between stimuli is defined,
measuring the degree of similarity between the underlying representations. As
an example, the subjective distance between different locations in space is
calculated from the activity of rodent hippocampal place cells and lateral
septal cells. Such a distance is compared to the real distance between
locations. As the number of sampled neurons increases, the subjective distance
tends to resemble the metric of real space.
| [
{
"created": "Fri, 13 Aug 2004 14:49:45 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Oliva",
"D.",
""
],
[
"Samengo",
"I.",
""
],
[
"Leutgeb",
"S.",
""
],
[
"Mizumori",
"S.",
""
]
] | As subjects perceive the sensory world, different stimuli elicit a number of neural representations. Here, a subjective distance between stimuli is defined, measuring the degree of similarity between the underlying representations. As an example, the subjective distance between different locations in space is calculated from the activity of rodent hippocampal place cells and lateral septal cells. Such a distance is compared to the real distance between locations. As the number of sampled neurons increases, the subjective distance tends to resemble the metric of real space. |
1310.2068 | Laurence Loewe | Kurt Ehlert and Laurence Loewe | Lazy Updating increases the speed of stochastic simulations | manuscript, 29 pages, including 1 table and 7 figures | null | null | null | q-bio.QM q-bio.MN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Biological reaction networks often contain what might be called 'hub
molecules', which are involved in many reactions. For example, ATP is commonly
consumed and produced. When reaction networks contain molecules like ATP, they
are difficult to efficiently simulate, because every time such a molecule is
consumed or produced, the propensities of numerous reactions need to be
updated. In order to increase the speed of simulations, we developed 'Lazy
Updating', which postpones some propensity updates until some aspect of the
state of the system changes by more than a defined threshold. Lazy Updating
works with several existing stochastic simulation algorithms, including
Gillespie's direct method and the Next Reaction Method. We tested Lazy Updating
on two example models, and for the larger model it increased the speed of
simulations over eight-fold while maintaining a high level of accuracy. These
increases in speed will be larger for models with more widely connected hub
molecules. Thus Lazy Updating can contribute towards making models with a
limited computing time budget more realistic by including previously neglected
hub molecules.
| [
{
"created": "Tue, 8 Oct 2013 10:05:27 GMT",
"version": "v1"
}
] | 2013-10-09 | [
[
"Ehlert",
"Kurt",
""
],
[
"Loewe",
"Laurence",
""
]
] | Biological reaction networks often contain what might be called 'hub molecules', which are involved in many reactions. For example, ATP is commonly consumed and produced. When reaction networks contain molecules like ATP, they are difficult to efficiently simulate, because every time such a molecule is consumed or produced, the propensities of numerous reactions need to be updated. In order to increase the speed of simulations, we developed 'Lazy Updating', which postpones some propensity updates until some aspect of the state of the system changes by more than a defined threshold. Lazy Updating works with several existing stochastic simulation algorithms, including Gillespie's direct method and the Next Reaction Method. We tested Lazy Updating on two example models, and for the larger model it increased the speed of simulations over eight-fold while maintaining a high level of accuracy. These increases in speed will be larger for models with more widely connected hub molecules. Thus Lazy Updating can contribute towards making models with a limited computing time budget more realistic by including previously neglected hub molecules. |
1501.03131 | Marco Kienzle | Marco Kienzle | Hazard function models to estimate mortality rates affecting fish
populations with application to the sea mullet (Mugil cephalus) fishery on
the Queensland coast (Australia) | null | null | 10.1007/s13253-015-0237-y | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fisheries management agencies around the world collect age data for the
purpose of assessing the status of natural resources in their jurisdiction.
Estimates of mortality rates represent key information for assessing the
sustainability of fish stock exploitation. In contrast to medical research or
manufacturing, where survival analysis is routinely applied to estimate failure
rates, survival analysis has seldom been applied in fisheries stock assessment
despite similar purposes between these fields of applied statistics. In this
paper, we developed hazard functions to model the dynamics of an exploited fish
population. These functions were used to estimate all parameters necessary for
stock assessment (including natural and fishing mortality rates as well as gear
selectivity) by maximum likelihood using age data from a sample of the catch.
This novel application of survival analysis to fisheries stock assessment was
tested by Monte Carlo simulations to verify that it provides unbiased estimates of
relevant quantities. The method was applied to data from the Queensland
(Australia) sea mullet (Mugil cephalus) commercial fishery collected between
2007 and 2014. It provided, for the first time, an estimate of natural
mortality affecting this stock: 0.22 $\pm$ 0.08 year$^{-1}$.
| [
{
"created": "Tue, 13 Jan 2015 19:59:53 GMT",
"version": "v1"
},
{
"created": "Mon, 19 Oct 2015 01:41:59 GMT",
"version": "v2"
}
] | 2015-11-04 | [
[
"Kienzle",
"Marco",
""
]
] | Fisheries management agencies around the world collect age data for the purpose of assessing the status of natural resources in their jurisdiction. Estimates of mortality rates represent key information for assessing the sustainability of fish stock exploitation. In contrast to medical research or manufacturing, where survival analysis is routinely applied to estimate failure rates, survival analysis has seldom been applied in fisheries stock assessment despite similar purposes between these fields of applied statistics. In this paper, we developed hazard functions to model the dynamics of an exploited fish population. These functions were used to estimate all parameters necessary for stock assessment (including natural and fishing mortality rates as well as gear selectivity) by maximum likelihood using age data from a sample of the catch. This novel application of survival analysis to fisheries stock assessment was tested by Monte Carlo simulations to verify that it provides unbiased estimates of relevant quantities. The method was applied to data from the Queensland (Australia) sea mullet (Mugil cephalus) commercial fishery collected between 2007 and 2014. It provided, for the first time, an estimate of natural mortality affecting this stock: 0.22 $\pm$ 0.08 year$^{-1}$. |
2007.02835 | Yu Rong | Yu Rong, Yatao Bian, Tingyang Xu, Weiyang Xie, Ying Wei, Wenbing
Huang, Junzhou Huang | Self-Supervised Graph Transformer on Large-Scale Molecular Data | 17 pages, 7 figures | null | null | null | q-bio.BM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | How to obtain informative representations of molecules is a crucial
prerequisite in AI-driven drug design and discovery. Recent research abstracts
molecules as graphs and employs Graph Neural Networks (GNNs) for molecular
representation learning. Nevertheless, two issues impede the use of GNNs in
real scenarios: (1) insufficient labeled molecules for supervised training; (2)
poor generalization capability to newly synthesized molecules. To address them
both, we propose a novel framework, GROVER, which stands for Graph
Representation frOm self-superVised mEssage passing tRansformer. With carefully
designed self-supervised tasks at the node, edge, and graph levels, GROVER can
learn rich structural and semantic information of molecules from enormous
unlabelled molecular data. Moreover, to encode such complex information, GROVER
integrates Message Passing Networks into the Transformer-style architecture to
deliver a class of more expressive encoders of molecules. The flexibility of
GROVER allows it to be trained efficiently on large-scale molecular datasets
without requiring any supervision, thus avoiding the two issues
mentioned above. We pre-train GROVER with 100 million parameters on 10 million
unlabelled molecules -- the biggest GNN and the largest training dataset in
molecular representation learning. We then leverage the pre-trained GROVER for
molecular property prediction followed by task-specific fine-tuning, where we
observe a huge improvement (more than 6% on average) over current
state-of-the-art methods on 11 challenging benchmarks. The insights we gained
are that well-designed self-supervision losses and highly expressive
pre-trained models have significant potential for boosting performance.
| [
{
"created": "Thu, 18 Jun 2020 08:37:04 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Oct 2020 03:46:04 GMT",
"version": "v2"
}
] | 2020-10-30 | [
[
"Rong",
"Yu",
""
],
[
"Bian",
"Yatao",
""
],
[
"Xu",
"Tingyang",
""
],
[
"Xie",
"Weiyang",
""
],
[
"Wei",
"Ying",
""
],
[
"Huang",
"Wenbing",
""
],
[
"Huang",
"Junzhou",
""
]
] | How to obtain informative representations of molecules is a crucial prerequisite in AI-driven drug design and discovery. Recent research abstracts molecules as graphs and employs Graph Neural Networks (GNNs) for molecular representation learning. Nevertheless, two issues impede the use of GNNs in real scenarios: (1) insufficient labeled molecules for supervised training; (2) poor generalization capability to newly synthesized molecules. To address them both, we propose a novel framework, GROVER, which stands for Graph Representation frOm self-superVised mEssage passing tRansformer. With carefully designed self-supervised tasks at the node, edge, and graph levels, GROVER can learn rich structural and semantic information of molecules from enormous unlabelled molecular data. Moreover, to encode such complex information, GROVER integrates Message Passing Networks into the Transformer-style architecture to deliver a class of more expressive encoders of molecules. The flexibility of GROVER allows it to be trained efficiently on large-scale molecular datasets without requiring any supervision, thus avoiding the two issues mentioned above. We pre-train GROVER with 100 million parameters on 10 million unlabelled molecules -- the biggest GNN and the largest training dataset in molecular representation learning. We then leverage the pre-trained GROVER for molecular property prediction followed by task-specific fine-tuning, where we observe a huge improvement (more than 6% on average) over current state-of-the-art methods on 11 challenging benchmarks. The insights we gained are that well-designed self-supervision losses and highly expressive pre-trained models have significant potential for boosting performance. |
1701.03943 | Alberto Fachechi | Eleonora Alfinito, Matteo Beccaria, Alberto Fachechi and Guido
Macorini | Reactive immunization on complex networks | null | null | 10.1209/0295-5075/117/18002 | null | q-bio.PE physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epidemic spreading on complex networks depends on the topological structure
as well as on the dynamical properties of the infection itself. Generally
speaking, highly connected individuals play the role of hubs and are crucial to
channel information across the network. On the other hand, static topological
quantities measuring the connectivity structure are independent of the
dynamical mechanisms of the infection. A natural question is therefore how to
improve the topological analysis by some kind of dynamical information that may
be extracted from the ongoing infection itself. In this spirit, we propose a
novel vaccination scheme that exploits information from the details of the
infection pattern at the moment when the vaccination strategy is applied.
Numerical simulations of the infection process show that the proposed
immunization strategy is effective and robust on a wide class of complex
networks.
| [
{
"created": "Sat, 14 Jan 2017 16:28:08 GMT",
"version": "v1"
},
{
"created": "Tue, 14 Feb 2017 16:14:23 GMT",
"version": "v2"
}
] | 2017-04-05 | [
[
"Alfinito",
"Eleonora",
""
],
[
"Beccaria",
"Matteo",
""
],
[
"Fachechi",
"Alberto",
""
],
[
"Macorini",
"Guido",
""
]
] | Epidemic spreading on complex networks depends on the topological structure as well as on the dynamical properties of the infection itself. Generally speaking, highly connected individuals play the role of hubs and are crucial to channel information across the network. On the other hand, static topological quantities measuring the connectivity structure are independent of the dynamical mechanisms of the infection. A natural question is therefore how to improve the topological analysis by some kind of dynamical information that may be extracted from the ongoing infection itself. In this spirit, we propose a novel vaccination scheme that exploits information from the details of the infection pattern at the moment when the vaccination strategy is applied. Numerical simulations of the infection process show that the proposed immunization strategy is effective and robust on a wide class of complex networks. |
2206.01482 | Fabio Chalub | Diogo Costa-Cabanas, Fabio A. C. C. Chalub, Max O. Souza | Entropy and the arrow of time in population dynamics | 27 pages, 2 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The concept of entropy in statistical physics is related to the existence of
irreversible macroscopic processes. In this work, we explore a recently
introduced entropy formula for a class of stochastic processes with more than
one absorbing state that is extensively used in population genetics models. We
will consider the Moran process as a paradigm for this class, and will extend
our discussion to other models outside this class. We will also discuss the
relation between non-extensive entropies in physics and epistasis (i.e., when
the effects of different alleles are not independent) and the role of
symmetries in population genetic models.
| [
{
"created": "Fri, 3 Jun 2022 10:15:16 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Oct 2022 16:04:59 GMT",
"version": "v2"
}
] | 2022-10-21 | [
[
"Costa-Cabanas",
"Diogo",
""
],
[
"Chalub",
"Fabio A. C. C.",
""
],
[
"Souza",
"Max O.",
""
]
] | The concept of entropy in statistical physics is related to the existence of irreversible macroscopic processes. In this work, we explore a recently introduced entropy formula for a class of stochastic processes with more than one absorbing state that is extensively used in population genetics models. We will consider the Moran process as a paradigm for this class, and will extend our discussion to other models outside this class. We will also discuss the relation between non-extensive entropies in physics and epistasis (i.e., when the effects of different alleles are not independent) and the role of symmetries in population genetic models. |
0705.1535 | Nicolas Ferey | Nicolas F\'erey (LIMSI), Pierre-Emmanuel Gros (LIMSI), Joan H\'erisson
(LIMSI), Rachid Gherbi (LIMSI) | Visual Data Mining of Genomic Databases by Immersive Graph-Based
Exploration | null | Visual Data Mining of Genomic Databases by Immersive Graph-Based
Exploration (2005) 4 | null | null | q-bio.QM | null | Biologists are leading current research on genome characterization
(sequencing, alignment, transcription), providing a huge quantity of raw data
about the genomes of many organisms. Extracting knowledge from this raw data is
an important task for biologists, who usually rely on data mining approaches.
However, it is difficult to deal with this genomic information using current
bioinformatics data mining tools, because the data are heterogeneous, huge in
quantity, and geographically distributed. In this paper, we present a new
approach at the intersection of data mining and virtual reality visualization,
called visual data mining. Virtual reality has indeed matured, with efficient
display devices and intuitive interaction in an immersive context. Moreover,
biologists are used to working with 3D representations of their molecules, but
in a desktop context. We present a software solution, Genome3DExplorer, which
addresses the problems of genomic data visualization, scene management, and
interaction. This solution is based on a well-adapted graphical and interaction
paradigm, where local and global topological characteristics of the data are
easily visible, in contrast to traditional genomic database browsers, which
always focus on zoom and detail level.
| [
{
"created": "Thu, 10 May 2007 19:09:08 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Férey",
"Nicolas",
"",
"LIMSI"
],
[
"Gros",
"Pierre-Emmanuel",
"",
"LIMSI"
],
[
"Hérisson",
"Joan",
"",
"LIMSI"
],
[
"Gherbi",
"Rachid",
"",
"LIMSI"
]
] | Biologists are leading current research on genome characterization (sequencing, alignment, transcription), providing a huge quantity of raw data about the genomes of many organisms. Extracting knowledge from this raw data is an important task for biologists, who usually rely on data mining approaches. However, it is difficult to deal with this genomic information using current bioinformatics data mining tools, because the data are heterogeneous, huge in quantity, and geographically distributed. In this paper, we present a new approach at the intersection of data mining and virtual reality visualization, called visual data mining. Virtual reality has indeed matured, with efficient display devices and intuitive interaction in an immersive context. Moreover, biologists are used to working with 3D representations of their molecules, but in a desktop context. We present a software solution, Genome3DExplorer, which addresses the problems of genomic data visualization, scene management, and interaction. This solution is based on a well-adapted graphical and interaction paradigm, where local and global topological characteristics of the data are easily visible, in contrast to traditional genomic database browsers, which always focus on zoom and detail level. |
2405.14508 | Even Moa Myklebust | Even Moa Myklebust, Arnoldo Frigessi, Fredrik Schjesvold, Jasmine Foo,
Kevin Leder, Alvaro K\"ohn-Luque | Prediction of cancer dynamics under treatment using Bayesian neural
networks: A simulated study | 22 pages, 10 figures | null | null | null | q-bio.QM stat.ML | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Predicting cancer dynamics under treatment is challenging due to high
inter-patient heterogeneity, lack of predictive biomarkers, and sparse and
noisy longitudinal data. Mathematical models can summarize cancer dynamics by a
few interpretable parameters per patient. Machine learning methods can then be
trained to predict the model parameters from baseline covariates, but do not
account for uncertainty in the parameter estimates. Instead, hierarchical
Bayesian modeling can model the relationship between baseline covariates and
longitudinal measurements via mechanistic parameters while accounting for
uncertainty in every part of the model.
The mapping from baseline covariates to model parameters can be modeled in
several ways. A linear mapping simplifies inference but fails to capture
nonlinear covariate effects and scales poorly for interaction modeling when the
number of covariates is large. In contrast, Bayesian neural networks can
potentially discover interactions between covariates automatically, but at a
substantial cost in computational complexity.
In this work, we develop a hierarchical Bayesian model of subpopulation
dynamics that uses baseline covariate information to predict cancer dynamics
under treatment, inspired by cancer dynamics in multiple myeloma (MM), where
serum M protein is a well-known proxy of tumor burden. As a working example, we
apply the model to a simulated dataset and compare its ability to predict M
protein trajectories to a model with linear covariate effects. Our results show
that the Bayesian neural network covariate effect model predicts cancer
dynamics more accurately than a linear covariate effect model when covariate
interactions are present. The framework can also be applied to other types of
cancer or other time series prediction problems that can be described with a
parametric model.
| [
{
"created": "Thu, 23 May 2024 12:47:19 GMT",
"version": "v1"
}
] | 2024-05-24 | [
[
"Myklebust",
"Even Moa",
""
],
[
"Frigessi",
"Arnoldo",
""
],
[
"Schjesvold",
"Fredrik",
""
],
[
"Foo",
"Jasmine",
""
],
[
"Leder",
"Kevin",
""
],
[
"Köhn-Luque",
"Alvaro",
""
]
] | Predicting cancer dynamics under treatment is challenging due to high inter-patient heterogeneity, lack of predictive biomarkers, and sparse and noisy longitudinal data. Mathematical models can summarize cancer dynamics by a few interpretable parameters per patient. Machine learning methods can then be trained to predict the model parameters from baseline covariates, but do not account for uncertainty in the parameter estimates. Instead, hierarchical Bayesian modeling can model the relationship between baseline covariates and longitudinal measurements via mechanistic parameters while accounting for uncertainty in every part of the model. The mapping from baseline covariates to model parameters can be modeled in several ways. A linear mapping simplifies inference but fails to capture nonlinear covariate effects and scales poorly for interaction modeling when the number of covariates is large. In contrast, Bayesian neural networks can potentially discover interactions between covariates automatically, but at a substantial cost in computational complexity. In this work, we develop a hierarchical Bayesian model of subpopulation dynamics that uses baseline covariate information to predict cancer dynamics under treatment, inspired by cancer dynamics in multiple myeloma (MM), where serum M protein is a well-known proxy of tumor burden. As a working example, we apply the model to a simulated dataset and compare its ability to predict M protein trajectories to a model with linear covariate effects. Our results show that the Bayesian neural network covariate effect model predicts cancer dynamics more accurately than a linear covariate effect model when covariate interactions are present. The framework can also be applied to other types of cancer or other time series prediction problems that can be described with a parametric model. |
1709.03978 | Marinho Lopes | Marinho A. Lopes, KyoungEun Lee, Alexander V. Goltsev | A neuronal network model of interictal and recurrent ictal activity | 9 pages, 7 figures | Phys. Rev. E 96, 062412 (2017) | 10.1103/PhysRevE.96.062412 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a neuronal network model which undergoes a saddle-node bifurcation
on an invariant circle as the mechanism of the transition from the interictal
to the ictal (seizure) state. In the vicinity of this transition, the model
captures important dynamical features of both interictal and ictal states. We
study the nature of interictal spikes and early warnings of the transition
predicted by this model. We further demonstrate that recurrent seizures emerge
due to the interaction between two networks.
| [
{
"created": "Mon, 11 Sep 2017 22:39:35 GMT",
"version": "v1"
}
] | 2017-12-27 | [
[
"Lopes",
"Marinho A.",
""
],
[
"Lee",
"KyoungEun",
""
],
[
"Goltsev",
"Alexander V.",
""
]
] | We propose a neuronal network model which undergoes a saddle-node bifurcation on an invariant circle as the mechanism of the transition from the interictal to the ictal (seizure) state. In the vicinity of this transition, the model captures important dynamical features of both interictal and ictal states. We study the nature of interictal spikes and early warnings of the transition predicted by this model. We further demonstrate that recurrent seizures emerge due to the interaction between two networks. |
q-bio/0609011 | Tobias Bollenbach | T. Bollenbach, K. Kruse, P. Pantazis, M. Gonzalez-Gaitan, and F.
Julicher | Morphogen Transport in Epithelia | null | null | 10.1103/PhysRevE.75.011901 | null | q-bio.OT physics.bio-ph | null | We present a general theoretical framework to discuss mechanisms of morphogen
transport and gradient formation in a cell layer. Trafficking events on the
cellular scale lead to transport on larger scales. We discuss in particular the
case of transcytosis where morphogens undergo repeated rounds of
internalization into cells and recycling. Based on a description on the
cellular scale, we derive effective nonlinear transport equations in one and
two dimensions which are valid on larger scales. We derive analytic expressions
for the concentration dependence of the effective diffusion coefficient and the
effective degradation rate. We discuss the effects of a directional bias on
morphogen transport and those of the coupling of the morphogen and receptor
kinetics. Furthermore, we discuss general properties of cellular transport
processes such as the robustness of gradients and relate our results to recent
experiments on the morphogen Decapentaplegic (Dpp) that acts in the fruit fly
Drosophila.
| [
{
"created": "Thu, 7 Sep 2006 18:32:45 GMT",
"version": "v1"
}
] | 2015-06-26 | [
[
"Bollenbach",
"T.",
""
],
[
"Kruse",
"K.",
""
],
[
"Pantazis",
"P.",
""
],
[
"Gonzalez-Gaitan",
"M.",
""
],
[
"Julicher",
"F.",
""
]
] | We present a general theoretical framework to discuss mechanisms of morphogen transport and gradient formation in a cell layer. Trafficking events on the cellular scale lead to transport on larger scales. We discuss in particular the case of transcytosis where morphogens undergo repeated rounds of internalization into cells and recycling. Based on a description on the cellular scale, we derive effective nonlinear transport equations in one and two dimensions which are valid on larger scales. We derive analytic expressions for the concentration dependence of the effective diffusion coefficient and the effective degradation rate. We discuss the effects of a directional bias on morphogen transport and those of the coupling of the morphogen and receptor kinetics. Furthermore, we discuss general properties of cellular transport processes such as the robustness of gradients and relate our results to recent experiments on the morphogen Decapentaplegic (Dpp) that acts in the fruit fly Drosophila. |
2102.07610 | Rajat Desikan | Mario Giorgi, Rajat Desikan, Piet H. van der Graaf, Andrzej M. Kierzek | Application of Quantitative Systems Pharmacology to guide the optimal
dosing of COVID-19 vaccines | Perspective, 7 pages, 2 figures | CPT: Pharmacometrics and Systems Pharmacology, 2021 | 10.1002/psp4.12700 | null | q-bio.QM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Optimal use and distribution of Covid-19 vaccines involves adjustments of
dosing. Due to the rapidly-evolving pandemic, such adjustments often need to be
introduced before full efficacy data are available. As demonstrated in other
areas of drug development, quantitative systems pharmacology (QSP) is well
placed to guide such extrapolation in a rational and timely manner. Here we
propose for the first time how QSP can be applied in real time in the context of
COVID-19 vaccine development.
| [
{
"created": "Mon, 15 Feb 2021 15:52:41 GMT",
"version": "v1"
}
] | 2021-08-17 | [
[
"Giorgi",
"Mario",
""
],
[
"Desikan",
"Rajat",
""
],
[
"van der Graaf",
"Piet H.",
""
],
[
"Kierzek",
"Andrzej M.",
""
]
] | Optimal use and distribution of Covid-19 vaccines involves adjustments of dosing. Due to the rapidly-evolving pandemic, such adjustments often need to be introduced before full efficacy data are available. As demonstrated in other areas of drug development, quantitative systems pharmacology (QSP) is well placed to guide such extrapolation in a rational and timely manner. Here we propose for the first time how QSP can be applied in real time in the context of COVID-19 vaccine development. |
1807.05016 | Philip Bourne | Philip E. Bourne | Ten Simple Rules When Considering Retirement | null | null | 10.1371/journal.pcbi.1006411 | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | This is an article submitted to the Ten Simple Rules series of professional
development articles published by PLOS Computational Biology.
| [
{
"created": "Fri, 13 Jul 2018 11:39:54 GMT",
"version": "v1"
}
] | 2018-11-21 | [
[
"Bourne",
"Philip E.",
""
]
] | This is an article submitted to the Ten Simple Rules series of professional development articles published by PLOS Computational Biology. |
2203.11578 | Martina Conte | Martina Conte and Yvonne Dzierma and Sven Knobe and Christina
Surulescu | Mathematical modeling of glioma invasion and therapy approaches via
kinetic theory of active particles | 29 pages, 12 figures | null | null | null | q-bio.CB | http://creativecommons.org/licenses/by/4.0/ | A multiscale model for glioma spread in brain tissue under the influence of
vascularization and vascular endothelial growth factors is proposed. It
accounts for the interplay between the different components of the neoplasm and
the healthy tissue and it investigates and compares various therapy approaches.
Precisely, these involve radio- and chemotherapy in a concurrent or adjuvant
manner together with anti-angiogenic therapy affecting the vascular component
of the system. We assess tumor growth and spread on the basis of DTI data,
which allows us to reconstruct a realistic brain geometry and tissue structure,
and we apply our model to real glioma patient data. In this latter case, a
space-dependent radiotherapy description is considered using data about the
corresponding isodose curves.
| [
{
"created": "Tue, 22 Mar 2022 10:02:09 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Apr 2024 16:56:47 GMT",
"version": "v2"
}
] | 2024-04-26 | [
[
"Conte",
"Martina",
""
],
[
"Dzierma",
"Yvonne",
""
],
[
"Knobe",
"Sven",
""
],
[
"Surulescu",
"Christina",
""
]
] | A multiscale model for glioma spread in brain tissue under the influence of vascularization and vascular endothelial growth factors is proposed. It accounts for the interplay between the different components of the neoplasm and the healthy tissue and it investigates and compares various therapy approaches. Precisely, these involve radio- and chemotherapy in a concurrent or adjuvant manner together with anti-angiogenic therapy affecting the vascular component of the system. We assess tumor growth and spread on the basis of DTI data, which allows us to reconstruct a realistic brain geometry and tissue structure, and we apply our model to real glioma patient data. In this latter case, a space-dependent radiotherapy description is considered using data about the corresponding isodose curves. |
1810.01062 | Gennadi Glinsky | Gennadi Glinsky | Analysis of evolutionary origins of genomic loci harboring 59,732
candidate human-specific regulatory sequences identifies genetic divergence
patterns during evolution of Great Apes | 4 figures; 8 tables; 6 supplemental tablles | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Our view of the universe of genomic regions harboring various types of
candidate human-specific regulatory sequences (HSRS) has been markedly expanded
in recent years. To infer the evolutionary origins of loci harboring HSRS,
analyses of conservation patterns of 59,732 loci in Modern Humans, Chimpanzee,
Bonobo, Gorilla, Orangutan, Gibbon, and Rhesus genomes have been performed. Two
major evolutionary pathways have been identified comprising thousands of
sequences that were either inherited from extinct common ancestors (ECAs) or
created de novo in humans after human/chimpanzee split. Thousands of HSRS
appear inherited from ECAs yet bypassed genomes of our closest evolutionary
relatives, presumably due to incomplete lineage sorting and/or
species-specific loss of regulatory DNA. The bypassing pattern is prominent for
HSRS associated with development and functions of human brain. Common genomic
loci that may have contributed to speciation during the evolution of Great Apes comprise
248 insertion sites of African Great Ape-specific retrovirus PtERV1 (45.9%; p
= 1.03E-44) intersecting regions harboring 442 HSRS, which are enriched for
HSRS associated with human-specific (HS) changes of gene expression in cerebral
organoids. Among non-human primates (NHP), most significant fractions of
candidate HSRS associated with HS expression changes in both excitatory neurons
(347 loci; 67%) and radial glia (683 loci; 72%) are highly conserved in Gorilla
genome. Modern Humans acquired unique combinations of regulatory sequences
highly conserved in distinct species of six NHP separated by 30 million years
of evolution. Concurrently, this unique mosaic of regulatory sequences
inherited from ECAs was supplemented with 12,486 created de novo HSRS. These
observations support the model of complex continuous speciation process during
evolution of Great Apes that is not likely to occur as an instantaneous event.
| [
{
"created": "Tue, 2 Oct 2018 04:31:21 GMT",
"version": "v1"
}
] | 2018-10-03 | [
[
"Glinsky",
"Gennadi",
""
]
] | Our view of the universe of genomic regions harboring various types of candidate human-specific regulatory sequences (HSRS) has been markedly expanded in recent years. To infer the evolutionary origins of loci harboring HSRS, analyses of conservation patterns of 59,732 loci in Modern Humans, Chimpanzee, Bonobo, Gorilla, Orangutan, Gibbon, and Rhesus genomes have been performed. Two major evolutionary pathways have been identified comprising thousands of sequences that were either inherited from extinct common ancestors (ECAs) or created de novo in humans after human/chimpanzee split. Thousands of HSRS appear inherited from ECAs yet bypassed genomes of our closest evolutionary relatives, presumably due to incomplete lineage sorting and/or species-specific loss of regulatory DNA. The bypassing pattern is prominent for HSRS associated with development and functions of human brain. Common genomic loci that may have contributed to speciation during the evolution of Great Apes comprise 248 insertion sites of African Great Ape-specific retrovirus PtERV1 (45.9%; p = 1.03E-44) intersecting regions harboring 442 HSRS, which are enriched for HSRS associated with human-specific (HS) changes of gene expression in cerebral organoids. Among non-human primates (NHP), most significant fractions of candidate HSRS associated with HS expression changes in both excitatory neurons (347 loci; 67%) and radial glia (683 loci; 72%) are highly conserved in Gorilla genome. Modern Humans acquired unique combinations of regulatory sequences highly conserved in distinct species of six NHP separated by 30 million years of evolution. Concurrently, this unique mosaic of regulatory sequences inherited from ECAs was supplemented with 12,486 created de novo HSRS. These observations support the model of complex continuous speciation process during evolution of Great Apes that is not likely to occur as an instantaneous event. |
q-bio/0608008 | David Steinsaltz | Steven N. Evans, David Steinsaltz | Damage segregation at fissioning may increase growth rates: A
superprocess model | Version 2 had significant conceptual and organizational changes,
though only minor changes to the mathematics. Version 3 has minor
proofreading corrections, and a few new references. The paper will appear in
Theoretical Population Biology | null | null | null | q-bio.PE math.PR | null | A fissioning organism may purge unrepairable damage by bequeathing it
preferentially to one of its daughters. Using the mathematical formalism of
superprocesses, we propose a flexible class of analytically tractable models
that allow quite general effects of damage on death rates and splitting rates
and similarly general damage segregation mechanisms. We show that, in a
suitable regime, the effects of randomness in damage segregation at fissioning
are indistinguishable from those of randomness in the mechanism of damage
accumulation during the organism's lifetime. Moreover, the optimal population
growth is achieved for a particular finite, non-zero level of combined
randomness from these two sources. In particular, when damage accumulates
deterministically, optimal population growth is achieved by a moderately
unequal division of damage between the daughters. Too little or too much
division is sub-optimal. Connections are drawn both to recent experimental
results on inheritance of damage in protozoans, to theories of the evolution of
aging, and to models of resource division between siblings.
| [
{
"created": "Thu, 3 Aug 2006 19:58:13 GMT",
"version": "v1"
},
{
"created": "Mon, 11 Dec 2006 16:04:10 GMT",
"version": "v2"
},
{
"created": "Sun, 8 Apr 2007 10:53:29 GMT",
"version": "v3"
}
] | 2007-05-23 | [
[
"Evans",
"Steven N.",
""
],
[
"Steinsaltz",
"David",
""
]
] | A fissioning organism may purge unrepairable damage by bequeathing it preferentially to one of its daughters. Using the mathematical formalism of superprocesses, we propose a flexible class of analytically tractable models that allow quite general effects of damage on death rates and splitting rates and similarly general damage segregation mechanisms. We show that, in a suitable regime, the effects of randomness in damage segregation at fissioning are indistinguishable from those of randomness in the mechanism of damage accumulation during the organism's lifetime. Moreover, the optimal population growth is achieved for a particular finite, non-zero level of combined randomness from these two sources. In particular, when damage accumulates deterministically, optimal population growth is achieved by a moderately unequal division of damage between the daughters. Too little or too much division is sub-optimal. Connections are drawn both to recent experimental results on inheritance of damage in protozoans, to theories of the evolution of aging, and to models of resource division between siblings. |
2309.10162 | Andreas Hilfinger | Raymond Fan, Andreas Hilfinger | Characterizing the non-monotonic behavior of mutual information along
biochemical reaction cascades | 17 pages, 10 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cells sense environmental signals and transmit information intracellularly
through changes in the abundance of molecular components. Such molecular
abundances can be measured in single cells and exhibit significant
heterogeneity in clonal populations even in identical environments.
Experimentally observed joint probability distributions can then be used to
quantify the covariability and mutual information between molecular abundances
along signaling cascades. However, because stationary state abundances along
stochastic biochemical reaction cascades are not conditionally independent,
their mutual information is not constrained by the data processing inequality.
Here, we report the conditions under which the mutual information between
stationary state abundances increases along a cascade of biochemical reactions.
This non-monotonic behavior can be intuitively understood in terms of noise
propagation and time-averaging stochastic fluctuations that are short-lived
compared to an extrinsic signal. Our results re-emphasize that mutual
information measurements of stationary state distributions of cellular
components may be of limited utility for characterizing cellular signaling
processes because they do not measure information transfer.
| [
{
"created": "Mon, 18 Sep 2023 21:18:21 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Aug 2024 17:19:20 GMT",
"version": "v2"
}
] | 2024-08-09 | [
[
"Fan",
"Raymond",
""
],
[
"Hilfinger",
"Andreas",
""
]
] | Cells sense environmental signals and transmit information intracellularly through changes in the abundance of molecular components. Such molecular abundances can be measured in single cells and exhibit significant heterogeneity in clonal populations even in identical environments. Experimentally observed joint probability distributions can then be used to quantify the covariability and mutual information between molecular abundances along signaling cascades. However, because stationary state abundances along stochastic biochemical reaction cascades are not conditionally independent, their mutual information is not constrained by the data processing inequality. Here, we report the conditions under which the mutual information between stationary state abundances increases along a cascade of biochemical reactions. This non-monotonic behavior can be intuitively understood in terms of noise propagation and time-averaging stochastic fluctuations that are short-lived compared to an extrinsic signal. Our results re-emphasize that mutual information measurements of stationary state distributions of cellular components may be of limited utility for characterizing cellular signaling processes because they do not measure information transfer. |
2101.03126 | Zaheer Ullah Khan | Zaheer Ullah Khan, Dechang Pi, Izhar Ahmed Khan, Asif Nawaz, Jamil
Ahmad, Mushtaq Hussain | piSAAC: Extended notion of SAAC feature selection novel method for
discrimination of Enzymes model using different machine learning algorithm | 3 Figures, 5 Tables, 6 Pages | null | null | null | q-bio.BM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Enzymes and proteins are live driven biochemicals, which has a dramatic
impact over the environment, in which it is active. So, therefore, it is highly
looked-for to build such a robust and highly accurate automatic and
computational model to accurately predict enzymes nature. In this study, a
novel split amino acid composition model named piSAAC is proposed. In this
model, protein sequence is discretized in equal and balanced terminus to fully
evaluate the intrinsic correlation properties of the sequence. Several
state-of-the-art algorithms have been employed to evaluate the proposed model.
A 10-fold cross-validation evaluation is used to assess the authenticity
and robustness of the model using different statistical measures, e.g.,
accuracy, sensitivity, specificity, F-measure, and area under the ROC curve. The
experimental results show that the probabilistic neural network algorithm with
piSAAC feature extraction yields an accuracy of 98.01%, sensitivity of 97.12%,
specificity of 95.87%, f-measure of 0.9812 and AUC 0.95812, over dataset S1,
accuracy of 97.85%, sensitivity of 97.54%, specificity of 96.24%, f-measure of
0.9774 and AUC 0.9803 over dataset S2. Evident from these excellent empirical
results, the proposed model would be a very useful tool for academic research
and drug designing related application areas.
| [
{
"created": "Wed, 16 Dec 2020 03:45:21 GMT",
"version": "v1"
}
] | 2021-01-11 | [
[
"Khan",
"Zaheer Ullah",
""
],
[
"Pi",
"Dechang",
""
],
[
"Khan",
"Izhar Ahmed",
""
],
[
"Nawaz",
"Asif",
""
],
[
"Ahmad",
"Jamil",
""
],
[
"Hussain",
"Mushtaq",
""
]
] | Enzymes and proteins are life-driven biochemicals, which have a dramatic impact on the environment in which they are active. It is therefore highly desirable to build a robust and highly accurate automatic computational model to accurately predict the nature of enzymes. In this study, a novel split amino acid composition model named piSAAC is proposed. In this model, protein sequence is discretized in equal and balanced terminus to fully evaluate the intrinsic correlation properties of the sequence. Several state-of-the-art algorithms have been employed to evaluate the proposed model. A 10-fold cross-validation evaluation is used to assess the authenticity and robustness of the model using different statistical measures, e.g., accuracy, sensitivity, specificity, F-measure, and area under the ROC curve. The experimental results show that the probabilistic neural network algorithm with piSAAC feature extraction yields an accuracy of 98.01%, sensitivity of 97.12%, specificity of 95.87%, f-measure of 0.9812 and AUC 0.95812, over dataset S1, accuracy of 97.85%, sensitivity of 97.54%, specificity of 96.24%, f-measure of 0.9774 and AUC 0.9803 over dataset S2. Evident from these excellent empirical results, the proposed model would be a very useful tool for academic research and drug designing related application areas. |
0709.4164 | Eric Lichtfouse | Eric Lichtfouse (ASD) | Compound-specific isotope analysis | null | Rapid Communications in Mass Spectrometry 14 (2000) 1337 - 1344 | 10.1002/1097-0231(20000815)14:15<1337::AID-RCM9>3.0.CO;2-B | null | q-bio.BM | null | The isotopic composition, for example, 14C/12C, 13C/12C, 2H/1H, 15N/14N and
18O/16O, of the elements of matter is heterogeneous. It is ruled by physical,
chemical and biological mechanisms. Isotopes can be employed to follow the fate
of mineral and organic compounds during biogeochemical transformations. The
determination of the isotopic composition of organic substances occurring at
trace level in very complex mixtures such as sediments, soils and blood, has
been made possible during the last 20 years due to the rapid development of
molecular level isotopic techniques. After a brief glance at pioneering studies
revealing isotopic breakthroughs at the molecular and intramolecular levels,
this paper reviews selected applications of compound-specific isotope analysis
in various scientific fields.
| [
{
"created": "Wed, 26 Sep 2007 13:59:03 GMT",
"version": "v1"
}
] | 2007-09-27 | [
[
"Lichtfouse",
"Eric",
"",
"ASD"
]
] | The isotopic composition, for example, 14C/12C, 13C/12C, 2H/1H, 15N/14N and 18O/16O, of the elements of matter is heterogeneous. It is ruled by physical, chemical and biological mechanisms. Isotopes can be employed to follow the fate of mineral and organic compounds during biogeochemical transformations. The determination of the isotopic composition of organic substances occurring at trace level in very complex mixtures such as sediments, soils and blood, has been made possible during the last 20 years due to the rapid development of molecular level isotopic techniques. After a brief glance at pioneering studies revealing isotopic breakthroughs at the molecular and intramolecular levels, this paper reviews selected applications of compound-specific isotope analysis in various scientific fields. |
2306.16796 | Claudia Wohlgemuth | Sebastian Aland, Claudia Wohlgemuth | A phase-field model for active contractile surfaces | null | null | null | null | q-bio.CB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The morphogenesis of cells and tissues involves an interplay between chemical
signals and active forces on their surrounding surface layers. The complex
interaction of hydrodynamics and material flows on such active surfaces leads
to pattern formation and shape dynamics which can involve topological
transitions, for example during cell division. To better understand such
processes requires novel numerical tools. Here, we present a phase-field model
for an active deformable surface interacting with the surrounding fluids. The
model couples hydrodynamics in the bulk to viscous flow along the diffuse
surface, driven by active contraction of a surface species. As a new feature in
phase-field modeling, we include the viscosity of a diffuse interface and
stabilize the interface profile in the Stokes-Cahn-Hilliard equation by an
auxiliary advection velocity, which is constant normal to the interface. The
method is numerically validated with previous results based on linear stability
analysis. Further, we highlight some distinct features of the new method, like
the avoidance of re-meshing and the inclusion of contact mechanics, as we
simulate the self-organized polarization and migration of a cell through a
narrow channel. Finally, we study the formation of a contractile ring on the
surface and illustrate the capability of the method to resolve topological
transitions by a first simulation of a full cell division.
| [
{
"created": "Thu, 29 Jun 2023 09:04:19 GMT",
"version": "v1"
}
] | 2023-06-30 | [
[
"Aland",
"Sebastian",
""
],
[
"Wohlgemuth",
"Claudia",
""
]
] | The morphogenesis of cells and tissues involves an interplay between chemical signals and active forces on their surrounding surface layers. The complex interaction of hydrodynamics and material flows on such active surfaces leads to pattern formation and shape dynamics which can involve topological transitions, for example during cell division. To better understand such processes requires novel numerical tools. Here, we present a phase-field model for an active deformable surface interacting with the surrounding fluids. The model couples hydrodynamics in the bulk to viscous flow along the diffuse surface, driven by active contraction of a surface species. As a new feature in phase-field modeling, we include the viscosity of a diffuse interface and stabilize the interface profile in the Stokes-Cahn-Hilliard equation by an auxiliary advection velocity, which is constant normal to the interface. The method is numerically validated with previous results based on linear stability analysis. Further, we highlight some distinct features of the new method, like the avoidance of re-meshing and the inclusion of contact mechanics, as we simulate the self-organized polarization and migration of a cell through a narrow channel. Finally, we study the formation of a contractile ring on the surface and illustrate the capability of the method to resolve topological transitions by a first simulation of a full cell division. |
1902.09059 | Jiawei Yin | Jiawei Yin, Agung Julius, John T. Wen | Rapid Circadian Entrainment in Models of Circadian Genes Regulation | null | null | null | null | q-bio.MN math.OC nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The light-based minimum-time circadian entrainment problem for mammals,
Neurospora, and Drosophila is studied based on the mathematical models of their
circadian gene regulation. These models contain high order nonlinear
differential equations. Two model simplification methods are applied to these
high-order models: the phase response curves (PRC) and the Principal Orthogonal
Decomposition (POD). The variational calculus and a gradient descent algorithm
are applied for solving the optimal light input in the high-order models. As
the results of the gradient descent algorithm rely heavily on the initial
guesses, we use the optimal control of the PRC and the simplified model to
initialize the gradient descent algorithm. In this paper, we present: (1) the
application of PRC and direct shooting algorithm on high-order nonlinear
models; (2) a general process for solving the minimum-time optimal control
problem on high-order models; (3) the impacts of minimum-time optimal light on
circadian gene transcription and protein synthesis.
| [
{
"created": "Mon, 25 Feb 2019 02:06:40 GMT",
"version": "v1"
},
{
"created": "Sat, 9 Mar 2019 00:17:07 GMT",
"version": "v2"
}
] | 2019-03-12 | [
[
"Yin",
"Jiawei",
""
],
[
"Julius",
"Agung",
""
],
[
"Wen",
"John T.",
""
]
] | The light-based minimum-time circadian entrainment problem for mammals, Neurospora, and Drosophila is studied based on the mathematical models of their circadian gene regulation. These models contain high order nonlinear differential equations. Two model simplification methods are applied to these high-order models: the phase response curves (PRC) and the Principal Orthogonal Decomposition (POD). The variational calculus and a gradient descent algorithm are applied for solving the optimal light input in the high-order models. As the results of the gradient descent algorithm rely heavily on the initial guesses, we use the optimal control of the PRC and the simplified model to initialize the gradient descent algorithm. In this paper, we present: (1) the application of PRC and direct shooting algorithm on high-order nonlinear models; (2) a general process for solving the minimum-time optimal control problem on high-order models; (3) the impacts of minimum-time optimal light on circadian gene transcription and protein synthesis. |
1602.00587 | Marek Czachor | Marek Czachor | Information processing and Fechner's problem as a choice of arithmetic | To appear in "Information Studies and Transdisciplinarity", ed. M.
Burgin, World Scientific (2016) | M. Burgin and W. Hofkirchner (eds.), Information Studies and the
Quest for Interdisciplinarity: Unity Through Diversity, pp. 363-372, World
Scientific, Singapore (2017) | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fechner's law and its modern generalizations can be regarded as
manifestations of alternative forms of arithmetic, coexisting at stimulus and
sensation levels. The world of sensations may be thus described by a
generalization of the standard mathematical calculus.
| [
{
"created": "Fri, 29 Jan 2016 10:02:01 GMT",
"version": "v1"
},
{
"created": "Thu, 25 Feb 2016 12:35:49 GMT",
"version": "v2"
}
] | 2017-07-25 | [
[
"Czachor",
"Marek",
""
]
] | Fechner's law and its modern generalizations can be regarded as manifestations of alternative forms of arithmetic, coexisting at stimulus and sensation levels. The world of sensations may be thus described by a generalization of the standard mathematical calculus. |
0912.2089 | Franziska Hinkelmann | Franziska Hinkelmann and Reinhard Laubenbacher | Boolean Models of Bistable Biological Systems | null | null | 10.3934/dcdss.2011.4.1443 | null | q-bio.MN q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents an algorithm for approximating certain types of dynamical
systems given by a system of ordinary delay differential equations by a Boolean
network model. Often Boolean models are much simpler to understand than complex
differential equations models. The motivation for this work comes from
mathematical systems biology. While Boolean mechanisms do not provide
information about exact concentration rates or time scales, they are often
sufficient to capture steady states and other key dynamics. Due to their
intuitive nature, such models are very appealing to researchers in the life
sciences. This paper is focused on dynamical systems that exhibit bistability
and are described by delay equations. It is shown that if a certain motif
including a feedback loop is present in the wiring diagram of the system, the
Boolean model captures the bistability of molecular switches. The method is
applied to two examples from biology, the lac operon and the phage lambda
lysis/lysogeny switch.
| [
{
"created": "Thu, 10 Dec 2009 20:31:14 GMT",
"version": "v1"
}
] | 2011-05-10 | [
[
"Hinkelmann",
"Franziska",
""
],
[
"Laubenbacher",
"Reinhard",
""
]
] | This paper presents an algorithm for approximating certain types of dynamical systems given by a system of ordinary delay differential equations by a Boolean network model. Often Boolean models are much simpler to understand than complex differential equations models. The motivation for this work comes from mathematical systems biology. While Boolean mechanisms do not provide information about exact concentration rates or time scales, they are often sufficient to capture steady states and other key dynamics. Due to their intuitive nature, such models are very appealing to researchers in the life sciences. This paper is focused on dynamical systems that exhibit bistability and are described by delay equations. It is shown that if a certain motif including a feedback loop is present in the wiring diagram of the system, the Boolean model captures the bistability of molecular switches. The method is applied to two examples from biology, the lac operon and the phage lambda lysis/lysogeny switch. |
1207.5933 | Robin Ince | Robin A. A. Ince | Open-source software for studying neural codes | To appear in Quian Quiroga and Panzeri (Eds) Principles of Neural
Coding, CRC Press, 2012 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this chapter we first outline some of the popular computing environments
used for analysing neural data, followed by a brief discussion of 'software
carpentry', basic tools and skills from software engineering that can be of
great use to computational scientists. We then introduce the concept of
open-source software and explain some of its potential benefits for the
academic community before giving a brief directory of some freely available
open source software packages that address various aspects of the study of
neural codes. While there are many commercial offerings that provide similar
functionality, we concentrate here on open source packages, which in addition
to being available free of charge, also have the source code available for
study and modification.
| [
{
"created": "Wed, 25 Jul 2012 09:34:25 GMT",
"version": "v1"
}
] | 2012-07-26 | [
[
"Ince",
"Robin A. A.",
""
]
] | In this chapter we first outline some of the popular computing environments used for analysing neural data, followed by a brief discussion of 'software carpentry', basic tools and skills from software engineering that can be of great use to computational scientists. We then introduce the concept of open-source software and explain some of its potential benefits for the academic community before giving a brief directory of some freely available open source software packages that address various aspects of the study of neural codes. While there are many commercial offerings that provide similar functionality, we concentrate here on open source packages, which in addition to being available free of charge, also have the source code available for study and modification. |
1611.05814 | Masashi K. Kajita Mr | Masashi K. Kajita, Kazuyuki Aihara, Tetsuya J. Kobayashi | Balancing specificity, sensitivity and speed of ligand discrimination by
zero-order ultraspecificity | null | Phys. Rev. E 96, 012405 (2017) | 10.1103/PhysRevE.96.012405 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Specific interactions between receptors and their target ligands in the
presence of non-target ligands are crucial for biological processes such as T
cell ligand discrimination. To discriminate between the target and non-target
ligands, cells have to increase specificity by amplifying the small differences
in affinity among ligands. In addition, sensitivity to ligand concentration and
quick discrimination are also important to detect low amounts of target ligands
and facilitate fast cellular decision making after ligand recognition. In this
work, we find that ultraspecificity is naturally derived from a well-known
mechanism for zero-order ultrasensitivity to concentration. We also show that
this mechanism can produce an optimal balance of specificity, sensitivity, and
quick discrimination. Furthermore, we show that a model for insensitivity to
a large number of non-target ligands can be naturally derived from the
ultraspecificity model. Zero-order ultraspecificity may provide an alternative way
to understand ligand discrimination from the viewpoint of the nonlinear
properties of biochemical reactions.
| [
{
"created": "Thu, 17 Nov 2016 18:49:49 GMT",
"version": "v1"
}
] | 2017-07-19 | [
[
"Kajita",
"Masashi K.",
""
],
[
"Aihara",
"Kazuyuki",
""
],
[
"Kobayashi",
"Tetsuya J.",
""
]
] | Specific interactions between receptors and their target ligands in the presence of non-target ligands are crucial for biological processes such as T cell ligand discrimination. To discriminate between the target and non-target ligands, cells have to increase specificity by amplifying the small differences in affinity among ligands. In addition, sensitivity to ligand concentration and quick discrimination are also important to detect low amounts of target ligands and facilitate fast cellular decision making after ligand recognition. In this work, we find that ultraspecificity is naturally derived from a well-known mechanism for zero-order ultrasensitivity to concentration. We also show that this mechanism can produce an optimal balance of specificity, sensitivity, and quick discrimination. Furthermore, we show that a model for insensitivity to a large number of non-target ligands can be naturally derived from the ultraspecificity model. Zero-order ultraspecificity may provide an alternative way to understand ligand discrimination from the viewpoint of the nonlinear properties of biochemical reactions. |
1706.05279 | Mareike Fischer | Kristina Wicke and Mareike Fischer | Phylogenetic diversity and biodiversity indices on phylogenetic networks | null | null | null | null | q-bio.PE math.CO math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In biodiversity conservation it is often necessary to prioritize the species
to conserve. Existing approaches to prioritization, e.g. the Fair Proportion
Index and the Shapley Value, are based on phylogenetic trees and rank species
according to their contribution to overall phylogenetic diversity. However, in
many cases evolution is not treelike and thus, phylogenetic networks have come
to the fore as a generalization of phylogenetic trees, allowing for the
representation of non-treelike evolutionary events, such as horizontal gene
transfer or hybridization. Here, we extend the concepts of phylogenetic
diversity and phylogenetic diversity indices from phylogenetic trees to
phylogenetic networks. On the one hand, we consider the treelike content of a
phylogenetic network, e.g. the (multi)set of phylogenetic trees displayed by a
network and the LSA tree associated with it. On the other hand, we derive the
phylogenetic diversity of subsets of taxa and biodiversity indices directly
from the internal structure of the network. Furthermore, we introduce our
software package NetDiversity, which was implemented in Perl and allows for the
calculation of all generalized measures of phylogenetic diversity and
generalized phylogenetic diversity indices established in this note that are
independent of inheritance probabilities. We apply our methods to a phylogenetic
network representing the evolutionary relationships among swordtails and
platyfishes (Xiphophorus: Poeciliidae), a group of species characterized by
widespread hybridization.
| [
{
"created": "Fri, 16 Jun 2017 14:08:24 GMT",
"version": "v1"
},
{
"created": "Mon, 28 Aug 2017 14:59:21 GMT",
"version": "v2"
},
{
"created": "Mon, 5 Feb 2018 21:32:11 GMT",
"version": "v3"
}
] | 2018-02-07 | [
[
"Wicke",
"Kristina",
""
],
[
"Fischer",
"Mareike",
""
]
] | In biodiversity conservation it is often necessary to prioritize the species to conserve. Existing approaches to prioritization, e.g. the Fair Proportion Index and the Shapley Value, are based on phylogenetic trees and rank species according to their contribution to overall phylogenetic diversity. However, in many cases evolution is not treelike and thus, phylogenetic networks have come to the fore as a generalization of phylogenetic trees, allowing for the representation of non-treelike evolutionary events, such as horizontal gene transfer or hybridization. Here, we extend the concepts of phylogenetic diversity and phylogenetic diversity indices from phylogenetic trees to phylogenetic networks. On the one hand, we consider the treelike content of a phylogenetic network, e.g. the (multi)set of phylogenetic trees displayed by a network and the LSA tree associated with it. On the other hand, we derive the phylogenetic diversity of subsets of taxa and biodiversity indices directly from the internal structure of the network. Furthermore, we introduce our software package NetDiversity, which was implemented in Perl and allows for the calculation of all generalized measures of phylogenetic diversity and generalized phylogenetic diversity indices established in this note that are independent of inheritance probabilities. We apply our methods to a phylogenetic network representing the evolutionary relationships among swordtails and platyfishes (Xiphophorus: Poeciliidae), a group of species characterized by widespread hybridization. |
1602.08486 | Garrison Cottrell | Honghao Shan, Matthew H. Tong, Garrison W. Cottrell | A Single Model Explains both Visual and Auditory Precortical Coding | null | null | null | null | q-bio.NC cs.CV cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Precortical neural systems encode information collected by the senses, but
the driving principles of the encoding used have remained a subject of debate.
We present a model of retinal coding that is based on three constraints:
information preservation, minimization of the neural wiring, and response
equalization. The resulting novel version of sparse principal components
analysis successfully captures a number of known characteristics of the retinal
coding system, such as center-surround receptive fields, color opponency
channels, and spatiotemporal responses that correspond to magnocellular and
parvocellular pathways. Furthermore, when trained on auditory data, the same
model learns receptive fields well fit by gammatone filters, commonly used to
model precortical auditory coding. This suggests that efficient coding may be a
unifying principle of precortical encoding across modalities.
| [
{
"created": "Fri, 26 Feb 2016 10:17:53 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Apr 2016 19:19:11 GMT",
"version": "v2"
}
] | 2016-04-08 | [
[
"Shan",
"Honghao",
""
],
[
"Tong",
"Matthew H.",
""
],
[
"Cottrell",
"Garrison W.",
""
]
] | Precortical neural systems encode information collected by the senses, but the driving principles of the encoding used have remained a subject of debate. We present a model of retinal coding that is based on three constraints: information preservation, minimization of the neural wiring, and response equalization. The resulting novel version of sparse principal components analysis successfully captures a number of known characteristics of the retinal coding system, such as center-surround receptive fields, color opponency channels, and spatiotemporal responses that correspond to magnocellular and parvocellular pathways. Furthermore, when trained on auditory data, the same model learns receptive fields well fit by gammatone filters, commonly used to model precortical auditory coding. This suggests that efficient coding may be a unifying principle of precortical encoding across modalities. |
1201.2045 | Thomas Handford P | T. P. Handford, F.-J. Perez-Reche, S. N. Taraskin, L. da F. Costa, M.
Miazaki, F. M. Neri, C. A. Gilligan | Epidemics in Networks of Spatially Correlated Three-dimensional Root
Branching Structures | 21 pages, 8 figures | J. R. Soc. Interface 2011, 8, 423-434 | 10.1098/rsif.2010.0296 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using digitized images of the three-dimensional, branching structures for
root systems of bean seedlings, together with analytical and numerical methods
that map a common 'SIR' epidemiological model onto the bond percolation
problem, we show how the spatially-correlated branching structures of plant
roots affect transmission efficiencies, and hence the invasion criterion, for a
soil-borne pathogen as it spreads through ensembles of morphologically complex
hosts. We conclude that the inherent heterogeneities in transmissibilities
arising from correlations in the degrees of overlap between neighbouring
plants render a population of root systems less susceptible to epidemic
invasion than a corresponding homogeneous system. Several components of
morphological complexity are analysed that contribute to disorder and
heterogeneities in transmissibility of infection. Anisotropy in root shape is
shown to increase resilience to epidemic invasion, while increasing the degree
of branching enhances the spread of epidemics in the population of roots. Some
extensions of the methods to other epidemiological systems are discussed.
| [
{
"created": "Tue, 10 Jan 2012 13:18:50 GMT",
"version": "v1"
}
] | 2012-01-11 | [
[
"Handford",
"T. P.",
""
],
[
"Perez-Reche",
"F. -J.",
""
],
[
"Taraskin",
"S. N.",
""
],
[
"Costa",
"L. da F.",
""
],
[
"Miazaki",
"M.",
""
],
[
"Neri",
"F. M.",
""
],
[
"Gilligan",
"C. A.",
""
]
] | Using digitized images of the three-dimensional, branching structures for root systems of bean seedlings, together with analytical and numerical methods that map a common 'SIR' epidemiological model onto the bond percolation problem, we show how the spatially-correlated branching structures of plant roots affect transmission efficiencies, and hence the invasion criterion, for a soil-borne pathogen as it spreads through ensembles of morphologically complex hosts. We conclude that the inherent heterogeneities in transmissibilities arising from correlations in the degrees of overlap between neighbouring plants render a population of root systems less susceptible to epidemic invasion than a corresponding homogeneous system. Several components of morphological complexity are analysed that contribute to disorder and heterogeneities in transmissibility of infection. Anisotropy in root shape is shown to increase resilience to epidemic invasion, while increasing the degree of branching enhances the spread of epidemics in the population of roots. Some extensions of the methods to other epidemiological systems are discussed. |
1611.08948 | Mihai Nadin | Mihai Nadin | No crisis should go to waste | null | null | null | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The crisis in the reproducibility of experiments invites a re-evaluation of
methods of inquiry and validation procedures. The text challenges current
assumptions of knowledge acquisition and introduces G-complexity for defining
decidable vs. non-decidable knowledge domains. A "second Cartesian revolution"
should result in scientific methods that transcend determinism and
reductionism.
| [
{
"created": "Mon, 28 Nov 2016 01:04:04 GMT",
"version": "v1"
}
] | 2016-11-29 | [
[
"Nadin",
"Mihai",
""
]
] | The crisis in the reproducibility of experiments invites a re-evaluation of methods of inquiry and validation procedures. The text challenges current assumptions of knowledge acquisition and introduces G-complexity for defining decidable vs. non-decidable knowledge domains. A "second Cartesian revolution" should result in scientific methods that transcend determinism and reductionism. |
2406.01630 | Larissa De Ruijter | Larissa de Ruijter, Gabriele Cesa | Equivariant amortized inference of poses for cryo-EM | Published at the GEM workshop, ICLR 2024 | null | null | null | q-bio.QM cs.LG | http://creativecommons.org/licenses/by/4.0/ | Cryo-EM is a vital technique for determining 3D structure of biological
molecules such as proteins and viruses. The cryo-EM reconstruction problem is
challenging due to the high noise levels, the missing poses of particles, and
the computational demands of processing large datasets. A promising solution to
these challenges lies in the use of amortized inference methods, which have
shown particular efficacy in pose estimation for large datasets. However, these
methods also encounter convergence issues, often necessitating sophisticated
initialization strategies or engineered solutions for effective convergence.
Building upon the existing cryoAI pipeline, which employs a symmetric loss
function to address convergence problems, this work explores the emergence and
persistence of these issues within the pipeline. Additionally, we explore the
impact of equivariant amortized inference on enhancing convergence. Our
investigations reveal that, when applied to simulated data, a pipeline
incorporating an equivariant encoder not only converges faster and more
frequently than the standard approach but also demonstrates superior
performance in terms of pose estimation accuracy and the resolution of the
reconstructed volume. Notably, $D_4$-equivariant encoders make the symmetric
loss superfluous and, therefore, allow for a more efficient reconstruction
pipeline.
| [
{
"created": "Sat, 1 Jun 2024 11:36:29 GMT",
"version": "v1"
}
] | 2024-06-05 | [
[
"de Ruijter",
"Larissa",
""
],
[
"Cesa",
"Gabriele",
""
]
] | Cryo-EM is a vital technique for determining 3D structure of biological molecules such as proteins and viruses. The cryo-EM reconstruction problem is challenging due to the high noise levels, the missing poses of particles, and the computational demands of processing large datasets. A promising solution to these challenges lies in the use of amortized inference methods, which have shown particular efficacy in pose estimation for large datasets. However, these methods also encounter convergence issues, often necessitating sophisticated initialization strategies or engineered solutions for effective convergence. Building upon the existing cryoAI pipeline, which employs a symmetric loss function to address convergence problems, this work explores the emergence and persistence of these issues within the pipeline. Additionally, we explore the impact of equivariant amortized inference on enhancing convergence. Our investigations reveal that, when applied to simulated data, a pipeline incorporating an equivariant encoder not only converges faster and more frequently than the standard approach but also demonstrates superior performance in terms of pose estimation accuracy and the resolution of the reconstructed volume. Notably, $D_4$-equivariant encoders make the symmetric loss superfluous and, therefore, allow for a more efficient reconstruction pipeline. |
2304.01355 | Ioannis Iossifidis | Muhammad Saif-ur-Rehman, Omair Ali, Christian Klaes, Ioannis
Iossifidis | Adaptive SpikeDeep-Classifier: Self-organizing and self-supervised
machine learning algorithm for online spike sorting | null | null | null | null | q-bio.NC cs.AI cs.IT cs.LG math.IT | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Objective. Research on brain-computer interfaces (BCIs) is advancing towards
rehabilitating severely disabled patients in the real world. Two key factors
for successful decoding of user intentions are the size of implanted
microelectrode arrays and a good online spike sorting algorithm. A small but
dense microelectrode array with 3072 channels was recently developed for
decoding user intentions. The process of spike sorting determines the spike
activity (SA) of different sources (neurons) from recorded neural data.
Unfortunately, current spike sorting algorithms are unable to handle the
massively increasing amount of data from dense microelectrode arrays, making
spike sorting a fragile component of the online BCI decoding framework.
Approach. We proposed an adaptive and self-organized algorithm for online spike
sorting, named Adaptive SpikeDeep-Classifier (Ada-SpikeDeepClassifier), which
uses SpikeDeeptector for channel selection, an adaptive background activity
rejector (Ada-BAR) for discarding background events, and an adaptive spike
classifier (Ada-Spike classifier) for classifying the SA of different neural
units. Results. Our algorithm outperformed our previously published
SpikeDeep-Classifier and eight other spike sorting algorithms, as evaluated on
a human dataset and a publicly available simulated dataset. Significance. The
proposed algorithm is the first spike sorting algorithm that automatically
learns the abrupt changes in the distribution of noise and SA. It is an
artificial neural network-based algorithm that is well-suited for hardware
implementation on neuromorphic chips that can be used for wearable invasive
BCIs.
| [
{
"created": "Thu, 30 Mar 2023 15:30:37 GMT",
"version": "v1"
}
] | 2023-04-05 | [
[
"Saif-ur-Rehman",
"Muhammad",
""
],
[
"Ali",
"Omair",
""
],
[
"Klaes",
"Christian",
""
],
[
"Iossifidis",
"Ioannis",
""
]
] | Objective. Research on brain-computer interfaces (BCIs) is advancing towards rehabilitating severely disabled patients in the real world. Two key factors for successful decoding of user intentions are the size of implanted microelectrode arrays and a good online spike sorting algorithm. A small but dense microelectrode array with 3072 channels was recently developed for decoding user intentions. The process of spike sorting determines the spike activity (SA) of different sources (neurons) from recorded neural data. Unfortunately, current spike sorting algorithms are unable to handle the massively increasing amount of data from dense microelectrode arrays, making spike sorting a fragile component of the online BCI decoding framework. Approach. We proposed an adaptive and self-organized algorithm for online spike sorting, named Adaptive SpikeDeep-Classifier (Ada-SpikeDeepClassifier), which uses SpikeDeeptector for channel selection, an adaptive background activity rejector (Ada-BAR) for discarding background events, and an adaptive spike classifier (Ada-Spike classifier) for classifying the SA of different neural units. Results. Our algorithm outperformed our previously published SpikeDeep-Classifier and eight other spike sorting algorithms, as evaluated on a human dataset and a publicly available simulated dataset. Significance. The proposed algorithm is the first spike sorting algorithm that automatically learns the abrupt changes in the distribution of noise and SA. It is an artificial neural network-based algorithm that is well-suited for hardware implementation on neuromorphic chips that can be used for wearable invasive BCIs. |
0704.0034 | Vasily Ogryzko V | Vasily Ogryzko | Origin of adaptive mutants: a quantum measurement? | 5 pages | null | null | null | q-bio.PE q-bio.CB quant-ph | null | This is a supplement to the paper arXiv:q-bio/0701050, containing the text of
correspondence sent to Nature in 1990.
| [
{
"created": "Sat, 31 Mar 2007 15:36:48 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Ogryzko",
"Vasily",
""
]
] | This is a supplement to the paper arXiv:q-bio/0701050, containing the text of correspondence sent to Nature in 1990. |
1012.3555 | Michael Bridges | M. Bridges, E. A. Heron, C. O'Dushlaine, R. Segurado, The
International Schizophrenia Consortium (ISC), D. Morris, A. Corvin, M. Gill,
C. Pinto | Genetic Classification of Populations using Supervised Learning | Accepted PLOS One | null | 10.1371/journal.pone.0014802 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There are many instances in genetics in which we wish to determine whether
two candidate populations are distinguishable on the basis of their genetic
structure. Examples include populations which are geographically separated,
case--control studies and quality control (when participants in a study have
been genotyped at different laboratories). This latter application is of
particular importance in the era of large scale genome wide association
studies, when collections of individuals genotyped at different locations are
being merged to provide increased power. The traditional method for detecting
structure within a population is some form of exploratory technique such as
principal components analysis. Such methods, which do not utilise our prior
knowledge of the membership of the candidate populations, are termed
\emph{unsupervised}. Supervised methods, on the other hand, are able to utilise
this prior knowledge when it is available.
In this paper we demonstrate that in such cases modern supervised approaches
are a more appropriate tool for detecting genetic differences between
populations. We apply two such methods (neural networks and support vector
machines) to the classification of three populations (two from Scotland and one
from Bulgaria). The sensitivity exhibited by both these methods is considerably
higher than that attained by principal components analysis and in fact
comfortably exceeds a recently conjectured theoretical limit on the sensitivity
of unsupervised methods. In particular, our methods can distinguish between the
two Scottish populations, where principal components analysis cannot. We
suggest, on the basis of our results, that a supervised learning approach should
be the method of choice when classifying individuals into pre-defined
populations, particularly in quality control for large scale genome wide
association studies.
| [
{
"created": "Thu, 16 Dec 2010 10:57:02 GMT",
"version": "v1"
}
] | 2015-05-20 | [
[
"Bridges",
"M.",
"",
"ISC"
],
[
"Heron",
"E. A.",
"",
"ISC"
],
[
"O'Dushlaine",
"C.",
"",
"ISC"
],
[
"Segurado",
"R.",
"",
"ISC"
],
[
"Consortium",
"The International Schizophrenia",
"",
"ISC"
],
[
"Morris",
"D.",
""
],
[
"Corvin",
"A.",
""
],
[
"Gill",
"M.",
""
],
[
"Pinto",
"C.",
""
]
] | There are many instances in genetics in which we wish to determine whether two candidate populations are distinguishable on the basis of their genetic structure. Examples include populations which are geographically separated, case--control studies and quality control (when participants in a study have been genotyped at different laboratories). This latter application is of particular importance in the era of large scale genome wide association studies, when collections of individuals genotyped at different locations are being merged to provide increased power. The traditional method for detecting structure within a population is some form of exploratory technique such as principal components analysis. Such methods, which do not utilise our prior knowledge of the membership of the candidate populations, are termed \emph{unsupervised}. Supervised methods, on the other hand, are able to utilise this prior knowledge when it is available. In this paper we demonstrate that in such cases modern supervised approaches are a more appropriate tool for detecting genetic differences between populations. We apply two such methods (neural networks and support vector machines) to the classification of three populations (two from Scotland and one from Bulgaria). The sensitivity exhibited by both these methods is considerably higher than that attained by principal components analysis and in fact comfortably exceeds a recently conjectured theoretical limit on the sensitivity of unsupervised methods. In particular, our methods can distinguish between the two Scottish populations, where principal components analysis cannot. We suggest, on the basis of our results, that a supervised learning approach should be the method of choice when classifying individuals into pre-defined populations, particularly in quality control for large scale genome wide association studies. |
1711.04935 | Michael Hendriksen | Michael Hendriksen | Tree-Based Unrooted Nonbinary Phylogenetic Networks | Primarily minor textual changes to improve clarity. Revision of
Theorem 4.3 to include star tree case, small corrections to Lemma 5.2 and
Theorem 5.3. Added acknowledgements | null | null | null | q-bio.PE math.CO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Phylogenetic networks are a generalisation of phylogenetic trees that allow
for more complex evolutionary histories that include hybridisation-like
processes. It is of considerable interest whether a network can be considered
`tree-like' or not, which led to the introduction of \textit{tree-based}
networks in the rooted, binary context. Tree-based networks are those networks
which can be constructed by adding additional edges into a given phylogenetic
tree, called the \textit{base tree}. Previous extensions have considered
extending to the binary, unrooted case and the nonbinary, rooted case. We
extend tree-based networks to the context of unrooted, nonbinary networks in
three ways, depending on the types of additional edges that are permitted. A
phylogenetic network in which every embedded tree is a base tree is termed a
\textit{fully tree-based} network. We also extend this concept to unrooted,
nonbinary phylogenetic networks and classify the resulting networks. We also
derive some results on the colourability of tree-based networks, which can be
useful to determine whether a network is tree-based.
| [
{
"created": "Tue, 14 Nov 2017 03:58:48 GMT",
"version": "v1"
},
{
"created": "Mon, 20 Nov 2017 03:00:04 GMT",
"version": "v2"
}
] | 2017-11-21 | [
[
"Hendriksen",
"Michael",
""
]
] | Phylogenetic networks are a generalisation of phylogenetic trees that allow for more complex evolutionary histories that include hybridisation-like processes. It is of considerable interest whether a network can be considered `tree-like' or not, which led to the introduction of \textit{tree-based} networks in the rooted, binary context. Tree-based networks are those networks which can be constructed by adding additional edges into a given phylogenetic tree, called the \textit{base tree}. Previous extensions have considered extending to the binary, unrooted case and the nonbinary, rooted case. We extend tree-based networks to the context of unrooted, nonbinary networks in three ways, depending on the types of additional edges that are permitted. A phylogenetic network in which every embedded tree is a base tree is termed a \textit{fully tree-based} network. We also extend this concept to unrooted, nonbinary phylogenetic networks and classify the resulting networks. We also derive some results on the colourability of tree-based networks, which can be useful to determine whether a network is tree-based. |
1112.4908 | Eric Forgoston | Ira B. Schwartz, Eric Forgoston, Simone Bianco, Leah B. Shaw | Converging towards the optimal path to extinction | 18 pages, 5 figures, Final revision in Journal of the Royal Society
Interface. arXiv admin note: substantial text overlap with arXiv:1003.0912 | null | null | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extinction appears ubiquitously in many fields, including chemical reactions,
population biology, evolution, and epidemiology. Even though extinction as a
random process is a rare event, its occurrence is observed in large finite
populations. Extinction occurs when fluctuations due to random transitions act
as an effective force which drives one or more components or species to vanish.
Although there are many random paths to an extinct state, there is an optimal
path that maximizes the probability of extinction. In this article, we show
that the optimal path is associated with the dynamical systems idea of having
maximum sensitive dependence to initial conditions. Using the equivalence
between the sensitive dependence and the path to extinction, we show that the
dynamical systems picture of extinction evolves naturally toward the optimal
path in several stochastic models of epidemics.
| [
{
"created": "Wed, 21 Dec 2011 02:03:13 GMT",
"version": "v1"
}
] | 2011-12-22 | [
[
"Schwartz",
"Ira B.",
""
],
[
"Forgoston",
"Eric",
""
],
[
"Bianco",
"Simone",
""
],
[
"Shaw",
"Leah B.",
""
]
] | Extinction appears ubiquitously in many fields, including chemical reactions, population biology, evolution, and epidemiology. Even though extinction as a random process is a rare event, its occurrence is observed in large finite populations. Extinction occurs when fluctuations due to random transitions act as an effective force which drives one or more components or species to vanish. Although there are many random paths to an extinct state, there is an optimal path that maximizes the probability of extinction. In this article, we show that the optimal path is associated with the dynamical systems idea of having maximum sensitive dependence to initial conditions. Using the equivalence between the sensitive dependence and the path to extinction, we show that the dynamical systems picture of extinction evolves naturally toward the optimal path in several stochastic models of epidemics. |
1312.0329 | Jing Jiang | Jing Jiang, Xinhao Wang, Ran Chao, Yukun Ren, Chengpeng Hu, Zhida Xu,
Gang Logan Liu | Cellphone based Portable Bacteria Pre-Concentrating microfluidic Sensor
and Impedance Sensing System | 15 pages, 5 figures, accepted in Sensors and Actuators B: Chemical | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Portable low-cost sensors and sensing systems for the identification and
quantitative measurement of bacteria in field water are critical in preventing
drinking water from being contaminated by bacteria. In this article, we
report the design, fabrication and testing of a low-cost, miniaturized and
sensitive bacteria sensor based on the electrical impedance spectroscopy method
using a smartphone as the platform. Our design of microfluidics enabled the
pre-concentration of the bacteria, which lowered the detection limit to 10
bacterial cells per milliliter. We envision that our demonstrated
smartphone-based sensing system will realize highly-sensitive and rapid
in-field quantification of multiple species of bacteria and pathogens.
| [
{
"created": "Mon, 2 Dec 2013 04:36:26 GMT",
"version": "v1"
}
] | 2013-12-03 | [
[
"Jiang",
"Jing",
""
],
[
"Wang",
"Xinhao",
""
],
[
"Chao",
"Ran",
""
],
[
"Ren",
"Yukun",
""
],
[
"Hu",
"Chengpeng",
""
],
[
"Xu",
"Zhida",
""
],
[
"Liu",
"Gang Logan",
""
]
] | Portable low-cost sensors and sensing systems for the identification and quantitative measurement of bacteria in field water are critical in preventing drinking water from being contaminated by bacteria. In this article, we report the design, fabrication and testing of a low-cost, miniaturized and sensitive bacteria sensor based on the electrical impedance spectroscopy method using a smartphone as the platform. Our design of microfluidics enabled the pre-concentration of the bacteria, which lowered the detection limit to 10 bacterial cells per milliliter. We envision that our demonstrated smartphone-based sensing system will realize highly-sensitive and rapid in-field quantification of multiple species of bacteria and pathogens. |
2304.10072 | Eric De Giuli | Masanari Shimada and Pegah Behrad and Eric De Giuli | Universal Slow Dynamics of Chemical Reaction Networks | 11 pages + 21 pages SI; v2: added references, explanatory text, &
examples, revised discussion of stochastic treatment. v3: added discussion of
minor generalizations | Physical Review E 109 (4), 044105, 2024 | 10.1103/PhysRevE.109.044105 | null | q-bio.MN cond-mat.stat-mech nlin.AO | http://creativecommons.org/licenses/by/4.0/ | Understanding the emergent behavior of chemical reaction networks (CRNs) is a
fundamental aspect of biology and its origin from inanimate matter. A closed
CRN monotonically tends to thermal equilibrium, but when it is opened to
external reservoirs, a range of behaviors is possible, including transition to
a new equilibrium state, a non-equilibrium state, or indefinite growth. This
study shows that slowly driven CRNs are governed by the conserved quantities of
the closed system, which are generally far fewer in number than the species.
Considering both deterministic and stochastic dynamics, a universal slow
dynamics equation is derived with singular perturbation methods, and is shown
to be thermodynamically consistent. The slow dynamics is highly robust against
microscopic details of the network, which may be unknown in practical
situations. In particular, non-equilibrium states of realistic large CRNs can
be sought without knowledge of bulk reaction rates. The framework is
successfully tested against a suite of networks of increasing complexity and
argued to be relevant in the treatment of open CRNs as chemical machines.
| [
{
"created": "Thu, 20 Apr 2023 03:57:27 GMT",
"version": "v1"
},
{
"created": "Tue, 22 Aug 2023 19:42:09 GMT",
"version": "v2"
},
{
"created": "Wed, 13 Mar 2024 20:15:37 GMT",
"version": "v3"
}
] | 2024-05-16 | [
[
"Shimada",
"Masanari",
""
],
[
"Behrad",
"Pegah",
""
],
[
"De Giuli",
"Eric",
""
]
] | Understanding the emergent behavior of chemical reaction networks (CRNs) is a fundamental aspect of biology and its origin from inanimate matter. A closed CRN monotonically tends to thermal equilibrium, but when it is opened to external reservoirs, a range of behaviors is possible, including transition to a new equilibrium state, a non-equilibrium state, or indefinite growth. This study shows that slowly driven CRNs are governed by the conserved quantities of the closed system, which are generally far fewer in number than the species. Considering both deterministic and stochastic dynamics, a universal slow dynamics equation is derived with singular perturbation methods, and is shown to be thermodynamically consistent. The slow dynamics is highly robust against microscopic details of the network, which may be unknown in practical situations. In particular, non-equilibrium states of realistic large CRNs can be sought without knowledge of bulk reaction rates. The framework is successfully tested against a suite of networks of increasing complexity and argued to be relevant in the treatment of open CRNs as chemical machines. |
1707.01375 | Luo-Luo Jiang | Jia Quan Shen, Luo-Luo Jiang | Loss impresses human beings more than gain in the decision-making game | 11 pages, 5 figures | null | null | null | q-bio.NC cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | What happens in the brain when human beings play games with computers? Here a
simple zero-sum game was conducted to investigate how people make decisions
even when they know that their opponent is a computer. There are two choices (a
low or high number) for the human player and two strategies for the computer
(red color or green color). When the number selected by the human subject meets
the red color, the person loses a score equal to that number. On the contrary,
the person gains that number of points if the computer chooses the green color
for the selected number. Both the human subject and the computer give their
choices at the same time, and subjects were told that the computer makes its
decision on the red or green color randomly. During the experiments, the
electroencephalograph (EEG) signals of the subjects were recorded. From the
analysis of the EEG, we find that people mind loss more than gain, and this
phenomenon becomes more pronounced as the gap between loss and gain grows. In
addition, the EEG signals recorded before different decisions are clearly
distinguishable. Significant negative waves are observed across the entire
brain region when the participant has a greater expectation for the outcome,
and these negative waves are mainly concentrated in the forebrain region.
| [
{
"created": "Wed, 5 Jul 2017 12:57:52 GMT",
"version": "v1"
}
] | 2017-07-06 | [
[
"Shen",
"Jia Quan",
""
],
[
"Jiang",
"Luo-Luo",
""
]
] | What happens in the brain when human beings play games with computers? Here a simple zero-sum game was conducted to investigate how people make decisions even when they know that their opponent is a computer. There are two choices (a low or high number) for the human player and two strategies for the computer (red color or green color). When the number selected by the human subject meets the red color, the person loses a score equal to that number. On the contrary, the person gains that number of points if the computer chooses the green color for the selected number. Both the human subject and the computer give their choices at the same time, and subjects were told that the computer makes its decision on the red or green color randomly. During the experiments, the electroencephalograph (EEG) signals of the subjects were recorded. From the analysis of the EEG, we find that people mind loss more than gain, and this phenomenon becomes more pronounced as the gap between loss and gain grows. In addition, the EEG signals recorded before different decisions are clearly distinguishable. Significant negative waves are observed across the entire brain region when the participant has a greater expectation for the outcome, and these negative waves are mainly concentrated in the forebrain region. |
2101.00758 | Hoang Anh Ngo | Hoang Anh Ngo and Hung Dang Nguyen and Mehmet Dik | Stability analysis of a novel Delay Differential Equation of HIV
Infection of CD4$^+$ T-cells | 32 pages, 7 figures, 3 tables | null | null | null | q-bio.PE physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | In this paper, we investigate a novel 3-compartment model of HIV infection of
CD4$^+$ T-cells with a mass action term by including two versions: one baseline
ODE model and one delay-differential equation (DDE) model with a constant
discrete time delay. Similar to various endemic models, the dynamics within the
ODE model is fully determined by the basic reproduction number $R_0$. If $R_0<1$,
the disease-free (zero) equilibrium will be asymptotically stable and the
disease gradually dies out. On the other hand, if $R_0>1$, there exists a
positive equilibrium that is globally/orbitally asymptotically stable within
the interior of a predefined region. To represent the incubation time of the
virus, a constant delay term $\tau$ is added, forming a DDE model. In this
model, this time delay (of the transmission between virus and healthy cells)
can destabilize the system, giving rise to periodic solutions through a Hopf
bifurcation. Finally, numerical simulations are conducted to illustrate and
verify the results.
| [
{
"created": "Mon, 4 Jan 2021 04:12:59 GMT",
"version": "v1"
}
] | 2021-01-05 | [
[
"Ngo",
"Hoang Anh",
""
],
[
"Nguyen",
"Hung Dang",
""
],
[
"Dik",
"Mehmet",
""
]
] | In this paper, we investigate a novel 3-compartment model of HIV infection of CD4$^+$ T-cells with a mass action term by including two versions: one baseline ODE model and one delay-differential equation (DDE) model with a constant discrete time delay. Similar to various endemic models, the dynamics within the ODE model is fully determined by the basic reproduction number $R_0$. If $R_0<1$, the disease-free (zero) equilibrium will be asymptotically stable and the disease gradually dies out. On the other hand, if $R_0>1$, there exists a positive equilibrium that is globally/orbitally asymptotically stable within the interior of a predefined region. To represent the incubation time of the virus, a constant delay term $\tau$ is added, forming a DDE model. In this model, this time delay (of the transmission between virus and healthy cells) can destabilize the system, giving rise to periodic solutions through a Hopf bifurcation. Finally, numerical simulations are conducted to illustrate and verify the results. |
1305.2550 | Guiomar Niso | Guiomar Niso, Ricardo Bru\~na, Ernesto Pereda, Ricardo Guti\'errez,
Ricardo Bajo, Fernando Maest\'u and Francisco del-Pozo | HERMES: towards an integrated toolbox to characterize functional and
effective brain connectivity | 58 pages, 10 figures, 3 tables, Neuroinformatics 2013 | null | 10.1007/s12021-013-9186-1 | null | q-bio.NC cs.CE cs.MS physics.bio-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The analysis of the interdependence between time series has become an
important field of research in recent years, mainly as a result of advances
in the characterization of dynamical systems from the signals they produce, the
introduction of concepts such as generalized and phase synchronization and the
application of information theory to time series analysis. In neurophysiology,
different analytical tools stemming from these concepts have added to the
'traditional' set of linear methods, which includes the cross-correlation and
the coherency function in the time and frequency domain, respectively, or more
elaborated tools such as Granger Causality. This increase in the number of
approaches to tackle the existence of functional (FC) or effective connectivity
(EC) between two (or among many) neural networks, along with the mathematical
complexity of the corresponding time series analysis tools, makes it desirable
to arrange them into a unified, easy-to-use software package. The goal is to
allow neuroscientists, neurophysiologists and researchers from related fields
to easily access and make use of these analysis methods from a single
integrated toolbox. Here we present HERMES (http://hermes.ctb.upm.es), a
toolbox for the Matlab environment (The MathWorks, Inc.), which is designed for
the analysis of functional and effective brain connectivity from
neurophysiological data such as multivariate EEG and/or MEG records. It
also includes visualization tools and statistical methods to address the
problem of multiple comparisons. We believe that this toolbox will be very
helpful to all the researchers working in the emerging field of brain
connectivity analysis.
| [
{
"created": "Sun, 12 May 2013 01:04:55 GMT",
"version": "v1"
},
{
"created": "Mon, 27 May 2013 03:53:02 GMT",
"version": "v2"
}
] | 2013-05-28 | [
[
"Niso",
"Guiomar",
""
],
[
"Bruña",
"Ricardo",
""
],
[
"Pereda",
"Ernesto",
""
],
[
"Gutiérrez",
"Ricardo",
""
],
[
"Bajo",
"Ricardo",
""
],
[
"Maestú",
"Fernando",
""
],
[
"del-Pozo",
"Francisco",
""
]
] | The analysis of the interdependence between time series has become an important field of research in recent years, mainly as a result of advances in the characterization of dynamical systems from the signals they produce, the introduction of concepts such as generalized and phase synchronization and the application of information theory to time series analysis. In neurophysiology, different analytical tools stemming from these concepts have added to the 'traditional' set of linear methods, which includes the cross-correlation and the coherency function in the time and frequency domain, respectively, or more elaborated tools such as Granger Causality. This increase in the number of approaches to tackle the existence of functional (FC) or effective connectivity (EC) between two (or among many) neural networks, along with the mathematical complexity of the corresponding time series analysis tools, makes it desirable to arrange them into a unified, easy-to-use software package. The goal is to allow neuroscientists, neurophysiologists and researchers from related fields to easily access and make use of these analysis methods from a single integrated toolbox. Here we present HERMES (http://hermes.ctb.upm.es), a toolbox for the Matlab environment (The MathWorks, Inc.), which is designed for the analysis of functional and effective brain connectivity from neurophysiological data such as multivariate EEG and/or MEG records. It also includes visualization tools and statistical methods to address the problem of multiple comparisons. We believe that this toolbox will be very helpful to all the researchers working in the emerging field of brain connectivity analysis. |
1701.00602 | Christoph Zechner | Christoph Zechner and Mustafa Khammash | A Molecular Implementation of the Least Mean Squares Estimator | Molecular circuits, synthetic biology, least mean squares estimator,
adaptive systems | 2016 IEEE 55th Conference on Decision and Control (CDC) | 10.1109/CDC.2016.7799172 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to function reliably, synthetic molecular circuits require
mechanisms that allow them to adapt to environmental disturbances. Least mean
squares (LMS) schemes, such as those commonly encountered in signal processing and
control, provide a powerful means to accomplish that goal. In this paper we
show how the traditional LMS algorithm can be implemented at the molecular
level using only a few elementary biomolecular reactions. We demonstrate our
approach using several simulation studies and discuss its relevance to
synthetic biology.
| [
{
"created": "Tue, 3 Jan 2017 08:55:23 GMT",
"version": "v1"
}
] | 2017-01-04 | [
[
"Zechner",
"Christoph",
""
],
[
"Khammash",
"Mustafa",
""
]
] | In order to function reliably, synthetic molecular circuits require mechanisms that allow them to adapt to environmental disturbances. Least mean squares (LMS) schemes, such as those commonly encountered in signal processing and control, provide a powerful means to accomplish that goal. In this paper we show how the traditional LMS algorithm can be implemented at the molecular level using only a few elementary biomolecular reactions. We demonstrate our approach using several simulation studies and discuss its relevance to synthetic biology. |
1612.08457 | Boris Barbour | Boris Barbour | Analysis of claims that the brain extracellular impedance is high and
non-resistive | 6 pages | null | 10.1016/j.bpj.2017.05.054 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Numerous measurements in the brain of the impedance between two extracellular
electrodes have shown that it is approximately resistive in the range of
biological interest, $<10\,$kHz, and has a value close to that expected from
the conductivity of physiological saline and the extracellular volume fraction
in brain tissue. Recent work from the group of Claude B\'edard and Alain
Destexhe has claimed that the impedance of the extracellular space is some
three orders of magnitude greater than these values and also displays a
$1/\sqrt{f}$ frequency dependence (above a low-frequency corner frequency).
Their measurements were performed between an intracellular electrode and an
extracellular electrode. It is argued that they incorrectly extracted the
extracellular impedance because of an inaccurate representation of the large,
confounding impedance of the neuronal membrane. In conclusion, no compelling
evidence has been provided to undermine the well-established and physically
plausible consensus that the brain extracellular impedance is low and
approximately resistive.
| [
{
"created": "Mon, 26 Dec 2016 23:09:41 GMT",
"version": "v1"
},
{
"created": "Wed, 28 Dec 2016 02:07:56 GMT",
"version": "v2"
},
{
"created": "Sat, 31 Dec 2016 22:04:40 GMT",
"version": "v3"
},
{
"created": "Tue, 3 Jan 2017 07:37:02 GMT",
"version": "v4"
},
{
"created": "Sun, 26 Mar 2017 21:58:42 GMT",
"version": "v5"
}
] | 2017-11-22 | [
[
"Barbour",
"Boris",
""
]
] | Numerous measurements in the brain of the impedance between two extracellular electrodes have shown that it is approximately resistive in the range of biological interest, $<10\,$kHz, and has a value close to that expected from the conductivity of physiological saline and the extracellular volume fraction in brain tissue. Recent work from the group of Claude B\'edard and Alain Destexhe has claimed that the impedance of the extracellular space is some three orders of magnitude greater than these values and also displays a $1/\sqrt{f}$ frequency dependence (above a low-frequency corner frequency). Their measurements were performed between an intracellular electrode and an extracellular electrode. It is argued that they incorrectly extracted the extracellular impedance because of an inaccurate representation of the large, confounding impedance of the neuronal membrane. In conclusion, no compelling evidence has been provided to undermine the well-established and physically plausible consensus that the brain extracellular impedance is low and approximately resistive. |
2106.10211 | Pedro Mediano | Pedro A.M. Mediano, Fernando E. Rosas, Juan Carlos Farah, Murray
Shanahan, Daniel Bor and Adam B. Barrett | Integrated information as a common signature of dynamical and
information-processing complexity | null | null | 10.1063/5.0063384 | null | q-bio.NC nlin.AO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The apparent dichotomy between information-processing and dynamical
approaches to complexity science forces researchers to choose between two
diverging sets of tools and explanations, creating conflict and often hindering
scientific progress. Nonetheless, given the shared theoretical goals between
both approaches, it is reasonable to conjecture the existence of underlying
common signatures that capture interesting behaviour in both dynamical and
information-processing systems. Here we argue that a pragmatic use of
Integrated Information Theory (IIT), originally conceived in theoretical
neuroscience, can provide a potential unifying framework to study complexity in
general multivariate systems. Furthermore, by leveraging metrics put forward by
the integrated information decomposition ($\Phi$ID) framework, our results
reveal that integrated information can effectively capture surprisingly
heterogeneous signatures of complexity -- including metastability and
criticality in networks of coupled oscillators as well as distributed
computation and emergent stable particles in cellular automata -- without
relying on idiosyncratic, ad-hoc criteria. These results show how an agnostic
use of IIT can provide important steps towards bridging the gap between
informational and dynamical approaches to complex systems.
| [
{
"created": "Fri, 18 Jun 2021 16:31:01 GMT",
"version": "v1"
}
] | 2022-01-26 | [
[
"Mediano",
"Pedro A. M.",
""
],
[
"Rosas",
"Fernando E.",
""
],
[
"Farah",
"Juan Carlos",
""
],
[
"Shanahan",
"Murray",
""
],
[
"Bor",
"Daniel",
""
],
[
"Barrett",
"Adam B.",
""
]
] | The apparent dichotomy between information-processing and dynamical approaches to complexity science forces researchers to choose between two diverging sets of tools and explanations, creating conflict and often hindering scientific progress. Nonetheless, given the shared theoretical goals between both approaches, it is reasonable to conjecture the existence of underlying common signatures that capture interesting behaviour in both dynamical and information-processing systems. Here we argue that a pragmatic use of Integrated Information Theory (IIT), originally conceived in theoretical neuroscience, can provide a potential unifying framework to study complexity in general multivariate systems. Furthermore, by leveraging metrics put forward by the integrated information decomposition ($\Phi$ID) framework, our results reveal that integrated information can effectively capture surprisingly heterogeneous signatures of complexity -- including metastability and criticality in networks of coupled oscillators as well as distributed computation and emergent stable particles in cellular automata -- without relying on idiosyncratic, ad-hoc criteria. These results show how an agnostic use of IIT can provide important steps towards bridging the gap between informational and dynamical approaches to complex systems. |
1711.00526 | Jesus Malo | Marina Martinez-Garcia, Praveen Cyriac, Thomas Batard, Marcelo
Bertalmio and Jesus Malo | Derivatives and Inverse of Cascaded Linear+Nonlinear Neural Models | Reproducible results: associated Matlab toolbox available at
http://isp.uv.es/docs/BioMultiLayer_L_NL.zip | null | 10.1371/journal.pone.0201326 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In vision science, cascades of Linear+Nonlinear transforms are very
successful in modeling a number of perceptual experiences [Carandini&Heeger12].
However, the conventional literature is usually too focused on only describing
the input->output transform.
Instead, here we present the maths of such cascades beyond the forward
transform, namely the Jacobians and the inverse. The fundamental reason for
this analytical treatment is that it offers useful insight into the
psychophysics, the physiology, and the function of the visual system. For
instance, we show how the trends of the sensitivity (discrimination regions)
and the adaptation of the receptive fields can be seen in the expression of the
Jacobian wrt the stimulus. This matrix also tells us which regions of the
stimulus space are encoded more efficiently in multi-information terms. The
Jacobian wrt the parameters shows which aspects of the model have bigger impact
in the response, and hence bigger relevance. The analytic inverse implies
conditions for the response and the model to ensure decoding. From an applied
perspective, (a) the Jacobian wrt the stimulus is necessary in new experimental
methods based on the synthesis of visual stimuli with interesting geometry, (b)
the Jacobian matrices wrt the parameters are convenient to learn the model from
classical experiments or alternative optimization goals, and (c) the inverse is
a model-based alternative to blind machine-learning neural decoding that does
not include meaningful biological information.
The theory is checked by building a derivable and invertible vision model
that actually follows the modular program suggested by Carandini&Heeger. To
stress the generality of this modular setting we show examples where some of
the canonical Divisive Normalization layers are substituted by equivalent
layers such as the Wilson-Cowan model at V1, or a tone-mapping model at the
retina.
| [
{
"created": "Wed, 1 Nov 2017 20:07:37 GMT",
"version": "v1"
},
{
"created": "Sun, 26 Nov 2017 19:01:15 GMT",
"version": "v2"
},
{
"created": "Sun, 20 May 2018 12:06:41 GMT",
"version": "v3"
}
] | 2018-11-21 | [
[
"Martinez-Garcia",
"Marina",
""
],
[
"Cyriac",
"Praveen",
""
],
[
"Batard",
"Thomas",
""
],
[
"Bertalmio",
"Marcelo",
""
],
[
"Malo",
"Jesus",
""
]
] | In vision science, cascades of Linear+Nonlinear transforms are very successful in modeling a number of perceptual experiences [Carandini&Heeger12]. However, the conventional literature is usually too focused on only describing the input->output transform. Instead, here we present the maths of such cascades beyond the forward transform, namely the Jacobians and the inverse. The fundamental reason for this analytical treatment is that it offers useful insight into the psychophysics, the physiology, and the function of the visual system. For instance, we show how the trends of the sensitivity (discrimination regions) and the adaptation of the receptive fields can be seen in the expression of the Jacobian wrt the stimulus. This matrix also tells us which regions of the stimulus space are encoded more efficiently in multi-information terms. The Jacobian wrt the parameters shows which aspects of the model have bigger impact in the response, and hence bigger relevance. The analytic inverse implies conditions for the response and the model to ensure decoding. From an applied perspective, (a) the Jacobian wrt the stimulus is necessary in new experimental methods based on the synthesis of visual stimuli with interesting geometry, (b) the Jacobian matrices wrt the parameters are convenient to learn the model from classical experiments or alternative optimization goals, and (c) the inverse is a model-based alternative to blind machine-learning neural decoding that does not include meaningful biological information. The theory is checked by building a derivable and invertible vision model that actually follows the modular program suggested by Carandini&Heeger. To stress the generality of this modular setting we show examples where some of the canonical Divisive Normalization layers are substituted by equivalent layers such as the Wilson-Cowan model at V1, or a tone-mapping model at the retina. |
2202.12268 | Ali Kamali | Ali Kamali, Laurel Dieckhaus, Emily C. Peters, Collin A. Preszler,
Russel S. Witte, Paulo W. Pires, Elizabeth B. Hutchinson, Kaveh Laksari | Hyperacute pathophysiology of traumatic and vascular brain injury
captured by ultrasound, photoacoustic, and magnetic resonance imaging | Update in order of figure panels for Figure 3 | null | null | null | q-bio.QM q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | Cerebrovascular dynamics and pathomechanisms that evolve in the minutes and
hours following traumatic vascular injury in the brain remain largely unknown.
We investigated the pathophysiology evolution within the first three hours
after closed-head traumatic brain injury (TBI) and subarachnoid hemorrhage
(SAH), two common traumatic vascular injuries, in mice. We took a multi-modal
imaging approach using photoacoustic, color Doppler, and magnetic resonance
imaging (MRI) in mice. Brain oxygenation (%sO2) and velocity-weighted volume of
blood flow (VVF) values significantly decreased from baseline to fifteen
minutes after both TBI and SAH. TBI resulted in 19.2% and 41.0% ipsilateral
%sO2 and VVF reductions 15 minutes post injury while SAH resulted in 43.9% %sO2
and 85.0% VVF reduction ipsilaterally (p<0.001). We found partial recovery of
%sO2 from 15 minutes to 3-hours after injury for TBI but not SAH. Hemorrhage,
edema, reduced perfusion, and altered diffusivity were evident from MRI scans
acquired 90-150 minutes after injury in both models although the spatial
distribution was mostly focal for TBI and diffuse for SAH. The results reveal
that the cerebral %sO2 deficits immediately following injuries are reversible
for TBI and irreversible for SAH. Our findings can inform future studies on
mitigating these early responses to improve long-term recovery.
| [
{
"created": "Thu, 24 Feb 2022 18:12:26 GMT",
"version": "v1"
},
{
"created": "Mon, 2 May 2022 18:13:46 GMT",
"version": "v2"
}
] | 2022-05-04 | [
[
"Kamali",
"Ali",
""
],
[
"Dieckhaus",
"Laurel",
""
],
[
"Peters",
"Emily C.",
""
],
[
"Preszler",
"Collin A.",
""
],
[
"Witte",
"Russel S.",
""
],
[
"Pires",
"Paulo W.",
""
],
[
"Hutchinson",
"Elizabeth B.",
""
],
[
"Laksari",
"Kaveh",
""
]
] | Cerebrovascular dynamics and pathomechanisms that evolve in the minutes and hours following traumatic vascular injury in the brain remain largely unknown. We investigated the pathophysiology evolution within the first three hours after closed-head traumatic brain injury (TBI) and subarachnoid hemorrhage (SAH), two common traumatic vascular injuries, in mice. We took a multi-modal imaging approach using photoacoustic, color Doppler, and magnetic resonance imaging (MRI) in mice. Brain oxygenation (%sO2) and velocity-weighted volume of blood flow (VVF) values significantly decreased from baseline to fifteen minutes after both TBI and SAH. TBI resulted in 19.2% and 41.0% ipsilateral %sO2 and VVF reductions 15 minutes post injury while SAH resulted in 43.9% %sO2 and 85.0% VVF reduction ipsilaterally (p<0.001). We found partial recovery of %sO2 from 15 minutes to 3-hours after injury for TBI but not SAH. Hemorrhage, edema, reduced perfusion, and altered diffusivity were evident from MRI scans acquired 90-150 minutes after injury in both models although the spatial distribution was mostly focal for TBI and diffuse for SAH. The results reveal that the cerebral %sO2 deficits immediately following injuries are reversible for TBI and irreversible for SAH. Our findings can inform future studies on mitigating these early responses to improve long-term recovery. |
2111.02714 | Piotr Guzowski | Piotr Guzowski | Did the Black Death reach the Kingdom of Poland in the middle of the
14th century? | null | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | The Black Death is regarded as a turning point in late medieval European
history. Recent studies have shown that even regions that have so far been
perceived in the literature as not or only marginally affected by the epidemic,
suffered from its profound demographic and economic consequences. The scale and
geographical range of the plague in Central Europe, the Kingdom of Poland
included, remain, however, a matter of dispute, and scholars' views on this
matter have been divided from the beginning. Importantly, the
outbreak of the plague in Western Europe coincided with the reign of Casimir of
the Piast dynasty, the only ruler of Poland to receive the nickname the Great,
who is associated with the modernization and extraordinary development of his
kingdom which was entering the golden age of its history.
| [
{
"created": "Thu, 4 Nov 2021 09:51:50 GMT",
"version": "v1"
}
] | 2021-11-05 | [
[
"Guzowski",
"Piotr",
""
]
] | The Black Death is regarded as a turning point in late medieval European history. Recent studies have shown that even regions that have so far been perceived in the literature as not or only marginally affected by the epidemic, suffered from its profound demographic and economic consequences. The scale and geographical range of the plague in Central Europe, the Kingdom of Poland included, remain, however, a matter of dispute, and scholars' views on this matter have been divided from the beginning. Importantly, the outbreak of the plague in Western Europe coincided with the reign of Casimir of the Piast dynasty, the only ruler of Poland to receive the nickname the Great, who is associated with the modernization and extraordinary development of his kingdom which was entering the golden age of its history. |
q-bio/0406001 | Alexander Iomin | A. Iomin, S. Dorfman, and L. Dorfman | On tumor development: fractional transport approach | null | null | null | null | q-bio.TO | null | The growth of a malignant neoplasm is considered within a fractional transport
approach. We suggest that the main process of tumor development through a
lymphatic network is fractional transport of cells. In the framework of this
fractional kinetics we were able to show that the mean size of the main growth is
due to subdiffusion, while the appearance of metastases is determined by
superdiffusion.
| [
{
"created": "Tue, 1 Jun 2004 09:50:57 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Iomin",
"A.",
""
],
[
"Dorfman",
"S.",
""
],
[
"Dorfman",
"L.",
""
]
] | The growth of a malignant neoplasm is considered within a fractional transport approach. We suggest that the main process of tumor development through a lymphatic network is fractional transport of cells. In the framework of this fractional kinetics we were able to show that the mean size of the main growth is due to subdiffusion, while the appearance of metastases is determined by superdiffusion.
1805.02609 | Dongwook Kim | Dongwook Kim | The Effect of In vivo-like Synaptic Inputs on Stellate Cells | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Previous experimental work has shown that high-frequency Poisson-distributed
trains of combined excitatory and inhibitory conductance- and current-based
synaptic inputs reduce the amplitude of subthreshold oscillations of stellate cells (SCs). In this
paper, we investigate the mechanism underlying these phenomena in the context
of the model. More specifically, we studied the effects of both conductance- and
current-based synaptic inputs at various maximal conductance values on an SC
model. Our numerical simulations show that conductance-based synaptic inputs
reduce the amplitude of the SC's subthreshold oscillations for low enough values
of the maximal synaptic conductance but amplify these oscillations at a
higher range. These results contrast with the experimental results.
| [
{
"created": "Thu, 3 May 2018 13:53:23 GMT",
"version": "v1"
}
] | 2018-05-08 | [
[
"Kim",
"Dongwook",
""
]
] | Previous experimental work has shown that high-frequency Poisson-distributed trains of combined excitatory and inhibitory conductance- and current-based synaptic inputs reduce the amplitude of subthreshold oscillations of stellate cells (SCs). In this paper, we investigate the mechanism underlying these phenomena in the context of the model. More specifically, we studied the effects of both conductance- and current-based synaptic inputs at various maximal conductance values on an SC model. Our numerical simulations show that conductance-based synaptic inputs reduce the amplitude of the SC's subthreshold oscillations for low enough values of the maximal synaptic conductance but amplify these oscillations at a higher range. These results contrast with the experimental results.
2210.01053 | Lucrezia Carboni | Lucrezia Carboni, Michel Dojat, Sophie Achard | Nodal statistics-based equivalence relation for graph collections | 15 pages, 16 figures, | null | 10.1103/PhysRevE.107.014302 | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Node role explainability in complex networks is very difficult, yet is
crucial in different application domains such as social science, neurosciences
or computer science. Many efforts have been made on the quantification of hubs
revealing particular nodes in a network using a given structural property. Yet,
in several applications, when multiple instances of networks are available and
several structural properties appear to be relevant, the identification of node
roles remains largely unexplored. Inspired by the node automorphic
equivalence relation, we define an equivalence relation on graph nodes
associated with any collection of nodal statistics (i.e. any functions on the
node-set). This allows us to define new graph global measures: the power
coefficient, and the orthogonality score to evaluate the parsimony and
heterogeneity of a given nodal statistics collection. In addition, we introduce
a new method based on structural patterns to compare graphs that have the same
vertex set. This method assigns a value to a node to determine its role
distinctiveness in a graph family. Extensive numerical experiments with our
method are conducted on both generative graph models and real data concerning human
brain functional connectivity. The differences in nodal statistics are shown to
be dependent on the underlying graph structure. Comparisons between generative
models and real networks combining two different nodal statistics reveal the
complexity of human brain functional connectivity, with differences at both
global and nodal levels. Using a group of 200 healthy controls' connectivity
networks, our method computes high correspondence scores among the whole
population, to detect homotopy, and finally quantify differences between
comatose patients and healthy controls.
| [
{
"created": "Mon, 3 Oct 2022 16:10:07 GMT",
"version": "v1"
}
] | 2023-02-01 | [
[
"Carboni",
"Lucrezia",
""
],
[
"Dojat",
"Michel",
""
],
[
"Achard",
"Sophie",
""
]
] | Node role explainability in complex networks is very difficult, yet is crucial in different application domains such as social science, neurosciences or computer science. Many efforts have been made on the quantification of hubs revealing particular nodes in a network using a given structural property. Yet, in several applications, when multiple instances of networks are available and several structural properties appear to be relevant, the identification of node roles remains largely unexplored. Inspired by the node automorphic equivalence relation, we define an equivalence relation on graph nodes associated with any collection of nodal statistics (i.e. any functions on the node-set). This allows us to define new graph global measures: the power coefficient, and the orthogonality score to evaluate the parsimony and heterogeneity of a given nodal statistics collection. In addition, we introduce a new method based on structural patterns to compare graphs that have the same vertex set. This method assigns a value to a node to determine its role distinctiveness in a graph family. Extensive numerical experiments with our method are conducted on both generative graph models and real data concerning human brain functional connectivity. The differences in nodal statistics are shown to be dependent on the underlying graph structure. Comparisons between generative models and real networks combining two different nodal statistics reveal the complexity of human brain functional connectivity, with differences at both global and nodal levels. Using a group of 200 healthy controls' connectivity networks, our method computes high correspondence scores among the whole population, to detect homotopy, and finally quantify differences between comatose patients and healthy controls.
1310.3180 | Filipe Tostevin | Alexander Buchner, Filipe Tostevin, Ulrich Gerland | Clustering and optimal arrangement of enzymes in reaction-diffusion
systems | 5 pages, 4 figures + 3 pages supplementary material | Phys. Rev. Lett. 110, 208104 (2013) | 10.1103/PhysRevLett.110.208104 | null | q-bio.MN q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Enzymes within biochemical pathways are often colocalized, yet the
consequences of specific spatial enzyme arrangements remain poorly understood.
We study the impact of enzyme arrangement on reaction efficiency within a
reaction-diffusion model. The optimal arrangement transitions from a cluster to
a distributed profile as a single parameter, which controls the probability of
reaction versus diffusive loss of pathway intermediates, is varied. We
introduce the concept of enzyme exposure to explain how this transition arises
from the stochastic nature of molecular reactions and diffusion.
| [
{
"created": "Fri, 11 Oct 2013 15:57:59 GMT",
"version": "v1"
}
] | 2013-10-14 | [
[
"Buchner",
"Alexander",
""
],
[
"Tostevin",
"Filipe",
""
],
[
"Gerland",
"Ulrich",
""
]
] | Enzymes within biochemical pathways are often colocalized, yet the consequences of specific spatial enzyme arrangements remain poorly understood. We study the impact of enzyme arrangement on reaction efficiency within a reaction-diffusion model. The optimal arrangement transitions from a cluster to a distributed profile as a single parameter, which controls the probability of reaction versus diffusive loss of pathway intermediates, is varied. We introduce the concept of enzyme exposure to explain how this transition arises from the stochastic nature of molecular reactions and diffusion. |
1404.1360 | Lester Ingber | Lester Ingber | Biological Impact on Military Intelligence: Application or Metaphor? | Most current paper can be downloaded as
http://ingber.com/combat14_milint.pdf. arXiv admin note: substantial text
overlap with arXiv:cs/0607103, arXiv:1105.2352, arXiv:cs/0612087 | null | null | Ingber:2014:BIMI | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ideas by Statistical Mechanics (ISM) is a generic program to model evolution
and propagation of ideas/patterns throughout populations subjected to
endogenous and exogenous interactions. The program is based on the author's
work in Statistical Mechanics of Neocortical Interactions (SMNI). This product
can be used for decision support for projects ranging from diplomatic,
information, military, and economic (DIME) factors of propagation/evolution of
ideas, to commercial sales, trading indicators across sectors of financial
markets, advertising and political campaigns, etc. It seems appropriate to base
an approach for propagation of ideas on the only system so far demonstrated to
develop and nurture ideas, i.e., the neocortical brain. The issue here is
whether such biological intelligence is a valid application to military
intelligence, or is it simply a metaphor?
| [
{
"created": "Fri, 4 Apr 2014 19:38:41 GMT",
"version": "v1"
}
] | 2014-04-07 | [
[
"Ingber",
"Lester",
""
]
] | Ideas by Statistical Mechanics (ISM) is a generic program to model evolution and propagation of ideas/patterns throughout populations subjected to endogenous and exogenous interactions. The program is based on the author's work in Statistical Mechanics of Neocortical Interactions (SMNI). This product can be used for decision support for projects ranging from diplomatic, information, military, and economic (DIME) factors of propagation/evolution of ideas, to commercial sales, trading indicators across sectors of financial markets, advertising and political campaigns, etc. It seems appropriate to base an approach for propagation of ideas on the only system so far demonstrated to develop and nurture ideas, i.e., the neocortical brain. The issue here is whether such biological intelligence is a valid application to military intelligence, or is it simply a metaphor? |
1805.12005 | Roseric Azondekon | Roseric Azondekon, Zachary James Harper, and Charles Michael Welzig | Combined MEG and fMRI Exponential Random Graph Modeling for inferring
functional Brain Connectivity | 22 pages | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Connectomes estimated by means of neuroimaging techniques have enriched
our knowledge of the organizational properties of the brain, leading to the
development of network-based clinical diagnostics. Unfortunately, to date, many
of those network-based diagnostic tools, based on the mere
description of isolated instances of observed connectomes, are noisy estimates
of the true connectivity network. Modeling brain connectivity networks is
therefore important to better explain the functional organization of the brain
and allow inference of specific brain properties. In this report, we present
pilot results on the modeling of combined MEG and fMRI neuroimaging data
acquired during an n-back memory task experiment. We adopted a pooled
Exponential Random Graph Model (ERGM) as a network statistical model to capture
the underlying process in functional brain networks of 9 subjects' MEG and fMRI
data out of 32 during a 0-back vs 2-back memory task experiment. Our results
suggested strong evidence that all the functional connectomes of the 9 subjects
have small-world properties. A group-level comparison of the
conditions pairwise showed no significant difference in the functional
connectomes across the subjects. Our pooled ERGMs successfully reproduced
important brain properties such as functional segregation and functional
integration. However, the ERGMs reproducing the functional segregation of the
brain networks discriminated between the 0-back and 2-back conditions, while the
models reproducing both properties failed to successfully discriminate between
both conditions. Our results are promising and would improve in robustness with
a larger sample size. Nevertheless, our pilot results tend to support previous
findings that functional segregation and integration are sufficient to
statistically reproduce the main properties of brain networks.
| [
{
"created": "Wed, 23 May 2018 18:59:31 GMT",
"version": "v1"
}
] | 2018-05-31 | [
[
"Azondekon",
"Roseric",
""
],
[
"Harper",
"Zachary James",
""
],
[
"Welzig",
"Charles Michael",
""
]
] | Connectomes estimated by means of neuroimaging techniques have enriched our knowledge of the organizational properties of the brain, leading to the development of network-based clinical diagnostics. Unfortunately, to date, many of those network-based diagnostic tools, based on the mere description of isolated instances of observed connectomes, are noisy estimates of the true connectivity network. Modeling brain connectivity networks is therefore important to better explain the functional organization of the brain and allow inference of specific brain properties. In this report, we present pilot results on the modeling of combined MEG and fMRI neuroimaging data acquired during an n-back memory task experiment. We adopted a pooled Exponential Random Graph Model (ERGM) as a network statistical model to capture the underlying process in functional brain networks of 9 subjects' MEG and fMRI data out of 32 during a 0-back vs 2-back memory task experiment. Our results suggested strong evidence that all the functional connectomes of the 9 subjects have small-world properties. A group-level comparison of the conditions pairwise showed no significant difference in the functional connectomes across the subjects. Our pooled ERGMs successfully reproduced important brain properties such as functional segregation and functional integration. However, the ERGMs reproducing the functional segregation of the brain networks discriminated between the 0-back and 2-back conditions, while the models reproducing both properties failed to successfully discriminate between both conditions. Our results are promising and would improve in robustness with a larger sample size. Nevertheless, our pilot results tend to support previous findings that functional segregation and integration are sufficient to statistically reproduce the main properties of brain networks.
1406.4069 | Bryan Daniels | Bryan C. Daniels and Ilya Nemenman | Efficient inference of parsimonious phenomenological models of cellular
dynamics using S-systems and alternating regression | 14 pages, 2 figures | null | 10.1371/journal.pone.0119821 | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The nonlinearity of dynamics in systems biology makes it hard to infer them
from experimental data. Simple linear models are computationally efficient, but
cannot incorporate these important nonlinearities. An adaptive method based on
the S-system formalism, which is a sensible representation of nonlinear
mass-action kinetics typically found in cellular dynamics, maintains the
efficiency of linear regression. We combine this approach with adaptive model
selection to obtain efficient and parsimonious representations of cellular
dynamics. The approach is tested by inferring the dynamics of yeast glycolysis
from simulated data. With little computing time, it produces dynamical models
with high predictive power and with structural complexity adapted to the
difficulty of the inference problem.
| [
{
"created": "Mon, 16 Jun 2014 17:01:45 GMT",
"version": "v1"
}
] | 2015-08-19 | [
[
"Daniels",
"Bryan C.",
""
],
[
"Nemenman",
"Ilya",
""
]
] | The nonlinearity of dynamics in systems biology makes it hard to infer them from experimental data. Simple linear models are computationally efficient, but cannot incorporate these important nonlinearities. An adaptive method based on the S-system formalism, which is a sensible representation of nonlinear mass-action kinetics typically found in cellular dynamics, maintains the efficiency of linear regression. We combine this approach with adaptive model selection to obtain efficient and parsimonious representations of cellular dynamics. The approach is tested by inferring the dynamics of yeast glycolysis from simulated data. With little computing time, it produces dynamical models with high predictive power and with structural complexity adapted to the difficulty of the inference problem. |
1909.05566 | Jean-Louis Milan | Ian Manifacier (ISM, AMU), Kevin Beussman, Sangyoon Han, Nathan
Sniadecki, Imad About (IMEB, ISM, AMU), Jean-Louis Milan (ISM, AMU) | The consequence of substrates of large-scale rigidity on actin network
tension in adherent cells | null | Computer Methods in Biomechanics and Biomedical Engineering,
Taylor & Francis, 2019, 22 (13), pp.1073-1082 | 10.1080/10255842.2019.1629428 | null | q-bio.CB physics.bio-ph q-bio.SC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is compelling evidence that substrate stiffness affects cell adhesion
as well as cytoskeleton organization and contractile activity. This work was
designed to study the cytoskeletal contractile activity of cells plated on
microposts of different stiffness using a numerical model simulating the
intracellular tension of individual cells. We allowed cells to adhere onto
micropost substrates of various rigidities and used experimental traction force
data to infer cell contractility using a numerical model. The model
discriminates between the influence of substrate stiffness on cell tension and
shows that higher substrate stiffness leads to an increase in intracellular
tension. The strength of this model is its ability to calculate the mechanical
state of each cell in accordance with its individual cytoskeletal structure. This
is achieved by regenerating a numerical cytoskeleton based
| [
{
"created": "Thu, 12 Sep 2019 11:15:22 GMT",
"version": "v1"
}
] | 2019-09-13 | [
[
"Manifacier",
"Ian",
"",
"ISM, AMU"
],
[
"Beussman",
"Kevin",
"",
"IMEB, ISM, AMU"
],
[
"Han",
"Sangyoon",
"",
"IMEB, ISM, AMU"
],
[
"Sniadecki",
"Nathan",
"",
"IMEB, ISM, AMU"
],
[
"About",
"Imad",
"",
"IMEB, ISM, AMU"
],
[
"Milan",
"Jean-Louis",
"",
"ISM, AMU"
]
] | There is compelling evidence that substrate stiffness affects cell adhesion as well as cytoskeleton organization and contractile activity. This work was designed to study the cytoskeletal contractile activity of cells plated on microposts of different stiffness using a numerical model simulating the intracellular tension of individual cells. We allowed cells to adhere onto micropost substrates of various rigidities and used experimental traction force data to infer cell contractility using a numerical model. The model discriminates between the influence of substrate stiffness on cell tension and shows that higher substrate stiffness leads to an increase in intracellular tension. The strength of this model is its ability to calculate the mechanical state of each cell in accordance with its individual cytoskeletal structure. This is achieved by regenerating a numerical cytoskeleton based
2302.11345 | Rebecca Crossley | Rebecca M. Crossley, Philip K. Maini, Tommaso Lorenzi, Ruth E. Baker | Travelling waves in a coarse-grained model of volume-filling cell
invasion: Simulations and comparisons | 28 pages, 12 figures | null | null | null | q-bio.CB math.AP q-bio.PE q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many reaction-diffusion models produce travelling wave solutions that can be
interpreted as waves of invasion in biological scenarios such as wound healing
or tumour growth. These partial differential equation models have since been
adapted to describe the interactions between cells and extracellular matrix
(ECM), using a variety of different underlying assumptions. In this work, we
derive a system of reaction-diffusion equations, with cross-species
density-dependent diffusion, by coarse-graining an agent-based, volume-filling
model of cell invasion into ECM. We study the resulting travelling wave
solutions both numerically and analytically across various parameter regimes.
Subsequently, we perform a systematic comparison between the behaviours
observed in this model and those predicted by simpler models in the literature
that do not take into account volume-filling effects in the same way. Our study
justifies the use of some of these simpler, more analytically tractable models
in reproducing the qualitative properties of the solutions in some parameter
regimes, but it also reveals some interesting properties arising from the
introduction of cell and ECM volume-filling effects, where standard model
simplifications might not be appropriate.
| [
{
"created": "Wed, 22 Feb 2023 12:39:48 GMT",
"version": "v1"
},
{
"created": "Fri, 30 Jun 2023 09:06:58 GMT",
"version": "v2"
}
] | 2023-07-03 | [
[
"Crossley",
"Rebecca M.",
""
],
[
"Maini",
"Philip K.",
""
],
[
"Lorenzi",
"Tommaso",
""
],
[
"Baker",
"Ruth E.",
""
]
] | Many reaction-diffusion models produce travelling wave solutions that can be interpreted as waves of invasion in biological scenarios such as wound healing or tumour growth. These partial differential equation models have since been adapted to describe the interactions between cells and extracellular matrix (ECM), using a variety of different underlying assumptions. In this work, we derive a system of reaction-diffusion equations, with cross-species density-dependent diffusion, by coarse-graining an agent-based, volume-filling model of cell invasion into ECM. We study the resulting travelling wave solutions both numerically and analytically across various parameter regimes. Subsequently, we perform a systematic comparison between the behaviours observed in this model and those predicted by simpler models in the literature that do not take into account volume-filling effects in the same way. Our study justifies the use of some of these simpler, more analytically tractable models in reproducing the qualitative properties of the solutions in some parameter regimes, but it also reveals some interesting properties arising from the introduction of cell and ECM volume-filling effects, where standard model simplifications might not be appropriate. |
1608.03993 | Elisenda Feliu | Carsten Conradi, Elisenda Feliu, Maya Mincheva, Carsten Wiuf | Identifying parameter regions for multistationarity | In this version the paper has been substantially rewritten and
reorganised. Theorem 1 has been reformulated and Corollary 1 added | null | 10.1371/journal.pcbi.1005751 | null | q-bio.MN math.AG math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mathematical modelling has become an established tool for studying the
dynamics of biological systems. Current applications range from building models
that reproduce quantitative data to identifying systems with predefined
qualitative features, such as switching behaviour, bistability or oscillations.
Mathematically, the latter question amounts to identifying parameter values
associated with a given qualitative feature.
We introduce a procedure to partition the parameter space of a parameterized
system of ordinary differential equations into regions for which the system has
a unique or multiple equilibria. The procedure is based on the computation of
the Brouwer degree, and it creates a multivariate polynomial with
parameter-dependent coefficients. The signs of the coefficients determine parameter
regions with and without multistationarity. A particular strength of the
procedure is the avoidance of numerical analysis and parameter sampling.
The procedure consists of a number of steps. Each of these steps might be
addressed algorithmically using various computer programs and available
software, or manually. We demonstrate our procedure on several models of gene
transcription and cell signalling, and show that in many cases we obtain a
complete partitioning of the parameter space with respect to multistationarity.
| [
{
"created": "Sat, 13 Aug 2016 15:23:24 GMT",
"version": "v1"
},
{
"created": "Sat, 6 May 2017 13:05:48 GMT",
"version": "v2"
}
] | 2018-02-07 | [
[
"Conradi",
"Carsten",
""
],
[
"Feliu",
"Elisenda",
""
],
[
"Mincheva",
"Maya",
""
],
[
"Wiuf",
"Carsten",
""
]
] | Mathematical modelling has become an established tool for studying the dynamics of biological systems. Current applications range from building models that reproduce quantitative data to identifying systems with predefined qualitative features, such as switching behaviour, bistability or oscillations. Mathematically, the latter question amounts to identifying parameter values associated with a given qualitative feature. We introduce a procedure to partition the parameter space of a parameterized system of ordinary differential equations into regions for which the system has a unique or multiple equilibria. The procedure is based on the computation of the Brouwer degree, and it creates a multivariate polynomial with parameter-dependent coefficients. The signs of the coefficients determine parameter regions with and without multistationarity. A particular strength of the procedure is the avoidance of numerical analysis and parameter sampling. The procedure consists of a number of steps. Each of these steps might be addressed algorithmically using various computer programs and available software, or manually. We demonstrate our procedure on several models of gene transcription and cell signalling, and show that in many cases we obtain a complete partitioning of the parameter space with respect to multistationarity.
1001.0595 | Michael Shapiro | Michael Shapiro and Edgar Delgado-Eckert | Finding an individual's probability of infection in an SIR network is
NP-hard | 13 pages | null | null | null | q-bio.PE cs.CC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The celebrated Kermack-McKendrick model of epidemics studies the transmission
of a disease in a population where each individual is initially susceptible
(S), may become infective (I) and then removed or recovered (R) and plays no
further epidemiological role. This ODE model arises as the limiting case of a
network model where each individual has an equal chance of infecting every
other. More recent work gives explicit consideration to the network of social
interaction and attendant probability of transmission for each interacting
pair. The state of such a network is an assignment of the values {S,I,R} to its
members. Given such a network, an initial state and a particular susceptible
individual, we would like to compute their probability of becoming infected in
the course of an epidemic. It turns out that this problem is NP-hard. In
particular, it belongs in a class of problems all of whose known solutions
require an exponential amount of computation and for which it is unlikely that
there will be more efficient solutions.
| [
{
"created": "Mon, 4 Jan 2010 22:34:00 GMT",
"version": "v1"
},
{
"created": "Tue, 11 May 2010 18:24:03 GMT",
"version": "v2"
},
{
"created": "Sun, 22 Aug 2010 03:15:14 GMT",
"version": "v3"
},
{
"created": "Tue, 27 Sep 2011 16:11:14 GMT",
"version": "v4"
}
] | 2015-03-13 | [
[
"Shapiro",
"Michael",
""
],
[
"Delgado-Eckert",
"Edgar",
""
]
] | The celebrated Kermack-McKendrick model of epidemics studies the transmission of a disease in a population where each individual is initially susceptible (S), may become infective (I) and then removed or recovered (R) and plays no further epidemiological role. This ODE model arises as the limiting case of a network model where each individual has an equal chance of infecting every other. More recent work gives explicit consideration to the network of social interaction and attendant probability of transmission for each interacting pair. The state of such a network is an assignment of the values {S,I,R} to its members. Given such a network, an initial state and a particular susceptible individual, we would like to compute their probability of becoming infected in the course of an epidemic. It turns out that this problem is NP-hard. In particular, it belongs in a class of problems all of whose known solutions require an exponential amount of computation and for which it is unlikely that there will be more efficient solutions.
2003.09260 | Olivier Colliot | Alexandre Morin (ARAMIS), Jorge Samper-Gonz\'alez (ARAMIS), Anne
Bertrand (ARAMIS), Sebastian Stroer, Didier Dormont (ICM, ARAMIS), Aline
Mendes, Pierrick Coup\'e, Jamila Ahdidan, Marcel L\'evy (IM2A), Dalila Samri,
Harald Hampel, Bruno Dubois (APM), Marc Teichmann (FRONTlab), St\'ephane
Epelbaum (ARAMIS), Olivier Colliot (ARAMIS) | Accuracy of MRI Classification Algorithms in a Tertiary Memory Center
Clinical Routine Cohort | null | Journal of Alzheimer's Disease, IOS Press, 2020, pp.1-10 | 10.3233/JAD-190594 | null | q-bio.QM cs.CV eess.IV eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | BACKGROUND:Automated volumetry software (AVS) has recently become widely
available to neuroradiologists. MRI volumetry with AVS may support the
diagnosis of dementias by identifying regional atrophy. Moreover, automatic
classifiers using machine learning techniques have recently emerged as
promising approaches to assist diagnosis. However, the performance of both AVS
and automatic classifiers has been evaluated mostly in the artificial setting
of research datasets.OBJECTIVE:Our aim was to evaluate the performance of two
AVS and an automatic classifier in the clinical routine condition of a memory
clinic.METHODS:We studied 239 patients with cognitive troubles from a single
memory center cohort. Using clinical routine T1-weighted MRI, we evaluated the
classification performance of: 1) univariate volumetry using two AVS (volBrain
and Neuroreader$^{TM}$); 2) Support Vector Machine (SVM) automatic classifier,
using either the AVS volumes (SVM-AVS), or whole gray matter (SVM-WGM); 3)
reading by two neuroradiologists. The performance measure was the balanced
diagnostic accuracy. The reference standard was consensus diagnosis by three
neurologists using clinical, biological (cerebrospinal fluid) and imaging data
and following international criteria.RESULTS:Univariate AVS volumetry provided
only moderate accuracies (46% to 71% with hippocampal volume). The accuracy
improved when using SVM-AVS classifier (52% to 85%), becoming close to that of
SVM-WGM (52% to 90%). Visual classification by neuroradiologists ranged between
SVM-AVS and SVM-WGM.CONCLUSION:In the routine practice of a memory clinic, the
use of volumetric measures provided by AVS yields only moderate accuracy.
Automatic classifiers can improve accuracy and could be a useful tool to assist
diagnosis.
| [
{
"created": "Thu, 19 Mar 2020 08:44:46 GMT",
"version": "v1"
}
] | 2020-03-23 | [
[
"Morin",
"Alexandre",
"",
"ARAMIS"
],
[
"Samper-González",
"Jorge",
"",
"ARAMIS"
],
[
"Bertrand",
"Anne",
"",
"ARAMIS"
],
[
"Stroer",
"Sebastian",
"",
"ICM, ARAMIS"
],
[
"Dormont",
"Didier",
"",
"ICM, ARAMIS"
],
[
"Mendes",
"Aline",
"",
"IM2A"
],
[
"Coupé",
"Pierrick",
"",
"IM2A"
],
[
"Ahdidan",
"Jamila",
"",
"IM2A"
],
[
"Lévy",
"Marcel",
"",
"IM2A"
],
[
"Samri",
"Dalila",
"",
"APM"
],
[
"Hampel",
"Harald",
"",
"APM"
],
[
"Dubois",
"Bruno",
"",
"APM"
],
[
"Teichmann",
"Marc",
"",
"FRONTlab"
],
[
"Epelbaum",
"Stéphane",
"",
"ARAMIS"
],
[
"Colliot",
"Olivier",
"",
"ARAMIS"
]
] | BACKGROUND: Automated volumetry software (AVS) has recently become widely available to neuroradiologists. MRI volumetry with AVS may support the diagnosis of dementias by identifying regional atrophy. Moreover, automatic classifiers using machine learning techniques have recently emerged as promising approaches to assist diagnosis. However, the performance of both AVS and automatic classifiers has been evaluated mostly in the artificial setting of research datasets. OBJECTIVE: Our aim was to evaluate the performance of two AVS and an automatic classifier in the clinical routine condition of a memory clinic. METHODS: We studied 239 patients with cognitive troubles from a single memory center cohort. Using clinical routine T1-weighted MRI, we evaluated the classification performance of: 1) univariate volumetry using two AVS (volBrain and Neuroreader$^{TM}$); 2) Support Vector Machine (SVM) automatic classifier, using either the AVS volumes (SVM-AVS), or whole gray matter (SVM-WGM); 3) reading by two neuroradiologists. The performance measure was the balanced diagnostic accuracy. The reference standard was consensus diagnosis by three neurologists using clinical, biological (cerebrospinal fluid) and imaging data and following international criteria. RESULTS: Univariate AVS volumetry provided only moderate accuracies (46% to 71% with hippocampal volume). The accuracy improved when using SVM-AVS classifier (52% to 85%), becoming close to that of SVM-WGM (52% to 90%). Visual classification by neuroradiologists ranged between SVM-AVS and SVM-WGM. CONCLUSION: In the routine practice of a memory clinic, the use of volumetric measures provided by AVS yields only moderate accuracy. Automatic classifiers can improve accuracy and could be a useful tool to assist diagnosis.
2010.06357 | Rui Wang | Jiahui Chen, Kaifu Gao, Rui Wang, Guowei Wei | Prediction and mitigation of mutation threats to COVID-19 vaccines and
antibody therapies | 28 pages, 17 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Antibody therapeutics and vaccines are among our last resort to end the
raging COVID-19 pandemic. They, however, are prone to over 5,000 mutations on
the spike (S) protein uncovered by a Mutation Tracker based on over 200,000
genome isolates. It is imperative to understand how mutations would impact
vaccines and antibodies in development. In this work, we study the
mechanism, frequency, and ratio of mutations on the S protein. Additionally, we
use 56 antibody structures and analyze their 2D and 3D characteristics.
Moreover, we predict the mutation-induced binding free energy (BFE) changes for
the complexes of S protein and antibodies or ACE2. By integrating genetics,
biophysics, deep learning, and algebraic topology, we reveal that most of the 462
mutations on the receptor-binding domain (RBD) will weaken the binding of S
protein and antibodies and disrupt the efficacy and reliability of antibody
therapies and vaccines. A list of 31 vaccine escape mutants is identified,
while many other disruptive mutations are detailed as well. We also unveil that
about 65\% of existing RBD mutations, including those variants recently found in
the United Kingdom (UK) and South Africa, are binding-strengthening mutations,
resulting in more infectious COVID-19 variants. We discover the disparity
between the extreme values of RBD mutation-induced BFE strengthening and
weakening of the bindings with antibodies and ACE2, suggesting that SARS-CoV-2
is at an advanced stage of evolution for human infection, while the human
immune system is able to produce optimized antibodies. This discovery implies
the vulnerability of current vaccines and antibody drugs to new mutations. Our
predictions were validated by comparison with more than 1,400 deep mutations on
the S protein RBD. Our results show the urgent need to develop new
mutation-resistant vaccines and antibodies and to prepare for seasonal
vaccinations.
| [
{
"created": "Tue, 13 Oct 2020 13:13:10 GMT",
"version": "v1"
},
{
"created": "Tue, 9 Mar 2021 05:05:39 GMT",
"version": "v2"
}
] | 2021-03-10 | [
[
"Chen",
"Jiahui",
""
],
[
"Gao",
"Kaifu",
""
],
[
"Wang",
"Rui",
""
],
[
"Wei",
"Guowei",
""
]
] | Antibody therapeutics and vaccines are among our last resort to end the raging COVID-19 pandemic. They, however, are prone to over 5,000 mutations on the spike (S) protein uncovered by a Mutation Tracker based on over 200,000 genome isolates. It is imperative to understand how mutations would impact vaccines and antibodies in development. In this work, we study the mechanism, frequency, and ratio of mutations on the S protein. Additionally, we use 56 antibody structures and analyze their 2D and 3D characteristics. Moreover, we predict the mutation-induced binding free energy (BFE) changes for the complexes of S protein and antibodies or ACE2. By integrating genetics, biophysics, deep learning, and algebraic topology, we reveal that most of the 462 mutations on the receptor-binding domain (RBD) will weaken the binding of S protein and antibodies and disrupt the efficacy and reliability of antibody therapies and vaccines. A list of 31 vaccine escape mutants is identified, while many other disruptive mutations are detailed as well. We also unveil that about 65\% of existing RBD mutations, including those variants recently found in the United Kingdom (UK) and South Africa, are binding-strengthening mutations, resulting in more infectious COVID-19 variants. We discover the disparity between the extreme values of RBD mutation-induced BFE strengthening and weakening of the bindings with antibodies and ACE2, suggesting that SARS-CoV-2 is at an advanced stage of evolution for human infection, while the human immune system is able to produce optimized antibodies. This discovery implies the vulnerability of current vaccines and antibody drugs to new mutations. Our predictions were validated by comparison with more than 1,400 deep mutations on the S protein RBD. Our results show the urgent need to develop new mutation-resistant vaccines and antibodies and to prepare for seasonal vaccinations.
1811.09714 | C\u{a}t\u{a}lina Cangea | C\u{a}t\u{a}lina Cangea, Arturas Grauslys, Pietro Li\`o, Francesco
Falciani | Structure-Based Networks for Drug Validation | Machine Learning for Health (ML4H) Workshop at NeurIPS 2018
arXiv:1811.07216 | null | null | ML4H/2018/89 | q-bio.QM cs.AI cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Classifying chemicals according to putative modes of action (MOAs) is of
paramount importance in the context of risk assessment. However, current
methods are only able to handle a very small proportion of the existing
chemicals. We address this issue by proposing an integrative deep learning
architecture that learns a joint representation from molecular structures of
drugs and their effects on human cells. Our choice of architecture is motivated
by the significant influence of a drug's chemical structure on its MOA. We
improve on the strong ability of a unimodal architecture (F1 score of 0.803) to
classify drugs by their toxic MOAs (Verhaar scheme) through adding another
learning stream that processes transcriptional responses of human cells
affected by drugs. Our integrative model achieves an even higher classification
performance on the LINCS L1000 dataset - the error is reduced by 4.6%. We
believe that our method can be used to extend the current Verhaar scheme and
constitute a basis for fast drug validation and risk assessment.
| [
{
"created": "Wed, 21 Nov 2018 12:39:19 GMT",
"version": "v1"
}
] | 2018-11-28 | [
[
"Cangea",
"Cătălina",
""
],
[
"Grauslys",
"Arturas",
""
],
[
"Liò",
"Pietro",
""
],
[
"Falciani",
"Francesco",
""
]
] | Classifying chemicals according to putative modes of action (MOAs) is of paramount importance in the context of risk assessment. However, current methods are only able to handle a very small proportion of the existing chemicals. We address this issue by proposing an integrative deep learning architecture that learns a joint representation from molecular structures of drugs and their effects on human cells. Our choice of architecture is motivated by the significant influence of a drug's chemical structure on its MOA. We improve on the strong ability of a unimodal architecture (F1 score of 0.803) to classify drugs by their toxic MOAs (Verhaar scheme) through adding another learning stream that processes transcriptional responses of human cells affected by drugs. Our integrative model achieves an even higher classification performance on the LINCS L1000 dataset - the error is reduced by 4.6%. We believe that our method can be used to extend the current Verhaar scheme and constitute a basis for fast drug validation and risk assessment. |
1409.7774 | Victor Garcia | Victor Garcia and Roland Regoes | The effect of interference on the CD8+ T cell escape rates in HIV | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In early HIV infection, the virus population escapes from multiple CD8+ cell
responses. The later an escape mutation emerges, the slower it outgrows its
competition, i.e., the escape rate is lower. This pattern could indicate that
the strength of the CD8+ cell responses is waning, or that later viral escape
mutants carry a larger fitness cost. In this paper, we investigate whether the
pattern of decreasing escape rate could also be caused by genetic interference
among different escape strains. To this end, we developed a mathematical
multi-epitope model of HIV dynamics, which incorporates stochastic effects,
recombination and mutation. We used cumulative linkage disequilibrium measures
to quantify the amount of interference. We found that nearly synchronous,
similarly strong immune responses in two-locus systems enhance the generation
of genetic interference. This effect, combined with densely spaced sampling
times at the beginning of infection, leads to decreasing successive escape rate
estimates, even when there were no selection differences among alleles. These
predictions are supported by experimental data from one HIV-infected patient.
Thus, interference could explain why later escapes are slower. Considering
escape mutations in isolation, neglecting their genetic linkage, conceals the
underlying haplotype dynamics and can affect the estimation of the selective
pressure exerted by CD8+ cells. In systems in which multiple escape mutations
appear, the occurrence of interference dynamics should be assessed by measuring
the linkage between different escape mutations.
| [
{
"created": "Sat, 27 Sep 2014 06:53:13 GMT",
"version": "v1"
}
] | 2014-09-30 | [
[
"Garcia",
"Victor",
""
],
[
"Regoes",
"Roland",
""
]
] | In early HIV infection, the virus population escapes from multiple CD8+ cell responses. The later an escape mutation emerges, the slower it outgrows its competition, i. e. the escape rate is lower. This pattern could indicate that the strength of the CD8+ cell responses is waning, or that later viral escape mutants carry a larger fitness cost. In this paper, we investigate whether the pattern of decreasing escape rate could also be caused by genetic interference among different escape strains. To this end, we developed a mathematical multi-epitope model of HIV dynamics, which incorporates stochastic effects, recombination and mutation. We used cumulative linkage disequilibrium measures to quantify the amount of interference. We found that nearly synchronous, similarly strong immune responses in two-locus systems enhance the generation of genetic interference. This effect, combined with densely spaced sampling times at the beginning of infection, leads to decreasing successive escape rate estimates, even when there were no selection differences among alleles. These predictions are supported by experimental data from one HIV-infected patient. Thus, interference could explain why later escapes are slower. Considering escape mutations in isolation, neglecting their genetic linkage, conceals the underlying haplotype dynamics and can affect the estimation of the selective pressure exerted by CD8+ cells. In systems in which multiple escape mutations appear, the occurrence of interference dynamics should be assessed by measuring the linkage between different escape mutations. |
1404.4804 | Tom\'as Revilla | Tom\'as A. Revilla and Francisco Encinas-Viso | Dynamical transitions in a pollination--herbivory interaction | 20 pages, 7 main figures, 2 appendix figures | PLoS ONE. Vol. 10(2), pp. e0117964 (2015) | 10.1371/journal.pone.0117964 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Plant-pollinator associations are often seen as purely mutualistic, while in
reality they can be more complex. Indeed they may also display a diverse array
of antagonistic interactions, such as competition and victim--exploiter
interactions. In some cases mutualistic and antagonistic interactions are
carried out by the same species but at different life stages. As a consequence,
population structure affects the balance of inter-specific associations, a
topic that is receiving increased attention. In this paper, we developed a
model that captures the basic features of the interaction between a flowering
plant and an insect with a larval stage that feeds on the plant's vegetative
tissues (e.g. leaves) and an adult pollinator stage. Our model is able to
display a rich set of dynamics, the most remarkable of which involves
victim--exploiter oscillations that allow plants to attain abundances above
their carrying capacities, and the periodic alternation between states
dominated by mutualism or antagonism. Our study indicates that changes in the
insect's life cycle can modify the balance between mutualism and antagonism,
causing important qualitative changes in the interaction dynamics. These
changes in the life cycle could be caused by a variety of external drivers,
such as temperature, plant nutrients, pesticides and changes in the diet of
adult pollinators.
Keywords: mutualism, pollination, herbivory, insects, stage-structure,
oscillations
| [
{
"created": "Fri, 18 Apr 2014 14:57:59 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Oct 2014 16:14:40 GMT",
"version": "v2"
},
{
"created": "Wed, 20 Jul 2016 14:30:57 GMT",
"version": "v3"
}
] | 2016-07-21 | [
[
"Revilla",
"Tomás A.",
""
],
[
"Encinas-Viso",
"Francisco",
""
]
] | Plant-pollinator associations are often seen as purely mutualistic, while in reality they can be more complex. Indeed they may also display a diverse array of antagonistic interactions, such as competition and victim--exploiter interactions. In some cases mutualistic and antagonistic interactions are carried-out by the same species but at different life-stages. As a consequence, population structure affects the balance of inter-specific associations, a topic that is receiving increased attention. In this paper, we developed a model that captures the basic features of the interaction between a flowering plant and an insect with a larval stage that feeds on the plant's vegetative tissues (e.g. leaves) and an adult pollinator stage. Our model is able to display a rich set of dynamics, the most remarkable of which involves victim--exploiter oscillations that allow plants to attain abundances above their carrying capacities, and the periodic alternation between states dominated by mutualism or antagonism. Our study indicates that changes in the insect's life cycle can modify the balance between mutualism and antagonism, causing important qualitative changes in the interaction dynamics. These changes in the life cycle could be caused by a variety of external drivers, such as temperature, plant nutrients, pesticides and changes in the diet of adult pollinators. Abstract Keywords: mutualism, pollination, herbivory, insects, stage-structure, oscillations |
1007.4903 | Brian Williams Dr | Brian G. Williams | Optimal pooling strategies for laboratory testing | Three pages | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the optimal strategy for laboratory testing of biological samples
when we wish to know the results for each sample rather than the average
prevalence of positive samples. If the proportion of positive samples is low,
considerable resources may be devoted to testing samples, most of which are
negative. An attractive strategy is to pool samples. If the pooled samples test
positive, one must then test the individual samples; otherwise they can all be
assumed to be negative. The pool should be big enough to reduce the number of
tests but not so big that the pooled samples are almost all positive. We show
that if the prevalence of positive samples is greater than 30% it is never
worth pooling. From 30% down to 1% pools of size 4 are close to optimal. Below
1% substantial gains can be made by pooling, especially if the samples are
pooled twice. However, with large pools the sensitivity of the test will fall
correspondingly and this must be taken into consideration. We derive simple
expressions for the optimal pool size and for the corresponding proportion of
samples tested.
| [
{
"created": "Wed, 28 Jul 2010 09:03:01 GMT",
"version": "v1"
}
] | 2010-07-29 | [
[
"Williams",
"Brian G.",
""
]
] | We consider the optimal strategy for laboratory testing of biological samples when we wish to know the results for each sample rather than the average prevalence of positive samples. If the proportion of positive samples is low considerable resources may be devoted to testing samples most of which are negative. An attractive strategy is to pool samples. If the pooled samples test positive one must then test the individual samples, otherwise they can all be assumed to be negative. The pool should be big enough to reduce the number of tests but not so big that the pooled samples are almost all positive. We show that if the prevalence of positive samples is greater than 30% it is never worth pooling. From 30% down to 1% pools of size 4 are close to optimal. Below 1% substantial gains can be made by pooling, especially if the samples are pooled twice. However, with large pools the sensitivity of the test will fall correspondingly and this must be taken into consideration. We derive simple expressions for the optimal pool size and for the corresponding proportion of samples tested. |
1007.0300 | Michael Prentiss | Michael C. Prentiss, David J. Wales, Peter G. Wolynes | The Energy Landscape, Folding Pathways and the Kinetics of a Knotted
Protein | 19 pages | Prentiss MC, Wales DJ, Wolynes PG (2010) The Energy Landscape,
Folding Pathways and the Kinetics of a Knotted Protein. PLoS Comput Biol 6(7) | 10.1371/journal.pcbi.1000835 | null | q-bio.BM cond-mat.soft | http://creativecommons.org/licenses/by/3.0/ | The folding pathway and rate coefficients of the folding of a knotted protein
are calculated for a potential energy function with minimal energetic
frustration. A kinetic transition network is constructed using the discrete
path sampling approach, and the resulting potential energy surface is
visualized by constructing disconnectivity graphs. Owing to topological
constraints, the low-lying portion of the landscape consists of three distinct
regions, corresponding to the native knotted state and to configurations where
either the N- or C-terminus is not yet folded into the knot. The fastest
folding pathways from denatured states exhibit early formation of the
N-terminus portion of the knot and a rate-determining step where the C-terminus
is incorporated. The low-lying minima with the N-terminus knotted and the
C-terminus free therefore constitute an off-pathway intermediate for this
model. The insertion of both the N- and C-termini into the knot occurs late in
the folding process, creating large energy barriers that are the rate-limiting
steps. Compared to other proteins of a similar length, this system folds over
six orders of magnitude more slowly.
| [
{
"created": "Fri, 2 Jul 2010 05:49:16 GMT",
"version": "v1"
}
] | 2010-07-05 | [
[
"Prentiss",
"Michael C.",
""
],
[
"Wales",
"David J.",
""
],
[
"Wolynes",
"Peter G.",
""
]
] | The folding pathway and rate coefficients of the folding of a knotted protein are calculated for a potential energy function with minimal energetic frustration. A kinetic transition network is constructed using the discrete path sampling approach, and the resulting potential energy surface is visualized by constructing disconnectivity graphs. Owing to topological constraints, the low-lying portion of the landscape consists of three distinct regions, corresponding to the native knotted state and to configurations where either the N- or C-terminus is not yet folded into the knot. The fastest folding pathways from denatured states exhibit early formation of the N-terminus portion of the knot and a rate-determining step where the C-terminus is incorporated. The low-lying minima with the N-terminus knotted and the C-terminus free therefore constitute an off-pathway intermediate for this model. The insertion of both the N- and C-termini into the knot occur late in the folding process, creating large energy barriers that are the rate limiting steps in the folding process. When compared to other protein folding proteins of a similar length, this system folds over six orders of magnitude more slowly. |
1811.11688 | Jeremy Guillon | Jeremy Guillon, Mario Chavez, Federico Battiston, Yohan Attal,
Valentina La Corte, Michel Thiebaut de Schotten, Bruno Dubois, Denis
Schwartz, Olivier Colliot and Fabrizio De Vico Fallani | Disrupted core-periphery structure of multimodal brain networks in
Alzheimer's Disease | 5 figures, 1 table, 1 supplementary figure, 2 supplementary tables | Network Neuroscience 2019 | 10.1162/netn_a_00087 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In Alzheimer's disease (AD), the progressive atrophy leads to aberrant
network reconfigurations both at structural and functional levels. In such
network reorganization, the core and peripheral nodes appear to be crucial for
the prediction of clinical outcome due to their ability to influence
large-scale functional integration. However, the role of the different types of
brain connectivity in such prediction still remains unclear. Using a multiplex
network approach we integrated information from DWI, fMRI and MEG brain
connectivity to extract an enriched description of the core-periphery structure
in a group of AD patients and age-matched controls. Globally, the regional
coreness - i.e., the probability of a region to be in the multiplex core -
significantly decreased in AD patients as a result of the randomization process
initiated by the neurodegeneration. Locally, the most impacted areas were in
the core of the network - including temporal, parietal and occipital areas -
while we reported compensatory increments for the peripheral regions in the
sensorimotor system. Furthermore, these network changes significantly predicted
the cognitive and memory impairment of patients. Taken together, these results
indicate that a more accurate description of neurodegenerative diseases can be
obtained from the multimodal integration of neuroimaging-derived network data.
| [
{
"created": "Wed, 28 Nov 2018 17:14:48 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Dec 2018 16:41:21 GMT",
"version": "v2"
}
] | 2019-06-10 | [
[
"Guillon",
"Jeremy",
""
],
[
"Chavez",
"Mario",
""
],
[
"Battiston",
"Federico",
""
],
[
"Attal",
"Yohan",
""
],
[
"La Corte",
"Valentina",
""
],
[
"de Schotten",
"Michel Thiebaut",
""
],
[
"Dubois",
"Bruno",
""
],
[
"Schwartz",
"Denis",
""
],
[
"Colliot",
"Olivier",
""
],
[
"Fallani",
"Fabrizio De Vico",
""
]
] | In Alzheimer's disease (AD), the progressive atrophy leads to aberrant network reconfigurations both at structural and functional levels. In such network reorganization, the core and peripheral nodes appear to be crucial for the prediction of clinical outcome due to their ability to influence large-scale functional integration. However, the role of the different types of brain connectivity in such prediction still remains unclear. Using a multiplex network approach we integrated information from DWI, fMRI and MEG brain connectivity to extract an enriched description of the core-periphery structure in a group of AD patients and age-matched controls. Globally, the regional coreness - i.e., the probability of a region to be in the multiplex core - significantly decreased in AD patients as a result of the randomization process initiated by the neurodegeneration. Locally, the most impacted areas were in the core of the network - including temporal, parietal and occipital areas - while we reported compensatory increments for the peripheral regions in the sensorimotor system. Furthermore, these network changes significantly predicted the cognitive and memory impairment of patients. Taken together these results indicate that a more accurate description of neurodegenerative diseases can be obtained from the multimodal integration of neuroimaging-derived network data. |
2209.07563 | Bruce Pell | Bruce Pell, Samantha Brozak, Tin Phan, Fuqing Wu, Yang Kuang | The emergence of a virus variant: dynamics of a competition model with
cross-immunity time-delay validated by wastewater surveillance data for
COVID-19 | null | null | null | null | q-bio.PE math.DS physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | We consider the dynamics of a virus spreading through a population that
produces a mutant strain with the ability to infect individuals that were
infected with the established strain. Temporary cross-immunity is included
using a time delay, but is found to be a harmless delay. We provide some
sufficient conditions that guarantee local and global asymptotic stability of
the disease-free equilibrium and the two boundary equilibria when the two
strains outcompete one another. It is shown that, due to the immune evasion of
the emerging strain, the reproduction number of the emerging strain must be
significantly lower than that of the established strain for the local stability
of the established-strain-only boundary equilibrium. To analyze the unique
coexistence equilibrium we apply a quasi steady-state argument to reduce the
full model to a two-dimensional one that exhibits a globally asymptotically
stable established-strain-only equilibrium or a globally asymptotically stable
coexistence equilibrium. Our results indicate that the basic reproduction
numbers of both strains govern the overall dynamics, but in nontrivial ways due
to the inclusion of cross-immunity. The model is applied to study the emergence
of the SARS-CoV-2 Delta variant in the presence of the Alpha variant using
wastewater surveillance data from the Deer Island Treatment Plant in
Massachusetts, USA.
| [
{
"created": "Thu, 15 Sep 2022 19:02:19 GMT",
"version": "v1"
},
{
"created": "Wed, 15 Feb 2023 15:30:34 GMT",
"version": "v2"
}
] | 2023-02-16 | [
[
"Pell",
"Bruce",
""
],
[
"Brozak",
"Samantha",
""
],
[
"Phan",
"Tin",
""
],
[
"Wu",
"Fuqing",
""
],
[
"Kuang",
"Yang",
""
]
] | We consider the dynamics of a virus spreading through a population that produces a mutant strain with the ability to infect individuals that were infected with the established strain. Temporary cross-immunity is included using a time delay, but is found to be a harmless delay. We provide some sufficient conditions that guarantee local and global asymptotic stability of the disease-free equilibrium and the two boundary equilibria when the two strains outcompete one another. It is shown that, due to the immune evasion of the emerging strain, the reproduction number of the emerging strain must be significantly lower than that of the established strain for the local stability of the established-strain-only boundary equilibrium. To analyze the unique coexistence equilibrium we apply a quasi steady-state argument to reduce the full model to a two-dimensional one that exhibits a global asymptotically stable established-strain-only equilibrium or global asymptotically stable coexistence equilibrium. Our results indicate that the basic reproduction numbers of both strains govern the overall dynamics, but in nontrivial ways due to the inclusion of cross-immunity. The model is applied to study the emergence of the SARS-CoV-2 Delta variant in the presence of the Alpha variant using wastewater surveillance data from the Deer Island Treatment Plant in Massachusetts, USA. |
1803.03692 | Zhinus Marzi | Zhinus Marzi, Joao Hespanha and Upamanyu Madhow | On the information in spike timing: neural codes derived from
polychronous groups | null | null | null | null | q-bio.NC cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is growing evidence regarding the importance of spike timing in neural
information processing, with even a small number of spikes carrying
information, but computational models lag significantly behind those for rate
coding. Experimental evidence on neuronal behavior is consistent with the
dynamical and state dependent behavior provided by recurrent connections. This
motivates the minimalistic abstraction investigated in this paper, aimed at
providing insight into information encoding in spike timing via recurrent
connections. We employ information-theoretic techniques for a simple reservoir
model which encodes input spatiotemporal patterns into a sparse neural code,
translating the polychronous groups introduced by Izhikevich into codewords on
which we can perform standard vector operations. We show that the distance
properties of the code are similar to those for (optimal) random codes. In
particular, the code meets benchmarks associated with both linear
classification and capacity, with the latter scaling exponentially with
reservoir size.
| [
{
"created": "Fri, 9 Mar 2018 20:53:31 GMT",
"version": "v1"
}
] | 2018-03-13 | [
[
"Marzi",
"Zhinus",
""
],
[
"Hespanha",
"Joao",
""
],
[
"Madhow",
"Upamanyu",
""
]
] | There is growing evidence regarding the importance of spike timing in neural information processing, with even a small number of spikes carrying information, but computational models lag significantly behind those for rate coding. Experimental evidence on neuronal behavior is consistent with the dynamical and state dependent behavior provided by recurrent connections. This motivates the minimalistic abstraction investigated in this paper, aimed at providing insight into information encoding in spike timing via recurrent connections. We employ information-theoretic techniques for a simple reservoir model which encodes input spatiotemporal patterns into a sparse neural code, translating the polychronous groups introduced by Izhikevich into codewords on which we can perform standard vector operations. We show that the distance properties of the code are similar to those for (optimal) random codes. In particular, the code meets benchmarks associated with both linear classification and capacity, with the latter scaling exponentially with reservoir size. |
2401.16073 | Li Chen | Chenyang Zhao, Guozhong Zheng, Chun Zhang, Jiqiang Zhang, and Li Chen | Emergence of cooperation under punishment: A reinforcement learning
perspective | 7 pages, 6 figures | null | null | null | q-bio.PE cond-mat.dis-nn nlin.AO physics.soc-ph | http://creativecommons.org/licenses/by/4.0/ | Punishment is a common tactic to sustain cooperation and has been extensively
studied for a long time. While most previous game-theoretic work adopts
imitation learning, where players imitate the strategies of those who are better
off, the learning logic in the real world is often much more complex. In this work, we
turn to the reinforcement learning paradigm, where individuals make their
decisions based upon their past experience and long-term returns. Specifically,
we investigate the Prisoner's dilemma game with the Q-learning algorithm, and
cooperators probabilistically impose punishment on defectors in their
neighborhood. Interestingly, we find that punishment could lead to either
continuous or discontinuous cooperation phase transitions, and the nucleation
process of cooperation clusters is reminiscent of the liquid-gas transition.
The uncovered first-order phase transition indicates that great care needs to
be taken when implementing the punishment compared to the continuous scenario.
| [
{
"created": "Mon, 29 Jan 2024 11:30:35 GMT",
"version": "v1"
}
] | 2024-01-30 | [
[
"Zhao",
"Chenyang",
""
],
[
"Zheng",
"Guozhong",
""
],
[
"Zhang",
"Chun",
""
],
[
"Zhang",
"Jiqiang",
""
],
[
"Chen",
"Li",
""
]
] | Punishment is a common tactic to sustain cooperation and has been extensively studied for a long time. While most previous game-theoretic work adopts imitation learning, where players imitate the strategies of those who are better off, the learning logic in the real world is often much more complex. In this work, we turn to the reinforcement learning paradigm, where individuals make their decisions based upon their past experience and long-term returns. Specifically, we investigate the Prisoner's dilemma game with the Q-learning algorithm, and cooperators probabilistically impose punishment on defectors in their neighborhood. Interestingly, we find that punishment could lead to either continuous or discontinuous cooperation phase transitions, and the nucleation process of cooperation clusters is reminiscent of the liquid-gas transition. The uncovered first-order phase transition indicates that great care needs to be taken when implementing the punishment compared to the continuous scenario. |
1006.2490 | Sebastian Schreiber | Sebastian J. Schreiber and Maureen E. Ryan | Invasion speeds for structured populations in fluctuating environments | null | Theoretical Ecology November 2011, Volume 4, Issue 4, pp 423-434 | 10.1007/s12080-010-0098-5 | null | q-bio.PE q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We live in a time where climate models predict future increases in
environmental variability and biological invasions are becoming increasingly
frequent. A key to developing effective responses to biological invasions in
increasingly variable environments will be estimates of their rates of spatial
spread and the associated uncertainty of these estimates. Using stochastic,
stage-structured, integro-difference equation models, we show analytically that
invasion speeds are asymptotically normally distributed with a variance that
decreases in time. We apply our methods to a simple juvenile-adult model with
stochastic variation in reproduction and an illustrative example with published
data for the perennial herb, \emph{Calathea ovandensis}. These examples,
buttressed by additional analysis, reveal that increased variability in vital
rates simultaneously slows down invasions yet generates greater uncertainty about
rates of spatial spread. Moreover, while temporal autocorrelations in vital
rates inflate variability in invasion speeds, the effect of these
autocorrelations on the average invasion speed can be positive or negative
depending on life history traits and how well vital rates ``remember'' the
past.
| [
{
"created": "Sat, 12 Jun 2010 20:43:41 GMT",
"version": "v1"
}
] | 2015-12-16 | [
[
"Schreiber",
"Sebastian J.",
""
],
[
"Ryan",
"Maureen E.",
""
]
] | We live in a time where climate models predict future increases in environmental variability and biological invasions are becoming increasingly frequent. A key to developing effective responses to biological invasions in increasingly variable environments will be estimates of their rates of spatial spread and the associated uncertainty of these estimates. Using stochastic, stage-structured, integro-difference equation models, we show analytically that invasion speeds are asymptotically normally distributed with a variance that decreases in time. We apply our methods to a simple juvenile-adult model with stochastic variation in reproduction and an illustrative example with published data for the perennial herb, \emph{Calathea ovandensis}. These examples, buttressed by additional analysis, reveal that increased variability in vital rates simultaneously slows down invasions yet generates greater uncertainty about rates of spatial spread. Moreover, while temporal autocorrelations in vital rates inflate variability in invasion speeds, the effect of these autocorrelations on the average invasion speed can be positive or negative depending on life history traits and how well vital rates ``remember'' the past. |
2104.12356 | Michael X Cohen | Michael X Cohen | A tutorial on generalized eigendecomposition for denoising, contrast
enhancement, and dimension reduction in multichannel electrophysiology | null | null | null | null | q-bio.QM eess.SP | http://creativecommons.org/licenses/by-sa/4.0/ | The goal of this paper is to present a theoretical and practical introduction
to generalized eigendecomposition (GED), which is a robust and flexible
framework used for dimension reduction and source separation in multichannel
signal processing. In cognitive electrophysiology, GED is used to create
spatial filters that maximize a researcher-specified contrast. For example, one
may wish to exploit an assumption that different sources have different
frequency content, or that sources vary in magnitude across experimental
conditions. GED is fast and easy to compute, performs well in simulated and
real data, and is easily adaptable to a variety of specific research goals.
This paper introduces GED in a way that ties together myriad individual
publications and applications of GED in electrophysiology, and provides sample
MATLAB and Python code that can be tested and adapted. Practical considerations
and issues that often arise in applications are discussed.
| [
{
"created": "Mon, 26 Apr 2021 05:45:01 GMT",
"version": "v1"
},
{
"created": "Sat, 4 Sep 2021 08:21:33 GMT",
"version": "v2"
},
{
"created": "Fri, 28 Jan 2022 14:50:19 GMT",
"version": "v3"
}
] | 2022-01-31 | [
[
"Cohen",
"Michael X",
""
]
] | The goal of this paper is to present a theoretical and practical introduction to generalized eigendecomposition (GED), which is a robust and flexible framework used for dimension reduction and source separation in multichannel signal processing. In cognitive electrophysiology, GED is used to create spatial filters that maximize a researcher-specified contrast. For example, one may wish to exploit an assumption that different sources have different frequency content, or that sources vary in magnitude across experimental conditions. GED is fast and easy to compute, performs well in simulated and real data, and is easily adaptable to a variety of specific research goals. This paper introduces GED in a way that ties together myriad individual publications and applications of GED in electrophysiology, and provides sample MATLAB and Python code that can be tested and adapted. Practical considerations and issues that often arise in applications are discussed. |
1501.02366 | Ehtibar Dzhafarov | Ru Zhang and Ehtibar N. Dzhafarov | Noncontextuality with Marginal Selectivity in Reconstructing Mental
Architectures | published in Frontiers in Psychology: Cognition 1:12 doi:
10.3389/fpsyg.2015.00735 (special issue "Quantum Structures in Cognitive and
Social Science") | null | null | null | q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a general theory of series-parallel mental architectures with
selectively influenced stochastically non-independent components. A mental
architecture is a hypothetical network of processes aimed at performing a task,
of which we only observe the overall time it takes under variable parameters of
the task. It is usually assumed that the network contains several processes
selectively influenced by different experimental factors, and then the question
is asked as to how these processes are arranged within the network, e.g.,
whether they are concurrent or sequential. One way of doing this is to consider
the distribution functions for the overall processing time and compute certain
linear combinations thereof (interaction contrasts). The theory of selective
influences in psychology can be viewed as a special application of the
interdisciplinary theory of (non)contextuality having its origins and main
applications in quantum theory. In particular, lack of contextuality is
equivalent to the existence of a "hidden" random entity of which all the random
variables in play are functions. Consequently, for any given value of this
common random entity, the processing times and their compositions (minima,
maxima, or sums) become deterministic quantities. These quantities, in turn,
can be treated as random variables with (shifted) Heaviside distribution
functions, for which one can easily compute various linear combinations across
different treatments, including interaction contrasts. This mathematical fact
leads to a simple method, more general than the previously used ones, to
investigate and characterize the interaction contrast for different types of
series-parallel architectures.
| [
{
"created": "Sat, 10 Jan 2015 16:23:21 GMT",
"version": "v1"
},
{
"created": "Thu, 2 Apr 2015 12:45:54 GMT",
"version": "v2"
},
{
"created": "Sun, 17 May 2015 17:54:33 GMT",
"version": "v3"
},
{
"created": "Fri, 19 Jun 2015 02:53:17 GMT",
"version": "v4"
}
] | 2015-06-22 | [
[
"Zhang",
"Ru",
""
],
[
"Dzhafarov",
"Ehtibar N.",
""
]
] | We present a general theory of series-parallel mental architectures with selectively influenced stochastically non-independent components. A mental architecture is a hypothetical network of processes aimed at performing a task, of which we only observe the overall time it takes under variable parameters of the task. It is usually assumed that the network contains several processes selectively influenced by different experimental factors, and then the question is asked as to how these processes are arranged within the network, e.g., whether they are concurrent or sequential. One way of doing this is to consider the distribution functions for the overall processing time and compute certain linear combinations thereof (interaction contrasts). The theory of selective influences in psychology can be viewed as a special application of the interdisciplinary theory of (non)contextuality having its origins and main applications in quantum theory. In particular, lack of contextuality is equivalent to the existence of a "hidden" random entity of which all the random variables in play are functions. Consequently, for any given value of this common random entity, the processing times and their compositions (minima, maxima, or sums) become deterministic quantities. These quantities, in turn, can be treated as random variables with (shifted) Heaviside distribution functions, for which one can easily compute various linear combinations across different treatments, including interaction contrasts. This mathematical fact leads to a simple method, more general than the previously used ones, to investigate and characterize the interaction contrast for different types of series-parallel architectures. |
2011.06804 | Mauro Gobbi Dr. | Mauro Gobbi | Global warNing: challenges, threats and opportunities for ground beetles
(Coleoptera: Carabidae) in high altitude habitats | 6 figures. This is the pre-print version in the non-final form,
accepted for publication on Acta Zoologica Hungarica after the editing
changes are made | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The aim of this paper is to provide the first comprehensive synthesis of
ground beetle (Coleoptera: Carabidae) distribution in high altitude habitats.
Specifically, the attention is focused on the species assemblages living on the
most common ice-related mountain landforms (glaciers, debris-covered glaciers,
glacier forelands and rock glaciers) and on the challenges, threats and
opportunities carabids living in these habitats have to face in relation to the
ongoing climate warming. The suggested role of the ice-related alpine
landforms, as present climatic refugia for cold-adapted ground beetles, is
discussed. Finally, the need to develop a large-scale High-alpine Biodiversity
Monitoring Program to describe how current climate change is shaping the
distribution of high-altitude specialists is highlighted.
| [
{
"created": "Fri, 13 Nov 2020 08:15:32 GMT",
"version": "v1"
}
] | 2020-11-16 | [
[
"Gobbi",
"Mauro",
""
]
] | The aim of this paper is to provide the first comprehensive synthesis of ground beetle (Coleoptera: Carabidae) distribution in high altitude habitats. Specifically, the attention is focused on the species assemblages living on the most common ice-related mountain landforms (glaciers, debris-covered glaciers, glacier forelands and rock glaciers) and on the challenges, threats and opportunities carabids living in these habitats have to face in relation to the ongoing climate warming. The suggested role of the ice-related alpine landforms, as present climatic refugia for cold-adapted ground beetles, is discussed. Finally, the need to develop a large-scale High-alpine Biodiversity Monitoring Program to describe how current climate change is shaping the distribution of high-altitude specialists is highlighted. |
2011.02534 | Carlos Toscano-Ochoa | Carlos Toscano-Ochoa, Jordi Garcia-Ojalvo | A tunable population timer in multicellular consortia | 11 pages, 5 figures | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Processing time-dependent information requires cells to quantify the duration
of past regulatory events and program the time span of future signals. At the
single-cell level, timer mechanisms can be implemented with genetic circuits:
sets of genes connected to achieve specific tasks. However, such systems are
difficult to implement in single cells due to saturation in molecular
components and stochasticity in the limited intracellular space. Multicellular
implementations, on the other hand, outsource some of the components of
information-processing circuits to the extracellular space, and thereby might
escape those constraints. Here we develop a theoretical framework, based on a
trilinear coordinate representation, to study the collective behavior of a
cellular population composed of three cell types under stationary conditions.
This framework reveals that distributing different processes (in our case the
production, detection and degradation of a time-encoding signal) across
distinct cell types enables the robust implementation of a multicellular timer.
Our analysis also shows the circuit to be easily tunable by varying the
cellular composition of the consortium.
| [
{
"created": "Wed, 4 Nov 2020 21:03:36 GMT",
"version": "v1"
},
{
"created": "Sat, 13 Mar 2021 15:27:49 GMT",
"version": "v2"
}
] | 2021-03-16 | [
[
"Toscano-Ochoa",
"Carlos",
""
],
[
"Garcia-Ojalvo",
"Jordi",
""
]
] | Processing time-dependent information requires cells to quantify the duration of past regulatory events and program the time span of future signals. At the single-cell level, timer mechanisms can be implemented with genetic circuits: sets of genes connected to achieve specific tasks. However, such systems are difficult to implement in single cells due to saturation in molecular components and stochasticity in the limited intracellular space. Multicellular implementations, on the other hand, outsource some of the components of information-processing circuits to the extracellular space, and thereby might escape those constraints. Here we develop a theoretical framework, based on a trilinear coordinate representation, to study the collective behavior of a cellular population composed of three cell types under stationary conditions. This framework reveals that distributing different processes (in our case the production, detection and degradation of a time-encoding signal) across distinct cell types enables the robust implementation of a multicellular timer. Our analysis also shows the circuit to be easily tunable by varying the cellular composition of the consortium. |
2105.07833 | Govind Kaigala | Robert D. Lovchik, David Taylor, Govind Kaigala | Rapid micro-immunohistochemistry | null | null | null | null | q-bio.TO q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | We present a new and versatile implementation of rapid and localized
immunohistochemical staining of tissue sections. Immunohistochemistry (IHC)
allows the detection of specific proteins on tissue sections and comprises a sequence
of specific biochemical reactions. For the rapid implementation of IHC, we
fabricated horizontally oriented microfluidic probes (MFP) with functionally
designed apertures to enable square and circular footprints, which we employ to
locally expose a tissue to time-optimized sequences of different biochemicals.
| [
{
"created": "Mon, 17 May 2021 13:47:30 GMT",
"version": "v1"
}
] | 2021-05-18 | [
[
"Lovchik",
"Robert D.",
""
],
[
"Taylor",
"David",
""
],
[
"Kaigala",
"Govind",
""
]
] | We present a new and versatile implementation of rapid and localized immunohistochemical staining of tissue sections. Immunohistochemistry (IHC) allows the detection of specific proteins on tissue sections and comprises a sequence of specific biochemical reactions. For the rapid implementation of IHC, we fabricated horizontally oriented microfluidic probes (MFP) with functionally designed apertures to enable square and circular footprints, which we employ to locally expose a tissue to time-optimized sequences of different biochemicals. |
1805.11704 | Arna Ghosh | Arna Ghosh, Fabien dal Maso, Marc Roig, Georgios D Mitsis and
Marie-H\'el\`ene Boudrias | Deep Semantic Architecture with discriminative feature visualization for
neuroimage analysis | null | null | null | null | q-bio.NC cs.AI q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neuroimaging data analysis often involves \emph{a-priori} selection of data
features to study the underlying neural activity. Since this could lead to
sub-optimal feature selection and thereby prevent the detection of subtle
patterns in neural activity, data-driven methods have recently gained
popularity for optimizing neuroimaging data analysis pipelines and thereby
improving our understanding of neural mechanisms. In this context, we developed
a deep convolutional architecture that can identify discriminating patterns in
neuroimaging data and applied it to electroencephalography (EEG) recordings
collected from 25 subjects performing a hand motor task before and after a rest
period or a bout of exercise. The deep network was trained to classify subjects
into exercise and control groups based on differences in their EEG signals.
Subsequently, we developed a novel method termed the cue-combination for Class
Activation Map (ccCAM), which enabled us to identify discriminating
spatio-temporal features within definite frequency bands (23--33 Hz) and assess
the effects of exercise on the brain. Additionally, the proposed architecture
allowed the visualization of the differences in the propagation of underlying
neural activity across the cortex between the two groups, for the first time to
our knowledge. Our results demonstrate the feasibility of using deep network
architectures for neuroimaging analysis in different contexts, such as the
identification of robust brain biomarkers to better characterize and
potentially treat neurological disorders.
| [
{
"created": "Tue, 29 May 2018 20:55:09 GMT",
"version": "v1"
},
{
"created": "Fri, 29 Jun 2018 20:17:16 GMT",
"version": "v2"
}
] | 2018-07-03 | [
[
"Ghosh",
"Arna",
""
],
[
"Maso",
"Fabien dal",
""
],
[
"Roig",
"Marc",
""
],
[
"Mitsis",
"Georgios D",
""
],
[
"Boudrias",
"Marie-Hélène",
""
]
] | Neuroimaging data analysis often involves \emph{a-priori} selection of data features to study the underlying neural activity. Since this could lead to sub-optimal feature selection and thereby prevent the detection of subtle patterns in neural activity, data-driven methods have recently gained popularity for optimizing neuroimaging data analysis pipelines and thereby improving our understanding of neural mechanisms. In this context, we developed a deep convolutional architecture that can identify discriminating patterns in neuroimaging data and applied it to electroencephalography (EEG) recordings collected from 25 subjects performing a hand motor task before and after a rest period or a bout of exercise. The deep network was trained to classify subjects into exercise and control groups based on differences in their EEG signals. Subsequently, we developed a novel method termed the cue-combination for Class Activation Map (ccCAM), which enabled us to identify discriminating spatio-temporal features within definite frequency bands (23--33 Hz) and assess the effects of exercise on the brain. Additionally, the proposed architecture allowed the visualization of the differences in the propagation of underlying neural activity across the cortex between the two groups, for the first time to our knowledge. Our results demonstrate the feasibility of using deep network architectures for neuroimaging analysis in different contexts, such as the identification of robust brain biomarkers to better characterize and potentially treat neurological disorders. |
2003.01812 | Matthew Macauley | Matthew Macauley, Nora Youngs | The case for algebraic biology: from research to education | 14 pages, 1 figure | null | null | null | q-bio.NC q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Though it goes without saying that linear algebra is fundamental to
mathematical biology, polynomial algebra is less visible. In this article, we
will give a brief tour of four diverse biological problems where multivariate
polynomials play a central role -- a subfield that is sometimes called
"algebraic biology." Namely, these topics include biochemical reaction
networks, Boolean models of gene regulatory networks, algebraic statistics and
genomics, and place fields in neuroscience. After that, we will summarize the
history of discrete and algebraic structures in mathematical biology, from
their early appearances in the late 1960s to the current day. Finally, we will
discuss the role of algebraic biology in the modern classroom and curriculum,
including resources in the literature and relevant software. Our goal is to
make this article widely accessible, reaching the mathematical biologist who
knows no algebra, the algebraist who knows no biology, and especially the
interested student who is curious about the synergy between these two seemingly
unrelated fields.
| [
{
"created": "Sat, 29 Feb 2020 18:06:29 GMT",
"version": "v1"
}
] | 2020-03-05 | [
[
"Macauley",
"Matthew",
""
],
[
"Youngs",
"Nora",
""
]
] | Though it goes without saying that linear algebra is fundamental to mathematical biology, polynomial algebra is less visible. In this article, we will give a brief tour of four diverse biological problems where multivariate polynomials play a central role -- a subfield that is sometimes called "algebraic biology." Namely, these topics include biochemical reaction networks, Boolean models of gene regulatory networks, algebraic statistics and genomics, and place fields in neuroscience. After that, we will summarize the history of discrete and algebraic structures in mathematical biology, from their early appearances in the late 1960s to the current day. Finally, we will discuss the role of algebraic biology in the modern classroom and curriculum, including resources in the literature and relevant software. Our goal is to make this article widely accessible, reaching the mathematical biologist who knows no algebra, the algebraist who knows no biology, and especially the interested student who is curious about the synergy between these two seemingly unrelated fields. |
1310.5876 | Roman Cherniha | R. Cherniha, J. Stachowska-Pietka and J. Waniewski | A mathematical model for fluid-glucose-albumin transport in peritoneal
dialysis | 28 pages, 8 figures. arXiv admin note: text overlap with
arXiv:1110.1283 | Int. J. Appl. Math.Comp.Sci. vol. 24 (2014) P. 837-851 | 10.2478/amcs-2014-0062 | null | q-bio.TO math.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A mathematical model for fluid and solute transport in peritoneal dialysis is
constructed. The model is based on a three-component nonlinear system of
two-dimensional partial differential equations for fluid, glucose and albumin
transport with the relevant boundary and initial conditions. Its aim is to
model ultrafiltration of water combined with inflow of glucose to the tissue
and removal of albumin from the body during dialysis, and it does this by
finding the spatial distributions of glucose and albumin concentrations and
hydrostatic pressure. The model is developed in a one-dimensional spatial
approximation, and a governing equation for each of the variables is derived
from physical principles. Under certain assumptions the model is simplified
with the aim of obtaining exact formulae for spatially non-uniform steady-state
solutions.
As a result, the exact formulae for the fluid fluxes from blood to tissue
and across the tissue are constructed together with two linear autonomous ODEs
for glucose and albumin concentrations in the tissue. The obtained analytical
results are checked for their applicability to the description of
fluid-glucose-albumin transport during peritoneal dialysis.
| [
{
"created": "Tue, 22 Oct 2013 10:54:06 GMT",
"version": "v1"
}
] | 2015-07-08 | [
[
"Cherniha",
"R.",
""
],
[
"Stachowska-Pietka",
"J.",
""
],
[
"Waniewski",
"J.",
""
]
] | A mathematical model for fluid and solute transport in peritoneal dialysis is constructed. The model is based on a three-component nonlinear system of two-dimensional partial differential equations for fluid, glucose and albumin transport with the relevant boundary and initial conditions. Its aim is to model ultrafiltration of water combined with inflow of glucose to the tissue and removal of albumin from the body during dialysis, and it does this by finding the spatial distributions of glucose and albumin concentrations and hydrostatic pressure. The model is developed in a one-dimensional spatial approximation, and a governing equation for each of the variables is derived from physical principles. Under certain assumptions the model is simplified with the aim of obtaining exact formulae for spatially non-uniform steady-state solutions. As a result, the exact formulae for the fluid fluxes from blood to tissue and across the tissue are constructed together with two linear autonomous ODEs for glucose and albumin concentrations in the tissue. The obtained analytical results are checked for their applicability to the description of fluid-glucose-albumin transport during peritoneal dialysis. |
1812.01091 | Bing Liu | Bing Liu, Benjamin M. Gyori and P.S. Thiagarajan | Statistical Model Checking based Analysis of Biological Networks | Accepted for Publication in Automated Reasoning for Systems Biology
and Medicine on 2018-09-18 | null | null | null | q-bio.QM q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a framework for analyzing ordinary differential equation (ODE)
models of biological networks using statistical model checking (SMC). A key
aspect of our work is the modeling of single-cell variability by assigning a
probability distribution to intervals of initial concentration values and
kinetic rate constants. We propagate this distribution through the system
dynamics to obtain a distribution over the set of trajectories of the ODEs.
This in turn opens the door for performing statistical analysis of the ODE
system's behavior. To illustrate this we first encode quantitative data and
qualitative trends as bounded linear time temporal logic (BLTL) formulas. Based
on this we construct a parameter estimation method using an SMC-driven
evaluation procedure applied to the stochastic version of the behavior of the
ODE system. We then describe how this SMC framework can be generalized to
hybrid automata by exploiting the given distribution over the initial states
and the--much more sophisticated--system dynamics to associate a Markov chain
with the hybrid automaton. We then establish a strong relationship between the
behaviors of the hybrid automaton and its associated Markov chain.
Consequently, we sample trajectories from the hybrid automaton in a way that
mimics the sampling of the trajectories of the Markov chain. This enables us to
verify approximately that the Markov chain meets a BLTL specification with high
probability. We have applied these methods to ODE based models of Toll-like
receptor signaling and the crosstalk between autophagy and apoptosis, as well
as to systems exhibiting hybrid dynamics including the circadian clock pathway
and cardiac cell physiology. We present an overview of these applications and
summarize the main empirical results. These case studies demonstrate that
our methods can be applied in a variety of practical settings.
| [
{
"created": "Mon, 3 Dec 2018 21:51:24 GMT",
"version": "v1"
}
] | 2018-12-05 | [
[
"Liu",
"Bing",
""
],
[
"Gyori",
"Benjamin M.",
""
],
[
"Thiagarajan",
"P. S.",
""
]
] | We introduce a framework for analyzing ordinary differential equation (ODE) models of biological networks using statistical model checking (SMC). A key aspect of our work is the modeling of single-cell variability by assigning a probability distribution to intervals of initial concentration values and kinetic rate constants. We propagate this distribution through the system dynamics to obtain a distribution over the set of trajectories of the ODEs. This in turn opens the door for performing statistical analysis of the ODE system's behavior. To illustrate this we first encode quantitative data and qualitative trends as bounded linear time temporal logic (BLTL) formulas. Based on this we construct a parameter estimation method using an SMC-driven evaluation procedure applied to the stochastic version of the behavior of the ODE system. We then describe how this SMC framework can be generalized to hybrid automata by exploiting the given distribution over the initial states and the--much more sophisticated--system dynamics to associate a Markov chain with the hybrid automaton. We then establish a strong relationship between the behaviors of the hybrid automaton and its associated Markov chain. Consequently, we sample trajectories from the hybrid automaton in a way that mimics the sampling of the trajectories of the Markov chain. This enables us to verify approximately that the Markov chain meets a BLTL specification with high probability. We have applied these methods to ODE based models of Toll-like receptor signaling and the crosstalk between autophagy and apoptosis, as well as to systems exhibiting hybrid dynamics including the circadian clock pathway and cardiac cell physiology. We present an overview of these applications and summarize the main empirical results. These case studies demonstrate that our methods can be applied in a variety of practical settings. |
1207.1312 | Pablo Cordero | Pablo Cordero, Wipapat Kladwang, Christopher C. VanLang, Rhiju Das | Quantitative DMS mapping for automated RNA secondary structure inference | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | For decades, dimethyl sulfate (DMS) mapping has informed manual modeling of
RNA structure in vitro and in vivo. Here, we incorporate DMS data into
automated secondary structure inference using a pseudo-energy framework
developed for 2'-OH acylation (SHAPE) mapping. On six non-coding RNAs with
crystallographic models, DMS-guided modeling achieves overall false negative
and false discovery rates of 9.5% and 11.6%, comparable or better than
SHAPE-guided modeling; and non-parametric bootstrapping provides
straightforward confidence estimates. Integrating DMS/SHAPE data and including
CMCT reactivities give small additional improvements. These results establish
DMS mapping - an already routine technique - as a quantitative tool for
unbiased RNA structure modeling.
| [
{
"created": "Thu, 5 Jul 2012 18:04:17 GMT",
"version": "v1"
}
] | 2012-07-06 | [
[
"Cordero",
"Pablo",
""
],
[
"Kladwang",
"Wipapat",
""
],
[
"VanLang",
"Christopher C.",
""
],
[
"Das",
"Rhiju",
""
]
] | For decades, dimethyl sulfate (DMS) mapping has informed manual modeling of RNA structure in vitro and in vivo. Here, we incorporate DMS data into automated secondary structure inference using a pseudo-energy framework developed for 2'-OH acylation (SHAPE) mapping. On six non-coding RNAs with crystallographic models, DMS-guided modeling achieves overall false negative and false discovery rates of 9.5% and 11.6%, comparable or better than SHAPE-guided modeling; and non-parametric bootstrapping provides straightforward confidence estimates. Integrating DMS/SHAPE data and including CMCT reactivities give small additional improvements. These results establish DMS mapping - an already routine technique - as a quantitative tool for unbiased RNA structure modeling. |
2005.13516 | Mohamed Taha Rouabah | Mohamed Taha Rouabah, Abdellah Tounsi and Nacer Eddine Belaloui | Genetic algorithm with cross validation-based epidemic model and
application to early diffusion of COVID-19 in Algeria | 12 pages, 5 figures, 1 table, git at
https://github.com/Taha-Rouabah/COVID-19, data at
https://github.com/Taha-Rouabah/COVID-19/tree/master/Data | Scientific African, Volume 14, e01050, 2021 | 10.1016/j.sciaf.2021.e01050 | null | q-bio.PE q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | A dynamical epidemic model optimized using a genetic algorithm and a cross
validation method to overcome the overfitting problem is proposed. The cross
validation procedure is applied so that available data are split into a
training subset used to fit the algorithm's parameters, and a smaller subset
used for validation. This process is tested on the countries of Italy, Spain,
Germany and South Korea before being applied to Algeria. Interestingly, our
study reveals an inverse relationship between the size of the training sample
and the number of generations required in the genetic algorithm. Moreover, the
enhanced compartmental model presented in this work is proven to be a reliable
tool to estimate key epidemic parameters and non-measurable asymptomatic
infected portion of the susceptible population in order to establish realistic
nowcast and forecast of epidemic's evolution. The model is employed to study
the COVID-19 outbreak dynamics in Algeria between February 25th and May 24th,
2020. The basic reproduction number and effective reproduction number on May
24th, after three months of the outbreak, are estimated to be 3.78 (95% CI
3.033-4.53) and 0.651 (95% CI 0.539-0.761) respectively. Disease incidence, CFR
and IFR are also calculated. Numerical programs developed for the purpose of
this study are made publicly accessible for reproduction and further use.
| [
{
"created": "Tue, 26 May 2020 11:17:47 GMT",
"version": "v1"
},
{
"created": "Mon, 15 Jun 2020 15:33:03 GMT",
"version": "v2"
},
{
"created": "Wed, 24 Jun 2020 23:06:19 GMT",
"version": "v3"
},
{
"created": "Fri, 28 Aug 2020 12:20:47 GMT",
"version": "v4"
},
{
"created": "Sun, 11 Jul 2021 13:49:27 GMT",
"version": "v5"
}
] | 2021-12-01 | [
[
"Rouabah",
"Mohamed Taha",
""
],
[
"Tounsi",
"Abdellah",
""
],
[
"Belaloui",
"Nacer Eddine",
""
]
] | A dynamical epidemic model optimized using a genetic algorithm and a cross validation method to overcome the overfitting problem is proposed. The cross validation procedure is applied so that available data are split into a training subset used to fit the algorithm's parameters, and a smaller subset used for validation. This process is tested on the countries of Italy, Spain, Germany and South Korea before being applied to Algeria. Interestingly, our study reveals an inverse relationship between the size of the training sample and the number of generations required in the genetic algorithm. Moreover, the enhanced compartmental model presented in this work is proven to be a reliable tool to estimate key epidemic parameters and non-measurable asymptomatic infected portion of the susceptible population in order to establish realistic nowcast and forecast of epidemic's evolution. The model is employed to study the COVID-19 outbreak dynamics in Algeria between February 25th and May 24th, 2020. The basic reproduction number and effective reproduction number on May 24th, after three months of the outbreak, are estimated to be 3.78 (95% CI 3.033-4.53) and 0.651 (95% CI 0.539-0.761) respectively. Disease incidence, CFR and IFR are also calculated. Numerical programs developed for the purpose of this study are made publicly accessible for reproduction and further use. |
2205.04342 | Hideaki Yamamoto PhD | Yuya Sato, Hideaki Yamamoto, Hideyuki Kato, Takashi Tanii, Shigeo
Sato, Ayumi Hirano-Iwata | Microfluidic cell engineering on high-density microelectrode arrays for
assessing structure-function relationships in living neuronal networks | 18 pages, 5 figures | Front. Neurosci. 16, 943310 (2023) | 10.3389/fnins.2022.943310 | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Neuronal networks in dissociated culture combined with cell engineering
technology offer a pivotal platform to constructively explore the relationship
between structure and function in living neuronal networks. Here, we fabricated
defined neuronal networks possessing a modular architecture on high-density
microelectrode arrays (HD-MEAs), a state-of-the-art electrophysiological tool
for recording neural activity with high spatial and temporal resolutions. We
first established a surface coating protocol using a cell-permissive hydrogel
to stably attach a polydimethylsiloxane microfluidic film to the HD-MEA. We then
recorded the spontaneous neural activity of the engineered neuronal network,
which revealed an important portrait of the engineered neuronal
network--modular architecture enhances functional complexity by reducing the
excessive neural correlation between spatially segregated modules. The results
of this study highlight the impact of HD-MEA recordings combined with cell
engineering technologies as a novel tool in neuroscience to constructively
assess the structure-function relationships in neuronal networks.
| [
{
"created": "Mon, 9 May 2022 14:43:08 GMT",
"version": "v1"
},
{
"created": "Wed, 11 May 2022 11:31:37 GMT",
"version": "v2"
}
] | 2023-01-11 | [
[
"Sato",
"Yuya",
""
],
[
"Yamamoto",
"Hideaki",
""
],
[
"Kato",
"Hideyuki",
""
],
[
"Tanii",
"Takashi",
""
],
[
"Sato",
"Shigeo",
""
],
[
"Hirano-Iwata",
"Ayumi",
""
]
] | Neuronal networks in dissociated culture combined with cell engineering technology offer a pivotal platform to constructively explore the relationship between structure and function in living neuronal networks. Here, we fabricated defined neuronal networks possessing a modular architecture on high-density microelectrode arrays (HD-MEAs), a state-of-the-art electrophysiological tool for recording neural activity with high spatial and temporal resolutions. We first established a surface coating protocol using a cell-permissive hydrogel to stably attach a polydimethylsiloxane microfluidic film to the HD-MEA. We then recorded the spontaneous neural activity of the engineered neuronal network, which revealed an important portrait of the engineered neuronal network--modular architecture enhances functional complexity by reducing the excessive neural correlation between spatially segregated modules. The results of this study highlight the impact of HD-MEA recordings combined with cell engineering technologies as a novel tool in neuroscience to constructively assess the structure-function relationships in neuronal networks. |
1912.09001 | Antony Jose | Antony M Jose | A framework for parsing heritable information | 13 pages and 4 figures in main article, 8 pages and 4 figures in
supplemental material | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by/4.0/ | Living systems transmit heritable information using the replicating gene
sequences and the cycling regulators assembled around gene sequences. Here I
develop a framework for heredity and development that includes the cycling
regulators parsed in terms of what an organism can sense about itself and its
environment by defining entities, their sensors, and the sensed properties.
Entities include small molecules (ATP, ions, metabolites, etc.), macromolecules
(individual proteins, RNAs, polysaccharides, etc.), and assemblies of
molecules. While concentration may be the only relevant property measured by
sensors for small molecules, multiple properties that include concentration,
sequence, conformation, and modification may all be measured for macromolecules
and assemblies. Each configuration of these entities and sensors that is
recreated in successive generations in a given environment thus specifies a
potentially vast amount of information driving complex development in each
generation. This Entity-Sensor-Property framework explains how sensors limit
the number of distinguishable states, how distinct molecular configurations can
be functionally equivalent, and how regulation of sensors prevents detection of
some perturbations. Overall, this framework is a useful guide for understanding
how life evolves and how the storage of information has itself evolved with
complexity since before the origin of life.
| [
{
"created": "Thu, 19 Dec 2019 03:18:06 GMT",
"version": "v1"
},
{
"created": "Mon, 2 Mar 2020 04:13:07 GMT",
"version": "v2"
}
] | 2020-03-03 | [
[
"Jose",
"Antony M",
""
]
] | Living systems transmit heritable information using the replicating gene sequences and the cycling regulators assembled around gene sequences. Here I develop a framework for heredity and development that includes the cycling regulators parsed in terms of what an organism can sense about itself and its environment by defining entities, their sensors, and the sensed properties. Entities include small molecules (ATP, ions, metabolites, etc.), macromolecules (individual proteins, RNAs, polysaccharides, etc.), and assemblies of molecules. While concentration may be the only relevant property measured by sensors for small molecules, multiple properties that include concentration, sequence, conformation, and modification may all be measured for macromolecules and assemblies. Each configuration of these entities and sensors that is recreated in successive generations in a given environment thus specifies a potentially vast amount of information driving complex development in each generation. This Entity-Sensor-Property framework explains how sensors limit the number of distinguishable states, how distinct molecular configurations can be functionally equivalent, and how regulation of sensors prevents detection of some perturbations. Overall, this framework is a useful guide for understanding how life evolves and how the storage of information has itself evolved with complexity since before the origin of life. |
2307.02659 | Lysea Haggie | Lysea Haggie (1), Thor Besier (1), Angus McMorland (1 and 2) ((1)
Auckland Bioengineering Institute, University of Auckland, Auckland, New
Zealand, (2) Department of Exercise Sciences, University of Auckland,
Auckland, New Zealand) | Modelling Spontaneous Firing Activity of the Motor Cortex in a Spiking
Neural Network with Random and Local Connectivity | null | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Computational models of cortical activity provide insight into the mechanisms
of higher-order processing in the human brain including planning, perception
and the control of movement. Activity in the cortex is ongoing even in the
absence of sensory input or discernible movements and is thought to be linked
to the topology of cortical circuitry. However, the connectivity and its
functional role in the generation of spatio-temporal firing patterns and
cortical computations are still unknown. Movement of the body is a key function
of the brain, with the motor cortex the main cortical area implicated in the
generation of movement. We built a spiking neural network model of the motor
cortex which incorporates a laminar structure and circuitry based on a previous
cortical model. A local connectivity scheme was implemented to introduce more
physiological plausibility to the cortex model, and the effect on the rates,
distributions and irregularity of neuronal firing was compared to the original
random connectivity method and experimental data. Local connectivity increased
the distribution of and overall rate of neuronal firing. It also resulted in
the irregularity of firing being more similar to that observed in experimental
measurements. The larger variability in dynamical behaviour of the local
connectivity model suggests that the topological structure of the connections
in neuronal population plays a significant role in firing patterns during
spontaneous activity. This model took steps towards replicating the macroscopic
network of the motor cortex, reproducing realistic spatiotemporal firing to
shed light on information coding in the cortex. Large scale computational
models such as this one can capture how structure and function relate to
observable neuronal firing behaviour, and investigate the underlying
computational mechanisms of the brain.
| [
{
"created": "Wed, 5 Jul 2023 21:23:18 GMT",
"version": "v1"
}
] | 2023-07-07 | [
[
"Haggie",
"Lysea",
"",
    "1"
],
[
"Besier",
"Thor",
"",
    "1"
],
[
"McMorland",
"Angus",
"",
"1 and 2"
]
] | Computational models of cortical activity provide insight into the mechanisms of higher-order processing in the human brain including planning, perception and the control of movement. Activity in the cortex is ongoing even in the absence of sensory input or discernible movements and is thought to be linked to the topology of cortical circuitry. However, the connectivity and its functional role in the generation of spatio-temporal firing patterns and cortical computations are still unknown. Movement of the body is a key function of the brain, with the motor cortex the main cortical area implicated in the generation of movement. We built a spiking neural network model of the motor cortex which incorporates a laminar structure and circuitry based on a previous cortical model. A local connectivity scheme was implemented to introduce more physiological plausibility to the cortex model, and the effect on the rates, distributions and irregularity of neuronal firing was compared to the original random connectivity method and experimental data. Local connectivity increased the distribution of and overall rate of neuronal firing. It also resulted in the irregularity of firing being more similar to that observed in experimental measurements. The larger variability in dynamical behaviour of the local connectivity model suggests that the topological structure of the connections in neuronal population plays a significant role in firing patterns during spontaneous activity. This model took steps towards replicating the macroscopic network of the motor cortex, reproducing realistic spatiotemporal firing to shed light on information coding in the cortex. Large scale computational models such as this one can capture how structure and function relate to observable neuronal firing behaviour, and investigate the underlying computational mechanisms of the brain. |
2311.13876 | Francisco Jesus Martinez-Murcia | Francisco Jesus Martinez-Murcia, Andr\'es Ortiz, Juan Manuel G\'orriz,
Javier Ram\'irez, Pedro Javier Lopez-Perez, Miguel L\'opez-Zamora, Juan Luis
Luque | EEG Connectivity Analysis Using Denoising Autoencoders for the Detection
of Dyslexia | 19 pages, 6 figures | INT J NEURAL SYST 30 (7), 2020, 2050037 | 10.1142/S0129065720500379 | null | q-bio.NC cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Temporal Sampling Framework (TSF) theorizes that the characteristic
phonological difficulties of dyslexia are caused by an atypical oscillatory
sampling at one or more temporal rates. The LEEDUCA study conducted a series of
Electroencephalography (EEG) experiments on children listening to amplitude
modulated (AM) noise with slow-rhythmic prosodic (0.5-1 Hz), syllabic (4-8 Hz)
or the phoneme (12-40 Hz) rates, aimed at detecting differences in perception
of oscillatory sampling that could be associated with dyslexia. The purpose of
this work is to check whether these differences exist and how they are related
to children's performance in different language and cognitive tasks commonly
used to detect dyslexia. To this purpose, temporal and spectral inter-channel
EEG connectivity was estimated, and a denoising autoencoder (DAE) was trained
to learn a low-dimensional representation of the connectivity matrices. This
representation was studied via correlation and classification analysis, which
revealed an ability to detect dyslexic subjects with an accuracy higher than
0.8, and balanced accuracy around 0.7. Some features of the DAE representation
were significantly correlated ($p<0.005$) with children's performance in
language and cognitive tasks of the phonological hypothesis category such as
phonological awareness and rapid symbolic naming, as well as reading efficiency
and reading comprehension. Finally, a deeper analysis of the adjacency matrix
revealed a reduced bilateral connection between electrodes of the temporal lobe
(roughly the primary auditory cortex) in DD subjects, as well as an increased
connectivity of the F7 electrode, placed roughly on Broca's area. These results
pave the way for a complementary assessment of dyslexia using more objective
methodologies such as EEG.
| [
{
"created": "Thu, 23 Nov 2023 09:49:22 GMT",
"version": "v1"
}
] | 2023-11-27 | [
[
"Martinez-Murcia",
"Francisco Jesus",
""
],
[
"Ortiz",
"Andrés",
""
],
[
"Górriz",
"Juan Manuel",
""
],
[
"Ramírez",
"Javier",
""
],
[
"Lopez-Perez",
"Pedro Javier",
""
],
[
"López-Zamora",
"Miguel",
""
],
[
"Luque",
"Juan Luis",
""
]
] | The Temporal Sampling Framework (TSF) theorizes that the characteristic phonological difficulties of dyslexia are caused by an atypical oscillatory sampling at one or more temporal rates. The LEEDUCA study conducted a series of Electroencephalography (EEG) experiments on children listening to amplitude modulated (AM) noise with slow-rhythmic prosodic (0.5-1 Hz), syllabic (4-8 Hz) or the phoneme (12-40 Hz) rates, aimed at detecting differences in perception of oscillatory sampling that could be associated with dyslexia. The purpose of this work is to check whether these differences exist and how they are related to children's performance in different language and cognitive tasks commonly used to detect dyslexia. To this purpose, temporal and spectral inter-channel EEG connectivity was estimated, and a denoising autoencoder (DAE) was trained to learn a low-dimensional representation of the connectivity matrices. This representation was studied via correlation and classification analysis, which revealed an ability to detect dyslexic subjects with an accuracy higher than 0.8, and balanced accuracy around 0.7. Some features of the DAE representation were significantly correlated ($p<0.005$) with children's performance in language and cognitive tasks of the phonological hypothesis category such as phonological awareness and rapid symbolic naming, as well as reading efficiency and reading comprehension. Finally, a deeper analysis of the adjacency matrix revealed a reduced bilateral connection between electrodes of the temporal lobe (roughly the primary auditory cortex) in DD subjects, as well as an increased connectivity of the F7 electrode, placed roughly on Broca's area. These results pave the way for a complementary assessment of dyslexia using more objective methodologies such as EEG. |
2311.16207 | Xinxing Yang | Xinxing Yang, Genke Yang and Jian Chu | The Graph Convolutional Network with Multi-representation Alignment for
Drug Synergy Prediction | 14 pages; | null | null | null | q-bio.QM cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Drug combination refers to the use of two or more drugs to treat a specific
disease at the same time. It is currently the mainstream way to treat complex
diseases. Compared with single drugs, drug combinations have better efficacy
and can better inhibit toxicity and drug resistance. The computational model
based on deep learning concatenates the representation of multiple drugs and
the corresponding cell line feature as input, and the output is whether the
drug combination can have an inhibitory effect on the cell line. However, this
strategy of concatenating multiple representations has the following defects:
the alignment of drug representation and cell line representation is ignored,
resulting in the synergistic relationship not being reflected positionally in
the embedding space. Moreover, the alignment measurement function in deep
learning is not suitable for drug synergy prediction tasks due to
differences in input types. Therefore, in this work, we propose a graph
convolutional network with multi-representation alignment (GCNMRA) for
predicting drug synergy. In the GCNMRA model, we designed a
multi-representation alignment function suitable for the drug synergy
prediction task so that the positional relationship between drug
representations and cell line representation is reflected in the embedding
space. In addition, the vector modulus of drug representations and cell line
representation is considered to improve the accuracy of calculation results and
accelerate model convergence. Finally, many relevant experiments were run on
multiple drug synergy datasets to verify the effectiveness of the above
innovative elements and the excellence of the GCNMRA model.
| [
{
"created": "Mon, 27 Nov 2023 15:34:14 GMT",
"version": "v1"
}
] | 2023-11-29 | [
[
"Yang",
"Xinxing",
""
],
[
"Yang",
"Genke",
""
],
[
"Chu",
"Jian",
""
]
] | Drug combination refers to the use of two or more drugs to treat a specific disease at the same time. It is currently the mainstream way to treat complex diseases. Compared with single drugs, drug combinations have better efficacy and can better inhibit toxicity and drug resistance. The computational model based on deep learning concatenates the representation of multiple drugs and the corresponding cell line feature as input, and the output is whether the drug combination can have an inhibitory effect on the cell line. However, this strategy of concatenating multiple representations has the following defects: the alignment of drug representation and cell line representation is ignored, resulting in the synergistic relationship not being reflected positionally in the embedding space. Moreover, the alignment measurement function in deep learning is not suitable for drug synergy prediction tasks due to differences in input types. Therefore, in this work, we propose a graph convolutional network with multi-representation alignment (GCNMRA) for predicting drug synergy. In the GCNMRA model, we designed a multi-representation alignment function suitable for the drug synergy prediction task so that the positional relationship between drug representations and cell line representation is reflected in the embedding space. In addition, the vector modulus of drug representations and cell line representation is considered to improve the accuracy of calculation results and accelerate model convergence. Finally, many relevant experiments were run on multiple drug synergy datasets to verify the effectiveness of the above innovative elements and the excellence of the GCNMRA model. |
1808.03875 | Andrew Glennerster | Andrew Glennerster and Jenny C.A. Read | A single coordinate framework for optic flow and binocular disparity | 65 pages; 17 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Optic flow is two dimensional, but no special qualities are attached to one
or other of these dimensions. For binocular disparity, on the other hand, the
terms 'horizontal' and 'vertical' disparities are commonly used. This is odd,
since binocular disparity and optic flow describe essentially the same thing.
The difference is that, generally, people tend to fixate relatively close to
the direction of heading as they move, meaning that fixation is close to the
optic flow epipole, whereas, for binocular vision, fixation is close to the
head-centric midline, i.e. approximately 90 degrees from the binocular epipole.
For fixating animals, some separations of flow may lead to simple algorithms
for the judgement of surface structure and the control of action. We consider
the following canonical flow patterns that sum to produce overall flow: (i)
'towards' flow, the component of translational flow produced by approaching (or
retreating from) the fixated object, which produces pure radial flow on the
retina; (ii) 'sideways' flow, the remaining component of translational flow,
which is produced by translation of the optic centre orthogonal to the
cyclopean line of sight and (iii) 'vergence' flow, rotational flow produced by
a counter-rotation of the eye in order to maintain fixation. A general flow
pattern could also include (iv) 'cyclovergence' flow, produced by rotation of
one eye relative to the other about the line of sight. We consider some
practical advantages of dividing up flow in this way when an observer fixates
as they move. As in some previous treatments, we suggest that there are certain
tasks for which it is sensible to consider 'towards' flow as one component and
'sideways' + 'vergence' flow as another.
| [
{
"created": "Sat, 11 Aug 2018 22:50:52 GMT",
"version": "v1"
}
] | 2018-08-14 | [
[
"Glennerster",
"Andrew",
""
],
[
"Read",
"Jenny C. A.",
""
]
] | Optic flow is two dimensional, but no special qualities are attached to one or other of these dimensions. For binocular disparity, on the other hand, the terms 'horizontal' and 'vertical' disparities are commonly used. This is odd, since binocular disparity and optic flow describe essentially the same thing. The difference is that, generally, people tend to fixate relatively close to the direction of heading as they move, meaning that fixation is close to the optic flow epipole, whereas, for binocular vision, fixation is close to the head-centric midline, i.e. approximately 90 degrees from the binocular epipole. For fixating animals, some separations of flow may lead to simple algorithms for the judgement of surface structure and the control of action. We consider the following canonical flow patterns that sum to produce overall flow: (i) 'towards' flow, the component of translational flow produced by approaching (or retreating from) the fixated object, which produces pure radial flow on the retina; (ii) 'sideways' flow, the remaining component of translational flow, which is produced by translation of the optic centre orthogonal to the cyclopean line of sight and (iii) 'vergence' flow, rotational flow produced by a counter-rotation of the eye in order to maintain fixation. A general flow pattern could also include (iv) 'cyclovergence' flow, produced by rotation of one eye relative to the other about the line of sight. We consider some practical advantages of dividing up flow in this way when an observer fixates as they move. As in some previous treatments, we suggest that there are certain tasks for which it is sensible to consider 'towards' flow as one component and 'sideways' + 'vergence' flow as another. |
1509.00755 | Jaline Gerardin | Jaline Gerardin, Caitlin A. Bever, Busiku Hamainza, John M. Miller,
Philip A. Eckhoff, Edward A. Wenger | Optimal population-level infection detection strategies for malaria
control and elimination in a spatial model of malaria transmission | null | null | 10.1371/journal.pcbi.1004707 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mass campaigns with antimalarial drugs are potentially a powerful tool for
local elimination of malaria, yet current diagnostic technologies are
insufficiently sensitive to identify all individuals who harbor infections. At
the same time, overtreatment of uninfected individuals increases the risk of
accelerating emergence of drug resistance and losing community acceptance.
Local heterogeneity in transmission intensity may allow campaign strategies
that respond to index cases to successfully target subpatent infections while
simultaneously limiting overtreatment. While selective targeting of hotspots of
transmission has been proposed as a strategy for malaria control, such
targeting has not been tested in the context of malaria elimination. Using
household locations, demographics, and prevalence data from a survey of four
health facility catchment areas in southern Zambia and an agent-based model of
malaria transmission and immunity acquisition, a transmission intensity was fit
to each household based on neighborhood age-dependent malaria prevalence. A set
of individual infection trajectories was constructed for every household in
each catchment area, accounting for heterogeneous exposure and immunity.
Various campaign strategies (mass drug administration, mass screen and treat,
focal mass drug administration, snowball reactive case detection, pooled
sampling, and a hypothetical serological diagnostic) were simulated and
evaluated for performance at finding infections, minimizing overtreatment,
reducing clinical case counts, and interrupting transmission. For malaria
control, presumptive treatment leads to substantial overtreatment without
additional morbidity reduction under all but the highest transmission
conditions. Selective targeting of hotspots with drug campaigns is an
ineffective tool for elimination due to limited sensitivity of available field
diagnostics.
| [
{
"created": "Wed, 2 Sep 2015 15:51:34 GMT",
"version": "v1"
}
] | 2016-02-17 | [
[
"Gerardin",
"Jaline",
""
],
[
"Bever",
"Caitlin A.",
""
],
[
"Hamainza",
"Busiku",
""
],
[
"Miller",
"John M.",
""
],
[
"Eckhoff",
"Philip A.",
""
],
[
"Wenger",
"Edward A.",
""
]
] | Mass campaigns with antimalarial drugs are potentially a powerful tool for local elimination of malaria, yet current diagnostic technologies are insufficiently sensitive to identify all individuals who harbor infections. At the same time, overtreatment of uninfected individuals increases the risk of accelerating emergence of drug resistance and losing community acceptance. Local heterogeneity in transmission intensity may allow campaign strategies that respond to index cases to successfully target subpatent infections while simultaneously limiting overtreatment. While selective targeting of hotspots of transmission has been proposed as a strategy for malaria control, such targeting has not been tested in the context of malaria elimination. Using household locations, demographics, and prevalence data from a survey of four health facility catchment areas in southern Zambia and an agent-based model of malaria transmission and immunity acquisition, a transmission intensity was fit to each household based on neighborhood age-dependent malaria prevalence. A set of individual infection trajectories was constructed for every household in each catchment area, accounting for heterogeneous exposure and immunity. Various campaign strategies (mass drug administration, mass screen and treat, focal mass drug administration, snowball reactive case detection, pooled sampling, and a hypothetical serological diagnostic) were simulated and evaluated for performance at finding infections, minimizing overtreatment, reducing clinical case counts, and interrupting transmission. For malaria control, presumptive treatment leads to substantial overtreatment without additional morbidity reduction under all but the highest transmission conditions. Selective targeting of hotspots with drug campaigns is an ineffective tool for elimination due to limited sensitivity of available field diagnostics. |
q-bio/0610014 | Tobias Reichenbach | Tobias Reichenbach, Mauro Mobilia, and Erwin Frey | Stochastic effects on biodiversity in cyclic coevolutionary dynamics | 5 pages, 2 figures. Proceedings paper of the workshop "Stochastic
models in biological sciences" (May 29 - June 2, 2006 in Warsaw) for the
Banach Center Publications | Banach Center Publications Vol. 80, 259-264 (2008) | 10.4064/bc80-0-17 | LMU-ASC 67/06 | q-bio.PE cond-mat.stat-mech physics.bio-ph | null | Finite-size fluctuations arising in the dynamics of competing populations may
have dramatic influence on their fate. As an example, in this article, we
investigate a model of three species which dominate each other in a cyclic
manner. Although the deterministic approach predicts (neutrally) stable
coexistence of all species, for any finite population size, the intrinsic
stochasticity unavoidably causes the eventual extinction of two of them.
| [
{
"created": "Thu, 5 Oct 2006 12:20:41 GMT",
"version": "v1"
}
] | 2011-12-20 | [
[
"Reichenbach",
"Tobias",
""
],
[
"Mobilia",
"Mauro",
""
],
[
"Frey",
"Erwin",
""
]
] | Finite-size fluctuations arising in the dynamics of competing populations may have dramatic influence on their fate. As an example, in this article, we investigate a model of three species which dominate each other in a cyclic manner. Although the deterministic approach predicts (neutrally) stable coexistence of all species, for any finite population size, the intrinsic stochasticity unavoidably causes the eventual extinction of two of them. |
1609.05571 | Sebastian Schreiber | Sebastian J. Schreiber and Swati Patel and Casey terHorst | Evolution as a coexistence mechanism: Does genetic architecture matter? | null | The American Naturalist 191.3 (2018): 407-420 | 10.1086/695832 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Species sharing a prey or a predator species may go extinct due to
exploitative or apparent competition. We examine whether evolution of the
shared species acts as a coexistence mechanism and to what extent the answer
depends on the genetic architecture underlying trait evolution. In our models
of exploitative and apparent competition, the shared species evolves its
defense or prey use. Evolving species are either haploid or diploid. A single
locus pleiotropically determines prey nutritional quality and predator attack
rates. When pleiotropy is sufficiently antagonistic (e.g. nutritional prey are
harder to capture), eco-evolutionary assembly culminates in one of two stable
states supporting only two species. When pleiotropy is weakly antagonistic or
synergistic, assembly is intransitive: species-genotype pairs are cyclically
displaced by rare invasions of the missing genotypes or species. This
intransitivity allows for coexistence if, along its equilibria, the geometric
mean of recovery rates exceeds the geometric mean of loss rates of the rare
genotypes or species. By affecting these rates, synergistic pleiotropy can
mediate coexistence, while antagonistic pleiotropy does not. For diploid
populations experiencing weak antagonistic pleiotropy, superadditive allelic
contributions to fitness can mitigate coexistence via an eco-evolutionary
storage effect. Density-dependence and mutations also promote coexistence.
These results highlight how the efficacy of evolution as a coexistence
mechanism may depend on the underlying genetic architecture.
| [
{
"created": "Mon, 19 Sep 2016 00:07:34 GMT",
"version": "v1"
},
{
"created": "Mon, 12 Jun 2017 03:39:22 GMT",
"version": "v2"
}
] | 2019-02-12 | [
[
"Schreiber",
"Sebastian J.",
""
],
[
"Patel",
"Swati",
""
],
[
"terHorst",
"Casey",
""
]
] | Species sharing a prey or a predator species may go extinct due to exploitative or apparent competition. We examine whether evolution of the shared species acts as a coexistence mechanism and to what extent the answer depends on the genetic architecture underlying trait evolution. In our models of exploitative and apparent competition, the shared species evolves its defense or prey use. Evolving species are either haploid or diploid. A single locus pleiotropically determines prey nutritional quality and predator attack rates. When pleiotropy is sufficiently antagonistic (e.g. nutritional prey are harder to capture), eco-evolutionary assembly culminates in one of two stable states supporting only two species. When pleiotropy is weakly antagonistic or synergistic, assembly is intransitive: species-genotype pairs are cyclically displaced by rare invasions of the missing genotypes or species. This intransitivity allows for coexistence if, along its equilibria, the geometric mean of recovery rates exceeds the geometric mean of loss rates of the rare genotypes or species. By affecting these rates, synergistic pleiotropy can mediate coexistence, while antagonistic pleiotropy does not. For diploid populations experiencing weak antagonistic pleiotropy, superadditive allelic contributions to fitness can mitigate coexistence via an eco-evolutionary storage effect. Density-dependence and mutations also promote coexistence. These results highlight how the efficacy of evolution as a coexistence mechanism may depend on the underlying genetic architecture. |
1605.02977 | Thierry Emonet | Junjiajia Long, Steven W. Zucker and Thierry Emonet | Feedback Between Motion and Sensation Provides Nonlinear Boost in
Run-and-tumble Navigation | 22 pages, 3 figures, and 8 pages of SI | null | 10.1371/journal.pcbi.1005429 | null | q-bio.CB cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many organisms navigate gradients by alternating straight motions (runs) with
random reorientations (tumbles), transiently suppressing tumbles whenever
attractant signal increases. This induces a functional coupling between
movement and sensation, since tumbling probability is controlled by the
internal state of the organism which, in turn, depends on previous signal
levels. Although a negative feedback tends to maintain this internal state
close to adapted levels, positive feedback can arise when motion up the
gradient reduces tumbling probability, further boosting drift up the gradient.
Importantly, such positive feedback can drive large fluctuations in the
internal state, complicating analytical approaches. Previous studies focused on
what happens when the negative feedback dominates the dynamics. By contrast, we
show here that there is a large portion of physiologically-relevant parameter
space where the positive feedback can dominate, even when gradients are
relatively shallow. We demonstrate how large transients emerge because of
non-normal dynamics (non-orthogonal eigenvectors near a stable fixed point)
inherent in the positive feedback, and further identify a fundamental
nonlinearity that strongly amplifies their effect. Most importantly, this
amplification is asymmetric, elongating runs in favorable directions and
abbreviating others. The result is a "ratchet-like" gradient climbing behavior
with drift speeds that can approach half the maximum run speed of the organism.
Our results thus show that the classical drawback of run-and-tumble navigation
--- wasteful runs in the wrong direction --- can be mitigated by exploiting the
non-normal dynamics implicit in the run-and-tumble strategy.
| [
{
"created": "Tue, 10 May 2016 12:29:51 GMT",
"version": "v1"
},
{
"created": "Wed, 22 Feb 2017 18:34:13 GMT",
"version": "v2"
}
] | 2017-04-12 | [
[
"Long",
"Junjiajia",
""
],
[
"Zucker",
"Steven W.",
""
],
[
"Emonet",
"Thierry",
""
]
] | Many organisms navigate gradients by alternating straight motions (runs) with random reorientations (tumbles), transiently suppressing tumbles whenever attractant signal increases. This induces a functional coupling between movement and sensation, since tumbling probability is controlled by the internal state of the organism which, in turn, depends on previous signal levels. Although a negative feedback tends to maintain this internal state close to adapted levels, positive feedback can arise when motion up the gradient reduces tumbling probability, further boosting drift up the gradient. Importantly, such positive feedback can drive large fluctuations in the internal state, complicating analytical approaches. Previous studies focused on what happens when the negative feedback dominates the dynamics. By contrast, we show here that there is a large portion of physiologically-relevant parameter space where the positive feedback can dominate, even when gradients are relatively shallow. We demonstrate how large transients emerge because of non-normal dynamics (non-orthogonal eigenvectors near a stable fixed point) inherent in the positive feedback, and further identify a fundamental nonlinearity that strongly amplifies their effect. Most importantly, this amplification is asymmetric, elongating runs in favorable directions and abbreviating others. The result is a "ratchet-like" gradient climbing behavior with drift speeds that can approach half the maximum run speed of the organism. Our results thus show that the classical drawback of run-and-tumble navigation --- wasteful runs in the wrong direction --- can be mitigated by exploiting the non-normal dynamics implicit in the run-and-tumble strategy. |
2311.15687 | Barbara Tillmann | Barbara Tillmann (CAP, LEAD), Jackson Graves (LSP), Francesca
Talamini, Yohana L\'ev\^eque, Lesly Fornoni (PAM), Caliani Hoarau (PAM),
Agathe Pralus (PAM), J\'er\'emie Ginzburg (CRNL, PAM), Philippe Albouy (PAM),
Anne Caclin (PAM) | Auditory cortex and beyond: Deficits in congenital amusia | null | Hearing Research, 2023, 437, pp.108855 | 10.1016/j.heares.2023.108855 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Congenital amusia is a neuro-developmental disorder of music perception and
production, with the observed deficits contrasting with the sophisticated music
processing reported for the general population. Musical deficits within amusia
have been hypothesized to arise from altered pitch processing, with impairments
in pitch discrimination and, notably, short-term memory. We here review
research investigating its behavioral and neural correlates, in particular the
impairments at encoding, retention, and recollection of pitch information, as
well as how these impairments extend to the processing of pitch cues in speech
and emotion. The impairments have been related to altered brain responses in a
distributed fronto-temporal network, which can be observed also at rest.
Neuroimaging studies revealed changes in connectivity patterns within this
network and beyond, shedding light on the brain dynamics underlying auditory
cognition. Interestingly, some studies revealed spared implicit pitch
processing in congenital amusia, showing the power of implicit cognition in the
music domain. Building on these findings, together with audiovisual integration
and other beneficial mechanisms, we outline perspectives for training and
rehabilitation and the future directions of this research domain.
| [
{
"created": "Mon, 27 Nov 2023 10:27:09 GMT",
"version": "v1"
}
] | 2023-11-28 | [
[
"Tillmann",
"Barbara",
"",
"CAP, LEAD"
],
[
"Graves",
"Jackson",
"",
"LSP"
],
[
"Talamini",
"Francesca",
"",
"PAM"
],
[
"Lévêque",
"Yohana",
"",
"PAM"
],
[
"Fornoni",
"Lesly",
"",
"PAM"
],
[
"Hoarau",
"Caliani",
"",
"PAM"
],
[
"Pralus",
"Agathe",
"",
"PAM"
],
[
"Ginzburg",
"Jérémie",
"",
"CRNL, PAM"
],
[
"Albouy",
"Philippe",
"",
"PAM"
],
[
"Caclin",
"Anne",
"",
"PAM"
]
] | Congenital amusia is a neuro-developmental disorder of music perception and production, with the observed deficits contrasting with the sophisticated music processing reported for the general population. Musical deficits within amusia have been hypothesized to arise from altered pitch processing, with impairments in pitch discrimination and, notably, short-term memory. We here review research investigating its behavioral and neural correlates, in particular the impairments at encoding, retention, and recollection of pitch information, as well as how these impairments extend to the processing of pitch cues in speech and emotion. The impairments have been related to altered brain responses in a distributed fronto-temporal network, which can be observed also at rest. Neuroimaging studies revealed changes in connectivity patterns within this network and beyond, shedding light on the brain dynamics underlying auditory cognition. Interestingly, some studies revealed spared implicit pitch processing in congenital amusia, showing the power of implicit cognition in the music domain. Building on these findings, together with audiovisual integration and other beneficial mechanisms, we outline perspectives for training and rehabilitation and the future directions of this research domain. |
1509.02273 | Taoufiq Harach Tao | T. Harach, N. Marungruang, N. Dutilleul, V. Cheatham, K. D. Mc Coy, J.
J. Neher, M. Jucker, F. F{\aa}k, T. Lasser and T. Bolmont | Reduction of Alzheimer's disease beta-amyloid pathology in the absence
of gut microbiota | null | null | null | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Alzheimer's disease is the most common form of dementia in the western world,
however there is no cure available for this devastating neurodegenerative
disorder. Despite clinical and experimental evidence implicating the intestinal
microbiota in a number of brain disorders, its impact on Alzheimer's disease is
not known. We generated a germ-free mouse model of Alzheimer's disease and
discovered a drastic reduction of cerebral Ab amyloid pathology when compared
to control Alzheimer's disease animals with intestinal microbiota. Sequencing
bacterial 16S rRNA from fecal samples revealed a remarkable shift in the gut
microbiota of conventionally-raised Alzheimer's disease mice as compared to
healthy, wild-type mice. Colonization of germ-free Alzheimer mice with
harvested microbiota from conventionally-raised Alzheimer mice dramatically
increased cerebral Ab pathology. In contrast, colonization with microbiota from
control wild-type mice was ineffective in increasing cerebral Ab levels. Our
results indicate a microbial involvement in the development of Alzheimer's
disease pathology, and suggest that microbiota may contribute to the
development of neurodegenerative diseases.
| [
{
"created": "Tue, 8 Sep 2015 08:02:18 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Sep 2015 12:46:25 GMT",
"version": "v2"
}
] | 2015-09-17 | [
[
"Harach",
"T.",
""
],
[
"Marungruang",
"N.",
""
],
[
"Dutilleul",
"N.",
""
],
[
"Cheatham",
"V.",
""
],
[
"Coy",
"K. D. Mc",
""
],
[
"Neher",
"J. J.",
""
],
[
"Jucker",
"M.",
""
],
[
"Fåk",
"F.",
""
],
[
"Lasser",
"T.",
""
],
[
"Bolmont",
"T.",
""
]
] | Alzheimer's disease is the most common form of dementia in the western world, however there is no cure available for this devastating neurodegenerative disorder. Despite clinical and experimental evidence implicating the intestinal microbiota in a number of brain disorders, its impact on Alzheimer's disease is not known. We generated a germ-free mouse model of Alzheimer's disease and discovered a drastic reduction of cerebral Ab amyloid pathology when compared to control Alzheimer's disease animals with intestinal microbiota. Sequencing bacterial 16S rRNA from fecal samples revealed a remarkable shift in the gut microbiota of conventionally-raised Alzheimer's disease mice as compared to healthy, wild-type mice. Colonization of germ-free Alzheimer mice with harvested microbiota from conventionally-raised Alzheimer mice dramatically increased cerebral Ab pathology. In contrast, colonization with microbiota from control wild-type mice was ineffective in increasing cerebral Ab levels. Our results indicate a microbial involvement in the development of Alzheimer's disease pathology, and suggest that microbiota may contribute to the development of neurodegenerative diseases. |
2404.02924 | Maxwell Wang | Maxwell H. Wang and Jukka-Pekka Onnela | Accounting for contact network uncertainty in epidemic inferences | 27 pages, 7 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When modeling the dynamics of infectious disease, the incorporation of
contact network information allows for the capture of the non-randomness and
heterogeneity of realistic contact patterns. Oftentimes, it is assumed that the
underlying contact pattern is known with perfect certainty. However, in
realistic settings, the observed data often serves as an imperfect proxy of the
actual contact patterns in the population. Furthermore, epidemics in the
real world are often not fully observed; event times such as infection and
recovery times may be missing. In order to conduct accurate inferences on
parameters of contagion spread, it is crucial to incorporate these sources of
uncertainty. In this paper, we propose the use of Mixture Density Network
compressed ABC (MDN-ABC) to learn informative summary statistics for the
available data. This method will allow for Bayesian inference on the epidemic
parameters of a contagious process, while accounting for imperfect observations
on the epidemic and the contact network. We will demonstrate the use of this
method on simulated epidemics and networks, and extend this framework to
analyze the spread of Tattoo Skin Disease (TSD) among bottlenose dolphins in
Shark Bay, Australia.
| [
{
"created": "Mon, 1 Apr 2024 15:17:22 GMT",
"version": "v1"
},
{
"created": "Tue, 16 Apr 2024 01:56:24 GMT",
"version": "v2"
}
] | 2024-04-17 | [
[
"Wang",
"Maxwell H.",
""
],
[
"Onnela",
"Jukka-Pekka",
""
]
] | When modeling the dynamics of infectious disease, the incorporation of contact network information allows for the capture of the non-randomness and heterogeneity of realistic contact patterns. Oftentimes, it is assumed that the underlying contact pattern is known with perfect certainty. However, in realistic settings, the observed data often serves as an imperfect proxy of the actual contact patterns in the population. Furthermore, the epidemic in the real world are often not fully observed; event times such as infection and recovery times may be missing. In order to conduct accurate inferences on parameters of contagion spread, it is crucial to incorporate these sources of uncertainty. In this paper, we propose the use of Mixture Density Network compressed ABC (MDN-ABC) to learn informative summary statistics for the available data. This method will allow for Bayesian inference on the epidemic parameters of a contagious process, while accounting for imperfect observations on the epidemic and the contact network. We will demonstrate the use of this method on simulated epidemics and networks, and extend this framework to analyze the spread of Tattoo Skin Disease (TSD) among bottlenose dolphins in Shark Bay, Australia. |
1209.5801 | Casey Richardson | Casey Richardson | Antibiotic resistant characteristics from 16S rRNA | null | null | null | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Microbiota have evolved to acclimate themselves to many
environments. Humanity is becoming ever more medicated and many of those
medications are antibiotics. Sadly, Microbiota are adapting to medication and
with each passing generation they become more difficult to subdue. The 16S
small subunit of bacterial ribosomal rRNA provides a wealth of information for
classifying the species level taxonomy of bacteria. Methodology/Principal
Findings: Experiments were collected utilizing broad and narrow spectrum
antibiotics, which act primarily on DNA. In each experiment a statistically
significant, unique and predictable pattern of sequential and thermodynamic
stability or instability was found to correlate to antibiotic resistance.
Conclusions/Significance: Classification of antibiotic resistance is possible
for some species and antibiotic combinations using the 16S rRNA sequential and
thermodynamic properties.
| [
{
"created": "Wed, 26 Sep 2012 00:39:15 GMT",
"version": "v1"
}
] | 2012-09-27 | [
[
"Richardson",
"Casey",
""
]
] | Background: Microbiota have evolved to acclimate themselves to many environments. Humanity is become ever increasingly medicated and many of those medications are antibiotics. Sadly, Microbiota are adapting to medication and with each passing generation they become more difficult to subdue. The 16S small subunit of bacterial ribosomal rRNA provides a wealth of information for classifying the species level taxonomy of bacteria. Methodology/Principal Findings: Experiments were collected utilizing broad and narrow spectrum antibiotics, which act primarily on DNA. In each experiment a statistically significant, unique and predictable pattern of sequential and thermodynamic stability or instability was found to correlate to antibiotic resistance. Conclusions/Significance: Classification of antibiotic resistance is possible for some species and antibiotic combinations using the 16S rRNA sequential and thermodynamic properties. |
1807.00789 | Guramrit Singh | Justin W. Mabin, Lauren A. Woodward, Robert Patton, Zhongxia Yi,
Mengxuan Jia, Vicki Wysocki, Ralf Bundschuh, Guramrit Singh | The exon junction complex undergoes a compositional switch that alters
mRNP structure and nonsense-mediated mRNA decay activity | null | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The exon junction complex (EJC) deposited upstream of mRNA exon junctions
shapes structure, composition and fate of spliced mRNA ribonucleoprotein
particles (mRNPs). To achieve this, the EJC core nucleates assembly of a
dynamic shell of peripheral proteins that function in diverse
post-transcriptional processes. To illuminate consequences of EJC composition
change, we purified EJCs from human cells via peripheral proteins RNPS1 and
CASC3. We show that EJC originates as an SR-rich mega-dalton sized RNP that
contains RNPS1 but lacks CASC3. After mRNP export to the cytoplasm and before
translation, the EJC undergoes a remarkable compositional and structural
remodeling into an SR-devoid monomeric complex that contains CASC3.
Surprisingly, RNPS1 is important for nonsense-mediated mRNA decay (NMD) in
general whereas CASC3 is needed for NMD of only select mRNAs. The promotion of
switch to CASC3-EJC slows down NMD. Overall, the EJC compositional switch
dramatically alters mRNP structure and specifies two distinct phases of
EJC-dependent NMD.
| [
{
"created": "Mon, 2 Jul 2018 17:18:16 GMT",
"version": "v1"
}
] | 2018-07-03 | [
[
"Mabin",
"Justin W.",
""
],
[
"Woodward",
"Lauren A.",
""
],
[
"Patton",
"Robert",
""
],
[
"Yi",
"Zhongxia",
""
],
[
"Jia",
"Mengxuan",
""
],
[
"Wysocki",
"Vicki",
""
],
[
"Bundschuh",
"Ralf",
""
],
[
"Singh",
"Guramrit",
""
]
] | The exon junction complex (EJC) deposited upstream of mRNA exon junctions shapes structure, composition and fate of spliced mRNA ribonucleoprotein particles (mRNPs). To achieve this, the EJC core nucleates assembly of a dynamic shell of peripheral proteins that function in diverse post-transcriptional processes. To illuminate consequences of EJC composition change, we purified EJCs from human cells via peripheral proteins RNPS1 and CASC3. We show that EJC originates as an SR-rich mega-dalton sized RNP that contains RNPS1 but lacks CASC3. After mRNP export to the cytoplasm and before translation, the EJC undergoes a remarkable compositional and structural remodeling into an SR-devoid monomeric complex that contains CASC3. Surprisingly, RNPS1 is important for nonsense-mediated mRNA decay (NMD) in general whereas CASC3 is needed for NMD of only select mRNAs. The promotion of switch to CASC3-EJC slows down NMD. Overall, the EJC compositional switch dramatically alters mRNP structure and specifies two distinct phases of EJC-dependent NMD. |
2301.09189 | S. Stanley Young | S. Stanley Young and Warren B. Kindzierski | Statistical reproducibility of meta-analysis research claims for medical
mask use in community settings to prevent COVID infection | 21 pages, 100 references, 3 appendices | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | The coronavirus pandemic (COVID) has been an exceptional test of current
scientific evidence that informs and shapes policy. Many US states, cities, and
counties implemented public orders for mask use on the notion that this
intervention would delay and flatten the epidemic peak and largely benefit
public health outcomes. P-value plotting was used to evaluate statistical
reproducibility of meta-analysis research claims of a benefit for medical
(surgical) mask use in community settings to prevent COVID infection. Eight
studies (seven meta-analyses, one systematic review) published between 1
January 2020 and 7 December 2022 were evaluated. Base studies were randomized
control trials with outcomes of medical diagnosis or laboratory-confirmed
diagnosis of viral (Influenza or COVID) illness. Self-reported viral illness
outcomes were excluded because of awareness bias. No evidence was observed for
a medical mask use benefit to prevent viral infections in six p-value plots
(five meta-analyses and one systematic review). Research claims of no benefit
in three meta-analyses and the systematic review were reproduced in p-value
plots. Research claims of a benefit in two meta-analyses were not reproduced in
p-value plots. Insufficient data were available to construct p-value plots for
two meta-analyses because of overreliance on self-reported outcomes. These
findings suggest a benefit for medical mask use in community settings to
prevent viral, including COVID infection, is unproven.
| [
{
"created": "Sun, 22 Jan 2023 19:51:01 GMT",
"version": "v1"
}
] | 2023-01-24 | [
[
"Young",
"S. Stanley",
""
],
[
"Kindzierski",
"Warren B.",
""
]
] | The coronavirus pandemic (COVID) has been an exceptional test of current scientific evidence that inform and shape policy. Many US states, cities, and counties implemented public orders for mask use on the notion that this intervention would delay and flatten the epidemic peak and largely benefit public health outcomes. P-value plotting was used to evaluate statistical reproducibility of meta-analysis research claims of a benefit for medical (surgical) mask use in community settings to prevent COVID infection. Eight studies (seven meta-analyses, one systematic review) published between 1 January 2020 and 7 December 2022 were evaluated. Base studies were randomized control trials with outcomes of medical diagnosis or laboratory-confirmed diagnosis of viral (Influenza or COVID) illness. Self-reported viral illness outcomes were excluded because of awareness bias. No evidence was observed for a medical mask use benefit to prevent viral infections in six p-value plots (five meta-analyses and one systematic review). Research claims of no benefit in three meta-analyses and the systematic review were reproduced in p-value plots. Research claims of a benefit in two meta-analyses were not reproduced in p-value plots. Insufficient data were available to construct p-value plots for two meta-analyses because of overreliance on self-reported outcomes. These findings suggest a benefit for medical mask use in community settings to prevent viral, including COVID infection, is unproven. |
1910.08423 | Betony Adams Ms | Betony Adams and Francesco Petruccione | Quantum effects in the brain: A review | The following article will be submitted to AVS Quantum Science | null | null | null | q-bio.NC quant-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the mid-1990s it was proposed that quantum effects in proteins known as
microtubules play a role in the nature of consciousness. The theory was largely
dismissed due to the fact that quantum effects were thought unlikely to occur
in biological systems, which are warm and wet and subject to decoherence.
However, the development of quantum biology now suggests otherwise. Quantum
effects have been implicated in photosynthesis, a process fundamental to life
on earth. They are also possibly at play in other biological processes such as
avian migration and olfaction. The microtubule mechanism of quantum
consciousness has been joined by other theories of quantum cognition. It has
been proposed that general anaesthetic, which switches off consciousness, does
this through quantum means, measured by changes in electron spin. The
tunnelling hypothesis developed in the context of olfaction has been applied to
the action of neurotransmitters. A recent theory outlines how quantum
entanglement between phosphorus nuclei might influence the firing of neurons.
These, and other theories, have contributed to a growing field of research that
investigates whether quantum effects might contribute to neural processing.
This review aims to investigate the current state of this research and how
fully the theory is supported by convincing experimental evidence. It also aims
to clarify the biological sites of these proposed quantum effects and how
progress made in the wider field of quantum biology might be relevant to the
specific case of the brain.
| [
{
"created": "Thu, 17 Oct 2019 15:03:17 GMT",
"version": "v1"
}
] | 2019-10-21 | [
[
"Adams",
"Betony",
""
],
[
"Petruccione",
"Francesco",
""
]
] | In the mid-1990s it was proposed that quantum effects in proteins known as microtubules play a role in the nature of consciousness. The theory was largely dismissed due to the fact that quantum effects were thought unlikely to occur in biological systems, which are warm and wet and subject to decoherence. However, the development of quantum biology now suggests otherwise. Quantum effects have been implicated in photosynthesis, a process fundamental to life on earth. They are also possibly at play in other biological processes such as avian migration and olfaction. The microtubule mechanism of quantum consciousness has been joined by other theories of quantum cognition. It has been proposed that general anaesthetic, which switches off consciousness, does this through quantum means, measured by changes in electron spin. The tunnelling hypothesis developed in the context of olfaction has been applied to the action of neurotransmitters. A recent theory outlines how quantum entanglement between phosphorus nuclei might influence the firing of neurons. These, and other theories, have contributed to a growing field of research that investigates whether quantum effects might contribute to neural processing. This review aims to investigate the current state of this research and how fully the theory is supported by convincing experimental evidence. It also aims to clarify the biological sites of these proposed quantum effects and how progress made in the wider field of quantum biology might be relevant to the specific case of the brain. |
2209.06211 | Jarek Duda Dr | Jarek Duda | Predicting probability distributions for cancer therapy drug selection
optimization | 4 pages, 4 figures | null | null | null | q-bio.QM cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large variability between cell lines brings a difficult optimization problem
of drug selection for cancer therapy. Standard approaches use prediction of
value for this purpose, corresponding e.g. to expected value of their
distribution. This article argues for the superiority of predicting the
entire probability distributions, proposing basic tools for this purpose. We
are mostly interested in the best drug in their batch to be tested - proper
optimization of their selection for extreme statistics requires knowledge of
the entire probability distributions, which for distributions of drug
properties among cell lines often turn out binomial, e.g. depending on
corresponding gene. Hence for basic prediction mechanism there is proposed
mixture of two Gaussians, trying to predict its weight based on additional
information.
| [
{
"created": "Tue, 13 Sep 2022 14:13:19 GMT",
"version": "v1"
}
] | 2022-09-15 | [
[
"Duda",
"Jarek",
""
]
] | Large variability between cell lines brings a difficult optimization problem of drug selection for cancer therapy. Standard approaches use prediction of value for this purpose, corresponding e.g. to expected value of their distribution. This article shows the superiority of working with, and predicting, the entire probability distributions, proposing basic tools for this purpose. We are mostly interested in the best drug in a batch to be tested: proper optimization of drug selection for extreme statistics requires knowledge of the entire probability distributions, which for distributions of drug properties among cell lines often turn out to be bimodal, e.g. depending on the corresponding gene. Hence, as a basic prediction mechanism, a mixture of two Gaussians is proposed, with its weight predicted from additional information. |
2005.08121 | Marta Varela | Marta Varela, Adity Roy, Jack Lee | A Survey of Pathways for Mechano-Electric Coupling in the Atria | Accepted for publication in Progress in Biophysics and Molecular
Biology (Special Issue on Mechanobiology), 2020 | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mechano-electric coupling (MEC) in atrial tissue has received sparse
investigation to date, despite the well-known association between chronic
atrial dilation and atrial fibrillation (AF). Of note, no fewer than six
different mechanisms pertaining to stretch-activated channels, cellular
capacitance and geometric effects have been identified in the literature as
potential players. In this mini review, we briefly survey each of these
pathways to MEC. We then perform computational simulations using single cell
and tissue models in the presence of various stretch regimes and MEC pathways. This
allows us to assess the relative significance of each pathway in determining
action potential duration, conduction velocity and rotor stability. For chronic
atrial stretch, we find that stretch-induced alterations in membrane
capacitance decrease conduction velocity and increase action potential
duration, in agreement with experimental findings. In the presence of
time-dependent passive atrial stretch, stretch-activated channels play the
largest role, leading to after-depolarizations and rotor hypermeandering. These
findings suggest that physiological atrial stretches, such as passive stretch
during the atrial reservoir phase, may play an important part in the mechanisms
of atrial arrhythmogenesis.
| [
{
"created": "Sat, 16 May 2020 22:36:44 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Sep 2020 15:49:01 GMT",
"version": "v2"
}
] | 2020-09-30 | [
[
"Varela",
"Marta",
""
],
[
"Roy",
"Adity",
""
],
[
"Lee",
"Jack",
""
]
] | Mechano-electric coupling (MEC) in atrial tissue has received sparse investigation to date, despite the well-known association between chronic atrial dilation and atrial fibrillation (AF). Of note, no fewer than six different mechanisms pertaining to stretch-activated channels, cellular capacitance and geometric effects have been identified in the literature as potential players. In this mini review, we briefly survey each of these pathways to MEC. We then perform computational simulations using single cell and tissue models in the presence of various stretch regimes and MEC pathways. This allows us to assess the relative significance of each pathway in determining action potential duration, conduction velocity and rotor stability. For chronic atrial stretch, we find that stretch-induced alterations in membrane capacitance decrease conduction velocity and increase action potential duration, in agreement with experimental findings. In the presence of time-dependent passive atrial stretch, stretch-activated channels play the largest role, leading to after-depolarizations and rotor hypermeandering. These findings suggest that physiological atrial stretches, such as passive stretch during the atrial reservoir phase, may play an important part in the mechanisms of atrial arrhythmogenesis. |