id stringlengths 9 13 | submitter stringlengths 4 48 | authors stringlengths 4 9.62k | title stringlengths 4 343 | comments stringlengths 2 480 ⌀ | journal-ref stringlengths 9 309 ⌀ | doi stringlengths 12 138 ⌀ | report-no stringclasses 277 values | categories stringlengths 8 87 | license stringclasses 9 values | orig_abstract stringlengths 27 3.76k | versions listlengths 1 15 | update_date stringlengths 10 10 | authors_parsed listlengths 1 147 | abstract stringlengths 24 3.75k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1806.08504 | Nathan Harding | Nathan Harding, Ramil Nigmatullin, Mikhail Prokopenko | Thermodynamic efficiency of contagions: A statistical mechanical
analysis of the SIS epidemic model | null | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a novel approach to the study of epidemics on networks as
thermodynamic phenomena, considering the thermodynamic efficiency of
contagions viewed as distributed computational processes. Modelling SIS
dynamics on a contact network statistical-mechanically, we follow the Maximum
Entropy principle to obtain steady state distributions and derive, under
certain assumptions, relevant thermodynamic quantities both analytically and
numerically. In particular, we obtain closed form solutions for some cases,
while interpreting key epidemic variables, such as the reproductive ratio $R_0$
of an SIS model, in a statistical mechanical setting. In addition, we
consider configuration and free entropy, as well as the Fisher Information, in
the epidemiological context. This allowed us to identify criticality and
distinct phases of epidemic processes. For each of the considered thermodynamic
quantities, we compare the analytical solutions informed by the Maximum Entropy
principle with the numerical estimates for SIS epidemics simulated on
Watts-Strogatz random graphs.
| [
{
"created": "Fri, 22 Jun 2018 05:55:22 GMT",
"version": "v1"
},
{
"created": "Mon, 22 Oct 2018 22:21:11 GMT",
"version": "v2"
}
] | 2018-10-24 | [
[
"Harding",
"Nathan",
""
],
[
"Nigmatullin",
"Ramil",
""
],
[
"Prokopenko",
"Mikhail",
""
]
] | We present a novel approach to the study of epidemics on networks as thermodynamic phenomena, considering the thermodynamic efficiency of contagions viewed as distributed computational processes. Modelling SIS dynamics on a contact network statistical-mechanically, we follow the Maximum Entropy principle to obtain steady state distributions and derive, under certain assumptions, relevant thermodynamic quantities both analytically and numerically. In particular, we obtain closed form solutions for some cases, while interpreting key epidemic variables, such as the reproductive ratio $R_0$ of an SIS model, in a statistical mechanical setting. In addition, we consider configuration and free entropy, as well as the Fisher Information, in the epidemiological context. This allowed us to identify criticality and distinct phases of epidemic processes. For each of the considered thermodynamic quantities, we compare the analytical solutions informed by the Maximum Entropy principle with the numerical estimates for SIS epidemics simulated on Watts-Strogatz random graphs. |
0810.5222 | Andrea De Martino | Andrea De Martino, Carlotta Martelli, Francesco A. Massucci | On the role of conserved moieties in shaping the robustness and
production capabilities of reaction networks | 6 pages | null | null | null | q-bio.MN cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study a simplified, solvable model of a fully-connected metabolic network
with constrained quenched disorder to mimic the conservation laws imposed by
stoichiometry on chemical reactions. Within a spin-glass type of approach, we
show that in the presence of a conserved metabolic pool the flux state
corresponding to maximal growth is stationary independently of the pool size.
In addition, and at odds with the case of unconstrained networks, the volume of
optimal flux configurations remains finite, indicating that the frustration
imposed by stoichiometric constraints, while reducing growth capabilities,
confers robustness and flexibility to the system. These results have a clear
biological interpretation and provide a basic, fully analytical explanation of
features recently observed in real metabolic networks.
| [
{
"created": "Wed, 29 Oct 2008 09:31:40 GMT",
"version": "v1"
}
] | 2008-10-30 | [
[
"De Martino",
"Andrea",
""
],
[
"Martelli",
"Carlotta",
""
],
[
"Massucci",
"Francesco A.",
""
]
] | We study a simplified, solvable model of a fully-connected metabolic network with constrained quenched disorder to mimic the conservation laws imposed by stoichiometry on chemical reactions. Within a spin-glass type of approach, we show that in the presence of a conserved metabolic pool the flux state corresponding to maximal growth is stationary independently of the pool size. In addition, and at odds with the case of unconstrained networks, the volume of optimal flux configurations remains finite, indicating that the frustration imposed by stoichiometric constraints, while reducing growth capabilities, confers robustness and flexibility to the system. These results have a clear biological interpretation and provide a basic, fully analytical explanation of features recently observed in real metabolic networks. |
q-bio/0608036 | Tobias Ambjornsson | Tobias Ambjornsson, Suman K. Banik, Oleg Krichevsky, Ralf Metzler | Sequence sensitivity of breathing dynamics in heteropolymer DNA | 4 pages, 5 figures, to appear in Physical Review Letters | null | 10.1103/PhysRevLett.97.128105 | null | q-bio.BM cond-mat.stat-mech | null | We study the fluctuation dynamics of localized denaturation bubbles in
heteropolymer DNA with a master equation and complementary stochastic
simulation based on novel DNA stability data. A significant dependence of
opening probability and waiting time between bubble events on the local DNA
sequence is revealed and quantified for a biological sequence of the T7
bacteriophage. Quantitative agreement with data from fluorescence correlation
spectroscopy (FCS) is demonstrated.
| [
{
"created": "Thu, 24 Aug 2006 14:47:46 GMT",
"version": "v1"
}
] | 2009-11-13 | [
[
"Ambjornsson",
"Tobias",
""
],
[
"Banik",
"Suman K.",
""
],
[
"Krichevsky",
"Oleg",
""
],
[
"Metzler",
"Ralf",
""
]
] | We study the fluctuation dynamics of localized denaturation bubbles in heteropolymer DNA with a master equation and complementary stochastic simulation based on novel DNA stability data. A significant dependence of opening probability and waiting time between bubble events on the local DNA sequence is revealed and quantified for a biological sequence of the T7 bacteriophage. Quantitative agreement with data from fluorescence correlation spectroscopy (FCS) is demonstrated. |
1408.6079 | Gerrit Ansmann | Gerrit Ansmann, Klaus Lehnertz | Surrogate-assisted analysis of weighted functional brain networks | null | Journal of Neuroscience Methods 208, 165-172 (2012) | 10.1016/j.jneumeth.2012.05.008 | null | q-bio.NC physics.med-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Graph-theoretical analysis of complex brain networks is a rapidly evolving
field with a strong impact on neuroscientific and related clinical research.
Due to a number of confounding variables, however, a reliable and meaningful
characterization of functional brain networks in particular is a major
challenge. Addressing this problem, we present an analysis approach for
weighted networks that makes use of surrogate networks with preserved edge
weights or vertex strengths. We first investigate whether characteristics of
weighted networks are influenced by trivial properties of the edge weights or
vertex strengths (e.g., their standard deviations). If so, these influences are
then effectively segregated with an appropriate surrogate normalization of the
respective network characteristic. We demonstrate this approach by
re-examining, in a time-resolved manner, weighted functional brain networks of
epilepsy patients and control subjects derived from simultaneous EEG/MEG
recordings during different behavioral states. We show that this
surrogate-assisted analysis approach reveals complementary information about
these networks, can aid with their interpretation, and thus can prevent
deriving inappropriate conclusions.
| [
{
"created": "Tue, 26 Aug 2014 11:32:27 GMT",
"version": "v1"
}
] | 2014-08-27 | [
[
"Ansmann",
"Gerrit",
""
],
[
"Lehnertz",
"Klaus",
""
]
] | Graph-theoretical analysis of complex brain networks is a rapidly evolving field with a strong impact on neuroscientific and related clinical research. Due to a number of confounding variables, however, a reliable and meaningful characterization of functional brain networks in particular is a major challenge. Addressing this problem, we present an analysis approach for weighted networks that makes use of surrogate networks with preserved edge weights or vertex strengths. We first investigate whether characteristics of weighted networks are influenced by trivial properties of the edge weights or vertex strengths (e.g., their standard deviations). If so, these influences are then effectively segregated with an appropriate surrogate normalization of the respective network characteristic. We demonstrate this approach by re-examining, in a time-resolved manner, weighted functional brain networks of epilepsy patients and control subjects derived from simultaneous EEG/MEG recordings during different behavioral states. We show that this surrogate-assisted analysis approach reveals complementary information about these networks, can aid with their interpretation, and thus can prevent deriving inappropriate conclusions. |
1101.3343 | Gang Fang | Gang Fang, Wen Wang, Vanja Paunic, Benjamin Oately, Majda Haznadar,
Michael Steinbach, Brian Van Ness, Chad L. Myers, Vipin Kumar | Construction and Functional Analysis of Human Genetic Interaction
Networks with Genome-wide Association Data | null | null | null | null | q-bio.MN q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Genetic interaction measures how different genes collectively contribute to a
phenotype, and can reveal functional compensation and buffering between
pathways under genetic perturbations. Recently, genome-wide screening for
genetic interactions has revealed genetic interaction networks that provide
novel insights either when analyzed by themselves or when integrated with other
functional genomic datasets. For higher eukaryotes such as human, the above
reverse-genetics approaches are not straightforward since the phenotypes of
interest for higher eukaryotes are difficult to study in a cell-based assay. We
propose a general framework for constructing and analyzing human genetic
interaction networks from genome-wide single nucleotide polymorphism (SNP) data
used for case-control studies on complex diseases. Specifically, the approach
contains three major steps: (1) estimating SNP-SNP genetic interactions, (2)
identifying linkage disequilibrium (LD) blocks and mapping SNP-SNP interactions
to block-block interactions, and (3) functional mapping for LD blocks. We
performed two sets of functional analyses for each of the six datasets used in
the paper, and demonstrated that (i) the constructed genetic interaction
networks are supported by functional evidence from independent biological
databases, and (ii) the network can be used to discover pairs of compensatory
gene modules (between-pathway models) in their joint association with a disease
phenotype. The proposed framework should provide novel insights beyond existing
approaches that either ignore interactions between SNPs or model different
SNP-SNP pairs with genetic interactions separately. Furthermore, our study
provides evidence that some of the core properties of genetic interaction
networks based on reverse genetics in model organisms like yeast are also
present in genetic interactions revealed by natural variation in human
populations.
| [
{
"created": "Mon, 17 Jan 2011 22:10:09 GMT",
"version": "v1"
}
] | 2015-03-17 | [
[
"Fang",
"Gang",
""
],
[
"Wang",
"Wen",
""
],
[
"Paunic",
"Vanja",
""
],
[
"Oately",
"Benjamin",
""
],
[
"Haznadar",
"Majda",
""
],
[
"Steinbach",
"Michael",
""
],
[
"Van Ness",
"Brian",
""
],
[
"Mye... | Genetic interaction measures how different genes collectively contribute to a phenotype, and can reveal functional compensation and buffering between pathways under genetic perturbations. Recently, genome-wide screening for genetic interactions has revealed genetic interaction networks that provide novel insights either when analyzed by themselves or when integrated with other functional genomic datasets. For higher eukaryotes such as human, the above reverse-genetics approaches are not straightforward since the phenotypes of interest for higher eukaryotes are difficult to study in a cell based assay. We propose a general framework for constructing and analyzing human genetic interaction networks from genome-wide single nucleotide polymorphism (SNP) data used for case-control studies on complex diseases. Specifically, the approach contains three major steps: (1) estimating SNP-SNP genetic interactions, (2) identifying linkage disequilibrium (LD) blocks and mapping SNP-SNP interactions to block-block interactions, and (3) functional mapping for LD blocks. We performed two sets of functional analyses for each of the six datasets used in the paper, and demonstrated that (i) the constructed genetic interaction networks are supported by functional evidence from independent biological databases, and (ii) the network can be used to discover pairs of compensatory gene modules (between-pathway models) in their joint association with a disease phenotype. The proposed framework should provide novel insights beyond existing approaches that either ignore interactions between SNPs or model different SNP-SNP pairs with genetic interactions separately. Furthermore, our study provides evidence that some of the core properties of genetic interaction networks based on reverse genetics in model organisms like yeast are also present in genetic interactions revealed by natural variation in human populations. |
1706.08041 | Pavitra Krishnaswamy | Pavitra Krishnaswamy, Gabriel Obregon-Henao, Jyrki Ahveninen, Sheraz
Khan, Behtash Babadi, Juan Eugenio Iglesias, Matti S. Hamalainen, Patrick L.
Purdon | Sparsity Enables Estimation of both Subcortical and Cortical Activity
from MEG and EEG | 12 pages with 6 figures | null | 10.1073/pnas.1705414114 | null | q-bio.QM q-bio.NC stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Subcortical structures play a critical role in brain function. However,
options for assessing electrophysiological activity in these structures are
limited. Electromagnetic fields generated by neuronal activity in subcortical
structures can be recorded non-invasively using magnetoencephalography (MEG)
and electroencephalography (EEG). However, these subcortical signals are much
weaker than those due to cortical activity. In addition, we show here that it
is difficult to resolve subcortical sources, because distributed cortical
activity can explain the MEG and EEG patterns due to deep sources. We then
demonstrate that if the cortical activity can be assumed to be spatially
sparse, both cortical and subcortical sources can be resolved with M/EEG.
Building on this insight, we develop a novel hierarchical sparse inverse
solution for M/EEG. We assess the performance of this algorithm on realistic
simulations and auditory evoked response data and show that thalamic and
brainstem sources can be correctly estimated in the presence of cortical
activity. Our analysis and method suggest new opportunities and offer practical
tools for characterizing electrophysiological activity in the subcortical
structures of the human brain.
| [
{
"created": "Sun, 25 Jun 2017 06:52:23 GMT",
"version": "v1"
}
] | 2022-06-01 | [
[
"Krishnaswamy",
"Pavitra",
""
],
[
"Obregon-Henao",
"Gabriel",
""
],
[
"Ahveninen",
"Jyrki",
""
],
[
"Khan",
"Sheraz",
""
],
[
"Babadi",
"Behtash",
""
],
[
"Iglesias",
"Juan Eugenio",
""
],
[
"Hamalainen",
"Mat... | Subcortical structures play a critical role in brain function. However, options for assessing electrophysiological activity in these structures are limited. Electromagnetic fields generated by neuronal activity in subcortical structures can be recorded non-invasively using magnetoencephalography (MEG) and electroencephalography (EEG). However, these subcortical signals are much weaker than those due to cortical activity. In addition, we show here that it is difficult to resolve subcortical sources, because distributed cortical activity can explain the MEG and EEG patterns due to deep sources. We then demonstrate that if the cortical activity can be assumed to be spatially sparse, both cortical and subcortical sources can be resolved with M/EEG. Building on this insight, we develop a novel hierarchical sparse inverse solution for M/EEG. We assess the performance of this algorithm on realistic simulations and auditory evoked response data and show that thalamic and brainstem sources can be correctly estimated in the presence of cortical activity. Our analysis and method suggest new opportunities and offer practical tools for characterizing electrophysiological activity in the subcortical structures of the human brain. |
1806.08167 | Thierry Mora | Christophe Gardella, Olivier Marre, Thierry Mora | Modeling the correlated activity of neural populations: A review | null | Neural Computation 31(2) 233-269 (2019) | 10.1162/neco_a_01154 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The principles of neural encoding and computations are inherently collective
and usually involve large populations of interacting neurons with highly
correlated activities. While theories of neural function have long recognized
the importance of collective effects in populations of neurons, only in the
past two decades has it become possible to record from many cells
simultaneously using advanced experimental techniques with single-spike
resolution, and to relate these correlations to function and behaviour. This
review focuses on the modeling and inference approaches that have been recently
developed to describe the correlated spiking activity of populations of
neurons. We cover a variety of models describing correlations between pairs of
neurons as well as between larger groups, synchronous or delayed in time, with
or without the explicit influence of the stimulus, and with or without latent
variables. We discuss the advantages and drawbacks of each method, as well as
the computational challenges related to their application to recordings of ever
larger populations.
| [
{
"created": "Thu, 21 Jun 2018 11:00:13 GMT",
"version": "v1"
}
] | 2019-05-14 | [
[
"Gardella",
"Christophe",
""
],
[
"Marre",
"Olivier",
""
],
[
"Mora",
"Thierry",
""
]
] | The principles of neural encoding and computations are inherently collective and usually involve large populations of interacting neurons with highly correlated activities. While theories of neural function have long recognized the importance of collective effects in populations of neurons, only in the past two decades has it become possible to record from many cells simultaneously using advanced experimental techniques with single-spike resolution, and to relate these correlations to function and behaviour. This review focuses on the modeling and inference approaches that have been recently developed to describe the correlated spiking activity of populations of neurons. We cover a variety of models describing correlations between pairs of neurons as well as between larger groups, synchronous or delayed in time, with or without the explicit influence of the stimulus, and with or without latent variables. We discuss the advantages and drawbacks of each method, as well as the computational challenges related to their application to recordings of ever larger populations. |
1010.4735 | David Wild | Nikolas S. Burkoff, Csilla Varnai, Stephen A. Wells and David L. Wild | Exploring the Energy Landscapes of Protein Folding Simulations with
Bayesian Computation | 28 pages, 16 figures. Under revision for Biophysical Journal | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Nested sampling is a Bayesian sampling technique developed to explore
probability distributions localised in an exponentially small area of the
parameter space. The algorithm provides both posterior samples and an estimate
of the evidence (marginal likelihood) of the model. The nested sampling
algorithm also provides an efficient way to calculate free energies and the
expectation value of thermodynamic observables at any temperature, through a
simple post-processing of the output. Previous applications of the algorithm
have yielded large efficiency gains over other sampling techniques, including
parallel tempering (replica exchange). In this paper we describe a parallel
implementation of the nested sampling algorithm and its application to the
problem of protein folding in a Go-type force field of empirical potentials
that were designed to stabilize secondary structure elements in
room-temperature simulations. We demonstrate the method by conducting folding
simulations on a number of small proteins which are commonly used for testing
protein folding procedures: protein G, the SH3 domain of Src tyrosine kinase
and chymotrypsin inhibitor 2. A topological analysis of the posterior samples
is performed to produce energy landscape charts, which give a high level
description of the potential energy surface for the protein folding
simulations. These charts provide qualitative insights into both the folding
process and the nature of the model and force field used.
| [
{
"created": "Fri, 22 Oct 2010 15:19:33 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Jan 2011 17:49:06 GMT",
"version": "v2"
},
{
"created": "Wed, 27 Jul 2011 13:51:08 GMT",
"version": "v3"
},
{
"created": "Thu, 22 Dec 2011 17:13:00 GMT",
"version": "v4"
}
] | 2015-03-17 | [
[
"Burkoff",
"Nikolas S.",
""
],
[
"Varnai",
"Csilla",
""
],
[
"Wells",
"Stephen A.",
""
],
[
"Wild",
"David L.",
""
]
] | Nested sampling is a Bayesian sampling technique developed to explore probability distributions localised in an exponentially small area of the parameter space. The algorithm provides both posterior samples and an estimate of the evidence (marginal likelihood) of the model. The nested sampling algorithm also provides an efficient way to calculate free energies and the expectation value of thermodynamic observables at any temperature, through a simple post-processing of the output. Previous applications of the algorithm have yielded large efficiency gains over other sampling techniques, including parallel tempering (replica exchange). In this paper we describe a parallel implementation of the nested sampling algorithm and its application to the problem of protein folding in a Go-type force field of empirical potentials that were designed to stabilize secondary structure elements in room-temperature simulations. We demonstrate the method by conducting folding simulations on a number of small proteins which are commonly used for testing protein folding procedures: protein G, the SH3 domain of Src tyrosine kinase and chymotrypsin inhibitor 2. A topological analysis of the posterior samples is performed to produce energy landscape charts, which give a high level description of the potential energy surface for the protein folding simulations. These charts provide qualitative insights into both the folding process and the nature of the model and force field used. |
1009.5173 | Laurent Jacob | Laurent Jacob, Pierre Neuvial, Sandrine Dudoit | Gains in Power from Structured Two-Sample Tests of Means on Graphs | null | Annals of Applied Statistics 2012, Vol. 6, No. 2, 561-600 | 10.1214/11-AOAS528 | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider multivariate two-sample tests of means, where the location shift
between the two populations is expected to be related to a known graph
structure. An important application of such tests is the detection of
differentially expressed genes between two patient populations, as shifts in
expression levels are expected to be coherent with the structure of graphs
reflecting gene properties such as biological process, molecular function,
regulation, or metabolism. For a fixed graph of interest, we demonstrate that
accounting for graph structure can yield more powerful tests under the
assumption of smooth distribution shift on the graph. We also investigate the
identification of non-homogeneous subgraphs of a given large graph, which poses
both computational and multiple testing problems. The relevance and benefits of
the proposed approach are illustrated on synthetic data and on breast cancer
gene expression data analyzed in the context of KEGG pathways.
| [
{
"created": "Mon, 27 Sep 2010 07:21:22 GMT",
"version": "v1"
}
] | 2014-05-16 | [
[
"Jacob",
"Laurent",
""
],
[
"Neuvial",
"Pierre",
""
],
[
"Dudoit",
"Sandrine",
""
]
] | We consider multivariate two-sample tests of means, where the location shift between the two populations is expected to be related to a known graph structure. An important application of such tests is the detection of differentially expressed genes between two patient populations, as shifts in expression levels are expected to be coherent with the structure of graphs reflecting gene properties such as biological process, molecular function, regulation, or metabolism. For a fixed graph of interest, we demonstrate that accounting for graph structure can yield more powerful tests under the assumption of smooth distribution shift on the graph. We also investigate the identification of non-homogeneous subgraphs of a given large graph, which poses both computational and multiple testing problems. The relevance and benefits of the proposed approach are illustrated on synthetic data and on breast cancer gene expression data analyzed in the context of KEGG pathways. |
2212.06308 | Luis Osorio-Olvera | Jorge Sober\'on and Luis Osorio-Olvera | A Dynamic Theory of the Area of Distribution | 45 pages including appendixes, 12 figures, submitted to Journal of
Biogeography | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by/4.0/ | Aims To propose and analyze a general, dynamic, process-oriented theory of
the area of distribution. Methods The area of distribution is modelled by
combining (by multiplication) three matrices: one matrix represents movements,
another niche tolerances, and a third, biotic interactions. Results are derived
from general properties of this product and from simulation of a cellular
automaton defined in terms of the matrix operations. Everything is implemented
practically in an R package. Results Results are obtained by simulation and by
mathematical analysis. We show that the mid-domain effect is a direct
consequence of dispersal; that including movements in Ecological Niche
Modeling significantly affects results, but cannot be done without choosing an
ancestral area of distribution. We discuss ways of estimating such ancestral
areas. We show that, in our approach, movements and niche effects are mixed in
ways almost impossible to disentangle, and show this is a consequence of the
singularity of a matrix. We introduce a tool (the
Connectivity-Suitability-Dispersal plot) to extend the results of simple niche
modeling to understand the effects of dispersal. Main conclusions The
conceptually straightforward scheme we present for the area of distribution
integrates, in a mathematically sound and computationally feasible way, several
key ideas in biogeography: the geographic and environmental matrix, the
Grinnellian niche, dispersal capacity and the ancestral area of origin of
groups of species. We show that although full simulations are indispensable to
obtain the dynamics of an area of distribution, interesting results can be
derived simply by analyzing the matrices representing the dynamics.
| [
{
"created": "Tue, 13 Dec 2022 01:28:22 GMT",
"version": "v1"
}
] | 2022-12-14 | [
[
"Soberón",
"Jorge",
""
],
[
"Osorio-Olvera",
"Luis",
""
]
] | Aims To propose and analyze a general, dynamic, process-oriented theory of the area of distribution. Methods The area of distribution is modelled by combining (by multiplication) three matrices: one matrix represents movements, another niche tolerances, and a third, biotic interactions. Results are derived from general properties of this product and from simulation of a cellular automaton defined in terms of the matrix operations. Everything is implemented practically in an R package. Results Results are obtained by simulation and by mathematical analysis. We show that the mid-domain effect is a direct consequence of dispersal; that including movements in Ecological Niche Modeling significantly affects results, but cannot be done without choosing an ancestral area of distribution. We discuss ways of estimating such ancestral areas. We show that, in our approach, movements and niche effects are mixed in ways almost impossible to disentangle, and show this is a consequence of the singularity of a matrix. We introduce a tool (the Connectivity-Suitability-Dispersal plot) to extend the results of simple niche modeling to understand the effects of dispersal. Main conclusions The conceptually straightforward scheme we present for the area of distribution integrates, in a mathematically sound and computationally feasible way, several key ideas in biogeography: the geographic and environmental matrix, the Grinnellian niche, dispersal capacity and the ancestral area of origin of groups of species. We show that although full simulations are indispensable to obtain the dynamics of an area of distribution, interesting results can be derived simply by analyzing the matrices representing the dynamics. |
1501.06529 | David Bardos | David C. Bardos, Gurutzeta Guillera-Arroita and Brendan A. Wintle | Valid auto-models for spatially autocorrelated occupancy and abundance
data | Typos corrected in Table 1. Note that defaults in R package 'spdep'
have changed in response to this paper; some results using defaults are
therefore now version-dependent | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Auto-logistic and related auto-models, implemented approximately as
autocovariate regression, provide simple and direct modelling of spatial
dependence. The autologistic model has been widely applied in ecology since
Augustin, Mugglestone and Buckland (J. Appl. Ecol., 1996, 33, 339) analysed red
deer census data using a hybrid estimation approach, combining maximum
pseudo-likelihood estimation with Gibbs sampling of missing data. However
Dormann (Ecol. Model., 2007, 207, 234) questioned the validity of auto-logistic
regression, giving examples of apparent underestimation of covariate parameters
in analysis of simulated "snouter" data. Dormann et al. (Ecography, 2007, 30,
609) extended this analysis to auto-Poisson and auto-normal models, reporting
similar anomalies. All the above studies employ neighbourhood weighting schemes
inconsistent with conditions (Besag, J. R. Stat. Soc., Ser. B, 1974, 36, 192)
required for auto-model validity; furthermore the auto-Poisson analysis fails
to exclude cooperative interactions. We show that all "snouter" anomalies are
resolved by correct auto-model implementation. Re-analysis of the red deer data
shows that invalid neighbourhood weightings generate only small estimation
errors for the full dataset, but larger errors occur on geographic subsamples.
A substantial fraction of papers applying auto-logistic regression to
ecological data use these invalid weightings, which are default options in the
widely used "spdep" spatial dependence package for R. Auto-logistic analyses
using invalid neighbourhood weightings will be erroneous to an extent that can
vary widely. These analyses can easily be corrected by using valid
neighbourhood weightings available in "spdep". The hybrid estimation approach
for missing data is readily adapted for valid neighbourhood weighting schemes
and is implemented here in R for application to sparse presence-absence data.
| [
{
"created": "Mon, 26 Jan 2015 19:14:57 GMT",
"version": "v1"
},
{
"created": "Tue, 27 Jan 2015 16:51:07 GMT",
"version": "v2"
},
{
"created": "Sun, 15 Feb 2015 14:30:45 GMT",
"version": "v3"
},
{
"created": "Thu, 30 Apr 2015 17:17:06 GMT",
"version": "v4"
}
] | 2015-05-01 | [
[
"Bardos",
"David C.",
""
],
[
"Guillera-Arroita",
"Gurutzeta",
""
],
[
"Wintle",
"Brendan A.",
""
]
] | Auto-logistic and related auto-models, implemented approximately as autocovariate regression, provide simple and direct modelling of spatial dependence. The autologistic model has been widely applied in ecology since Augustin, Mugglestone and Buckland (J. Appl. Ecol., 1996, 33, 339) analysed red deer census data using a hybrid estimation approach, combining maximum pseudo-likelihood estimation with Gibbs sampling of missing data. However Dormann (Ecol. Model., 2007, 207, 234) questioned the validity of auto-logistic regression, giving examples of apparent underestimation of covariate parameters in analysis of simulated "snouter" data. Dormann et al. (Ecography, 2007, 30, 609) extended this analysis to auto-Poisson and auto-normal models, reporting similar anomalies. All the above studies employ neighbourhood weighting schemes inconsistent with conditions (Besag, J. R. Stat. Soc., Ser. B, 1974, 36, 192) required for auto-model validity; furthermore the auto-Poisson analysis fails to exclude cooperative interactions. We show that all "snouter" anomalies are resolved by correct auto-model implementation. Re-analysis of the red deer data shows that invalid neighbourhood weightings generate only small estimation errors for the full dataset, but larger errors occur on geographic subsamples. A substantial fraction of papers applying auto-logistic regression to ecological data use these invalid weightings, which are default options in the widely used "spdep" spatial dependence package for R. Auto-logistic analyses using invalid neighbourhood weightings will be erroneous to an extent that can vary widely. These analyses can easily be corrected by using valid neighbourhood weightings available in "spdep". The hybrid estimation approach for missing data is readily adapted for valid neighbourhood weighting schemes and is implemented here in R for application to sparse presence-absence data. |
1206.1029 | Gopalakrishnan Tr Nair | Baby Jerald, T. R. Gopalakrishnan Nair | Influenza Virus Vaccine Efficacy Based On Conserved Sequence Alignment | International Conference on Biomedical Engineering (ICoBE), 2012 Date
of Conference: 27-28 Feb. 2012 Page(s): 327 - 329, 2 figures, 3 pages | null | 10.1109/ICoBE.2012.6179031 | null | q-bio.OT | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid outbreak of bird flu challenges the prospect of an effective vaccine
for the upcoming years. Recent research has established different norms to
eliminate flu pandemics. This can be made possible with skilled experimental
analyses and by tracking the recent virulent strain, and can be broadly
applicable with effective testing of vaccine efficacy. Every year the World
Health Organization (WHO) reveals the administration of drugs and vaccines to
arrest the spread of flu among the population. As there are recurrent failures
in priming the population, the complete eradication of the flu pandemic still
appears to be an unresolved problem. To overcome the current scenario,
high-level theoretical and practical research efforts are required, which can
enhance the scope of this field. Recent advancements also allow researchers
to develop effective vaccines to meet the emerging flu types. Only
standardized vaccination among the population at the time of flu pandemics
will revolutionize the current propositions against influenza virus. This
paper shows the deficiencies of vaccine fitness research, as the cited
evidence and studies report failures and low vaccine efficacy even after
priming the population. It also presents a simple experimental approach for
detecting the effective vaccine among the vaccines announced by WHO.
| [
{
"created": "Sat, 2 Jun 2012 17:49:26 GMT",
"version": "v1"
}
] | 2012-06-06 | [
[
"Jerald",
"Baby",
""
],
[
"Nair",
"T. R. Gopalakrishnan",
""
]
] | The rapid outbreak of bird flu challenges the prospect of an effective vaccine for the upcoming years. Recent research has established different norms to eliminate flu pandemics. This can be made possible with skilled experimental analyses and by tracking the recent virulent strain, and can be broadly applicable with effective testing of vaccine efficacy. Every year the World Health Organization (WHO) reveals the administration of drugs and vaccines to arrest the spread of flu among the population. As there are recurrent failures in priming the population, the complete eradication of the flu pandemic still appears to be an unresolved problem. To overcome the current scenario, high-level theoretical and practical research efforts are required, which can enhance the scope of this field. Recent advancements also allow researchers to develop effective vaccines to meet the emerging flu types. Only standardized vaccination among the population at the time of flu pandemics will revolutionize the current propositions against influenza virus. This paper shows the deficiencies of vaccine fitness research, as the cited evidence and studies report failures and low vaccine efficacy even after priming the population. It also presents a simple experimental approach for detecting the effective vaccine among the vaccines announced by WHO. |
1406.2497 | Jennifer Conroy PhD | Jennifer Conroy, Navin K. Verma, Ronan J. Smith, Ehsan Rezvani, Georg
S. Duesberg, Jonathan N. Coleman, Yuri Volkov | Biocompatibility of Pristine Graphene Monolayers, Nanosheets and Thin
Films | null | null | null | null | q-bio.CB cond-mat.mes-hall physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There is an increasing interest to develop nanoscale biocompatible graphene
structures due to their desirable physicochemical properties, unlimited
application opportunities and scalable production. Here we report the
preparation, characterization and biocompatibility assessment of novel graphene
flakes and their enabled thin films suitable for a wide range of biomedical and
electronic applications. Graphene flakes were synthesized by a chemical vapour
deposition method or a liquid-phase exfoliation procedure and then thin films
were prepared by transferring graphene onto glass coverslips. Raman
spectroscopy and transmission electron microscopy confirmed a predominantly
monolayer and a high crystalline quality formation of graphene. The
biocompatibility assessment of graphene thin films and graphene flakes was
performed using cultured human lung epithelial cell line A549 employing a
multimodal approach incorporating automated imaging, high content screening,
real-time impedance sensing in combination with biochemical assays. No
detectable changes in the cellular morphology or attachment of A549 cells
growing on graphene thin films or cells exposed to graphene flakes (0.1 to 5
ug/mL) for 4 to 72 h were observed. Graphene treatments caused a very low level
of increase in cellular production of reactive oxygen species in A549 cells,
but no detectable damage to the nuclei such as changes in morphology,
condensation or fragmentation was observed. In contrast, carbon black proved to
be significantly more toxic than the graphene. These data open up a promising
view of using graphene enabled composites for a diverse scope of safer
applications.
| [
{
"created": "Tue, 10 Jun 2014 10:21:49 GMT",
"version": "v1"
}
] | 2014-06-11 | [
[
"Conroy",
"Jennifer",
""
],
[
"Verma",
"Navin K.",
""
],
[
"Smith",
"Ronan J.",
""
],
[
"Rezvani",
"Ehsan",
""
],
[
"Duesberg",
"Georg S.",
""
],
[
"Coleman",
"Jonathan N.",
""
],
[
"Volkov",
"Yuri",
""
]... | There is an increasing interest to develop nanoscale biocompatible graphene structures due to their desirable physicochemical properties, unlimited application opportunities and scalable production. Here we report the preparation, characterization and biocompatibility assessment of novel graphene flakes and their enabled thin films suitable for a wide range of biomedical and electronic applications. Graphene flakes were synthesized by a chemical vapour deposition method or a liquid-phase exfoliation procedure and then thin films were prepared by transferring graphene onto glass coverslips. Raman spectroscopy and transmission electron microscopy confirmed a predominantly monolayer and a high crystalline quality formation of graphene. The biocompatibility assessment of graphene thin films and graphene flakes was performed using cultured human lung epithelial cell line A549 employing a multimodal approach incorporating automated imaging, high content screening, real-time impedance sensing in combination with biochemical assays. No detectable changes in the cellular morphology or attachment of A549 cells growing on graphene thin films or cells exposed to graphene flakes (0.1 to 5 ug/mL) for 4 to 72 h were observed. Graphene treatments caused a very low level of increase in cellular production of reactive oxygen species in A549 cells, but no detectable damage to the nuclei such as changes in morphology, condensation or fragmentation was observed. In contrast, carbon black proved to be significantly more toxic than the graphene. These data open up a promising view of using graphene enabled composites for a diverse scope of safer applications. |
1305.3801 | Ra\'ul Salgado-Garcia | R. Salgado-Garcia and E. Ugalde | Symbolic Complexity for Nucleotide Sequences: A Sign of the Genome
Structure | 4 pages, 3 figures | J. Phys. A: Math. Theor. 49 (2016) 445601 (21pp) | 10.1088/1751-8113/49/44/445601 | null | q-bio.PE math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a method to estimate the complexity function of symbolic
dynamical systems from a finite sequence of symbols. We test this complexity
estimator on several symbolic dynamical systems whose complexity functions are
known exactly. We use this technique to estimate the complexity function for
genomes of several organisms under the assumption that a genome is a sequence
produced by an (unknown) dynamical system. We show that the genomes of several
organisms share the property that their complexity functions behave
exponentially for words of small length $\ell$ ($0\leq \ell \leq 10$) and
linearly for word lengths in the range $11 \leq \ell \leq 50$. It is also found
that species which are phylogenetically close to each other have similar
complexity functions calculated from a sample of their corresponding coding
regions.
| [
{
"created": "Thu, 16 May 2013 13:43:50 GMT",
"version": "v1"
}
] | 2017-01-19 | [
[
"Salgado-Garcia",
"R.",
""
],
[
"Ugalde",
"E.",
""
]
] | We introduce a method to estimate the complexity function of symbolic dynamical systems from a finite sequence of symbols. We test this complexity estimator on several symbolic dynamical systems whose complexity functions are known exactly. We use this technique to estimate the complexity function for genomes of several organisms under the assumption that a genome is a sequence produced by an (unknown) dynamical system. We show that the genomes of several organisms share the property that their complexity functions behave exponentially for words of small length $\ell$ ($0\leq \ell \leq 10$) and linearly for word lengths in the range $11 \leq \ell \leq 50$. It is also found that species which are phylogenetically close to each other have similar complexity functions calculated from a sample of their corresponding coding regions. |
1910.03263 | Tobias B\"uscher | Tobias B\"uscher, Nirmalendu Ganai, Gerhard Gompper, Jens Elgeti | Tissue evolution: Mechanical interplay of adhesion, pressure, and
heterogeneity | 13 pages, 7 figures | New J. Phys. 22, 033048 (2020) | 10.1088/1367-2630/ab74a5 | null | q-bio.PE q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | The evolution of various competing cell types in tissues, and the resulting
persistent tissue population, is studied numerically and analytically in a
particle-based model of active tissues. Mutations change the properties of
cells in various ways, including their mechanical properties. Each mutation
results in an advantage or disadvantage to grow in the competition between
different cell types. While changes in signaling processes and biochemistry
play an important role, we focus on changes in the mechanical properties by
studying the result of variation of growth force and adhesive
cross-interactions between cell types. For independent mutations of growth
force and adhesion strength, the tissue evolves towards cell types with high
growth force and low internal adhesion strength, as both increase the
homeostatic pressure. Motivated by biological evidence, we postulate a coupling
between both parameters, such that an increased growth force comes at the cost
of a higher internal adhesion strength or vice versa. This tradeoff controls
the evolution of the tissue, ranging from unidirectional evolution to very
heterogeneous and dynamic populations. The special case of two competing cell
types reveals three distinct parameter regimes: Two in which one cell type
outcompetes the other, and one in which both cell types coexist in a highly
mixed state. Interestingly, a single mutated cell alone suffices to reach the
mixed state, while a finite mutation rate affects the results only weakly.
Finally, the coupling between changes in growth force and adhesion strength
reveals a mechanical explanation for the evolution towards intra-tumor
heterogeneity, in which multiple species coexist even under a constant
evolutionary pressure.
| [
{
"created": "Tue, 8 Oct 2019 08:03:03 GMT",
"version": "v1"
}
] | 2024-06-03 | [
[
"Büscher",
"Tobias",
""
],
[
"Ganai",
"Nirmalendu",
""
],
[
"Gompper",
"Gerhard",
""
],
[
"Elgeti",
"Jens",
""
]
] | The evolution of various competing cell types in tissues, and the resulting persistent tissue population, is studied numerically and analytically in a particle-based model of active tissues. Mutations change the properties of cells in various ways, including their mechanical properties. Each mutation results in an advantage or disadvantage to grow in the competition between different cell types. While changes in signaling processes and biochemistry play an important role, we focus on changes in the mechanical properties by studying the result of variation of growth force and adhesive cross-interactions between cell types. For independent mutations of growth force and adhesion strength, the tissue evolves towards cell types with high growth force and low internal adhesion strength, as both increase the homeostatic pressure. Motivated by biological evidence, we postulate a coupling between both parameters, such that an increased growth force comes at the cost of a higher internal adhesion strength or vice versa. This tradeoff controls the evolution of the tissue, ranging from unidirectional evolution to very heterogeneous and dynamic populations. The special case of two competing cell types reveals three distinct parameter regimes: Two in which one cell type outcompetes the other, and one in which both cell types coexist in a highly mixed state. Interestingly, a single mutated cell alone suffices to reach the mixed state, while a finite mutation rate affects the results only weakly. Finally, the coupling between changes in growth force and adhesion strength reveals a mechanical explanation for the evolution towards intra-tumor heterogeneity, in which multiple species coexist even under a constant evolutionary pressure. |
1712.02866 | Michael Green | Alisher M. Kariev and Michael E. Green | Quantum Calculations on the Kv1.2 Channel Voltage Sensing Domain Show H+
Transfer Provides the Gating Current | At the end of the paper, there is an extended supplement with the
calculated coordinates for 14 out of 30 cases that were calculated (all those
that turned out to make a significant contribution). The paper has 6 figures.
Also, an earlier preprint, with a fraction of what is in the present
submission, was posted on BioarXiv on 6/23/2017 | null | 10.1016/j.bpj.2017.11.2615 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantum calculations on the voltage sensing domain (VSD) of the Kv1.2
potassium channel (pdb: 3Lut) have been carried out on a 904-atom subset of the
VSD, plus 24 water molecules. Side chains pointing away from the center of the
VSD were truncated; S1, S2, S3 end atoms were fixed (all calculations); S4
end atoms could be fixed or free. Open conformations (membrane potentials >= 0)
closely match the known X-ray structure of the open state with salt bridges in
the VSD not ionized (H+ on the acid) whether S4 end atoms were fixed or
free (slightly closer fixed than free). The S4 segment backbone, free or not,
moves less than 2.5 A for positive to negative membrane potential switches, not
entirely in the expected direction, leaving H+ motion as the principal
component of the gating current. Groups of 3 - 5 side chains are important for
proton transport, based on the calculations. A proton transfers from tyrosine
(Y266), through arginine (R300), to glutamate (E183), accounting for
approximately 20 - 25% of the gating charge. Clusters of amino acids that can
transfer protons (acids, bases, tyrosine, histidine) are the main paths for
proton transport. A group of five amino acids, bounded by the conserved
aromatic F233, appears to exchange a proton. Dipole rotations may also
contribute. A proton path (calculations still in progress) is proposed for the
remainder of the VSD, suggesting a hypothesis for a complete gating mechanism.
| [
{
"created": "Thu, 7 Dec 2017 21:28:16 GMT",
"version": "v1"
}
] | 2018-10-10 | [
[
"Kariev",
"Alisher M.",
""
],
[
"Green",
"Michael E.",
""
]
] | Quantum calculations on the voltage sensing domain (VSD) of the Kv1.2 potassium channel (pdb: 3Lut) have been carried out on a 904-atom subset of the VSD, plus 24 water molecules. Side chains pointing away from the center of the VSD were truncated; S1, S2, S3 end atoms were fixed (all calculations); S4 end atoms could be fixed or free. Open conformations (membrane potentials >= 0) closely match the known X-ray structure of the open state with salt bridges in the VSD not ionized (H+ on the acid) whether S4 end atoms were fixed or free (slightly closer fixed than free). The S4 segment backbone, free or not, moves less than 2.5 A for positive to negative membrane potential switches, not entirely in the expected direction, leaving H+ motion as the principal component of the gating current. Groups of 3 - 5 side chains are important for proton transport, based on the calculations. A proton transfers from tyrosine (Y266), through arginine (R300), to glutamate (E183), accounting for approximately 20 - 25% of the gating charge. Clusters of amino acids that can transfer protons (acids, bases, tyrosine, histidine) are the main paths for proton transport. A group of five amino acids, bounded by the conserved aromatic F233, appears to exchange a proton. Dipole rotations may also contribute. A proton path (calculations still in progress) is proposed for the remainder of the VSD, suggesting a hypothesis for a complete gating mechanism. |
0712.3020 | Leonid Mirny | Carlos Gomez-Uribe, George C. Verghese, and Leonid A. Mirny | Operating Regimes of Signaling Cycles: Statics, Dynamics, and Noise
Filtering | to appear in PLoS Computational Biology | null | 10.1371/journal.pcbi.0030246 | null | q-bio.MN q-bio.BM q-bio.QM q-bio.SC | null | A ubiquitous building block of signaling pathways is a cycle of covalent
modification (e.g., phosphorylation and dephosphorylation in MAPK cascades).
Our paper explores the kind of information processing and filtering that can be
accomplished by this simple biochemical circuit.
Signaling cycles are particularly known for exhibiting a highly sigmoidal
(ultrasensitive) input-output characteristic in a certain steady-state regime.
Here we systematically study the cycle's steady-state behavior and its response
to time-varying stimuli. We demonstrate that the cycle can actually operate in
four different regimes, each with its specific input-output characteristics.
These results are obtained using the total quasi-steady-state approximation,
which is more generally valid than the typically used Michaelis-Menten
approximation for enzymatic reactions. We invoke experimental data that
suggests the possibility of signaling cycles operating in one of the new
regimes.
We then consider the cycle's dynamic behavior, which has so far been
relatively neglected. We demonstrate that the intrinsic architecture of the
cycles makes them act - in all four regimes - as tunable low-pass filters,
filtering out high-frequency fluctuations or noise in signals and environmental
cues. Moreover, the cutoff frequency can be adjusted by the cell. Numerical
simulations show that our analytical results hold well even for noise of large
amplitude. We suggest that noise filtering and tunability make signaling cycles
versatile components of more elaborate cell signaling pathways.
| [
{
"created": "Tue, 18 Dec 2007 18:43:43 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Gomez-Uribe",
"Carlos",
""
],
[
"Verghese",
"George C.",
""
],
[
"Mirny",
"Leonid A.",
""
]
] | A ubiquitous building block of signaling pathways is a cycle of covalent modification (e.g., phosphorylation and dephosphorylation in MAPK cascades). Our paper explores the kind of information processing and filtering that can be accomplished by this simple biochemical circuit. Signaling cycles are particularly known for exhibiting a highly sigmoidal (ultrasensitive) input-output characteristic in a certain steady-state regime. Here we systematically study the cycle's steady-state behavior and its response to time-varying stimuli. We demonstrate that the cycle can actually operate in four different regimes, each with its specific input-output characteristics. These results are obtained using the total quasi-steady-state approximation, which is more generally valid than the typically used Michaelis-Menten approximation for enzymatic reactions. We invoke experimental data that suggests the possibility of signaling cycles operating in one of the new regimes. We then consider the cycle's dynamic behavior, which has so far been relatively neglected. We demonstrate that the intrinsic architecture of the cycles makes them act - in all four regimes - as tunable low-pass filters, filtering out high-frequency fluctuations or noise in signals and environmental cues. Moreover, the cutoff frequency can be adjusted by the cell. Numerical simulations show that our analytical results hold well even for noise of large amplitude. We suggest that noise filtering and tunability make signaling cycles versatile components of more elaborate cell signaling pathways. |
1810.03317 | Nadav M. Shnerb | Yitzhak Yahalom and Nadav M. Shnerb | Phase diagram for a logistic system under bounded stochasticity | null | Phys. Rev. Lett. 122, 108102 (2019) | 10.1103/PhysRevLett.122.108102 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Extinction is the ultimate absorbing state of any stochastic birth-death
process, hence the time to extinction is an important characteristic of any
natural population. Here we consider logistic and logistic-like systems under
the combined effect of demographic and bounded environmental stochasticity.
Three phases are identified: an inactive phase where the mean time to
extinction $T$ increases logarithmically with the initial population size, an
active phase where $T$ grows exponentially with the carrying capacity $N$, and
a temporal Griffiths phase, with a power-law relationship between $T$ and $N$. The
system supports an exponential phase only when the noise is bounded, in which
case the continuum (diffusion) approximation breaks down within the Griffiths
phase. This breakdown is associated with a crossover between qualitatively
different survival statistics and decline modes. To study the power-law phase
we present a new WKB scheme which is applicable both in the diffusive and in
the non-diffusive regime.
| [
{
"created": "Mon, 8 Oct 2018 08:34:34 GMT",
"version": "v1"
},
{
"created": "Sat, 16 Feb 2019 21:55:54 GMT",
"version": "v2"
}
] | 2019-03-27 | [
[
"Yahalom",
"Yitzhak",
""
],
[
"Shnerb",
"Nadav M.",
""
]
] | Extinction is the ultimate absorbing state of any stochastic birth-death process, hence the time to extinction is an important characteristic of any natural population. Here we consider logistic and logistic-like systems under the combined effect of demographic and bounded environmental stochasticity. Three phases are identified: an inactive phase where the mean time to extinction $T$ increases logarithmically with the initial population size, an active phase where $T$ grows exponentially with the carrying capacity $N$, and a temporal Griffiths phase, with a power-law relationship between $T$ and $N$. The system supports an exponential phase only when the noise is bounded, in which case the continuum (diffusion) approximation breaks down within the Griffiths phase. This breakdown is associated with a crossover between qualitatively different survival statistics and decline modes. To study the power-law phase we present a new WKB scheme which is applicable both in the diffusive and in the non-diffusive regime. |
1508.04561 | Wlodzislaw Duch | W{\l}odzis{\l}aw Duch | Memetics and Neural Models of Conspiracy Theories | 14 pages, 7 figures | null | null | null | q-bio.NC cs.AI cs.NE | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Conspiracy theories, or in general seriously distorted beliefs, are
widespread. How and why they are formed in the brain is still more a matter of
speculation than science. In this paper one plausible mechanism is
investigated: rapid freezing of high neuroplasticity (RFHN). Emotional arousal
increases neuroplasticity and leads to creation of new pathways spreading
neural activation. Using the language of neurodynamics a meme is defined as
a quasi-stable associative memory attractor state. Depending on the temporal
characteristics of the incoming information and the plasticity of the network,
memory may self-organize creating memes with large attractor basins, linking
many unrelated input patterns. Memes with fake rich associations distort
relations between memory states. Simulations of various neural network models
trained with competitive Hebbian learning (CHL) on stationary and
non-stationary data lead to the same conclusion: short learning with high
plasticity followed by rapid decrease of plasticity leads to memes with large
attraction basins, distorting input pattern representations in associative
memory. Such system-level models may be used to understand creation of
distorted beliefs and formation of conspiracy memes, understood as strong
attractor states of the neurodynamics.
| [
{
"created": "Wed, 19 Aug 2015 08:20:17 GMT",
"version": "v1"
},
{
"created": "Sun, 17 Jan 2021 17:38:40 GMT",
"version": "v2"
}
] | 2021-01-19 | [
[
"Duch",
"Włodzisław",
""
]
] | Conspiracy theories, or in general seriously distorted beliefs, are widespread. How and why they are formed in the brain is still more a matter of speculation than science. In this paper one plausible mechanism is investigated: rapid freezing of high neuroplasticity (RFHN). Emotional arousal increases neuroplasticity and leads to creation of new pathways spreading neural activation. Using the language of neurodynamics a meme is defined as a quasi-stable associative memory attractor state. Depending on the temporal characteristics of the incoming information and the plasticity of the network, memory may self-organize creating memes with large attractor basins, linking many unrelated input patterns. Memes with fake rich associations distort relations between memory states. Simulations of various neural network models trained with competitive Hebbian learning (CHL) on stationary and non-stationary data lead to the same conclusion: short learning with high plasticity followed by rapid decrease of plasticity leads to memes with large attraction basins, distorting input pattern representations in associative memory. Such system-level models may be used to understand creation of distorted beliefs and formation of conspiracy memes, understood as strong attractor states of the neurodynamics. |
2005.13489 | Ravi Kiran | Ravi Kiran, Swati Tyagi, Syed Abbas, Madhumita Roy, A. Taraphder | Immunomodulatory role of black tea in the mitigation of cancer induced
by inorganic arsenic | 23 pages, 15 figures | Eur. Phys. J. Plus (2020) 135: 735 | 10.1140/epjp/s13360-020-00766-1 | null | q-bio.TO q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a model analysis of the tumor and normal cell growth under the
influence of a carcinogenic agent, an immunomodulator (IM) and variable influx
of immune cells including relevant interactions. The tumor growth is
facilitated by carcinogens such as inorganic arsenic while the IM considered
here is black tea (Camellia sinensis). The model with variable influx of
immune cells is observed to have considerable advantage over the constant
influx model, and while the tumor cell population is greatly mitigated, normal
cell population remains above healthy levels. The evolution of normal and tumor
cells is computed from the proposed model and their local stabilities are
investigated analytically. Numerical simulations are performed to study the
long term dynamics and an estimation of the effects of various factors is made.
This helps in developing a balanced strategy for tumor mitigation without the
use of chemotherapeutic drugs that usually have strong side-effects.
| [
{
"created": "Wed, 27 May 2020 16:52:32 GMT",
"version": "v1"
},
{
"created": "Tue, 29 Sep 2020 12:31:15 GMT",
"version": "v2"
}
] | 2020-09-30 | [
[
"Kiran",
"Ravi",
""
],
[
"Tyagi",
"Swati",
""
],
[
"Abbas",
"Syed",
""
],
[
"Roy",
"Madhumita",
""
],
[
"Taraphder",
"A.",
""
]
] | We present a model analysis of the tumor and normal cell growth under the influence of a carcinogenic agent, an immunomodulator (IM) and variable influx of immune cells including relevant interactions. The tumor growth is facilitated by carcinogens such as inorganic arsenic while the IM considered here is black tea (Camellia sinensis). The model with variable influx of immune cells is observed to have considerable advantage over the constant influx model, and while the tumor cell population is greatly mitigated, normal cell population remains above healthy levels. The evolution of normal and tumor cells is computed from the proposed model and their local stabilities are investigated analytically. Numerical simulations are performed to study the long term dynamics and an estimation of the effects of various factors is made. This helps in developing a balanced strategy for tumor mitigation without the use of chemotherapeutic drugs that usually have strong side-effects. |
1403.7064 | Jack Dekker | J.L. Donnelly, D.C. Adams and J. Dekker | Weedy Adaptation in Setaria spp.: VI. S. faberi Seed hull shape as soil
germination signal antenna | 21 pages, 6 figures, 2 tables | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ecological selection forces for weedy and domesticated traits have influenced
the evolution of seed shape in Setaria resulting in similarity in seed shape
that reflects similarity in ecological function rather than reflecting
phylogenetic relatedness. We examined seeds from two diploid subspecies of
Setaria viridis, consisting of one weedy subspecies and two races of the
domesticated subspecies, and from four other polyploid weedy species of
Setaria. We quantified
seed shape from the silhouettes of the seeds in two separate views. Differences
in shape were compared to ecological role (weed vs. crop) and the evolutionary
trajectory of shape change by phylogenetic grouping from a single reference
species was calculated. Idealized three-dimensional models were created to
examine the differences in shape relative to surface area and volume. All
populations were significantly different in shape, with crops easily
distinguished from weeds, regardless of relatedness between the taxa.
Trajectory of shape change varied by view, but separated crops from weeds and
phylogenetic groupings. Three-dimensional models gave further evidence of
differences in shape reflecting adaptation for environmental exploitation. The
selective forces for weedy and domesticated traits have exceeded phylogenetic
constraints, resulting in seed shape similarity due to ecological role rather
than phylogenetic relatedness. Seed shape and surface-to-volume ratio likely
reflect the importance of the water film that accumulates on the seed surface
when in contact with soil particles. Seed shape may also be a mechanism of
niche separation between taxa.
| [
{
"created": "Thu, 27 Mar 2014 15:01:14 GMT",
"version": "v1"
}
] | 2014-03-28 | [
[
"Donnelly",
"J. L.",
""
],
[
"Adams",
"D. C.",
""
],
[
"Dekker",
"J.",
""
]
] | Ecological selection forces for weedy and domesticated traits have influenced the evolution of seed shape in Setaria resulting in similarity in seed shape that reflects similarity in ecological function rather than reflecting phylogenetic relatedness. We examined seeds from two diploid subspecies of Setaria viridis, consisting of one weedy subspecies and two races of the domesticated subspecies, and from four other polyploid weedy species of Setaria. We quantified seed shape from the silhouettes of the seeds in two separate views. Differences in shape were compared to ecological role (weed vs. crop) and the evolutionary trajectory of shape change by phylogenetic grouping from a single reference species was calculated. Idealized three-dimensional models were created to examine the differences in shape relative to surface area and volume. All populations were significantly different in shape, with crops easily distinguished from weeds, regardless of relatedness between the taxa. Trajectory of shape change varied by view, but separated crops from weeds and phylogenetic groupings. Three-dimensional models gave further evidence of differences in shape reflecting adaptation for environmental exploitation. The selective forces for weedy and domesticated traits have exceeded phylogenetic constraints, resulting in seed shape similarity due to ecological role rather than phylogenetic relatedness. Seed shape and surface-to-volume ratio likely reflect the importance of the water film that accumulates on the seed surface when in contact with soil particles. Seed shape may also be a mechanism of niche separation between taxa. |
2210.08542 | Malvina Marku | Malvina Marku, Vera Pancaldi | From time-series transcriptomics to gene regulatory networks: a review
on inference methods | null | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Inference of gene regulatory networks has been an active area of research for
around 20 years, leading to the development of sophisticated inference
algorithms based on a variety of assumptions and approaches. With the
ever-increasing demand for more accurate and powerful models, the inference
problem
remains of broad scientific interest. The abstract representation of biological
systems through gene regulatory networks represents a powerful method to study
such systems, encoding different amounts and types of information. In this
review, we summarize the different types of inference algorithms specifically
based on time-series transcriptomics, giving an overview of the main
applications of gene regulatory networks in computational biology. This review
is intended to give an updated overview of regulatory network inference tools
to biologists and researchers new to the topic and guide them in selecting the
appropriate inference method that best fits their questions, aims and
experimental data.
| [
{
"created": "Sun, 16 Oct 2022 13:59:39 GMT",
"version": "v1"
},
{
"created": "Wed, 2 Nov 2022 15:19:16 GMT",
"version": "v2"
}
] | 2022-11-03 | [
[
"Marku",
"Malvina",
""
],
[
"Pancaldi",
"Vera",
""
]
] | Inference of gene regulatory networks has been an active area of research for around 20 years, leading to the development of sophisticated inference algorithms based on a variety of assumptions and approaches. With the ever-increasing demand for more accurate and powerful models, the inference problem remains of broad scientific interest. The abstract representation of biological systems through gene regulatory networks represents a powerful method to study such systems, encoding different amounts and types of information. In this review, we summarize the different types of inference algorithms specifically based on time-series transcriptomics, giving an overview of the main applications of gene regulatory networks in computational biology. This review is intended to give an updated overview of regulatory network inference tools to biologists and researchers new to the topic and guide them in selecting the appropriate inference method that best fits their questions, aims and experimental data. |
1008.0237 | Liaofu Luo | Liaofu Luo | Protein Folding as a Quantum Transition Between Conformational States:
Basic Formulas and Applications | 24 pages, 3 figures | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Protein folding is regarded as a quantum transition between torsion
states on the polypeptide chain. The deduction of the folding rate formula in our
previous studies is reviewed. The rate formula is generalized to the case of
frequency variation in folding. Then the following problems about the
application of the rate theory are discussed: 1) The unified theory on the
two-state and multi-state protein folding is given based on the concept of
quantum transition. 2) The relationship of folding and unfolding rates vs
denaturant concentration is studied. 3) The temperature dependence of folding
rate is deduced and the non-Arrhenius behaviors of temperature dependence are
interpreted in a natural way. 4) The inertial moment dependence of folding rate
is calculated based on the model of dynamical contact order and consistent
results are obtained by comparison with one-hundred-protein experimental
dataset. 5) The exergonic and endergonic foldings are distinguished through the
comparison between theoretical and experimental rates for each protein. The
ultrafast folding problem is viewed from the point of quantum folding theory
and a new folding speed limit is deduced from quantum uncertainty relation. And
finally, 6) since only the torsion-accessible states are manageable in the
present formulation of quantum transition, how the set of torsion-accessible
states can be expanded by using the statistical energy landscape approach is
discussed. All above discussions support the view that the protein folding is
essentially a quantum transition between conformational states.
| [
{
"created": "Mon, 2 Aug 2010 07:00:42 GMT",
"version": "v1"
},
{
"created": "Sun, 22 Aug 2010 07:56:01 GMT",
"version": "v2"
}
] | 2010-08-24 | [
[
"Luo",
"Liaofu",
""
]
] | Protein folding is regarded as a quantum transition between torsion states on the polypeptide chain. The deduction of the folding rate formula in our previous studies is reviewed. The rate formula is generalized to the case of frequency variation in folding. Then the following problems about the application of the rate theory are discussed: 1) The unified theory on the two-state and multi-state protein folding is given based on the concept of quantum transition. 2) The relationship of folding and unfolding rates vs denaturant concentration is studied. 3) The temperature dependence of folding rate is deduced and the non-Arrhenius behaviors of temperature dependence are interpreted in a natural way. 4) The inertial moment dependence of folding rate is calculated based on the model of dynamical contact order and consistent results are obtained by comparison with one-hundred-protein experimental dataset. 5) The exergonic and endergonic foldings are distinguished through the comparison between theoretical and experimental rates for each protein. The ultrafast folding problem is viewed from the point of quantum folding theory and a new folding speed limit is deduced from quantum uncertainty relation. And finally, 6) since only the torsion-accessible states are manageable in the present formulation of quantum transition, how the set of torsion-accessible states can be expanded by using the statistical energy landscape approach is discussed. All above discussions support the view that the protein folding is essentially a quantum transition between conformational states. |
1710.10872 | Caroline Gr\"onwall | Caroline Gronwall, Uta Hardt, Johanna T. Gustafsson, Kerstin Elvin,
Kerstin Jensen-Urstad, Marika Kvarnstrom, Giorgia Grosso, Johan Ronnelid,
Leonid Padyukov, Iva Gunnarsson, Gregg J. Silverman, Elisabet Svenungsson | Depressed serum IgM levels in SLE are restricted to defined subgroups | Clin Immunol. 2017 Sep 15 | null | 10.1016/j.clim.2017.09.013 | null | q-bio.BM q-bio.CB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Natural IgM autoantibodies have been proposed to convey protection from
autoimmune pathogenesis. Herein, we investigated the IgM responses in 396
systemic lupus erythematosus (SLE) patients, divided into subgroups based on
distinct autoantibody profiles. Depressed IgM levels were more common in SLE
than in matched population controls. Strikingly, an autoreactivity profile
defined by IgG anti-Ro/La was associated with reduced levels of specific
natural IgM anti-phosphorylcholine (PC) antigens and anti-malondialdehyde (MDA)
modified-protein, as well as total IgM, while no differences were detected in SLE
patients with an autoreactivity profile defined by
anti-cardiolipin/Beta2glycoprotein-I. We also observed an association of
reduced IgM levels with the HLA-DRB1*03 allelic variant amongst SLE patients
and controls. Associations of low IgM anti-PC with cardiovascular disease were
primarily found in patients without antiphospholipid antibodies. These studies
further highlight the clinical relevance of depressed IgM. Our results suggest
that low IgM levels in SLE patients reflect immunological and genetic
differences between SLE subgroups.
| [
{
"created": "Mon, 30 Oct 2017 11:16:40 GMT",
"version": "v1"
}
] | 2017-10-31 | [
[
"Gronwall",
"Caroline",
""
],
[
"Hardt",
"Uta",
""
],
[
"Gustafsson",
"Johanna T.",
""
],
[
"Elvin",
"Kerstin",
""
],
[
"Jensen-Urstad",
"Kerstin",
""
],
[
"Kvarnstrom",
"Marika",
""
],
[
"Grosso",
"Giorgia",
... | Natural IgM autoantibodies have been proposed to convey protection from autoimmune pathogenesis. Herein, we investigated the IgM responses in 396 systemic lupus erythematosus (SLE) patients, divided into subgroups based on distinct autoantibody profiles. Depressed IgM levels were more common in SLE than in matched population controls. Strikingly, an autoreactivity profile defined by IgG anti-Ro/La was associated with reduced levels of specific natural IgM anti-phosphorylcholine (PC) antigens and anti-malondialdehyde (MDA) modified-protein, as well as total IgM, while no differences were detected in SLE patients with an autoreactivity profile defined by anti-cardiolipin/Beta2glycoprotein-I. We also observed an association of reduced IgM levels with the HLA-DRB1*03 allelic variant amongst SLE patients and controls. Associations of low IgM anti-PC with cardiovascular disease were primarily found in patients without antiphospholipid antibodies. These studies further highlight the clinical relevance of depressed IgM. Our results suggest that low IgM levels in SLE patients reflect immunological and genetic differences between SLE subgroups. |
2010.14566 | Alex McAvoy | Alex McAvoy, John Wakeley | Evaluating the structure-coefficient theorem of evolutionary game theory | 52 pages; final version | null | 10.1073/pnas.2119656119 | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In order to accommodate the empirical fact that population structures are
rarely simple, modern studies of evolutionary dynamics allow for complicated
and highly-heterogeneous spatial structures. As a result, one of the most
difficult obstacles lies in making analytical deductions, either qualitative or
quantitative, about the long-term outcomes of evolution. The
"structure-coefficient" theorem is a well-known approach to this problem for
mutation-selection processes under weak selection, but a general method of
evaluating the terms it comprises is lacking. Here, we provide such a method
for populations of fixed (but arbitrary) size and structure, using easily
interpretable demographic measures. This method encompasses a large family of
evolutionary update mechanisms and extends the theorem to allow for asymmetric
contests to provide a better understanding of the mutation-selection balance
under more realistic circumstances. We apply the method to study social goods
produced and distributed among individuals in spatially-heterogeneous
populations, where asymmetric interactions emerge naturally and the outcome of
selection varies dramatically depending on the nature of the social good, the
spatial topology, and frequency with which mutations arise.
| [
{
"created": "Tue, 27 Oct 2020 19:15:49 GMT",
"version": "v1"
},
{
"created": "Wed, 27 Oct 2021 16:23:13 GMT",
"version": "v2"
},
{
"created": "Thu, 16 Jun 2022 20:42:16 GMT",
"version": "v3"
}
] | 2022-07-07 | [
[
"McAvoy",
"Alex",
""
],
[
"Wakeley",
"John",
""
]
] | In order to accommodate the empirical fact that population structures are rarely simple, modern studies of evolutionary dynamics allow for complicated and highly-heterogeneous spatial structures. As a result, one of the most difficult obstacles lies in making analytical deductions, either qualitative or quantitative, about the long-term outcomes of evolution. The "structure-coefficient" theorem is a well-known approach to this problem for mutation-selection processes under weak selection, but a general method of evaluating the terms it comprises is lacking. Here, we provide such a method for populations of fixed (but arbitrary) size and structure, using easily interpretable demographic measures. This method encompasses a large family of evolutionary update mechanisms and extends the theorem to allow for asymmetric contests to provide a better understanding of the mutation-selection balance under more realistic circumstances. We apply the method to study social goods produced and distributed among individuals in spatially-heterogeneous populations, where asymmetric interactions emerge naturally and the outcome of selection varies dramatically depending on the nature of the social good, the spatial topology, and frequency with which mutations arise. |
2004.00579 | Julian Monge-Najera | Julian Monge-Najera | History of Onychophorology, 1826-2020 | null | null | null | null | q-bio.OT | http://creativecommons.org/licenses/by-sa/4.0/ | Velvet worms, or onychophorans, are animals of extraordinary importance in
the study of evolution. This is the first history of their study. They were
described by Lansdown Guilding (1797-1831). This paper identifies the landmarks
of their study, at a worldwide level, for almost 200 years. The beginning,
1826-1879, was based on describing their anatomy with light microscopy, mostly
by famous French naturalists such as Milne-Edwards and Blanchard. In the
1880-1929 period, work concentrated on anatomy, physiology, behavior,
biogeography and ecology, but the most important work was Bouvier's mammoth
monograph. The next period, 1930-1979, was important for the discovery of
Cambrian species; Vachon's explanation of how ancient distribution defined the
existence of two families; pioneering DNA and electron microscopy work from
Brazil; and early attempts at systematics using embryology or isolated
anatomical characteristics. Finally, the 1980-2020 period, with research
centered in Australia, Brazil, Costa Rica and Germany, is marked by an
evolutionary approach to everything, from body and behavior to distribution;
the solution of the old problem of how they form their adhesive net and how the
glue works; the reconstruction of Cambrian onychophoran communities; the first
experimental taphonomy; the first countrywide map of conservation status (from
Costa Rica); the first model of why they survive in cities; the discovery of
new phenomena like food hiding, parental feeding investment and ontogenetic
diet shift; and the birth of a new research branch, Onychophoran Ethnobiology,
founded in 2015.
| [
{
"created": "Wed, 1 Apr 2020 17:10:00 GMT",
"version": "v1"
}
] | 2020-04-02 | [
[
"Monge-Najera",
"Julian",
""
]
] | Velvet worms, or onychophorans, are animals of extraordinary importance in the study of evolution. This is the first history of their study. They were described by Lansdown Guilding (1797-1831). This paper identifies the landmarks of their study, at a worldwide level, for almost 200 years. The beginning, 1826-1879, was based on describing their anatomy with light microscopy, mostly by famous French naturalists such as Milne-Edwards and Blanchard. In the 1880-1929 period, work concentrated on anatomy, physiology, behavior, biogeography and ecology, but the most important work was Bouvier's mammoth monograph. The next period, 1930-1979, was important for the discovery of Cambrian species; Vachon's explanation of how ancient distribution defined the existence of two families; pioneering DNA and electron microscopy work from Brazil; and early attempts at systematics using embryology or isolated anatomical characteristics. Finally, the 1980-2020 period, with research centered in Australia, Brazil, Costa Rica and Germany, is marked by an evolutionary approach to everything, from body and behavior to distribution; the solution of the old problem of how they form their adhesive net and how the glue works; the reconstruction of Cambrian onychophoran communities; the first experimental taphonomy; the first countrywide map of conservation status (from Costa Rica); the first model of why they survive in cities; the discovery of new phenomena like food hiding, parental feeding investment and ontogenetic diet shift; and the birth of a new research branch, Onychophoran Ethnobiology, founded in 2015. |
1108.5110 | Linlin Su | Linlin Su, Colbert Sesanker, Roger Lui | Coexisting Stable Equilibria in a Multiple-allele Population Genetics
Model | 29 pages, 11 figures, 6 tables | null | null | null | q-bio.PE math.CA | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper we find and classify all patterns for single-locus three- and
four-allele population genetics models in continuous time. A pattern for a
$k$-allele model means all coexisting locally stable equilibria with respect to
the flow defined by the equations $\dot{p}_i = p_i(r_i-r), i=1,...,k,$ where
$p_i, r_i$ are the frequency and marginal fitness of allele $A_i$,
respectively, and $r$ is the mean fitness of the population. It is well known
that for the two-allele model there are only three patterns depending on the
relative fitness between the homozygotes and the heterozygote. It turns out
that for the three-allele model there are 14 patterns and for the four-allele
model there are 117 patterns. With the help of computer simulations, we find
2351 patterns for the five-allele model. For the six-allele model, there are
more than 60,000 patterns. In addition, for each pattern of the three-allele
model, we also determine the asymptotic behavior of solutions of the above
system of equations as $t \to \infty$. The problem of finding patterns has been
studied in the past and it is an important problem because the results can be
used to predict the long-term genetic makeup of a population.
| [
{
"created": "Thu, 25 Aug 2011 15:03:28 GMT",
"version": "v1"
}
] | 2011-08-26 | [
[
"Su",
"Linlin",
""
],
[
"Sesanker",
"Colbert",
""
],
[
"Lui",
"Roger",
""
]
] | In this paper we find and classify all patterns for single-locus three- and four-allele population genetics models in continuous time. A pattern for a $k$-allele model means all coexisting locally stable equilibria with respect to the flow defined by the equations $\dot{p}_i = p_i(r_i-r), i=1,...,k,$ where $p_i, r_i$ are the frequency and marginal fitness of allele $A_i$, respectively, and $r$ is the mean fitness of the population. It is well known that for the two-allele model there are only three patterns depending on the relative fitness between the homozygotes and the heterozygote. It turns out that for the three-allele model there are 14 patterns and for the four-allele model there are 117 patterns. With the help of computer simulations, we find 2351 patterns for the five-allele model. For the six-allele model, there are more than 60,000 patterns. In addition, for each pattern of the three-allele model, we also determine the asymptotic behavior of solutions of the above system of equations as $t \to \infty$. The problem of finding patterns has been studied in the past and it is an important problem because the results can be used to predict the long-term genetic makeup of a population. |
q-bio/0407008 | Axel Brandenburg | Axel Brandenburg and Tuomas Multam\"aki | How Long can Left and Right Handed Life Forms Coexist? | submitted to Int. J. Astrobiol., 15 pages, 10 figs | Int.J.Astrobiol. 3 (2004) 209-219 | 10.1017/S1473550404001983 | NORDITA-2004-58 | q-bio.BM astro-ph cond-mat.dis-nn | null | Reaction-diffusion equations based on a polymerization model are solved to
simulate the spreading of hypothetical left- and right-handed life forms on the
Earth's surface. The equations exhibit front-like behavior as is familiar from
the theory of the spreading of epidemics. It is shown that the relevant time
scale for achieving global homochirality is not, however, the time scale of
front propagation, but the much longer global diffusion time. The process can
be sped up by turbulence and large scale flows. It is speculated that, if the
deep layers of the early ocean were sufficiently quiescent, there may have been
the possibility of competing early life forms with opposite handedness.
| [
{
"created": "Mon, 5 Jul 2004 21:27:06 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Brandenburg",
"Axel",
""
],
[
"Multamäki",
"Tuomas",
""
]
] | Reaction-diffusion equations based on a polymerization model are solved to simulate the spreading of hypothetical left- and right-handed life forms on the Earth's surface. The equations exhibit front-like behavior as is familiar from the theory of the spreading of epidemics. It is shown that the relevant time scale for achieving global homochirality is not, however, the time scale of front propagation, but the much longer global diffusion time. The process can be sped up by turbulence and large scale flows. It is speculated that, if the deep layers of the early ocean were sufficiently quiescent, there may have been the possibility of competing early life forms with opposite handedness. |
1704.06301 | Veesler Stephane | Charline Gerard (CINaM), Gilles Ferry, Laurent Vuillard, Jean A
Boutin, L\'eonard Chavas (SSOLEIL), Tiphaine Huet (SSOLEIL), Nathalie Ferte
(CINaM), Romain Grossier (CINaM), Nadine Candoni (CINaM), St\'ephane Veesler
(CINaM) | Crystallization via tubing microfluidics permits both in situ and ex
situ X-ray diffraction | null | null | null | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We used a microfluidic platform to address the problems of obtaining
diffraction-quality crystals and of crystal handling during transfer to the
X-ray diffractometer. We optimized the crystallization conditions of a
pharmaceutical protein and collected X-ray data both in situ and ex situ.
| [
{
"created": "Wed, 22 Mar 2017 08:09:41 GMT",
"version": "v1"
}
] | 2017-04-24 | [
[
"Gerard",
"Charline",
"",
"CINaM"
],
[
"Ferry",
"Gilles",
"",
"SSOLEIL"
],
[
"Vuillard",
"Laurent",
"",
"SSOLEIL"
],
[
"Boutin",
"Jean A",
"",
"SSOLEIL"
],
[
"Chavas",
"Léonard",
"",
"SSOLEIL"
],
[
"Huet",
... | We used a microfluidic platform to address the problems of obtaining diffraction-quality crystals and of crystal handling during transfer to the X-ray diffractometer. We optimized the crystallization conditions of a pharmaceutical protein and collected X-ray data both in situ and ex situ. |
2201.12570 | Asif Khan | Asif Khan, Alexander I. Cowen-Rivers, Antoine Grosnit, Derrick-Goh-Xin
Deik, Philippe A. Robert, Victor Greiff, Eva Smorodina, Puneet Rawat, Kamil
Dreczkowski, Rahmad Akbar, Rasul Tutunov, Dany Bou-Ammar, Jun Wang, Amos
Storkey and Haitham Bou-Ammar | AntBO: Towards Real-World Automated Antibody Design with Combinatorial
Bayesian Optimisation | null | null | null | null | q-bio.BM cs.AI cs.LG cs.NE stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Antibodies are canonically Y-shaped multimeric proteins capable of highly
specific molecular recognition. The CDRH3 region located at the tip of variable
chains of an antibody dominates antigen-binding specificity. Therefore, it is a
priority to design optimal antigen-specific CDRH3 regions to develop
therapeutic antibodies. However, the combinatorial nature of CDRH3 sequence
space makes it impossible to search for an optimal binding sequence
exhaustively and efficiently using computational approaches. Here, we present
\texttt{AntBO}: a combinatorial Bayesian optimisation framework enabling
efficient \textit{in silico} design of the CDRH3 region. Ideally, antibodies
are expected to have high target specificity and developability. We introduce a
CDRH3 trust region that restricts the search to sequences with favourable
developability scores to achieve this goal. For benchmarking, \texttt{AntBO}
uses the \texttt{Absolut!} software suite as a black-box oracle to score the
target specificity and affinity of designed antibodies \textit{in silico} in an
unconstrained fashion~\citep{robert2021one}. The experiments performed for
$159$ discretised antigens used in \texttt{Absolut!} demonstrate the benefit of
\texttt{AntBO} in designing CDRH3 regions with diverse biophysical properties.
In under $200$ calls to the black-box oracle, \texttt{AntBO} can suggest
antibody sequences that outperform the best binding sequence drawn from 6.9
million experimentally obtained CDRH3s, as well as a commonly used genetic
algorithm baseline. Additionally, \texttt{AntBO} finds very high-affinity CDRH3
sequences in only 38 protein designs whilst requiring no domain knowledge. We
conclude that \texttt{AntBO} brings automated antibody design methods closer to
what is practically viable for in vitro experimentation.
| [
{
"created": "Sat, 29 Jan 2022 12:03:04 GMT",
"version": "v1"
},
{
"created": "Wed, 16 Feb 2022 13:08:36 GMT",
"version": "v2"
},
{
"created": "Fri, 11 Mar 2022 09:40:17 GMT",
"version": "v3"
},
{
"created": "Fri, 14 Oct 2022 18:31:22 GMT",
"version": "v4"
}
] | 2022-10-18 | [
[
"Khan",
"Asif",
""
],
[
"Cowen-Rivers",
"Alexander I.",
""
],
[
"Grosnit",
"Antoine",
""
],
[
"Deik",
"Derrick-Goh-Xin",
""
],
[
"Robert",
"Philippe A.",
""
],
[
"Greiff",
"Victor",
""
],
[
"Smorodina",
"Eva",
... | Antibodies are canonically Y-shaped multimeric proteins capable of highly specific molecular recognition. The CDRH3 region located at the tip of variable chains of an antibody dominates antigen-binding specificity. Therefore, it is a priority to design optimal antigen-specific CDRH3 regions to develop therapeutic antibodies. However, the combinatorial nature of CDRH3 sequence space makes it impossible to search for an optimal binding sequence exhaustively and efficiently using computational approaches. Here, we present \texttt{AntBO}: a combinatorial Bayesian optimisation framework enabling efficient \textit{in silico} design of the CDRH3 region. Ideally, antibodies are expected to have high target specificity and developability. We introduce a CDRH3 trust region that restricts the search to sequences with favourable developability scores to achieve this goal. For benchmarking, \texttt{AntBO} uses the \texttt{Absolut!} software suite as a black-box oracle to score the target specificity and affinity of designed antibodies \textit{in silico} in an unconstrained fashion~\citep{robert2021one}. The experiments performed for $159$ discretised antigens used in \texttt{Absolut!} demonstrate the benefit of \texttt{AntBO} in designing CDRH3 regions with diverse biophysical properties. In under $200$ calls to the black-box oracle, \texttt{AntBO} can suggest antibody sequences that outperform the best binding sequence drawn from 6.9 million experimentally obtained CDRH3s, as well as a commonly used genetic algorithm baseline. Additionally, \texttt{AntBO} finds very high-affinity CDRH3 sequences in only 38 protein designs whilst requiring no domain knowledge. We conclude that \texttt{AntBO} brings automated antibody design methods closer to what is practically viable for in vitro experimentation. |
1501.01469 | Vincent Niviere | Julien Valton (LCBM - UMR 5249), Laurent Filisetti, Marc Fontecave
(LCBM - UMR 5249), Vincent Nivi\`ere (LCBM - UMR 5249) | A two-component flavin-dependent monooxygenase involved in actinorhodin
biosynthesis in Streptomyces coelicolor | null | The journal of biological chemistry, American Society for
Biochemistry and Molecular Biology, 2004, 279, pp.44362-9 | 10.1074/jbc.M407722200 | null | q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The two-component flavin-dependent monooxygenases belong to an emerging class
of enzymes involved in oxidation reactions in a number of metabolic and
biosynthetic pathways in microorganisms. One component is a NAD(P)H:flavin
oxidoreductase, which provides a reduced flavin to the second component, the
proper monooxygenase. There, the reduced flavin activates molecular oxygen for
substrate oxidation. Here, we study the flavin reductase ActVB and ActVA-ORF5
gene product, both reported to be involved in the last step of biosynthesis of
the natural antibiotic actinorhodin in Streptomyces coelicolor. For the first
time, we show that ActVA-ORF5 is an FMN-dependent monooxygenase that, with the
help of the flavin reductase ActVB, catalyzes the oxidation reaction.
The mechanism of the transfer of reduced FMN between ActVB and ActVA-ORF5 has
been investigated. Dissociation constant values for oxidized and reduced flavin
(FMNox and FMNred) with regard to ActVB and ActVA-ORF5 have been determined.
The data clearly demonstrate a thermodynamic transfer of FMNred from ActVB to
ActVA-ORF5 without involving a particular interaction between the two protein
components. In full agreement with these data, we propose a reaction mechanism
in which FMNox binds to ActVB, where it is reduced, and the resulting FMNred
moves to ActVA-ORF5, where it reacts with O2 to generate a flavinperoxide
intermediate. Direct spectroscopic evidence for the formation of such a species
within ActVA-ORF5 is reported.
| [
{
"created": "Wed, 7 Jan 2015 12:42:09 GMT",
"version": "v1"
}
] | 2015-01-08 | [
[
"Valton",
"Julien",
"",
"LCBM - UMR 5249"
],
[
"Filisetti",
"Laurent",
"",
"LCBM - UMR 5249"
],
[
"Fontecave",
"Marc",
"",
"LCBM - UMR 5249"
],
[
"Nivière",
"Vincent",
"",
"LCBM - UMR 5249"
]
] | The two-component flavin-dependent monooxygenases belong to an emerging class of enzymes involved in oxidation reactions in a number of metabolic and biosynthetic pathways in microorganisms. One component is a NAD(P)H:flavin oxidoreductase, which provides a reduced flavin to the second component, the proper monooxygenase. There, the reduced flavin activates molecular oxygen for substrate oxidation. Here, we study the flavin reductase ActVB and ActVA-ORF5 gene product, both reported to be involved in the last step of biosynthesis of the natural antibiotic actinorhodin in Streptomyces coelicolor. For the first time we show that ActVA-ORF5 is an FMN-dependent monooxygenase that, with the help of the flavin reductase ActVB, catalyzes the oxidation reaction. The mechanism of the transfer of reduced FMN between ActVB and ActVA-ORF5 has been investigated. Dissociation constant values for oxidized and reduced flavin (FMNox and FMNred) with regard to ActVB and ActVA-ORF5 have been determined. The data clearly demonstrate a thermodynamic transfer of FMNred from ActVB to ActVA-ORF5 without involving a particular interaction between the two protein components. In full agreement with these data, we propose a reaction mechanism in which FMNox binds to ActVB, where it is reduced, and the resulting FMNred moves to ActVA-ORF5, where it reacts with O2 to generate a flavin peroxide intermediate. Direct spectroscopic evidence for the formation of such a species within ActVA-ORF5 is reported. |
1911.05895 | Duncan Kirby | Duncan Kirby, Jeremy Rothschild, Matthew Smart and Anton Zilman | Pleiotropy enables specific and accurate signaling in the presence of
ligand cross talk | null | Phys. Rev. E 103, 042401 (2021) | 10.1103/PhysRevE.103.042401 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Living cells sense their environment through the binding of extra-cellular
molecular ligands to cell surface receptors. Puzzlingly, vast numbers of
signaling pathways exhibit a high degree of cross talk between different
signals whereby different ligands act through the same receptor or shared
components downstream. It remains unclear how a cell can accurately process
information from the environment in such cross-wired pathways. We show that a
feature which commonly accompanies cross talk - signaling pleiotropy (the
ability of a receptor to produce multiple outputs) - offers a solution to the
cross talk problem. In a minimal model we show that a single pleiotropic
receptor can simultaneously identify and accurately sense the concentrations of
arbitrary unknown ligands present individually or in a mixture. We calculate
the fundamental limits of the signaling specificity and accuracy of such
signaling schemes. The model serves as an elementary "building block" towards
understanding more complex cross-wired receptor-ligand signaling networks.
| [
{
"created": "Thu, 14 Nov 2019 02:09:27 GMT",
"version": "v1"
},
{
"created": "Wed, 3 Mar 2021 17:00:26 GMT",
"version": "v2"
},
{
"created": "Thu, 11 Mar 2021 21:26:18 GMT",
"version": "v3"
},
{
"created": "Fri, 2 Apr 2021 00:59:20 GMT",
"version": "v4"
}
] | 2021-04-07 | [
[
"Kirby",
"Duncan",
""
],
[
"Rothschild",
"Jeremy",
""
],
[
"Smart",
"Matthew",
""
],
[
"Zilman",
"Anton",
""
]
] | Living cells sense their environment through the binding of extra-cellular molecular ligands to cell surface receptors. Puzzlingly, vast numbers of signaling pathways exhibit a high degree of cross talk between different signals whereby different ligands act through the same receptor or shared components downstream. It remains unclear how a cell can accurately process information from the environment in such cross-wired pathways. We show that a feature which commonly accompanies cross talk - signaling pleiotropy (the ability of a receptor to produce multiple outputs) - offers a solution to the cross talk problem. In a minimal model we show that a single pleiotropic receptor can simultaneously identify and accurately sense the concentrations of arbitrary unknown ligands present individually or in a mixture. We calculate the fundamental limits of the signaling specificity and accuracy of such signaling schemes. The model serves as an elementary "building block" towards understanding more complex cross-wired receptor-ligand signaling networks. |
2407.17938 | Ninad Aithal | Debanjali Bhattacharya, Ninad Aithal, Manish Jayswal and Neelam Sinha | Analyzing Brain Tumor Connectomics using Graphs and Persistent Homology | 15 Pages, 7 Figures, 2 Tables, TGI3-MICCAI Workshop | null | null | null | q-bio.NC cs.CV math.AT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent advances in molecular and genetic research have identified a diverse
range of brain tumor sub-types, shedding light on differences in their
molecular mechanisms, heterogeneity, and origins. The present study performs
whole-brain connectome analysis using diffusion-weighted images. To achieve
this, both graph theory and persistent homology - a prominent approach in
topological data analysis - are employed in order to quantify changes in the
structural connectivity of the whole-brain connectome in subjects with brain
tumors. Probabilistic tractography is used to map the number of streamlines
connecting 84 distinct brain regions, as delineated by the Desikan-Killiany
atlas from FreeSurfer. These streamline mappings form the connectome matrix, on
which persistent homology based analysis and graph theoretical analysis are
executed to evaluate the discriminatory power between tumor sub-types that
include meningioma and glioma. A detailed statistical analysis is conducted on
persistent homology-derived topological features and graphical features to
identify the brain regions where differences between study groups are
statistically significant (p < 0.05). For classification purposes, graph-based
local features are utilized, achieving the highest accuracy of 88%. In
classifying tumor sub-types, an accuracy of 80% is attained. The findings
obtained from this study underscore the potential of persistent homology and
graph theoretical analysis of the whole-brain connectome in detecting
alterations in structural connectivity patterns specific to different types of
brain tumors.
| [
{
"created": "Thu, 25 Jul 2024 10:55:19 GMT",
"version": "v1"
}
] | 2024-07-26 | [
[
"Bhattacharya",
"Debanjali",
""
],
[
"Aithal",
"Ninad",
""
],
[
"Jayswal",
"Manish",
""
],
[
"Sinha",
"Neelam",
""
]
] | Recent advances in molecular and genetic research have identified a diverse range of brain tumor sub-types, shedding light on differences in their molecular mechanisms, heterogeneity, and origins. The present study performs whole-brain connectome analysis using diffusion-weighted images. To achieve this, both graph theory and persistent homology - a prominent approach in topological data analysis - are employed in order to quantify changes in the structural connectivity of the whole-brain connectome in subjects with brain tumors. Probabilistic tractography is used to map the number of streamlines connecting 84 distinct brain regions, as delineated by the Desikan-Killiany atlas from FreeSurfer. These streamline mappings form the connectome matrix, on which persistent homology based analysis and graph theoretical analysis are executed to evaluate the discriminatory power between tumor sub-types that include meningioma and glioma. A detailed statistical analysis is conducted on persistent homology-derived topological features and graphical features to identify the brain regions where differences between study groups are statistically significant (p < 0.05). For classification purposes, graph-based local features are utilized, achieving the highest accuracy of 88%. In classifying tumor sub-types, an accuracy of 80% is attained. The findings obtained from this study underscore the potential of persistent homology and graph theoretical analysis of the whole-brain connectome in detecting alterations in structural connectivity patterns specific to different types of brain tumors. |
1605.01661 | Ramon Ferrer-i-Cancho | R. Ferrer-i-Cancho, D. Lusseau and B. McCowan | Parallels of human language in the behavior of bottlenose dolphins | In press in Linguistic Frontiers | Linguistic Frontiers (2022) 5(1), 1-7 | 10.2478/lf-2022-0002 | null | q-bio.NC cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A short review of similarities between dolphins and humans with the help of
quantitative linguistics and information theory.
| [
{
"created": "Thu, 5 May 2016 17:38:42 GMT",
"version": "v1"
},
{
"created": "Fri, 25 Mar 2022 09:06:07 GMT",
"version": "v2"
}
] | 2022-09-22 | [
[
"Ferrer-i-Cancho",
"R.",
""
],
[
"Lusseau",
"D.",
""
],
[
"McCowan",
"B.",
""
]
] | A short review of similarities between dolphins and humans with the help of quantitative linguistics and information theory. |
2108.09747 | Vikram Singh | Neha Choudhary and Vikram Singh | Neuromodulators in food ingredients: insights from network
pharmacological evaluation of Ayurvedic herbs | 22 pages, 6 figures | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The global burden of neurological diseases, the second leading cause of death
after heart diseases, constitutes one of the major challenges of modern
medicine. Ayurveda, the traditional Indian medicinal system enrooted in the
Vedic literature and considered as a schema for the holistic management of
health, characterizes various neurological diseases and disorders (NDDs) and
prescribes several herbs, formulations, and bio-cleansing regimes for their
care and cure. In this work, we examined neuro-phytoregulatory potential of
34,472 phytochemicals among 3,038 herbs (including their varieties) mentioned
in Ayurveda using a network pharmacology approach and found that 45% of these
Ayurvedic phytochemicals (APCs) have regulatory associations with 1,643
approved protein targets. Metabolite interconversion enzymes and protein
modifying enzymes were found to be the major target classes of APCs against
NDDs. The study further suggests that the actions of Ayurvedic herbs in
managing NDDs were mainly via regulating signalling processes, such as G-protein
signaling, acetylcholine signaling, chemokine signaling pathway and GnRH
signaling. A high confidence network specific to 219 pharmaceutically relevant
neuro-phytoregulators (NPRs) from 1,197 Ayurvedic herbs against 102 approved
protein-targets involved in NDDs was developed and analyzed for gaining
mechanistic insights. The key protein targets of NPRs to elicit their
neuro-regulatory effect were highlighted as CYP and TRPA, while estradiol and
melatonin were identified as the NPRs with high multi-targeting ability. 32
herbs enriched in NPRs were identified that include some of the well-known
Ayurvedic neurological recommendations, such as Papaver somniferum, Glycyrrhiza
glabra, Citrus aurantium, Cannabis sativa etc. Herbs enriched in NPRs may be
used as a chemical source library for drug-discovery against NDDs from systems
medicine perspectives.
| [
{
"created": "Sun, 22 Aug 2021 15:05:16 GMT",
"version": "v1"
}
] | 2021-08-24 | [
[
"Choudhary",
"Neha",
""
],
[
"Singh",
"Vikram",
""
]
] | The global burden of neurological diseases, the second leading cause of death after heart diseases, constitutes one of the major challenges of modern medicine. Ayurveda, the traditional Indian medicinal system enrooted in the Vedic literature and considered as a schema for the holistic management of health, characterizes various neurological diseases and disorders (NDDs) and prescribes several herbs, formulations, and bio-cleansing regimes for their care and cure. In this work, we examined neuro-phytoregulatory potential of 34,472 phytochemicals among 3,038 herbs (including their varieties) mentioned in Ayurveda using a network pharmacology approach and found that 45% of these Ayurvedic phytochemicals (APCs) have regulatory associations with 1,643 approved protein targets. Metabolite interconversion enzymes and protein modifying enzymes were found to be the major target classes of APCs against NDDs. The study further suggests that the actions of Ayurvedic herbs in managing NDDs were mainly via regulating signalling processes, such as G-protein signaling, acetylcholine signaling, chemokine signaling pathway and GnRH signaling. A high confidence network specific to 219 pharmaceutically relevant neuro-phytoregulators (NPRs) from 1,197 Ayurvedic herbs against 102 approved protein-targets involved in NDDs was developed and analyzed for gaining mechanistic insights. The key protein targets of NPRs to elicit their neuro-regulatory effect were highlighted as CYP and TRPA, while estradiol and melatonin were identified as the NPRs with high multi-targeting ability. 32 herbs enriched in NPRs were identified that include some of the well-known Ayurvedic neurological recommendations, such as Papaver somniferum, Glycyrrhiza glabra, Citrus aurantium, Cannabis sativa etc. Herbs enriched in NPRs may be used as a chemical source library for drug-discovery against NDDs from systems medicine perspectives. |
2205.05143 | Nicholas Fuhr | Nicholas E. Fuhr, Mohamed Azize, David J. Bishop | Coronavirus RNA Sensor Using Single-Stranded DNA Bonded to
Sub-Percolated Gold Films on Monolayer Graphene Field-Effect Transistors | 15 pages, 6 figures Keywords: transcriptome, virus, monolayer
graphene, percolated gold, 2DEG, field-effect transistor | null | null | null | q-bio.BM | http://creativecommons.org/licenses/by/4.0/ | Electrical detection of messenger ribonucleic acid (mRNA) is a promising
approach to enhancing transcriptomics and disease diagnostics because of its
sensitivity, rapidity, and modularity. Reported here is a fast SARS-CoV-2 mRNA
biosensor (<1 minute) with a limit of detection of 1 aM, a dynamic range of 4
orders of magnitude, and a linear sensitivity of 22 mV per molar decade. These
figures of merit were obtained on photoresistlessly patterned monolayer
graphene field-effect transistors (FETs) derived from commercial four-inch
graphene on 90 nm of silicon dioxide on p-type silicon. Then, to facilitate
mRNA hybridization, graphene sensing mesas were coated with an ultrathin
sub-percolation threshold gold film for bonding 3'-thiolated single-stranded
deoxyribonucleic acid (ssDNA) probes complementary to the SARS-CoV-2 nucleocapsid
phosphoprotein (N) gene. Sub-percolated gold was used to minimize the distance
between the graphene material and surface hybridization events. The
liquid-transfer characteristics of the graphene FETs repeatedly show a
correlation between the Dirac voltage and the copy number of the polynucleotide.
Ultrathin percolated gold films on graphene FETs facilitate two-dimensional
electron gas (2DEG) mRNA biosensors for transcriptomic profiling.
| [
{
"created": "Tue, 10 May 2022 19:51:02 GMT",
"version": "v1"
}
] | 2022-05-12 | [
[
"Fuhr",
"Nicholas E.",
""
],
[
"Azize",
"Mohamed",
""
],
[
"Bishop",
"David J.",
""
]
] | Electrical detection of messenger ribonucleic acid (mRNA) is a promising approach to enhancing transcriptomics and disease diagnostics because of its sensitivity, rapidity, and modularity. Reported here is a fast SARS-CoV-2 mRNA biosensor (<1 minute) with a limit of detection of 1 aM, a dynamic range of 4 orders of magnitude, and a linear sensitivity of 22 mV per molar decade. These figures of merit were obtained on photoresistlessly patterned monolayer graphene field-effect transistors (FETs) derived from commercial four-inch graphene on 90 nm of silicon dioxide on p-type silicon. Then, to facilitate mRNA hybridization, graphene sensing mesas were coated with an ultrathin sub-percolation threshold gold film for bonding 3'-thiolated single-stranded deoxyribonucleic acid (ssDNA) probes complementary to the SARS-CoV-2 nucleocapsid phosphoprotein (N) gene. Sub-percolated gold was used to minimize the distance between the graphene material and surface hybridization events. The liquid-transfer characteristics of the graphene FETs repeatedly show a correlation between the Dirac voltage and the copy number of the polynucleotide. Ultrathin percolated gold films on graphene FETs facilitate two-dimensional electron gas (2DEG) mRNA biosensors for transcriptomic profiling. |
q-bio/0606042 | Frank M. Hilker | Frank M. Hilker, Frank H. Westerhoff | Preventing extinction and outbreaks in chaotic populations | 10 pages, 6 figures | American Naturalist 170, 232-241 (2007) | 10.1086/518949 | null | q-bio.PE nlin.CD | null | Interactions in ecological communities are inherently nonlinear and can lead
to complex population dynamics including irregular fluctuations induced by
chaos. Chaotic population dynamics can exhibit violent oscillations with
extremely small or large population abundances that might cause extinction and
recurrent outbreaks, respectively. We present a simple method that can guide
management efforts to prevent crashes, peaks, or any other undesirable state.
At the same time, the irregularity of the dynamics can be preserved when chaos
is desirable for the population. The control scheme is easy to implement
because it relies on time series information only. The method is illustrated by
two examples: control of crashes in the Ricker map and control of outbreaks in
a stage-structured model of the flour beetle Tribolium. It turns out to be
effective even with few available data and in the presence of noise, as is
typical for ecological settings.
| [
{
"created": "Fri, 30 Jun 2006 18:11:27 GMT",
"version": "v1"
},
{
"created": "Thu, 20 Jul 2006 10:19:48 GMT",
"version": "v2"
},
{
"created": "Thu, 21 Sep 2006 01:38:39 GMT",
"version": "v3"
},
{
"created": "Wed, 29 Aug 2007 17:56:24 GMT",
"version": "v4"
}
] | 2007-08-29 | [
[
"Hilker",
"Frank M.",
""
],
[
"Westerhoff",
"Frank H.",
""
]
] | Interactions in ecological communities are inherently nonlinear and can lead to complex population dynamics including irregular fluctuations induced by chaos. Chaotic population dynamics can exhibit violent oscillations with extremely small or large population abundances that might cause extinction and recurrent outbreaks, respectively. We present a simple method that can guide management efforts to prevent crashes, peaks, or any other undesirable state. At the same time, the irregularity of the dynamics can be preserved when chaos is desirable for the population. The control scheme is easy to implement because it relies on time series information only. The method is illustrated by two examples: control of crashes in the Ricker map and control of outbreaks in a stage-structured model of the flour beetle Tribolium. It turns out to be effective even with few available data and in the presence of noise, as is typical for ecological settings. |
1002.1428 | Krishnakumar Garikipati | H Narayanan, S N Verner, K.L. Mills, R. Kemkemer and K. Garikipati | In silico estimates of the free energy rates in growing tumor spheroids | 27 pages with 5 figures and 2 tables. Figures and tables appear at
the end of the paper | Journal of Physics: Condensed Matter, Special Issue on
Cell-Substrate Interactions. 22 (2010), 194122. | 10.1088/0953-8984/22/19/194122 | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The physics of solid tumor growth can be considered at three distinct size
scales: the tumor scale, the cell-extracellular matrix (ECM) scale and the
sub-cellular scale. In this paper we consider the tumor scale in the interest
of eventually developing a system-level understanding of the progression of
cancer. At this scale, cell populations and chemical species are best treated
as concentration fields that vary with time and space. The cells have
chemo-mechanical interactions with each other and with the ECM, consume glucose
and oxygen that are transported through the tumor, and create chemical
byproducts. We present a continuum mathematical model for the biochemical
dynamics and mechanics that govern tumor growth. The biochemical dynamics and
mechanics also engender free energy changes that serve as universal measures
for comparison of these processes. Within our mathematical framework we
therefore consider the free energy inequality, which arises from the first and
second laws of thermodynamics. With the model we compute preliminary estimates
of the free energy rates of a growing tumor in its pre-vascular stage by using
currently available data from single cells and multicellular tumor spheroids.
| [
{
"created": "Sun, 7 Feb 2010 04:06:14 GMT",
"version": "v1"
},
{
"created": "Mon, 26 Apr 2010 16:18:23 GMT",
"version": "v2"
}
] | 2010-04-27 | [
[
"Narayanan",
"H",
""
],
[
"Verner",
"S N",
""
],
[
"Mills",
"K. L.",
""
],
[
"Kemkemer",
"R.",
""
],
[
"Garikipati",
"K.",
""
]
] | The physics of solid tumor growth can be considered at three distinct size scales: the tumor scale, the cell-extracellular matrix (ECM) scale and the sub-cellular scale. In this paper we consider the tumor scale in the interest of eventually developing a system-level understanding of the progression of cancer. At this scale, cell populations and chemical species are best treated as concentration fields that vary with time and space. The cells have chemo-mechanical interactions with each other and with the ECM, consume glucose and oxygen that are transported through the tumor, and create chemical byproducts. We present a continuum mathematical model for the biochemical dynamics and mechanics that govern tumor growth. The biochemical dynamics and mechanics also engender free energy changes that serve as universal measures for comparison of these processes. Within our mathematical framework we therefore consider the free energy inequality, which arises from the first and second laws of thermodynamics. With the model we compute preliminary estimates of the free energy rates of a growing tumor in its pre-vascular stage by using currently available data from single cells and multicellular tumor spheroids. |
1312.0570 | Mikhail Tikhonov | Mikhail Tikhonov, Robert W. Leach and Ned S. Wingreen | Interpreting 16S metagenomic data without clustering to achieve sub-OTU
resolution | Updated to match the published version. 12 pages, 5 figures +
supplement. Significantly revised for clarity, references added, results not
changed | The ISME Journal (2015) 9, 68-80 | 10.1038/ismej.2014.117 | null | q-bio.QM q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The standard approach to analyzing 16S tag sequence data, which relies on
clustering reads by sequence similarity into Operational Taxonomic Units
(OTUs), underexploits the accuracy of modern sequencing technology. We present
a clustering-free approach to multi-sample Illumina datasets that can identify
independent bacterial subpopulations regardless of the similarity of their 16S
tag sequences. Using published data from a longitudinal time-series study of
human tongue microbiota, we are able to resolve within standard 97% similarity
OTUs up to 20 distinct subpopulations, all ecologically distinct but with 16S
tags differing by as little as 1 nucleotide (99.2% similarity). A comparative
analysis of oral communities of two cohabiting individuals reveals that most
such subpopulations are shared between the two communities at 100% sequence
identity, and that dynamical similarity between subpopulations in one host is
strongly predictive of dynamical similarity between the same subpopulations in
the other host. Our method can also be applied to samples collected in
cross-sectional studies and can be used with the 454 sequencing platform. We
discuss how the sub-OTU resolution of our approach can provide new insight into
factors shaping community assembly.
| [
{
"created": "Mon, 2 Dec 2013 19:58:41 GMT",
"version": "v1"
},
{
"created": "Thu, 29 Jan 2015 00:13:16 GMT",
"version": "v2"
}
] | 2015-01-30 | [
[
"Tikhonov",
"Mikhail",
""
],
[
"Leach",
"Robert W.",
""
],
[
"Wingreen",
"Ned S.",
""
]
] | The standard approach to analyzing 16S tag sequence data, which relies on clustering reads by sequence similarity into Operational Taxonomic Units (OTUs), underexploits the accuracy of modern sequencing technology. We present a clustering-free approach to multi-sample Illumina datasets that can identify independent bacterial subpopulations regardless of the similarity of their 16S tag sequences. Using published data from a longitudinal time-series study of human tongue microbiota, we are able to resolve within standard 97% similarity OTUs up to 20 distinct subpopulations, all ecologically distinct but with 16S tags differing by as little as 1 nucleotide (99.2% similarity). A comparative analysis of oral communities of two cohabiting individuals reveals that most such subpopulations are shared between the two communities at 100% sequence identity, and that dynamical similarity between subpopulations in one host is strongly predictive of dynamical similarity between the same subpopulations in the other host. Our method can also be applied to samples collected in cross-sectional studies and can be used with the 454 sequencing platform. We discuss how the sub-OTU resolution of our approach can provide new insight into factors shaping community assembly. |
0908.1960 | Moritz Helias | M. Helias, M. Deger, S. Rotter, M. Diesmann | A Fokker-Planck formalism for diffusion with finite increments and
absorbing boundaries | Consists of two parts: main article (3 figures) plus supplementary
text (3 extra figures) | null | 10.1371/journal.pcbi.1000929 | null | q-bio.QM q-bio.OT q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Gaussian white noise is frequently used to model fluctuations in physical
systems. In Fokker-Planck theory, this leads to a vanishing probability density
near the absorbing boundary of threshold models. Here we derive the boundary
condition for the stationary density of a first-order stochastic differential
equation for additive finite-grained Poisson noise and show that the response
properties of threshold units are qualitatively altered. Applied to the
integrate-and-fire neuron model, the response turns out to be instantaneous
rather than exhibiting low-pass characteristics, highly non-linear, and
asymmetric for excitation and inhibition. The novel mechanism is exhibited on
the network level and is a generic property of pulse-coupled systems of
threshold units.
| [
{
"created": "Thu, 13 Aug 2009 19:18:49 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Aug 2009 21:24:01 GMT",
"version": "v2"
},
{
"created": "Fri, 13 Nov 2009 05:58:02 GMT",
"version": "v3"
}
] | 2010-09-17 | [
[
"Helias",
"M.",
""
],
[
"Deger",
"M.",
""
],
[
"Rotter",
"S.",
""
],
[
"Diesmann",
"M.",
""
]
] | Gaussian white noise is frequently used to model fluctuations in physical systems. In Fokker-Planck theory, this leads to a vanishing probability density near the absorbing boundary of threshold models. Here we derive the boundary condition for the stationary density of a first-order stochastic differential equation for additive finite-grained Poisson noise and show that the response properties of threshold units are qualitatively altered. Applied to the integrate-and-fire neuron model, the response turns out to be instantaneous rather than exhibiting low-pass characteristics, highly non-linear, and asymmetric for excitation and inhibition. The novel mechanism is exhibited on the network level and is a generic property of pulse-coupled systems of threshold units. |
2011.06837 | Rapha\"el Candelier | Benjamin Gallois and Rapha\"el Candelier | FastTrack: an open-source software for tracking varying numbers of
deformable objects | null | null | 10.1371/journal.pcbi.1008697 | null | q-bio.QM cs.CV | http://creativecommons.org/licenses/by/4.0/ | Analyzing the dynamical properties of mobile objects requires extracting
trajectories from recordings, which is often done by tracking movies. We
compiled a database of two-dimensional movies for very different biological and
physical systems spanning a wide range of length scales and developed a
general-purpose, optimized, open-source, cross-platform, easy to install and
use, self-updating software called FastTrack. It can handle a changing number
of deformable objects in a region of interest, and is particularly suitable for
animal and cell tracking in two dimensions. Furthermore, we introduce the
probability of incursions as a new measure of a movie's trackability that
doesn't require the knowledge of ground truth trajectories, since it is
resilient to small amounts of errors and can be computed on the basis of an ad
hoc tracking. We also leveraged the versatility and speed of FastTrack to
implement an iterative algorithm determining a set of nearly-optimized tracking
parameters -- yet further reducing the amount of human intervention -- and
demonstrate that FastTrack can be used to explore the space of tracking
parameters to optimize the number of swaps for a batch of similar movies. A
benchmark shows that FastTrack is orders of magnitude faster than
state-of-the-art tracking algorithms, with a comparable tracking accuracy. The
source code is available under the GNU GPLv3 at
https://github.com/FastTrackOrg/FastTrack and pre-compiled binaries for
Windows, Mac and Linux are available at http://www.fasttrack.sh.
| [
{
"created": "Fri, 13 Nov 2020 09:52:58 GMT",
"version": "v1"
}
] | 2021-06-09 | [
[
"Gallois",
"Benjamin",
""
],
[
"Candelier",
"Raphaël",
""
]
] | Analyzing the dynamical properties of mobile objects requires extracting trajectories from recordings, which is often done by tracking movies. We compiled a database of two-dimensional movies for very different biological and physical systems spanning a wide range of length scales and developed a general-purpose, optimized, open-source, cross-platform, easy to install and use, self-updating software called FastTrack. It can handle a changing number of deformable objects in a region of interest, and is particularly suitable for animal and cell tracking in two dimensions. Furthermore, we introduce the probability of incursions as a new measure of a movie's trackability that doesn't require the knowledge of ground truth trajectories, since it is resilient to small amounts of errors and can be computed on the basis of an ad hoc tracking. We also leveraged the versatility and speed of FastTrack to implement an iterative algorithm determining a set of nearly-optimized tracking parameters -- yet further reducing the amount of human intervention -- and demonstrate that FastTrack can be used to explore the space of tracking parameters to optimize the number of swaps for a batch of similar movies. A benchmark shows that FastTrack is orders of magnitude faster than state-of-the-art tracking algorithms, with a comparable tracking accuracy. The source code is available under the GNU GPLv3 at https://github.com/FastTrackOrg/FastTrack and pre-compiled binaries for Windows, Mac and Linux are available at http://www.fasttrack.sh. |
q-bio/0605023 | Tihamer Geyer | Tihamer Geyer | Form follows function -- how PufX increases the efficiency of the
light-harvesting complexes of Rhodobacter sphaeroides | Mostly rewritten and shortened text, now also deals with thermal
disorder, more focussed on the biological relevance of the results. 16 pages
LaTeX article, 7 figures. Submitted to Biophys. J | null | null | null | q-bio.QM | null | Some species of purple bacteria, e.g., Rhodobacter sphaeroides, contain the
protein PufX. Concurrently, the light harvesting complexes 1 (LH1) form dimers
of open rings. In mutants without PufX, the LH1s are closed rings and
photosynthesis breaks down, because the ubiquinone exchange at the reaction
center is blocked. Thus, PufX is regarded essential for quinone exchange.
In contrast to this view, which implicitly treats the LH1s as obstacles to
photosynthesis, we propose that the primary purpose of PufX is to improve the
efficiency of light harvesting by inducing the LH1 dimerization. Calculations
with a dipole model, which compare the photosynthetic efficiency of various
configurations of monomeric and dimeric core complexes, show that the dimer can
absorb photons directly into the RC about 30% more efficiently, relative to
the number of bacteriochlorophylls, but that the performance of the more
sophisticated dimeric LH1 antenna degrades faster with structural
perturbations. The calculations predict an optimal orientation of the reaction
centers relative to the LH1 dimer, which agrees well with the experimentally
found configuration.
The increased rigidity required for the dimer necessitates additional
modifications of the LH1 subunits, which would lead to the observed ubiquinone
blockage when PufX is missing.
| [
{
"created": "Tue, 16 May 2006 12:17:06 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Mar 2007 14:42:40 GMT",
"version": "v2"
}
] | 2007-05-23 | [
[
"Geyer",
"Tihamer",
""
]
] | Some species of purple bacteria, such as Rhodobacter sphaeroides, contain the protein PufX. Concurrently, the light harvesting complexes 1 (LH1) form dimers of open rings. In mutants without PufX, the LH1s are closed rings and photosynthesis breaks down, because the ubiquinone exchange at the reaction center is blocked. Thus, PufX is regarded as essential for quinone exchange. In contrast to this view, which implicitly treats the LH1s as obstacles to photosynthesis, we propose that the primary purpose of PufX is to improve the efficiency of light harvesting by inducing the LH1 dimerization. Calculations with a dipole model, which compare the photosynthetic efficiency of various configurations of monomeric and dimeric core complexes, show that the dimer can absorb photons directly into the RC about 30% more efficiently, relative to the number of bacteriochlorophylls, but that the performance of the more sophisticated dimeric LH1 antenna degrades faster with structural perturbations. The calculations predict an optimal orientation of the reaction centers relative to the LH1 dimer, which agrees well with the experimentally found configuration. The increased rigidity required for the dimer necessitates additional modifications of the LH1 subunits, which would lead to the observed ubiquinone blockage when PufX is missing. |
2201.03164 | Yue Wang | Yue Wang, Siqi He | Inference on autoregulation in gene expression | null | null | null | null | q-bio.MN | http://creativecommons.org/licenses/by/4.0/ | Some genes can promote or repress their own expressions, which is called
autoregulation. Although gene regulation is a central topic in biology,
autoregulation is much less studied. In general, it is extremely difficult to
determine the existence of autoregulation with direct biochemical approaches.
Nevertheless, some papers have observed that certain types of autoregulations
are linked to noise levels in gene expression. We generalize these results by
two propositions on discrete-state continuous-time Markov chains. These two
propositions form a simple but robust method to infer the existence of
autoregulation from gene expression data. This method only needs to compare the
mean and variance of the gene expression level. Compared to other methods for
inferring autoregulation, our method only requires non-interventional one-time
data, and does not need to estimate parameters. Besides, our method has few
restrictions on the model. We apply this method to four groups of experimental
data and find some genes that might have autoregulation. Some inferred
autoregulations have been verified by experiments or other theoretical works.
| [
{
"created": "Mon, 10 Jan 2022 05:18:37 GMT",
"version": "v1"
},
{
"created": "Tue, 18 Jan 2022 20:51:09 GMT",
"version": "v2"
},
{
"created": "Wed, 23 Mar 2022 04:30:43 GMT",
"version": "v3"
},
{
"created": "Sat, 2 Jul 2022 00:48:53 GMT",
"version": "v4"
},
{
"cr... | 2022-08-30 | [
[
"Wang",
"Yue",
""
],
[
"He",
"Siqi",
""
]
] | Some genes can promote or repress their own expressions, which is called autoregulation. Although gene regulation is a central topic in biology, autoregulation is much less studied. In general, it is extremely difficult to determine the existence of autoregulation with direct biochemical approaches. Nevertheless, some papers have observed that certain types of autoregulations are linked to noise levels in gene expression. We generalize these results by two propositions on discrete-state continuous-time Markov chains. These two propositions form a simple but robust method to infer the existence of autoregulation from gene expression data. This method only needs to compare the mean and variance of the gene expression level. Compared to other methods for inferring autoregulation, our method only requires non-interventional one-time data, and does not need to estimate parameters. Besides, our method has few restrictions on the model. We apply this method to four groups of experimental data and find some genes that might have autoregulation. Some inferred autoregulations have been verified by experiments or other theoretical works. |
1101.3983 | Lipi Acharya | Lipi Acharya, Thair Judeh, Zhansheng Duan, Michael Rabbat and Dongxiao
Zhu | GSGS: A Computational Framework to Reconstruct Signaling Pathways from
Gene Sets | null | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel two-stage Gene Set Gibbs Sampling (GSGS) framework to
reverse engineer signaling pathways from gene sets inferred from molecular
profiling data. We hypothesize that signaling pathways are structurally an
ensemble of overlapping linear signal transduction events which we encode as
Information Flow Gene Sets (IFGS's). We infer pathways from gene sets
corresponding to these events subjected to a random permutation of genes within
each set. In Stage I, we use a source separation algorithm to derive unordered
and overlapping IFGS's from molecular profiling data, allowing cross talk among
IFGS's. In Stage II, we develop a Gibbs sampling like algorithm, Gene Set Gibbs
Sampler, to reconstruct signaling pathways from the latent IFGS's derived in
Stage I. The novelty of this framework lies in the seamless integration of the
two stages and the hypothesis of IFGS's as the basic building blocks for signal
pathways. In the proof-of-concept studies, our approach is shown to outperform
the existing Bayesian network approaches using both continuous and discrete
data generated from benchmark networks in the DREAM initiative. We perform a
comprehensive sensitivity analysis to assess the robustness of the approach.
Finally, we implement the GSGS framework to reconstruct signaling pathways in
breast cancer cells.
| [
{
"created": "Thu, 20 Jan 2011 18:08:48 GMT",
"version": "v1"
},
{
"created": "Sun, 23 Jan 2011 07:22:58 GMT",
"version": "v2"
},
{
"created": "Thu, 13 Oct 2011 21:53:19 GMT",
"version": "v3"
}
] | 2011-10-17 | [
[
"Acharya",
"Lipi",
""
],
[
"Judeh",
"Thair",
""
],
[
"Duan",
"Zhansheng",
""
],
[
"Rabbat",
"Michael",
""
],
[
"Zhu",
"Dongxiao",
""
]
] | We propose a novel two-stage Gene Set Gibbs Sampling (GSGS) framework, to reverse engineer signaling pathways from gene sets inferred from molecular profiling data. We hypothesize that signaling pathways are structurally an ensemble of overlapping linear signal transduction events which we encode as Information Flow Gene Sets (IFGS's). We infer pathways from gene sets corresponding to these events subjected to a random permutation of genes within each set. In Stage I, we use a source separation algorithm to derive unordered and overlapping IFGS's from molecular profiling data, allowing cross talk among IFGS's. In Stage II, we develop a Gibbs sampling like algorithm, Gene Set Gibbs Sampler, to reconstruct signaling pathways from the latent IFGS's derived in Stage I. The novelty of this framework lies in the seamless integration of the two stages and the hypothesis of IFGS's as the basic building blocks for signal pathways. In the proof-of-concept studies, our approach is shown to outperform the existing Bayesian network approaches using both continuous and discrete data generated from benchmark networks in the DREAM initiative. We perform a comprehensive sensitivity analysis to assess the robustness of the approach. Finally, we implement the GSGS framework to reconstruct signaling pathways in breast cancer cells. |
1509.01638 | Susan Martonosi | Harry J. Dudley, Abhishek Goenka, Cesar J. Orellana, Susan E.
Martonosi | Multi-year optimization of malaria intervention: a mathematical model | 27 pages, 9 figures, 6 tables. Under review | null | 10.1186/s12936-016-1182-0 | null | q-bio.PE math.OC physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Malaria is a mosquito-borne, lethal disease that affects millions and kills
hundreds of thousands of people each year. In this paper, we develop a model
for allocating malaria interventions across geographic regions and time,
subject to budget constraints, with the aim of minimizing the number of
person-days of malaria infection. The model considers a range of
conditions: climatic characteristics, treatment efficacy, distribution costs,
and treatment coverage. We couple an expanded susceptible-infected-recovered
(SIR) compartment model for the disease dynamics with an integer linear
programming (ILP) model for selecting the disease interventions. Our model
produces an intervention plan for all regions, identifying which combination of
interventions, with which level of coverage, to use in each region and year in
a five-year planning horizon. Simulations using the model yield high-level,
qualitative insights on optimal intervention policies: The optimal policy is
different when considering a five-year time horizon than when considering only
a single year, due to the effects that interventions have on the disease
transmission dynamics. The vaccine intervention is rarely selected, except if
its assumed cost is significantly lower than that predicted in the literature.
Increasing the available budget causes the number of person-days of malaria
infection to decrease linearly up to a point, after which the benefit of
increased budget starts to taper. The optimal policy is highly dependent on
assumptions about mosquito density, selecting different interventions for wet
climates with high density than for dry climates with low density, and the
interventions are found to be less effective at controlling malaria in the wet
climates when attainable intervention coverage is 60% or lower. However, when
intervention coverage of 80% is attainable, malaria prevalence drops
quickly.
| [
{
"created": "Fri, 4 Sep 2015 23:33:32 GMT",
"version": "v1"
},
{
"created": "Thu, 4 Feb 2016 19:25:15 GMT",
"version": "v2"
}
] | 2023-09-28 | [
[
"Dudley",
"Harry J.",
""
],
[
"Goenka",
"Abhishek",
""
],
[
"Orellana",
"Cesar J.",
""
],
[
"Martonosi",
"Susan E.",
""
]
] | Malaria is a mosquito-borne, lethal disease that affects millions and kills hundreds of thousands of people each year. In this paper, we develop a model for allocating malaria interventions across geographic regions and time, subject to budget constraints, with the aim of minimizing the number of person-days of malaria infection. The model considers a range of several conditions: climatic characteristics, treatment efficacy, distribution costs, and treatment coverage. We couple an expanded susceptible-infected-recovered (SIR) compartment model for the disease dynamics with an integer linear programming (ILP) model for selecting the disease interventions. Our model produces an intervention plan for all regions, identifying which combination of interventions, with which level of coverage, to use in each region and year in a five-year planning horizon. Simulations using the model yield high-level, qualitative insights on optimal intervention policies: The optimal policy is different when considering a five-year time horizon than when considering only a single year, due to the effects that interventions have on the disease transmission dynamics. The vaccine intervention is rarely selected, except if its assumed cost is significantly lower than that predicted in the literature. Increasing the available budget causes the number of person-days of malaria infection to decrease linearly up to a point, after which the benefit of increased budget starts to taper. The optimal policy is highly dependent on assumptions about mosquito density, selecting different interventions for wet climates with high density than for dry climates with low density, and the interventions are found to be less effective at controlling malaria in the wet climates when attainable intervention coverage is 60% or lower. However, when intervention coverage of 80% is attainable, then malaria prevalence drops quickly. |
q-bio/0411003 | Kai Wang | Kai Wang, Nilanjana Banerjee, Adam Margolin, Ilya Nemenman, Katia
Basso, Riccardo Favera, Andrea Califano | Conditional Network Analysis Identifies Candidate Regulator Genes in
Human B Cells | Submitted to RECOMB 2005 (11 pages, 4 figures, 2 tables) | null | null | null | q-bio.MN q-bio.GN q-bio.QM | null | Cellular phenotypes are determined by the dynamical activity of networks of
co-regulated genes. Elucidating such networks is crucial for the understanding
of normal cell physiology as well as for the dissection of complex pathologic
phenotypes. Existing methods for such "reverse engineering" of genetic networks
from microarray expression data have been successful only in prokaryotes (E.
coli) and lower eukaryotes (S. cerevisiae) with relatively simple genomes.
Additionally, they have mostly attempted to reconstruct average properties
of the network connectivity without capturing the highly conditional nature
of the interactions. In this paper we extend the ARACNE algorithm, which we
recently introduced and successfully applied to the reconstruction of
whole-genome transcriptional networks from mammalian cells, precisely to link
the existence of specific network structures to the expression or lack thereof
of specific regulator genes. This is accomplished by analyzing thousands of
alternative network topologies generated by constraining the data set on the
presence or absence of putative regulator genes. By considering interactions
that are consistently supported across several such constraints, we identify
many transcriptional interactions that would not have been detectable by the
original method. By selecting genes that produce statistically significant
changes in network topology, we identify novel candidate regulator genes.
Further analysis shows that transcription factors, kinases, phosphatases, and
other gene families known to effect biochemical interactions, are significantly
overrepresented among the set of candidate regulator genes identified in
silico, indirectly supporting the validity of the approach.
| [
{
"created": "Sat, 30 Oct 2004 22:40:45 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Wang",
"Kai",
""
],
[
"Banerjee",
"Nilanjana",
""
],
[
"Margolin",
"Adam",
""
],
[
"Nemenman",
"Ilya",
""
],
[
"Basso",
"Katia",
""
],
[
"Favera",
"Riccardo",
""
],
[
"Califano",
"Andrea",
""
]
] | Cellular phenotypes are determined by the dynamical activity of networks of co-regulated genes. Elucidating such networks is crucial for the understanding of normal cell physiology as well as for the dissection of complex pathologic phenotypes. Existing methods for such "reverse engineering" of genetic networks from microarray expression data have been successful only in prokaryotes (E. coli) and lower eukaryotes (S. cerevisiae) with relatively simple genomes. Additionally, they have mostly attempted to reconstruct average properties about the network connectivity without capturing the highly conditional nature of the interactions. In this paper we extend the ARACNE algorithm, which we recently introduced and successfully applied to the reconstruction of whole-genome transcriptional networks from mammalian cells, precisely to link the existence of specific network structures to the expression or lack thereof of specific regulator genes. This is accomplished by analyzing thousands of alternative network topologies generated by constraining the data set on the presence or absence of putative regulator genes. By considering interactions that are consistently supported across several such constraints, we identify many transcriptional interactions that would not have been detectable by the original method. By selecting genes that produce statistically significant changes in network topology, we identify novel candidate regulator genes. Further analysis shows that transcription factors, kinases, phosphatases, and other gene families known to effect biochemical interactions, are significantly overrepresented among the set of candidate regulator genes identified in silico, indirectly supporting the validity of the approach. |
2301.11812 | Johannes Kleiner | Johannes Kleiner and Tim Ludwig | What is a Mathematical Structure of Conscious Experience? | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In consciousness science, several promising approaches have been developed
for how to represent conscious experience in terms of mathematical spaces and
structures. What is missing, however, is an explicit definition of what a
'mathematical structure of conscious experience' is. Here, we propose such a
definition. This definition provides a link between the abstract formal
entities of mathematics and the concreta of conscious experience; it
complements recent approaches that study quality spaces, qualia spaces or
phenomenal spaces; it provides a general method to identify and investigate
structures of conscious experience; and it may serve as a framework to unify
the various approaches from different fields. We hope that ultimately this work
provides a basis for developing a common formal language to study
consciousness.
| [
{
"created": "Thu, 26 Jan 2023 15:25:19 GMT",
"version": "v1"
}
] | 2023-01-30 | [
[
"Kleiner",
"Johannes",
""
],
[
"Ludwig",
"Tim",
""
]
] | In consciousness science, several promising approaches have been developed for how to represent conscious experience in terms of mathematical spaces and structures. What is missing, however, is an explicit definition of what a 'mathematical structure of conscious experience' is. Here, we propose such a definition. This definition provides a link between the abstract formal entities of mathematics and the concreta of conscious experience; it complements recent approaches that study quality spaces, qualia spaces or phenomenal spaces; it provides a general method to identify and investigate structures of conscious experience; and it may serve as a framework to unify the various approaches from different fields. We hope that ultimately this work provides a basis for developing a common formal language to study consciousness. |
2105.05382 | Arna Ghosh | Luke Y. Prince, Roy Henha Eyono, Ellen Boven, Arna Ghosh, Joe
Pemberton, Franz Scherr, Claudia Clopath, Rui Ponte Costa, Wolfgang Maass,
Blake A. Richards, Cristina Savin, Katharina Anna Wilmes | Current State and Future Directions for Learning in Biological Recurrent
Neural Networks: A Perspective Piece | null | null | null | null | q-bio.NC cs.AI | http://creativecommons.org/licenses/by/4.0/ | We provide a brief review of the common assumptions about biological learning
with findings from experimental neuroscience and contrast them with the
efficiency of gradient-based learning in recurrent neural networks. The key
issues discussed in this review include: synaptic plasticity, neural circuits,
theory-experiment divide, and objective functions. We conclude with
recommendations for both theoretical and experimental neuroscientists when
designing new studies that could help bring clarity to these issues.
| [
{
"created": "Wed, 12 May 2021 00:59:40 GMT",
"version": "v1"
},
{
"created": "Wed, 5 Jan 2022 16:47:02 GMT",
"version": "v2"
}
] | 2022-01-06 | [
[
"Prince",
"Luke Y.",
""
],
[
"Eyono",
"Roy Henha",
""
],
[
"Boven",
"Ellen",
""
],
[
"Ghosh",
"Arna",
""
],
[
"Pemberton",
"Joe",
""
],
[
"Scherr",
"Franz",
""
],
[
"Clopath",
"Claudia",
""
],
[
"Co... | We provide a brief review of the common assumptions about biological learning with findings from experimental neuroscience and contrast them with the efficiency of gradient-based learning in recurrent neural networks. The key issues discussed in this review include: synaptic plasticity, neural circuits, theory-experiment divide, and objective functions. We conclude with recommendations for both theoretical and experimental neuroscientists when designing new studies that could help bring clarity to these issues. |
2106.09000 | Xin Yang | Xin Yang, Ning Zhang, Donglin Wang | Deriving Autism Spectrum Disorder Functional Networks from RS-FMRI Data
using Group ICA and Dictionary Learning | Conference | null | 10.5121/csit.2021.110714 | null | q-bio.NC cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The objective of this study is to derive functional networks for the autism
spectrum disorder (ASD) population using the group ICA and dictionary learning
model together and to classify ASD and typically developing (TD) participants
using the functional connectivity calculated from the derived functional
networks. In our experiments, the ASD functional networks were derived from
resting-state functional magnetic resonance imaging (rs-fMRI) data. We
downloaded a total of 120 training samples, including 58 ASD and 62 TD
participants, which were obtained from the public repository: Autism Brain
Imaging Data Exchange I (ABIDE I). Our methodology and results have five main
parts. First, we utilize a group ICA model to extract functional networks from
the ASD group and rank the top 20 regions of interest (ROIs). Second, we
utilize a dictionary learning model to extract functional networks from the ASD
group and rank the top 20 ROIs. Third, we merged the 40 selected ROIs from the
two models together as the ASD functional networks. Fourth, we generate three
corresponding masks based on the 20 selected ROIs from group ICA, the 20 ROIs
selected from dictionary learning, and the 40 combined ROIs selected from both.
Finally, we extract ROIs for all training samples using the above three masks,
and the calculated functional connectivity was used as features for ASD and TD
classification. The classification results showed that the functional networks
derived from ICA and dictionary learning together outperform those derived from
a single ICA model or a single dictionary learning model.
| [
{
"created": "Mon, 7 Jun 2021 07:58:52 GMT",
"version": "v1"
}
] | 2021-06-17 | [
[
"Yang",
"Xin",
""
],
[
"Zhang",
"Ning",
""
],
[
"Wang",
"Donglin",
""
]
] | The objective of this study is to derive functional networks for the autism spectrum disorder (ASD) population using the group ICA and dictionary learning model together and to classify ASD and typically developing (TD) participants using the functional connectivity calculated from the derived functional networks. In our experiments, the ASD functional networks were derived from resting-state functional magnetic resonance imaging (rs-fMRI) data. We downloaded a total of 120 training samples, including 58 ASD and 62 TD participants, which were obtained from the public repository: Autism Brain Imaging Data Exchange I (ABIDE I). Our methodology and results have five main parts. First, we utilize a group ICA model to extract functional networks from the ASD group and rank the top 20 regions of interest (ROIs). Second, we utilize a dictionary learning model to extract functional networks from the ASD group and rank the top 20 ROIs. Third, we merged the 40 selected ROIs from the two models together as the ASD functional networks. Fourth, we generate three corresponding masks based on the 20 selected ROIs from group ICA, the 20 ROIs selected from dictionary learning, and the 40 combined ROIs selected from both. Finally, we extract ROIs for all training samples using the above three masks, and the calculated functional connectivity was used as features for ASD and TD classification. The classification results showed that the functional networks derived from ICA and dictionary learning together outperform those derived from a single ICA model or a single dictionary learning model. |
2206.11129 | Olga Shishkov | O. Shishkov and O. Peleg | Social Insects and Beyond: The Physics of Soft, Dense Invertebrate
Aggregations | 23 pages, 6 figures | null | null | null | q-bio.PE | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Aggregation is a common behavior by which organisms arrange themselves into
cohesive groups. Whether suspended in the air (like honey bee clusters), built
on the ground (such as army ant bridges), or immersed in water (such as sludge
worm blobs), these collectives serve a multitude of biological functions, from
protection against predation to the ability to maintain a relatively desirable
local environment despite a variable ambient environment. In this review, we
survey dense aggregations of a variety of insects, other arthropods, and worms
from a soft matter standpoint. An aggregation can be orders of magnitude larger
than its individual organisms, consisting of tens to hundreds of thousands of
individuals, and yet functions as a coherent entity. Understanding how
aggregating organisms coordinate with one another to form a superorganism
requires an interdisciplinary approach. We discuss how the physics of the
aggregation can yield additional insights to those gained from ecological and
physiological considerations, given that the aggregating individuals exchange
information, energy, and matter continually with the environment and one
another. While the connection between animal aggregations and the physics of
non-living materials has been proposed since the early 1900s, the recent advent
of physics of behavior studies provides new insights into social interactions
governed by physical principles. Current efforts focus on eusocial insects;
however, we show that these may just be the tip of an iceberg of superorganisms
that take advantage of physical interactions and simple behavioral rules to
adapt to changing environments. By bringing attention to a wide range of
invertebrate aggregations, we wish to inspire a new generation of scientists to
explore collective dynamics and bring a deeper understanding of the physics of
dense living aggregations.
| [
{
"created": "Wed, 22 Jun 2022 14:21:53 GMT",
"version": "v1"
}
] | 2022-06-23 | [
[
"Shishkov",
"O.",
""
],
[
"Peleg",
"O.",
""
]
] | Aggregation is a common behavior by which groups of organisms arrange into cohesive groups. Whether suspended in the air (like honey bee clusters), built on the ground (such as army ant bridges), or immersed in water (such as sludge worm blobs), these collectives serve a multitude of biological functions, from protection against predation to the ability to maintain a relatively desirable local environment despite a variable ambient environment. In this review, we survey dense aggregations of a variety of insects, other arthropods, and worms from a soft matter standpoint. An aggregation can be orders of magnitude larger than its individual organisms, consisting of tens to hundreds of thousands of individuals, and yet functions as a coherent entity. Understanding how aggregating organisms coordinate with one another to form a superorganism requires an interdisciplinary approach. We discuss how the physics of the aggregation can yield additional insights to those gained from ecological and physiological considerations, given that the aggregating individuals exchange information, energy, and matter continually with the environment and one another. While the connection between animal aggregations and the physics of non-living materials has been proposed since the early 1900s, the recent advent of physics of behavior studies provides new insights into social interactions governed by physical principles. Current efforts focus on eusocial insects; however, we show that these may just be the tip of an iceberg of superorganisms that take advantage of physical interactions and simple behavioral rules to adapt to changing environments. By bringing attention to a wide range of invertebrate aggregations, we wish to inspire a new generation of scientists to explore collective dynamics and bring a deeper understanding of the physics of dense living aggregations. |
2103.03292 | Francesco Zamponi | Jeanne Trinquier, Guido Uguzzoni, Andrea Pagnani, Francesco Zamponi,
Martin Weigt | Efficient generative modeling of protein sequences using simple
autoregressive models | 12 pages, 4 Figures + Supplementary Material | Nature Communications 12, 5800 (2021) | 10.1038/s41467-021-25756-4 | null | q-bio.BM cond-mat.dis-nn cond-mat.stat-mech q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative models emerge as promising candidates for novel sequence-data
driven approaches to protein design, and for the extraction of structural and
functional information about proteins deeply hidden in rapidly growing sequence
databases. Here we propose simple autoregressive models as highly accurate but
computationally efficient generative sequence models. We show that they perform
similarly to existing approaches based on Boltzmann machines or deep generative
models, but at a substantially lower computational cost (by a factor between
$10^2$ and $10^3$). Furthermore, the simple structure of our models has
distinctive mathematical advantages, which translate into an improved
applicability in sequence generation and evaluation. Within these models, we
can easily estimate both the probability of a given sequence, and, using the
model's entropy, the size of the functional sequence space related to a
specific protein family. In the example of response regulators, we find a huge
number of ca. $10^{68}$ possible sequences, which nevertheless constitute only
the astronomically small fraction $10^{-80}$ of all amino-acid sequences of the
same length. These findings illustrate the potential and the difficulty in
exploring sequence space via generative sequence models.
| [
{
"created": "Thu, 4 Mar 2021 20:05:58 GMT",
"version": "v1"
},
{
"created": "Mon, 13 Sep 2021 14:16:33 GMT",
"version": "v2"
},
{
"created": "Tue, 9 Nov 2021 08:00:41 GMT",
"version": "v3"
}
] | 2021-11-10 | [
[
"Trinquier",
"Jeanne",
""
],
[
"Uguzzoni",
"Guido",
""
],
[
"Pagnani",
"Andrea",
""
],
[
"Zamponi",
"Francesco",
""
],
[
"Weigt",
"Martin",
""
]
] | Generative models emerge as promising candidates for novel sequence-data driven approaches to protein design, and for the extraction of structural and functional information about proteins deeply hidden in rapidly growing sequence databases. Here we propose simple autoregressive models as highly accurate but computationally efficient generative sequence models. We show that they perform similarly to existing approaches based on Boltzmann machines or deep generative models, but at a substantially lower computational cost (by a factor between $10^2$ and $10^3$). Furthermore, the simple structure of our models has distinctive mathematical advantages, which translate into an improved applicability in sequence generation and evaluation. Within these models, we can easily estimate both the probability of a given sequence, and, using the model's entropy, the size of the functional sequence space related to a specific protein family. In the example of response regulators, we find a huge number of ca. $10^{68}$ possible sequences, which nevertheless constitute only the astronomically small fraction $10^{-80}$ of all amino-acid sequences of the same length. These findings illustrate the potential and the difficulty in exploring sequence space via generative sequence models. |
1611.08310 | Jiaying Zhang Jiaying Zhang | Xuehai Wu, Jiaying Zhang, Zaixu Cui, Weijun Tang, Chunhong Shao, Jin
Hu, Jianhong Zhu, Liangfu Zhou, Yao Zhao, Lu Lu, Gang Chen, Georg Northoff,
Gaolang Gong, Ying Mao, Yong He | White matter deficits underlie the loss of consciousness level and
predict recovery outcome in disorders of consciousness | null | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study aimed to identify white matter (WM) deficits underlying the loss
of consciousness in disorder of consciousness (DOC) patients using Diffusion
Tensor Imaging (DTI) and to demonstrate the potential value of DTI parameters
in predicting recovery outcomes of DOC patients. With 30 DOC patients (8
comatose, 8 unresponsive wakefulness syndrome/vegetative state, and 14 minimally
conscious state) and 25 patient controls, we performed group comparison of DTI
parameters across 48 core WM regions of interest (ROIs) using Analysis of
Covariance. Compared with controls, DOC patients had decreased Fractional
anisotropy (FA) and increased diffusivities in widespread WM areas. The
corresponding DTI parameters of those WM deficits in DOC patients significantly
correlated with the consciousness level evaluated by Coma Recovery Scale
Revised (CRS-R) and Glasgow Coma Scale (GCS). As for predicting the recovery
outcomes (i.e., regaining consciousness or not, grouped by their Glasgow
Outcome Scale more than 2 or not) at 3 months post scan, radial diffusivity of
left superior cerebellar peduncle and FA of right sagittal stratum reached an
accuracy of 87.5% and 75%, respectively. Our findings showed multiple WM
deficits underlying the loss of consciousness level, and demonstrated the
potential value of these WM areas in predicting the recovery outcomes of DOC
patients who have lost awareness of the environment and themselves.
| [
{
"created": "Thu, 24 Nov 2016 21:09:55 GMT",
"version": "v1"
}
] | 2016-11-28 | [
[
"Wu",
"Xuehai",
""
],
[
"Zhang",
"Jiaying",
""
],
[
"Cui",
"Zaixu",
""
],
[
"Tang",
"Weijun",
""
],
[
"Shao",
"Chunhong",
""
],
[
"Hu",
"Jin",
""
],
[
"Zhu",
"Jianhong",
""
],
[
"Zhou",
"Liangfu... | This study aimed to identify white matter (WM) deficits underlying the loss of consciousness in disorder of consciousness (DOC) patients using Diffusion Tensor Imaging (DTI) and to demonstrate the potential value of DTI parameters in predicting recovery outcomes of DOC patients. With 30 DOC patients (8 comatose, 8 unresponsive wakefulness syndrome/vegetative state, and 14 minimally conscious state) and 25 patient controls, we performed group comparison of DTI parameters across 48 core WM regions of interest (ROIs) using Analysis of Covariance. Compared with controls, DOC patients had decreased Fractional anisotropy (FA) and increased diffusivities in widespread WM areas. The corresponding DTI parameters of those WM deficits in DOC patients significantly correlated with the consciousness level evaluated by Coma Recovery Scale Revised (CRS-R) and Glasgow Coma Scale (GCS). As for predicting the recovery outcomes (i.e., regaining consciousness or not, grouped by their Glasgow Outcome Scale more than 2 or not) at 3 months post scan, radial diffusivity of left superior cerebellar peduncle and FA of right sagittal stratum reached an accuracy of 87.5% and 75%, respectively. Our findings showed multiple WM deficits underlying the loss of consciousness level, and demonstrated the potential value of these WM areas in predicting the recovery outcomes of DOC patients who have lost awareness of the environment and themselves.
0705.0912 | Erzs\'ebet Ravasz Regan | Erzsebet Ravasz, S. Gnanakaran and Zoltan Toroczkai | Network Structure of Protein Folding Pathways | 15 pages, 4 figures | null | null | null | q-bio.BM q-bio.MN | null | The classical approach to protein folding inspired by statistical mechanics
avoids the high dimensional structure of the conformation space by using
effective coordinates. Here we introduce a network approach to capture the
statistical properties of the structure of conformation spaces. Conformations
are represented as nodes of the network, while links are transitions via
elementary rotations around a chemical bond. Self-avoidance of a polypeptide
chain introduces degree correlations in the conformation network, which in turn
lead to energy landscape correlations. Folding can be interpreted as a biased
random walk on the conformation network. We show that the folding pathways
along energy gradients organize themselves into scale free networks, thus
explaining previous observations made via molecular dynamics simulations. We
also show that these energy landscape correlations are essential for recovering
the observed connectivity exponent, which belongs to a different universality
class than that of random energy models. In addition, we predict that the
exponent and therefore the structure of the folding network fundamentally
changes at high temperatures, as verified by our simulations on the AK peptide.
| [
{
"created": "Mon, 7 May 2007 14:12:07 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Ravasz",
"Erzsebet",
""
],
[
"Gnanakaran",
"S.",
""
],
[
"Toroczkai",
"Zoltan",
""
]
] | The classical approach to protein folding inspired by statistical mechanics avoids the high dimensional structure of the conformation space by using effective coordinates. Here we introduce a network approach to capture the statistical properties of the structure of conformation spaces. Conformations are represented as nodes of the network, while links are transitions via elementary rotations around a chemical bond. Self-avoidance of a polypeptide chain introduces degree correlations in the conformation network, which in turn lead to energy landscape correlations. Folding can be interpreted as a biased random walk on the conformation network. We show that the folding pathways along energy gradients organize themselves into scale free networks, thus explaining previous observations made via molecular dynamics simulations. We also show that these energy landscape correlations are essential for recovering the observed connectivity exponent, which belongs to a different universality class than that of random energy models. In addition, we predict that the exponent and therefore the structure of the folding network fundamentally changes at high temperatures, as verified by our simulations on the AK peptide. |
2208.04774 | Peter Boldog | Peter Boldog | Exact lattice-based stochastic cell culture simulation algorithms
incorporating spontaneous and contact-dependent reactions | 22 pages, 6 figures | null | null | null | q-bio.PE cond-mat.soft cond-mat.stat-mech q-bio.CB q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address the modeling issues of cell movement and division
with a special focus on the phenomenon of volume exclusion in a lattice-based,
exact stochastic simulation framework. We propose a new exact method, called
Reduced Rate Method -- RRM, that is substantially quicker than the previously
used exclusion method for large numbers of cells. In addition, we introduce
three novel reaction types: the contact-inhibited, the contact-promoted, and
the spontaneous reactions. To the best of our knowledge, these reaction types
have not been taken into account in lattice-based stochastic simulations of
cell cultures. These new types of events may be easily applied to complicated
systems, enabling the generation of biologically feasible stochastic cell
culture simulations. Furthermore, we show that the exclusion algorithm and our
RRM algorithm are mathematically equivalent in the sense that the next reaction
to be realized and the corresponding sojourn time both belong to the same
reaction and time distributions in the two approaches -- even with the newly
introduced reaction types.
Exact, agent-based, stochastic methods of cell culture simulations seem to be
undervalued and are mostly used as benchmarking tools to validate deterministic
approximations of the corresponding stochastic models. Our proposed methods are
exact, they are easy to implement, have a high predictive value, and can be
conveniently extended with new features. Therefore, these approaches promise a
great potential.
| [
{
"created": "Tue, 9 Aug 2022 13:36:18 GMT",
"version": "v1"
}
] | 2022-08-10 | [
[
"Boldog",
"Peter",
""
]
] | In this paper, we address the modeling issues of cell movement and division with a special focus on the phenomenon of volume exclusion in a lattice-based, exact stochastic simulation framework. We propose a new exact method, called Reduced Rate Method -- RRM, that is substantially quicker than the previously used exclusion method for large numbers of cells. In addition, we introduce three novel reaction types: the contact-inhibited, the contact-promoted, and the spontaneous reactions. To the best of our knowledge, these reaction types have not been taken into account in lattice-based stochastic simulations of cell cultures. These new types of events may be easily applied to complicated systems, enabling the generation of biologically feasible stochastic cell culture simulations. Furthermore, we show that the exclusion algorithm and our RRM algorithm are mathematically equivalent in the sense that the next reaction to be realized and the corresponding sojourn time both belong to the same reaction and time distributions in the two approaches -- even with the newly introduced reaction types. Exact, agent-based, stochastic methods of cell culture simulations seem to be undervalued and are mostly used as benchmarking tools to validate deterministic approximations of the corresponding stochastic models. Our proposed methods are exact, they are easy to implement, have a high predictive value, and can be conveniently extended with new features. Therefore, these approaches promise a great potential.
1907.04820 | Stefan Schuster | Stefan Schuster, Jan Ewald, Thomas Dandekar, Sybille D\"uhring | Optimizing defence, counter-defence and counter-counter defence in
parasitic and trophic interactions -- A modelling study | 20 pages, 6 figures | null | null | null | q-bio.SC | http://creativecommons.org/publicdomain/zero/1.0/ | In host-pathogen interactions, often the host (attacked organism) defends
itself by some toxic compound and the parasite, in turn, responds by producing
an enzyme that inactivates that compound. In some cases, the host can respond
by producing an inhibitor of that enzyme, which can be considered as a
counter-counter defence. An example is provided by cephalosporins,
beta-lactamases and clavulanic acid (an inhibitor of beta-lactamases). Here, we
tackle the question under which conditions it pays, during evolution, to
establish a counter-counter defence rather than to intensify or widen the
defence mechanisms. We establish a mathematical model describing this
phenomenon, based on enzyme kinetics for competitive inhibition. We use an
objective function based on Haber's rule, which says that the toxic effect is
proportional to the time integral of toxin concentration. The optimal
allocation of defence and counter-counter defence can be calculated in an
analytical way despite the nonlinearity in the underlying differential
equation. The calculation provides a threshold value for the dissociation
constant of the inhibitor. Only if the inhibition constant is below that
threshold, that is, in the case of strong binding of the inhibitor, it pays to
have a counter-counter defence. This theoretical prediction accounts for the
observation that not for all defence mechanisms, a counter-counter defence
exists. Our results should be of interest for computing optimal mixtures of
beta-lactam antibiotics and beta-lactamase inhibitors such as sulbactam, as
well as for plant-herbivore and other molecular-ecological interactions and to
fight antibiotic resistance in general.
| [
{
"created": "Wed, 10 Jul 2019 16:35:00 GMT",
"version": "v1"
}
] | 2019-07-11 | [
[
"Schuster",
"Stefan",
""
],
[
"Ewald",
"Jan",
""
],
[
"Dandekar",
"Thomas",
""
],
[
"Dühring",
"Sybille",
""
]
] | In host-pathogen interactions, often the host (attacked organism) defends itself by some toxic compound and the parasite, in turn, responds by producing an enzyme that inactivates that compound. In some cases, the host can respond by producing an inhibitor of that enzyme, which can be considered as a counter-counter defence. An example is provided by cephalosporins, beta-lactamases and clavulanic acid (an inhibitor of beta-lactamases). Here, we tackle the question under which conditions it pays, during evolution, to establish a counter-counter defence rather than to intensify or widen the defence mechanisms. We establish a mathematical model describing this phenomenon, based on enzyme kinetics for competitive inhibition. We use an objective function based on Haber's rule, which says that the toxic effect is proportional to the time integral of toxin concentration. The optimal allocation of defence and counter-counter defence can be calculated in an analytical way despite the nonlinearity in the underlying differential equation. The calculation provides a threshold value for the dissociation constant of the inhibitor. Only if the inhibition constant is below that threshold, that is, in the case of strong binding of the inhibitor, it pays to have a counter-counter defence. This theoretical prediction accounts for the observation that not for all defence mechanisms, a counter-counter defence exists. Our results should be of interest for computing optimal mixtures of beta-lactam antibiotics and beta-lactamase inhibitors such as sulbactam, as well as for plant-herbivore and other molecular-ecological interactions and to fight antibiotic resistance in general. |
0901.0663 | Joachim Krug | Andrea Wolff and Joachim Krug | Robustness and epistasis in mutation-selection models | 20 pages, 14 figures | Physical Biology 6 (2009) 036007 (with some revisions) | 10.1088/1478-3975/6/3/036007 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We investigate the fitness advantage associated with the robustness of a
phenotype against deleterious mutations using deterministic mutation-selection
models of quasispecies type equipped with a mesa shaped fitness landscape. We
obtain analytic results for the robustness effect which become exact in the
limit of infinite sequence length. Thereby, we are able to clarify a seeming
contradiction between recent rigorous work and an earlier heuristic treatment
based on a mapping to a Schr\"odinger equation. We exploit the quantum
mechanical analogy to calculate a correction term for finite sequence lengths
and verify our analytic results by numerical studies. In addition, we
investigate the occurrence of an error threshold for a general class of
epistatic landscapes and show that diminishing epistasis is a necessary but not
sufficient condition for error threshold behavior.
| [
{
"created": "Tue, 6 Jan 2009 16:16:39 GMT",
"version": "v1"
}
] | 2015-05-13 | [
[
"Wolff",
"Andrea",
""
],
[
"Krug",
"Joachim",
""
]
] | We investigate the fitness advantage associated with the robustness of a phenotype against deleterious mutations using deterministic mutation-selection models of quasispecies type equipped with a mesa shaped fitness landscape. We obtain analytic results for the robustness effect which become exact in the limit of infinite sequence length. Thereby, we are able to clarify a seeming contradiction between recent rigorous work and an earlier heuristic treatment based on a mapping to a Schr\"odinger equation. We exploit the quantum mechanical analogy to calculate a correction term for finite sequence lengths and verify our analytic results by numerical studies. In addition, we investigate the occurrence of an error threshold for a general class of epistatic landscapes and show that diminishing epistasis is a necessary but not sufficient condition for error threshold behavior.
1208.6350 | Mengyao Zhao | Mengyao Zhao, Wan-Ping Lee, Erik Garrison and Gabor T. Marth | SSW Library: An SIMD Smith-Waterman C/C++ Library for Use in Genomic
Applications | 3 pages, 2 figures | null | null | 10.1371/journal.pone.0082138 | null | q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Summary: The Smith-Waterman (SW) algorithm, which produces the optimal
pairwise alignment between two sequences, is frequently used as a key component
of fast heuristic read mapping and variation detection tools, but current
implementations are either designed as monolithic protein database searching
tools or are embedded into other tools. To facilitate easy integration of the
fast Single Instruction Multiple Data (SIMD) SW algorithm into third party
software, we wrote a C/C++ library, which extends Farrar's Striped SW (SSW) to
return alignment information in addition to the optimal SW score. Availability:
SSW is available both as a C/C++ software library and as a stand-alone
alignment tool wrapping the library's functionality at
https://github.com/mengyao/Complete-Striped-Smith-Waterman-Library Contact:
marth@bc.edu
| [
{
"created": "Fri, 31 Aug 2012 02:03:43 GMT",
"version": "v1"
},
{
"created": "Fri, 26 Apr 2013 22:02:44 GMT",
"version": "v2"
}
] | 2014-03-05 | [
[
"Zhao",
"Mengyao",
""
],
[
"Lee",
"Wan-Ping",
""
],
[
"Garrison",
"Erik",
""
],
[
"Marth",
"Gabor T.",
""
]
] | Summary: The Smith-Waterman (SW) algorithm, which produces the optimal pairwise alignment between two sequences, is frequently used as a key component of fast heuristic read mapping and variation detection tools, but current implementations are either designed as monolithic protein database searching tools or are embedded into other tools. To facilitate easy integration of the fast Single Instruction Multiple Data (SIMD) SW algorithm into third party software, we wrote a C/C++ library, which extends Farrar's Striped SW (SSW) to return alignment information in addition to the optimal SW score. Availability: SSW is available both as a C/C++ software library and as a stand-alone alignment tool wrapping the library's functionality at https://github.com/mengyao/Complete-Striped-Smith-Waterman-Library Contact: marth@bc.edu
q-bio/0703035 | Brigitte Gaillard | S. Bourgeon (DEPE-Iphc), T. Raclot (DEPE-Iphc), Y. Le Maho
(DEPE-Iphc), D. Ricquier (CREMD), F. Criscuolo (CREMD) | Innate immunity, assessed by plasma NO measurements, is not suppressed
during the incubation fast in eiders | null | Dev. Comp. Immunol. (19/12/2006) 29 pages | 10.1016/j.dci.2006.11.009 | null | q-bio.PE | null | Immunity is hypothesized to share limited resources with other physiological
functions and may mediate life history trade-offs, for example between
reproduction and survival. However, vertebrate immune defense is a complex
system that consists of three components. To date, no study has assessed all of
these components for the same animal model and within a given situation.
Previous studies have determined that the acquired immunity of common eiders
(Somateria mollissima) is suppressed during incubation. The present paper aims
to assess the innate immune response in fasting eiders in relation to their
initial body condition. Innate immunity was assessed by measuring plasma nitric
oxide (NO) levels, prior to and after injection of lipopolysaccharides (LPS), a
method which is easily applicable to many wild animals. Body condition index
and corticosterone levels were subsequently determined as indicators of body
condition and stress level prior to LPS injection. The innate immune response
in eiders did not vary significantly throughout the incubation period. The
innate immune response of eiders did not vary significantly in relation to
their initial body condition but decreased significantly when corticosterone
levels increased. However, NO levels after LPS injection were significantly and
positively related to initial body condition, while there was a significant
negative relationship with plasma corticosterone levels. Our study suggests
that female eiders preserve an effective innate immune response during
incubation and this response might be partially determined by the initial body
condition.
| [
{
"created": "Thu, 15 Mar 2007 12:44:23 GMT",
"version": "v1"
}
] | 2007-05-23 | [
[
"Bourgeon",
"S.",
"",
"DEPE-Iphc"
],
[
"Raclot",
"T.",
"",
"DEPE-Iphc"
],
[
"Maho",
"Y. Le",
"",
"DEPE-Iphc"
],
[
"Ricquier",
"D.",
"",
"CREMD"
],
[
"Criscuolo",
"F.",
"",
"CREMD"
]
] | Immunity is hypothesized to share limited resources with other physiological functions and may mediate life history trade-offs, for example between reproduction and survival. However, vertebrate immune defense is a complex system that consists of three components. To date, no study has assessed all of these components for the same animal model and within a given situation. Previous studies have determined that the acquired immunity of common eiders (Somateria mollissima) is suppressed during incubation. The present paper aims to assess the innate immune response in fasting eiders in relation to their initial body condition. Innate immunity was assessed by measuring plasma nitric oxide (NO) levels, prior to and after injection of lipopolysaccharides (LPS), a method which is easily applicable to many wild animals. Body condition index and corticosterone levels were subsequently determined as indicators of body condition and stress level prior to LPS injection. The innate immune response in eiders did not vary significantly throughout the incubation period. The innate immune response of eiders did not vary significantly in relation to their initial body condition but decreased significantly when corticosterone levels increased. However, NO levels after LPS injection were significantly and positively related to initial body condition, while there was a significant negative relationship with plasma corticosterone levels. Our study suggests that female eiders preserve an effective innate immune response during incubation and this response might be partially determined by the initial body condition. |
1003.2366 | Henrik Jeldtoft Jensen | Tomas Alarcon and Henrik Jeldtoft Jensen | From gene regulatory networks to population dynamics: robustness,
diversity and their role in progression to cancer | 16 pages, 6 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The aim of this paper is to discuss the role of robustness and diversity in
population dynamics, in particular some properties of the multi-step progression
from healthy tissue to fully malignant tumours. Recent evidence shows that
diversity within the cell population of a neoplasm, a pre-tumoural lesion that can
develop into a fully malignant tumour, is the best predictor for its evolving
into a tumour. By studying the dynamics of a population described by a
multi-type, population-size limited branching process in terms of the
evolutionary formalism, we show some general principles regarding the
probability of a resident population being invaded by a mutant population in
terms of the number of types present in the population and their resilience. We
show that, although diversity in the mutant population poses a barrier for the
emergence of the initial (benign) lesion, under appropriate conditions, namely,
the phenotypes in the mutant population being more resilient than those of the
resident population, a more variable neoplastic population is more likely to be
invaded by a more malignant one. Analysis of a model of gene regulatory
networks suggests possible mechanisms giving rise to mutants with increased
phenotypic diversity and robustness. We then go on to show how these results
may help us to interpret some recent data regarding the evolution of Barrett's
oesophagus into throat cancer.
| [
{
"created": "Thu, 11 Mar 2010 17:26:44 GMT",
"version": "v1"
}
] | 2010-03-12 | [
[
"Alarcon",
"Tomas",
""
],
[
"Jensen",
"Henrik Jeldtoft",
""
]
] | The aim of this paper is to discuss the role of robustness and diversity in population dynamics, in particular some properties of the multi-step progression from healthy tissue to fully malignant tumours. Recent evidence shows that diversity within the cell population of a neoplasm, a pre-tumoural lesion that can develop into a fully malignant tumour, is the best predictor for its evolving into a tumour. By studying the dynamics of a population described by a multi-type, population-size limited branching process in terms of the evolutionary formalism, we show some general principles regarding the probability of a resident population being invaded by a mutant population in terms of the number of types present in the population and their resilience. We show that, although diversity in the mutant population poses a barrier for the emergence of the initial (benign) lesion, under appropriate conditions, namely, the phenotypes in the mutant population being more resilient than those of the resident population, a more variable neoplastic population is more likely to be invaded by a more malignant one. Analysis of a model of gene regulatory networks suggests possible mechanisms giving rise to mutants with increased phenotypic diversity and robustness. We then go on to show how these results may help us to interpret some recent data regarding the evolution of Barrett's oesophagus into throat cancer.
1304.2917 | Flavia Maria Darcie Marquitti | Flavia Maria Darcie Marquitti, Paulo Roberto Guimaraes Jr., Mathias
Mistretta Pires, Luiz Fernando Bittencourt | MODULAR: Software for the Autonomous Computation of Modularity in Large
Network Sets | null | null | null | null | q-bio.QM cs.SI physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Ecological systems can be seen as networks of interactions between
individuals, species, or habitat patches. A key feature of many ecological
networks is their organization into modules, which are subsets of elements that
are more connected to each other than to the other elements in the network. We
introduce MODULAR to perform rapid and autonomous calculation of modularity in
sets of networks. MODULAR reads a set of files with matrices or edge lists that
represent unipartite or bipartite networks, and identifies modules using two
different modularity metrics that have been previously used in studies of
ecological networks. To find the network partition that maximizes modularity,
the software offers five optimization methods to the user. We also included two
of the most common null models that are used in studies of ecological networks
to verify how the modularity found by the maximization of each metric differs
from a theoretical benchmark.
| [
{
"created": "Tue, 9 Apr 2013 14:37:44 GMT",
"version": "v1"
}
] | 2013-04-11 | [
[
"Marquitti",
"Flavia Maria Darcie",
""
],
[
"Guimaraes",
"Paulo Roberto",
"Jr."
],
[
"Pires",
"Mathias Mistretta",
""
],
[
"Bittencourt",
"Luiz Fernando",
""
]
] | Ecological systems can be seen as networks of interactions between individuals, species, or habitat patches. A key feature of many ecological networks is their organization into modules, which are subsets of elements that are more connected to each other than to the other elements in the network. We introduce MODULAR to perform rapid and autonomous calculation of modularity in sets of networks. MODULAR reads a set of files with matrices or edge lists that represent unipartite or bipartite networks, and identifies modules using two different modularity metrics that have been previously used in studies of ecological networks. To find the network partition that maximizes modularity, the software offers five optimization methods to the user. We also included two of the most common null models that are used in studies of ecological networks to verify how the modularity found by the maximization of each metric differs from a theoretical benchmark.
2109.11358 | Jari Pronold | Jari Pronold, Jakob Jordan, Brian J. N. Wylie, Itaru Kitayama, Markus
Diesmann, Susanne Kunkel | Routing brain traffic through the von Neumann bottleneck: Parallel
sorting and refactoring | null | null | 10.3389/fninf.2021.785068 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generic simulation code for spiking neuronal networks spends the major part
of time in the phase where spikes have arrived at a compute node and need to be
delivered to their target neurons. These spikes were emitted over the last
interval between communication steps by source neurons distributed across many
compute nodes and are inherently irregular with respect to their targets. For
finding the targets, the spikes need to be dispatched to a three-dimensional
data structure with decisions on target thread and synapse type to be made on
the way. With growing network size a compute node receives spikes from an
increasing number of different source neurons until in the limit each synapse
on the compute node has a unique source. Here we show analytically how this
sparsity emerges over the practically relevant range of network sizes from a
hundred thousand to a billion neurons. By profiling a production code we
investigate opportunities for algorithmic changes to avoid indirections and
branching. Every thread hosts an equal share of the neurons on a compute node.
In the original algorithm all threads search through all spikes to pick out the
relevant ones. With increasing network size the fraction of hits remains
invariant but the absolute number of rejections grows. An alternative algorithm
equally divides the spikes among the threads and sorts them in parallel
according to target thread and synapse type. After this every thread completes
delivery solely of the section of spikes for its own neurons. The new algorithm
halves the number of instructions in spike delivery which leads to a reduction
of simulation time of up to 40 %. Thus, spike delivery is a fully
parallelizable process with a single synchronization point and thereby well
suited for many-core systems. Our analysis indicates that further progress
requires a reduction of the latency instructions experience in accessing
memory.
| [
{
"created": "Thu, 23 Sep 2021 13:15:34 GMT",
"version": "v1"
},
{
"created": "Fri, 24 Sep 2021 07:11:12 GMT",
"version": "v2"
},
{
"created": "Thu, 10 Mar 2022 11:35:47 GMT",
"version": "v3"
}
] | 2022-03-14 | [
[
"Pronold",
"Jari",
""
],
[
"Jordan",
"Jakob",
""
],
[
"Wylie",
"Brian J. N.",
""
],
[
"Kitayama",
"Itaru",
""
],
[
"Diesmann",
"Markus",
""
],
[
"Kunkel",
"Susanne",
""
]
] | Generic simulation code for spiking neuronal networks spends the major part of time in the phase where spikes have arrived at a compute node and need to be delivered to their target neurons. These spikes were emitted over the last interval between communication steps by source neurons distributed across many compute nodes and are inherently irregular with respect to their targets. For finding the targets, the spikes need to be dispatched to a three-dimensional data structure with decisions on target thread and synapse type to be made on the way. With growing network size a compute node receives spikes from an increasing number of different source neurons until in the limit each synapse on the compute node has a unique source. Here we show analytically how this sparsity emerges over the practically relevant range of network sizes from a hundred thousand to a billion neurons. By profiling a production code we investigate opportunities for algorithmic changes to avoid indirections and branching. Every thread hosts an equal share of the neurons on a compute node. In the original algorithm all threads search through all spikes to pick out the relevant ones. With increasing network size the fraction of hits remains invariant but the absolute number of rejections grows. An alternative algorithm equally divides the spikes among the threads and sorts them in parallel according to target thread and synapse type. After this every thread completes delivery solely of the section of spikes for its own neurons. The new algorithm halves the number of instructions in spike delivery which leads to a reduction of simulation time of up to 40 %. Thus, spike delivery is a fully parallelizable process with a single synchronization point and thereby well suited for many-core systems. Our analysis indicates that further progress requires a reduction of the latency instructions experience in accessing memory. |
2401.00024 | Yixun Xing | Yixun Xing, Casey Moore, Debabrata Saha, Dan Nguyen, MaryLena Bleile,
Xun Jia, Robert Timmerman, Hao Peng, Steve Jiang | Mathematical Modeling of the Synergetic Effect between Radiotherapy and
Immunotherapy | null | null | null | null | q-bio.QM | http://creativecommons.org/licenses/by-sa/4.0/ | Achieving effective synergy between radiotherapy and immunotherapy is
critical for optimizing tumor control and treatment outcomes. To explore the
underlying mechanisms of this synergy, we have investigated a novel treatment
approach known as personalized ultra-fractionated stereotactic adaptive
radiation therapy (PULSAR), which emphasizes the impact of radiation timing on
treatment efficacy. However, the precise mechanism remains unclear. Building on
insights from small animal PULSAR studies, we developed a mathematical
framework consisting of multiple ordinary differential equations to elucidate
the temporal dynamics of tumor control resulting from radiation and the
adaptive immune response. The model accounts for the migration and infiltration
of T-cells within the tumor microenvironment. This proposed model establishes a
causal and quantitative link between radiation therapy and immunotherapy,
providing a valuable in-silico analysis tool for designing future PULSAR
trials.
| [
{
"created": "Thu, 28 Dec 2023 23:29:11 GMT",
"version": "v1"
}
] | 2024-01-02 | [
[
"Xing",
"Yixun",
""
],
[
"Moore",
"Casey",
""
],
[
"Saha",
"Debabrata",
""
],
[
"Nguyen",
"Dan",
""
],
[
"Bleile",
"MaryLena",
""
],
[
"Jia",
"Xun",
""
],
[
"Timmerman",
"Robert",
""
],
[
"Peng",
... | Achieving effective synergy between radiotherapy and immunotherapy is critical for optimizing tumor control and treatment outcomes. To explore the underlying mechanisms of this synergy, we have investigated a novel treatment approach known as personalized ultra-fractionated stereotactic adaptive radiation therapy (PULSAR), which emphasizes the impact of radiation timing on treatment efficacy. However, the precise mechanism remains unclear. Building on insights from small animal PULSAR studies, we developed a mathematical framework consisting of multiple ordinary differential equations to elucidate the temporal dynamics of tumor control resulting from radiation and the adaptive immune response. The model accounts for the migration and infiltration of T-cells within the tumor microenvironment. This proposed model establishes a causal and quantitative link between radiation therapy and immunotherapy, providing a valuable in-silico analysis tool for designing future PULSAR trials. |
2005.14700 | Omar El Housni | Omar El Housni, Mika Sumida, Paat Rusmevichientong, Huseyin Topaloglu,
Serhan Ziya | Can Testing Ease Social Distancing Measures? Future Evolution of
COVID-19 in NYC | null | null | null | null | q-bio.PE physics.soc-ph stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The "New York State on Pause" executive order came into effect on March 22
with the goal of ensuring adequate social distancing to alleviate the spread of
COVID-19. Pause will remain effective in New York City in some form until early
June. We use a compartmentalized model to study the effects of testing capacity
and social distancing measures on the evolution of the pandemic in the
"post-Pause" period in the City. We find that testing capacity must increase
dramatically if it is to counterbalance even relatively small relaxations in
social distancing measures in the immediate post-Pause period. In particular,
if the City performs 20,000 tests per day and relaxes the social distancing
measures to the pre-Pause norms, then the total number of deaths by the end of
September can reach 250,000. By keeping the social distancing measures to
somewhere halfway between the pre- and in-Pause norms and performing 100,000
tests per day, the total number of deaths by the end of September can be kept
at around 27,000. Going back to the pre-Pause social distancing norms quickly
must be accompanied by an exorbitant testing capacity, if one is to suppress
excessive deaths. If the City is to go back to the "pre-Pause" social
distancing norms in the immediate post-Pause period and keep the total number
of deaths by the end of September at around 35,000, then it should be
performing 500,000 tests per day. Our findings have important implications on
the magnitude of the testing capacity the City needs as it relaxes the social
distancing measures to reopen its economy.
| [
{
"created": "Wed, 27 May 2020 22:08:34 GMT",
"version": "v1"
}
] | 2020-06-01 | [
[
"Housni",
"Omar El",
""
],
[
"Sumida",
"Mika",
""
],
[
"Rusmevichientong",
"Paat",
""
],
[
"Topaloglu",
"Huseyin",
""
],
[
"Ziya",
"Serhan",
""
]
] | The "New York State on Pause" executive order came into effect on March 22 with the goal of ensuring adequate social distancing to alleviate the spread of COVID-19. Pause will remain effective in New York City in some form until early June. We use a compartmentalized model to study the effects of testing capacity and social distancing measures on the evolution of the pandemic in the "post-Pause" period in the City. We find that testing capacity must increase dramatically if it is to counterbalance even relatively small relaxations in social distancing measures in the immediate post-Pause period. In particular, if the City performs 20,000 tests per day and relaxes the social distancing measures to the pre-Pause norms, then the total number of deaths by the end of September can reach 250,000. By keeping the social distancing measures to somewhere halfway between the pre- and in-Pause norms and performing 100,000 tests per day, the total number of deaths by the end of September can be kept at around 27,000. Going back to the pre-Pause social distancing norms quickly must be accompanied by an exorbitant testing capacity, if one is to suppress excessive deaths. If the City is to go back to the "pre-Pause" social distancing norms in the immediate post-Pause period and keep the total number of deaths by the end of September at around 35,000, then it should be performing 500,000 tests per day. Our findings have important implications on the magnitude of the testing capacity the City needs as it relaxes the social distancing measures to reopen its economy. |
1309.1086 | David Hsu | David Hsu, Murielle Hsu, Heidi L. Grabenstatter, Gregory A. Worrell,
and Thomas P. Sutula | Characterization of high frequency oscillations and EEG frequency
spectra using the damped-oscillator oscillator detector (DOOD) | 25 pages, 10 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Objective: The surgical resection of brain areas with high rates of visually
identified high frequency oscillations (HFOs) on EEG has been correlated with
improved seizure control. However, it can be difficult to distinguish normal
from pathological HFOs, and the visual detection of HFOs is very
time-intensive. An automated algorithm for detecting HFOs and for wide-band
spectral analysis is desirable.
Methods: The damped-oscillator oscillator detector (DOOD) is adapted for HFO
detection, and tested on recordings from one rat and one human. The rat data
consist of recordings from the hippocampus just prior to induction of status
epilepticus, and again 6 weeks after induction, after the rat is epileptic. The
human data are temporal lobe depth electrode recordings from a patient who
underwent pre-surgical evaluation.
Results: Sensitivities and positive predictive values are presented which
depend on specifying a threshold value for HFO detection. Wide-band
time-frequency and HFO-associated frequency spectra are also presented. In the
rat data, four high frequency bands are identified at 80-250 Hz, 250-500 Hz,
600-900 Hz and 1000-3000 Hz. The human data was low-passed filtered at 1000 Hz
and showed HFO-associated bands at 15 Hz, 85 Hz, 400 Hz and 700 Hz.
Conclusion: The DOOD algorithm is capable of high resolution time-frequency
spectra, and it can be adapted to detect HFOs with high positive predictive
value. HFO-associated wide-band data show intricate low-frequency structure.
Significance: DOOD may ease the labor intensity of HFO detection. DOOD
wide-band analysis may in future help distinguish normal from pathological
HFOs.
| [
{
"created": "Wed, 4 Sep 2013 16:10:02 GMT",
"version": "v1"
}
] | 2013-09-05 | [
[
"Hsu",
"David",
""
],
[
"Hsu",
"Murielle",
""
],
[
"Grabenstatter",
"Heidi L.",
""
],
[
"Worrell",
"Gregory A.",
""
],
[
"Sutula",
"Thomas P.",
""
]
] | Objective: The surgical resection of brain areas with high rates of visually identified high frequency oscillations (HFOs) on EEG has been correlated with improved seizure control. However, it can be difficult to distinguish normal from pathological HFOs, and the visual detection of HFOs is very time-intensive. An automated algorithm for detecting HFOs and for wide-band spectral analysis is desirable. Methods: The damped-oscillator oscillator detector (DOOD) is adapted for HFO detection, and tested on recordings from one rat and one human. The rat data consist of recordings from the hippocampus just prior to induction of status epilepticus, and again 6 weeks after induction, after the rat is epileptic. The human data are temporal lobe depth electrode recordings from a patient who underwent pre-surgical evaluation. Results: Sensitivities and positive predictive values are presented which depend on specifying a threshold value for HFO detection. Wide-band time-frequency and HFO-associated frequency spectra are also presented. In the rat data, four high frequency bands are identified at 80-250 Hz, 250-500 Hz, 600-900 Hz and 1000-3000 Hz. The human data were low-pass filtered at 1000 Hz and showed HFO-associated bands at 15 Hz, 85 Hz, 400 Hz and 700 Hz. Conclusion: The DOOD algorithm is capable of high resolution time-frequency spectra, and it can be adapted to detect HFOs with high positive predictive value. HFO-associated wide-band data show intricate low-frequency structure. Significance: DOOD may ease the labor intensity of HFO detection. DOOD wide-band analysis may in future help distinguish normal from pathological HFOs.
2007.05800 | Dimitrios Adamos Dr | Nikolaos Laskaris, Dimitrios A. Adamos, Anastasios Bezerianos | A Tutorial on Graph Theory for Brain Signal Analysis | To appear in Springer Handbook of Neuroengineering | null | null | null | q-bio.NC cs.LG eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This tutorial paper refers to the use of graph-theoretic concepts for
analyzing brain signals. For didactic purposes it splits into two parts: theory
and application. In the first part, we commence by introducing some basic
elements from graph theory and stemming algorithmic tools, which can be
employed for data-analytic purposes. Next, we describe how these concepts are
adapted for handling evolving connectivity and gaining insights into network
reorganization. Finally, the notion of signals residing on a given graph is
introduced and elements from the emerging field of graph signal processing
(GSP) are provided. The second part serves as a pragmatic demonstration of the
tools and techniques described earlier. It is based on analyzing a multi-trial
dataset containing single-trial responses from a visual ERP paradigm. The paper
ends with a brief outline of the most recent trends in graph theory that are
about to shape brain signal processing in the near future and a more general
discussion on the relevance of graph-theoretic methodologies for analyzing
continuous-mode neural recordings.
| [
{
"created": "Sat, 11 Jul 2020 15:36:52 GMT",
"version": "v1"
}
] | 2020-07-14 | [
[
"Laskaris",
"Nikolaos",
""
],
[
"Adamos",
"Dimitrios A.",
""
],
[
"Bezerianos",
"Anastasios",
""
]
] | This tutorial paper refers to the use of graph-theoretic concepts for analyzing brain signals. For didactic purposes it splits into two parts: theory and application. In the first part, we commence by introducing some basic elements from graph theory and stemming algorithmic tools, which can be employed for data-analytic purposes. Next, we describe how these concepts are adapted for handling evolving connectivity and gaining insights into network reorganization. Finally, the notion of signals residing on a given graph is introduced and elements from the emerging field of graph signal processing (GSP) are provided. The second part serves as a pragmatic demonstration of the tools and techniques described earlier. It is based on analyzing a multi-trial dataset containing single-trial responses from a visual ERP paradigm. The paper ends with a brief outline of the most recent trends in graph theory that are about to shape brain signal processing in the near future and a more general discussion on the relevance of graph-theoretic methodologies for analyzing continuous-mode neural recordings. |
1906.10729 | Farzad Khalvati | Yucheng Zhang, Edrise M. Lobo-Mueller, Paul Karanicolas, Steven
Gallinger, Masoom A. Haider, Farzad Khalvati | CNN-based Survival Model for Pancreatic Ductal Adenocarcinoma in Medical
Imaging | null | null | null | null | q-bio.QM cs.CV cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cox proportional hazard model (CPH) is commonly used in clinical research for
survival analysis. In quantitative medical imaging (radiomics) studies, CPH
plays an important role in feature reduction and modeling. However, the
underlying linear assumption of CPH model limits the prognostic performance. In
addition, the multicollinearity of radiomic features and multiple testing
problem further impedes the CPH model's performance. In this work, using
transfer learning, a convolutional neural network (CNN) based survival model
was built and tested on preoperative CT images of resectable Pancreatic Ductal
Adenocarcinoma (PDAC) patients. The proposed CNN-based survival model
outperformed the traditional CPH-based radiomics approach in terms of
concordance index by 22%, providing a better fit for patients' survival
patterns. The proposed CNN-based survival model outperforms CPH-based radiomics
pipeline in PDAC prognosis. This approach offers a better fit for survival
patterns based on CT images and overcomes the limitations of conventional
survival models.
| [
{
"created": "Tue, 25 Jun 2019 19:12:39 GMT",
"version": "v1"
}
] | 2019-06-27 | [
[
"Zhang",
"Yucheng",
""
],
[
"Lobo-Mueller",
"Edrise M.",
""
],
[
"Karanicolas",
"Paul",
""
],
[
"Gallinger",
"Steven",
""
],
[
"Haider",
"Masoom A.",
""
],
[
"Khalvati",
"Farzad",
""
]
] | Cox proportional hazard model (CPH) is commonly used in clinical research for survival analysis. In quantitative medical imaging (radiomics) studies, CPH plays an important role in feature reduction and modeling. However, the underlying linear assumption of CPH model limits the prognostic performance. In addition, the multicollinearity of radiomic features and multiple testing problem further impedes the CPH model's performance. In this work, using transfer learning, a convolutional neural network (CNN) based survival model was built and tested on preoperative CT images of resectable Pancreatic Ductal Adenocarcinoma (PDAC) patients. The proposed CNN-based survival model outperformed the traditional CPH-based radiomics approach in terms of concordance index by 22%, providing a better fit for patients' survival patterns. The proposed CNN-based survival model outperforms CPH-based radiomics pipeline in PDAC prognosis. This approach offers a better fit for survival patterns based on CT images and overcomes the limitations of conventional survival models.
2003.11864 | Meltem Civas | Ozgur B. Akan, Hamideh Ramezani, Meltem Civas, Oktay Cetinkaya,
Bilgesu A. Bilgin, Naveed A. Abbasi | Information and Communication Theoretical Understanding and Treatment of
Spinal Cord Injuries: State-of-the-art and Research Challenges | IEEE Reviews in Biomedical Engineering | null | 10.1109/RBME.2021.3056455 | null | q-bio.NC cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Among the various key networks in the human body, the nervous system occupies
central importance. The debilitating effects of spinal cord injuries (SCI)
impact a significant number of people throughout the world, and to date, there
is no satisfactory method to treat them. In this paper, we review the major
treatment techniques for SCI that include promising solutions based on
information and communication technology (ICT) and identify the key
characteristics of such systems. We then introduce two novel ICT-based
treatment approaches for SCI. The first proposal is based on neural interface
systems (NIS) with enhanced feedback, where the external machines are
interfaced with the brain and the spinal cord such that the brain signals are
directly routed to the limbs for movement. The second proposal relates to the
design of self-organizing artificial neurons (ANs) that can be used to replace
the injured or dead biological neurons. Apart from SCI treatment, the proposed
methods may also be utilized as enabling technologies for neural interface
applications by acting as bio-cyber interfaces between the nervous system and
machines. Furthermore, under the framework of Internet of Bio- Nano Things
(IoBNT), experience gained from SCI treatment techniques can be transferred to
nano communication research.
| [
{
"created": "Thu, 26 Mar 2020 12:32:46 GMT",
"version": "v1"
},
{
"created": "Thu, 11 Mar 2021 16:50:07 GMT",
"version": "v2"
}
] | 2021-03-12 | [
[
"Akan",
"Ozgur B.",
""
],
[
"Ramezani",
"Hamideh",
""
],
[
"Civas",
"Meltem",
""
],
[
"Cetinkaya",
"Oktay",
""
],
[
"Bilgin",
"Bilgesu A.",
""
],
[
"Abbasi",
"Naveed A.",
""
]
] | Among the various key networks in the human body, the nervous system occupies central importance. The debilitating effects of spinal cord injuries (SCI) impact a significant number of people throughout the world, and to date, there is no satisfactory method to treat them. In this paper, we review the major treatment techniques for SCI that include promising solutions based on information and communication technology (ICT) and identify the key characteristics of such systems. We then introduce two novel ICT-based treatment approaches for SCI. The first proposal is based on neural interface systems (NIS) with enhanced feedback, where the external machines are interfaced with the brain and the spinal cord such that the brain signals are directly routed to the limbs for movement. The second proposal relates to the design of self-organizing artificial neurons (ANs) that can be used to replace the injured or dead biological neurons. Apart from SCI treatment, the proposed methods may also be utilized as enabling technologies for neural interface applications by acting as bio-cyber interfaces between the nervous system and machines. Furthermore, under the framework of Internet of Bio- Nano Things (IoBNT), experience gained from SCI treatment techniques can be transferred to nano communication research. |
1807.00509 | David Hofmann | Chenfei Zhang, David Hofmann, Andreas Neef and Fred Wolf | Ultrafast population coding and axo-somatic compartmentalization | 15 pages, 6 figures | null | null | null | q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Cortical neurons in the fluctuation driven regime can realize ultrafast
population encoding. The underlying biophysical mechanisms, however, are not
well understood. Reducing the sharpness of the action potential onset can
impair ultrafast population encoding, but it is not clear whether a sharp
action potential onset is sufficient for ultrafast population encoding. One
hypothesis proposes that the sharp action potential onset is caused by the
electrotonic separation of the site of action potential initiation from the
soma, and that this spatial separation also results in ultrafast population
encoding. Here we examined this hypothesis by studying the linear response
properties of model neurons with a defined initiation site. We find that
placing the initiation site at different axonal positions has only a weak
impact on the linear response function of the model. It fails to generate the
ultrafast response and high bandwidth that is observed in cortical neurons.
Furthermore, the high frequency regime of the linear response function of this
model is insensitive to correlation times of the input current contradicting
empirical evidence. When we increase the voltage sensitivity of sodium channels
at the initiation site, the two empirically observed phenomena can be
recovered. We provide an explanation for the dissociation of sharp action
potential onset and ultrafast response. By investigating varying soma sizes, we
furthermore highlight the effect of neuron morphology on the linear response.
Our results show that a sharp onset of action potentials is not sufficient for
the ultrafast response. In the light of recent reports of activity-dependent
repositioning of the axon initial segment, our study predicts that a more
distal initiation site can lead to an increased sharpness of the somatic
waveform but it does not affect the linear response of a population of neurons.
| [
{
"created": "Mon, 2 Jul 2018 08:04:57 GMT",
"version": "v1"
},
{
"created": "Tue, 3 Jul 2018 09:24:14 GMT",
"version": "v2"
}
] | 2018-07-04 | [
[
"Zhang",
"Chenfei",
""
],
[
"Hofmann",
"David",
""
],
[
"Neef",
"Andreas",
""
],
[
"Wolf",
"Fred",
""
]
] | Cortical neurons in the fluctuation driven regime can realize ultrafast population encoding. The underlying biophysical mechanisms, however, are not well understood. Reducing the sharpness of the action potential onset can impair ultrafast population encoding, but it is not clear whether a sharp action potential onset is sufficient for ultrafast population encoding. One hypothesis proposes that the sharp action potential onset is caused by the electrotonic separation of the site of action potential initiation from the soma, and that this spatial separation also results in ultrafast population encoding. Here we examined this hypothesis by studying the linear response properties of model neurons with a defined initiation site. We find that placing the initiation site at different axonal positions has only a weak impact on the linear response function of the model. It fails to generate the ultrafast response and high bandwidth that is observed in cortical neurons. Furthermore, the high frequency regime of the linear response function of this model is insensitive to correlation times of the input current contradicting empirical evidence. When we increase the voltage sensitivity of sodium channels at the initiation site, the two empirically observed phenomena can be recovered. We provide an explanation for the dissociation of sharp action potential onset and ultrafast response. By investigating varying soma sizes, we furthermore highlight the effect of neuron morphology on the linear response. Our results show that a sharp onset of action potentials is not sufficient for the ultrafast response. In the light of recent reports of activity-dependent repositioning of the axon initial segment, our study predicts that a more distal initiation site can lead to an increased sharpness of the somatic waveform but it does not affect the linear response of a population of neurons. |
1411.1176 | Masaki Watabe | Masaki Watabe, Satya N. V. Arjunan, Seiya Fukushima, Kazunari Iwamoto,
Jun Kozuka, Satomi Matsuoka, Yuki Shindo, Masahiro Ueda and Koichi Takahashi | A computational framework for bioimaging simulation | 57 pages | null | 10.1371/journal.pone.0130089 | null | q-bio.QM physics.bio-ph physics.optics | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Using bioimaging technology, biologists have attempted to identify and
document analytical interpretations that underlie biological phenomena in
biological cells. Theoretical biology aims at distilling those interpretations
into knowledge in the mathematical form of biochemical reaction networks and
understanding how higher level functions emerge from the combined action of
biomolecules. However, there still remain formidable challenges in bridging the
gap between bioimaging and mathematical modeling. Generally, measurements using
fluorescence microscopy systems are influenced by systematic effects that arise
from the stochastic nature of biological cells, the imaging apparatus, and optical
physics. Such systematic effects are always present in all bioimaging systems
and hinder quantitative comparison between the cell model and bioimages.
Computational tools for such a comparison are still unavailable. Thus, in this
work, we present a computational framework for handling the parameters of the
cell models and the optical physics governing bioimaging systems. Simulation
using this framework can generate digital images of cell simulation results
after accounting for the systematic effects. We then demonstrate that such a
framework enables comparison at the level of photon-counting units.
| [
{
"created": "Wed, 5 Nov 2014 07:54:02 GMT",
"version": "v1"
},
{
"created": "Thu, 13 Nov 2014 10:01:16 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Dec 2014 06:09:01 GMT",
"version": "v3"
},
{
"created": "Thu, 19 Mar 2015 09:57:18 GMT",
"version": "v4"
},
{
"cre... | 2015-07-08 | [
[
"Watabe",
"Masaki",
""
],
[
"Arjunan",
"Satya N. V.",
""
],
[
"Fukushima",
"Seiya",
""
],
[
"Iwamoto",
"Kazunari",
""
],
[
"Kozuka",
"Jun",
""
],
[
"Matsuoka",
"Satomi",
""
],
[
"Shindo",
"Yuki",
""
],
... | Using bioimaging technology, biologists have attempted to identify and document analytical interpretations that underlie biological phenomena in biological cells. Theoretical biology aims at distilling those interpretations into knowledge in the mathematical form of biochemical reaction networks and understanding how higher level functions emerge from the combined action of biomolecules. However, there still remain formidable challenges in bridging the gap between bioimaging and mathematical modeling. Generally, measurements using fluorescence microscopy systems are influenced by systematic effects that arise from the stochastic nature of biological cells, the imaging apparatus, and optical physics. Such systematic effects are always present in all bioimaging systems and hinder quantitative comparison between the cell model and bioimages. Computational tools for such a comparison are still unavailable. Thus, in this work, we present a computational framework for handling the parameters of the cell models and the optical physics governing bioimaging systems. Simulation using this framework can generate digital images of cell simulation results after accounting for the systematic effects. We then demonstrate that such a framework enables comparison at the level of photon-counting units.
1304.4216 | Natalia Denesyuk | Natalia A. Denesyuk and D. Thirumalai | A Coarse-Grained Model for Predicting RNA Folding Thermodynamics | null | null | 10.1021/jp401087x | null | q-bio.BM cond-mat.soft physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a thermodynamically robust coarse-grained model to simulate
folding of RNA in monovalent salt solutions. The model includes stacking,
hydrogen bond and electrostatic interactions as fundamental components in
describing the stability of RNA structures. The stacking interactions are
parametrized using a set of nucleotide-specific parameters, which were
calibrated against the thermodynamic measurements for single-base stacks and
base-pair stacks. All hydrogen bonds are assumed to have the same strength,
regardless of their context in the RNA structure. The ionic buffer is modeled
implicitly, using the concept of counterion condensation and the Debye-H\"uckel
theory. The three adjustable parameters in the model were determined by fitting
the experimental data for two RNA hairpins and a pseudoknot. A single set of
parameters provides good agreement with thermodynamic data for the three RNA
molecules over a wide range of temperatures and salt concentrations. In the
process of calibrating the model, we establish the extent of counterion
condensation onto the single-stranded RNA backbone. The reduced backbone charge
is independent of the ionic strength and is 60% of the RNA bare charge at 37
degrees Celsius. Our model can be used to predict the folding thermodynamics
for any RNA molecule in the presence of monovalent ions.
| [
{
"created": "Mon, 15 Apr 2013 19:44:13 GMT",
"version": "v1"
}
] | 2013-04-16 | [
[
"Denesyuk",
"Natalia A.",
""
],
[
"Thirumalai",
"D.",
""
]
] | We present a thermodynamically robust coarse-grained model to simulate folding of RNA in monovalent salt solutions. The model includes stacking, hydrogen bond and electrostatic interactions as fundamental components in describing the stability of RNA structures. The stacking interactions are parametrized using a set of nucleotide-specific parameters, which were calibrated against the thermodynamic measurements for single-base stacks and base-pair stacks. All hydrogen bonds are assumed to have the same strength, regardless of their context in the RNA structure. The ionic buffer is modeled implicitly, using the concept of counterion condensation and the Debye-H\"uckel theory. The three adjustable parameters in the model were determined by fitting the experimental data for two RNA hairpins and a pseudoknot. A single set of parameters provides good agreement with thermodynamic data for the three RNA molecules over a wide range of temperatures and salt concentrations. In the process of calibrating the model, we establish the extent of counterion condensation onto the single-stranded RNA backbone. The reduced backbone charge is independent of the ionic strength and is 60% of the RNA bare charge at 37 degrees Celsius. Our model can be used to predict the folding thermodynamics for any RNA molecule in the presence of monovalent ions. |
1609.00658 | Alexandre Castro | Alexandre de Castro | On the quantum principles of cognitive learning | null | Behav Brain Sci. 2013 Jun;36(3):281-2 | 10.1017/S0140525X12002919 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pothos & Busemeyer's (P&B's) query about whether quantum probability can
provide a foundation for the cognitive modeling embodies so many underlying
implications that the subject is far from exhausted. In this brief commentary,
however, I suggest that the conceptual thresholds of the meaningful learning
give rise to a typical Boltzmann's weighting measure, which indicates a
statistical verisimilitude of quantum behavior in the human cognitive ensemble.
| [
{
"created": "Sun, 14 Aug 2016 06:03:55 GMT",
"version": "v1"
}
] | 2016-09-05 | [
[
"de Castro",
"Alexandre",
""
]
] | Pothos & Busemeyer's (P&B's) query about whether quantum probability can provide a foundation for the cognitive modeling embodies so many underlying implications that the subject is far from exhausted. In this brief commentary, however, I suggest that the conceptual thresholds of the meaningful learning give rise to a typical Boltzmann's weighting measure, which indicates a statistical verisimilitude of quantum behavior in the human cognitive ensemble. |
1911.03839 | Dongrui Wu | Bo Zhang and Yuqi Cui and Meng Wang and Jingjing Li and Lei Jin and
Dongrui Wu | In Vitro Fertilization (IVF) Cumulative Pregnancy Rate Prediction from
Basic Patient Characteristics | null | null | null | null | q-bio.QM cs.CY cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tens of millions of women suffer from infertility worldwide each year. In
vitro fertilization (IVF) is the best choice for many such patients. However,
IVF is expensive, time-consuming, and both physically and emotionally
demanding. The first question that a patient usually asks before the IVF is how
likely she will conceive, given her basic medical examination information. This
paper proposes three approaches to predict the cumulative pregnancy rate after
multiple oocyte pickup cycles. Experiments on 11,190 patients showed that first
clustering the patients into different groups and then building a support
vector machine model for each group can achieve the best overall performance.
Our model could be a quick and economic approach for reliably estimating the
cumulative pregnancy rate for a patient, given only her basic medical
examination information, well before starting the actual IVF procedure. The
predictions can help the patient make optimal decisions on whether to use her
own oocyte or donor oocyte, how many oocyte pickup cycles she may need, whether
to use embryo frozen, etc. They will also reduce the patient's cost and time to
pregnancy, and improve her quality of life.
| [
{
"created": "Sun, 10 Nov 2019 03:00:07 GMT",
"version": "v1"
}
] | 2019-11-12 | [
[
"Zhang",
"Bo",
""
],
[
"Cui",
"Yuqi",
""
],
[
"Wang",
"Meng",
""
],
[
"Li",
"Jingjing",
""
],
[
"Jin",
"Lei",
""
],
[
"Wu",
"Dongrui",
""
]
] | Tens of millions of women suffer from infertility worldwide each year. In vitro fertilization (IVF) is the best choice for many such patients. However, IVF is expensive, time-consuming, and both physically and emotionally demanding. The first question that a patient usually asks before the IVF is how likely she will conceive, given her basic medical examination information. This paper proposes three approaches to predict the cumulative pregnancy rate after multiple oocyte pickup cycles. Experiments on 11,190 patients showed that first clustering the patients into different groups and then building a support vector machine model for each group can achieve the best overall performance. Our model could be a quick and economic approach for reliably estimating the cumulative pregnancy rate for a patient, given only her basic medical examination information, well before starting the actual IVF procedure. The predictions can help the patient make optimal decisions on whether to use her own oocyte or donor oocyte, how many oocyte pickup cycles she may need, whether to use embryo frozen, etc. They will also reduce the patient's cost and time to pregnancy, and improve her quality of life. |
1407.5586 | Conrad Cabral | Conrad Cabral, Chintamani Pai, Kashmira Prasade, Smruti Deoghare,
Urooz Kazi, Sonalia Fernandes | Variation in Microbial Growth under Hypergravity | 6 pages, 4 figures | null | null | null | q-bio.CB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We report bacterial growth under hypergravitational stress. Cultures of E.
coli and B. subtilis were subjected to the gravitational stress (38g) and their
growth curves were measured using UV-VIS spectrophotometer. Experiments were
also carried out to investigate nutrient consumption under hypergravitational
conditions. Our results show considerable difference between samples subjected
to hypergravity and normal conditions. This study has importance to understand
bacterial response to external stress factors like gravity and changes in
bacterial system in order to adapt with stress conditions for its survival.
| [
{
"created": "Mon, 21 Jul 2014 18:16:55 GMT",
"version": "v1"
}
] | 2014-07-22 | [
[
"Cabral",
"Conrad",
""
],
[
"Pai",
"Chintamani",
""
],
[
"Prasade",
"Kashmira",
""
],
[
"Deoghare",
"Smruti",
""
],
[
"Kazi",
"Urooz",
""
],
[
"Fernandes",
"Sonalia",
""
]
] | We report bacterial growth under hypergravitational stress. Cultures of E. coli and B. subtilis were subjected to the gravitational stress (38g) and their growth curves were measured using UV-VIS spectrophotometer. Experiments were also carried out to investigate nutrient consumption under hypergravitational conditions. Our results show considerable difference between samples subjected to hypergravity and normal conditions. This study has importance to understand bacterial response to external stress factors like gravity and changes in bacterial system in order to adapt with stress conditions for its survival. |
2208.07369 | Norichika Ogata | Norichika Ogata and Aoi Hosaka | Cellular liberality is measurable as Lempel-Ziv complexity of fastq
files | 6 pages, single table, 4 figures | null | null | null | q-bio.QM cs.IT math.IT | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Many studies used the Shannon entropy of transcriptome data to determine cell
dedifferentiation and differentiation. The collection of evidence has
strengthened the certainty that the transcriptome's Shannon entropy may be used
to quantify cellular dedifferentiation and differentiation. Quantifying this
cellular status is being justified, we propose the term liberality for the
quantitative value of cellular dedifferentiation and differentiation. In
previous studies, we must convert the raw transcriptome data into quantitative
transcriptome data through mapping, tag counting, assembling, and more
bioinformatic processing to calculate the liberality. If we could remove this
conversion step from estimating liberality, we could save computing resources
and time and remove technical difficulties in using the computer. In this
study, we propose a method of calculating cellular liberality without those
transcriptome data conversion processes. We could calculate liberality by
measuring the compression rate of raw transcriptome data. This technique,
independent of reference genome data, increased the generality of cellular
liberality.
| [
{
"created": "Sun, 14 Aug 2022 09:10:16 GMT",
"version": "v1"
},
{
"created": "Fri, 2 Sep 2022 13:11:50 GMT",
"version": "v2"
},
{
"created": "Mon, 3 Oct 2022 08:17:28 GMT",
"version": "v3"
},
{
"created": "Wed, 19 Oct 2022 15:51:49 GMT",
"version": "v4"
}
] | 2022-10-20 | [
[
"Ogata",
"Norichika",
""
],
[
"Hosaka",
"Aoi",
""
]
] | Many studies used the Shannon entropy of transcriptome data to determine cell dedifferentiation and differentiation. The collection of evidence has strengthened the certainty that the transcriptome's Shannon entropy may be used to quantify cellular dedifferentiation and differentiation. Quantifying this cellular status is being justified, we propose the term liberality for the quantitative value of cellular dedifferentiation and differentiation. In previous studies, we must convert the raw transcriptome data into quantitative transcriptome data through mapping, tag counting, assembling, and more bioinformatic processing to calculate the liberality. If we could remove this conversion step from estimating liberality, we could save computing resources and time and remove technical difficulties in using the computer. In this study, we propose a method of calculating cellular liberality without those transcriptome data conversion processes. We could calculate liberality by measuring the compression rate of raw transcriptome data. This technique, independent of reference genome data, increased the generality of cellular liberality. |
1511.04345 | Lilianne Mujica-Parodi | D.J. DeDora, S. Nedic, P. Katti, S. Arnab, L.L. Wald, A. Takahashi,
K.R.A. Van Dijk, H.H. Strey, L.R. Mujica-Parodi | Signal Fluctuation Sensitivity: an improved metric for optimizing
detection of resting-state fMRI networks | 27 pages, 4 figures, 2 tables. Contact Information: Lilianne R.
Mujica-Parodi, Laboratory for Computational Neurodiagnostics, Department of
Biomedical Engineering, Stony Brook University, Stony Brook, NY,
Lilianne.Strey@stonybrook.edu (www.lcneuro.org) | https://www.frontiersin.org/article/10.3389/fnins.2016.00180 | 10.3389/fnins.2016.00180 | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Task-free connectivity analyses have emerged as a powerful tool in functional
neuroimaging. Because the cross-correlations that underlie connectivity
measures are sensitive to distortion of time-series, here we used a novel
dynamic phantom to provide a ground truth for dynamic fidelity between blood
oxygen level dependent (BOLD)-like inputs and fMRI outputs. We found that the
de facto quality-metric for task-free fMRI, temporal signal to noise ratio
(tSNR), correlated inversely with dynamic fidelity; thus, studies optimized for
tSNR actually produced time-series that showed the greatest distortion of
signal dynamics. Instead, the phantom showed that dynamic fidelity is
reasonably approximated by a measure that, unlike tSNR, dissociates signal
dynamics from scanner artifact. We then tested this measure, signal fluctuation
sensitivity (SFS), against human resting-state data. As predicted by the
phantom, SFS--and not tSNR--is associated with enhanced sensitivity to both
local and long-range connectivity within the brain's default mode network.
| [
{
"created": "Fri, 13 Nov 2015 16:34:14 GMT",
"version": "v1"
}
] | 2020-02-05 | [
[
"DeDora",
"D. J.",
""
],
[
"Nedic",
"S.",
""
],
[
"Katti",
"P.",
""
],
[
"Arnab",
"S.",
""
],
[
"Wald",
"L. L.",
""
],
[
"Takahashi",
"A.",
""
],
[
"Van Dijk",
"K. R. A.",
""
],
[
"Strey",
"H. H... | Task-free connectivity analyses have emerged as a powerful tool in functional neuroimaging. Because the cross-correlations that underlie connectivity measures are sensitive to distortion of time-series, here we used a novel dynamic phantom to provide a ground truth for dynamic fidelity between blood oxygen level dependent (BOLD)-like inputs and fMRI outputs. We found that the de facto quality-metric for task-free fMRI, temporal signal to noise ratio (tSNR), correlated inversely with dynamic fidelity; thus, studies optimized for tSNR actually produced time-series that showed the greatest distortion of signal dynamics. Instead, the phantom showed that dynamic fidelity is reasonably approximated by a measure that, unlike tSNR, dissociates signal dynamics from scanner artifact. We then tested this measure, signal fluctuation sensitivity (SFS), against human resting-state data. As predicted by the phantom, SFS--and not tSNR--is associated with enhanced sensitivity to both local and long-range connectivity within the brain's default mode network. |
1001.4584 | Yohsuke Murase | Yohsuke Murase, Takashi Shimada, Nobuyasu Ito, Per Arne Rikvold | Effects of demographic stochasticity on biological community assembly on
evolutionary time scales | null | Phys. Rev. E 81, 041908 (2010) | 10.1103/PhysRevE.81.041908 | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study the effects of demographic stochasticity on the long-term dynamics
of biological coevolution models of community assembly. The noise is induced in
order to check the validity of deterministic population dynamics. While
mutualistic communities show little dependence on the stochastic population
fluctuations, predator-prey models show strong dependence on the stochasticity,
indicating the relevance of the finiteness of the populations. For a
predator-prey model, the noise causes drastic decreases in diversity and total
population size. The communities that emerge under influence of the noise
consist of species strongly coupled with each other and have stronger linear
stability around the fixed-point populations than the corresponding noiseless
model. The dynamics on evolutionary time scales for the predator-prey model are
also altered by the noise. Approximate $1/f$ fluctuations are observed with
noise, while $1/f^{2}$ fluctuations are found for the model without demographic
noise.
| [
{
"created": "Tue, 26 Jan 2010 02:42:18 GMT",
"version": "v1"
},
{
"created": "Sun, 4 Apr 2010 02:59:09 GMT",
"version": "v2"
}
] | 2010-04-16 | [
[
"Murase",
"Yohsuke",
""
],
[
"Shimada",
"Takashi",
""
],
[
"Ito",
"Nobuyasu",
""
],
[
"Rikvold",
"Per Arne",
""
]
] | We study the effects of demographic stochasticity on the long-term dynamics of biological coevolution models of community assembly. The noise is induced in order to check the validity of deterministic population dynamics. While mutualistic communities show little dependence on the stochastic population fluctuations, predator-prey models show strong dependence on the stochasticity, indicating the relevance of the finiteness of the populations. For a predator-prey model, the noise causes drastic decreases in diversity and total population size. The communities that emerge under influence of the noise consist of species strongly coupled with each other and have stronger linear stability around the fixed-point populations than the corresponding noiseless model. The dynamics on evolutionary time scales for the predator-prey model are also altered by the noise. Approximate $1/f$ fluctuations are observed with noise, while $1/f^{2}$ fluctuations are found for the model without demographic noise. |
1210.6089 | Gabriele Scheler | Gabriele Scheler | Self-organization of signal transduction | updated version, 13 pages, 4 figures, 3 Tables, supplemental table | F1000Research 2013, 2:116 | 10.12688/f1000research.2-116.v1 | null | q-bio.MN q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a model of parameter learning for signal transduction, where the
objective function is defined by signal transmission efficiency. We apply this
to learn kinetic rates as a form of evolutionary learning, and look for
parameters which satisfy the objective. This is a novel approach compared to
the usual technique of adjusting parameters only on the basis of experimental
data. The resulting model is self-organizing, i.e. perturbations in protein
concentrations or changes in extracellular signaling will automatically lead to
adaptation. We systematically perturb protein concentrations and observe the
response of the system. We find compensatory or co-regulation of protein
expression levels. In a novel experiment, we alter the distribution of
extracellular signaling, and observe adaptation based on optimizing signal
transmission. We also discuss the relationship between signaling with and
without transients. Signaling by transients may involve maximization of signal
transmission efficiency for the peak response, but a minimization in
steady-state responses. With an appropriate objective function, this can also
be achieved by concentration adjustment. Self-organizing systems may be
predictive of unwanted drug interference effects, since they aim to mimic
complex cellular adaptation in a unified way.
| [
{
"created": "Tue, 23 Oct 2012 00:07:56 GMT",
"version": "v1"
},
{
"created": "Tue, 8 Jan 2013 06:07:29 GMT",
"version": "v2"
}
] | 2014-08-12 | [
[
"Scheler",
"Gabriele",
""
]
] | We propose a model of parameter learning for signal transduction, where the objective function is defined by signal transmission efficiency. We apply this to learn kinetic rates as a form of evolutionary learning, and look for parameters which satisfy the objective. This is a novel approach compared to the usual technique of adjusting parameters only on the basis of experimental data. The resulting model is self-organizing, i.e. perturbations in protein concentrations or changes in extracellular signaling will automatically lead to adaptation. We systematically perturb protein concentrations and observe the response of the system. We find compensatory or co-regulation of protein expression levels. In a novel experiment, we alter the distribution of extracellular signaling, and observe adaptation based on optimizing signal transmission. We also discuss the relationship between signaling with and without transients. Signaling by transients may involve maximization of signal transmission efficiency for the peak response, but a minimization in steady-state responses. With an appropriate objective function, this can also be achieved by concentration adjustment. Self-organizing systems may be predictive of unwanted drug interference effects, since they aim to mimic complex cellular adaptation in a unified way. |
1906.03958 | Collins Assisi | Rishika Mohanta and Collins Assisi | Parallel scalable simulations of biological neural networks using
TensorFlow: A beginner's guide | Download the associated tutorials from
https://github.com/neurorishika/PSST or http://doi.org/10.17605/OSF.IO/YBZKQ.
You can also find them online as a JupyterBook here:
https://neurorishika.github.io/PSST. Revision Notes: The manuscript and the
online tutorials have been edited for clarity and enhanced readability based
on NBDT reviewers' recommendations | null | null | null | q-bio.NC q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | Biological neural networks are often modeled as systems of coupled,
nonlinear, ordinary or partial differential equations. The number of
differential equations used to model a network increases with the size of the
network and the level of detail used to model individual neurons and synapses.
As one scales up the size of the simulation, it becomes essential to utilize
powerful computing platforms. While many tools exist that solve these equations
numerically, they are often platform-specific. Further, there is a high barrier
of entry to developing flexible platform-independent general-purpose code that
supports hardware acceleration on modern computing architectures such as
GPUs/TPUs and Distributed Platforms. TensorFlow is a Python-based open-source
package designed for machine learning algorithms. However, it is also a
scalable environment for a variety of computations, including solving
differential equations using iterative algorithms such as Runge-Kutta methods.
In this article and the accompanying tutorials, we present a simple exposition
of numerical methods to solve ordinary differential equations using Python and
TensorFlow. The tutorials consist of a series of Python notebooks that, over
the course of five sessions, will lead novice programmers from writing programs
to integrate simple one-dimensional ordinary differential equations using
Python to solving a large system (1000's of differential equations) of coupled
conductance-based neurons using a highly parallelized and scalable framework.
Embedded with the tutorial is a physiologically realistic implementation of a
network in the insect olfactory system. This system, consisting of multiple
neuron and synapse types, can serve as a template to simulate other networks.
| [
{
"created": "Mon, 10 Jun 2019 13:01:57 GMT",
"version": "v1"
},
{
"created": "Fri, 14 Jan 2022 06:54:26 GMT",
"version": "v2"
},
{
"created": "Mon, 8 Aug 2022 15:21:13 GMT",
"version": "v3"
}
] | 2022-08-09 | [
[
"Mohanta",
"Rishika",
""
],
[
"Assisi",
"Collins",
""
]
] | Biological neural networks are often modeled as systems of coupled, nonlinear, ordinary or partial differential equations. The number of differential equations used to model a network increases with the size of the network and the level of detail used to model individual neurons and synapses. As one scales up the size of the simulation, it becomes essential to utilize powerful computing platforms. While many tools exist that solve these equations numerically, they are often platform-specific. Further, there is a high barrier of entry to developing flexible platform-independent general-purpose code that supports hardware acceleration on modern computing architectures such as GPUs/TPUs and Distributed Platforms. TensorFlow is a Python-based open-source package designed for machine learning algorithms. However, it is also a scalable environment for a variety of computations, including solving differential equations using iterative algorithms such as Runge-Kutta methods. In this article and the accompanying tutorials, we present a simple exposition of numerical methods to solve ordinary differential equations using Python and TensorFlow. The tutorials consist of a series of Python notebooks that, over the course of five sessions, will lead novice programmers from writing programs to integrate simple one-dimensional ordinary differential equations using Python to solving a large system (1000's of differential equations) of coupled conductance-based neurons using a highly parallelized and scalable framework. Embedded with the tutorial is a physiologically realistic implementation of a network in the insect olfactory system. This system, consisting of multiple neuron and synapse types, can serve as a template to simulate other networks. |
1707.08236 | Youfang Cao | Youfang Cao, Anna Terebus, Jie Liang | State space truncation with quantified errors for accurate solutions to
discrete Chemical Master Equation | 41 pages, 6 figures | Bulletin of Mathematical Biology. 78 (2016) 617-661 | 10.1007/s11538-016-0149-1 | null | q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | The discrete chemical master equation (dCME) provides a general framework for
studying stochasticity in mesoscopic reaction networks. Since its direct
solution rapidly becomes intractable due to the increasing size of the state
space, truncation of the state space is necessary for solving most dCMEs. It is
therefore important to assess the consequences of state space truncations so
errors can be quantified and minimized. Here we describe a novel method for
state space truncation. By partitioning a reaction network into multiple
molecular equivalence groups (MEG), we truncate the state space by limiting the
total molecular copy numbers in each MEG. We further describe a theoretical
framework for analysis of the truncation error in the steady state probability
landscape using reflecting boundaries. By aggregating the state space based on
the usage of a MEG and constructing an aggregated Markov process, we show that
the truncation error of a MEG can be asymptotically bounded by the probability
of states on the reflecting boundary of the MEG. Furthermore, truncating states
of an arbitrary MEG will not undermine the estimated error of truncating any
other MEGs. We then provide an error estimate for networks with multiple MEGs.
To rapidly determine the appropriate size of an arbitrary MEG, we introduce an
a priori method to estimate the upper bound of its truncation error, which can
be rapidly computed from reaction rates, without costly trial solutions of the
dCME. We show results of applying our methods to four stochastic networks. We
demonstrate how truncation errors and steady state probability landscapes can
be computed using different sizes of the MEG(s) and how the results validate
our theories. Overall, the novel state space truncation and error analysis
methods developed here can be used to ensure accurate direct solutions to the
dCME for a large class of stochastic networks.
| [
{
"created": "Tue, 25 Jul 2017 21:58:30 GMT",
"version": "v1"
}
] | 2017-07-27 | [
[
"Cao",
"Youfang",
""
],
[
"Terebus",
"Anna",
""
],
[
"Liang",
"Jie",
""
]
] | The discrete chemical master equation (dCME) provides a general framework for studying stochasticity in mesoscopic reaction networks. Since its direct solution rapidly becomes intractable due to the increasing size of the state space, truncation of the state space is necessary for solving most dCMEs. It is therefore important to assess the consequences of state space truncations so errors can be quantified and minimized. Here we describe a novel method for state space truncation. By partitioning a reaction network into multiple molecular equivalence groups (MEG), we truncate the state space by limiting the total molecular copy numbers in each MEG. We further describe a theoretical framework for analysis of the truncation error in the steady state probability landscape using reflecting boundaries. By aggregating the state space based on the usage of a MEG and constructing an aggregated Markov process, we show that the truncation error of a MEG can be asymptotically bounded by the probability of states on the reflecting boundary of the MEG. Furthermore, truncating states of an arbitrary MEG will not undermine the estimated error of truncating any other MEGs. We then provide an error estimate for networks with multiple MEGs. To rapidly determine the appropriate size of an arbitrary MEG, we introduce an a priori method to estimate the upper bound of its truncation error, which can be rapidly computed from reaction rates, without costly trial solutions of the dCME. We show results of applying our methods to four stochastic networks. We demonstrate how truncation errors and steady state probability landscapes can be computed using different sizes of the MEG(s) and how the results validate our theories. Overall, the novel state space truncation and error analysis methods developed here can be used to ensure accurate direct solutions to the dCME for a large class of stochastic networks. |
1704.08583 | Roberto Alamino | Roberto C. Alamino | A Model for Emergence of Multiple Anti-Microbial Resistance in a Petri
Torus | 7 pages, 7 figures | null | null | null | q-bio.PE cond-mat.stat-mech | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work introduces a new statistical physics lattice model of bacteria
interacting with anti-microbial drugs that can reproduce qualitative features
of resistance emergence and whose model parameters and outputs can be measured
with controlled \textit{in vitro} experiments. The lattice is inhabited by
agents modeled by Ising perceptrons. The results show the advantage of mixing
drugs among the population compared to other treatment protocols.
| [
{
"created": "Tue, 25 Apr 2017 22:49:20 GMT",
"version": "v1"
},
{
"created": "Tue, 16 May 2017 12:23:17 GMT",
"version": "v2"
}
] | 2017-05-17 | [
[
"Alamino",
"Roberto C.",
""
]
] | This work introduces a new statistical physics lattice model of bacteria interacting with anti-microbial drugs that can reproduce qualitative features of resistance emergence and whose model parameters and outputs can be measured with controlled \textit{in vitro} experiments. The lattice is inhabited by agents modeled by Ising perceptrons. The results show the advantage of mixing drugs among the population compared to other treatment protocols. |
1602.07170 | Pramod Shinde | Pramod Shinde, Alok Yadav, Aparna Rai and Sarika Jalan | Dissortativity and duplications in Oral cancer | null | The European Physical Journal B. 2015 Aug 1;88(8):1-7 | 10.1140/epjb/e2015-60426-5 | null | q-bio.MN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | More than 300,000 new cases worldwide are being diagnosed with oral cancer
annually. Complexity of oral cancer renders designing drug targets very
difficult. We analyse protein-protein interaction network for the normal and
oral cancer tissue and detect crucial changes in the structural properties of
the networks in terms of the interactions of the hub proteins and the
degree-degree correlations. Further analysis of the spectra of both the
networks, while exhibiting universal statistical behavior, manifest distinction
in terms of the zero degeneracy, providing insight to the complexity of the
underlying system.
| [
{
"created": "Tue, 23 Feb 2016 14:42:48 GMT",
"version": "v1"
}
] | 2016-03-08 | [
[
"Shinde",
"Pramod",
""
],
[
"Yadav",
"Alok",
""
],
[
"Rai",
"Aparna",
""
],
[
"Jalan",
"Sarika",
""
]
] | More than 300,000 new cases worldwide are being diagnosed with oral cancer annually. Complexity of oral cancer renders designing drug targets very difficult. We analyse protein-protein interaction network for the normal and oral cancer tissue and detect crucial changes in the structural properties of the networks in terms of the interactions of the hub proteins and the degree-degree correlations. Further analysis of the spectra of both the networks, while exhibiting universal statistical behavior, manifest distinction in terms of the zero degeneracy, providing insight to the complexity of the underlying system. |
1007.0622 | Masahiro Anazawa | Masahiro Anazawa | Combined effect of successive competition periods on population dynamics | 20 pages, 4 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study investigates the effect of competition between individuals on
population dynamics when they compete for different resources during different
seasons or during different growth stages. Individuals are assumed to compete
for a single resource during each of these periods according to one of the
following competition types: scramble, contest, or an intermediate between the
two. The effect of two successive competition periods is determined to be
expressed by simple relations on products of two "transition matrices" for
various sets of competition types for the two periods. In particular, for the
scramble and contest competition combination, results vary widely depending on
the order of the two competition types. Furthermore, the stability properties
of derived population models as well as the effect of more than two successive
competition periods are discussed.
| [
{
"created": "Mon, 5 Jul 2010 06:18:54 GMT",
"version": "v1"
}
] | 2010-07-06 | [
[
"Anazawa",
"Masahiro",
""
]
] | This study investigates the effect of competition between individuals on population dynamics when they compete for different resources during different seasons or during different growth stages. Individuals are assumed to compete for a single resource during each of these periods according to one of the following competition types: scramble, contest, or an intermediate between the two. The effect of two successive competition periods is determined to be expressed by simple relations on products of two "transition matrices" for various sets of competition types for the two periods. In particular, for the scramble and contest competition combination, results vary widely depending on the order of the two competition types. Furthermore, the stability properties of derived population models as well as the effect of more than two successive competition periods are discussed. |
2308.09558 | Pelin Icer Baykal | Pelin Icer Baykal, Pawe{\l} P. {\L}abaj, Florian Markowetz, Lynn M.
Schriml, Daniel J. Stekhoven, Serghei Mangul, Niko Beerenwinkel | Genomic reproducibility in the bioinformatics era | 10 pages, 2 figures, 2 tables | null | null | null | q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | In biomedical research, validation of a new scientific discovery is tied to
the reproducibility of its experimental results. However, in genomics, the
definition and implementation of reproducibility still remain imprecise. Here,
we argue that genomic reproducibility, defined as the ability of bioinformatics
tools to maintain consistent genomics results across technical replicates, is
key to generating scientific knowledge and enabling medical applications. We
first discuss different concepts of reproducibility and then focus on
reproducibility in the context of genomics, aiming to establish clear
definitions of relevant terms. We then focus on the role of bioinformatics
tools and their impact on genomic reproducibility and assess methods of
evaluating bioinformatics tools in terms of genomic reproducibility. Lastly, we
suggest best practices for enhancing genomic reproducibility, with an emphasis
on assessing the performance of bioinformatics tools through rigorous testing
across multiple technical replicates.
| [
{
"created": "Fri, 18 Aug 2023 13:43:36 GMT",
"version": "v1"
}
] | 2023-08-21 | [
[
"Baykal",
"Pelin Icer",
""
],
[
"Łabaj",
"Paweł P.",
""
],
[
"Markowetz",
"Florian",
""
],
[
"Schriml",
"Lynn M.",
""
],
[
"Stekhoven",
"Daniel J.",
""
],
[
"Mangul",
"Serghei",
""
],
[
"Beerenwinkel",
"Niko",
... | In biomedical research, validation of a new scientific discovery is tied to the reproducibility of its experimental results. However, in genomics, the definition and implementation of reproducibility still remain imprecise. Here, we argue that genomic reproducibility, defined as the ability of bioinformatics tools to maintain consistent genomics results across technical replicates, is key to generating scientific knowledge and enabling medical applications. We first discuss different concepts of reproducibility and then focus on reproducibility in the context of genomics, aiming to establish clear definitions of relevant terms. We then focus on the role of bioinformatics tools and their impact on genomic reproducibility and assess methods of evaluating bioinformatics tools in terms of genomic reproducibility. Lastly, we suggest best practices for enhancing genomic reproducibility, with an emphasis on assessing the performance of bioinformatics tools through rigorous testing across multiple technical replicates. |
1312.6660 | Mariano Sigman | Ariel D Zylberberg, Luciano Paz, Pieter R Roelfsema, Stanislas
Dehaene, Mariano Sigman | A neuronal device for the control of multi-step computations | 13 pages, 6 figures | Papers in Physics 5, 050006 (2013) | 10.4279/PIP.050006 | null | q-bio.NC | http://creativecommons.org/licenses/by/3.0/ | We describe the operation of a neuronal device which embodies the
computational principles of the `paper-and-pencil' machine envisioned by Alan
Turing. The network is based on principles of cortical organization. We develop
a plausible solution to implement pointers and investigate how neuronal
circuits may instantiate the basic operations involved in assigning a value to
a variable (i.e., x=5), in determining whether two variables have the same
value and in retrieving the value of a given variable to be accessible to other
nodes of the network. We exemplify the collective function of the network in
simplified arithmetic and problem solving (blocks-world) tasks.
| [
{
"created": "Tue, 10 Dec 2013 13:55:34 GMT",
"version": "v1"
}
] | 2013-12-24 | [
[
"Zylberberg",
"Ariel D",
""
],
[
"Paz",
"Luciano",
""
],
[
"Roelfsema",
"Pieter R",
""
],
[
"Dehaene",
"Stanislas",
""
],
[
"Sigman",
"Mariano",
""
]
] | We describe the operation of a neuronal device which embodies the computational principles of the `paper-and-pencil' machine envisioned by Alan Turing. The network is based on principles of cortical organization. We develop a plausible solution to implement pointers and investigate how neuronal circuits may instantiate the basic operations involved in assigning a value to a variable (i.e., x=5), in determining whether two variables have the same value and in retrieving the value of a given variable to be accessible to other nodes of the network. We exemplify the collective function of the network in simplified arithmetic and problem solving (blocks-world) tasks. |
1710.10641 | Qifan Yang | Qifan Yang, Gennady V. Roshchupkin, Wiro J. Niessen, Sarah E. Medland,
Alyssa H. Zhu, Paul M. Thompson, Neda Jahanshad | A Fast, Accurate Two-Step Linear Mixed Model for Genetic Analysis
Applied to Repeat MRI Measurements | 2017 Neural Information Processing Systems (NeurIPS) BigNeuro
Workshop | null | null | null | q-bio.QM stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large-scale biobanks are being collected around the world in efforts to
better understand human health and risk factors for disease. They often survey
hundreds of thousands of individuals, combining questionnaires with clinical,
genetic, demographic, and imaging assessments; some of this data may be
collected longitudinally. Genetic association analysis of such datasets
requires methods to properly handle relatedness, population structure and other
types of biases introduced by confounders. The most popular and accurate approaches
rely on linear mixed model (LMM) algorithms, which are iterative and whose
per-iteration computational complexity scales with the square of the sample
size, slowing the pace of discoveries (up to several days for single-trait
analysis), and, furthermore, limiting the use of repeat phenotypic
measurements. Here, we describe our new, non-iterative, much faster and
accurate Two-Step Linear Mixed Model (Two-Step LMM) approach, whose
computational complexity scales linearly with sample size. We show that
the first step retains accurate estimates of the heritability (the proportion
of the trait variance explained by additive genetic factors), even when
increasingly complex genetic relationships between individuals are modeled.
The second step provides a faster framework to obtain the effect sizes of
covariates in the regression model. We applied Two-Step LMM to real data from the
UK Biobank, which recently released genotyping information and processed MRI
data from 9,725 individuals. We used the left and right hippocampus volume (HV)
as repeated measures, and observed increased and more accurate heritability
estimation, consistent with simulations.
| [
{
"created": "Sun, 29 Oct 2017 16:24:40 GMT",
"version": "v1"
},
{
"created": "Thu, 7 Dec 2017 13:07:09 GMT",
"version": "v2"
},
{
"created": "Fri, 15 Feb 2019 01:42:45 GMT",
"version": "v3"
},
{
"created": "Fri, 15 Mar 2019 19:01:43 GMT",
"version": "v4"
}
] | 2019-03-19 | [
[
"Yang",
"Qifan",
""
],
[
"Roshchupkin",
"Gennady V.",
""
],
[
"Niessen",
"Wiro J.",
""
],
[
"Medland",
"Sarah E.",
""
],
[
"Zhu",
"Alyssa H.",
""
],
[
"Thompson",
"Paul M.",
""
],
[
"Jahanshad",
"Neda",
""
... | Large-scale biobanks are being collected around the world in efforts to better understand human health and risk factors for disease. They often survey hundreds of thousands of individuals, combining questionnaires with clinical, genetic, demographic, and imaging assessments; some of this data may be collected longitudinally. Genetic associations analysis of such datasets requires methods to properly handle relatedness, population structure and other types of biases introduced by confounders. Most popular and accurate approaches rely on linear mixed model (LMM) algorithms, which are iterative and computational complexity of each iteration scales by the square of the sample size, slowing the pace of discoveries (up to several days for single trait analysis), and, furthermore, limiting the use of repeat phenotypic measurements. Here, we describe our new, non-iterative, much faster and accurate Two-Step Linear Mixed Model (Two-Step LMM) approach, that has a computational complexity that scales linearly with sample size. We show that the first step retains accurate estimates of the heritability (the proportion of the trait variance explained by additive genetic factors), even when increasingly complex genetic relationships between individuals are modeled. Second step provides a faster framework to obtain the effect sizes of covariates in regression model. We applied Two-Step LMM to real data from the UK Biobank, which recently released genotyping information and processed MRI data from 9,725 individuals. We used the left and right hippocampus volume (HV) as repeated measures, and observed increased and more accurate heritability estimation, consistent with simulations. |
2011.04415 | Fernando Antoneli Jr | Jo\~ao Luiz de Oliveira Madeira, Fernando Antoneli | Homeostasis in Networks with Multiple Input Nodes and Robustness in
Bacterial Chemotaxis | 64 pages, 9 figures. Substantial revision with rearrangement of
sections | null | null | null | q-bio.MN math.DS physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A biological system achieves homeostasis when there is a regulated quantity
that is maintained within a narrow range of values. Here we consider
homeostasis as a phenomenon of network dynamics. In this context, we improve a
general theory for the analysis of homeostasis in network dynamical systems
with distinguished input and output nodes, called `input-output networks'. The
theory allows one to define `homeostasis types' of a given network in a `model
independent' fashion, in the sense that the classification depends on the
network topology rather than on the specific model equations. Each `homeostasis
type' represents a possible mechanism for generating homeostasis and is
associated with a suitable `subnetwork motif' of the original network. Our
contribution is an extension of the theory to the case of networks with
multiple input nodes. To showcase our theory, we apply it to bacterial
chemotaxis, a paradigm for homeostasis in biochemical systems. By considering a
representative model of Escherichia coli chemotaxis, we verify that the
corresponding abstract network has multiple input nodes. This shows that our
extension of the theory allows for the inclusion of an important class of
models that were previously out of reach. Moreover, from our abstract point of
view, the occurrence of homeostasis in the studied model is caused by a new
mechanism, called input counterweight homeostasis. This new homeostasis
mechanism was discovered in the course of our investigation and is generated by
a balancing between the several input nodes of the network -- therefore, it
requires the existence of at least two input nodes to occur. Finally, the
framework developed here allows one to formalize a notion of `robustness' of
homeostasis based on the concept of `genericity' from the theory of dynamical
systems. We discuss how this kind of robustness of homeostasis appears in the
chemotaxis model.
| [
{
"created": "Thu, 5 Nov 2020 21:28:51 GMT",
"version": "v1"
},
{
"created": "Fri, 16 Apr 2021 17:06:03 GMT",
"version": "v2"
},
{
"created": "Tue, 25 Jan 2022 17:53:30 GMT",
"version": "v3"
}
] | 2024-05-08 | [
[
"Madeira",
"João Luiz de Oliveira",
""
],
[
"Antoneli",
"Fernando",
""
]
] | A biological system achieve homeostasis when there is a regulated quantity that is maintained within a narrow range of values. Here we consider homeostasis as a phenomenon of network dynamics. In this context, we improve a general theory for the analysis of homeostasis in network dynamical systems with distinguished input and output nodes, called `input-output networks'. The theory allows one to define `homeostasis types' of a given network in a `model independent' fashion, in the sense that the classification depends on the network topology rather than on the specific model equations. Each `homeostasis type' represents a possible mechanism for generating homeostasis and is associated with a suitable `subnetwork motif' of the original network. Our contribution is an extension of the theory to the case of networks with multiple input nodes. To showcase our theory, we apply it to bacterial chemotaxis, a paradigm for homeostasis in biochemical systems. By considering a representative model of Escherichia coli chemotaxis, we verify that the corresponding abstract network has multiple input nodes. Thus showing that our extension of the theory allows for the inclusion of an important class of models that were previously out of reach. Moreover, from our abstract point of view, the occurrence of homeostasis in the studied model is caused by a new mechanism, called input counterweight homeostasis. This new homeostasis mechanism was discovered in the course of our investigation and is generated by a balancing between the several input nodes of the network -- therefore, it requires the existence of at least two input nodes to occur. Finally, the framework developed here allows one to formalize a notion of `robustness' of homeostasis based on the concept of `genericity' from the theory dynamical systems. We discuss how this kind of robustness of homeostasis appears in the chemotaxis model. |
1610.03653 | Ugo Bardi | Ilaria Perissi, Ugo Bardi, Toufic El Asmar, Alessandro Lavacchi | Dynamic patterns of overexploitation in fisheries | 23 pages, 9 figures | null | null | null | q-bio.PE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding the overfishing phenomenon and regulating fishing quotas is a major
global challenge for the 21st Century, both in terms of providing food for
humankind and of preserving ocean ecosystems. However, fishing is a complex
economic activity, affected not just by overfishing but also by such factors as
pollution, technology, financial factors and more. For this reason, it is often
difficult to state with complete certainty that overfishing is the cause of the
decline of a fishery. In this study, we developed a simple dynamic model based
on the earlier, well-known Lotka-Volterra model or Prey-Predator model. To
describe exploitation patterns, we assume that the fish stock and the fishing
industry are coupled stock variables in the model and they dynamically affect
each other, with the fishing yield proportional to both the fishing capital and
the fish stock. The model is based on the concept that the fishing industry
acts as the predator of the resource and that its growth and subsequent decline
is directly related to the abundance of the fish stock. If the model can be
fitted to historical data for specific fisheries, then it is a strong indication
that the fishing industry is strongly affected by the magnitude of the fish
stock and that, in particular, the decline of the yield and the decline of the
stock are linked to each other. The model does not pretend to be a general
description of the fishing industry in all its varied forms; however, the data
reported here show that the model can indeed qualitatively describe several
historical cases of the collapse of fisheries. The model can also be used as a
qualitative guide to understand the behavior of several other fisheries. These
results indicate that one of the main factors causing the present crisis of the
world's fisheries is the overexploitation of the fish stocks.
| [
{
"created": "Wed, 12 Oct 2016 09:53:39 GMT",
"version": "v1"
},
{
"created": "Wed, 26 Oct 2016 15:28:52 GMT",
"version": "v2"
}
] | 2016-10-27 | [
[
"Perissi",
"Ilaria",
""
],
[
"Bardi",
"Ugo",
""
],
[
"Asmar",
"Toufic El",
""
],
[
"Lavacchi",
"Alessandro",
""
]
] | Understanding overfishing phenomenon and regulating fishing quotas is a major global challenge for the 21st Century both in terms of providing food for humankind and to preserve the oceans ecosystems. However, fishing is a complex economic activity, affected not just by overfishing but also by such factors as pollution, technology, financial factors and more. For this reason, it is often difficult to state with complete certainty that overfishing is the cause of the decline of a fishery. In this study, we developed a simple dynamic model based on the earlier, well-known Lotka-Volterra model or Prey-Predator model. To describe exploitation patterns, we assume that the fish stock and the fishing industry are coupled stock variables in the model and they dynamically affect each other, with the fishing yield proportional to both the fishing capital and the fish stock. The model is based on the concept that the fishing industry acts as the predator of the resource and that its growth and subsequent decline is directly related to the abundance of the fish stock. If the model can be fit historical data relative to specific fisheries, then it is a strong indication that the fishing industry is strongly affected by the magnitude of the fish stock and that, in particular, the decline of the yield and the decline of the stock are linked to each other. The model does not pretend to be a general description of the fishing industry in all its varied forms; however, the data reported here show that the model can indeed qualitatively describe several historical case of the collapse of fisheries. The model can also be used as a qualitative guide to understand the behavior of several other fisheries. These result indicate that one of the main factors causing the present crisis of the world's fisheries is the overexploitation of the fish stocks. |
1206.0560 | Alberto Mazzoni | Alberto Mazzoni, Nikos K. Logothetis and Stefano Panzeri | The information content of Local Field Potentials: experiments and
models | To appear in Quian Quiroga and Panzeri (Eds) Principles of Neural
Coding, CRC Press, 2012 | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The LFP is a broadband signal that captures variations of neural population
activity over a wide range of time scales. The range of time scales available
in LFPs is particularly interesting from the neural coding point of view
because it opens up the possibility to investigate whether there are privileged
time scales for information processing, a question that has been hotly debated
over the last one or two decades. It is possible that information is represented
by only a small number of specific frequency ranges, each carrying a separate
contribution to the information representation. To shed light on this issue, it
is important to quantify the information content of each frequency range of
neural activity, and understand which ranges carry complementary or similar
information.
| [
{
"created": "Mon, 4 Jun 2012 09:35:41 GMT",
"version": "v1"
}
] | 2012-06-05 | [
[
"Mazzoni",
"Alberto",
""
],
[
"Logothetis",
"Nikos K.",
""
],
[
"Panzeri",
"Stefano",
""
]
] | The LFPs is a broadband signal that captures variations of neural population activity over a wide range of time scales. The range of time scales available in LFPs is particularly interesting from the neural coding point of view because it opens up the possibility to investigate whether there are privileged time scales for information processing, a question that has been hotly debated over the last one or two decades.It is possible that information is represented by only a small number of specific frequency ranges, each carrying a separate contribution to the information representation. To shed light on this issue, it is important to quantify the information content of each frequency range of neural activity, and understand which ranges carry complementary or similar information. |
1608.04935 | Mahmoud Hassan | Mahmoud Hassan, Isabelle Merlet, Ahmad Mheich, Aya Kabbara, Arnaud
Biraben, Anca Nica and Fabrice Wendling | Identification of interictal epileptic networks from dense-EEG | 30 pages, 5 figures | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epilepsy is a network disease. The epileptic network usually involves
spatially distributed brain regions. In this context, noninvasive M/EEG source
connectivity is an emerging technique to identify functional brain networks at
cortical level from noninvasive recordings. In this paper, we analyze the
effect of the two key factors involved in EEG source connectivity processing:
i) the algorithm used in the solution of the EEG inverse problem and ii) the
method used in the estimation of the functional connectivity. We evaluate four
inverse solution algorithms and four connectivity measures on data simulated
from a combined biophysical/physiological model to generate realistic
interictal epileptic spikes reflected in scalp EEG. We use a new network-based
similarity index (SI) to compare the network identified by each of the
inverse/connectivity combinations with the original network generated in the
model. The method is also applied to real data recorded from one epileptic
patient who underwent a full presurgical evaluation for drug-resistant focal
epilepsy. In simulated data, results revealed that the selection of the
inverse/connectivity combination has a significant impact on the identified
networks. Results suggested that nonlinear methods for measuring the
connectivity are more efficient than the linear one. The wMNE inverse solution
showed higher performance than dSPM, cMEM and sLORETA. In real data, the
combination (wMNE/PLV) led to a very good matching between the interictal
epileptic network identified from noninvasive EEG recordings and the network
obtained from connectivity analysis of intracerebral EEG recordings. These
results suggest that the source connectivity method, when appropriately configured,
is able to extract highly relevant diagnostic information about networks
involved in interictal epileptic spikes from non-invasive dense-EEG data.
| [
{
"created": "Wed, 17 Aug 2016 12:07:14 GMT",
"version": "v1"
}
] | 2016-08-18 | [
[
"Hassan",
"Mahmoud",
""
],
[
"Merlet",
"Isabelle",
""
],
[
"Mheich",
"Ahmad",
""
],
[
"Kabbara",
"Aya",
""
],
[
"Biraben",
"Arnaud",
""
],
[
"Nica",
"Anca",
""
],
[
"Wendling",
"Fabrice",
""
]
] | Epilepsy is a network disease. The epileptic network usually involves spatially distributed brain regions. In this context, noninvasive M/EEG source connectivity is an emerging technique to identify functional brain networks at cortical level from noninvasive recordings. In this paper, we analyze the effect of the two key factors involved in EEG source connectivity processing: i) the algorithm used in the solution of the EEG inverse problem and ii) the method used in the estimation of the functional connectivity. We evaluate four inverse solutions algorithms and four connectivity measures on data simulated from a combined biophysical/physiological model to generate realistic interictal epileptic spikes reflected in scalp EEG. We use a new network-based similarity index (SI) to compare between the network identified by each of the inverse/connectivity combination and the original network generated in the model. The method will be also applied on real data recorded from one epileptic patient who underwent a full presurgical evaluation for drug-resistant focal epilepsy. In simulated data, results revealed that the selection of the inverse/connectivity combination has a significant impact on the identified networks. Results suggested that nonlinear methods for measuring the connectivity are more efficient than the linear one. The wMNE inverse solution showed higher performance than dSPM, cMEM and sLORETA. In real data, the combination (wMNE/PLV) led to a very good matching between the interictal epileptic network identified from noninvasive EEG recordings and the network obtained from connectivity analysis of intracerebral EEG recordings. These results suggest that source connectivity method, when appropriately configured, is able to extract highly relevant diagnostic information about networks involved in interictal epileptic spikes from non-invasive dense-EEG data. |
1803.05012 | Ryan Suderman | Ryan Suderman, G. Matthew Fricke, William S. Hlavacek | Using RuleBuilder to graphically define and visualize BioNetGen-language
patterns and reaction rules | 19 pages, 3 figures | null | null | null | q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | RuleBuilder is a tool for drawing graphs that can be represented by the
BioNetGen language (BNGL), which is used to formulate mathematical, rule-based
models of biochemical systems. BNGL provides an intuitive plain-text, or
string, representation of such systems, which is based on a graphical
formalism. Reactions are defined in terms of graph-rewriting rules that specify
the necessary intrinsic properties of the reactants, a transformation, and a
rate law. Rules may also contain contextual constraints that restrict
application of the rule. In some cases, the specification of contextual
constraints can be verbose, making a rule difficult to read. RuleBuilder is
designed to ease the task of reading and writing individual reaction rules, as
well as individual BNGL patterns similar to those found in rules. The software
assists in the reading of existing models by converting BNGL strings of
interest into a graph-based representation composed of nodes and edges.
RuleBuilder also enables the user to construct de novo a visual representation
of BNGL strings using drawing tools available in its interface. As objects are
added to the drawing canvas, the corresponding BNGL string is generated on the
fly, and objects are similarly drawn on the fly as BNGL strings are entered
into the application. RuleBuilder thus facilitates construction and
interpretation of rule-based models.
| [
{
"created": "Tue, 13 Mar 2018 19:07:26 GMT",
"version": "v1"
}
] | 2018-03-15 | [
[
"Suderman",
"Ryan",
""
],
[
"Fricke",
"G. Matthew",
""
],
[
"Hlavacek",
"William S.",
""
]
] | RuleBuilder is a tool for drawing graphs that can be represented by the BioNetGen language (BNGL), which is used to formulate mathematical, rule-based models of biochemical systems. BNGL provides an intuitive plain-text, or string, representation of such systems, which is based on a graphical formalism. Reactions are defined in terms of graph-rewriting rules that specify the necessary intrinsic properties of the reactants, a transformation, and a rate law. Rules may also contain contextual constraints that restrict application of the rule. In some cases, the specification of contextual constraints can be verbose, making a rule difficult to read. RuleBuilder is designed to ease the task of reading and writing individual reaction rules, as well as individual BNGL patterns similar to those found in rules. The software assists in the reading of existing models by converting BNGL strings of interest into a graph-based representation composed of nodes and edges. RuleBuilder also enables the user to construct de novo a visual representation of BNGL strings using drawing tools available in its interface. As objects are added to the drawing canvas, the corresponding BNGL string is generated on the fly, and objects are similarly drawn on the fly as BNGL strings are entered into the application. RuleBuilder thus facilitates construction and interpretation of rule-based models. |
1605.07222 | Vadim N. Biktashev | D. Hornung, V. N. Biktashev, N. F. Otani, T. K. Shajahan, T. Baig, S.
Berg, S. Han, V. Krinsky, S. Luther | Mechanisms of vortices termination in the cardiac muscle | 7 pages, 6 figures, as accepted to Royal Society Open Science | Roy Soc Open Science, 4: 170024, 2017 | 10.1007/s00285-013-0669-3 | null | q-bio.TO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a solution to a long-standing problem: how to terminate multiple
vortices in the heart, when the locations of their cores and their critical
time windows are unknown. We scan the phases of all pinned vortices in parallel
with electric field pulses (E-pulses). We specify a condition on pacing
parameters that guarantees termination of one vortex. For more than one vortex
with significantly different frequencies, the success of scanning depends on
chance, and all vortices are terminated with a success rate of less than one.
We found that a similar mechanism terminates also a free (not pinned) vortex. A
series of about 500 experiments with termination of ventricular fibrillation by
E-pulses in pig isolated hearts is evidence that pinned vortices, hidden from
direct observation, are significant in fibrillation. These results form a
physical basis needed for the creation of new effective low energy
defibrillation methods based on the termination of vortices underlying
fibrillation.
| [
{
"created": "Mon, 23 May 2016 22:11:22 GMT",
"version": "v1"
},
{
"created": "Fri, 27 May 2016 11:11:51 GMT",
"version": "v2"
},
{
"created": "Mon, 13 Feb 2017 20:37:30 GMT",
"version": "v3"
},
{
"created": "Wed, 15 Feb 2017 14:15:11 GMT",
"version": "v4"
}
] | 2017-09-26 | [
[
"Hornung",
"D.",
""
],
[
"Biktashev",
"V. N.",
""
],
[
"Otani",
"N. F.",
""
],
[
"Shajahan",
"T. K.",
""
],
[
"Baig",
"T.",
""
],
[
"Berg",
"S.",
""
],
[
"Han",
"S.",
""
],
[
"Krinsky",
"V.",
... | We propose a solution to a long standing problem: how to terminate multiple vortices in the heart, when the locations of their cores and their critical time windows are unknown. We scan the phases of all pinned vortices in parallel with electric field pulses (E-pulses). We specify a condition on pacing parameters that guarantees termination of one vortex. For more than one vortex with significantly different frequencies, the success of scanning depends on chance, and all vortices are terminated with a success rate of less than one. We found that a similar mechanism terminates also a free (not pinned) vortex. A series of about 500 experiments with termination of ventricular fibrillation by E-pulses in pig isolated hearts is evidence that pinned vortices, hidden from direct observation, are significant in fibrillation. These results form a physical basis needed for the creation of new effective low energy defibrillation methods based on the termination of vortices underlying fibrillation. |
2106.16059 | Thomas Shultz | Thomas R Shultz, Ardavan S Nobandegani | A Computational Model of Infant Learning and Reasoning with
Probabilities | To be published in Psychological Review | null | null | null | q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent experiments reveal that 6- to 12-month-old infants can learn
probabilities and reason with them. In this work, we present a novel
computational system called Neural Probability Learner and Sampler (NPLS) that
learns and reasons with probabilities, providing a computationally sufficient
mechanism to explain infant probabilistic learning and inference. In 24
computer simulations, NPLS shows how probability distributions can
emerge naturally from neural-network learning of event sequences, providing a
novel explanation of infant probabilistic learning and reasoning. Three
mathematical proofs show how and why NPLS simulates the infant results so
accurately. The results are situated in relation to seven other active research
lines. This work provides an effective way to integrate Bayesian and
neural-network approaches to cognition.
| [
{
"created": "Wed, 30 Jun 2021 13:34:37 GMT",
"version": "v1"
}
] | 2021-07-01 | [
[
"Shultz",
"Thomas R",
""
],
[
"Nobandegani",
"Ardavan S",
""
]
] | Recent experiments reveal that 6- to 12-month-old infants can learn probabilities and reason with them. In this work, we present a novel computational system called Neural Probability Learner and Sampler (NPLS) that learns and reasons with probabilities, providing a computationally sufficient mechanism to explain infant probabilistic learning and inference. In 24 computer simulations, NPLS simulations show how probability distributions can emerge naturally from neural-network learning of event sequences, providing a novel explanation of infant probabilistic learning and reasoning. Three mathematical proofs show how and why NPLS simulates the infant results so accurately. The results are situated in relation to seven other active research lines. This work provides an effective way to integrate Bayesian and neural-network approaches to cognition. |
1306.2258 | Konstantin Berlin | Konstantin Berlin, Nail A. Gumerov, Ramani Duraiswami, David Fushman | Performance of a GPU-based Direct Summation Algorithm for Computation of
Small Angle Scattering Profile | null | null | null | null | q-bio.BM cs.DC physics.comp-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Small Angle Scattering (SAS) of X-rays or neutrons is an experimental
technique that provides valuable structural information for biological
macromolecules under physiological conditions and with no limitation on the
molecular size. In order to refine molecular structure against experimental SAS
data, ab initio prediction of the scattering profile must be recomputed
hundreds of thousands of times, which involves the computation of the sinc
kernel over all pairs of atoms in a molecule. The quadratic computational
complexity of predicting the SAS profile limits the size of the molecules
and has been a major impediment to the integration of SAS data into structure
refinement protocols. In order to significantly speed up prediction of the SAS
profile we present a general purpose graphical processing unit (GPU) algorithm,
written in OpenCL, for the summation of the sinc kernel (Debye summation) over
all pairs of atoms. This program is an order of magnitude faster than a
parallel CPU algorithm, and faster than an FMM-like approximation method for
certain input domains. We show that our algorithm is currently the fastest
method for performing SAS computation for small and medium size molecules
(around 50000 atoms or less). This algorithm is critical for quick and accurate
SAS profile computation of elongated structures, such as DNA, RNA, and sparsely
spaced pseudo-atom molecules.
| [
{
"created": "Mon, 10 Jun 2013 17:44:17 GMT",
"version": "v1"
}
] | 2013-06-11 | [
[
"Berlin",
"Konstantin",
""
],
[
"Gumerov",
"Nail A.",
""
],
[
"Duraiswami",
"Ramani",
""
],
[
"Fushman",
"David",
""
]
] | Small Angle Scattering (SAS) of X-rays or neutrons is an experimental technique that provides valuable structural information for biological macromolecules under physiological conditions and with no limitation on the molecular size. In order to refine molecular structure against experimental SAS data, ab initio prediction of the scattering profile must be recomputed hundreds of thousands of times, which involves the computation of the sinc kernel over all pairs of atoms in a molecule. The quadratic computational complexity of predicting the SAS profile limits the size of the molecules and and has been a major impediment for integration of SAS data into structure refinement protocols. In order to significantly speed up prediction of the SAS profile we present a general purpose graphical processing unit (GPU) algorithm, written in OpenCL, for the summation of the sinc kernel (Debye summation) over all pairs of atoms. This program is an order of magnitude faster than a parallel CPU algorithm, and faster than an FMM-like approximation method for certain input domains. We show that our algorithm is currently the fastest method for performing SAS computation for small and medium size molecules (around 50000 atoms or less). This algorithm is critical for quick and accurate SAS profile computation of elongated structures, such as DNA, RNA, and sparsely spaced pseudo-atom molecules. |
1301.1031 | Kaushik Majumdar | Kaushik Majumdar, Pradeep D. Prasad and Shailesh Verma | Synchronization Implies Seizure or Seizure Implies Synchronization? | 22 pages, 6 figures and 5 tables | null | null | null | q-bio.NC q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Epileptic seizures are considered as abnormally hypersynchronous neuronal
activities of the brain. Do hypersynchronous neuronal activities in a brain
region lead to seizure, or do the hypersynchronous activities take place due to the
progression of the seizure? We have examined the ECoG signals of 21 epileptic
patients consisting of 87 focal-onset seizures by three different measures
namely, phase synchronization, amplitude correlation and simultaneous
occurrence of peaks and troughs. Each of the measures indicates that for a
majority of the focal-onset seizures, synchronization or correlation or
simultaneity occurs towards the end of the seizure or even after the offset
rather than at the onset or in the beginning or during the progression of the
seizure. We also have outlined how extracellular acidosis caused due to the
seizure in the focal zone can induce synchrony in the seizure generating
network. This implies synchronization is an effect rather than the cause of a
significant number of pharmacologically intractable focal-onset seizures. Since
all the seizures that we have tested belong to the pharmacologically
intractable class, their termination through more coherent neuronal activities
may lead to new and effective ways of discovery and testing of drugs.
| [
{
"created": "Sun, 6 Jan 2013 17:39:51 GMT",
"version": "v1"
}
] | 2013-01-08 | [
[
"Majumdar",
"Kaushik",
""
],
[
"Prasad",
"Pradeep D.",
""
],
[
"Verma",
"Shailesh",
""
]
] ] | Epileptic seizures are considered as abnormally hypersynchronous neuronal activities of the brain. Do hypersynchronous neuronal activities in a brain region lead to seizure, or do the hypersynchronous activities take place due to the progression of the seizure? We have examined the ECoG signals of 21 epileptic patients consisting of 87 focal-onset seizures by three different measures namely, phase synchronization, amplitude correlation and simultaneous occurrence of peaks and troughs. Each of the measures indicates that for a majority of the focal-onset seizures, synchronization or correlation or simultaneity occurs towards the end of the seizure or even after the offset rather than at the onset or in the beginning or during the progression of the seizure. We also have outlined how extracellular acidosis caused due to the seizure in the focal zone can induce synchrony in the seizure generating network. This implies synchronization is an effect rather than the cause of a significant number of pharmacologically intractable focal-onset seizures. Since all the seizures that we have tested belong to the pharmacologically intractable class, their termination through more coherent neuronal activities may lead to new and effective ways of discovery and testing of drugs. |
1410.3456 | Juven C. Wang | Juven Wang, Jiunn-Wei Chen | Gene-Mating Dynamic Evolution Theory I: Fundamental assumptions, exactly
solvable models and analytic solutions | 32 pages, 15 figures, 6 tables. (Work completed in 2006 and reported
in 2014.) See Tables and Figures for a summary of key results, sequel
arXiv:1502.07741. Blood type data of world ethnic groups from:
http://www.bloodbook.com/world-abo.html and https://www.human-abo.org/. v3:
Refinement, to appear on Theory in Biosciences - Springer | Theory in Biosciences 139, 105-134, Springer Nature (2020) | 10.1007/s12064-020-00309-3 | null | q-bio.PE nlin.CD physics.bio-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fundamental properties of macroscopic gene-mating dynamic evolutionary
systems are investigated. We focus on a single locus, any number of alleles in
a two-gender dioecious population, for a large class of systems within
population genetics. Our governing equations are time-dependent differential
equations labeled by a set of genotype frequencies. Our equations are uniquely
derived from 4 assumptions within any population: (1) a closed system; (2)
average-and-random mating process (mean-field behavior); (3) Mendelian
inheritance; (4) exponential growth/death. Although our equations are nonlinear
with time-evolutionary dynamics, we have obtained an exact analytic
time-dependent solution and an exactly solvable model. From the
phenomenological viewpoint, any initial parameter of genotype frequencies of a
closed system will eventually approach a stable fixed point. Under time
evolution, we show (1) the monotonic behavior of genotype frequencies, (2) any
genotype or allele that appears in the population will never become extinct,
(3) the Hardy-Weinberg law, and (4) the global stability without chaos in the
parameter space. To demonstrate the experimental evidence for our theory, as an
example, we show a mapping from the data of blood type genotype frequencies of
world ethnic groups to our stable fixed-point solutions. This fixed-point
Hardy-Weinberg manifold attracts any initial point in any Euclidean fiber
bounded within the genotype frequency space to the fixed point where this fiber
is attached. The stable base manifold and its attached fibers form a fiber
bundle, which fills in the whole genotype frequency space completely. We can
define the genetic distance of two populations as their geodesic distance on
the equilibrium manifold. The modification of our theory under the process of
natural selection and mutation is addressed.
| [
{
"created": "Mon, 13 Oct 2014 19:58:20 GMT",
"version": "v1"
},
{
"created": "Thu, 8 Jan 2015 18:44:54 GMT",
"version": "v2"
},
{
"created": "Mon, 13 Jan 2020 03:00:00 GMT",
"version": "v3"
}
] | 2020-07-22 | [
[
"Wang",
"Juven",
""
],
[
"Chen",
"Jiunn-Wei",
""
]
] ] | Fundamental properties of macroscopic gene-mating dynamic evolutionary systems are investigated. We focus on a single locus, any number of alleles in a two-gender dioecious population, for a large class of systems within population genetics. Our governing equations are time-dependent differential equations labeled by a set of genotype frequencies. Our equations are uniquely derived from 4 assumptions within any population: (1) a closed system; (2) average-and-random mating process (mean-field behavior); (3) Mendelian inheritance; (4) exponential growth/death. Although our equations are nonlinear with time-evolutionary dynamics, we have obtained an exact analytic time-dependent solution and an exactly solvable model. From the phenomenological viewpoint, any initial parameter of genotype frequencies of a closed system will eventually approach a stable fixed point. Under time evolution, we show (1) the monotonic behavior of genotype frequencies, (2) any genotype or allele that appears in the population will never become extinct, (3) the Hardy-Weinberg law, and (4) the global stability without chaos in the parameter space. To demonstrate the experimental evidence for our theory, as an example, we show a mapping from the data of blood type genotype frequencies of world ethnic groups to our stable fixed-point solutions. This fixed-point Hardy-Weinberg manifold attracts any initial point in any Euclidean fiber bounded within the genotype frequency space to the fixed point where this fiber is attached. The stable base manifold and its attached fibers form a fiber bundle, which fills in the whole genotype frequency space completely. We can define the genetic distance of two populations as their geodesic distance on the equilibrium manifold. The modification of our theory under the process of natural selection and mutation is addressed. |
2101.03163 | Elham Ghazizadeh Ms | Elham Ghazizadeh, ShiNung Ching | Slow manifolds in recurrent networks encode working memory efficiently
and robustly | null | null | 10.1371/journal.pcbi.1009366 | null | q-bio.NC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Working memory is a cognitive function involving the storage and manipulation
of latent information over brief intervals of time, thus making it crucial for
context-dependent computation. Here, we use a top-down modeling approach to
examine network-level mechanisms of working memory, an enigmatic issue and
central topic of study in neuroscience and machine intelligence. We train
thousands of recurrent neural networks on a working memory task and then
perform dynamical systems analysis on the ensuing optimized networks, wherein
we find that four distinct dynamical mechanisms can emerge. In particular, we
show the prevalence of a mechanism in which memories are encoded along slow
stable manifolds in the network state space, leading to a phasic neuronal
activation profile during memory periods. In contrast to mechanisms in which
memories are directly encoded at stable attractors, these networks naturally
forget stimuli over time. Despite this seeming functional disadvantage, they
are more efficient in terms of how they leverage their attractor landscape and
paradoxically, are considerably more robust to noise. Our results provide new
dynamical hypotheses regarding how working memory function is encoded in both
natural and artificial neural networks.
| [
{
"created": "Fri, 8 Jan 2021 18:47:02 GMT",
"version": "v1"
}
] | 2021-11-17 | [
[
"Ghazizadeh",
"Elham",
""
],
[
"Ching",
"ShiNung",
""
]
] | Working memory is a cognitive function involving the storage and manipulation of latent information over brief intervals of time, thus making it crucial for context-dependent computation. Here, we use a top-down modeling approach to examine network-level mechanisms of working memory, an enigmatic issue and central topic of study in neuroscience and machine intelligence. We train thousands of recurrent neural networks on a working memory task and then perform dynamical systems analysis on the ensuing optimized networks, wherein we find that four distinct dynamical mechanisms can emerge. In particular, we show the prevalence of a mechanism in which memories are encoded along slow stable manifolds in the network state space, leading to a phasic neuronal activation profile during memory periods. In contrast to mechanisms in which memories are directly encoded at stable attractors, these networks naturally forget stimuli over time. Despite this seeming functional disadvantage, they are more efficient in terms of how they leverage their attractor landscape and paradoxically, are considerably more robust to noise. Our results provide new dynamical hypotheses regarding how working memory function is encoded in both natural and artificial neural networks. |
2003.11363 | Federico Zullo | Federico Zullo | Some numerical observations about the COVID-19 epidemic in Italy | This is version no. 2. A section and two figures have been added. 12
pages, 9 figures | null | null | null | q-bio.PE math.DS physics.soc-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We give some numerical observations on the total number of infected by the
SARS-CoV-2 in Italy. The analysis is based on a tanh formula involving two
parameters. A polynomial correlation between the parameters gives an upper
bound for the time of the peak of new infected. A numerical indicator of the
temporal variability of the upper bound is introduced. The result and the
possibility to extend the analysis to other countries are discussed in the
conclusions.
| [
{
"created": "Wed, 25 Mar 2020 12:31:01 GMT",
"version": "v1"
},
{
"created": "Sat, 11 Apr 2020 18:21:49 GMT",
"version": "v2"
}
] | 2020-04-14 | [
[
"Zullo",
"Federico",
""
]
] | We give some numerical observations on the total number of infected by the SARS-CoV-2 in Italy. The analysis is based on a tanh formula involving two parameters. A polynomial correlation between the parameters gives an upper bound for the time of the peak of new infected. A numerical indicator of the temporal variability of the upper bound is introduced. The result and the possibility to extend the analysis to other countries are discussed in the conclusions. |
1311.1120 | Cory McLean | Eric Y. Durand and Nicholas Eriksson and Cory Y. McLean | Reducing pervasive false positive identical-by-descent segments detected
by large-scale pedigree analysis | 35 pages, 16 figures | null | null | null | q-bio.PE q-bio.GN | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Analysis of genomic segments shared identical-by-descent (IBD) between
individuals is fundamental to many genetic applications, from demographic
inference to estimating the heritability of diseases, but IBD detection
accuracy in non-simulated data is largely unknown. In principle, it can be
evaluated using known pedigrees, as IBD segments are by definition inherited
without recombination down a family tree. We extracted 25,432 genotyped
European individuals containing 2,952 father-mother-child trios from the
23andMe, Inc. dataset. We then used GERMLINE, a widely used IBD detection
method, to detect IBD segments within this cohort. Exploiting known familial
relationships, we identified a false positive rate over 67% for 2-4 centiMorgan
(cM) segments, in sharp contrast with accuracies reported in simulated data at
these sizes. Nearly all false positives arose from the allowance of haplotype
switch errors when detecting IBD, a necessity for retrieving long (> 6 cM)
segments in the presence of imperfect phasing. We introduce HaploScore, a
novel, computationally efficient metric that scores IBD segments proportional
to the number of switch errors they contain. Applying HaploScore filtering to
the IBD data at a precision of 0.8 produced a 13-fold increase in recall when
compared to length-based filtering. We replicate the false IBD findings and
demonstrate the generalizability of HaploScore to alternative data sources
using an independent cohort of 555 European individuals from the 1000 Genomes
project. HaploScore can improve the accuracy of segments reported by any IBD
detection method, provided that estimates of the genotyping error rate and
switch error rate are available.
| [
{
"created": "Tue, 5 Nov 2013 17:01:48 GMT",
"version": "v1"
},
{
"created": "Fri, 7 Feb 2014 23:31:47 GMT",
"version": "v2"
}
] | 2014-02-11 | [
[
"Durand",
"Eric Y.",
""
],
[
"Eriksson",
"Nicholas",
""
],
[
"McLean",
"Cory Y.",
""
]
] | Analysis of genomic segments shared identical-by-descent (IBD) between individuals is fundamental to many genetic applications, from demographic inference to estimating the heritability of diseases, but IBD detection accuracy in non-simulated data is largely unknown. In principle, it can be evaluated using known pedigrees, as IBD segments are by definition inherited without recombination down a family tree. We extracted 25,432 genotyped European individuals containing 2,952 father-mother-child trios from the 23andMe, Inc. dataset. We then used GERMLINE, a widely used IBD detection method, to detect IBD segments within this cohort. Exploiting known familial relationships, we identified a false positive rate over 67% for 2-4 centiMorgan (cM) segments, in sharp contrast with accuracies reported in simulated data at these sizes. Nearly all false positives arose from the allowance of haplotype switch errors when detecting IBD, a necessity for retrieving long (> 6 cM) segments in the presence of imperfect phasing. We introduce HaploScore, a novel, computationally efficient metric that scores IBD segments proportional to the number of switch errors they contain. Applying HaploScore filtering to the IBD data at a precision of 0.8 produced a 13-fold increase in recall when compared to length-based filtering. We replicate the false IBD findings and demonstrate the generalizability of HaploScore to alternative data sources using an independent cohort of 555 European individuals from the 1000 Genomes project. HaploScore can improve the accuracy of segments reported by any IBD detection method, provided that estimates of the genotyping error rate and switch error rate are available. |
0807.3002 | Su-Chan Park | J. Arjan G. M. de Visser, Su-Chan Park, and Joachim Krug | Exploring the effect of sex on an empirical fitness landscape | null | American Naturalist 174 (2009) S15-S30 (with substantial
revisions) | null | null | q-bio.PE cond-mat.dis-nn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The nature of epistasis has important consequences for the evolutionary
significance of sex and recombination. Recent efforts to find negative
epistasis as source of negative linkage disequilibrium and associated long-term
sex advantage have yielded little support. Sign epistasis, where the sign of
the fitness effects of alleles varies across genetic backgrounds, is
responsible for ruggedness of the fitness landscape with implications for the
evolution of sex that have been largely unexplored. Here, we describe fitness
landscapes for two sets of strains of the asexual fungus \emph{Aspergillus
niger} involving all combinations of five mutations. We find that $\sim 30$% of
the single-mutation fitness effects are positive despite their negative effect
in the wild-type strain, and that several local fitness maxima and minima are
present. We then compare adaptation of sexual and asexual populations on these
empirical fitness landscapes using simulations. The results show a general
disadvantage of sex on these rugged landscapes, caused by the breakdown by
recombination of genotypes escaping from local peaks. Sex facilitates escape
from a local peak only for some parameter values on one landscape, indicating
its dependence on the landscape's topography. We discuss possible reasons for
the discrepancy between our results and the reports of faster adaptation of
sexual populations.
| [
{
"created": "Fri, 18 Jul 2008 16:15:31 GMT",
"version": "v1"
}
] | 2009-10-02 | [
[
"de Visser",
"J. Arjan G. M.",
""
],
[
"Park",
"Su-Chan",
""
],
[
"Krug",
"Joachim",
""
]
] ] | The nature of epistasis has important consequences for the evolutionary significance of sex and recombination. Recent efforts to find negative epistasis as source of negative linkage disequilibrium and associated long-term sex advantage have yielded little support. Sign epistasis, where the sign of the fitness effects of alleles varies across genetic backgrounds, is responsible for ruggedness of the fitness landscape with implications for the evolution of sex that have been largely unexplored. Here, we describe fitness landscapes for two sets of strains of the asexual fungus \emph{Aspergillus niger} involving all combinations of five mutations. We find that $\sim 30$% of the single-mutation fitness effects are positive despite their negative effect in the wild-type strain, and that several local fitness maxima and minima are present. We then compare adaptation of sexual and asexual populations on these empirical fitness landscapes using simulations. The results show a general disadvantage of sex on these rugged landscapes, caused by the breakdown by recombination of genotypes escaping from local peaks. Sex facilitates escape from a local peak only for some parameter values on one landscape, indicating its dependence on the landscape's topography. We discuss possible reasons for the discrepancy between our results and the reports of faster adaptation of sexual populations. |
1803.00643 | Ran Darshan | Ran Darshan, Carl van Vreeswijk and David Hansel | How strong are correlations in strongly recurrent neuronal networks? | null | Phys. Rev. X 8, 031072 (2018) | 10.1103/PhysRevX.8.031072 | null | q-bio.NC cond-mat.dis-nn nlin.CD physics.bio-ph | http://creativecommons.org/licenses/by-sa/4.0/ | Cross-correlations in the activity in neural networks are commonly used to
characterize their dynamical states and their anatomical and functional
organizations. Yet, how these latter network features affect the spatiotemporal
structure of the correlations in recurrent networks is not fully understood.
Here, we develop a general theory for the emergence of correlated neuronal
activity from the dynamics in strongly recurrent networks consisting of several
populations of binary neurons. We apply this theory to the case in which the
connectivity depends on the anatomical or functional distance between the
neurons. We establish the architectural conditions under which the system
settles into a dynamical state where correlations are strong, highly robust and
spatially modulated. We show that such strong correlations arise if the network
exhibits an effective feedforward structure. We establish how this feedforward
structure determines the way correlations scale with the network size and the
degree of the connectivity. In networks lacking an effective feedforward
structure correlations are extremely small and only weakly depend on the number
of connections per neuron. Our work shows how strong correlations can be
consistent with highly irregular activity in recurrent networks, two key
features of neuronal dynamics in the central nervous system.
| [
{
"created": "Thu, 1 Mar 2018 21:51:39 GMT",
"version": "v1"
}
] | 2018-09-26 | [
[
"Darshan",
"Ran",
""
],
[
"van Vreeswijk",
"Carl",
""
],
[
"Hansel",
"David",
""
]
] | Cross-correlations in the activity in neural networks are commonly used to characterize their dynamical states and their anatomical and functional organizations. Yet, how these latter network features affect the spatiotemporal structure of the correlations in recurrent networks is not fully understood. Here, we develop a general theory for the emergence of correlated neuronal activity from the dynamics in strongly recurrent networks consisting of several populations of binary neurons. We apply this theory to the case in which the connectivity depends on the anatomical or functional distance between the neurons. We establish the architectural conditions under which the system settles into a dynamical state where correlations are strong, highly robust and spatially modulated. We show that such strong correlations arise if the network exhibits an effective feedforward structure. We establish how this feedforward structure determines the way correlations scale with the network size and the degree of the connectivity. In networks lacking an effective feedforward structure correlations are extremely small and only weakly depend on the number of connections per neuron. Our work shows how strong correlations can be consistent with highly irregular activity in recurrent networks, two key features of neuronal dynamics in the central nervous system. |